Motor Vehicle Accidents Caused by Autonomous Vehicles: Exploring AI Liability in the Tort System


Those who have recently visited downtown Phoenix have undoubtedly seen Waymo autonomous vehicles zooming about. For those who are unfamiliar, Waymo is the self-driving car (autonomous vehicle) company owned by Alphabet, Google’s parent company. Waymo cars, readily identifiable by their black-and-white paint and large rooftop sensors, are just one of many self-driving fleets being tested. Arizona has become a hotspot for autonomous vehicle (AV) testing, in part because of its dry climate and grid-like street systems. As of November 2022, anyone 18 years or older can use Waymo’s ride-hailing service in downtown Phoenix and experience a ride without anyone in the driver’s seat.

Waymo aspires to foster transportation safety; an estimated 94% of serious crashes in the United States involve human error. While AV technology seems promising and may prove extremely beneficial, it is not infallible. This was made tragically clear in 2018, when a self-driving Uber test vehicle struck and killed a pedestrian in Tempe, near Arizona State University. The presence of such vehicles makes it important for Arizona residents, whether pedestrians or drivers, to know who is liable should they be involved in an accident with an AV through no fault of their own.

Traditional Tort Liability Framework 

Collisions caused by AVs serve as a prime case study for exploring how well the existing tort liability framework applies to artificial intelligence (AI). Most Americans interact with AI daily, whether through Google Maps, social media, chatbots, Alexa, or even self-driving cars. AI technology is evolving rapidly and becoming increasingly prevalent in everyday life, making an adaptable liability framework necessary to provide a remedy when AI causes harm. Unfortunately, the existing tort liability framework is ill-suited to that task. Liability for collisions caused by AVs remains largely uncertain, especially in the future context of privately owned AVs.

The existing tort liability framework would allow victims of AV-caused collisions to hold the operator or the manufacturer liable. A tort is a civil claim for damages for injuries caused by the wrongful conduct of another. In the case of Waymo, the company is both the operator and the manufacturer, as Waymo vehicles are not yet owned by private individuals. If such vehicles are eventually sold for private use, however, the manufacturer would remain the same while the operator would become the individual owner or end user of the AV. The owner could then be liable under a negligent-supervision theory, but that theory assumes the owner had a duty to supervise the vehicle and that the failure to do so caused the collision. The difficulty of proving duty and causation could leave many tort victims of AVs without much-deserved compensation.

Alternatively, the manufacturer could be held liable for writing ineffective or defective code. Holding the manufacturer liable under a professional-negligence or products-liability standard is problematic, however, because AI is often capable of evolving independently through machine learning. AI is often characterized as a “black box”: coders can see the inputs the AI receives and the results it generates, but cannot see or understand how the AI arrived at its decision internally. This can make causation difficult to prove. Foreseeability, a fundamental prerequisite for imposing tort liability, is likewise difficult to establish when the AI’s internal decision-making is not discernible to the programmer, leaving the system capable of unanticipated conduct.

The existing tort liability framework thus seems ill-suited for addressing wrongdoing by AI such as self-driving cars because it is unclear on whom liability should be imposed. The tort system could, however, be reformed to hold the machine itself liable.

Liability for Self-Driving Cars: The Case for AI Personhood 

A promising solution for addressing harms caused by AI is to grant legal personhood to the AI machine itself, much as we do for corporations. The AI would be required to carry liability insurance to compensate any successful tort claimants. This approach is attractive because it removes the difficulty of determining the appropriate standard of care for programmers and eliminates the need to impose liability on diligent programmers whose AI later causes unforeseeable harm. Notably, European policymakers exploring potential AI liability frameworks have raised the possibility of a form of legal personhood for AI machines. While personhood for AI would give tort victims greater access to compensation, it could prove logistically difficult in the context of privately owned AVs: requiring personhood and liability insurance for each privately owned AV might create administrative burdens and prohibitive costs that hinder the technology’s development. Ultimately, any future framework for AI tort liability must be adaptable and broadly applicable enough to redress harms in a variety of situations while still allowing the efficient and safe evolution of the technology.

Conclusion 

The rapid evolution and expansion of self-driving cars and other AI technologies require an adaptable liability framework for compensating tort victims harmed by such technologies. Future developments in this area are likely to prove both intriguing and controversial as proposals such as legal personhood for AI machines gain advocates.

"Waymo self-driving car. (52194843144)" by Daniel Ramirez from Honolulu, USA is licensed under CC BY 2.0.

By Morgan Sansone

J.D. Candidate, 2024

Morgan Sansone is a 2L Staff Writer for the Arizona State Law Journal. Before attending law school, she earned an Economics degree from Barrett, the Honors College, at Arizona State University. In her free time, Morgan likes to escape the Phoenix heat and enjoy the nice weather in her hometown of Flagstaff, Arizona.