
Self-Driving Cars and Developer Liability for Auto Accidents

In March 2018, a fatal accident occurred in Tempe, Arizona, when a woman was struck by a Volvo SUV outfitted with a self-driving system developed by Uber Advanced Technologies Group. What happened afterwards provides some insight into the legal exposure that developers of self-driving systems may face when the vehicle is involved in an accident.

In the Tempe case, the accident occurred when a woman walking a bicycle across the road at night was struck by a vehicle that had a self-driving system and a human operator behind the wheel. The vehicle was in computer control mode at the time of the crash. Prior to impact, the self-driving system classified the pedestrian first as an unknown object, then as a vehicle, and then as a bicycle. Although the system determined that an emergency braking maneuver was needed to avoid or mitigate a collision, it was set up not to activate emergency braking while under computer control. Instead, it relied on the human operator to intervene as needed. There was evidence that the driver was looking down immediately before the crash.

The National Transportation Safety Board (NTSB) investigated the accident and concluded that its probable cause was the driver's failure to monitor the driving environment and the operation of the automated driving system because she was visually distracted throughout the trip. According to the NTSB, the crash occurred because automation complacency led to prolonged visual distraction, which in turn led to her failure to detect the pedestrian in time to avoid the collision.

However, the NTSB also analyzed Uber's role in the accident, and its findings illustrate that there are multiple factors on which a developer's liability could be based. For example, the NTSB found that Uber had failed to recognize the risk of automation complacency and had failed to develop countermeasures to control the risk of vehicle operator disengagement. Moreover, the vehicle's sensor systems were unable to determine whether the decedent was a pedestrian, a vehicle, or a bicycle, and failed to correctly predict her path. Additionally, Uber's automated system was not designed to apply maximum braking for collision mitigation.

Uber's potential liability did not stop there, however. In the criminal action charging the driver with negligent homicide, the driver claimed that Uber's irresponsible safety practices were more to blame than she was. Specifically, she alleged that she was lulled into complacency by the tedious nature of the job and by Uber's assurances that the automatic braking system was working in tandem with the autonomous functions. In other words, she claimed that she was unaware the vehicle was programmed to disable the auto-braking features when in autonomous mode because Uber had advised drivers that the feature was active at all times.

Based on what happened in this case, it is clear that developers of automated driving systems can be subject to liability when a self-driving car is involved in an accident. Thus, in developing self-driving systems, developers will be held to a standard of care, the violation of which could very well result in significant liability.

The information presented here is for general educational purposes only. It does not constitute legal advice and does not create an attorney-client relationship. 
