Unsafe At Any Level

Figure: A man in the driver's seat of an autonomous vehicle looks at a cellphone. Credit: Shutterstock.com

Communications of the ACM, March 2020, Vol. 63 No. 3, Pages 31-34
Viewpoint
By Marc Canellas, Rachel Haga

Drivers are sold the fantasy of being a passenger at times, but to the manufacturer they never stopped being the fully liable driver.

Designers of automated vehicles face the same decisions today that aircraft designers have faced for decades.

Walter Huang, a 38-year-old Apple Inc. engineer, died on March 23, 2018, after his Tesla Model X crashed into a highway barrier in Mountain View, CA. Tesla disavowed responsibility for the accident, stating: “The fundamental premise of both moral and legal liability is a broken promise, and there was none here: [Mr. Huang] was well aware that the Autopilot was not perfect [and the] only way for this accident to have occurred is if Mr. Huang was not paying attention to the road, despite the car providing multiple warnings to do so.”

This is the standard response from Tesla and Uber, the manufacturers of the automated vehicles involved in the six fatal accidents to date: the automated vehicle is not perfect, the driver knew it was not perfect, and if only the driver had been paying attention and heeded the vehicle’s warnings, the accident would never have occurred. However, as researchers focused on human-automation interaction in aviation and military operations, we cannot help but wonder if there really are no broken promises and no legal liabilities.

These automated vehicle accidents are predicted by the science of human-automation interaction and foreshadowed by the major aviation accidents caused, in large part, by naïve implementation of automation in the cockpit and airspace. Aviation has historically been plagued by designers ignoring defects until they have caused fatal accidents. We even have a term for this attitude: tombstone design. Acknowledging tragedies and the need to better understand their causes led aviation to become the canonical domain for understanding human-automation interaction in complex, safety-critical operations. Today, aviation is an incredibly safe mode of transportation, but we are constantly reminded of why we must respect the realities of human-automation interaction. A recent tragic example is the Boeing 737 MAX 8’s MCAS automation, which contributed to two crashes and the deaths of 346 people before the human-automation interaction failure was publicly acknowledged.

Science, including the science of human-automation interaction, has a critical role in determining legal liability, and courts appropriately rely on scientists and engineers to determine whether an accident, or harm, was foreseeable. Specifically, a designer could be found liable if, at the time of the accident, scientists knew there was a systematic relationship between the accident and the designer’s untaken precaution.

The scientific evidence is undeniable. There is a systematic relationship between the design of automated vehicles and the types of accidents that are occurring now and will inevitably continue to occur in the future. These accidents were not unforeseeable and the drivers were not exclusively to blame. In fact, the vehicle designs and fatalities are both symptoms of a larger failed system: the five levels of automation (LOA) for automated vehicles.

About the Authors:

Marc Canellas is the Vice-Chair of the IEEE-USA Artificial Intelligence and Autonomous Systems Policy Committee, and a Cybersecurity Service Scholar and Jacobson Business Scholar at the New York University School of Law in New York, NY, USA.

Rachel Haga is a member of the IEEE-USA Artificial Intelligence and Autonomous Systems Policy Committee, and a Data Scientist at Elicit Insights in New York, NY, USA.
