Unsafe At Any Level: NHTSA's levels of automation are a liability for autonomous vehicle design and regulation

by Marc Canellas, et al.

Walter Huang, a 38-year-old Apple Inc. engineer, died on March 23, 2018, after his Tesla Model X crashed into a highway barrier in Mountain View, California. Tesla immediately disavowed responsibility for the accident. "The fundamental premise of both moral and legal liability is a broken promise, and there was none here: [Mr. Huang] was well aware that the Autopilot was not perfect [and the] only way for this accident to have occurred is if Mr. Huang was not paying attention to the road, despite the car providing multiple warnings to do so." This is the standard response from Tesla and Uber, the manufacturers of the automated vehicles involved in the six fatal accidents to date: the automated vehicle isn't perfect, the driver knew it wasn't perfect, and if only the driver had been paying attention and heeded the vehicle's warnings, the accident would never have occurred.

However, as researchers focused on human-automation interaction in aviation and military operations, we cannot help but wonder if there really are no broken promises and no legal liabilities. Science has a critical role in determining legal liability, and courts appropriately rely on scientists and engineers to determine whether an accident, or harm, was foreseeable. Specifically, a designer could be found liable if, at the time of the accident, scientists knew there was a systematic relationship between the accident and the designer's untaken precaution.

Nearly 70 years of research provides an undeniable answer: it is insufficient, inappropriate, and dangerous to automate everything you can and leave the rest to the human. There is a systematic relationship between the design of automated vehicles and the types of accidents that are occurring now and will inevitably continue to occur in the future. These accidents were not unforeseeable, and the drivers were not exclusively to blame.








