Why do drivers and automation disengage the automation? Results from a study among Tesla users
A better understanding of automation disengagements can improve the safety and efficiency of automated driving systems. This study investigates the factors contributing to driver- and system-initiated disengagements by analyzing semi-structured interviews with 103 users of Tesla's Autopilot and FSD Beta. Main categories and sub-categories of disengagements were identified in the data, leading to the development of a triadic model of automation disengagements. The model treats the automation and the human operator as equivalent agents. It suggests that human operators disengage the automation when they anticipate failure, observe unnatural or unwanted automation behavior (e.g., erratic steering, running red lights), or believe the automation is not suited for certain environments (e.g., inclement weather, non-standard roads). Human operators' negative experiences, such as frustration, feelings of unsafety, and distrust, are also incorporated into the model, as these emotions can be triggered by (anticipated) automation behavior. The automation, in turn, monitors the human operator and may disengage itself if it detects insufficient vigilance or traffic rule violations. Moreover, human operators can be influenced by the reactions of passengers and other road users, leading them to disengage the automation if they sense discomfort, anger, or embarrassment caused by the system's actions. This research offers insights into the factors contributing to automation disengagements, highlighting not only the concerns of human operators but also the social aspects of the phenomenon. The findings also point to potential edge cases of automated vehicle technology, which may help to enhance the safety and efficiency of such systems.