To Explain or Not to Explain: A Study on the Necessity of Explanations for Autonomous Vehicles

06/21/2020
by Yuan Shen, et al.

Explainable AI, in the context of autonomous systems such as self-driving cars, has drawn broad interest from researchers. Recent studies have found that providing explanations for an autonomous vehicle's actions has many benefits (e.g., increased trust and acceptance), but they place little emphasis on when an explanation is needed and how the content of an explanation changes with context. In this work, we investigate in which scenarios people need explanations and how the critical degree of explanation shifts with situations and driver types. Through a user experiment, we ask participants to evaluate how necessary an explanation is and measure its impact on their trust in self-driving cars across different contexts. We also present a self-driving explanation dataset with first-person explanations and an associated measure of explanation necessity for 1,103 video clips, augmenting the Berkeley Deep Drive Attention dataset. Additionally, we propose a learning-based model that predicts, in real time, how necessary an explanation is for a given situation, using camera data as input. Our research reveals that driver types and context dictate whether or not an explanation is necessary and what content is helpful for improved interaction and understanding.
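To make the last point concrete, the sketch below shows one plausible shape for a learning-based necessity predictor that maps a short clip of camera frames to a scalar "explanation necessity" score. This is a minimal illustration under stated assumptions, not the authors' implementation: the ResNet-18 frame encoder, the GRU temporal aggregation, and the class name ExplanationNecessityModel are all assumptions made for the example.

```python
# Minimal sketch (assumed architecture, not the paper's model):
# regress an explanation-necessity score in [0, 1] from dashcam frames.
import torch
import torch.nn as nn
from torchvision import models

class ExplanationNecessityModel(nn.Module):
    """Predicts a scalar necessity score for a short video clip."""

    def __init__(self, hidden_dim: int = 128):
        super().__init__()
        # Frame-level features from a ResNet-18 backbone (weights=None keeps
        # the sketch offline; pretrained weights could be used instead).
        backbone = models.resnet18(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # (B*T, 512, 1, 1)
        # Aggregate per-frame features over time with a GRU.
        self.temporal = nn.GRU(input_size=512, hidden_size=hidden_dim, batch_first=True)
        # Map the final hidden state to a score in [0, 1].
        self.head = nn.Sequential(nn.Linear(hidden_dim, 1), nn.Sigmoid())

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, frames, 3, H, W) camera input
        b, t, c, h, w = clips.shape
        feats = self.encoder(clips.view(b * t, c, h, w)).view(b, t, -1)
        _, last_hidden = self.temporal(feats)
        return self.head(last_hidden[-1]).squeeze(-1)  # one score per clip

# Usage example with dummy data: 2 clips of 8 frames each.
model = ExplanationNecessityModel()
dummy_clips = torch.randn(2, 8, 3, 224, 224)
print(model(dummy_clips).shape)  # torch.Size([2])
```

A per-frame encoder followed by lightweight temporal pooling is a common choice when the score must be produced in real time, since frame features can be computed as the video streams in.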


Related research

05/21/2019 · Look Who's Talking Now: Implications of AV's Explanations on Driver's Trust, AV Preference, Anxiety and Mental Workload
Explanations given by automation are often used to promote automation ad...

04/12/2023 · Textual Explanations for Automated Commentary Driving
The provision of natural language explanations for the predictions of de...

09/28/2022 · From Specification Models to Explanation Models: An Extraction and Refinement Process for Timed Automata
Autonomous systems control many tasks in our daily lives. To increase tr...

06/17/2022 · A Human-Centric Method for Generating Causal Explanations in Natural Language for Autonomous Vehicle Motion Planning
Inscrutable AI systems are difficult to trust, especially if they operat...

07/15/2022 · Investigating Explanations in Conditional and Highly Automated Driving: The Effects of Situation Awareness and Modality
With the level of automation increases in vehicles, such as conditional ...

09/27/2020 · Measure Utility, Gain Trust: Practical Advice for XAI Researcher
Research into the explanation of machine learning models, i.e., explaina...

07/30/2018 · Textual Explanations for Self-Driving Vehicles
Deep neural perception and control networks have become key components o...
