Textual Explanations for Self-Driving Vehicles

07/30/2018
by Jinkyu Kim et al.

Deep neural perception and control networks have become key components of self-driving vehicles. User acceptance is likely to benefit from easy-to-interpret textual explanations which allow end-users to understand what triggered a particular behavior. Explanations may be triggered by the neural controller, namely introspective explanations, or informed by the neural controller's output, namely rationalizations. We propose a new approach to introspective explanations which consists of two parts. First, we use a visual (spatial) attention model to train a convolutional network end-to-end from images to the vehicle control commands, i.e., acceleration and change of course. The controller's attention identifies image regions that potentially influence the network's output. Second, we use an attention-based video-to-text model to produce textual explanations of model actions. The attention maps of controller and explanation model are aligned so that explanations are grounded in the parts of the scene that mattered to the controller. We explore two approaches to attention alignment, strong- and weak-alignment. Finally, we explore a version of our model that generates rationalizations, and compare with introspective explanations on the same video segments. We evaluate these models on a novel driving dataset with ground-truth human explanations, the Berkeley DeepDrive eXplanation (BDD-X) dataset. Code is available at https://github.com/JinkyuKimUCB/explainable-deep-driving.
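The two alignment strategies can be illustrated with a minimal sketch. The assumption here (not spelled out in the abstract) is that weak alignment penalizes divergence between the controller's and the explanation model's spatial attention distributions, while strong alignment simply has the explanation model reuse the controller's attention map. The grid size, variable names, and the use of KL divergence are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    """Normalize a logit map into an attention distribution."""
    e = np.exp(x - x.max())
    return e / e.sum()

def kl_divergence(p, q, eps=1e-8):
    """KL(p || q) between two flattened attention maps."""
    p = p.ravel() + eps
    q = q.ravel() + eps
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical attention maps over a 7x7 convolutional feature grid.
rng = np.random.default_rng(0)
controller_att = softmax(rng.normal(size=(7, 7)))   # vehicle controller
explainer_att = softmax(rng.normal(size=(7, 7)))    # video-to-text model

# Weak alignment: add a loss term that pulls the explanation model's
# attention toward the regions the controller actually used.
weak_alignment_loss = kl_divergence(controller_att, explainer_att)

# Strong alignment: the explanation model reuses the controller's
# attention map directly, so the divergence term vanishes.
strong_alignment_loss = kl_divergence(controller_att, controller_att)
```

Under this reading, weak alignment leaves the explanation model free to attend slightly differently (the loss only discourages divergence), whereas strong alignment guarantees that explanations are grounded in exactly the regions that drove the control output.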

Related research:

- Interpretable Learning for Self-Driving Cars by Visualizing Causal Attention (03/30/2017)
- Grounding Human-to-Vehicle Advice for Self-driving Vehicles (11/16/2019)
- A First Look: Towards Explainable TextVQA Models via Visual and Textual Explanations (04/29/2021)
- Attentional Bottleneck: Towards an Interpretable Deep Driving Network (05/08/2020)
- To Explain or Not to Explain: A Study on the Necessity of Explanations for Autonomous Vehicles (06/21/2020)
- Efficient textual explanations for complex road and traffic scenarios based on semantic segmentation (05/26/2022)
- Interpretable Semantic Textual Similarity: Finding and explaining differences between sentences (12/14/2016)
