Attentional Bottleneck: Towards an Interpretable Deep Driving Network

05/08/2020
by Jinkyu Kim, et al.

Deep neural networks are a key component of behavior prediction and motion generation for self-driving cars. One of their main drawbacks is a lack of transparency: ideally, they should provide easy-to-interpret rationales for what triggers certain behaviors. We propose an architecture called Attentional Bottleneck with the goal of improving transparency. Our key idea is to combine visual attention, which identifies what aspects of the input the model is using, with an information bottleneck that enables the model to use only those aspects of the input that are important. This not only produces sparse and interpretable attention maps (e.g., focusing only on specific vehicles in the scene), but it adds this transparency at no cost to model accuracy. In fact, we find slight improvements in accuracy when applying Attentional Bottleneck to the ChauffeurNet model, whereas accuracy deteriorates with a traditional visual attention model.
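To make the key idea concrete, here is a minimal sketch of how visual attention and an information bottleneck can be combined: an attention map gates the backbone features, and a KL penalty against a sparse prior acts as the bottleneck, discouraging the model from attending to more of the input than it needs. This is a hypothetical PyTorch illustration, not the paper's implementation; the module name AttentionalBottleneck, the sigmoid/Bernoulli-KL formulation, and the 0.1 sparsity prior are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class AttentionalBottleneck(nn.Module):
    """Hypothetical sketch: a visual attention map whose values are
    regularized toward a sparse prior, so the downstream driving model
    can only use a small, interpretable part of the input."""

    def __init__(self, channels: int, prior: float = 0.1):
        super().__init__()
        # 1x1 conv predicts a per-pixel attention logit from the features
        self.attn_logits = nn.Conv2d(channels, 1, kernel_size=1)
        self.prior = prior  # assumed sparsity prior, not from the paper

    def forward(self, features: torch.Tensor):
        # features: (B, C, H, W) from a perception backbone
        attn = torch.sigmoid(self.attn_logits(features))  # (B, 1, H, W), in [0, 1]
        gated = features * attn  # keep only the attended evidence

        # Bottleneck penalty: KL between Bernoulli(attn) and Bernoulli(prior),
        # pushing most attention values toward zero (sparse, interpretable maps).
        p = torch.full_like(attn, self.prior)
        eps = 1e-8  # numerical stability
        kl = (attn * torch.log(attn / p + eps)
              + (1 - attn) * torch.log((1 - attn) / (1 - p) + eps)).mean()
        return gated, attn, kl

# Usage: gate backbone features and collect the bottleneck penalty.
feats = torch.randn(2, 64, 50, 50)
gated, attn_map, kl = AttentionalBottleneck(channels=64)(feats)
```

In training, the penalty would be added to the driving loss with a trade-off weight, e.g. loss = task_loss + beta * kl; a larger beta yields sparser attention maps, and the paper's finding is that this sparsity can be obtained without hurting accuracy.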


Related research

03/30/2017
Interpretable Learning for Self-Driving Cars by Visualizing Causal Attention
Deep neural perception and control networks are likely to be a key compo...

07/31/2022
Learning an Interpretable Model for Driver Behavior Prediction with Inductive Biases
To plan safe maneuvers and act with foresight, autonomous vehicles must ...

07/30/2018
Textual Explanations for Self-Driving Vehicles
Deep neural perception and control networks have become key components o...

12/07/2022
Towards Explainable Motion Prediction using Heterogeneous Graph Representations
Motion prediction systems aim to capture the future behavior of traffic ...

11/16/2019
Grounding Human-to-Vehicle Advice for Self-driving Vehicles
Recent success suggests that deep neural control networks are likely to ...

07/01/2019
Do Transformer Attention Heads Provide Transparency in Abstractive Summarization?
Learning algorithms become more powerful, often at the cost of increased...

03/04/2021
Checkpointing SPAdes for Metagenome Assembly: Transparency versus Performance in Production
The SPAdes assembler for metagenome assembly is a long-running applicati...
