Perceive, Attend, and Drive: Learning Spatial Attention for Safe Self-Driving

by Bob Wei, et al.

In this paper, we propose an end-to-end self-driving network featuring a sparse attention module that learns to automatically attend to important regions of the input. The attention module specifically targets motion planning, whereas prior literature applied attention only to perception tasks. Learning an attention mask directly targeted at motion planning significantly improves planner safety by focusing computation on the regions that matter. Furthermore, visualizing the attention improves the interpretability of end-to-end self-driving.
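The core idea, a learned spatial mask that sparsifies where the downstream planner spends computation, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 1x1-convolution mask head, the thresholding rule, and all variable names here are assumptions for the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention_mask(features, w, b=0.0, threshold=0.5):
    """Hypothetical mask head: project the C channels at each spatial
    location to a scalar score (a 1x1 conv), squash to [0, 1], then
    zero out low-scoring locations to obtain a sparse mask. The hard
    threshold is a stand-in for the learned sparsity in the paper."""
    scores = np.tensordot(w, features, axes=([0], [0])) + b  # (H, W)
    mask = sigmoid(scores)
    return np.where(mask >= threshold, mask, 0.0)

def attend(features, mask):
    """Broadcast the (H, W) mask over every channel, so masked-out
    regions contribute nothing to downstream motion planning."""
    return features * mask[None, :, :]

# Toy bird's-eye-view feature map with C=8 channels on a 16x16 grid.
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16, 16))
w = rng.standard_normal(8) * 0.1

mask = spatial_attention_mask(feats, w)
attended = attend(feats, mask)
```

Because the mask is spatial rather than channel-wise, it can be rendered directly over the input scene, which is what makes the attention visualizable for interpretability.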
