Perceive, Attend, and Drive: Learning Spatial Attention for Safe Self-Driving

11/02/2020
by   Bob Wei, et al.

In this paper, we propose an end-to-end self-driving network featuring a sparse attention module that learns to automatically attend to important regions of the input. The attention module specifically targets motion planning, whereas prior literature applied attention only to perception tasks. Learning an attention mask directly targeted at motion planning significantly improves planner safety by focusing computation on the regions that matter. Furthermore, visualizing the attention improves the interpretability of end-to-end self-driving.
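To make the idea concrete, here is a minimal toy sketch of sparse spatial attention, assuming a flattened grid of per-cell feature vectors and a simple linear scorer standing in for a learned attention head. The function name, the scoring scheme, and the hard top-k masking are illustrative assumptions, not the paper's actual architecture; the point is only that downstream (planning) computation sees non-zero features in a small attended subset of the scene.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sparse_attention_mask(features, weights, k):
    """Hypothetical sparse spatial attention (illustration only).

    features: list of per-cell feature vectors (a flattened H*W grid)
    weights:  scoring vector, standing in for a learned 1x1 conv head
    k:        number of grid cells to keep in the sparse mask
    """
    # Score each cell with a dot product, then normalize to probabilities.
    scores = [sum(w * f for w, f in zip(weights, cell)) for cell in features]
    probs = softmax(scores)
    # Keep only the top-k cells: this is what makes the attention sparse.
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    keep = set(top)
    # Attended cells keep probability-weighted features; the rest are zeroed,
    # so later stages spend their computation only on attended regions.
    return [[p * x for x in cell] if i in keep else [0.0] * len(cell)
            for i, (cell, p) in enumerate(zip(features, probs))]

# Example: a 4-cell grid with 2-dim features; keep the 2 highest-scoring cells.
grid = [[1.0, 0.0], [0.0, 2.0], [3.0, 1.0], [0.5, 0.5]]
masked = sparse_attention_mask(grid, weights=[1.0, 1.0], k=2)
```

In the example, cells 1 and 2 have the highest scores, so cells 0 and 3 come back fully zeroed; the surviving cells can also be read off directly as a visualizable attention map, which is the interpretability benefit the abstract mentions.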


Related research

08/13/2020  Perceive, Predict, and Plan: Safe Motion Planning Through Interpretable Semantic Representations
In this paper we propose a novel end-to-end learnable network that perfo...

03/08/2020  Task-Motion Planning for Safe and Efficient Urban Driving
Autonomous vehicles need to plan at the task level to compute a sequence...

11/28/2019  DeepGoal: Learning to Drive with driving intention from Human Control Demonstration
Recent research on automotive driving developed an efficient end-to-end ...

08/13/2020  DSDNet: Deep Structured self-Driving Network
In this paper, we propose the Deep Structured self-Driving Network (DSDN...

08/13/2020  Testing the Safety of Self-driving Vehicles by Simulating Perception and Prediction
We present a novel method for testing the safety of self-driving vehicle...

06/04/2020  The Importance of Prior Knowledge in Precise Multimodal Prediction
Roads have well defined geometries, topologies, and traffic rules. While...

02/04/2020  Tackling Existence Probabilities of Objects with Motion Planning for Automated Urban Driving
Motion planners take uncertain information about the environment as an i...
