
Exploring Imitation Learning for Autonomous Driving with Feedback Synthesizer and Differentiable Rasterization

by Jinyun Zhou, et al.

We present a learning-based planner that robustly drives a vehicle by mimicking human driving behavior. We adopt a mid-to-mid approach, which lets us manipulate the input to our imitation learning network freely. With that in mind, we propose a novel feedback synthesizer for data augmentation: it lets our agent gain driving experience in various previously unseen environments it is likely to encounter, improving overall performance. This is in contrast to prior works that rely purely on random synthesizers. Furthermore, rather than committing entirely to imitation, we introduce task losses that penalize undesirable behaviors such as collision and driving off-road. Unlike prior works, this is done with a differentiable vehicle rasterizer that directly converts the waypoints output by the network into images. This avoids the use of heavyweight ConvLSTM networks and therefore yields faster model inference. For the network architecture, we exploit an attention mechanism that allows the network to reason about critical objects in the scene and produce more interpretable attention heatmaps. To further enhance the safety and robustness of the network, we add an optional optimization-based post-processing planner that improves driving comfort. We comprehensively validate our method's effectiveness in scenarios specifically created for evaluating self-driving vehicles. Results demonstrate that our learning-based planner achieves high intelligence and can handle complex situations. Detailed ablation and visualization analyses further demonstrate the effectiveness of each proposed module.
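To make the rasterizer-plus-task-loss idea concrete, here is a minimal sketch of the core mechanism, not the authors' implementation: waypoints are rendered as smooth Gaussian blobs on an image grid, so an image-space penalty (e.g. overlap with an obstacle mask) is a differentiable function of the waypoint coordinates. All names and parameters (`soft_rasterize`, `sigma`, the 64×64 grid) are illustrative assumptions; the paper's rasterizer renders the full oriented vehicle box, and in practice this would run inside an autodiff framework so gradients flow back to the network's waypoint outputs.

```python
import numpy as np

def soft_rasterize(waypoints, grid_size=64, sigma=1.5):
    """Render waypoints as Gaussian blobs on a grid_size x grid_size image.

    Every pixel is a smooth function of the waypoint coordinates, which is
    what makes image-space losses differentiable w.r.t. the waypoints.
    """
    ys, xs = np.mgrid[0:grid_size, 0:grid_size].astype(float)
    img = np.zeros((grid_size, grid_size))
    for (px, py) in waypoints:
        img += np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2.0 * sigma ** 2))
    return np.clip(img, 0.0, 1.0)

def collision_loss(waypoints, obstacle_mask, **kw):
    """Task loss: penalize overlap between the rendered agent and obstacles."""
    return float((soft_rasterize(waypoints, **kw) * obstacle_mask).sum())
```

As a usage check, a waypoint placed inside an obstacle region yields a large loss, while one far away yields a near-zero loss, so gradient descent on this penalty pushes planned waypoints out of obstacles.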



