Adversarial Mixture Density Networks: Learning to Drive Safely from Collision Data

07/09/2021
by Sampo Kuutti et al.

Imitation learning has been widely used to learn control policies for autonomous driving from pre-recorded data. However, imitation-learning-based policies have been shown to be susceptible to compounding errors when encountering states outside of the training distribution. Further, these agents have been demonstrated to be easily exploitable by adversarial road users aiming to create collisions. To overcome these shortcomings, we introduce Adversarial Mixture Density Networks (AMDN), which learn two distributions from separate datasets. The first is a distribution of safe actions learned from a dataset of naturalistic human driving. The second is a distribution representing unsafe actions likely to lead to collision, learned from a dataset of collisions. During training, we leverage these two distributions to provide an additional loss based on the similarity of the two distributions. By penalising the safe action distribution according to its similarity to the unsafe action distribution when training on the collision dataset, a more robust and safe control policy is obtained. We demonstrate the proposed AMDN approach in a vehicle-following use case, and evaluate it under naturalistic and adversarial testing environments. We show that despite its simplicity, AMDN provides significant safety benefits for the learned control policy when compared to pure imitation learning or standard mixture density network approaches.
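The abstract describes an imitation loss augmented, on collision batches, by a penalty that grows as the safe action distribution resembles the unsafe one. The sketch below illustrates this idea with single-Gaussian action heads and KL divergence as the similarity measure; the paper does not specify these choices here, so the exact distance, the `exp(-KL)` similarity mapping, and the weight `beta` are illustrative assumptions, not the authors' exact formulation.

```python
import math

def gaussian_nll(mu, sigma, action):
    """Negative log-likelihood of an observed action under a Gaussian policy head."""
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (action - mu) ** 2 / (2 * sigma ** 2)

def gaussian_kl(mu_p, sigma_p, mu_q, sigma_q):
    """KL(p || q) between two univariate Gaussians, in closed form."""
    return (math.log(sigma_q / sigma_p)
            + (sigma_p ** 2 + (mu_p - mu_q) ** 2) / (2 * sigma_q ** 2)
            - 0.5)

def amdn_loss(safe, unsafe, target_action, on_collision_batch, beta=0.1):
    """Illustrative AMDN-style loss (hypothetical formulation).

    safe, unsafe -- (mu, sigma) of the safe and unsafe action distributions.
    Imitation NLL on the safe head, plus, on collision batches only,
    a bounded similarity penalty: exp(-KL) is 1 when the two
    distributions coincide and decays toward 0 as they separate.
    """
    nll = gaussian_nll(safe[0], safe[1], target_action)
    if on_collision_batch:
        similarity = math.exp(-gaussian_kl(safe[0], safe[1], unsafe[0], unsafe[1]))
        return nll + beta * similarity
    return nll
```

With this sketch, a safe head that overlaps the unsafe distribution incurs a higher loss on collision data than one that has moved away from it, which is the qualitative behaviour the abstract describes.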

Related research:

- DropoutDAgger: A Bayesian Approach to Safe Imitation Learning (09/18/2017). While imitation learning is becoming common practice in robotics, this a...
- A Benchmark Comparison of Imitation Learning-based Control Policies for Autonomous Racing (09/29/2022). Autonomous racing with scaled race cars has gained increasing attention ...
- ChoiceNet: Robust Learning by Revealing Output Correlations (05/16/2018). In this paper, we focus on the supervised learning problem with corrupte...
- Density-Aware Federated Imitation Learning for Connected and Automated Vehicles with Unsignalized Intersection (05/05/2021). Intelligent Transportation System (ITS) has become one of the essential ...
- Fail-Safe Generative Adversarial Imitation Learning (03/03/2022). For flexible yet safe imitation learning (IL), we propose a modular appr...
- ARC: Adversarially Robust Control Policies for Autonomous Vehicles (07/09/2021). Deep neural networks have demonstrated their capability to learn control...
- Reactive and Safe Road User Simulations using Neural Barrier Certificates (09/14/2021). Reactive and safe agent modelings are important for nowadays traffic sim...
