Hierarchical Generative Adversarial Imitation Learning with Mid-level Input Generation for Autonomous Driving on Urban Environments

Deriving robust control policies for realistic urban navigation scenarios is not a trivial task. In an end-to-end approach, these policies must map high-dimensional images from the vehicle's cameras to low-level actions such as steering and throttle. While pure Reinforcement Learning (RL) approaches rely exclusively on reward signals, Generative Adversarial Imitation Learning (GAIL) agents learn from expert demonstrations while interacting with the environment, which favors GAIL on tasks for which a reward signal is difficult to derive. In this work, the hGAIL architecture is proposed to solve autonomous vehicle navigation in an end-to-end manner, mapping sensory perceptions directly to low-level actions while simultaneously learning mid-level input representations of the agent's environment. hGAIL is a hierarchical Adversarial Imitation Learning architecture composed of two main modules: a GAN (Generative Adversarial Network) that generates the Bird's-Eye View (BEV) representation mainly from the images of the vehicle's three frontal cameras, and a GAIL module that learns to control the vehicle mainly from the BEV predictions of the GAN. Our experiments show that GAIL operating directly on the camera images (without the BEV) fails to learn the task at all, whereas the trained hGAIL agent was able to navigate autonomously through every intersection of the city.
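The two-stage data flow described in the abstract (cameras → GAN-generated BEV → GAIL policy → low-level actions) can be sketched as follows. This is a minimal illustrative sketch only: the stub functions, shapes, and action ranges are assumptions for illustration, not the authors' actual networks, which would be trained convolutional models.

```python
import numpy as np

rng = np.random.default_rng(0)

def bev_generator(cameras: np.ndarray) -> np.ndarray:
    """Stand-in for the GAN generator: maps three frontal camera images
    to a Bird's-Eye View (BEV) prediction. A real implementation would be
    an adversarially trained conv encoder-decoder; here we just average
    the cameras into a single plane and crop an illustrative 64x64 grid."""
    # cameras: (n_cams, height, width, channels), e.g. (3, 88, 200, 3)
    plane = cameras.mean(axis=(0, 3))
    return plane[:64, :64]

def gail_policy(bev: np.ndarray) -> np.ndarray:
    """Stand-in for the GAIL policy: maps the BEV representation to
    low-level actions, steering in [-1, 1] and throttle in [0, 1]."""
    features = bev.flatten()
    w_steer = rng.normal(scale=1e-3, size=features.size)  # random toy weights
    steering = np.tanh(features @ w_steer)                 # bounded to [-1, 1]
    throttle = 1.0 / (1.0 + np.exp(-(features.mean() - 0.5)))  # sigmoid to [0, 1]
    return np.array([steering, throttle])

# Hierarchical forward pass: cameras -> BEV (GAN module) -> actions (GAIL module).
cameras = rng.random((3, 88, 200, 3))
bev = bev_generator(cameras)
action = gail_policy(bev)
```

The key design point the sketch mirrors is that the policy never sees the raw camera pixels: it acts only on the mid-level BEV produced by the generator, which is what allows end-to-end learning to succeed where camera-only GAIL fails.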


