End-to-End Race Driving with Deep Reinforcement Learning

07/06/2018
by Maximilian Jaritz, et al.

We present research applying a state-of-the-art deep reinforcement learning algorithm to end-to-end driving without any mediated perception (object recognition, scene understanding). The proposed reward and learning strategies together lead to faster convergence and more robust driving using only RGB images from a forward-facing camera. An Asynchronous Advantage Actor Critic (A3C) framework is used to learn car control in a physically and graphically realistic rally game, with agents evolving simultaneously on tracks with a variety of road structures (turns, hills), graphics (seasons, locations), and physics (road adherence). A thorough evaluation is conducted, and generalization is demonstrated on unseen tracks and at legal speed limits. Open-loop tests on real image sequences show some domain adaptation capability of our method.
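To make the A3C setup mentioned above concrete, the following is a minimal sketch of an actor-critic network and loss in PyTorch. The convolutional architecture, input resolution (84×84 RGB), discrete action count (`n_actions=9`), and loss coefficients are illustrative assumptions, not the paper's actual design; A3C additionally runs many such learners asynchronously, which is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActorCriticNet(nn.Module):
    """Small CNN mapping an RGB frame to a discrete action policy and a
    scalar state value (the two heads of an A3C-style network).
    Sizes are illustrative, not the paper's architecture."""
    def __init__(self, n_actions=9):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
        )
        # For an 84x84 input, the conv stack yields a 32x9x9 feature map.
        self.fc = nn.Linear(32 * 9 * 9, 256)
        self.policy = nn.Linear(256, n_actions)  # actor head (logits)
        self.value = nn.Linear(256, 1)           # critic head

    def forward(self, x):
        h = self.conv(x).flatten(1)
        h = F.relu(self.fc(h))
        return self.policy(h), self.value(h)

def a3c_loss(logits, values, actions, returns, beta=0.01):
    """Advantage actor-critic loss: policy gradient weighted by the
    advantage, value regression, and an entropy bonus for exploration."""
    advantages = returns - values.squeeze(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    policy_loss = -(chosen * advantages.detach()).mean()
    value_loss = advantages.pow(2).mean()
    entropy = -(probs * log_probs).sum(dim=-1).mean()
    return policy_loss + 0.5 * value_loss - beta * entropy
```

In the full algorithm, each asynchronous worker would roll out its policy in the simulator, compute n-step returns from the driving reward, apply this loss, and push gradients to a shared model.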


