Meta learning Framework for Automated Driving

06/11/2017
by Ahmad El Sallab, et al.

The success of automated driving deployment depends heavily on the ability to develop an efficient and safe driving policy. The problem is well formulated under the framework of optimal control as a cost-optimization problem. Model-based solutions using traditional planning are efficient but require knowledge of the environment model. Model-free solutions, on the other hand, suffer from sample inefficiency and require too many interactions with the environment, which is infeasible in practice. Methods under the reinforcement learning framework usually require a reward function, which is not available in the real world. Imitation learning improves sample efficiency by introducing prior knowledge obtained from demonstrated behavior, at the risk of exact behavior cloning that does not generalize to unseen environments. In this paper we propose a meta-learning framework, based on dataset aggregation, to improve the generalization of imitation learning algorithms. Under the proposed framework, we introduce MetaDAgger, a novel algorithm that tackles the generalization issues of traditional imitation learning. We use The Open Racing Car Simulator (TORCS) to test our algorithm. Results on unseen test tracks show significant improvement over traditional imitation learning algorithms, improving learning time and sample efficiency at the same time. The results are further supported by visualizations of the learned features, demonstrating generalization of the captured details.
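To make the dataset-aggregation idea concrete, here is a minimal sketch of a generic DAgger-style loop (not the paper's MetaDAgger): the current learner drives, the expert labels every visited state, and all labels are aggregated into one dataset the learner is refit on. The toy 1-D environment, the expert, and the 1-nearest-neighbour "learner" are all hypothetical stand-ins for illustration.

```python
import random

def dagger(env_step, expert_policy, n_iters=5, horizon=20):
    """DAgger-style dataset aggregation (illustrative sketch only):
    roll out the current learner, label the states it visits with the
    expert's actions, aggregate everything, and refit the learner."""
    dataset = []  # aggregated (state, expert_action) pairs

    def learner_policy(state):
        # Toy learner: 1-nearest-neighbour lookup over the aggregated
        # dataset; pick a random action before any data exists.
        if not dataset:
            return random.choice([-1, 1])
        _, action = min(dataset, key=lambda pair: abs(pair[0] - state))
        return action

    for _ in range(n_iters):
        state = 0.0  # each rollout starts from the same initial state
        for _ in range(horizon):
            action = learner_policy(state)                 # learner acts
            dataset.append((state, expert_policy(state)))  # expert labels
            state = env_step(state, action)
    return learner_policy

# Toy 1-D task: the (hypothetical) expert always pushes the state toward 0.
expert = lambda s: -1 if s > 0 else 1
env = lambda s, a: s + 0.1 * a
policy = dagger(env, expert)
```

Because the expert labels the states the *learner* actually reaches (rather than only the expert's own trajectory), the aggregated dataset covers the learner's mistake states, which is the mechanism DAgger-style methods use to reduce compounding errors.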


Related research

03/08/2019
Dyna-AIL: Adversarial Imitation Learning by Planning
Adversarial methods for imitation learning have been shown to perform we...

03/14/2019
Simulating Emergent Properties of Human Driving Behavior Using Multi-Agent Reward Augmented Imitation Learning
Recent developments in multi-agent imitation learning have shown promisi...

03/31/2021
DEALIO: Data-Efficient Adversarial Learning for Imitation from Observation
In imitation learning from observation (IfO), a learning agent seeks to im...

11/02/2020
NEARL: Non-Explicit Action Reinforcement Learning for Robotic Control
Traditionally, reinforcement learning methods predict the next action ba...

06/18/2019
Sample-efficient Adversarial Imitation Learning from Observation
Imitation from observation is the framework of learning tasks by observi...

12/02/2020
DERAIL: Diagnostic Environments for Reward And Imitation Learning
The objective of many real-world tasks is complex and difficult to proce...

05/11/2022
Delayed Reinforcement Learning by Imitation
When the agent's observations or interactions are delayed, classic reinf...
