Trying AGAIN instead of Trying Longer: Prior Learning for Automatic Curriculum Learning

04/07/2020
by Rémy Portelas, et al.

A major challenge in the Deep RL (DRL) community is to train agents that generalize to unseen situations, which is often approached by training them on a diverse set of tasks (or environments). A powerful method to foster diversity is to procedurally generate tasks by sampling their parameters from a multi-dimensional distribution, which in particular makes it possible to propose a different task for each training episode. In practice, obtaining the high diversity of training tasks necessary for generalization requires complex procedural generation systems. With such generators, it is hard to obtain prior knowledge on which tasks are learnable at all (many generated tasks may be unlearnable), what their relative difficulty is, and what task distribution ordering is most efficient for training. A typical solution in such cases is to rely on some form of Automated Curriculum Learning (ACL) to adapt the sampling distribution. One limit of current approaches is their need to explore the task space to detect progress niches over time, which wastes training time. Additionally, we hypothesize that the induced noise in the training data may impair the performance of brittle DRL learners. We address this problem by proposing a two-stage ACL approach where 1) a teacher algorithm first learns to train a DRL agent with a high-exploration curriculum, and then 2) distills learned priors from the first run to generate an "expert curriculum" to re-train the same agent from scratch. Besides demonstrating 50% improvements on average over the current state of the art, the objective of this work is to give a first example of a new research direction oriented towards refining ACL techniques over multiple learners, which we call Classroom Teaching.
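To make the two-stage procedure concrete, here is a minimal, self-contained Python sketch of the idea: a first high-exploration run records which tasks yielded learning progress, that history is distilled into a fixed sampling prior (the "expert curriculum"), and a fresh learner is re-trained from scratch under that prior. All names (ToyLearner, high_exploration_run, distill_expert_curriculum, retrain_from_scratch) and the toy 1-D task space are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of the two-stage "try AGAIN" curriculum idea described above.
# The toy learner and sampling strategies are illustrative assumptions only.

class ToyLearner:
    """Stand-in for a DRL agent: a single competence value on a 1-D task parameter."""
    def __init__(self):
        self.skill = 0.05

    def train_on(self, task_difficulty):
        # Learning progress is highest when task difficulty matches current skill.
        gain = 0.02 * np.exp(-((task_difficulty - self.skill) ** 2) / 0.02)
        self.skill = min(1.0, self.skill + gain)
        return gain  # proxy for learning progress on this episode

def high_exploration_run(n_episodes=2000, rng=np.random.default_rng(0)):
    """Stage 1: explore the task space broadly, record which tasks yielded progress."""
    learner, history = ToyLearner(), []
    for _ in range(n_episodes):
        task = rng.uniform(0.0, 1.0)          # high-exploration sampling
        progress = learner.train_on(task)
        history.append((task, progress))
    return history

def distill_expert_curriculum(history, n_bins=20):
    """Distill stage-1 history into a fixed sampling prior over task bins."""
    tasks, progress = np.array(history).T
    bins = np.clip((tasks * n_bins).astype(int), 0, n_bins - 1)
    prior = np.array([progress[bins == b].sum() for b in range(n_bins)])
    prior = np.maximum(prior, 1e-6)
    return prior / prior.sum()

def retrain_from_scratch(prior, n_episodes=2000, rng=np.random.default_rng(1)):
    """Stage 2: re-train a fresh learner, sampling tasks from the distilled prior."""
    learner, n_bins = ToyLearner(), len(prior)
    for _ in range(n_episodes):
        b = rng.choice(n_bins, p=prior)
        task = (b + rng.uniform()) / n_bins
        learner.train_on(task)
    return learner.skill

history = high_exploration_run()
prior = distill_expert_curriculum(history)
print("final competence after expert-curriculum re-run:", retrain_from_scratch(prior))
```

In this sketch the distilled prior plays the role of the "expert curriculum": instead of re-exploring the task space, the second run spends its episodes on the regions that produced learning progress the first time.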
