Learning to Guide Multiple Heterogeneous Actors from a Single Human Demonstration via Automatic Curriculum Learning in StarCraft II

05/11/2022
by Nicholas Waytowich, et al.

Traditionally, learning from human demonstrations via direct behavior cloning can lead to high-performance policies, provided the algorithm has access to large amounts of high-quality data covering the scenarios the agent is most likely to encounter during operation. In real-world settings, however, expert data is limited, and it is desirable to train an agent whose behavior policy is general enough to handle situations that were not demonstrated by the human expert. An alternative is to learn these policies with no supervision via deep reinforcement learning, but such algorithms require large amounts of computing time to perform well on complex tasks with high-dimensional state and action spaces, such as those found in StarCraft II. Automatic curriculum learning is a recent mechanism comprising techniques designed to speed up deep reinforcement learning by adjusting the difficulty of the current task according to the agent's current capabilities. Designing a proper curriculum, however, can be challenging for sufficiently complex tasks, so we leverage human demonstrations to guide agent exploration during training. In this work, we train deep reinforcement learning agents that command multiple heterogeneous actors, where starting positions and overall task difficulty are controlled by a curriculum generated automatically from a single human demonstration. Our results show that an agent trained via automatic curriculum learning can outperform state-of-the-art deep reinforcement learning baselines and match the performance of the human expert in a simulated command and control task in StarCraft II modeled on a real military scenario.
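To make the curriculum idea concrete, the sketch below shows one common way a single demonstration can drive an automatic curriculum: episodes start from states recorded in the demonstration, beginning near the end of the trajectory (easy) and moving toward its start (hard) as the agent's recent success rate improves. This is a minimal illustrative sketch based only on the abstract's description, not the paper's actual algorithm; the class name `DemoGuidedCurriculum`, the thresholds, and the placeholder demo states are all assumptions introduced here.

```python
import random


class DemoGuidedCurriculum:
    """Illustrative sketch: curriculum over start states taken from one demonstration.

    Lower stage index = start earlier in the demonstration = harder task.
    All names and thresholds here are hypothetical, not from the paper.
    """

    def __init__(self, demo_states, window=10, promote_at=0.8, demote_at=0.3):
        self.demo_states = demo_states          # states recorded from the human demo
        self.stage = len(demo_states) - 1       # current earliest allowed start index
        self.window = window                    # episodes used to estimate success rate
        self.promote_at = promote_at            # success rate needed to increase difficulty
        self.demote_at = demote_at              # success rate below which to ease difficulty
        self.recent = []                        # rolling record of episode outcomes

    def sample_start_state(self):
        """Pick an episode start state at or after the current curriculum stage."""
        idx = random.randint(self.stage, len(self.demo_states) - 1)
        return self.demo_states[idx]

    def report_episode(self, success):
        """Record an episode outcome and adjust the stage if warranted."""
        self.recent.append(1.0 if success else 0.0)
        if len(self.recent) < self.window:
            return
        rate = sum(self.recent[-self.window:]) / self.window
        if rate >= self.promote_at and self.stage > 0:
            self.stage -= 1                     # harder: start earlier in the demo
            self.recent.clear()
        elif rate <= self.demote_at and self.stage < len(self.demo_states) - 1:
            self.stage += 1                     # easier: start closer to the goal
            self.recent.clear()


if __name__ == "__main__":
    # Toy usage with placeholder demo states (e.g. unit positions per timestep).
    demo = [f"demo_state_{t}" for t in range(50)]
    curriculum = DemoGuidedCurriculum(demo)
    for episode in range(200):
        start = curriculum.sample_start_state()
        # ... run one RL episode from `start` (environment interaction omitted) ...
        curriculum.report_episode(success=random.random() < 0.7)
    print("final curriculum stage:", curriculum.stage)
```

In this style of curriculum the demonstration is used only to propose start states and pace difficulty; the agent's policy is still learned with standard reinforcement learning rather than cloned from the expert.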


