Open-Ended Learning Leads to Generally Capable Agents

07/27/2021
by Open-Ended Learning Team, et al.

In this work we create agents that can perform well beyond a single, individual task, and that exhibit much wider generalisation of behaviour across a massive, rich space of challenges. We define a universe of tasks within an environment domain and demonstrate the ability to train agents that are generally capable across this vast space and beyond. The environment is natively multi-agent, spanning the continuum of competitive, cooperative, and independent games, which are situated within procedurally generated physical 3D worlds. The resulting space is exceptionally diverse in terms of the challenges posed to agents, and as such, even measuring the learning progress of an agent is an open research problem. We propose an iterative notion of improvement between successive generations of agents, rather than seeking to maximise a singular objective, allowing us to quantify progress despite tasks being incomparable in terms of achievable rewards. We show that by constructing an open-ended learning process, which dynamically changes the training task distributions and training objectives such that the agent never stops learning, we achieve consistent learning of new behaviours. The resulting agent is able to score reward in every one of our humanly solvable evaluation levels, with behaviour generalising to many held-out points in the universe of tasks. Examples of this zero-shot generalisation include good performance on Hide and Seek, Capture the Flag, and Tag. Through analysis and hand-authored probe tasks we characterise the behaviour of our agent, and find interesting emergent heuristic behaviours such as trial-and-error experimentation, simple tool use, option switching, and cooperation. Finally, we demonstrate that the general capabilities of this agent could unlock larger-scale transfer of behaviour through cheap finetuning.
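
Two methodological ideas in the abstract, measuring progress with normalised scores rather than raw reward, and dynamically filtering the training task distribution so the agent keeps facing tasks at the frontier of its ability, can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the paper's actual procedure: the function names, the choice of normalising each task's score against a reference generation, and the easy/hard thresholds are assumptions made here for illustration.

# Illustrative sketch only. Per-task scores are normalised against a reference
# generation's scores so that tasks with very different reward scales can be
# compared, then summarised by percentiles; a simple filter keeps only tasks
# the agent has not yet mastered but can already score on.
import numpy as np


def normalised_percentiles(agent_scores, reference_scores, percentiles=(10, 50)):
    """Summarise progress across incomparable tasks.

    agent_scores, reference_scores: dicts mapping task_id -> mean reward.
    Returns a dict mapping percentile -> value of the normalised-score
    distribution over all reference tasks.
    """
    ratios = []
    for task, ref in reference_scores.items():
        score = agent_scores.get(task, 0.0)
        # Guard against degenerate reference scores; treat them as solved.
        ratios.append(score / ref if ref > 0 else 1.0)
    ratios = np.asarray(ratios)
    return {p: float(np.percentile(ratios, p)) for p in percentiles}


def filter_training_tasks(agent_scores, reference_scores,
                          too_easy=0.9, too_hard=0.05):
    """Keep tasks where the agent neither dominates the reference (too easy)
    nor scores almost nothing (currently too hard)."""
    keep = []
    for task, ref in reference_scores.items():
        ratio = agent_scores.get(task, 0.0) / ref if ref > 0 else 1.0
        if too_hard < ratio < too_easy:
            keep.append(task)
    return keep


if __name__ == "__main__":
    reference = {"capture_the_flag": 4.0, "hide_and_seek": 2.0, "tag": 10.0}
    agent = {"capture_the_flag": 5.0, "hide_and_seek": 0.1, "tag": 6.0}
    print(normalised_percentiles(agent, reference))  # percentile -> normalised score
    print(filter_training_tasks(agent, reference))   # tasks worth training on next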


