Curiosity-Driven Multi-Agent Exploration with Mixed Objectives

10/29/2022
by   Roben Delos Reyes, et al.

Intrinsic rewards have been increasingly used to mitigate the sparse reward problem in single-agent reinforcement learning. These intrinsic rewards encourage the agent to seek out novel experiences, guiding it to explore the environment sufficiently despite the lack of extrinsic rewards. Curiosity-driven exploration is a simple yet efficient approach that quantifies this novelty as the prediction error of the agent's curiosity module, an internal neural network trained to predict the agent's next state given its current state and action. We show here, however, that naively using this curiosity-driven approach to guide exploration in sparse reward cooperative multi-agent environments does not consistently lead to improved results. Straightforward multi-agent extensions of curiosity-driven exploration consider only individual or only collective novelty, and thus fail to provide the distinct but collaborative intrinsic reward signal that is essential for learning in cooperative multi-agent tasks. In this work, we propose a curiosity-driven multi-agent exploration method with the mixed objective of motivating the agents to explore the environment in ways that are both individually and collectively novel. First, we develop a two-headed curiosity module that is trained to predict the corresponding agent's next observation in the first head and the next joint observation in the second head. Second, we design the intrinsic reward to be the sum of the individual and joint prediction errors of this curiosity module. We empirically show that the combination of our curiosity module architecture and intrinsic reward formulation guides multi-agent exploration more efficiently than baseline approaches, thereby providing the best performance boost to MARL algorithms in cooperative navigation environments with sparse rewards.
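The idea described in the abstract can be illustrated with a minimal sketch. The class name, layer sizes, and the use of a shared encoder below are assumptions for illustration only (the paper's actual architecture and training details are not given in the abstract); the sketch shows only the two-headed prediction and the mixed intrinsic reward, i.e. the sum of the individual and joint prediction errors. Training of the module by gradient descent on these errors is omitted.

```python
import numpy as np

class TwoHeadedCuriosity:
    """Illustrative sketch (not the paper's implementation) of a
    two-headed curiosity module: head 1 predicts the agent's own next
    observation, head 2 predicts the joint next observation of all
    agents. The intrinsic reward is the sum of both prediction errors."""

    def __init__(self, obs_dim, n_agents, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = obs_dim + 1               # own observation + scalar action
        joint_dim = obs_dim * n_agents     # concatenated observations of all agents
        # shared encoder followed by two linear prediction heads (an assumption)
        self.W_enc = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.W_ind = rng.normal(0.0, 0.1, (hidden, obs_dim))    # individual head
        self.W_joint = rng.normal(0.0, 0.1, (hidden, joint_dim))  # joint head

    def forward(self, obs, action):
        # encode the agent's own observation and action
        h = np.tanh(np.concatenate([obs, [action]]) @ self.W_enc)
        return h @ self.W_ind, h @ self.W_joint

    def intrinsic_reward(self, obs, action, next_obs, next_joint_obs):
        pred_ind, pred_joint = self.forward(obs, action)
        err_ind = np.mean((pred_ind - next_obs) ** 2)            # individual novelty
        err_joint = np.mean((pred_joint - next_joint_obs) ** 2)  # collective novelty
        return err_ind + err_joint   # mixed objective: sum of the two errors

# Hypothetical usage: one agent in a 3-agent task with 4-dim observations.
module = TwoHeadedCuriosity(obs_dim=4, n_agents=3)
obs, next_obs = np.ones(4), np.zeros(4)
next_joint_obs = np.zeros(12)
reward = module.intrinsic_reward(obs, action=1.0, next_obs=next_obs,
                                 next_joint_obs=next_joint_obs)
```

In a training loop, this `reward` would be added to the (sparse) extrinsic reward for each agent, while the two heads are trained to minimize the same prediction errors, so states the module predicts poorly, individually or jointly, yield larger exploration bonuses.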

