
Believe What You See: Implicit Constraint Approach for Offline Multi-Agent Reinforcement Learning

by Yiqin Yang, et al.

Learning from datasets without interaction with the environment (offline learning) is an essential step toward applying reinforcement learning (RL) algorithms in real-world scenarios. However, compared with its single-agent counterpart, offline multi-agent RL introduces more agents with larger state and action spaces, which makes it more challenging yet has attracted little attention. We demonstrate that current offline RL algorithms are ineffective in multi-agent systems due to accumulated extrapolation error. In this paper, we propose a novel offline RL algorithm, named Implicit Constraint Q-learning (ICQ), which effectively alleviates extrapolation error by trusting only the state-action pairs given in the dataset for value estimation. Moreover, we extend ICQ to multi-agent tasks by decomposing the joint policy under the implicit constraint. Experimental results demonstrate that the extrapolation error is reduced to almost zero and is insensitive to the number of agents. We further show that ICQ achieves state-of-the-art performance on challenging multi-agent offline tasks (StarCraft II).
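To illustrate the core idea of trusting only in-dataset state-action pairs, the sketch below shows an ICQ-style Bellman target computed with softmax weights over Q-values of next state-action pairs drawn from the dataset batch. This is a simplified, hypothetical rendering of the mechanism described in the abstract (the function name, the batch-level softmax normalization, and the hyperparameter values are assumptions), not the authors' exact formulation:

```python
import numpy as np

def icq_style_targets(rewards, q_next, dones, alpha=0.1, gamma=0.99):
    """Sketch of an implicit-constraint Bellman target.

    q_next holds Q(s', a') evaluated ONLY at next state-action pairs
    that appear in the dataset batch, so no out-of-dataset action is
    ever queried and extrapolation error cannot enter the target.
    """
    # Softmax weights over in-batch next-state values; alpha is the
    # (assumed) temperature of the implicit-constraint policy.
    z = (q_next - q_next.max()) / alpha
    w = np.exp(z)
    w = w / w.sum()
    # Importance-weighted SARSA-like backup; scaling by the batch
    # size keeps the weighted estimate comparable to a plain mean.
    backup = len(q_next) * w * q_next
    return rewards + gamma * (1.0 - dones) * backup
```

With a very large temperature the weights become uniform and the target reduces to an ordinary SARSA backup over dataset transitions; with a small temperature the backup concentrates on the highest-valued in-dataset actions, which is the conservative behavior an implicit constraint is meant to provide.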

