Cooperative and Competitive Biases for Multi-Agent Reinforcement Learning

01/18/2021
by Heechang Ryu, et al.

Training a multi-agent reinforcement learning (MARL) algorithm is more challenging than training a single-agent algorithm, because the outcome of a multi-agent task strongly depends on the complex interactions among agents and on their interactions with a stochastic, dynamic environment. We propose an algorithm that boosts MARL training using biased action information of other agents based on a friend-or-foe concept. In a mixed cooperative-competitive environment, the agents generally fall into two groups with respect to a given agent: cooperative agents and competitive agents. In the proposed algorithm, each agent updates its value function using its own action and the biased action information of the other agents in these two groups. The biased joint action of the cooperative agents is computed as the sum of their actual joint action and an imaginary cooperative joint action, obtained by assuming that all cooperative agents jointly maximize the target agent's value function; the biased joint action of the competitive agents is computed analogously. Each agent then updates its value function with this biased action information, yielding a biased value function and a corresponding biased policy. Consequently, the biased policy of each agent is steered toward actions that cooperate with friendly agents and compete against adversarial ones, which induces more active interactions among agents and accelerates MARL policy learning. We empirically demonstrate that our algorithm outperforms existing algorithms in various mixed cooperative-competitive environments. Furthermore, the introduced biases gradually decrease as training proceeds, and the correction based on the imaginary assumption vanishes.
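To make the biased joint action concrete, here is a minimal sketch in our own notation; the symbols below (Q_i, a^co, a^cp, and the arg max/arg min construction) are illustrative and are not taken from the paper itself:

\[
\tilde{a}^{\mathrm{co}}_{-i} = a^{\mathrm{co}}_{-i} + \hat{a}^{\mathrm{co}}_{-i},
\qquad
\hat{a}^{\mathrm{co}}_{-i} = \arg\max_{a'} \, Q_i\!\left(s,\, a_i,\, a',\, a^{\mathrm{cp}}_{-i}\right),
\]
\[
\tilde{a}^{\mathrm{cp}}_{-i} = a^{\mathrm{cp}}_{-i} + \hat{a}^{\mathrm{cp}}_{-i},
\qquad
\hat{a}^{\mathrm{cp}}_{-i} = \arg\min_{a''} \, Q_i\!\left(s,\, a_i,\, a^{\mathrm{co}}_{-i},\, a''\right),
\]

where \(s\) is the state, \(a_i\) is the target agent \(i\)'s own action, \(a^{\mathrm{co}}_{-i}\) and \(a^{\mathrm{cp}}_{-i}\) are the actual joint actions of the cooperative and competitive groups, and \(Q_i\) is agent \(i\)'s value function. The arg min for the competitive group is our reading of "computed similarly": under the friend-or-foe assumption, foes are imagined to jointly minimize the target agent's value. Agent \(i\) then evaluates its value-function update at the biased joint actions \((\tilde{a}^{\mathrm{co}}_{-i}, \tilde{a}^{\mathrm{cp}}_{-i})\) rather than the actual ones.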

