Adaptive Value Decomposition with Greedy Marginal Contribution Computation for Cooperative Multi-Agent Reinforcement Learning

02/14/2023
by Shanqi Liu, et al.

Real-world cooperation often requires intensive, simultaneous coordination among agents. This task has been extensively studied within the framework of cooperative multi-agent reinforcement learning (MARL), and value decomposition methods are among the cutting-edge solutions. However, traditional methods that learn the value function as a monotonic mixing of per-agent utilities cannot solve tasks with non-monotonic returns, which hinders their application in generic scenarios. Recent methods tackle this problem from the perspective of implicit credit assignment, either by learning value functions with complete expressiveness or by using additional structures to improve cooperation. However, they are either difficult to learn due to large joint action spaces or insufficient to capture the complicated interactions among agents that are essential to solving tasks with non-monotonic returns. To address these problems, we propose a novel explicit credit assignment method for the non-monotonic problem. Our method, Adaptive Value decomposition with Greedy Marginal contribution (AVGM), is based on an adaptive value decomposition that learns the cooperative value of a group of dynamically changing agents. We first show that the proposed value decomposition can capture the complicated interactions among agents and is feasible to learn in large-scale scenarios. Our method then uses a greedy marginal contribution computed from the value decomposition as an individual credit to incentivize agents to learn the optimal cooperative policy. We further extend the module with an action encoder to guarantee linear time complexity for computing the greedy marginal contribution. Experimental results demonstrate that our method achieves significant performance improvements in several non-monotonic domains.
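To make the credit-assignment idea concrete, the sketch below shows one plausible reading of a greedy marginal contribution as a per-agent credit. It is not the authors' implementation: `q_group` is a hypothetical placeholder for the learned adaptive value decomposition over a dynamically changing group, and the naive greedy loop here costs a quadratic number of group evaluations, whereas the abstract's action-encoder extension is said to bring the computation down to linear time.

```python
# Minimal sketch (not the AVGM code) of greedy marginal-contribution credits.
import numpy as np


def q_group(obs, acts):
    # Hypothetical stand-in for the learned adaptive value decomposition:
    # scores the cooperative value of an arbitrary subset of agents.
    if len(acts) == 0:
        return 0.0
    return float(np.sum(obs.sum(axis=1) * acts))


def greedy_marginal_credits(obs, acts):
    """Assign each agent the value gain from greedily adding it to the group.

    obs:  (n_agents, obs_dim) array of per-agent observations
    acts: (n_agents,) array of per-agent actions (scalar-encoded here)
    Returns an (n_agents,) array of individual credits.
    """
    n = len(acts)
    remaining = set(range(n))
    group = []                 # agent indices already in the coalition
    credits = np.zeros(n)
    group_value = 0.0          # value of the empty group
    while remaining:
        # Greedy step: add the agent whose inclusion raises the group value most.
        gains = {i: q_group(obs[group + [i]], acts[group + [i]]) - group_value
                 for i in remaining}
        best = max(gains, key=gains.get)
        credits[best] = gains[best]   # its marginal contribution becomes its credit
        group.append(best)
        group_value += gains[best]
        remaining.remove(best)
    return credits


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    obs = rng.normal(size=(4, 8))       # 4 agents, 8-dim observations
    acts = rng.integers(0, 3, size=4)   # discrete actions, scalar-encoded
    print(greedy_marginal_credits(obs, acts.astype(float)))
```

A faithful implementation would replace `q_group` with the paper's learned group-value network and use the action encoder described in the abstract to avoid re-evaluating the group value for every candidate agent at every greedy step.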
