Transformer-based Value Function Decomposition for Cooperative Multi-agent Reinforcement Learning in StarCraft

08/15/2022
by Muhammad Junaid Khan, et al.

The StarCraft II Multi-Agent Challenge (SMAC) was created as a challenging benchmark for cooperative multi-agent reinforcement learning (MARL). SMAC focuses exclusively on StarCraft micromanagement and assumes that each unit is controlled by a learning agent that acts independently and possesses only local information; training is centralized while execution is decentralized (CTDE). To perform well in SMAC, MARL algorithms must handle the dual problems of multi-agent credit assignment and joint action evaluation. This paper introduces a new architecture, TransMix, a transformer-based joint action-value mixing network that we show to be efficient and scalable compared with other state-of-the-art cooperative MARL solutions. TransMix leverages the ability of transformers to learn a richer mixing function for combining the agents' individual value functions. It achieves performance comparable to previous work on easy SMAC scenarios and outperforms other techniques on hard scenarios, as well as on scenarios corrupted with Gaussian noise to simulate fog of war.
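The abstract describes TransMix only at a high level. As a rough, illustrative sketch (not the authors' implementation), the PyTorch snippet below shows one way a transformer-based mixing network could combine per-agent action-values and a global state into a joint value Q_tot under CTDE; all class and parameter names (TransformerMixer, n_agents, state_dim, embed_dim) are assumptions for illustration, and the monotonic QMIX-style weighting is one plausible design choice, not necessarily the one used in the paper.

# Minimal sketch (not the authors' TransMix code) of a transformer-based mixer:
# per-agent utilities Q_i and the global state are embedded as tokens, processed
# by self-attention, and combined into a joint action-value Q_tot.
import torch
import torch.nn as nn


class TransformerMixer(nn.Module):
    def __init__(self, n_agents: int, state_dim: int, embed_dim: int = 32,
                 n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.n_agents = n_agents
        # One token per agent: embeds the scalar Q_i together with the global
        # state, so the mixing of each agent is conditioned on centralized info.
        self.token_embed = nn.Linear(1 + state_dim, embed_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Per-token contribution to Q_tot; non-negative weights keep the mixing
        # monotonic in each agent's utility, as in QMIX-style decompositions.
        self.weight_head = nn.Linear(embed_dim, 1)
        self.bias_head = nn.Sequential(
            nn.Linear(state_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, 1))

    def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents) chosen action-values of the individual agents
        # state:    (batch, state_dim) global state, available only during training
        state_rep = state.unsqueeze(1).expand(-1, self.n_agents, -1)
        tokens = torch.cat([agent_qs.unsqueeze(-1), state_rep], dim=-1)
        h = self.encoder(self.token_embed(tokens))       # (batch, n_agents, embed_dim)
        w = torch.abs(self.weight_head(h)).squeeze(-1)   # (batch, n_agents), w >= 0
        q_tot = (w * agent_qs).sum(dim=1, keepdim=True) + self.bias_head(state)
        return q_tot                                     # (batch, 1)


if __name__ == "__main__":
    mixer = TransformerMixer(n_agents=5, state_dim=48)
    q_tot = mixer(torch.randn(8, 5), torch.randn(8, 48))
    print(q_tot.shape)  # torch.Size([8, 1])

In this sketch the transformer only produces state- and agent-dependent mixing weights, so the decentralized greedy actions of the agents remain consistent with the centralized Q_tot they were trained against.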
