Partially Observable Mean Field Multi-Agent Reinforcement Learning Based on Graph-Attention

04/25/2023
by Min Yang, et al.

Traditional multi-agent reinforcement learning algorithms are difficult to apply in large-scale multi-agent environments. In recent years, the introduction of mean field theory has improved the scalability of multi-agent reinforcement learning. This paper considers partially observable multi-agent reinforcement learning (MARL), where each agent can only observe other agents within a fixed range. This partial observability affects an agent's ability to assess the quality of the actions of surrounding agents. The paper focuses on developing a method that captures more effective information from local observations in order to select better actions. Previous work in this field employs probability distributions or a weighted mean field to update the average actions of neighborhood agents, but it does not fully exploit the feature information of surrounding neighbors and can lead to a local optimum. In this paper, we propose a novel multi-agent reinforcement learning algorithm, Partially Observable Mean Field Multi-Agent Reinforcement Learning based on Graph-Attention (GAMFQ), to remedy this flaw. GAMFQ uses a graph attention module and a mean field module to describe how an agent is influenced by the actions of other agents at each time step. The graph attention module consists of a graph attention encoder and a differentiable attention mechanism that outputs a dynamic graph representing the effectiveness of neighborhood agents with respect to the central agent. The mean field module approximates the effect of the neighborhood on a central agent as the average effect of the effective neighborhood agents. We evaluate GAMFQ on three challenging tasks in the MAgent framework. Experiments show that GAMFQ outperforms baselines, including state-of-the-art partially observable mean-field reinforcement learning algorithms.
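The abstract describes the two modules only at a high level. Below is a minimal, hypothetical PyTorch-style sketch of how a graph attention encoder with a differentiable attention score could select "effective" observable neighbours (the dynamic graph) and average their actions into a mean-field input for the Q-network. All module names, tensor shapes, and the top-k selection rule are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): score observable neighbours of a
# central agent with graph attention, keep the most relevant ones via a dynamic
# graph, and average their actions as the mean-field quantity.

import torch
import torch.nn as nn


class GraphAttentionMeanField(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, embed_dim: int = 64, keep_k: int = 4):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, embed_dim)     # graph attention encoder (assumed form)
        self.attn_score = nn.Linear(2 * embed_dim, 1)    # differentiable attention mechanism
        self.keep_k = keep_k                             # neighbours kept in the dynamic graph
        self.act_dim = act_dim

    def forward(self, central_obs, neighbor_obs, neighbor_actions, neighbor_mask):
        """
        central_obs:      (B, obs_dim)     observation of the central agent
        neighbor_obs:     (B, N, obs_dim)  observations of agents within the fixed range
        neighbor_actions: (B, N, act_dim)  one-hot actions of those neighbours
        neighbor_mask:    (B, N)           1 where the slot holds a real observable neighbour
        returns:          (B, act_dim)     approximate mean action of effective neighbours
        """
        h_c = self.encoder(central_obs)                  # (B, E)
        h_n = self.encoder(neighbor_obs)                 # (B, N, E)

        # Attention logits between the central agent and each observable neighbour.
        h_c_exp = h_c.unsqueeze(1).expand_as(h_n)        # (B, N, E)
        logits = self.attn_score(torch.cat([h_c_exp, h_n], dim=-1)).squeeze(-1)  # (B, N)
        logits = logits.masked_fill(neighbor_mask == 0, float("-inf"))

        # Dynamic graph: keep only the top-k most relevant observable neighbours.
        k = min(self.keep_k, logits.size(1))
        topk_idx = logits.topk(k, dim=-1).indices        # (B, k)
        keep = torch.zeros_like(neighbor_mask).scatter(1, topk_idx, 1) * neighbor_mask

        # Mean-field step: average the actions of the selected neighbours.
        weights = keep.unsqueeze(-1).float()             # (B, N, 1)
        mean_action = (neighbor_actions * weights).sum(1) / weights.sum(1).clamp(min=1.0)
        return mean_action                               # combined with central_obs as Q-network input
```

In a mean-field Q-learning loop, the returned mean action would replace the plain neighbourhood average used by standard mean-field MARL, so only neighbours deemed effective by the attention scores contribute to each agent's value estimate.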


