Learning Nearly Decomposable Value Functions Via Communication Minimization

10/11/2019
by   Tonghan Wang, et al.

Reinforcement learning encounters major challenges in multi-agent settings, such as scalability and non-stationarity. Recently, value function factorization learning has emerged as a promising way to address these challenges in collaborative multi-agent systems. However, existing methods have focused on learning fully decentralized value functions, which are inefficient for tasks requiring communication. To address this limitation, this paper presents a novel framework for learning nearly decomposable value functions with communication, with which agents act on their own most of the time but occasionally send messages to other agents for effective coordination. This framework hybridizes value function factorization learning and communication learning by introducing two information-theoretic regularizers. These regularizers maximize the mutual information between decentralized Q functions and communication messages while minimizing the entropy of messages between agents. We show how to optimize these regularizers in a way that integrates easily with existing value function factorization methods such as QMIX. Finally, we demonstrate that, on the StarCraft unit micromanagement benchmark, our framework significantly outperforms baseline methods and allows more than 80% of the communication to be cut off without sacrificing performance. The video of our experiments is available at https://sites.google.com/view/ndvf.
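The two regularizers can be made concrete with a short sketch. Below is a minimal, hypothetical PyTorch-style example, not the authors' released code: each agent encodes its local hidden state into a diagonal-Gaussian message, the mutual-information term is approximated with a standard variational lower bound in which a decoder predicts the receiving agent's action from the message, and the message entropy is computed in closed form and penalized. Names such as MessageEncoder, beta_mi, and beta_h are illustrative assumptions.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class MessageEncoder(nn.Module):
    # Hypothetical module: maps an agent's local hidden state to a
    # diagonal-Gaussian message distribution.
    def __init__(self, hidden_dim, msg_dim):
        super().__init__()
        self.mu = nn.Linear(hidden_dim, msg_dim)
        self.logvar = nn.Linear(hidden_dim, msg_dim)

    def forward(self, h):
        mu, logvar = self.mu(h), self.logvar(h)
        std = torch.exp(0.5 * logvar)
        msg = mu + std * torch.randn_like(std)  # reparameterization trick
        return msg, mu, logvar

def message_entropy(logvar):
    # Closed-form entropy of a diagonal Gaussian; minimizing it drives
    # messages toward carrying as little information as possible.
    return 0.5 * (1.0 + math.log(2.0 * math.pi) + logvar).sum(dim=-1)

def mi_lower_bound(decoder, msg, receiver_action):
    # Variational lower bound on I(message; receiver's action): up to a
    # constant, maximizing E[log q(a_j | m_i)] maximizes the bound.
    logits = decoder(msg)
    return -F.cross_entropy(logits, receiver_action, reduction="none")

def regularizer_loss(decoder, msg, logvar, receiver_action,
                     beta_mi=1.0, beta_h=0.1):
    # Auxiliary loss added to the TD loss of a factorized learner such
    # as QMIX; beta_mi and beta_h are assumed trade-off weights.
    return (-beta_mi * mi_lower_bound(decoder, msg, receiver_action)
            + beta_h * message_entropy(logvar)).mean()

Under this view, a message that carries little information about the receiver's decision can be dropped at execution time, which is consistent with the reported ability to cut more than 80% of the communication without hurting performance.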


Related research

01/04/2022
Value Functions Factorization with Latent State Information Sharing in Decentralized Multi-Agent Policy Gradients
Value function factorization via centralized training and decentralized ...

02/16/2021
DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning
In fully cooperative multi-agent reinforcement learning (MARL) settings,...

09/27/2019
Deep Coordination Graphs
This paper introduces the deep coordination graph (DCG) for collaborativ...

06/04/2023
A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning
In fully cooperative multi-agent reinforcement learning (MARL) settings,...

09/16/2020
Energy-based Surprise Minimization for Multi-Agent Value Factorization
Multi-Agent Reinforcement Learning (MARL) has demonstrated significant s...

05/31/2019
Information Minimization In Emergent Languages
There is a growing interest in studying the languages emerging when neur...

03/07/2022
Learning to Ground Decentralized Multi-Agent Communication with Contrastive Learning
For communication to happen successfully, a common language is required ...
