Learning to Improve Representations by Communicating About Perspectives

09/20/2021
by Julius Taylor, et al.

Effective latent representations need to capture abstract features of the external world. We hypothesise that the necessity for a group of agents to reconcile their subjective interpretations of a shared environment state is an essential factor influencing this property. To test this hypothesis, we propose an architecture where individual agents in a population receive different observations of the same underlying state and learn latent representations that they communicate to each other. We highlight a fundamental link between emergent communication and representation learning: the role of language as a cognitive tool and the opportunities conferred by subjectivity, an inherent property of most multi-agent systems. We present a minimal architecture comprised of a population of autoencoders, where we define loss functions capturing different aspects of effective communication and examine their effect on the learned representations. We show that our proposed architecture allows the emergence of aligned representations. The subjectivity introduced by presenting agents with distinct perspectives of the environment state contributes to learning abstract representations that outperform those learned by both a single autoencoder and a population of autoencoders presented with identical perspectives. Altogether, our results demonstrate how communication from subjective perspectives can lead to the acquisition of more abstract representations in multi-agent systems, opening promising perspectives for future research at the intersection of representation learning and emergent communication.
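To make the setup concrete, below is a minimal sketch of one way such a population of communicating autoencoders could be trained. It is not the authors' exact implementation: the view-generation scheme (additive Gaussian noise per agent), the pairwise MSE alignment term standing in for the communication losses, and the network sizes are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above): a population of autoencoders,
# each observing a different view of the same underlying state, trained with
# a per-agent reconstruction loss plus a cross-agent alignment loss on the
# communicated latents.
import torch
import torch.nn as nn

class Agent(nn.Module):
    def __init__(self, obs_dim: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, obs_dim))

    def forward(self, obs: torch.Tensor):
        z = self.encoder(obs)      # latent "message" the agent communicates
        recon = self.decoder(z)    # reconstruction of the agent's own view
        return z, recon

def population_loss(agents, state, view_noise=0.1, align_weight=1.0):
    """Each agent sees a distinct noisy view of the same state and must
    (i) reconstruct its own view and (ii) align its latent with its peers'."""
    mse = nn.MSELoss()
    latents, recon_loss = [], 0.0
    for agent in agents:
        view = state + view_noise * torch.randn_like(state)  # subjective perspective
        z, recon = agent(view)
        latents.append(z)
        recon_loss = recon_loss + mse(recon, view)
    # Alignment term: pull together every pair of latents describing the same state.
    align_loss = 0.0
    for i in range(len(latents)):
        for j in range(i + 1, len(latents)):
            align_loss = align_loss + mse(latents[i], latents[j])
    return recon_loss + align_weight * align_loss

# Usage: a population of 3 agents observing a 16-dimensional state.
agents = [Agent(obs_dim=16, latent_dim=4) for _ in range(3)]
params = [p for a in agents for p in a.parameters()]
optimizer = torch.optim.Adam(params, lr=1e-3)
state = torch.randn(32, 16)        # batch of underlying states
loss = population_loss(agents, state)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```

Setting `view_noise=0` recovers the identical-perspectives baseline the abstract compares against; a single entry in `agents` recovers the lone-autoencoder baseline.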

