Learning to Communicate in a Noisy Environment

10/21/2019
by Anant Sahai, et al.

In this work we examine the problem of learning to cooperate in the context of wireless communication. We consider a two-agent setting in which the agents must learn modulation and demodulation schemes that let them communicate with each other over a power-constrained additive white Gaussian noise (AWGN) channel. We investigate whether learning is possible under different levels of information sharing between distributed agents that are not necessarily co-designed. We make use of the "Echo" protocol, a learning protocol in which an agent hears, understands, and repeats (echoes) back the message received from another agent, simultaneously training itself to communicate. To capture the idea of cooperation between agents that are "not necessarily co-designed," we use two different populations of function approximators: neural networks and polynomials. In addition to diverse learning agents, we include non-learning agents that use fixed modulation protocols such as QPSK and 16-QAM. We verify that the Echo learning approach succeeds independent of the inner workings of the agents, and that learning agents can not only learn to match the communication expectations of others but also collaboratively invent a successful communication approach from independent random initializations. We complement our simulations with an implementation of the Echo protocol on software-defined radios. To explore the continuum between tightly co-designed learning agents and independently designed agents, we study how learning is affected by different levels of information sharing: sharing training symbols, sharing intermediate loss information, and sharing full gradient information. We find that, in general, co-design (increased information sharing) accelerates learning, and that this effect becomes more pronounced as the communication task becomes harder.
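To make the round-trip structure of the Echo protocol concrete, the sketch below (our illustration, not the authors' implementation) simulates one echo round between two fixed QPSK agents over a power-constrained AWGN channel. In the learning setting described above, the modulator and demodulator on one side would be replaced by a trainable function approximator, and the round-trip bit error would serve as its training signal; the SNR value, preamble length, and function names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gray-mapped, unit-power QPSK constellation (a fixed, non-learning agent).
QPSK = np.array([1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j]) / np.sqrt(2)

def qpsk_mod(bits):
    """Map pairs of bits to QPSK constellation points."""
    idx = 2 * bits[0::2] + bits[1::2]
    return QPSK[idx]

def qpsk_demod(symbols):
    """Nearest-neighbour demodulation back to bits."""
    idx = np.argmin(np.abs(symbols[:, None] - QPSK[None, :]), axis=1)
    bits = np.empty(2 * len(idx), dtype=int)
    bits[0::2] = idx // 2
    bits[1::2] = idx % 2
    return bits

def awgn(symbols, snr_db):
    """Power-constrained AWGN channel (unit average signal power assumed)."""
    noise_var = 10.0 ** (-snr_db / 10.0)
    noise = np.sqrt(noise_var / 2) * (
        rng.standard_normal(len(symbols)) + 1j * rng.standard_normal(len(symbols))
    )
    return symbols + noise

# One Echo round: agent A transmits a known preamble, agent B demodulates it
# and echoes its estimate back, and A measures the round-trip error.
preamble = rng.integers(0, 2, size=200)         # bits known to agent A
rx_b = awgn(qpsk_mod(preamble), snr_db=8.0)     # forward pass through the channel
echoed = qpsk_mod(qpsk_demod(rx_b))             # B repeats back what it heard
rx_a = awgn(echoed, snr_db=8.0)                 # return pass through the channel
round_trip_ber = np.mean(preamble != qpsk_demod(rx_a))
print(f"round-trip bit error rate: {round_trip_ber:.3f}")
```

Because agent A only compares its own preamble to the echo it receives, this feedback loop closes without B having to expose its internal workings, which is the property the paper exploits when studying agents that are not co-designed.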

