A Privacy-preserving Distributed Training Framework for Cooperative Multi-agent Deep Reinforcement Learning

09/30/2021
by Yimin Shi, et al.

Deep Reinforcement Learning (DRL) sometimes needs a large amount of data to converge during training, and in some settings each action the agent takes may incur regret. This barrier naturally motivates owners of different data sets or environments to cooperate, sharing their knowledge to train their agents more efficiently. However, directly merging raw data from different owners raises privacy concerns. To solve this problem, we propose a new Deep Neural Network (DNN) architecture with both a global NN and a local NN, together with a distributed training framework. The global weights are updated by all collaborating agents, while the local weights are updated only by the agent they belong to. In this way, the global weights capture the common knowledge shared among collaborators, while the local NN retains specialized properties and keeps each agent compatible with its specific environment. Experiments show that the framework efficiently helps agents in the same or similar environments to collaborate during training, yielding a higher convergence rate and better performance.
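The paper's code is not shown here, but the core idea of the architecture can be sketched in a few lines. In this minimal toy (all names such as `Agent` and `sync_global`, and the linear/tanh layers, are hypothetical illustrations, not the authors' implementation), each agent holds a shared "global" layer and a private "local" head; synchronization averages only the global weights across collaborators, leaving each local head untouched:

```python
import numpy as np

rng = np.random.default_rng(0)

class Agent:
    """Toy agent: a 'global' layer whose weights are shared across
    collaborators, plus a 'local' output head kept private per agent."""
    def __init__(self, in_dim=4, hid=8, out_dim=2):
        self.global_w = rng.normal(size=(in_dim, hid))  # common knowledge
        self.local_w = rng.normal(size=(hid, out_dim))  # env-specific head

    def forward(self, x):
        # Shared feature extractor followed by the agent's private head.
        h = np.tanh(x @ self.global_w)
        return h @ self.local_w

def sync_global(agents):
    """Average only the global weights across all collaborators;
    local weights are never shared, preserving each agent's specialization."""
    avg = np.mean([a.global_w for a in agents], axis=0)
    for a in agents:
        a.global_w = avg.copy()

# Three collaborating agents: after a sync step their global layers
# agree, while their local heads still differ.
agents = [Agent() for _ in range(3)]
sync_global(agents)
```

In an actual DRL training loop, each agent would compute gradient updates locally and only the global portion of those updates would be aggregated, which is what keeps the raw trajectories private.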


