The Gradient Convergence Bound of Federated Multi-Agent Reinforcement Learning with Efficient Communication

03/24/2021
by Xing Xu, et al.

The paper considers a distributed version of deep reinforcement learning (DRL) for multi-agent decision-making in the federated learning paradigm. Because the deep neural network models in federated learning are trained locally and aggregated iteratively through a central server, frequent information exchange incurs substantial communication overhead. Moreover, due to the heterogeneity of agents, Markov state-transition trajectories from different agents are usually unsynchronized within the same time interval, which further affects the convergence bound of the aggregated models. It is therefore important to evaluate the effectiveness of different optimization methods on a common footing. Accordingly, this paper proposes a utility function that balances reducing communication overhead against improving convergence performance. It also develops two new optimization methods on top of variation-aware periodic averaging: 1) a decay-based method that gradually decreases the weight of the model's local gradients as local updating proceeds, and 2) a consensus-based method that introduces a consensus algorithm into federated learning for exchanging the model's local gradients. The paper provides novel convergence guarantees for both methods and demonstrates their effectiveness and efficiency through theoretical analysis and numerical simulation.
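The two optimization strategies described above can be illustrated with a minimal sketch. The function names, the geometric decay schedule, and all hyperparameters below are hypothetical illustrations, not the paper's actual formulation; the paper's decay weighting and consensus protocol may differ in detail.

```python
import numpy as np

def decay_based_local_update(model, grads, tau, gamma=0.9, lr=0.1):
    """Decay-based method (sketch): the weight applied to the local
    gradient shrinks geometrically with the local step index tau, so
    later local steps perturb the model less before the next server
    aggregation. gamma and lr are assumed hyperparameters."""
    return model - lr * (gamma ** tau) * grads

def consensus_step(local_models, W):
    """Consensus-based method (sketch): agents mix their local models
    (or gradients) through a doubly stochastic mixing matrix W,
    exchanging information with neighbours rather than relying solely
    on the central server."""
    return W @ local_models

# Toy usage: three agents holding scalar models, fully connected graph.
models = np.array([1.0, 2.0, 3.0])
W = np.full((3, 3), 1.0 / 3.0)   # uniform, doubly stochastic
mixed = consensus_step(models, W)  # every agent moves to the average
```

With a uniform mixing matrix, a single consensus step drives all agents to the common average; on sparser communication graphs, repeated steps converge to that average at a rate governed by the spectral gap of W.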


