Decentralized Deep Learning using Momentum-Accelerated Consensus

10/21/2020
by Aditya Balu et al.

We consider the problem of decentralized deep learning, in which multiple agents collaborate to learn from a distributed dataset. While several decentralized deep learning approaches exist, most rely on a central parameter-server topology to aggregate the model parameters from the agents. Such a topology may be inapplicable in networked systems such as ad-hoc mobile networks, field robotics, and power network systems, where direct communication with a central parameter server can be inefficient. In this context, we propose and analyze a novel decentralized deep learning algorithm in which the agents interact over a fixed communication topology without a central server. Our algorithm is based on the heavy-ball acceleration method used in gradient-based optimization. We propose a novel consensus protocol in which each agent shares both its model parameters and its gradient-momentum values with its neighbors during the optimization process. We theoretically analyze our algorithm's performance for both strongly convex and non-convex objective functions, and we present several empirical comparisons with competing decentralized learning methods that demonstrate the efficacy of our approach under different communication topologies.
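
To make the update concrete, below is a minimal NumPy sketch of one synchronous round of such a scheme. The function name, the hyperparameter values, and the exact placement of the consensus averaging relative to the heavy-ball momentum term are illustrative assumptions rather than the paper's verbatim update rule; the idea it illustrates is that each agent mixes both its parameters and its momentum buffer with its neighbors, via a doubly-stochastic matrix W encoding the fixed topology, before taking a gradient step.

```python
import numpy as np

def decentralized_heavy_ball_step(x, v, grads, W, lr=0.01, beta=0.9):
    """One synchronous round of momentum-accelerated consensus (a sketch).

    x     : (n_agents, dim) array of per-agent model parameters
    v     : (n_agents, dim) array of per-agent momentum buffers
    grads : (n_agents, dim) array of local stochastic gradients
    W     : (n_agents, n_agents) doubly-stochastic mixing matrix for the
            fixed communication topology (W[i, j] > 0 only if agents
            i and j are neighbors, or i == j)
    """
    # Each agent averages its neighbors' momentum buffers, then adds its
    # own local gradient (heavy-ball-style momentum update).
    v_next = beta * (W @ v) + grads
    # Each agent averages its neighbors' parameters, then takes a descent
    # step along its updated momentum direction.
    x_next = W @ x - lr * v_next
    return x_next, v_next

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 5, 3
    # Doubly-stochastic mixing matrix for a ring of 5 agents: each agent
    # averages itself and its two ring neighbors with equal weight.
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    x = rng.standard_normal((n, d))
    v = np.zeros((n, d))
    grads = rng.standard_normal((n, d))  # stand-in for local gradients
    x, v = decentralized_heavy_ball_step(x, v, grads, W)
```

The ring-topology W above is just one example of a fixed communication graph; swapping in a different doubly-stochastic mixing matrix changes the topology without changing the agent update.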

Related research

06/23/2017 · Collaborative Deep Learning in Fixed Topology Networks
There is significant recent interest to parallelize deep learning algori...

05/30/2018 · On Consensus-Optimality Trade-offs in Collaborative Deep Learning
In distributed machine learning, where agents collaboratively learn from...

06/01/2023 · DSGD-CECA: Decentralized SGD with Communication-Optimal Exact Consensus Algorithm
Decentralized Stochastic Gradient Descent (SGD) is an emerging neural ne...

03/02/2021 · Cross-Gradient Aggregation for Decentralized Learning from Non-IID data
Decentralized learning enables a group of collaborative agents to learn ...

09/08/2019 · Distributed Deep Learning with Event-Triggered Communication
We develop a Distributed Event-Triggered Stochastic GRAdient Descent (DE...

02/28/2020 · Decentralized gradient methods: does topology matter?
Consensus-based distributed optimization methods have recently been advo...

09/28/2022 · Neighborhood Gradient Clustering: An Efficient Decentralized Learning Method for Non-IID Data Distributions
Decentralized learning over distributed datasets can have significantly ...
