Finite-Time Consensus Learning for Decentralized Optimization with Nonlinear Gossiping

11/04/2021
by   Junya Chen, et al.

Distributed learning has become an integral tool for scaling up machine learning and addressing the growing need for data privacy. Although more robust to network topology changes, decentralized learning schemes have not gained the same level of popularity as their centralized counterparts because they are less competitive performance-wise. In this work, we attribute this gap to the lack of synchronization among decentralized learning workers, showing both empirically and theoretically that the convergence rate is tied to the level of synchronization among the workers. Thus motivated, we present a novel decentralized learning framework based on nonlinear gossiping (NGO), which enjoys an appealing finite-time consensus property that yields better synchronization. We provide a careful analysis of its convergence and discuss its merits for modern distributed optimization applications, such as training deep neural networks. Our analysis of how communication delay and randomized chats affect learning further enables the derivation of practical variants that accommodate asynchronous and randomized communications. To validate the effectiveness of our proposal, we benchmark NGO against competing solutions through an extensive set of tests, with encouraging results reported.
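The paper's nonlinear gossip update is not reproduced here, but the finite-time consensus property it targets can be illustrated in the simpler linear setting. The sketch below is a minimal illustration, not the authors' NGO scheme; the function name and the hypercube topology are assumptions made for the example. It shows that with n = 2^d workers arranged as a hypercube, one round of pairwise averaging along each of the d dimensions drives every worker to the exact global average in d steps, rather than only asymptotically as with a generic gossip matrix.

```python
import numpy as np

def hypercube_gossip_average(x):
    """Exact average consensus on a hypercube of n = 2**d workers.

    x: array of shape (n, ...) holding each worker's local parameters.
    In round t, worker i averages with partner i XOR 2**t; after
    d = log2(n) rounds every worker holds the global mean exactly.
    """
    n = x.shape[0]
    d = int(np.log2(n))
    assert 2 ** d == n, "worker count must be a power of two"
    x = x.copy()
    for t in range(d):
        partner = np.arange(n) ^ (1 << t)   # pair workers along dimension t
        x = 0.5 * (x + x[partner])          # pairwise (linear) gossip step
    return x

# Usage: 8 workers, each starting from a different scalar parameter.
rng = np.random.default_rng(0)
x0 = rng.normal(size=8)
xT = hypercube_gossip_average(x0)
print(np.allclose(xT, x0.mean()))  # True: exact consensus after 3 rounds
```

In a full decentralized optimization loop, such consensus rounds would be interleaved with local gradient updates; per the abstract above, NGO replaces the linear averaging step with a nonlinear gossip rule while retaining a finite-time consensus guarantee.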



Related research

- Fast Decentralized Optimization over Networks (04/06/2018): The present work introduces the hybrid consensus alternating direction m...
- DADAM: A Consensus-based Distributed Adaptive Gradient Method for Online Optimization (01/25/2019): Adaptive gradient-based optimization methods such as ADAGRAD, RMSPROP, a...
- CrossoverScheduler: Overlapping Multiple Distributed Training Applications in a Crossover Manner (03/14/2021): Distributed deep learning workloads include throughput-intensive trainin...
- Beyond Exponential Graph: Communication-Efficient Topologies for Decentralized Learning via Finite-time Convergence (05/19/2023): Decentralized learning has recently been attracting increasing attention...
- A Low Complexity Decentralized Neural Net with Centralized Equivalence using Layer-wise Learning (09/29/2020): We design a low complexity decentralized learning algorithm to train a r...
- Straggler-Resilient Distributed Machine Learning with Dynamic Backup Workers (02/11/2021): With the increasing demand for large-scale training of machine learning ...
- Hop: Heterogeneity-Aware Decentralized Training (02/04/2019): Recent work has shown that decentralized algorithms can deliver superior...
