A Low Complexity Decentralized Neural Net with Centralized Equivalence using Layer-wise Learning

09/29/2020
by Xinyue Liang, et al.

We design a low-complexity decentralized learning algorithm to train a recently proposed large neural network over distributed processing nodes (workers). We assume that the communication network between the workers is synchronous and can be modeled by a doubly-stochastic mixing matrix, with no master node. In our setup, the training data is distributed among the workers but is not shared during training because of privacy and security concerns. Using the alternating direction method of multipliers (ADMM) together with a layer-wise convex optimization approach, we propose a decentralized learning algorithm with low computational complexity and low communication cost among the workers. We show that it can achieve learning performance equivalent to that of centralized training, as if all the data were available at a single node. Finally, we experimentally illustrate the time complexity and convergence behavior of the algorithm.
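The core idea, namely solving each layer's convex subproblem with ADMM while workers coordinate only through a doubly-stochastic mixing matrix, can be illustrated with a small sketch. The Python snippet below is not the authors' algorithm; it is a minimal, hypothetical example of consensus ADMM for a single layer's regularized least-squares weights, in which the global averaging needed by the consensus update is approximated with a few gossip rounds over a doubly-stochastic matrix. The ring topology, the synthetic data, and all variable names are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): consensus ADMM for one layer's
# regularized least-squares weights, with the averaging required by the
# consensus update approximated by gossip over a doubly-stochastic matrix.
# Problem: min_W sum_k ||X_k W - T_k||_F^2 + lam * ||W||_F^2, split over K workers.
import numpy as np

rng = np.random.default_rng(0)
K, n, d, m = 5, 40, 10, 3                 # workers, samples per worker, inputs, outputs
rho, lam = 1.0, 0.1                       # ADMM penalty and ridge regularization
admm_iters, gossip_rounds = 50, 10

# Synthetic private data (X_k, T_k) held by each worker (illustrative only).
W_true = rng.normal(size=(d, m))
X = [rng.normal(size=(n, d)) for _ in range(K)]
T = [Xk @ W_true + 0.01 * rng.normal(size=(n, m)) for Xk in X]

# Doubly-stochastic mixing matrix for a ring of K workers (no master node).
A = np.zeros((K, K))
for k in range(K):
    A[k, k] = 0.5
    A[k, (k - 1) % K] = 0.25
    A[k, (k + 1) % K] = 0.25

W = [np.zeros((d, m)) for _ in range(K)]  # local primal variables
Z = [np.zeros((d, m)) for _ in range(K)]  # local copies of the consensus variable
U = [np.zeros((d, m)) for _ in range(K)]  # scaled dual variables

for _ in range(admm_iters):
    # Local W-update: closed form for ||X_k W - T_k||^2 + (rho/2)||W - Z_k + U_k||^2.
    for k in range(K):
        lhs = 2 * X[k].T @ X[k] + rho * np.eye(d)
        rhs = 2 * X[k].T @ T[k] + rho * (Z[k] - U[k])
        W[k] = np.linalg.solve(lhs, rhs)

    # Consensus (Z) update needs the network-wide average of W_k + U_k;
    # approximate it with a few gossip rounds using the mixing matrix A.
    S = [W[k] + U[k] for k in range(K)]
    for _ in range(gossip_rounds):
        S = [sum(A[k, j] * S[j] for j in range(K)) for k in range(K)]
    Z = [rho * K * Sk / (2 * lam + K * rho) for Sk in S]

    # Dual update.
    for k in range(K):
        U[k] = U[k] + W[k] - Z[k]

# Centralized reference: ridge regression on all the pooled data.
X_all, T_all = np.vstack(X), np.vstack(T)
W_central = np.linalg.solve(X_all.T @ X_all + lam * np.eye(d), X_all.T @ T_all)
print("max gap to centralized solution:",
      max(np.linalg.norm(Z[k] - W_central) for k in range(K)))
```

With enough gossip rounds per ADMM iteration, the local solutions closely match the pooled (centralized) ridge solution, which is the sense in which decentralized training can reach centralized-equivalent performance.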

Related Research

08/30/2019 · GADMM: Fast and Communication Efficient Framework for Distributed Machine Learning
When the data is distributed across multiple servers, efficient data exc...

02/22/2020 · Communication-Efficient Decentralized Learning with Sparsification and Adaptive Peer Selection
Distributed learning techniques such as federated learning have enabled ...

08/19/2020 · Restructuring, Pruning, and Adjustment of Deep Models for Parallel Distributed Inference
Using multiple nodes and parallel computing algorithms has become a prin...

10/08/2021 · RelaySum for Decentralized Deep Learning on Heterogeneous Data
In decentralized machine learning, workers compute model updates on thei...

06/29/2018 · Fundamental Limits of Distributed Data Shuffling
Data shuffling of training data among different computing nodes (workers...

11/04/2021 · Finite-Time Consensus Learning for Decentralized Optimization with Nonlinear Gossiping
Distributed learning has become an integral tool for scaling up machine ...

03/30/2015 · Decentralized learning for wireless communications and networking
This chapter deals with decentralized learning algorithms for in-network...
