Consensus Driven Learning

05/20/2020
by Kyle Crandall, et al.

As the complexity of neural network models grows, so do the data and computation requirements for successful training. One proposed solution is to train on a distributed network of computational devices, thus spreading the computational and data storage loads. This strategy has already seen adoption by Google and other companies. In this paper we propose a new method of distributed, decentralized learning that allows a network of computation nodes to coordinate their training using asynchronous updates over an unreliable network, with each node having access only to a local dataset. We achieve this by taking inspiration from Distributed Averaging Consensus algorithms to coordinate the nodes. Sharing the internal model instead of the training data allows the original raw data to remain with each computation node, and the asynchronous, decentralized nature of the coordination keeps communication requirements low. We demonstrate our method on the MNIST, Fashion MNIST, and CIFAR10 datasets, showing that our coordination method allows models to be learned on highly biased datasets and in the presence of intermittent communication failure.
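The coordination idea described above can be illustrated with a minimal sketch: each node takes gradient steps on its own biased local data, then nudges its parameters toward its neighbors' parameters in the spirit of distributed averaging consensus. The toy least-squares task, the specific update rule, and the step sizes below are illustrative assumptions for exposition, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_gradient_step(w, X, y, lr=0.1):
    """One gradient step of least-squares training on a node's local data."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def consensus_step(weights, neighbors, eps=0.2):
    """Distributed-averaging step: each node moves toward its neighbors'
    parameters. (In the asynchronous setting a node would use the last
    parameters it received, not a synchronized snapshot.)"""
    new = []
    for i, w in enumerate(weights):
        nudge = sum(weights[j] - w for j in neighbors[i])
        new.append(w + eps * nudge)
    return new

# Three fully connected nodes; each sees a biased slice of a shared
# linear problem (different input distributions, same underlying model).
true_w = np.array([2.0, -1.0])
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
weights = [rng.normal(size=2) for _ in range(3)]
data = []
for i in range(3):
    X = rng.normal(loc=i - 1, size=(50, 2))  # biased local inputs
    y = X @ true_w + 0.01 * rng.normal(size=50)
    data.append((X, y))

# Alternate local training with consensus coordination; raw data never
# leaves its node, only model parameters are shared.
for _ in range(200):
    weights = [local_gradient_step(w, *data[i]) for i, w in enumerate(weights)]
    weights = consensus_step(weights, neighbors)
```

After training, all three nodes hold nearly identical parameters close to the shared true model, despite each having trained only on its own biased local dataset; dropping consensus messages intermittently would slow but not break this averaging dynamic.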


