Distributed Learning over Unreliable Networks

10/17/2018
by Hanlin Tang, et al.

Most of today's distributed machine learning systems assume reliable networks: whenever two machines exchange information (e.g., gradients or models), the network should guarantee the delivery of the message. At the same time, recent work has shown the impressive tolerance of machine learning algorithms to errors or noise arising from relaxed communication or synchronization. In this paper, we connect these two trends and consider the following question: can we design machine learning systems that are tolerant to network unreliability during training? With this motivation, we focus on a theoretical problem of independent interest: given a standard distributed parameter-server architecture, if every communication between a worker and a server has a non-zero probability p of being dropped, does there exist an algorithm that still converges, and at what speed? In the context of prior art, this problem can be phrased as distributed learning over random topologies. The technical contribution of this paper is a novel theoretical analysis proving that distributed learning over random topologies can achieve a convergence rate comparable to that of centralized or distributed learning over reliable networks. Further, we prove that the influence of the packet drop rate diminishes as the number of parameter servers grows. We map this theoretical result onto a real-world scenario, training deep neural networks over an unreliable network layer, and conduct network simulations to validate the system-level improvement obtained by allowing the network to be unreliable.
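To make the setting concrete, the following is a minimal, illustrative simulation (not the algorithm analyzed in the paper) of data-parallel SGD with several parameter servers, where each worker-to-server gradient message is independently dropped with probability p. The function name `simulate_unreliable_psgd`, the synthetic least-squares objective, and the drop-handling rule (a server averages whichever gradient shards actually arrive, and leaves its shard unchanged if none do) are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_unreliable_psgd(p_drop=0.1, n_workers=8, n_servers=4, dim=32,
                             steps=200, lr=0.1):
    """Toy data-parallel SGD on a least-squares loss where every
    worker->server gradient message is dropped with probability p_drop."""
    # The model is sharded across the parameter servers.
    shards = np.array_split(np.zeros(dim), n_servers)
    # Synthetic problem: minimize ||A x - b||^2, data split across workers.
    A = rng.normal(size=(n_workers, 64, dim))
    x_true = rng.normal(size=dim)
    b = A @ x_true

    for _ in range(steps):
        x = np.concatenate(shards)
        # Each worker computes a local gradient on its own data partition.
        grads = [2 * A[w].T @ (A[w] @ x - b[w]) / A[w].shape[0]
                 for w in range(n_workers)]
        # Each server averages only the gradient shards that arrive.
        offsets = np.cumsum([0] + [s.size for s in shards])
        for s in range(n_servers):
            lo, hi = offsets[s], offsets[s + 1]
            arrived = [g[lo:hi] for g in grads if rng.random() > p_drop]
            if arrived:  # if every message was dropped, the shard is unchanged
                shards[s] = shards[s] - lr * np.mean(arrived, axis=0)

    return np.linalg.norm(np.concatenate(shards) - x_true)

print(simulate_unreliable_psgd(p_drop=0.0))   # reliable network baseline
print(simulate_unreliable_psgd(p_drop=0.3))   # 30% of messages dropped
```

In this sketch, a dropped message only perturbs the shard held by one server in one step, which gives some intuition for the paper's result that the influence of the packet drop rate diminishes as the number of parameter servers grows.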


