Asymptotic Network Independence in Distributed Optimization for Machine Learning

06/28/2019
by Alex Olshevsky, et al.

We provide a discussion of several recent results which have overcome a key barrier in distributed optimization for machine learning. Our focus is the so-called network independence property, which holds whenever a distributed method executed over a network of n nodes achieves performance comparable to that of a centralized method with the same computational power as the entire network. We explain this property through an example involving the training of ML models and sketch a short mathematical analysis.
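To make the property concrete, the following is a minimal simulation sketch: it compares decentralized stochastic gradient descent (each node takes a local gradient step and gossip-averages with its neighbors through a doubly stochastic mixing matrix) against a centralized method that pools all n stochastic gradients at every step. The least-squares objective, ring topology, and diminishing step size below are illustrative assumptions, not details taken from the paper.

import numpy as np

# Sketch of the network-independence phenomenon, assuming decentralized
# SGD (local gradient step + gossip averaging with a doubly stochastic
# mixing matrix). Objective, topology, and step sizes are illustrative.

rng = np.random.default_rng(0)
n, d, T = 10, 5, 2000          # nodes, dimension, iterations

# Node i holds a local least-squares problem: f_i(x) = ||A_i x - b_i||^2 / 2
A = rng.normal(size=(n, 20, d))
x_star = rng.normal(size=d)
b = A @ x_star + 0.1 * rng.normal(size=(n, 20))

def grad(i, x):
    """Stochastic gradient of f_i at x (one random sample per call)."""
    j = rng.integers(20)
    return A[i, j] * (A[i, j] @ x - b[i, j])

# Doubly stochastic mixing matrix for a ring: average with two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0

X = np.zeros((n, d))           # decentralized iterates, one row per node
y = np.zeros(d)                # centralized iterate (uses all n gradients)

for t in range(1, T + 1):
    step = 1.0 / t             # diminishing step size
    G = np.stack([grad(i, X[i]) for i in range(n)])
    X = W @ X - step * G       # gossip averaging + local gradient step
    y = y - step * np.mean([grad(i, y) for i in range(n)], axis=0)

# Asymptotically, the node average of the decentralized iterates tracks
# the centralized iterate: the network-independence behavior.
print("decentralized error:", np.linalg.norm(X.mean(axis=0) - x_star))
print("centralized error:  ", np.linalg.norm(y - x_star))

Under the diminishing step size, the node average of the decentralized iterates converges at a rate comparable to the centralized iterate, which is the asymptotic network-independence behavior the abstract refers to.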

