GADMM: Fast and Communication Efficient Framework for Distributed Machine Learning

08/30/2019
by   Anis Elgabli, et al.

When data is distributed across multiple servers, efficient data exchange between the servers (or workers) is essential for solving the distributed learning problem, and it is the focus of this paper. We propose a fast, privacy-aware, and communication-efficient decentralized framework for the distributed machine learning (DML) problem. The proposed algorithm, GADMM, is based on the Alternating Direction Method of Multipliers (ADMM). The key novelty in GADMM is that each worker exchanges its locally trained model with only two neighboring workers, thereby training a global model with a lower amount of communication per exchange. We prove that GADMM converges faster than centralized batch gradient descent for convex loss functions, and we numerically show that it is faster and more communication-efficient than state-of-the-art communication-efficient centralized algorithms such as Lazily Aggregated Gradient (LAG), on linear and logistic regression tasks with synthetic and real datasets. Furthermore, we propose Dynamic GADMM (D-GADMM), a variant of GADMM, and prove its convergence under a time-varying network topology of the workers.
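The neighbor-only exchange described in the abstract can be illustrated with a small sketch: workers on a chain alternate between "head" and "tail" groups, each solving a local subproblem using only its two neighbors' latest models, followed by a dual update on each link. This is a minimal illustration for a quadratic (least-squares) local loss, where the subproblem has a closed form; the synthetic data, penalty value `rho`, and chain topology are assumptions for demonstration, not the paper's exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, m = 4, 3, 20          # workers, model dimension, samples per worker
theta_star = rng.normal(size=d)
A = [rng.normal(size=(m, d)) for _ in range(N)]
b = [A[n] @ theta_star + 0.01 * rng.normal(size=m) for n in range(N)]

theta = [np.zeros(d) for _ in range(N)]       # local models
lam = [np.zeros(d) for _ in range(N - 1)]     # lam[n] couples workers n and n+1
rho = 1.0                                     # assumed penalty parameter

def local_update(n):
    # Closed-form minimizer of the worker-n augmented-Lagrangian subproblem,
    # using only the two neighbors' current models (one neighbor at the ends).
    rhs = A[n].T @ b[n]
    k = 0  # number of neighbors
    if n > 0:
        rhs += lam[n - 1] + rho * theta[n - 1]
        k += 1
    if n < N - 1:
        rhs += -lam[n] + rho * theta[n + 1]
        k += 1
    H = A[n].T @ A[n] + k * rho * np.eye(d)
    theta[n] = np.linalg.solve(H, rhs)

for it in range(200):
    for n in range(0, N, 2):   # head workers update in parallel
        local_update(n)
    for n in range(1, N, 2):   # tail workers update using heads' new models
        local_update(n)
    for n in range(N - 1):     # dual ascent on each consensus link
        lam[n] += rho * (theta[n] - theta[n + 1])

err = max(np.linalg.norm(theta[n] - theta_star) for n in range(N))
print(f"max worker error: {err:.4f}")
```

Note that each iteration moves only model vectors between chain neighbors (no central server and no raw data exchange), which is the source of both the communication savings and the privacy awareness claimed above.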
