Communication Efficient Federated Learning via Ordered ADMM in a Fully Decentralized Setting

02/05/2022
by Yicheng Chen, et al.

The challenge of communication-efficient distributed optimization has attracted considerable attention in recent years. In this paper, a communication-efficient algorithm, called ordering-based alternating direction method of multipliers (OADMM), is devised in a general fully decentralized network setting where a worker can exchange messages only with its neighbors. Compared to the classical ADMM, a key feature of OADMM is that transmissions are ordered among workers at each iteration, so that the worker with the most informative data broadcasts its local variable to its neighbors first, and neighbors that have not yet transmitted can update their local variables based on that received transmission. In OADMM, workers are prohibited from transmitting if their current local variables do not differ sufficiently from their previously transmitted values. A variant of OADMM, called SOADMM, is also proposed, in which transmissions are ordered but no node's transmission is ever stopped at any iteration. Numerical results demonstrate that, for a given target accuracy, OADMM can significantly reduce the number of communications compared to existing algorithms, including ADMM. We also show numerically that SOADMM can accelerate convergence, yielding communication savings compared to the classical ADMM.
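The two mechanisms the abstract describes, ordering broadcasts by how informative a worker's pending update is, and censoring transmissions whose change is too small, can be sketched on a toy decentralized consensus problem. The sketch below is an illustration only, not the paper's exact OADMM: the quadratic local losses f_i(x) = 0.5*(x - a_i)^2, the classical decentralized consensus ADMM update they plug into, the decaying censoring threshold tau0*gamma**k, and all parameter values are assumptions made for the example.

```python
import numpy as np

def ordered_censored_admm(a, edges, rho=0.5, tau0=0.1, gamma=0.95, iters=300):
    """Decentralized consensus ADMM with ordered, censored broadcasts.

    Illustrative sketch: each worker i holds a scalar a_i and a local loss
    f_i(x) = 0.5*(x - a_i)^2, so the consensus optimum is the mean of a.
    """
    n = len(a)
    nbrs = [[] for _ in range(n)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    x = np.asarray(a, dtype=float).copy()   # local primal variables
    x_hat = x.copy()                        # last value each worker broadcast
    alpha = np.zeros(n)                     # local dual variables
    for k in range(iters):
        tau = tau0 * gamma ** k             # assumed decaying censoring threshold
        # Ordering: workers with the largest pending change (most
        # "informative" local update) broadcast first.
        order = sorted(range(n), key=lambda i: -abs(x[i] - x_hat[i]))
        for i in order:
            d = len(nbrs[i])
            # Neighbors that already broadcast this round contribute their
            # fresh values through x_hat (Gauss-Seidel flavor of ordering).
            s = sum(x[i] + x_hat[j] for j in nbrs[i])
            # Closed-form ADMM primal step for the quadratic local loss.
            x[i] = (a[i] - alpha[i] + rho * s) / (1.0 + 2.0 * rho * d)
            # Censoring: transmit only if the change is significant.
            if abs(x[i] - x_hat[i]) >= tau:
                x_hat[i] = x[i]
        # Dual ascent on the consensus constraints, using transmitted values.
        for i in range(n):
            alpha[i] += rho * sum(x_hat[i] - x_hat[j] for j in nbrs[i])
    return x

# Ring of four workers; all local variables should approach mean(a) = 2.5.
x = ordered_censored_admm([1.0, 2.0, 3.0, 4.0],
                          edges=[(0, 1), (1, 2), (2, 3), (3, 0)])
```

With a fixed (non-decaying) threshold, such censored schemes typically converge only to a neighborhood of the optimum whose size scales with the threshold, which is why the sketch shrinks tau geometrically over iterations.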

Related research

09/15/2019
Communication-Censored Linearized ADMM for Decentralized Consensus Optimization
In this paper, we propose a communication- and computation-efficient alg...

10/23/2019
Q-GADMM: Quantized Group ADMM for Communication Efficient Decentralized Machine Learning
In this paper, we propose a communication-efficient decentralized machin...

04/24/2021
An Asynchronous Approximate Distributed Alternating Direction Method of Multipliers in Digraphs
In this work, we consider the asynchronous distributed optimization prob...

11/09/2019
L-FGADMM: Layer-Wise Federated Group ADMM for Communication Efficient Decentralized Deep Learning
This article proposes a communication-efficient decentralized deep learn...

09/04/2019
Parameter Estimation with the Ordered ℓ_2 Regularization via an Alternating Direction Method of Multipliers
Regularization is a popular technique in machine learning for model esti...

07/03/2020
Harnessing Wireless Channels for Scalable and Privacy-Preserving Federated Learning
Wireless connectivity is instrumental in enabling scalable federated lea...

05/08/2022
Communication Compression for Decentralized Learning with Operator Splitting Methods
In decentralized learning, operator splitting methods using a primal-dua...
