L-FGADMM: Layer-Wise Federated Group ADMM for Communication Efficient Decentralized Deep Learning

11/09/2019
by   Anis Elgabli, et al.

This article proposes a communication-efficient decentralized deep learning algorithm, coined layer-wise federated group ADMM (L-FGADMM). To minimize an empirical risk, every worker in L-FGADMM periodically communicates with two neighbors, where the communication period is adjusted separately for each layer of its deep neural network. A constrained optimization problem for this setting is formulated and solved using the stochastic version of GADMM proposed in our prior work. Numerical evaluations show that by exchanging the largest layer less frequently, L-FGADMM can significantly reduce the communication cost without compromising the convergence speed. Surprisingly, despite exchanging less information and operating in a decentralized manner, intermittently skipping consensus on the largest layer in L-FGADMM creates a regularizing effect, thereby achieving a test accuracy as high as that of federated learning (FL), a baseline method that enforces consensus on every layer with the aid of a central entity.
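The layer-wise periodic exchange described above is easy to illustrate in isolation. Below is a minimal sketch, not the authors' implementation: each worker takes local stochastic gradient steps and, only every tau[layer] iterations, exchanges that layer with its two neighbors. The layer names and shapes, the per-layer periods, the ring neighborhood, and the plain three-way averaging used as a stand-in for GADMM's primal-dual consensus update are all illustrative assumptions.

```python
# Sketch of layer-wise periodic communication (assumptions noted above).
import numpy as np

rng = np.random.default_rng(0)

N_WORKERS = 4
LAYER_SHAPES = {"fc1": (784, 64), "fc2": (64, 10)}   # hypothetical 2-layer model
TAU = {"fc1": 4, "fc2": 1}                            # exchange the large layer less often
LR = 0.01

# Each worker holds its own copy of every layer, starting from a common init.
init = {name: rng.normal(scale=0.01, size=shape) for name, shape in LAYER_SHAPES.items()}
workers = [{name: w.copy() for name, w in init.items()} for _ in range(N_WORKERS)]

def local_gradient(params):
    """Placeholder for the stochastic gradient of a worker's local empirical risk."""
    return {name: rng.normal(scale=0.001, size=w.shape) for name, w in params.items()}

for k in range(1, 21):
    # 1) Local stochastic gradient step on every worker.
    for params in workers:
        grads = local_gradient(params)
        for name in params:
            params[name] -= LR * grads[name]

    # 2) Layer-wise periodic exchange: layer `name` is communicated only when
    #    the iteration index is a multiple of its period TAU[name].
    for name, period in TAU.items():
        if k % period != 0:
            continue                     # skip consensus for this layer this round
        new_layers = []
        for i in range(N_WORKERS):       # each worker talks to its two ring neighbors
            left, right = (i - 1) % N_WORKERS, (i + 1) % N_WORKERS
            new_layers.append(
                (workers[left][name] + workers[i][name] + workers[right][name]) / 3.0
            )
        for i in range(N_WORKERS):
            workers[i][name] = new_layers[i]
```

With TAU["fc1"] = 4, the largest layer (784x64) is transmitted only once every four iterations, so the bulk of the per-round payload is cut roughly fourfold while the small output layer still reaches consensus every round.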

