Federated Learning via Inexact ADMM

04/22/2022
by Shenglong Zhou, et al.

One of the crucial issues in federated learning is how to develop efficient optimization algorithms. Most existing ones require full device participation and/or impose strong assumptions to guarantee convergence. In contrast to the widely used gradient descent-based algorithms, this paper develops an inexact alternating direction method of multipliers (ADMM) that is both computation- and communication-efficient, is able to combat the stragglers' effect, and converges under mild conditions.

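To make the idea concrete, below is a minimal sketch of an inexact ADMM scheme for the consensus formulation min_x sum_i f_i(x) with per-client copies x_i constrained to equal a global model. It is not the paper's exact algorithm: the local objectives, step sizes, number of local gradient steps, and the client-sampling rule (used here to mimic partial participation under stragglers) are all illustrative assumptions.

```python
# Sketch of inexact ADMM for consensus federated learning (illustrative only).
# Each client runs a few gradient steps on its augmented Lagrangian instead of
# solving its local subproblem exactly; only a random subset of clients
# participates in each round, mimicking the stragglers' effect.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares objectives f_i(x) = 0.5 * ||A_i x - b_i||^2 on 5 clients.
dim, num_clients = 10, 5
A = [rng.standard_normal((20, dim)) for _ in range(num_clients)]
b = [rng.standard_normal(20) for _ in range(num_clients)]

def local_grad(i, x):
    """Gradient of client i's local objective at x."""
    return A[i].T @ (A[i] @ x - b[i])

rho = 1.0            # ADMM penalty parameter (assumed value)
lr = 0.01            # step size for the inexact local solves
num_local_steps = 5  # a few gradient steps instead of an exact minimizer
participation = 0.6  # fraction of clients sampled per round

x_global = np.zeros(dim)
x_local = [np.zeros(dim) for _ in range(num_clients)]
dual = [np.zeros(dim) for _ in range(num_clients)]

for rnd in range(100):
    active = rng.random(num_clients) < participation  # partial participation
    for i in range(num_clients):
        if not active[i]:
            continue  # stragglers keep their previous local and dual iterates
        # Inexact x_i-update: gradient steps on the augmented Lagrangian
        #   f_i(x_i) + <dual_i, x_i - x_global> + (rho/2) ||x_i - x_global||^2
        for _ in range(num_local_steps):
            g = local_grad(i, x_local[i]) + dual[i] + rho * (x_local[i] - x_global)
            x_local[i] = x_local[i] - lr * g
        # Local dual ascent step
        dual[i] += rho * (x_local[i] - x_global)
    # Server update: aggregate local iterates and scaled duals
    x_global = np.mean([x_local[i] + dual[i] / rho for i in range(num_clients)], axis=0)

print("max consensus residual:",
      max(np.linalg.norm(x_local[i] - x_global) for i in range(num_clients)))
```

The communication pattern mirrors the federated setting: per round, each sampled client uploads only its local iterate and dual variable, and the server broadcasts the aggregated model; the inexact local solves keep per-round computation low.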
Related research

05/03/2022
Efficient and Convergent Federated Learning
Federated learning has shown its advances over the last few years but is...

10/18/2021
Towards Federated Bayesian Network Structure Learning with Continuous Optimization
Traditionally, Bayesian network structure learning is often carried out ...

10/14/2017
Robust Federated Learning Using ADMM in the Presence of Data Falsifying Byzantines
In this paper, we consider the problem of federated (or decentralized) l...

10/28/2021
Communication-Efficient ADMM-based Federated Learning
Federated learning has shown its advances over the last few years but is...

09/05/2020
Distributed Optimization, Averaging via ADMM, and Network Topology
There has been an increasing necessity for scalable optimization methods...

06/17/2022
FedNew: A Communication-Efficient and Privacy-Preserving Newton-Type Method for Federated Learning
Newton-type methods are popular in federated learning due to their fast ...

07/07/2019
Fast and Provable ADMM for Learning with Generative Priors
In this work, we propose a (linearized) Alternating Direction Method-of-...
