DualFL: A Duality-based Federated Learning Algorithm with Communication Acceleration in the General Convex Regime

05/17/2023
by Jongho Park, et al.

We propose a novel training algorithm called DualFL (Dualized Federated Learning) for solving a distributed optimization problem in federated learning. Our approach is based on a specific dual formulation of the federated learning problem. DualFL achieves communication acceleration under various assumptions on the smoothness and strong convexity of the problem. Moreover, it provably accommodates inexact local solvers, preserving its optimal communication complexity even when local subproblems are solved inexactly. DualFL is the first federated learning algorithm that achieves communication acceleration even when the cost function is either nonsmooth or non-strongly convex. Numerical results demonstrate that the practical performance of DualFL is comparable to that of state-of-the-art federated learning algorithms, and that it is robust with respect to hyperparameter tuning.
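To illustrate the general idea of a "dualized" federated method, the sketch below runs classical dual decomposition on a toy consensus problem. This is not the DualFL algorithm itself (the paper's dual formulation is specific and not reproduced here); it is only a minimal, assumed example of the pattern the abstract describes: each client solves a local subproblem shifted by a dual variable, the server aggregates, and the dual variables are updated to drive consensus. The quadratic losses `f_i(x) = 0.5 * (x - a_i)**2` and the step size `rho` are hypothetical choices for the illustration.

```python
# Illustrative dual decomposition for a federated consensus problem
# (not the paper's DualFL method): minimize sum_i f_i(x) by letting each
# client keep a local copy x_i, enforcing x_i = z via dual variables y_i.

def local_solve(a_i, y_i):
    # Client i minimizes f_i(x) + y_i * x with f_i(x) = 0.5 * (x - a_i)**2,
    # which has the closed-form minimizer x = a_i - y_i.
    return a_i - y_i

def dualized_training(a, rho=1.0, rounds=50):
    n = len(a)
    y = [0.0] * n  # one dual variable per client
    for _ in range(rounds):
        # Local steps: each client solves its dual-shifted subproblem.
        x = [local_solve(a_i, y_i) for a_i, y_i in zip(a, y)]
        # Server step: aggregate local iterates into the consensus iterate.
        z = sum(x) / n
        # Dual ascent: penalize each client's disagreement with the consensus.
        y = [y_i + rho * (x_i - z) for y_i, x_i in zip(y, x)]
    return z, x

z, x = dualized_training([1.0, 2.0, 6.0])
# For these quadratics, the minimizer of sum_i f_i is the mean of a.
```

Each round costs one upload of the local iterates and one broadcast of the consensus iterate, which is why communication complexity (the number of such rounds) is the natural cost measure for methods of this type.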


Related research

- 10/02/2020 · Model-Agnostic Round-Optimal Federated Learning via Knowledge Transfer
  Federated learning enables multiple parties to collaboratively learn a m...
- 06/02/2022 · Federated Learning with a Sampling Algorithm under Isoperimetry
  Federated learning uses a set of techniques to efficiently distribute th...
- 01/26/2022 · A dual approach for federated learning
  We study the federated optimization problem from a dual perspective and ...
- 10/27/2020 · Federated Learning From Big Data Over Networks
  This paper formulates and studies a novel algorithm for federated learni...
- 07/12/2023 · Locally Adaptive Federated Learning via Stochastic Polyak Stepsizes
  State-of-the-art federated learning algorithms such as FedAvg require ca...
- 12/09/2020 · Accurate and Fast Federated Learning via IID and Communication-Aware Grouping
  Federated learning has emerged as a new paradigm of collaborative machin...
- 07/08/2022 · Communication Acceleration of Local Gradient Methods via an Accelerated Primal-Dual Algorithm with Inexact Prox
  Inspired by a recent breakthrough of Mishchenko et al (2022), who for th...
