FedSplit: An algorithmic framework for fast federated optimization

05/11/2020
by Reese Pathak et al.

Motivated by federated learning, we consider the hub-and-spoke model of distributed optimization in which a central authority coordinates the computation of a solution among many agents while limiting communication. We first study some past procedures for federated optimization, and show that their fixed points need not correspond to stationary points of the original optimization problem, even in simple convex settings with deterministic updates. In order to remedy these issues, we introduce FedSplit, a class of algorithms based on operator splitting procedures for solving distributed convex minimization with additive structure. We prove that these procedures have the correct fixed points, corresponding to optima of the original optimization problem, and we characterize their convergence rates under different settings. Our theory shows that these methods are provably robust to inexact computation of intermediate local quantities. We complement our theory with some simple experiments that demonstrate the benefits of our methods in practice.
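
To make the operator-splitting structure concrete, here is a minimal sketch of one natural FedSplit-style instantiation in Python, built on the classical Peaceman-Rachford splitting. The details below are illustrative assumptions rather than the paper's exact presentation: quadratic client objectives f_j(x) = (1/2)||A_j x - b_j||^2 are chosen so that the local proximal step has a closed form, and the step size s and the helper names prox_step and fedsplit are hypothetical.

```python
import numpy as np

def prox_step(A, b, v, s):
    """Closed-form proximal step for f(x) = 0.5 * ||A x - b||^2:
    argmin_u f(u) + (1/(2s)) * ||u - v||^2."""
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + np.eye(d) / s, A.T @ b + v / s)

def fedsplit(As, bs, s=0.1, iters=200):
    """Sketch of a FedSplit-style (Peaceman-Rachford) iteration for
    min_x sum_j 0.5 * ||A_j x - b_j||^2.  Each client j keeps a local
    variable z_j; the server only forms the average x_bar."""
    m, d = len(As), As[0].shape[1]
    z = np.zeros((m, d))            # per-client variables
    x_bar = z.mean(axis=0)          # server average
    for _ in range(iters):
        for j in range(m):          # clients run in parallel in practice
            # local prox step at the reflected point 2*x_bar - z_j
            z_half = prox_step(As[j], bs[j], 2.0 * x_bar - z[j], s)
            # local centering step
            z[j] = z[j] + 2.0 * (z_half - x_bar)
        x_bar = z.mean(axis=0)      # one round of communication
    return x_bar

# Usage: 3 clients with consistent data; the shared minimizer is recovered.
rng = np.random.default_rng(0)
As = [rng.standard_normal((20, 5)) for _ in range(3)]
x_star = rng.standard_normal(5)
bs = [A @ x_star for A in As]
print(np.allclose(fedsplit(As, bs), x_star, atol=1e-6))  # True
```

In this quadratic setting, the fixed-point property claimed in the abstract can be verified directly: at a stationary point of the iteration, each client's prox optimality condition gives ∇f_j(x_bar) = (x_bar - z_j)/s, and averaging over the m clients yields sum_j ∇f_j(x_bar) = 0, so the server average minimizes the original sum.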


research · 02/25/2021 · Distributionally Robust Federated Averaging
In this paper, we study communication efficient distributed algorithms f...

research · 06/29/2021 · Achieving Statistical Optimality of Federated Learning: Beyond Stationary Points
Federated Learning (FL) is a promising framework that has great potentia...

research · 04/03/2020 · From Local SGD to Local Fixed Point Methods for Federated Learning
Most algorithms for solving optimization problems or finding saddle poin...

research · 08/12/2021 · An Operator Splitting View of Federated Learning
Over the past few years, the federated learning (FL) community has witness...

research · 03/08/2021 · Convergence and Accuracy Trade-Offs in Federated Learning and Meta-Learning
We study a family of algorithms, which we refer to as local update metho...

research · 06/09/2023 · Communication-Efficient Zeroth-Order Distributed Online Optimization: Algorithm, Theory, and Applications
This paper focuses on a multi-agent zeroth-order online optimization pro...

research · 03/17/2021 · Sample-based Federated Learning via Mini-batch SSCA
In this paper, we investigate unconstrained and constrained sample-based...
