
An Operator Splitting View of Federated Learning

by   Saber Malekmohammadi, et al.

Over the past few years, the federated learning (FL) community has witnessed a proliferation of new algorithms. However, our understanding of the theory of FL is still fragmented, and a thorough, formal comparison of these algorithms remains elusive. Motivated by this gap, we show that many of the existing FL algorithms can be understood from an operator splitting point of view. This unification allows us to compare different algorithms with ease, to refine previous convergence results, and to uncover new algorithmic variants. In particular, our analysis reveals the vital role played by the step size in FL algorithms. The unification also leads to a streamlined and economical way to accelerate FL algorithms, without incurring any communication overhead. We perform numerical experiments on both convex and nonconvex models to validate our findings.
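To make the operator splitting viewpoint concrete, here is a minimal toy sketch (not the paper's exact formulation): a FedProx-style scheme on scalar quadratic client losses, written as a fixed-point iteration built from proximal operators. The losses `f_i(x) = 0.5 * (x - b_i)**2`, the helper names, and the step size value are all illustrative assumptions.

```python
import numpy as np

# Toy illustration: federated learning as a fixed-point iteration of
# proximal operators, in the spirit of operator splitting.
# Client i holds the quadratic loss f_i(x) = 0.5 * (x - b_i)**2, whose
# proximal operator with step size eta has a closed form:
#   prox_{eta f_i}(v) = (v + eta * b_i) / (1 + eta)

def prox_quadratic(v, b_i, eta):
    """Local proximal step on client i's quadratic loss."""
    return (v + eta * b_i) / (1.0 + eta)

def federated_prox_round(x, targets, eta):
    """One communication round: every client takes a local proximal
    step from the server iterate, then the server averages."""
    local_updates = [prox_quadratic(x, b_i, eta) for b_i in targets]
    return float(np.mean(local_updates))

targets = np.array([1.0, 3.0, 5.0, 7.0])  # per-client data
x = 0.0
for _ in range(200):
    x = federated_prox_round(x, targets, eta=0.5)

# The fixed point is the global minimizer, mean(targets) = 4.0
print(round(x, 6))
```

In this toy case the error contracts by a factor 1/(1 + eta) per round, which already hints at the role the step size plays in the convergence behavior discussed in the abstract.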

