FedFOR: Stateless Heterogeneous Federated Learning with First-Order Regularization

09/21/2022
by   Junjiao Tian, et al.

Federated Learning (FL) seeks to distribute model training across local clients without collecting data in a centralized data-center, thereby alleviating data-privacy concerns. A major challenge for FL is data heterogeneity (each client's data distribution can differ), which can lead to weight divergence among local clients and slow global convergence. Current state-of-the-art (SOTA) FL methods designed for data heterogeneity typically impose regularization to limit the impact of non-IID data and are stateful algorithms, i.e., they maintain local statistics over time. While effective, these approaches apply only to a special case of FL involving a small number of reliable clients. For the more typical FL applications where the number of clients is large (e.g., edge-device and mobile applications), these methods cannot be used, motivating the need for a stateless approach to heterogeneous FL that works with any number of clients. We derive a first-order gradient regularization that penalizes inconsistent local updates caused by local data heterogeneity. Specifically, to mitigate weight divergence, we introduce a first-order approximation of the global data distribution into the local objectives, which intuitively penalizes updates in the direction opposite to the global update. The result is a stateless FL algorithm that achieves 1) significantly faster convergence (i.e., fewer communication rounds) and 2) higher overall converged performance than SOTA methods under non-IID data distributions. Importantly, our approach imposes no unrealistic limits on the number of clients, enabling learning from the large client populations typical of most FL applications.
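To make the idea concrete, here is a minimal sketch of how such a first-order regularizer could enter a local update. It assumes (hypothetically, for illustration only) that the global gradient is approximated by the negative of the previous round's global weight movement, and that the regularizer is a linear (inner-product) term added to the local objective, so its gradient is a constant vector added to the local gradient. The function name, the estimator `g_hat`, and the weight `lam` are all assumptions, not the paper's exact formulation.

```python
import numpy as np

def local_gradient_step(w, local_grad, global_update, lr=0.1, lam=1.0):
    """One regularized local SGD step (illustrative sketch).

    A first-order (linear) approximation of the global objective adds a
    constant gradient term: local updates that move against the last
    global update direction are penalized, mitigating weight divergence.
    """
    g_hat = -global_update              # assumed surrogate for the global gradient
    reg_grad = local_grad + lam * g_hat # regularized local gradient
    return w - lr * reg_grad

# Toy check: a local gradient pushing opposite to the global direction.
w = np.zeros(2)
global_update = np.array([1.0, 0.0])   # last round's global weight movement
local_grad = np.array([1.0, 0.0])      # would move w against global_update

w_plain = w - 0.1 * local_grad                       # unregularized step
w_reg = local_gradient_step(w, local_grad, global_update)
# The regularizer pulls the update back toward the global direction.
```

Because the added term is linear in the weights, the method needs no per-client state across rounds: each client only receives the current global weights and the last global update, consistent with the stateless property claimed above.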

