Mobilizing Personalized Federated Learning via Random Walk Stochastic ADMM

04/25/2023
by Ziba Parsons et al.

In this research, we investigate the barriers to deploying Federated Learning (FL) in real-world scenarios where a consistent connection between the central server and all clients cannot be maintained and data distribution is heterogeneous. To address these challenges, we mobilize the federated setting: the server moves between groups of adjacent clients to learn local models. Specifically, we propose a new algorithm, Random Walk Stochastic Alternating Direction Method of Multipliers (RWSADMM), which adapts to dynamic and ad-hoc network conditions as long as a sufficient number of connected clients are available for model training. In RWSADMM, the server walks randomly toward a group of clients and formulates local proximity among adjacent clients via hard inequality constraints, rather than consensus updates, to address data heterogeneity. Our proposed method is convergent, reduces communication costs, and enhances scalability by reducing the number of clients the central server must communicate with.
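To make the mechanism concrete, here is a minimal toy sketch (not the paper's exact RWSADMM updates) of the two ideas the abstract describes: a server that random-walks over a client adjacency graph, and personalized local models kept near the server variable through a hard inequality constraint (enforced here by projection onto a ball of radius `eps`) instead of exact consensus. The ring topology, least-squares clients, and all parameter names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy heterogeneous data: each client fits y = A x + noise,
# with a client-specific perturbation of the shared solution.
n_clients, dim = 6, 3
clients = []
for i in range(n_clients):
    A = rng.normal(size=(20, dim))
    x_true = np.ones(dim) + 0.3 * rng.normal(size=dim)  # heterogeneity
    y = A @ x_true + 0.05 * rng.normal(size=20)
    clients.append((A, y))

# Ring adjacency: the mobile server can only hop to neighboring clients.
neighbors = {i: [(i - 1) % n_clients, (i + 1) % n_clients]
             for i in range(n_clients)}

z = np.zeros(dim)                              # server (global) variable
x = [np.zeros(dim) for _ in range(n_clients)]  # personalized local models
eps, rho, steps = 0.5, 1.0, 200  # constraint radius, proximal weight, walk length
pos = 0

for _ in range(steps):
    pos = int(rng.choice(neighbors[pos]))  # random-walk step to an adjacent client
    A, y = clients[pos]
    # Local proximal step: least squares regularized toward the server variable.
    x[pos] = np.linalg.solve(A.T @ A + rho * np.eye(dim), A.T @ y + rho * z)
    # Hard inequality constraint ||x_i - z|| <= eps, enforced by projection:
    # local models may personalize, but only within a ball around z.
    d = x[pos] - z
    nd = np.linalg.norm(d)
    if nd > eps:
        x[pos] = z + eps * d / nd
    # Server variable drifts toward the model of the visited client group.
    z = 0.5 * (z + x[pos])

print(np.round(z, 2))
```

The projection step stands in for the paper's hard-constraint formulation: unlike consensus-based FL, each `x[i]` never needs to equal `z`, which is what allows personalization under heterogeneous data while the random walk keeps communication local to one neighborhood per step.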


Related research

11/17/2022: Improving Federated Learning Communication Efficiency with Global Momentum Fusion for Gradient Compression Schemes
Communication costs within Federated learning hinder the system scalabil...

04/02/2023: Personalized Federated Learning with Local Attention
Federated Learning (FL) aims to learn a single global model that enables...

06/01/2022: Walk for Learning: A Random Walk Approach for Federated Learning from Heterogeneous Data
We consider the problem of a Parameter Server (PS) that wishes to learn ...

06/17/2022: Federated learning with incremental clustering for heterogeneous data
Federated learning enables different parties to collaboratively build a ...

07/20/2023: Boosting Federated Learning Convergence with Prototype Regularization
As a distributed machine learning technique, federated learning (FL) req...

02/09/2023: Delay Sensitive Hierarchical Federated Learning with Stochastic Local Updates
The impact of local averaging on the performance of federated learning (...

10/29/2021: ADDS: Adaptive Differentiable Sampling for Robust Multi-Party Learning
Distributed multi-party learning provides an effective approach for trai...
