
FedMGDA+: Federated Learning meets Multi-objective Optimization

by Zeou Hu et al.

Federated learning has emerged as a promising, massively distributed way to train a joint deep model across large numbers of edge devices while keeping private user data strictly on device. In this work, motivated by ensuring fairness among users and robustness against malicious adversaries, we formulate federated learning as multi-objective optimization and propose a new algorithm, FedMGDA+, that is guaranteed to converge to Pareto stationary solutions. FedMGDA+ is simple to implement, has fewer hyperparameters to tune, and refrains from sacrificing the performance of any participating user. We establish the convergence properties of FedMGDA+ and point out its connections to existing approaches. Extensive experiments on a variety of datasets confirm that FedMGDA+ compares favorably against the state of the art.
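To make the multi-objective view concrete: MGDA-style methods treat each user's loss as a separate objective and, at every round, seek a common descent direction — the minimum-norm point in the convex hull of the users' gradients — so that no user's loss increases to first order. Below is a minimal sketch of that subproblem, solved with a standard Frank-Wolfe loop; the function names and the simple server update are illustrative assumptions, not the paper's actual implementation (FedMGDA+ adds further ingredients such as gradient normalization and constraints).

```python
import numpy as np

def min_norm_weights(grads, iters=200):
    """Frank-Wolfe solve for the simplex weights lam minimizing
    ||sum_i lam_i * g_i||^2 (the MGDA subproblem).
    `grads` is an (n_users, dim) array of per-user gradients."""
    n = grads.shape[0]
    lam = np.ones(n) / n          # start at the uniform weighting
    G = grads @ grads.T           # Gram matrix of pairwise inner products
    for t in range(iters):
        g = G @ lam               # gradient of the quadratic objective (up to 2x)
        i = np.argmin(g)          # linear minimization picks a simplex vertex
        step = 2.0 / (t + 2.0)    # classic Frank-Wolfe step size
        vertex = np.zeros(n)
        vertex[i] = 1.0
        lam = (1.0 - step) * lam + step * vertex
    return lam

def mgda_server_step(weights, user_grads, lr=0.1):
    """One hypothetical server round: descend along the common direction
    d = sum_i lam_i * g_i, which does not increase any user's loss
    to first order."""
    lam = min_norm_weights(user_grads)
    d = lam @ user_grads
    return weights - lr * d, lam
```

Note that when one user's gradient dominates the others in norm, the min-norm weighting automatically down-weights it, which is one intuition behind the fairness and robustness claims.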




Related articles:

- Accelerating Fair Federated Learning: Adaptive Federated Adam
- Federated Learning with Fair Averaging
- Faster On-Device Training Using New Federated Momentum Algorithm
- Device Heterogeneity in Federated Learning: A Superquantile Approach
- Effective Federated Adaptive Gradient Methods with Non-IID Decentralized Data
- Towards Building a Robust and Fair Federated Learning System
- Optimising Communication Overhead in Federated Learning Using NSGA-II