Adaptive Federated Optimization
Federated learning is a distributed machine learning paradigm in which a large number of clients coordinate with a central server to learn a model without sharing their own training data. Due to the heterogeneity of the client datasets, standard federated optimization methods such as Federated Averaging (FedAvg) are often difficult to tune and exhibit unfavorable convergence behavior. In non-federated settings, adaptive optimization methods have had notable success in combating such issues. In this work, we propose federated versions of adaptive optimizers, including Adagrad, Adam, and Yogi, and analyze their convergence in the presence of heterogeneous data for general nonconvex settings. Our results highlight the interplay between client heterogeneity and communication efficiency. We also perform extensive experiments on these methods and show that the use of adaptive optimizers can significantly improve the performance of federated learning.
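The server-side adaptive update the abstract describes can be illustrated with a minimal numpy sketch in the style of FedAdam: clients run a few local SGD steps, the server treats the averaged model delta as a pseudo-gradient, and applies an Adam-style update to the global model. The toy quadratic client losses, learning rates, and `tau` value below are illustrative assumptions for the sketch, not the paper's actual algorithmic details or experimental setup.

```python
import numpy as np

def local_sgd(w, target, lr=0.01, steps=5):
    """One client: a few SGD steps on a toy quadratic loss
    0.5 * ||w - target||^2, where `target` stands in for local data."""
    w = w.copy()
    for _ in range(steps):
        grad = w - target            # gradient of the toy local loss
        w -= lr * grad
    return w

def fedadam_round(w, client_targets, m, v, server_lr=0.1,
                  beta1=0.9, beta2=0.99, tau=1e-3):
    """One communication round: server-side Adam on the averaged
    client pseudo-gradient (a FedAdam-style update)."""
    # Each client's model delta acts as a "pseudo-gradient".
    deltas = [local_sgd(w, t) - w for t in client_targets]
    delta = np.mean(deltas, axis=0)
    # Adam-style first/second moment updates, maintained on the server.
    m = beta1 * m + (1 - beta1) * delta
    v = beta2 * v + (1 - beta2) * delta ** 2
    w = w + server_lr * m / (np.sqrt(v) + tau)
    return w, m, v

# Toy heterogeneous clients: each local optimum is centered differently.
rng = np.random.default_rng(0)
targets = [rng.normal(loc=c, size=3) for c in (2.0, 3.0, 4.0)]
w = np.zeros(3)
m, v = np.zeros(3), np.zeros(3)
for _ in range(50):
    w, m, v = fedadam_round(w, targets, m, v)
# w moves toward the minimizer of the average client loss
# (here, the mean of the client targets).
```

Swapping the moment updates changes the method: Adagrad-style accumulation (`v += delta**2`) gives a FedAdagrad-flavored variant, and a sign-based second-moment update gives a Yogi-flavored one, which is the family of optimizers the paper analyzes.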