Personalized Federated Learning: A Meta-Learning Approach
The goal of federated learning is to design algorithms in which several agents communicate with a central node, in a privacy-protecting manner, to minimize the average of their loss functions. In this approach, each node not only shares in the required computational budget but also gains access to a larger data set, which improves the quality of the resulting model. However, this method produces only a common model for all agents and therefore does not adapt to each user's data. This is an important limitation, especially given the heterogeneity of the underlying data distributions across agents. In this paper, we study a personalized variant of federated learning in which the goal is to find, in a distributed manner, a shared initial model that a current or new user can quickly adapt by performing one or a few steps of gradient descent on its own loss function. This approach keeps all the benefits of the federated learning architecture while leading to a more personalized model for each user. We show this problem can be studied within the Model-Agnostic Meta-Learning (MAML) framework. Inspired by this connection, we propose a personalized variant of the well-known Federated Averaging algorithm and evaluate its performance in terms of gradient norm for non-convex loss functions. Further, we characterize how this performance is affected by the closeness of the underlying distributions of user data, measured in terms of distribution distances such as the Total Variation and 1-Wasserstein metrics.
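To make the MAML connection concrete: the personalized objective is to minimize (1/n) Σ_i f_i(w − α∇f_i(w)) over the shared initialization w, so that one gradient step from w already works well for each client i. The sketch below is a minimal, first-order illustration of such a Per-FedAvg-style round on toy quadratic client losses. The client losses, step sizes (alpha, beta), sampling scheme, and function names are illustrative assumptions, and the paper's full algorithm also uses second-order (Hessian-vector) information that the first-order approximation here drops.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: client i has loss f_i(w) = 0.5 * ||w - c_i||^2, so
# grad f_i(w) = w - c_i. Distinct optima c_i model data heterogeneity.
num_clients, dim = 20, 5
client_optima = rng.normal(size=(num_clients, dim))

def grad(i, w):
    """Gradient of client i's loss at w."""
    return w - client_optima[i]

def per_fedavg_round(w, clients, alpha=0.1, beta=0.05, local_steps=5):
    """One communication round of a first-order Per-FedAvg-style update.

    Each sampled client repeatedly (1) computes the one-step personalized
    model w_adapted = w - alpha * grad f_i(w), then (2) moves its local
    copy of the shared model along grad f_i evaluated at w_adapted (the
    first-order MAML approximation). The server averages the local models.
    """
    local_models = []
    for i in clients:
        w_local = w.copy()
        for _ in range(local_steps):
            w_adapted = w_local - alpha * grad(i, w_local)   # inner (adaptation) step
            w_local = w_local - beta * grad(i, w_adapted)    # outer (meta) step
        local_models.append(w_local)
    return np.mean(local_models, axis=0)

# Train the shared initialization over 100 rounds with 5 clients per round.
w = np.zeros(dim)
for _ in range(100):
    sampled = rng.choice(num_clients, size=5, replace=False)
    w = per_fedavg_round(w, sampled)

# Personalization: a client takes one gradient step from the shared model.
i = 0
w_personalized = w - 0.1 * grad(i, w)
print("shared-model loss:   ", 0.5 * np.sum((w - client_optima[i]) ** 2))
print("personalized loss:   ", 0.5 * np.sum((w_personalized - client_optima[i]) ** 2))
```

On this toy problem the learned shared model sits near the mean of the client optima, while a single personalization step moves each client noticeably closer to its own optimum, which is exactly the behavior the shared-initialization objective is designed to produce.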