Personalized Federated Learning: A Meta-Learning Approach

by Alireza Fallah, et al.

The goal of federated learning is to design algorithms in which several agents communicate with a central node, in a privacy-protecting manner, to minimize the average of their loss functions. In this approach, each node not only shares the required computational budget but also benefits from a larger effective data set, which improves the quality of the resulting model. However, this method produces only a common output for all the agents and therefore does not adapt the model to each user's data. This is an important missing feature, especially given the heterogeneity of the underlying data distributions across agents. In this paper, we study a personalized variant of federated learning in which our goal is to find, in a distributed manner, a shared initial model that a current or new user can quickly adapt by performing one or a few steps of gradient descent with respect to its own loss function. This approach retains all the benefits of the federated learning architecture while producing a more personalized model for each user. We show that this problem can be studied within the Model-Agnostic Meta-Learning (MAML) framework. Inspired by this connection, we propose a personalized variant of the well-known Federated Averaging algorithm and evaluate its performance in terms of gradient norm for non-convex loss functions. Further, we characterize how this performance is affected by the closeness of the underlying distributions of user data, measured by distribution distances such as Total Variation and the 1-Wasserstein metric.
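The abstract's core idea, finding a shared initialization that each user refines with one or a few gradient steps, corresponds to minimizing the MAML objective F_i(w) = f_i(w - α∇f_i(w)) across users. The following is a minimal sketch of that idea under stated assumptions: the function and variable names (`per_fedavg`, `maml_gradient`, `agents`) are illustrative and not taken from the paper, each agent is represented by exact gradient and Hessian oracles rather than stochastic minibatch estimates, and every agent participates in every round.

```python
import numpy as np

def maml_gradient(grad_f, hess_f, w, alpha):
    """Gradient of the per-user MAML objective F(w) = f(w - alpha * grad f(w)).

    By the chain rule this equals
        (I - alpha * Hessian f(w)) @ grad f(w - alpha * grad f(w)).
    """
    w_adapted = w - alpha * grad_f(w)          # one personalization step
    return (np.eye(len(w)) - alpha * hess_f(w)) @ grad_f(w_adapted)

def per_fedavg(agents, w0, alpha=0.01, beta=0.1, rounds=100, local_steps=1):
    """Toy federated loop: agents is a list of (grad_f, hess_f) oracle pairs.

    Each round, every agent takes `local_steps` descent steps on its MAML
    objective starting from the shared model, and the server averages the
    resulting local models (the FedAvg aggregation rule).
    """
    w = w0.copy()
    for _ in range(rounds):
        local_models = []
        for grad_f, hess_f in agents:
            w_i = w.copy()
            for _ in range(local_steps):
                w_i = w_i - beta * maml_gradient(grad_f, hess_f, w_i, alpha)
            local_models.append(w_i)
        w = np.mean(local_models, axis=0)      # server-side averaging
    return w
```

With heterogeneous quadratic losses f_i(w) = ½‖w − c_i‖², the shared model converges toward the average of the users' optima, and each user then personalizes it with one α-step toward its own c_i, which is exactly the two-stage behavior the abstract describes.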






