
Personalized Federated Learning: A Unified Framework and Universal Optimization Techniques

by Filip Hanzely et al.

We study the optimization aspects of personalized Federated Learning (FL). We develop a universal optimization theory applicable to all convex personalized FL models in the literature. In particular, we propose a general personalized objective capable of recovering essentially any existing personalized FL objective as a special case. We design several optimization techniques to minimize the general objective, namely a tailored variant of Local SGD and variants of accelerated coordinate descent/accelerated SVRCD. We demonstrate the practicality and/or optimality of our methods both in terms of communication and local computation. Lastly, we discuss the implications of our general optimization theory when applied to specific personalized FL objectives.
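To make the setup concrete, below is a minimal sketch (not the paper's exact algorithm or objective) of gradient descent on a mixture-style personalized objective, one well-known special case that a general personalized FL formulation can recover: each client `i` keeps its own model `x_i`, and a quadratic penalty pulls the local models toward their average. The client losses `f_i`, the toy targets, and the penalty weight `lam` are all illustrative assumptions.

```python
import numpy as np

# Mixture-style personalized objective (a common special case):
#   F(x_1..x_n) = (1/n) * sum_i f_i(x_i) + (lam / (2n)) * sum_i ||x_i - mean(x)||^2
# Here f_i(x) = 0.5 * ||x - t_i||^2 stands in for client i's local loss.

rng = np.random.default_rng(0)
n, d = 5, 3                          # number of clients, model dimension
targets = rng.normal(size=(n, d))    # t_i: each client's local optimum (toy data)
lam = 1.0                            # personalization strength (lam -> 0: purely
                                     # local models; lam -> inf: one shared model)

def grad_local(i, x_i):
    # gradient of f_i(x) = 0.5 * ||x - t_i||^2
    return x_i - targets[i]

def step(X, lr=0.1):
    xbar = X.mean(axis=0)
    G = np.stack([grad_local(i, X[i]) for i in range(n)])
    G += lam * (X - xbar)            # penalty term pulls models toward the mean
    return X - lr * G

X = np.zeros((n, d))
for _ in range(500):
    X = step(X)
# For these quadratics the minimizer is x_i = (t_i + lam * mean(t)) / (1 + lam),
# i.e. each personalized model interpolates between its local optimum and the mean.
```

Varying `lam` interpolates between fully local training and standard (single-model) federated learning, which is the sense in which such a penalized objective unifies the two extremes.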




Achieving Personalized Federated Learning with Sparse Local Models

Federated learning (FL) is vulnerable to heterogeneously distributed dat...

Lower Bounds and Optimal Algorithms for Personalized Federated Learning

In this work, we consider the optimization formulation of personalized f...

Local SGD: Unified Theory and New Efficient Methods

We present a unified framework for analyzing local SGD methods in the co...

FedADC: Accelerated Federated Learning with Drift Control

Federated learning (FL) has become the de facto framework for collaborative ...

FedPara: Low-rank Hadamard Product Parameterization for Efficient Federated Learning

To overcome the burdens on frequent model uploads and downloads during f...

Personalized Federated Learning with Clustered Generalization

We study the recent emerging personalized federated learning (PFL) that ...

Personalized Federated Learning of Driver Prediction Models for Autonomous Driving

Autonomous vehicles (AVs) must interact with a diverse set of human driv...