QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning

07/29/2021
by   Kaan Ozkara, et al.

Traditionally, federated learning (FL) aims to train a single global model collaboratively across multiple clients and a server. Two natural challenges that FL algorithms face are heterogeneity of data across clients and collaboration among clients with diverse resources. In this work, we introduce QuPeD, a quantized and personalized FL algorithm that facilitates collective training of personalized, compressed models via knowledge distillation (KD) among clients with heterogeneous data and resources. For personalization, we allow clients to learn compressed personalized models with different quantization parameters and model dimensions/structures. Toward this end, we first propose an algorithm for learning quantized models through a relaxed optimization problem in which the quantization values themselves are also optimized. When each client participating in the (federated) learning process has different requirements for its compressed model (in both model dimension and precision), we formulate a compressed personalization framework by introducing a knowledge distillation loss into the local client objectives, which collaborate through a global model. We develop an alternating proximal gradient update for solving this compressed personalization problem and analyze its convergence properties. Numerically, we validate that QuPeD outperforms competing personalized FL methods, FedAvg, and local training of clients in various heterogeneous settings.
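To make the alternating update concrete, below is a minimal PyTorch-style sketch of one client's local step: a gradient step on the cross-entropy plus distillation loss against the global model, followed by a proximal pull of the weights toward their nearest quantization centers and a re-estimation of those centers. The proximal form, the center update rule, and all names and hyperparameters (prox_quantize, lam_q, lam_kd, T) are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def prox_quantize(w, centers, lam, lr):
    """Proximal step for a quantization regularizer R(w, c) = min_j |w - c_j|^2:
    with the nearest-center assignment held fixed, the prox has the closed form
    (w + lr * lam * c_nearest) / (1 + lr * lam). (Hypothetical simplification.)"""
    d = (w.unsqueeze(-1) - centers) ** 2          # squared distance to every center
    nearest = centers[d.argmin(dim=-1)]           # nearest center per weight
    return (w + lr * lam * nearest) / (1 + lr * lam)

def local_step(personal_model, global_model, x, y, centers, opt,
               lam_q=1e-4, lam_kd=0.5, T=2.0, lr=0.01):
    """One alternating update on a client:
    (1) gradient step on CE + KD loss, (2) proximal step toward the quantization
    centers, (3) update each center as the mean of the weights assigned to it."""
    opt.zero_grad()
    logits_p = personal_model(x)
    with torch.no_grad():
        logits_g = global_model(x)                # teacher logits from the global model
    ce = F.cross_entropy(logits_p, y)
    kd = F.kl_div(F.log_softmax(logits_p / T, dim=1),
                  F.softmax(logits_g / T, dim=1),
                  reduction="batchmean") * T * T
    (ce + lam_kd * kd).backward()
    opt.step()
    with torch.no_grad():
        for p in personal_model.parameters():
            p.copy_(prox_quantize(p, centers, lam_q, lr))
        flat = torch.cat([p.flatten() for p in personal_model.parameters()])
        idx = ((flat.unsqueeze(1) - centers) ** 2).argmin(dim=1)
        for j in range(len(centers)):
            if (idx == j).any():
                centers[j] = flat[idx == j].mean()
    return centers
```

Because each client keeps its own `centers` tensor (and its own personal model architecture), clients can quantize to different precisions while still distilling from the shared global model.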


Related research:
- QuPeL: Quantized Personalization with Applications to Federated Learning (02/23/2021)
- A Generative Framework for Personalized Learning and Estimation: Theory, Algorithms, and Privacy (07/05/2022)
- Personalized Decentralized Federated Learning with Knowledge Distillation (02/23/2023)
- Partially Personalized Federated Learning: Breaking the Curse of Data Heterogeneity (05/29/2023)
- FedICT: Federated Multi-task Distillation for Multi-access Edge Computing (01/01/2023)
- CD^2-pFed: Cyclic Distillation-guided Channel Decoupling for Model Personalization in Federated Learning (04/08/2022)
- FedVQCS: Federated Learning via Vector Quantized Compressed Sensing (04/16/2022)
