FedKD: Communication Efficient Federated Learning via Knowledge Distillation

08/30/2021
by Chuhan Wu, et al.

Federated learning is widely used to learn intelligent models from decentralized data. In federated learning, clients need to communicate their local model updates in each iteration of model learning. However, the model updates are large if the model contains numerous parameters, and many rounds of communication are usually needed before the model converges. Thus, the communication cost in federated learning can be heavy. In this paper, we propose a communication-efficient federated learning method based on knowledge distillation. Instead of directly communicating the large models between clients and the server, we propose an adaptive mutual distillation framework to reciprocally learn a student and a teacher model on each client, where only the student model is shared across clients and updated collaboratively to reduce the communication cost. Both the teacher and the student on each client are learned from its local data and from the knowledge distilled from each other, and their distillation intensities are controlled by their prediction quality. To further reduce the communication cost, we propose a dynamic gradient approximation method based on singular value decomposition to approximate the exchanged gradients with dynamic precision. Extensive experiments on benchmark datasets for different tasks show that our approach can effectively reduce the communication cost and achieve competitive results.
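The two core ideas in the abstract, quality-controlled mutual distillation and SVD-based gradient approximation, can be illustrated with a short sketch. The NumPy code below is not the authors' implementation: the energy-threshold rule for choosing the rank and the accuracy-based weighting function are assumptions made for demonstration only.

```python
# Minimal sketch of (1) SVD-based gradient approximation with a retained-energy
# threshold and (2) a toy rule for adaptive mutual-distillation weights.
# Threshold schedule and weighting rule are illustrative assumptions, not FedKD's exact method.
import numpy as np


def svd_approximate_gradient(grad: np.ndarray, energy_threshold: float = 0.95):
    """Factorize a 2-D gradient matrix and keep the smallest rank k whose
    singular values retain `energy_threshold` of the total energy."""
    u, s, vt = np.linalg.svd(grad, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(energy, energy_threshold) + 1)
    # Only the truncated factors are communicated; the receiver rebuilds the gradient.
    return u[:, :k], s[:k], vt[:k, :]


def rebuild_gradient(u, s, vt):
    """Reconstruct the approximated gradient from its SVD factors."""
    return (u * s) @ vt


def adaptive_distillation_weights(teacher_probs, student_probs, labels):
    """Toy rule: the better a model predicts the true label, the more strongly
    the other model is encouraged to imitate it (hypothetical weighting)."""
    t_quality = teacher_probs[np.arange(len(labels)), labels].mean()
    s_quality = student_probs[np.arange(len(labels)), labels].mean()
    # Weights for distilling teacher -> student and student -> teacher.
    return t_quality, s_quality


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grad = rng.normal(size=(256, 64))
    u, s, vt = svd_approximate_gradient(grad, energy_threshold=0.9)
    approx = rebuild_gradient(u, s, vt)
    sent = u.size + s.size + vt.size
    print(f"rank kept: {len(s)}, values sent: {sent} vs {grad.size}")
    print(f"relative error: {np.linalg.norm(grad - approx) / np.linalg.norm(grad):.3f}")
```

In this sketch, a client would transmit only the truncated factors u, s, and vt instead of the full gradient matrix; raising the energy threshold trades extra communication for a more precise reconstruction, which is one way to realize the "dynamic precision" described in the abstract.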
