Communication-Efficient Federated Distillation

12/01/2020
by Felix Sattler, et al.

Communication constraints are one of the major challenges preventing the widespread adoption of Federated Learning systems. Recently, Federated Distillation (FD), a new algorithmic paradigm for Federated Learning with fundamentally different communication properties, has emerged. FD methods leverage ensemble distillation techniques and exchange model outputs, represented as soft labels on an unlabeled public data set, between the central server and the participating clients. While for conventional Federated Learning algorithms, like Federated Averaging (FA), communication scales with the size of the jointly trained model, in FD communication scales with the size of the distillation data set, resulting in advantageous communication properties, especially when large models are trained. In this work, we investigate FD from the perspective of communication efficiency by analyzing the effects of active distillation-data curation, soft-label quantization, and delta-coding techniques. Based on the insights gathered from this analysis, we present Compressed Federated Distillation (CFD), an efficient Federated Distillation method. Extensive experiments on federated image classification and language modeling problems demonstrate that our method can reduce the amount of communication necessary to achieve fixed performance targets by more than two orders of magnitude compared to FD, and by more than four orders of magnitude compared to FA.
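To make the described upload path concrete, the sketch below illustrates how a client in such a scheme could quantize its soft labels on the public distillation set and delta-code them against the labels sent in the previous round. This is a minimal illustration under assumed conventions, not the authors' implementation; all names (quantize, delta_code, client_upload, model_predict) are hypothetical, and the quantizer, bit width, and entropy-coding step are simplifications.

```python
# Minimal sketch of a compressed soft-label upload in Federated Distillation.
# Assumptions: soft labels lie in [0, 1]; a uniform quantizer and a simple
# difference against last round's labels stand in for the actual coding scheme.
import numpy as np

def quantize(soft_labels: np.ndarray, bits: int = 1) -> np.ndarray:
    """Uniformly quantize soft labels in [0, 1] to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(soft_labels * levels) / levels

def delta_code(current: np.ndarray, previous: np.ndarray) -> np.ndarray:
    """Encode only the change relative to the labels sent last round."""
    return current - previous

def client_upload(model_predict, public_data, prev_labels, bits=1):
    soft = model_predict(public_data)        # soft labels on the public distillation set
    soft_q = quantize(soft, bits)            # soft-label quantization
    delta = delta_code(soft_q, prev_labels)  # delta coding against the previous round
    # The low-entropy delta would then be compressed and sent to the server,
    # which reconstructs soft_q = prev_labels + delta and aggregates the
    # clients' labels for ensemble distillation into the global model.
    return delta, soft_q
```

In this simplified picture, the upload cost is governed by the size of the distillation set, the chosen bit width, and how much the soft labels change between rounds, which is what the active data curation, quantization, and delta-coding analysis in the paper targets.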


12/15/2020

CosSGD: Nonlinear Quantization for Communication-efficient Federated Learning

Federated learning facilitates learning across clients without transferr...
08/30/2021

FedKD: Communication Efficient Federated Learning via Knowledge Distillation

Federated learning is widely used to learn intelligent models from decen...
03/14/2022

Communication-Efficient Federated Distillation with Active Data Sampling

Federated learning (FL) is a promising paradigm to enable privacy-preser...
04/26/2022

One-shot Federated Learning without Server-side Training

Federated Learning (FL) has recently made significant progress as a new ...
07/05/2019

Wireless Federated Distillation for Distributed Edge Learning with Heterogeneous Data

Cooperative training methods for distributed machine learning typically ...
10/28/2021

Towards Model Agnostic Federated Learning Using Knowledge Distillation

An often unquestioned assumption underlying most current federated learn...
03/21/2020

Dynamic Sampling and Selective Masking for Communication-Efficient Federated Learning

Federated learning (FL) is a novel machine learning setting which enable...