Communication-Efficient On-Device Machine Learning: Federated Distillation and Augmentation under Non-IID Private Data

11/28/2018
by Eunjeong Jeong, et al.

On-device machine learning (ML) enables the training process to exploit a massive amount of user-generated private data samples. To enjoy this benefit, inter-device communication overhead should be minimized. To this end, we propose federated distillation (FD), a distributed model training algorithm whose communication payload is much smaller than that of a benchmark scheme, federated learning (FL), particularly when the model size is large. Moreover, user-generated data samples are likely to be non-IID across devices, which commonly degrades performance compared to the IID case. To cope with this, we propose federated augmentation (FAug), in which devices collectively train a generative model and thereby augment their local data towards an IID dataset. Empirical studies demonstrate that FD with FAug yields around 26x less communication overhead while achieving 95-98% of the accuracy of FL.
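The communication saving of FD comes from exchanging model outputs rather than model parameters. A minimal sketch of that exchange, with hypothetical function names and NumPy arrays standing in for real model outputs: each device uploads only its per-label average logit vectors, and the server averages them across devices, so the payload scales with the number of labels instead of the number of model weights.

```python
import numpy as np

def local_label_logits(logits, labels, num_classes):
    """Device side: per-label mean logit vectors over the local dataset.

    logits: (num_samples, num_classes) array of model outputs.
    labels: (num_samples,) array of integer class labels.
    Returns a (num_classes, num_classes) array -- the FD upload payload.
    """
    means = np.zeros((num_classes, logits.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            means[c] = logits[mask].mean(axis=0)
    return means

def aggregate_logits(device_payloads):
    """Server side: average the uploaded per-label logits across devices.

    The result is broadcast back and used as a distillation target
    (teacher output) in each device's local training loss.
    """
    return np.mean(device_payloads, axis=0)
```

Note that the uploaded payload here is only `num_classes * num_classes` floats per device, independent of model size, which is why FD's overhead stays small even for large models.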


