Communication and Energy Efficient Slimmable Federated Learning via Superposition Coding and Successive Decoding

12/05/2021
by Hankyul Baek, et al.

Mobile devices are indispensable sources of big data. Federated learning (FL) has great potential to exploit these private data by exchanging locally trained models instead of raw data. However, mobile devices are often energy-limited and wirelessly connected, and FL cannot flexibly cope with their heterogeneous and time-varying energy capacities and communication throughputs, limiting its adoption. Motivated by these issues, we propose a novel energy- and communication-efficient FL framework, coined SlimFL. To resolve the heterogeneous energy capacity problem, each device in SlimFL runs a width-adjustable slimmable neural network (SNN). To address the heterogeneous communication throughput problem, each full-width (1.0x) SNN model and its half-width (0.5x) model are superposition-coded before transmission and successively decoded after reception as the 0.5x or 1.0x model, depending on the channel quality. Simulation results show that SlimFL can simultaneously train both 0.5x and 1.0x models with reasonable accuracy and convergence speed, compared to its vanilla FL counterpart, which trains the two models separately using 2x more communication resources. Surprisingly, SlimFL achieves even higher accuracy with a lower energy footprint than vanilla FL for poor channels and non-IID data distributions, under which vanilla FL converges slowly.
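To make the width-adjustable SNN concrete, here is a minimal sketch of a slimmable layer in the style of switchable-width networks. The class name `SlimmableLinear`, the `width` argument, and the channel-slicing rule are illustrative assumptions rather than the paper's exact architecture; the key property shown is that the 0.5x model reuses the first half of the 1.0x model's parameters, so one parameter set serves both widths.

```python
import torch
import torch.nn as nn

class SlimmableLinear(nn.Module):
    """A linear layer whose active width can be switched at run time.
    The 0.5x configuration reuses the first half of the 1.0x weights,
    so a single parameter set serves both model widths."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(0.02 * torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x, width=1.0):
        # Slice input and output channels to the requested width multiplier
        # (a first layer would keep all inputs; this sketch slices both sides).
        in_w = max(1, int(self.weight.shape[1] * width))
        out_w = max(1, int(self.weight.shape[0] * width))
        return x[..., :in_w] @ self.weight[:out_w, :in_w].t() + self.bias[:out_w]

layer = SlimmableLinear(8, 4)
x = torch.randn(2, 8)
print(layer(x, width=1.0).shape)  # torch.Size([2, 4]) -> 1.0x model
print(layer(x, width=0.5).shape)  # torch.Size([2, 2]) -> 0.5x model
```

Because the two widths share parameters, aggregating a received 0.5x update simply updates the first half of the 1.0x weights, which is what allows SlimFL to train both models from a single round of communication.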
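The superposition-coding and successive-decoding step can likewise be sketched as a two-layer non-orthogonal transmission: the 0.5x model's bits form the high-power base layer, and the residual bits that upgrade it to 1.0x form the low-power enhancement layer. The BPSK mapping, the power split `alpha`, and the SNR gate `snr_gate_db` below are illustrative assumptions, not the paper's exact modulation or decoding rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def superposition_encode(bits_base, bits_enh, alpha=0.8):
    """Map both bitstreams to BPSK and superpose them, giving the base
    layer (the 0.5x model's bits) the larger power share alpha."""
    s_base = 2.0 * bits_base - 1.0   # {0,1} -> {-1,+1}
    s_enh = 2.0 * bits_enh - 1.0
    return np.sqrt(alpha) * s_base + np.sqrt(1.0 - alpha) * s_enh

def successive_decode(y, alpha=0.8, snr_db=0.0, snr_gate_db=5.0):
    """Decode the base layer first, treating the enhancement layer as noise.
    On a good channel, cancel the base layer and decode the enhancement
    layer too, upgrading the received 0.5x model to the 1.0x model."""
    bits_base = (np.sign(y) + 1.0) / 2.0       # hard-decision base decoding
    if snr_db < snr_gate_db:
        return bits_base, None                 # poor channel: 0.5x model only
    y_enh = y - np.sqrt(alpha) * np.sign(y)    # successive interference cancellation
    bits_enh = (np.sign(y_enh) + 1.0) / 2.0
    return bits_base, bits_enh                 # good channel: 1.0x model

# Toy end-to-end check over an AWGN channel at 10 dB SNR.
n = 10_000
bits_base = rng.integers(0, 2, n).astype(float)
bits_enh = rng.integers(0, 2, n).astype(float)
x = superposition_encode(bits_base, bits_enh)
noise_std = np.sqrt(np.mean(x ** 2) / 10 ** (10.0 / 10.0))
y = x + rng.normal(0.0, noise_std, n)
b_hat, e_hat = successive_decode(y, snr_db=10.0)
print("base-layer BER:", np.mean(b_hat != bits_base))
print("enhancement-layer BER:", np.mean(e_hat != bits_enh))
```

Giving the base layer the larger power share is what lets a receiver on a poor channel still recover the 0.5x model, while a receiver on a good channel cancels that layer and recovers the full 1.0x model, matching the channel-quality-dependent decoding described in the abstract.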

