FAQS: Communication-efficient Federated DNN Architecture and Quantization Co-Search for Personalized Hardware-aware Preferences

10/16/2022
by   Hongjiang Chen, et al.

Due to user privacy and regulatory restrictions, federated learning (FL) has been proposed as a distributed framework for training deep neural networks (DNNs) on decentralized data clients. Recent advancements in FL apply Neural Architecture Search (NAS) to replace the predefined one-size-fits-all DNN model, which is not optimal for tasks with varying data distributions, with searchable DNN architectures. However, previous methods suffer from the expensive communication cost raised by frequent transmission of large model parameters between the server and clients. This difficulty is further amplified when combining NAS algorithms, which commonly require prohibitive computation and enormous model storage. Towards this end, we propose FAQS, an efficient personalized FL-NAS-Quantization framework that reduces communication cost through three features: weight-sharing super kernels, bit-sharing quantization, and masked transmission. FAQS has an affordable search time and demands a very limited message size at each round. By setting different personalized Pareto loss functions on local clients, FAQS can yield heterogeneous hardware-aware models for various user preferences. Experimental results show that FAQS achieves an average reduction of 1.58x in communication bandwidth per round compared with a normal FL framework, and 4.51x compared with an FL+NAS framework.
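The abstract does not spell out how masked transmission works internally, but the general idea (sending only the parameters that changed meaningfully since the last round, plus a boolean mask, instead of the full model) can be sketched as follows. All names here (`make_update_mask`, `masked_message`, the threshold value) are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

def make_update_mask(old_params, new_params, threshold=1e-3):
    """Mark parameters whose change since the last round exceeds a threshold.

    Illustrative sketch of the masked-transmission idea: only weights
    that moved enough are worth sending back to the server."""
    return np.abs(new_params - old_params) > threshold

def masked_message(old_params, new_params, threshold=1e-3):
    """Pack a client message: a boolean mask plus only the masked values."""
    mask = make_update_mask(old_params, new_params, threshold)
    return mask, new_params[mask]

def apply_masked_message(server_params, mask, values):
    """Server side: reconstruct the full parameter vector from the
    sparse message, leaving unmasked entries unchanged."""
    updated = server_params.copy()
    updated[mask] = values
    return updated

# Toy example: a 6-weight "model" where only two weights moved notably.
old = np.array([0.10, 0.20, 0.30, 0.40, 0.50, 0.60])
new = np.array([0.10, 0.25, 0.30, 0.40, 0.55, 0.60])
mask, values = masked_message(old, new, threshold=1e-2)
restored = apply_masked_message(old, mask, values)
# Only 2 of the 6 values are transmitted; the server still recovers `new`.
```

In a real FL round the mask could be combined with quantized (bit-shared) value encodings to shrink the message further; the actual bandwidth figures in the abstract refer to the paper's full protocol, not this toy.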


Related research

12/27/2021
SPIDER: Searching Personalized Neural Architecture for Federated Learning
Federated learning (FL) is an efficient learning framework that assists ...

04/18/2020
FedNAS: Federated Deep Learning via Neural Architecture Search
Federated Learning (FL) has been proved to be an effective learning fram...

02/15/2020
Neural Architecture Search over Decentralized Data
To preserve user privacy while enabling mobile intelligence, techniques ...

12/16/2022
Communication-Efficient Federated Learning for Heterogeneous Edge Devices Based on Adaptive Gradient Quantization
Federated learning (FL) enables geographically dispersed edge devices (i...

04/09/2021
FL-AGCNS: Federated Learning Framework for Automatic Graph Convolutional Network Search
Recently, some Neural Architecture Search (NAS) techniques are proposed ...

06/22/2022
FedorAS: Federated Architecture Search under system heterogeneity
Federated learning (FL) has recently gained considerable attention due t...

09/07/2021
BioNetExplorer: Architecture-Space Exploration of Bio-Signal Processing Deep Neural Networks for Wearables
In this work, we propose the BioNetExplorer framework to systematically ...
