FedDCT: Federated Learning of Large Convolutional Neural Networks on Resource Constrained Devices using Divide and Co-Training

11/20/2022
by Quan Nguyen, et al.

We introduce FedDCT, a novel distributed learning paradigm that enables the use of large, high-performance CNNs on resource-limited edge devices. Unlike traditional FL approaches, which require each client to train the full-size neural network independently during every training round, FedDCT allows a cluster of clients to collaboratively train a large deep learning model by dividing it into an ensemble of several small sub-models and training them on multiple devices in parallel while maintaining privacy. In this co-training process, clients in the same cluster also learn from each other, further improving ensemble performance. In the aggregation stage, the server takes a weighted average of the ensemble models trained by all clusters. FedDCT reduces memory requirements and allows low-end devices to participate in FL. We conduct extensive experiments on standardized datasets, including CIFAR-10 and CIFAR-100, and on two real-world medical datasets, HAM10000 and VAIPE. Experimental results show that FedDCT outperforms a set of current state-of-the-art FL methods and exhibits interesting convergence behavior. Furthermore, compared to existing approaches, FedDCT achieves higher accuracy and substantially reduces the number of communication rounds (with 4-8 times lower memory requirements) needed to reach the desired accuracy on the test set, without incurring any extra training cost on the server side.
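The aggregation stage described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: each cluster submits the parameters of its trained ensemble as a flat vector, and the server takes a weighted average across clusters, here weighted by each cluster's data size (a common FedAvg-style choice assumed for this sketch).

```python
# Hypothetical sketch of FedDCT's server-side aggregation step.
# Parameters are represented as flat lists of floats for simplicity;
# in practice these would be tensors covering the whole ensemble.

def aggregate(cluster_params, cluster_sizes):
    """Weighted average of per-cluster parameter vectors.

    cluster_params: one flat parameter vector per cluster.
    cluster_sizes:  number of training samples held by each cluster,
                    used as aggregation weights (an assumption here).
    """
    total = sum(cluster_sizes)
    dim = len(cluster_params[0])
    global_params = [0.0] * dim
    for params, size in zip(cluster_params, cluster_sizes):
        weight = size / total
        for i, p in enumerate(params):
            global_params[i] += weight * p
    return global_params


# Two clusters; the second holds three times as much data, so its
# parameters dominate the weighted average.
result = aggregate([[1.0, 2.0], [5.0, 6.0]], [100, 300])
print(result)  # [4.0, 5.0]
```

After this step, the averaged parameters would be broadcast back to the clusters for the next round of co-training.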


Related research

Group Knowledge Transfer: Collaborative Training of Large CNNs on the Edge (07/28/2020)
Scaling up the convolutional neural network (CNN) size (e.g., width, dep...

Heterogeneous Ensemble Knowledge Transfer for Training Large Models in Federated Learning (04/27/2022)
Federated learning (FL) enables edge-devices to collaboratively learn a ...

EdgeConvEns: Convolutional Ensemble Learning for Edge Intelligence (07/25/2023)
Deep edge intelligence aims to deploy deep learning models that demand c...

FedML Parrot: A Scalable Federated Learning System via Heterogeneity-aware Scheduling on Sequential and Hierarchical Training (03/03/2023)
Federated Learning (FL) enables collaborations among clients for train m...

FADE: Enabling Large-Scale Federated Adversarial Training on Resource-Constrained Edge Devices (09/08/2022)
Adversarial Training (AT) has been proven to be an effective method of i...

PFSL: Personalized Fair Split Learning with Data Label Privacy for thin clients (03/19/2023)
The traditional framework of federated learning (FL) requires each clien...

Federated Cross Learning for Medical Image Segmentation (04/05/2022)
Federated learning (FL) can collaboratively train deep learning models u...
