FedTiny: Pruned Federated Learning Towards Specialized Tiny Models

12/05/2022
by Hong Huang, et al.

Neural network pruning is a well-established compression technique for enabling deep learning models on resource-constrained devices. The pruned model is usually specialized to a specific hardware platform and training task (defined as a deployment scenario). However, existing pruning approaches rely heavily on training data to trade off model size, efficiency, and accuracy, which makes them ineffective for federated learning (FL) over distributed and confidential datasets. Moreover, the memory- and compute-intensive pruning process of most existing approaches cannot be handled by FL devices with limited resources. In this paper, we develop FedTiny, a novel distributed pruning framework for FL, to obtain specialized tiny models for memory- and computation-constrained participating devices with confidential local data. To alleviate biased pruning caused by unseen heterogeneous data across devices, FedTiny introduces an adaptive batch normalization (BN) selection module that adaptively obtains an initially pruned model fitting the deployment scenario. In addition, to refine the initial pruning, FedTiny develops a lightweight progressive pruning module for finer local pruning under tight memory and computational budgets, where the pruning policy for each layer is determined gradually rather than by evaluating the overall deep model structure. Extensive experimental results demonstrate the effectiveness of FedTiny, which outperforms state-of-the-art baseline approaches, especially when compressing deep models into extremely sparse tiny models.
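To make the progressive pruning idea concrete, below is a minimal, hypothetical sketch (not the FedTiny implementation) of layer-by-layer pruning in PyTorch: each layer's sparsity is decided in turn from a small local calibration batch, so a device never scores or searches over the whole network at once. The candidate ratios, loss tolerance, magnitude criterion, and calibration batch are all illustrative assumptions; the paper's actual criterion may differ, and its BN selection module (recalibrating BN statistics on local data to pick an initial pruned model) is not shown here.

```python
# Hypothetical sketch of progressive layer-wise pruning (not the authors' code).
# Each prunable layer's sparsity is frozen before moving to the next layer,
# bounding memory to one layer's weight scores at a time.

import torch
import torch.nn as nn

@torch.no_grad()
def magnitude_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """0/1 mask keeping the (1 - sparsity) largest-magnitude weights."""
    k = int(weight.numel() * sparsity)          # number of weights to drop
    if k == 0:
        return torch.ones_like(weight)
    thresh = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > thresh).float()

@torch.no_grad()
def progressive_prune(model, calib_x, calib_y, loss_fn,
                      ratios=(0.3, 0.5, 0.7, 0.9), tol=0.05):
    """Greedy layer-by-layer pruning: for each Conv/Linear layer, keep the
    highest candidate sparsity whose calibration loss stays within `tol`
    of the current loss, then freeze that choice and continue."""
    base = loss_fn(model(calib_x), calib_y).item()
    for module in model.modules():
        if not isinstance(module, (nn.Conv2d, nn.Linear)):
            continue
        original = module.weight.data.clone()
        for s in sorted(ratios, reverse=True):   # try most aggressive first
            module.weight.data = original * magnitude_mask(original, s)
            loss = loss_fn(model(calib_x), calib_y).item()
            if loss <= base * (1.0 + tol):
                base = loss                      # accept and freeze this layer
                break
        else:
            module.weight.data = original        # no ratio fit; leave dense
    return model
```

In an FL round, one plausible use is for each device to run this on a small batch of its own confidential data and report only the resulting per-layer sparsities (or masks) for server-side aggregation, keeping both raw data and the full pruning search off the server.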


Related research

06/13/2021 · Adaptive Dynamic Pruning for Non-IID Federated Learning
Federated Learning (FL) has emerged as a new paradigm of training machin...

03/11/2023 · FedLP: Layer-wise Pruning Mechanism for Communication-Computation Efficient Federated Learning
Federated learning (FL) has prevailed as an efficient and privacy-preser...

09/30/2021 · Federated Dropout – A Simple Approach for Enabling Federated Learning on Resource Constrained Devices
Federated learning (FL) is a popular framework for training an AI model ...

09/13/2023 · FedDIP: Federated Learning with Extreme Dynamic Pruning and Incremental Regularization
Federated Learning (FL) has been successfully adopted for distributed tr...

01/27/2022 · On the Convergence of Heterogeneous Federated Learning with Arbitrary Adaptive Online Model Pruning
One of the biggest challenges in Federated Learning (FL) is that client ...

04/28/2020 · Streamlining Tensor and Network Pruning in PyTorch
In order to contrast the explosion in size of state-of-the-art machine l...

05/26/2023 · Aggregating Capacity in FL through Successive Layer Training for Computationally-Constrained Devices
Federated learning (FL) is usually performed on resource-constrained edg...
