FedDIP: Federated Learning with Extreme Dynamic Pruning and Incremental Regularization

09/13/2023
by   Qianyu Long, et al.
Federated Learning (FL) has been successfully adopted for distributed training and inference of large-scale Deep Neural Networks (DNNs). However, DNNs are characterized by an extremely large number of parameters, which yields significant challenges in exchanging these parameters among distributed nodes and in managing memory. Although recent DNN compression methods (e.g., sparsification, pruning) tackle such challenges, they do not holistically consider an adaptively controlled reduction of parameter exchange while maintaining high accuracy levels. We therefore contribute a novel FL framework (coined FedDIP) that combines (i) dynamic model pruning with error feedback, which eliminates redundant information exchange and yields significant performance improvement, with (ii) incremental regularization, which can achieve extreme model sparsity. We provide a convergence analysis of FedDIP and report on a comprehensive performance and comparative assessment against state-of-the-art methods using benchmark data sets and DNN models. Our results showcase that FedDIP not only controls model sparsity but also efficiently achieves similar or better performance than other model pruning methods that adopt incremental regularization during distributed model training. The code is available at: https://github.com/EricLoong/feddip.
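To illustrate the first ingredient, here is a minimal sketch of magnitude-based pruning with error feedback, assuming a NumPy setting. This is an illustrative reconstruction, not FedDIP's exact algorithm: the function name `prune_with_error_feedback` and the per-tensor sparsity schedule are assumptions. The key idea is that the mass dropped by pruning is accumulated and re-injected before the next pruning step, so information is not permanently lost.

```python
import numpy as np

def prune_with_error_feedback(weights, error, sparsity):
    """Magnitude pruning with error feedback (illustrative sketch).

    weights  : parameter tensor to sparsify
    error    : accumulated error from previous pruning rounds
    sparsity : fraction of entries to zero out (0.0 .. 1.0)
    Returns (pruned_weights, new_error).
    """
    # Re-inject the error accumulated from previously pruned weights.
    corrected = weights + error
    # Number of entries to prune; ties at the threshold may prune slightly more.
    k = int(np.ceil(sparsity * corrected.size))
    flat = np.sort(np.abs(corrected).ravel())
    threshold = flat[k - 1] if k > 0 else -np.inf
    # Keep only entries whose magnitude exceeds the threshold.
    mask = np.abs(corrected) > threshold
    pruned = corrected * mask
    # Carry the dropped mass forward as feedback for the next round.
    new_error = corrected - pruned
    return pruned, new_error
```

In an FL round, each client would apply this before transmitting its update, sending only the surviving entries; the incremental-regularization component would additionally grow a penalty term over rounds to push weights toward zero so that the target sparsity can be reached without a sharp accuracy drop.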


Related research

- 06/13/2021: Adaptive Dynamic Pruning for Non-IID Federated Learning
- 03/08/2023: Model-Agnostic Federated Learning
- 12/05/2022: FedTiny: Pruned Federated Learning Towards Specialized Tiny Models
- 12/18/2021: Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better
- 07/09/2021: FedAdapt: Adaptive Offloading for IoT Devices in Federated Learning
- 06/15/2022: Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness
- 09/05/2020: FlipOut: Uncovering Redundant Weights via Sign Flipping
