Complement Sparsification: Low-Overhead Model Pruning for Federated Learning

03/10/2023
by Xiaopeng Jiang, et al.

Federated Learning (FL) is a privacy-preserving distributed deep learning paradigm that involves substantial communication and computation effort, which is a problem for resource-constrained mobile and IoT devices. Model pruning/sparsification produces sparse models that could address this problem, but existing sparsification solutions cannot simultaneously satisfy the requirements for low bidirectional communication overhead between the server and the clients, low computation overhead at the clients, and good model accuracy, under the FL assumption that the server does not have access to raw data to fine-tune the pruned models. We propose Complement Sparsification (CS), a pruning mechanism that satisfies all these requirements through complementary and collaborative pruning done at the server and the clients. At each round, CS creates a global sparse model that contains the weights that capture the general data distribution of all clients, while the clients create local sparse models with the weights pruned from the global model to capture the local trends. For improved model performance, these two types of complementary sparse models are aggregated into a dense model in each round, which is subsequently pruned in an iterative process. CS requires little computation overhead on top of vanilla FL for both the server and the clients. We demonstrate that CS is an approximation of vanilla FL and, thus, its models perform well. We evaluate CS experimentally with two popular FL benchmark datasets. CS achieves a substantial reduction in bidirectional communication while achieving performance comparable with vanilla FL. In addition, CS outperforms baseline pruning mechanisms for FL.
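
The abstract describes one CS round as follows: the server distributes a pruned global sparse model, each client trains on it and uploads only the complementary weights (the positions the server pruned), and the server aggregates the global sparse model and the client complements into a dense model, which it prunes again for the next round. The NumPy sketch below illustrates this flow under stated assumptions; the helper names (prune_by_magnitude, client_update, server_round), the magnitude-based pruning criterion, and the plain averaging of client complements are illustrative choices, not the authors' exact procedure.

import numpy as np

# Illustrative sketch of one Complement Sparsification (CS) round.
# Assumptions (not from the paper): magnitude-based pruning, simple
# averaging of client complements, and a toy local-training callback.

def prune_by_magnitude(weights, sparsity):
    # Zero out the smallest-magnitude entries, keeping a (1 - sparsity) fraction.
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    return weights * (np.abs(weights) > threshold)

def client_update(global_sparse, local_train_fn):
    # Client: train from the global sparse model, then keep only the complement,
    # i.e., the positions the server pruned (zeros in global_sparse).
    complement_mask = (global_sparse == 0)
    updated = local_train_fn(global_sparse)      # local training on private data
    return updated * complement_mask             # sparse complement to upload

def server_round(global_sparse, client_complements, sparsity):
    # Server: merge the global sparse weights with the averaged client complements
    # into a dense model, then prune it again to start the next round.
    avg_complement = np.mean(client_complements, axis=0)
    dense = global_sparse + avg_complement       # complementary supports, no overlap
    return prune_by_magnitude(dense, sparsity)

# Toy usage: two clients whose "training" step just perturbs the weights.
rng = np.random.default_rng(0)
global_sparse = prune_by_magnitude(rng.normal(size=(8, 8)), sparsity=0.7)
fake_train = lambda w: w + 0.1 * rng.normal(size=w.shape)
complements = [client_update(global_sparse, fake_train) for _ in range(2)]
global_sparse = server_round(global_sparse, complements, sparsity=0.7)

Because the server sends a sparse global model and each client returns only the sparse complement, both directions of communication carry sparse tensors, which is the source of the bidirectional overhead reduction claimed in the abstract.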

