Efficient Federated Learning with Enhanced Privacy via Lottery Ticket Pruning in Edge Computing

by Yifan Shi, et al.

Federated learning (FL) is a collaborative learning paradigm for decentralized private data on mobile terminals (MTs). However, it suffers from communication overhead, limited MT resources, and privacy risks. Existing privacy-preserving FL methods usually adopt instance-level differential privacy (DP), which provides a rigorous privacy guarantee but faces several bottlenecks: severe performance degradation, high transmission overhead, and the resource constraints of edge devices such as MTs. To overcome these drawbacks, we propose Fed-LTP, an efficient, privacy-enhanced FL framework based on the Lottery Ticket Hypothesis (LTH) and zero-concentrated DP (zCDP). It generates a pruned global model on the server side and conducts sparse-to-sparse training from scratch with zCDP on the client side. On the server side, two pruning schemes are proposed: (i) weight-based pruning (LTH) determines the structure of the pruned global model; (ii) iterative pruning further shrinks the pruned model's parameter count. Meanwhile, the performance of Fed-LTP is boosted via model validation based on the Laplace mechanism. On the client side, sparse-to-sparse training addresses the resource-constraint issue, and a tighter privacy analysis reduces the privacy budget. We evaluate Fed-LTP on several real-world datasets in both independent and identically distributed (IID) and non-IID settings. The results confirm the superiority of Fed-LTP over state-of-the-art (SOTA) methods in communication, computation, and memory efficiency, while achieving a better utility-privacy trade-off.
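The two core ingredients named above, magnitude-based (lottery-ticket-style) pruning of the global model and differentially private sparse client updates, can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the function names, the use of the Gaussian mechanism in place of the paper's zCDP accounting, and all parameter choices are illustrative assumptions.

```python
import numpy as np

def magnitude_prune_mask(weights, sparsity):
    """Weight-based pruning (lottery-ticket style): keep the largest-magnitude
    weights and prune the rest. `sparsity` is the fraction of weights removed."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return np.ones(weights.shape, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return np.abs(weights) > threshold

def dp_sparse_update(update, mask, clip_norm, noise_multiplier, rng):
    """Illustrative client-side step: restrict the update to the pruned
    structure, clip its L2 norm, and add Gaussian noise only on the
    surviving coordinates (a stand-in for the paper's zCDP mechanism)."""
    sparse = update * mask
    norm = np.linalg.norm(sparse)
    sparse = sparse * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=sparse.shape)
    return sparse + noise * mask

# Example: prune 80% of a random weight matrix, then privatize a sparse update.
rng = np.random.default_rng(0)
weights = rng.normal(size=(10, 10))
mask = magnitude_prune_mask(weights, sparsity=0.8)
noisy = dp_sparse_update(rng.normal(size=weights.shape), mask,
                         clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```

Because noise is added only on unpruned coordinates, the transmitted update stays sparse, which is one intuition for why pruning can reduce both communication cost and the effective noise injected for privacy.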


