
Federated Learning with Sparsified Model Perturbation: Improving Accuracy under Client-Level Differential Privacy

02/15/2022
by   Rui Hu, et al.

Federated learning (FL), which enables distributed clients to collaboratively learn a shared statistical model while keeping their training data local, has received great attention recently and can improve privacy and communication efficiency compared with the traditional centralized machine learning paradigm. However, sensitive information about the training data can still be inferred from the model updates shared in FL. Differential privacy (DP) is the state-of-the-art technique for defending against such attacks. The key challenge in achieving DP in FL lies in the adverse impact of DP noise on model accuracy, particularly for deep learning models with large numbers of parameters. This paper develops a novel differentially private FL scheme named Fed-SMP that provides a client-level DP guarantee while maintaining high model accuracy. To mitigate the impact of privacy protection on accuracy, Fed-SMP leverages a new technique called Sparsified Model Perturbation (SMP), in which local models are first sparsified and then perturbed with additive Gaussian noise. Two sparsification strategies are considered in Fed-SMP: random sparsification and top-k sparsification. We also apply Rényi differential privacy to provide a tight analysis of the end-to-end DP guarantee of Fed-SMP, and we prove the convergence of Fed-SMP for general loss functions. Extensive experiments on real-world datasets demonstrate the effectiveness of Fed-SMP in substantially improving model accuracy under the same DP guarantee while simultaneously saving communication cost.
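The SMP step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name and parameters are hypothetical, and in Fed-SMP the update would also be clipped and the noise scale calibrated to the clipping norm and privacy budget, which this sketch omits.

```python
import numpy as np

def sparsified_model_perturbation(update, k, noise_std, mode="topk", rng=None):
    """Sparsify a local model update to k coordinates, then perturb the
    retained coordinates with additive Gaussian noise (hypothetical sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    flat = update.ravel()
    if mode == "topk":
        # Keep the k coordinates with the largest magnitudes.
        keep = np.argpartition(np.abs(flat), -k)[-k:]
    else:
        # Random sparsification: keep k coordinates chosen uniformly at random.
        keep = rng.choice(flat.size, size=k, replace=False)
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep] + rng.normal(0.0, noise_std, size=k)
    return sparse.reshape(update.shape)

# Example: keep only the two largest-magnitude entries of a toy update.
update = np.array([0.9, -0.1, 0.05, -1.2, 0.3])
noisy = sparsified_model_perturbation(update, k=2, noise_std=0.1)
```

Because only k coordinates survive, each client uploads a sparse vector, which is the source of the communication savings claimed in the abstract, while the Gaussian noise on the retained coordinates provides the DP protection.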

Related research

06/25/2021 · Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy
Providing privacy protection has been one of the primary motivations of ...

02/02/2023 · Fed-GLOSS-DP: Federated, Global Learning using Synthetic Sets with Record Level Differential Privacy
This work proposes Fed-GLOSS-DP, a novel approach to privacy-preserving ...

05/01/2020 · Exploring Private Federated Learning with Laplacian Smoothing
Federated learning aims to protect data privacy by collaboratively learn...

05/11/2021 · DP-SIGNSGD: When Efficiency Meets Privacy and Robustness
Federated learning (FL) has emerged as a promising collaboration paradig...

07/05/2021 · Optimizing the Numbers of Queries and Replies in Federated Learning with Differential Privacy
Federated learning (FL) empowers distributed clients to collaboratively ...

11/13/2022 · Differentially Private Vertical Federated Learning
A successful machine learning (ML) algorithm often relies on a large amo...

03/12/2021 · Private Cross-Silo Federated Learning for Extracting Vaccine Adverse Event Mentions
Federated Learning (FL) is quickly becoming a go-to distributed training ...