Amplitude-Varying Perturbation for Balancing Privacy and Utility in Federated Learning

03/07/2023
by   Xin Yuan, et al.

While preserving the privacy of federated learning (FL), differential privacy (DP) inevitably degrades the utility (i.e., accuracy) of FL due to model perturbations caused by DP noise added to model updates. Existing studies have considered exclusively noise with a persistent root-mean-square amplitude, overlooking the opportunity to adjust the amplitude to alleviate the adverse effects of the noise. This paper presents a new DP perturbation mechanism with a time-varying noise amplitude that protects the privacy of FL while retaining the ability to adjust the learning performance. Specifically, we propose a geometric series form for the noise amplitude and reveal analytically how the series depends on the number of global aggregations and the (ϵ,δ)-DP requirement. We derive an online refinement of the series to prevent FL from premature convergence caused by excessive perturbation noise. We also develop an upper bound on the loss function of a multi-layer perceptron (MLP) trained by FL under the new DP mechanism. Accordingly, the optimal number of global aggregations is obtained, balancing learning and privacy. Extensive experiments are conducted using MLP, support vector machine, and convolutional neural network models on four public datasets. The contribution of the new DP mechanism to the convergence and accuracy of privacy-preserving FL is corroborated in comparison with the state-of-the-art Gaussian noise mechanism with a persistent noise amplitude.
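The mechanism described above can be sketched in a few lines: clip each model update and add Gaussian noise whose standard deviation follows a geometric series across global aggregation rounds. The initial amplitude `sigma0` and decay `ratio` are placeholders here; in the paper they are derived analytically from the (ϵ,δ)-DP budget and the number of global aggregations, and refined online, so this is only an illustrative sketch, not the authors' exact construction.

```python
import numpy as np

def geometric_noise_schedule(sigma0, ratio, num_rounds):
    """Geometric series of per-round noise amplitudes.

    sigma0 and ratio are stand-ins: the paper derives them from the
    (epsilon, delta)-DP requirement and the number of global aggregations.
    """
    return [sigma0 * ratio ** t for t in range(num_rounds)]

def perturb_update(update, sigma, clip_norm=1.0, rng=None):
    """Clip a model update in L2 norm, then add Gaussian noise of
    amplitude sigma (standard Gaussian mechanism; here the amplitude
    varies per round instead of staying constant)."""
    rng = np.random.default_rng() if rng is None else rng
    norm = max(np.linalg.norm(update), 1e-12)  # avoid division by zero
    clipped = update * min(1.0, clip_norm / norm)
    return clipped + rng.normal(0.0, sigma, size=update.shape)

# Example: noise amplitude decays geometrically over 5 rounds
schedule = geometric_noise_schedule(sigma0=1.0, ratio=0.8, num_rounds=5)
noisy_updates = [perturb_update(np.ones(4), s) for s in schedule]
```

A decaying schedule (ratio < 1) front-loads the perturbation when the model is far from convergence and reduces it in later rounds, which is one intuition for why a time-varying amplitude can improve utility at a fixed privacy budget.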

Related research

02/15/2022  Federated Learning with Sparsified Model Perturbation: Improving Accuracy under Client-Level Differential Privacy
Federated learning (FL) that enables distributed clients to collaborativ...

02/29/2020  Performance Analysis and Optimization in Privacy-Preserving Federated Learning
As a means of decentralized machine learning, federated learning (FL) ha...

01/11/2021  On the Practicality of Differential Privacy in Federated Learning by Tuning Iteration Times
In spite that Federated Learning (FL) is well known for its privacy prot...

10/22/2021  PRECAD: Privacy-Preserving and Robust Federated Learning via Crypto-Aided Differential Privacy
Federated Learning (FL) allows multiple participating clients to train m...

05/01/2023  Towards the Flatter Landscape and Better Generalization in Federated Learning under Client-level Differential Privacy
To defend the inference attacks and mitigate the sensitive information l...

11/01/2019  Performance Analysis on Federated Learning with Differential Privacy
In this paper, to effectively prevent the differential attack, we propos...

01/20/2023  Trade Privacy for Utility: A Learning-Based Privacy Pricing Game in Federated Learning
To prevent implicit privacy disclosure in sharing gradients among data o...
