Randomized Quantization is All You Need for Differential Privacy in Federated Learning

06/20/2023
by Yeojoon Youn, et al.

Federated learning (FL) is a common and practical framework for learning a machine learning model in a decentralized fashion. A primary motivation behind this decentralized approach is data privacy, ensuring that the learner never sees the data of each local source itself. Federated learning then comes with two major challenges: one is handling potentially complex model updates between a server and a large number of data sources; the other is that decentralization may, in fact, be insufficient for privacy, since the local updates themselves can reveal information about the sources' data. To address these issues, we consider an approach to federated learning that combines quantization and differential privacy. Absent privacy, federated learning often relies on quantization to reduce communication complexity. We build upon this approach and develop a new algorithm called the Randomized Quantization Mechanism (RQM), which obtains privacy through two levels of randomization: we first randomly subsample the feasible quantization levels, and then employ a randomized rounding procedure over these subsampled discrete levels. We establish that our mechanism satisfies Rényi differential privacy (Rényi DP). We empirically study the performance of our algorithm and demonstrate that, compared to previous work, it yields improved privacy-accuracy trade-offs for DP federated learning. To the best of our knowledge, this is the first study that relies solely on randomized quantization, without explicit discrete noise, to achieve Rényi DP guarantees in federated learning systems.
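The two-stage mechanism described in the abstract lends itself to a short sketch. The Python snippet below is a minimal illustration of the general idea only (randomly subsampling a grid of quantization levels, then applying unbiased randomized rounding to the surviving levels); the function name, the default parameters, and the choice to always keep the two endpoint levels are simplifying assumptions made here, and the privacy analysis in the paper depends on details this sketch does not reproduce.

```python
import numpy as np


def randomized_quantize(x, lo=-1.0, hi=1.0, num_levels=17, keep_prob=0.5, rng=None):
    """Illustrative two-stage randomized quantizer (not the paper's exact RQM).

    Stage 1: randomly subsample the interior quantization levels.
    Stage 2: unbiased randomized rounding to the two surviving levels
             that bracket the (clipped) input.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = float(np.clip(x, lo, hi))

    # Full grid of evenly spaced quantization levels on [lo, hi].
    grid = np.linspace(lo, hi, num_levels)

    # Stage 1: keep each interior level independently with probability
    # keep_prob; the two endpoints are always kept here (a simplifying
    # assumption) so every input is bracketed by surviving levels.
    keep = rng.random(num_levels) < keep_prob
    keep[0] = keep[-1] = True
    levels = grid[keep]

    # Stage 2: randomized rounding between the two bracketing levels, with
    # probabilities chosen so the output is unbiased: E[output] = x.
    j = int(np.searchsorted(levels, x, side="right"))
    if j >= len(levels) or x == levels[j - 1]:
        return float(levels[j - 1])
    low, high = levels[j - 1], levels[j]
    p_high = (x - low) / (high - low)
    return float(high if rng.random() < p_high else low)


# Example: repeated calls give random quantized outputs whose mean is close to x.
rng = np.random.default_rng(0)
samples = [randomized_quantize(0.3, rng=rng) for _ in range(1000)]
print(np.mean(samples))  # approximately 0.3
```

The unbiased rounding step mirrors standard stochastic quantization used for communication reduction; what the abstract highlights as new is that the randomness of the subsampled levels and the rounding itself is what yields the Rényi DP guarantee, rather than explicitly added discrete noise.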


Related research

Adaptive Local Steps Federated Learning with Differential Privacy Driven by Convergence Analysis (08/21/2023)
Federated Learning (FL) is a distributed machine learning technique that...

Protection Against Reconstruction and Its Applications in Private Federated Learning (12/03/2018)
Federated learning has become an exciting direction for both research an...

LDP-FL: Practical Private Aggregation in Federated Learning with Local Differential Privacy (07/31/2020)
Train machine learning models on sensitive user data has raised increasi...

The Poisson binomial mechanism for secure and private federated learning (07/09/2022)
We introduce the Poisson Binomial mechanism (PBM), a discrete differenti...

Protecting Data from all Parties: Combining FHE and DP in Federated Learning (05/09/2022)
This paper tackles the problem of ensuring training data privacy in a fe...

Differentially Private Federated Learning for Cancer Prediction (01/08/2021)
Since 2014, the NIH funded iDASH (integrating Data for Analysis, Anonymi...

Privacy Amplification by Decentralization (12/09/2020)
Analyzing data owned by several parties while achieving a good trade-off...
