FLAME: Differentially Private Federated Learning in the Shuffle Model

09/17/2020
by Ruixuan Liu, et al.

Differentially private federated learning has been studied intensively. Existing work is mainly based on either the curator model or the local model of differential privacy, each of which has pros and cons. The curator model allows greater accuracy but requires a trusted analyzer. In the local model, where users randomize their data before sending it to the analyzer, no trusted analyzer is required, but accuracy is limited. In this work, by leveraging the privacy amplification effect of the recently proposed shuffle model of differential privacy, we achieve the best of both worlds: accuracy comparable to the curator model and strong privacy without relying on any trusted party. We first propose an FL framework in the shuffle model and a simple protocol (SS-Simple) extended from existing work. We find that SS-Simple provides only a weak privacy amplification effect in FL, because the dimension of the model parameters is large. To address this challenge, we propose an enhanced protocol (SS-Double) that increases the privacy amplification effect via subsampling. Furthermore, to boost utility when the model size exceeds the user population, we propose an advanced protocol (SS-Topk) based on gradient sparsification. We also provide theoretical analysis and numerical evaluation of the privacy amplification of the proposed protocols. Experiments on real-world datasets validate that SS-Topk improves testing accuracy by 60.7% over local-model-based FL. Notably, SS-Topk even improves accuracy by 33.94% over curator-model-based FL while requiring no trusted party. Compared with non-private FL, SS-Topk loses only 1.48% accuracy under (4.696, 10^-5)-DP.
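To make the pipeline concrete, below is a minimal Python sketch of one shuffle-model round combining top-k gradient sparsification with local randomization, in the spirit of SS-Topk. All names and parameters here (topk_sparsify, local_randomize, shuffle_round, clip, eps, k) are illustrative assumptions, and the Laplace randomizer is a generic stand-in: the paper's actual local randomizer, subsampling step, and amplification analysis differ in detail.

```python
# Hypothetical sketch of a shuffle-model FL round with top-k sparsification.
# Not the paper's implementation; names, noise mechanism, and parameters are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def topk_sparsify(grad: np.ndarray, k: int) -> np.ndarray:
    """Zero out all but the k largest-magnitude coordinates (SS-Topk idea)."""
    keep = np.argpartition(np.abs(grad), -k)[-k:]
    out = np.zeros_like(grad)
    out[keep] = grad[keep]
    return out

def local_randomize(grad: np.ndarray, clip: float, eps: float) -> np.ndarray:
    """Clip to L1 norm <= clip, then add Laplace noise so each report is
    eps-LDP (L1 sensitivity after clipping is 2 * clip)."""
    scaled = grad * min(1.0, clip / max(np.linalg.norm(grad, 1), 1e-12))
    return scaled + rng.laplace(scale=2.0 * clip / eps, size=grad.shape)

def shuffle_round(user_grads, k: int, clip: float, eps: float) -> np.ndarray:
    """One round: each user sparsifies and randomizes locally; the untrusted
    shuffler permutes the reports, severing the user-report link and
    amplifying the local eps before the analyzer aggregates."""
    reports = [local_randomize(topk_sparsify(g, k), clip, eps)
               for g in user_grads]
    rng.shuffle(reports)              # shuffler: permutation only, no raw data
    return np.mean(reports, axis=0)   # analyzer: average the anonymous reports

# Toy usage: 100 users, model dimension 1000, keep the top 10 coordinates.
grads = [rng.normal(size=1000) for _ in range(100)]
update = shuffle_round(grads, k=10, clip=1.0, eps=1.0)
```

Because the shuffler only permutes reports, it needs no access to user identities or raw data; the privacy amplification comes from the analyzer seeing an anonymous multiset of randomized, sparsified updates rather than attributable ones.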
