Scalable and Differentially Private Distributed Aggregation in the Shuffled Model

06/19/2019
by Badih Ghazi, et al.

Federated learning promises to make machine learning feasible on distributed, private datasets by implementing gradient descent using secure aggregation methods. The idea is to compute a global weight update without revealing the contributions of individual users. Current practical protocols for secure aggregation work in an "honest but curious" setting, where an adversary observing all communication to and from the server cannot learn any private information, provided the server itself follows the protocol. A more scalable and robust primitive for privacy-preserving protocols is the shuffling of user data, which hides the origin of each data item. Highly scalable and secure protocols for shuffling, so-called mixnets, were proposed as a primitive for privacy-preserving analytics in the Encode-Shuffle-Analyze framework by Bittau et al. Recent papers by Cheu et al. and Balle et al. have formalized the "shuffled model" and suggested protocols for secure aggregation that achieve differential privacy guarantees. Their protocols come at a cost, though: either the expected aggregation error or the communication per user scales polynomially, as n^Ω(1), in the number of users n. In this paper we propose a simple and more efficient protocol for aggregation in the shuffled model, where both communication and error increase only polylogarithmically in n. Our new technique is a conceptual "invisibility cloak" that makes users' data almost indistinguishable from random noise while introducing zero distortion on the sum.
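To make the zero-distortion idea concrete, here is a minimal Python sketch in the spirit of the abstract, not the paper's exact protocol: each user replaces their input with a few messages that individually look like uniform random noise, yet sum exactly to the input modulo q, so the shuffled multiset of all messages reveals the total and hides which messages came from which user. The modulus Q and the number of shares K below are illustrative assumptions; the paper's actual protocol fixes these parameters as a function of n and the privacy budget.

```python
# Illustrative sketch of a split-and-mix style encoding (assumed parameters,
# not the paper's exact protocol). Each user splits x into K additive shares
# modulo Q; each share alone is uniformly random ("invisible"), but the
# shares sum to x, so the aggregate is recovered with zero distortion.
import random

Q = 2**32  # modulus; assumption, chosen large enough that the true sum fits
K = 3      # shares per user; assumption, tuned by the protocol in practice

def encode(x: int, k: int = K, q: int = Q) -> list[int]:
    """Split x into k shares, uniform mod q, summing to x mod q."""
    shares = [random.randrange(q) for _ in range(k - 1)]
    shares.append((x - sum(shares)) % q)
    return shares

def shuffle(all_shares: list[int]) -> list[int]:
    """Model the trusted shuffler: a uniformly random permutation."""
    mixed = list(all_shares)
    random.shuffle(mixed)
    return mixed

def analyze(mixed: list[int], q: int = Q) -> int:
    """The analyzer only needs the sum, which shuffling preserves exactly."""
    return sum(mixed) % q

# Example: 1000 users, each holding a small integer.
inputs = [random.randrange(100) for _ in range(1000)]
messages = [s for x in inputs for s in encode(x)]
assert analyze(shuffle(messages)) == sum(inputs) % Q
```

Because the shuffler strips sender identities, an observer sees only an unordered collection of near-uniform values; linking shares back to a particular user's input becomes statistically hard, which is what drives the differential privacy analysis in the paper.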


Related research:

08/24/2022 · On Privacy Preserving Data Aggregation Protocols using BGN cryptosystem
The notion of aggregator oblivious (AO) security for privacy preserving ...

11/14/2016 · Practical Secure Aggregation for Federated Learning on User-Held Data
Secure Aggregation protocols allow a collection of mutually distrust par...

10/13/2021 · Infinitely Divisible Noise in the Low Privacy Regime
Federated learning, in which training data is distributed among users an...

07/27/2023 · Samplable Anonymous Aggregation for Private Federated Data Analysis
We revisit the problem of designing scalable protocols for private stati...

05/18/2023 · Amplification by Shuffling without Shuffling
Motivated by recent developments in the shuffle model of differential pr...

02/20/2023 · OLYMPIA: A Simulation Framework for Evaluating the Concrete Scalability of Secure Aggregation Protocols
Recent secure aggregation protocols enable privacy-preserving federated ...

07/11/2022 · MPC for Tech Giants (GMPC): Enabling Gulliver and the Lilliputians to Cooperate Amicably
In this work, we introduce the Gulliver multi-party computation model (G...
