Walking to Hide: Privacy Amplification via Random Message Exchanges in Network

06/20/2022
by Hao Wu, et al.

The *shuffle model* is a powerful tool for amplifying the privacy guarantees of the *local model* of differential privacy. In contrast to the fully decentralized way privacy is guaranteed in the local model, however, the shuffle model requires a central, trusted shuffler. To avoid this central shuffler, recent work of Liew et al. (2022) proposes shuffling locally randomized data in a decentralized manner, via random walks on the communication network formed by the clients. The privacy amplification bound it provides depends on the topology of the underlying communication network, even for infinitely long random walks, and does not match the state-of-the-art privacy amplification bound for the shuffle model (Feldman et al., 2021). In this work, we prove that the output of n clients' data, each perturbed by an ϵ_0-local randomizer and shuffled by random walks with a logarithmic number of steps, is (O((1 - e^{-ϵ_0}) √((e^{ϵ_0}/n) ln(1/δ))), O(δ))-differentially private. Importantly, this bound is independent of the topology of the communication network, and it asymptotically closes the gap between the privacy amplification bounds for the network shuffle model (Liew et al., 2022) and the shuffle model (Feldman et al., 2021). Our proof is based on a reduction to the shuffle model and an analysis of the distribution of random walks of finite length. Building on this, we further show that if each client participates independently with probability p, the privacy guarantee of the network shuffle model improves to (O((1 - e^{-ϵ_0}) √(p (e^{ϵ_0}/n) ln(1/δ))), O(δ)). Importantly, this subsampling is also performed in a fully decentralized manner that requires no trusted central entity, and the resulting bound is stronger than related bounds in prior work.
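For readability, the two guarantees claimed above can be displayed side by side (this is only a typeset restatement of the bounds in the abstract, not a new result):

```latex
\[
\Big(\, O\big((1 - e^{-\epsilon_0})\sqrt{\tfrac{e^{\epsilon_0}}{n}\ln\tfrac{1}{\delta}}\big),\ O(\delta) \Big)
\quad\text{(network shuffle, $O(\log n)$-step random walks)}
\]
\[
\Big(\, O\big((1 - e^{-\epsilon_0})\sqrt{p\,\tfrac{e^{\epsilon_0}}{n}\ln\tfrac{1}{\delta}}\big),\ O(\delta) \Big)
\quad\text{(with decentralized subsampling at rate $p$)}
\]
```

To make the protocol concrete, the sketch below simulates the network shuffle model described above. It is a minimal illustration, not the authors' implementation: binary randomized response stands in for an arbitrary ϵ_0-local randomizer, and the communication graph (a cycle with random chords), the walk-length constant `walk_const`, and the sampling rate `p` are assumptions chosen for the demo. The last line evaluates the abstract's bound with the O(·) constant suppressed, so the printed ϵ is indicative only.

```python
# Illustrative sketch (assumptions noted above), not the paper's implementation:
# clients randomize locally, then each message performs an O(log n)-step random
# walk on the communication graph; the analyst sees only the delivered messages.
import math
import random

def randomized_response(bit: int, eps0: float) -> int:
    """eps0-LDP binary randomized response: keep the bit w.p. e^eps0 / (e^eps0 + 1)."""
    keep = math.exp(eps0) / (math.exp(eps0) + 1.0)
    return bit if random.random() < keep else 1 - bit

def random_walk(start: int, neighbors: dict, steps: int) -> int:
    """Forward a message along a uniform random walk for `steps` hops."""
    node = start
    for _ in range(steps):
        node = random.choice(neighbors[node])
    return node

def network_shuffle(bits, neighbors, eps0, p=1.0, walk_const=4):
    """
    Decentralized 'network shuffle': each sampled client perturbs its bit with a
    local randomizer, then its message walks ceil(walk_const * log n) steps; the
    output is the unordered collection of messages held at the end of the walks.
    """
    n = len(bits)
    steps = math.ceil(walk_const * math.log(n))
    held = [[] for _ in range(n)]
    for client, bit in enumerate(bits):
        if random.random() > p:          # decentralized subsampling at rate p
            continue
        msg = randomized_response(bit, eps0)
        held[random_walk(client, neighbors, steps)].append(msg)
    return [m for bucket in held for m in bucket]

if __name__ == "__main__":
    n, eps0, delta, p = 10_000, 1.0, 1e-6, 0.5
    # Toy communication graph: a cycle with one random chord per node (assumption).
    neighbors = {i: [(i - 1) % n, (i + 1) % n, random.randrange(n)] for i in range(n)}
    data = [random.randint(0, 1) for _ in range(n)]
    out = network_shuffle(data, neighbors, eps0, p)
    # Amplified central epsilon from the abstract's bound, O(.) constant suppressed.
    eps_central = (1 - math.exp(-eps0)) * math.sqrt(
        p * math.exp(eps0) * math.log(1 / delta) / n
    )
    print(f"collected {len(out)} messages; amplified eps ~ {eps_central:.3f} (up to constants)")
```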


