Analyzing the Shuffle Model through the Lens of Quantitative Information Flow

05/22/2023
by   Mireya Jurado, et al.

Local differential privacy (LDP) is a variant of differential privacy (DP) that avoids the need for a trusted central curator, at the cost of a worse trade-off between privacy and utility. The shuffle model provides greater anonymity to users by randomly permuting their messages, so that the link between users and their reported values is lost to the data collector. By combining an LDP mechanism with a shuffler, privacy can be improved at no cost to the accuracy of operations insensitive to permutations, thereby improving utility in many tasks. However, the privacy implications of shuffling are not always immediately evident, and derivations of privacy bounds are made on a case-by-case basis. In this paper, we analyze the combination of LDP with shuffling in the rigorous framework of quantitative information flow (QIF), and reason about the resulting resilience to inference attacks. QIF naturally captures randomization mechanisms as information-theoretic channels, thus allowing for precise modeling of a variety of inference attacks and for measuring the leakage of private information under these attacks. We exploit symmetries of the particular combination of k-RR mechanisms with the shuffle model to achieve closed formulas that express leakage exactly. In particular, we provide formulas that show how shuffling improves protection against leaks in the local model, and study how leakage behaves for various values of the privacy parameter of the LDP mechanism. In contrast to the strong adversary from differential privacy, we focus on an uninformed adversary, who does not know the value of any individual in the dataset. This adversary is often more realistic as a consumer of statistical datasets, and we show that in some situations mechanisms that are equivalent w.r.t. the strong adversary can provide different privacy guarantees under the uninformed one.
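To make the setup concrete, the following is a minimal sketch of the pipeline the abstract describes: each user perturbs their value with a k-ary randomized response (k-RR) mechanism, and a shuffler then applies a uniformly random permutation before release. The function names are illustrative, not taken from the paper, and this sketch omits the QIF leakage analysis itself.

```python
import math
import random

def k_rr(value, k, epsilon):
    """k-ary randomized response: report the true value with probability
    e^eps / (e^eps + k - 1), otherwise a uniformly chosen *other* value.
    This satisfies eps-local differential privacy."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_true:
        return value
    # Pick uniformly among the k - 1 values different from `value`.
    other = random.randrange(k - 1)
    return other if other < value else other + 1

def shuffle_reports(reports):
    """The shuffler: a uniformly random permutation of the reports,
    severing the link between users and their reported values."""
    out = list(reports)
    random.shuffle(out)
    return out

# Example: n users with secret values in {0, ..., k-1}.
k, epsilon = 4, 1.0
data = [random.randrange(k) for _ in range(10)]
released = shuffle_reports(k_rr(v, k, epsilon) for v in data)
```

Note that shuffling preserves the multiset of reports, so any permutation-insensitive statistic (e.g. a histogram) computed on `released` is exactly the one computed on the unshuffled reports; the utility cost of the shuffler is zero for such queries, which is the point the abstract makes.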


research
01/28/2022

Bounding Training Data Reconstruction in Private (Deep) Learning

Differential privacy is widely accepted as the de facto method for preve...
research
07/18/2022

Protecting Global Properties of Datasets with Distribution Privacy Mechanisms

Alongside the rapid development of data collection and analysis techniqu...
research
04/14/2023

Pool Inference Attacks on Local Differential Privacy: Quantifying the Privacy Guarantees of Apple's Count Mean Sketch in Practice

Behavioral data generated by users' devices, ranging from emoji use to p...
research
06/11/2021

A Shuffling Framework for Local Differential Privacy

LDP deployments are vulnerable to inference attacks as an adversary can ...
research
10/24/2022

Explaining epsilon in differential privacy through the lens of information theory

The study of leakage measures for privacy has been a subject of intensiv...
research
10/16/2020

Toward Evaluating Re-identification Risks in the Local Privacy Model

LDP (Local Differential Privacy) has recently attracted much attention a...
research
03/01/2023

An Improved Christofides Mechanism for Local Differential Privacy Framework

The development of Internet technology enables an analysis on the whole ...
