Aggregating Votes with Local Differential Privacy: Usefulness, Soundness vs. Indistinguishability

08/14/2019
by Shaowei Wang, et al.

Voting plays a central role in bringing crowd wisdom to collective decision making, while data privacy is a recurring ethical and legal concern when eliciting preferences from individuals. This work studies the problem of aggregating individuals' voting data under the local differential privacy setting, where the usefulness and soundness of the aggregated scores are of major concern. A naive approach is to add Laplace random noise to each vote; however, this makes the aggregated scores extremely fragile to new types of strategic behavior tailored to the local privacy setting: the data amplification attack and the view disguise attack. In a data amplification attack, the privacy-preserving procedure amplifies an attacker's manipulation power when the attacker contributes a fraudulent vote. In a view disguise attack, an attacker disguises malicious data as valid private views in order to manipulate the voting result. After theoretically quantifying the estimation error bound and the manipulation risk bound of the Laplace mechanism, we propose two mechanisms that improve usefulness and soundness simultaneously: the weighted sampling mechanism and the additive mechanism. The former interprets the score vector as probabilistic data. Compared to the Laplace mechanism for the Borda voting rule with d candidates, it reduces the mean squared error bound by half and lowers the maximum magnitude risk bound from +∞ to O(d^3/nϵ). The latter randomly outputs a subset of candidates according to their total scores. Its mean squared error bound improves from O(d^5/nϵ^2) to O(d^4/nϵ^2), and its maximum magnitude risk bound is reduced to O(d^2/nϵ). Experimental results confirm that the proposed approaches reduce the estimation error by 50% on average and are more robust to adversarial attacks.
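To make the baseline concrete, here is a minimal sketch (not the authors' code) of the naive Laplace mechanism for Borda score vectors under ϵ-local differential privacy: each voter holds a permutation of {0, ..., d-1} as scores, perturbs every coordinate with Laplace noise, and the server averages the noisy reports. The d^2/2 sensitivity estimate (worst-case L1 distance between a ranking and its reversal), the parameter values, and all function names are illustrative assumptions.

import numpy as np

def perturb_borda_vector(scores: np.ndarray, epsilon: float, rng: np.random.Generator) -> np.ndarray:
    """Locally perturb one voter's Borda score vector with Laplace noise.

    Assumption: the worst-case L1 distance between two Borda score vectors
    (a ranking and its reversal) is about d^2/2; that value is used as the
    L1 sensitivity, and each coordinate gets noise of scale sensitivity/epsilon.
    """
    d = scores.shape[0]
    sensitivity = d * d / 2.0
    return scores + rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=d)

def estimate_mean_scores(noisy_reports: np.ndarray) -> np.ndarray:
    """Server-side aggregation: average the noisy score vectors."""
    return noisy_reports.mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, epsilon = 10_000, 5, 1.0

    # Each voter holds a Borda score vector: a permutation of {0, ..., d-1}.
    votes = np.array([rng.permutation(d) for _ in range(n)], dtype=float)
    reports = np.array([perturb_borda_vector(v, epsilon, rng) for v in votes])

    print("true mean scores     :", votes.mean(axis=0))
    print("estimated mean scores:", estimate_mean_scores(reports))

    # The soundness issue the paper targets: noisy reports are unbounded, so a
    # single malicious participant could submit an arbitrarily large vector that
    # the server cannot distinguish from an honestly perturbed vote.

The unbounded range of the noisy reports is what drives the maximum magnitude risk bound of +∞ for the Laplace mechanism quoted in the abstract; the proposed weighted sampling and additive mechanisms constrain the output space to bound that risk.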


