
Aggregating Votes with Local Differential Privacy: Usefulness, Soundness vs. Indistinguishability

by Shaowei Wang, et al.

Voting plays a central role in bringing crowd wisdom to collective decision making; meanwhile, data privacy is a common ethical and legal concern when eliciting preferences from individuals. This work studies the problem of aggregating individuals' voting data under the local differential privacy setting, where the usefulness and soundness of the aggregated scores are the major concerns. A naive approach is to add Laplace random noise; however, this makes the aggregated scores extremely fragile to new types of strategic behavior tailored to the local privacy setting: the data amplification attack and the view disguise attack. In a data amplification attack, an attacker's manipulation power is amplified by the privacy-preserving procedure when contributing a fraudulent vote. A view disguise attack occurs when an attacker disguises malicious data as valid private views to manipulate the voting result. After theoretically quantifying the estimation error bound and the manipulation risk bound of the Laplace mechanism, we propose two mechanisms that improve usefulness and soundness simultaneously: the weighted sampling mechanism and the additive mechanism. The former interprets the score vector as probabilistic data; compared to the Laplace mechanism for the Borda voting rule with d candidates, it halves the mean squared error bound and lowers the maximum magnitude risk bound from +∞ to O(d^3/nϵ). The latter randomly outputs a subset of candidates according to their total scores; its mean squared error bound improves from O(d^5/nϵ^2) to O(d^4/nϵ^2), and its maximum magnitude risk bound is reduced to O(d^2/nϵ). Experimental results validate that our proposed approaches reduce estimation error by 50% on average and are more robust to adversarial attacks.
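The Laplace baseline that the abstract critiques can be sketched as follows. This is a minimal illustration, not the paper's implementation: each voter computes a Borda score vector, perturbs it locally with Laplace noise, and the aggregator sums the noisy reports. The sensitivity bound Δ = d²/2 used here is an assumed upper bound on the L1 distance between any two Borda score vectors; the specific function names are invented for this sketch.

```python
import numpy as np

def borda_scores(ranking):
    """Borda score vector for one voter: the candidate ranked first
    gets d-1 points, the last-ranked candidate gets 0."""
    d = len(ranking)
    scores = np.zeros(d)
    for pos, cand in enumerate(ranking):
        scores[cand] = d - 1 - pos
    return scores

def laplace_perturb(scores, epsilon, rng):
    """Locally perturb one voter's score vector with Laplace noise.
    Delta = d^2/2 is an assumed upper bound on the L1 distance between
    any two Borda score vectors (two permutations of {0,...,d-1})."""
    d = len(scores)
    delta = d * d / 2.0
    return scores + rng.laplace(scale=delta / epsilon, size=d)

def aggregate(reports):
    """Unbiased estimate of each candidate's total Borda score:
    the Laplace noise is zero-mean, so summing reports suffices."""
    return np.sum(np.asarray(reports), axis=0)
```

Note how this baseline invites the attacks described above: the server accepts any real-valued vector as a "noisy" report, so an attacker can submit an arbitrarily large vector disguised as a legitimately perturbed view, which is why the Laplace mechanism's maximum magnitude risk bound is +∞.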



