How Private Is Your Voting? A Framework for Comparing the Privacy of Voting Mechanisms
Voting privacy has received considerable attention across several research communities. Traditionally, the cryptographic literature has focused on how to privately implement a voting mechanism. Yet a number of recent works instead attempt to minimize the amount of information one can infer from the output (rather than the implementation) of the voting mechanism. These works apply differential privacy (DP) techniques, which add noise to the outcome to achieve privacy. This approach intrinsically compromises accuracy, rendering such a voting mechanism unsuitable for most realistic scenarios. In this work we investigate the inherent "noiseless" privacy that different voting rules achieve. To this end, we utilize the well-accepted notion of Distributional Differential Privacy (DDP). We prove that, under standard assumptions in the voting literature about the distribution of votes, most natural mechanisms achieve a satisfactory level of DDP, indicating that noising, with its negative side effects for voting, is unnecessary in most cases. We then put forth a systematic study of the noiseless privacy of commonly studied voting rules and compare these rules with respect to their privacy. Note that both DP and DDP induce (possibly loose) upper bounds on information leakage, which makes them insufficient for such a comparison. To circumvent this, we extend the definitions to require the bound to be exact (i.e., optimal) in a well-defined manner. Although motivated by voting, our definitions and techniques can be applied generically to address the optimality (with respect to privacy) of general mechanisms for privacy-preserving data release.
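To make the two privacy notions concrete, recall the standard (epsilon, delta)-DP guarantee: for every pair of neighboring inputs x, x' and every set S of outcomes,

\Pr[\mathcal{M}(x) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[\mathcal{M}(x') \in S] + \delta.

DDP, informally (this is a paraphrase of the standard DDP condition from the literature; the paper's precise formalization may differ), replaces the worst-case neighbor with a distributional one: for every admissible distribution over votes and every voter i there is a simulator Sim such that

\Pr[\mathcal{M}(X) \in S \mid X_i = v] \;\le\; e^{\varepsilon} \cdot \Pr[\mathrm{Sim}(X_{-i}) \in S \mid X_i = v] + \delta,

so the privacy comes from the randomness of the other voters' ballots rather than from injected noise.

To illustrate why noising the outcome compromises accuracy, here is a minimal sketch assuming a two-candidate plurality election and a report-noisy-max-style Laplace mechanism; the function names, the noise scale 2/epsilon (a conservative calibration for substitution neighbors, where changing one ballot moves two counts by one each), and the example tallies are illustrative assumptions, not taken from the paper.

import random

def plurality(votes):
    # Exact (noiseless) plurality: the candidate with the most votes wins.
    tally = {}
    for v in votes:
        tally[v] = tally.get(v, 0) + 1
    return max(tally, key=tally.get)

def laplace_noise(scale):
    # Laplace(0, scale), sampled as the difference of two exponentials.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def noisy_plurality(votes, candidates, epsilon):
    # Noised plurality: add Laplace noise to each candidate's count and
    # release the noisy winner. Scale 2/epsilon is a conservative choice
    # for substitution neighbors (one changed ballot shifts two counts).
    tally = {c: sum(1 for v in votes if v == c) for c in candidates}
    noisy = {c: n + laplace_noise(2.0 / epsilon) for c, n in tally.items()}
    return max(noisy, key=noisy.get)

# A close race: the exact winner is always "A", but the noised mechanism
# elects "B" with non-negligible probability, i.e., noise costs accuracy
# exactly where elections are decided.
votes = ["A"] * 502 + ["B"] * 498
print(plurality(votes))  # always "A"
flips = sum(noisy_plurality(votes, ["A", "B"], 1.0) == "B" for _ in range(1000))
print(f"noisy mechanism elected B in {flips}/1000 trials")

Under a DDP-style analysis, by contrast, the exact plurality outcome can already hide any single ballot when the remaining votes are drawn from a sufficiently unpredictable distribution, which is the regime the paper studies.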