Verifiable Differential Privacy For When The Curious Become Dishonest

08/18/2022
by Ari Biswas, et al.

Many applications seek to produce differentially private statistics on sensitive data. Traditional approaches in the centralised model rely on a trusted aggregator to gather the raw data, compute aggregate statistics, and add appropriate noise. Recent work has tried to relax these trust assumptions and reduce the need for trusted entities. However, such systems can trade off trust for increased noise and still require complete trust in some participants. Moreover, they do not prevent a malicious entity from introducing adversarial noise to skew the result or unmask some inputs. In this paper, we introduce the notion of “verifiable differential privacy with covert security”. Its purpose is to ensure both the privacy of each client's data and assurance that the output is not subject to any form of adversarial manipulation. The result is that everyone is assured that the noise used for differential privacy has been generated correctly, but no one can determine what the noise was. If a malicious entity attempts to pervert the protocol, its actions will be detected with probability negligibly close to one. We show that such verifiable privacy is practical and can be implemented at scale.
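To make the centralised-model baseline concrete, the sketch below shows the standard (unverified) Laplace mechanism that a trusted aggregator would apply: noise is drawn with scale proportional to the query's sensitivity and inversely proportional to the privacy budget ε. This is a minimal illustration of the mechanism the paper's protocol makes verifiable, not the paper's own construction; the function names are ours.

```python
import math
import random

def laplace_noise(scale, rng=random):
    # Sample Laplace(0, scale) as the difference of two i.i.d.
    # Exp(1) draws, scaled; avoids log(0) edge cases of the
    # inverse-CDF method since 1 - U lies in (0, 1].
    e1 = -math.log(1.0 - rng.random())
    e2 = -math.log(1.0 - rng.random())
    return scale * (e1 - e2)

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    # Differentially private release: noise scale = sensitivity / epsilon.
    # Smaller epsilon (stronger privacy) means larger noise.
    return true_value + laplace_noise(sensitivity / epsilon, rng)
```

In the setting the paper targets, a dishonest aggregator could silently skip or bias this noise draw; verifiable differential privacy forces it to prove the noise was sampled correctly without revealing the sample itself.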

