QuerySnout: Automating the Discovery of Attribute Inference Attacks against Query-Based Systems

11/09/2022
by   Ana-Maria Cretu, et al.

Although query-based systems (QBS) have become one of the main solutions to share data anonymously, building QBSes that robustly protect the privacy of individuals contributing to the dataset is a hard problem. Theoretical solutions relying on differential privacy guarantees are difficult to implement correctly with reasonable accuracy, while ad-hoc solutions might contain unknown vulnerabilities. Evaluating the privacy provided by QBSes must thus be done by evaluating the accuracy of a wide range of privacy attacks. However, existing attacks require time and expertise to develop, need to be manually tailored to the specific systems attacked, and are limited in scope. In this paper, we develop QuerySnout (QS), the first method to automatically discover vulnerabilities in QBSes. QS takes as input a target record and the QBS as a black box, analyzes its behavior on one or more datasets, and outputs a multiset of queries together with a rule to combine answers to them in order to reveal the sensitive attribute of the target record. QS uses evolutionary search techniques based on a novel mutation operator to find a multiset of queries likely to lead to an attack, and a machine learning classifier to infer the sensitive attribute from answers to the selected queries. We showcase the versatility of QS by applying it to two attack scenarios, three real-world datasets, and a variety of protection mechanisms. We show that the attacks found by QS consistently match or outperform, sometimes by a large margin, the best attacks from the literature. We finally show how QS can be extended to QBSes that require a budget, and apply QS to a simple QBS based on the Laplace mechanism. Taken together, our results show how powerful and accurate attacks against QBSes can already be found by an automated system, allowing highly complex QBSes to be automatically tested "at the pressing of a button".
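
To make the search loop described in the abstract concrete, here is a minimal, hypothetical sketch of a mutation-based evolutionary search over query multisets, scored by how well a classifier recovers the target's sensitive attribute from the answers. All names and constants below (answer_fn, the query encoding, the mutation operator, population sizes) are illustrative assumptions, not the authors' implementation.

```python
# Sketch of an evolutionary search for a query multiset against a black-box QBS.
# answer_fn(dataset, query) -> noisy answer is assumed to wrap the attacked QBS.
import random
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

N_ATTRS = 5          # known attributes usable in query conditions (assumed)
POP_SIZE = 20        # candidate query multisets per generation
QUERIES_PER_SET = 8  # size of each query multiset
N_GENERATIONS = 30

def random_query():
    # A query is a vector of conditions over the known attributes plus the
    # sensitive attribute: -1 (match negation), 0 (ignore), +1 (match).
    return tuple(random.choice((-1, 0, 1)) for _ in range(N_ATTRS + 1))

def mutate(multiset):
    # Toy mutation operator: resample one condition of one query.
    queries = [list(q) for q in multiset]
    q = random.randrange(len(queries))
    a = random.randrange(N_ATTRS + 1)
    queries[q][a] = random.choice((-1, 0, 1))
    return [tuple(q) for q in queries]

def fitness(multiset, answer_fn, datasets, labels):
    # Ask the black-box QBS the candidate queries on each training dataset,
    # then score how well a classifier recovers the target's sensitive
    # attribute (labels) from the noisy answers.
    X = np.array([[answer_fn(d, q) for q in multiset] for d in datasets])
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, labels, cv=3).mean()

def evolve(answer_fn, datasets, labels):
    population = [[random_query() for _ in range(QUERIES_PER_SET)]
                  for _ in range(POP_SIZE)]
    for _ in range(N_GENERATIONS):
        scored = sorted(population,
                        key=lambda m: fitness(m, answer_fn, datasets, labels),
                        reverse=True)
        elite = scored[:POP_SIZE // 4]
        population = elite + [mutate(random.choice(elite))
                              for _ in range(POP_SIZE - len(elite))]
    return max(population, key=lambda m: fitness(m, answer_fn, datasets, labels))
```

The point of the sketch is the one made in the abstract: the query multiset and the rule for combining the answers (here, the trained classifier) are discovered jointly and automatically, with the QBS treated purely as a black box.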

