AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning

05/13/2018
by Jinyuan Jia, et al.

Users of various web and mobile applications are vulnerable to attribute inference attacks, in which an attacker leverages a machine learning classifier to infer a target user's private attributes (e.g., location, sexual orientation, political view) from their public data (e.g., rating scores, page likes). Existing defenses leverage game theory or heuristics based on correlations between the public data and the attributes, but they are not practical: game-theoretic defenses require solving intractable optimization problems, while correlation-based defenses incur a large utility loss on users' public data. In this paper, we present AttriGuard, a practical defense against attribute inference attacks that is computationally tractable and has small utility loss. AttriGuard works in two phases. Suppose we aim to protect a user's private attribute. In Phase I, for each value of the attribute, we find a minimum noise such that, if we add the noise to the user's public data, the attacker's classifier is very likely to infer that attribute value for the user; we find this minimum noise by adapting existing evasion attacks from adversarial machine learning. In Phase II, we sample one attribute value according to a certain probability distribution and add the corresponding noise found in Phase I to the user's public data; we formulate finding this probability distribution as a constrained convex optimization problem. We extensively evaluate AttriGuard and compare it with existing methods on a real-world dataset. Our results show that AttriGuard substantially outperforms existing methods. Our work is the first to show that evasion attacks can be used as defensive techniques for privacy protection.
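To make the two phases concrete, here is a minimal sketch, not the paper's implementation. The attacker's classifier is stood in by a random linear softmax model; Phase I uses a plain iterative targeted-gradient attack in place of the adapted evasion attacks; and Phase II assumes one common convex instantiation of the unspecified optimization, minimizing KL(q||p) subject to an expected-noise budget, whose solution is a Gibbs distribution found by bisection on the Lagrange multiplier. All names (W, b, beta, p) and constants are illustrative assumptions.

```python
import numpy as np

# --- Toy setup (all shapes and numbers are illustrative assumptions) ---
rng = np.random.default_rng(0)
n_features, n_values = 20, 4                 # public-data dimension, # attribute values
W = rng.normal(size=(n_values, n_features))  # stand-in for the attacker's classifier
b = rng.normal(size=n_values)

def logits(x):
    return W @ x + b

def predict(x):
    return int(np.argmax(logits(x)))

# --- Phase I: (approximately) minimum noise per attribute value ---
# A simple iterative targeted-gradient attack; the paper adapts existing
# evasion attacks, which this loop only approximates.
def min_noise_for_value(x, target, step=0.05, max_iter=500):
    r = np.zeros_like(x)
    for _ in range(max_iter):
        if predict(x + r) == target:
            return r
        z = logits(x + r)
        p = np.exp(z - z.max()); p /= p.sum()       # stable softmax
        grad = W.T @ (p - np.eye(n_values)[target]) # d(cross-entropy)/d(input)
        r -= step * grad                            # move toward the target class
    return r  # may not reach the target within max_iter

x = rng.normal(size=n_features)                     # a user's public data (toy)
noises = [min_noise_for_value(x, i) for i in range(n_values)]
d = np.array([np.linalg.norm(r) for r in noises])   # utility loss per value

# --- Phase II: sampling distribution q over attribute values ---
# Assumed instantiation: min_q KL(q||p) s.t. sum_i q_i d_i <= beta, q a
# distribution; the solution is q_i proportional to p_i * exp(-lam * d_i).
p = np.full(n_values, 1.0 / n_values)  # target distribution (uniform, an assumption)
beta = 0.8 * d.mean()                  # noise budget (illustrative)

def q_of(lam):
    w = p * np.exp(-lam * d)
    return w / w.sum()

lo, hi = 0.0, 1e6
for _ in range(100):  # bisect for the smallest feasible Lagrange multiplier
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if q_of(mid) @ d <= beta else (mid, hi)
q = q_of(hi)

# Sample one attribute value and add its Phase-I noise to the public data.
k = rng.choice(n_values, p=q)
x_protected = x + noises[k]
print("sampled value:", k, "noise norm:", d[k])
```

Larger lam shifts q toward attribute values whose noise is cheap, trading privacy (closeness to p) against utility (expected noise), which is why the budget beta is the single knob in this sketch.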


