MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples

09/23/2019
by Jinyuan Jia, et al.

In a membership inference attack, an attacker aims to infer whether a data sample is in a target classifier's training dataset or not. Specifically, given black-box access to the target classifier, the attacker trains a binary classifier that takes a data sample's confidence score vector predicted by the target classifier as input and predicts whether the data sample is a member or non-member of the target classifier's training dataset. Membership inference attacks pose severe privacy and security threats to the training dataset. Most existing defenses leverage differential privacy when training the target classifier or regularize the target classifier's training process. These defenses suffer from two key limitations: 1) they do not have formal utility-loss guarantees on the confidence score vectors, and 2) they achieve suboptimal privacy-utility tradeoffs. In this work, we propose MemGuard, the first defense with formal utility-loss guarantees against black-box membership inference attacks. Instead of tampering with the training process of the target classifier, MemGuard adds noise to each confidence score vector predicted by the target classifier. Our key observation is that the attacker uses a classifier to predict member or non-member, and such a classifier is vulnerable to adversarial examples. Based on this observation, we propose to add a carefully crafted noise vector to a confidence score vector to turn it into an adversarial example that misleads the attacker's classifier. Our experimental results on three datasets show that MemGuard can effectively defend against membership inference attacks and achieve better privacy-utility tradeoffs than existing defenses. Our work is the first to show that adversarial examples can be used as a defensive mechanism against membership inference attacks.
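To illustrate the core idea, the snippet below is a minimal Python/PyTorch sketch (not the paper's exact algorithm): it perturbs a confidence score vector so that a defender-side surrogate attack classifier is pushed toward a random guess (output 0.5), while the target classifier's predicted label is preserved. The function name craft_noisy_scores, the surrogate MLP, and the L1 noise budget max_l1 are assumptions introduced for this example; MemGuard's actual noise-crafting procedure differs in detail.

```python
# Hypothetical sketch of MemGuard's core idea, not the authors' implementation:
# add a small noise vector to a confidence score vector so that a surrogate
# membership-inference classifier is pushed toward a random guess (0.5),
# while the target classifier's predicted label stays unchanged.

import torch
import torch.nn as nn
import torch.nn.functional as F

def craft_noisy_scores(scores, attack_model, max_l1=0.3, steps=200, lr=0.05):
    """scores: 1-D tensor of softmax confidences from the target classifier.
    attack_model: surrogate binary classifier mapping a score vector to P(member).
    Returns a perturbed score vector whose argmax matches the original."""
    orig_label = scores.argmax()
    # Optimize in logit space so the perturbed vector stays a valid distribution.
    logits = torch.log(scores.detach().clamp_min(1e-12)).clone().requires_grad_(True)
    optimizer = torch.optim.Adam([logits], lr=lr)
    best = scores.clone()

    for _ in range(steps):
        optimizer.zero_grad()
        perturbed = F.softmax(logits, dim=0)
        member_prob = attack_model(perturbed.unsqueeze(0)).squeeze()
        # Loss is minimized when the attacker's output equals 0.5 (random guess).
        loss = (member_prob - 0.5) ** 2
        loss.backward()
        optimizer.step()

        with torch.no_grad():
            candidate = F.softmax(logits, dim=0)
            # Keep the latest candidate that preserves the label and stays in budget.
            if candidate.argmax() == orig_label and (candidate - scores).abs().sum() <= max_l1:
                best = candidate.clone()
    return best

# Toy usage with a randomly initialized surrogate attack classifier (placeholder).
if __name__ == "__main__":
    num_classes = 10
    surrogate = nn.Sequential(nn.Linear(num_classes, 32), nn.ReLU(),
                              nn.Linear(32, 1), nn.Sigmoid())
    confidences = F.softmax(torch.randn(num_classes), dim=0)
    noisy = craft_noisy_scores(confidences, surrogate)
    print(confidences.argmax().item(), noisy.argmax().item())
```

Optimizing in logit space and re-applying softmax keeps the perturbed vector a valid probability distribution, and the label- and budget-checks mirror the utility constraint that the noisy confidence scores should change the target classifier's output as little as possible.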


Related research:

- Defending Model Inversion and Membership Inference Attacks via Prediction Purification (05/08/2020)
- Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges (09/17/2019)
- LTU Attacker for Membership Inference (02/04/2022)
- An Adaptive Black-box Defense against Trojan Attacks (TrojDef) (09/05/2022)
- AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning (05/13/2018)
- DAMIA: Leveraging Domain Adaptation as a Defense against Membership Inference Attacks (05/16/2020)
- Purifier: Defending Data Inference Attacks via Transforming Confidence Scores (12/01/2022)
