Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries

09/01/2020
by Shadi Rahimian, et al.

Machine learning models have been shown to leak information that violates the privacy of their training set. We focus on membership inference attacks on machine learning models, which aim to determine whether a data point was used to train the victim model. Our work consists of two parts: First, we introduce the sampling attack, a novel membership inference technique that, unlike other standard membership adversaries, works under the severe restriction of having no access to the victim model's scores. We show that a victim model that publishes only the labels is still susceptible to sampling attacks, and that the adversary can recover up to 100% of its performance. The other part of our work includes experimental results on two recent membership inference attack models and the defenses against them. For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model, as well as output perturbation at prediction time. We carry out our experiments on a wide range of datasets, which allows us to better analyze the interaction between adversaries, defense mechanisms, and datasets. We find that our proposed fast and easy-to-implement output perturbation technique offers good privacy protection against membership inference attacks at little impact on utility.
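To make the label-only threat model concrete, below is a minimal Python sketch of how repeated queries can substitute for score access: the adversary perturbs the input many times, collects only the predicted labels, and uses the label frequencies as a surrogate posterior. The function and parameter names (query_label, n_queries, sigma) and the Gaussian perturbation are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def sampling_attack_posterior(query_label, x, n_classes,
                              n_queries=100, sigma=0.05, rng=None):
    """Estimate a pseudo-posterior for x against a label-only victim model.

    query_label: callable returning only the predicted class (int) for an
                 input -- the restricted access assumed in this sketch.
    Repeatedly queries the model with Gaussian-perturbed copies of x and
    uses the label frequencies as a surrogate confidence vector.
    """
    rng = rng if rng is not None else np.random.default_rng()
    counts = np.zeros(n_classes)
    for _ in range(n_queries):
        x_noisy = x + rng.normal(0.0, sigma, size=x.shape)
        counts[query_label(x_noisy)] += 1
    return counts / n_queries

def membership_score(pseudo_posterior):
    """Stand-in membership signal: the maximum estimated confidence.
    Training points tend to sit farther from the decision boundary, so
    their predicted label is more stable under small perturbations."""
    return float(pseudo_posterior.max())
```

The recovered frequency vector can then be fed to any score-based membership inference test in place of the true posteriors. Similarly, output perturbation at prediction time can be sketched as noise added to the score vector before release; the choice of Laplace noise and the scale parameter here are assumptions for illustration, and the paper's exact mechanism may differ.

```python
def output_perturbation(posteriors, scale=0.1, rng=None):
    """Perturb the posterior vector before releasing it, then
    re-normalize so the released scores remain a distribution."""
    rng = rng if rng is not None else np.random.default_rng()
    noisy = posteriors + rng.laplace(0.0, scale, size=posteriors.shape)
    noisy = np.clip(noisy, 1e-12, None)  # keep probabilities non-negative
    return noisy / noisy.sum()
```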

Related research

06/15/2019 · Reconciling Utility and Membership Privacy via Knowledge Distillation
Large capacity machine learning models are prone to membership inference...

11/27/2020 · Use the Spear as a Shield: A Novel Adversarial Example based Privacy-Preserving Technique against Membership Inference Attacks
Recently, the membership inference attack poses a serious threat to the ...

03/14/2021 · Membership Inference Attacks on Machine Learning: A Survey
Membership inference attack aims to identify whether a data sample was u...

03/24/2020 · Systematic Evaluation of Privacy Risks of Machine Learning Models
Machine learning models are prone to memorizing sensitive data, making t...

06/12/2023 · Gaussian Membership Inference Privacy
We propose a new privacy notion called f-Membership Inference Privacy (f...

07/26/2020 · Anonymizing Machine Learning Models
There is a known tension between the need to analyze personal data to dr...

10/31/2019 · Reducing audio membership inference attack accuracy to chance: 4 defenses
It is critical to understand the privacy and robustness vulnerabilities ...
