Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries

by Shadi Rahimian, et al.

Machine learning models have been shown to leak information that violates the privacy of their training set. We focus on membership inference attacks on machine learning models, which aim to determine whether a data point was used to train the victim model. Our work has two parts. First, we introduce the sampling attack, a novel membership inference technique that, unlike other standard membership adversaries, works under the severe restriction of having no access to the victim model's scores. We show that a victim model that publishes only labels is still susceptible to sampling attacks, and that the adversary can recover up to 100% of the attack's performance. The second part of our work presents experimental results on two recent membership inference attack models and the defenses against them. As defenses, we choose differential privacy in the form of gradient perturbation during the training of the victim model, as well as output perturbation at prediction time. We carry out our experiments on a wide range of datasets, which allows us to better analyze the interaction between adversaries, defense mechanisms, and datasets. We find that our proposed fast and easy-to-implement output perturbation technique offers good privacy protection against membership inference attacks at little cost in utility.
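The core intuition behind a label-only sampling attack can be sketched as follows: query the victim model with many randomly perturbed copies of a candidate point and measure how often the predicted label stays the same. Training points tend to sit farther from the decision boundary, so their labels are more stable under perturbation. The sketch below is a minimal, hypothetical illustration of this idea, not the paper's exact algorithm; the Gaussian perturbation, the number of queries, and the decision threshold are all illustrative assumptions.

```python
import numpy as np


def sampling_attack_score(predict_label, x, n_queries=100, sigma=0.1, rng=None):
    """Membership score from repeated label-only queries (illustrative sketch).

    predict_label: black-box function returning only a class label for an input.
    Returns the fraction of noisy copies of x whose label agrees with the
    label of x itself; higher agreement suggests x is far from the decision
    boundary, which is treated as evidence of membership.
    """
    rng = np.random.default_rng(rng)
    base_label = predict_label(x)
    agree = 0
    for _ in range(n_queries):
        x_pert = x + rng.normal(0.0, sigma, size=x.shape)  # Gaussian noise (assumed)
        if predict_label(x_pert) == base_label:
            agree += 1
    return agree / n_queries


def is_member(score, threshold=0.9):
    # Threshold would be tuned on shadow data in practice; 0.9 is illustrative.
    return score >= threshold
```

As a sanity check against a toy linear classifier, a point deep inside a class region yields a much higher agreement score than a point near the boundary, which is exactly the signal the attack thresholds on.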




