On the Discredibility of Membership Inference Attacks

12/06/2022
by Shahbaz Rezaei, et al.

With the widespread application of machine learning models, it has become critical to study the potential data leakage of models trained on sensitive data. Recently, various membership inference (MI) attacks have been proposed to determine whether a sample was part of a model's training set. Although the first generation of MI attacks proved ineffective in practice, several recent studies have proposed practical MI attacks that achieve a reasonable true positive rate at a low false positive rate. The question is whether these attacks can be relied upon in practice. We showcase a practical application of membership inference attacks in which an auditor (investigator) uses them to prove to a judge/jury that an auditee unlawfully used sensitive data during training. We then show that the auditee can provide a dataset (with a potentially unlimited number of samples) on which MI attacks catastrophically fail; the auditee can thereby challenge the credibility of the auditor and get the case dismissed. More importantly, we show that the auditee needs neither knowledge of the MI attack nor query access to it. In other words, all current state-of-the-art MI attacks in the literature suffer from the same issue. Through comprehensive experimental evaluation, we show that our algorithms can make the false positive rate ten to thousands of times larger than what the auditor claims to the judge. Finally, we argue that the implications of our algorithms go beyond discrediting the auditor: current membership inference attacks can identify memorized subpopulations, but they cannot reliably identify which exact sample in a subpopulation was used during training.
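To make the auditing scenario concrete, below is a minimal sketch of the generic loss-threshold style of MI attack the abstract alludes to, together with the failure mode described in its closing sentence. This is not the paper's attack or the auditee's actual algorithm; the toy model, data, threshold calibration, and all function names are assumptions chosen for illustration only.

```python
# Hypothetical illustration (not the paper's method): a loss-threshold
# membership inference attack calibrated for ~1% FPR, and how non-members
# drawn from the same memorized subpopulation inflate its real FPR.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy setup: random labels force the model to memorize its training set
# (the "members"); held-out points from the same distribution serve as
# honest non-members.
X = rng.normal(size=(600, 20))
y = rng.integers(0, 2, size=600)
X_m, y_m, X_n, y_n = X[:200], y[:200], X[200:], y[200:]

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_m, y_m)

def per_sample_loss(model, X, y):
    """Cross-entropy of the model's predicted probability for each label."""
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, None))

# Attack: flag a sample as a "member" when its loss falls below a
# threshold calibrated on a reference non-member set for ~1% FPR.
tau = np.quantile(per_sample_loss(model, X_n, y_n), 0.01)

def mi_attack(model, X, y):
    return per_sample_loss(model, X, y) < tau

print("TPR on members:              ", mi_attack(model, X_m, y_m).mean())
print("FPR on i.i.d. non-members:   ", mi_attack(model, X_n, y_n).mean())

# The auditee's counter, per the abstract's closing claim: non-members from
# the memorized subpopulation -- here, tiny perturbations of training points
# that were never themselves seen during training -- get flagged as members,
# pushing the attack's real FPR far above the 1% claimed to the judge.
X_near = X_m + 0.001 * rng.normal(size=X_m.shape)
print("FPR on subpopulation samples:", mi_attack(model, X_near, y_m).mean())
```

In this toy, the attack looks trustworthy on i.i.d. non-members (FPR near the calibrated 1%) yet mislabels nearly every same-subpopulation non-member, echoing the abstract's claim that the false positive rate can be made ten to thousands of times larger than advertised.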
