LTU Attacker for Membership Inference

02/04/2022
by Joseph Pedersen, et al.

We address the problem of defending predictive models, such as machine learning classifiers (Defender models), against membership inference attacks, in both the black-box and white-box settings, when the trainer and the trained model are publicly released. The Defender aims to optimize a dual objective: utility and privacy. Both are evaluated with an external apparatus comprising an Attacker and an Evaluator. On the one hand, Reserved data, distributed similarly to the Defender training data, is used to evaluate Utility; on the other hand, Reserved data mixed with Defender training data is used to evaluate robustness to membership inference attacks. In both cases, classification accuracy or error rate is used as the metric: Utility is evaluated with the classification accuracy of the Defender model; Privacy is evaluated with the membership-prediction error of a so-called "Leave-Two-Unlabeled" (LTU) Attacker, which has access to all of the Defender and Reserved data except for the membership label of one sample from each. We prove that, under certain conditions, even a "naïve" LTU Attacker can achieve lower bounds on privacy loss with simple attack strategies, leading to concrete necessary conditions for protecting privacy, including preventing over-fitting and adding some amount of randomness. However, we also show that such a naïve LTU Attacker can fail to attack the privacy of models known to be vulnerable in the literature, demonstrating that knowledge must be complemented with strong attack strategies to turn the LTU Attacker into a powerful means of evaluating privacy. Our experiments on the QMNIST and CIFAR-10 datasets validate our theoretical results and confirm the roles of over-fitting prevention and randomness in algorithms that protect against privacy attacks.
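To make the evaluation protocol concrete, the following is a minimal Python sketch of the LTU loop as the abstract describes it. Everything here is an illustrative assumption rather than the authors' implementation: the attack callback interface, the loss(sample) method on the Defender model, and the loss-threshold rule used as one example of a "simple attack strategy" that exploits over-fitting.

# Hypothetical sketch of the Leave-Two-Unlabeled (LTU) evaluation loop,
# reconstructed from the abstract alone; not the authors' code.
import random

def ltu_privacy_error(defender_model, defender_data, reserved_data,
                      attack, n_trials=1000, seed=0):
    """Estimate Privacy as the LTU Attacker's membership-prediction error.

    Each trial hides the membership label of one Defender (member) sample
    and one Reserved (non-member) sample. The attack sees the model, the
    unlabeled pair, and all remaining labeled data, and must guess which
    of the two samples was a training member.
    """
    rng = random.Random(seed)
    errors = 0
    for _ in range(n_trials):
        i = rng.randrange(len(defender_data))
        j = rng.randrange(len(reserved_data))
        member, non_member = defender_data[i], reserved_data[j]
        rest_members = defender_data[:i] + defender_data[i + 1:]
        rest_non_members = reserved_data[:j] + reserved_data[j + 1:]
        pair = [member, non_member]
        rng.shuffle(pair)  # the attacker must not learn membership from order
        guess = attack(defender_model, pair, rest_members, rest_non_members)
        if guess != member:
            errors += 1
    return errors / n_trials  # higher error = better Privacy

def naive_loss_attack(defender_model, pair, *_):
    """One "simple attack strategy": guess that the sample with the lower
    model loss is the training member, exploiting over-fitting. Assumes a
    hypothetical loss(sample) method on the Defender model."""
    x0, x1 = pair
    return x0 if defender_model.loss(x0) < defender_model.loss(x1) else x1

if __name__ == "__main__":
    class OverfitModel:
        """Toy stand-in that memorizes its training set exactly."""
        def __init__(self, train):
            self.train = set(train)
        def loss(self, x):
            return 0.0 if x in self.train else 1.0

    defender_data = list(range(100))        # members
    reserved_data = list(range(100, 200))   # non-members
    model = OverfitModel(defender_data)
    # A fully over-fit, deterministic model is maximally vulnerable:
    # the naive attack's error is 0.0, i.e. no Privacy at all.
    print(ltu_privacy_error(model, defender_data, reserved_data,
                            naive_loss_attack))

Under this protocol, the abstract's two necessary conditions are easy to see in the sketch: if over-fitting is prevented, the losses on the two unlabeled samples become hard to distinguish, and randomness added during training further decouples the model's per-sample behavior from membership, pushing the attacker's error toward the 50% chance level.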

