Leveraging Adversarial Examples to Quantify Membership Information Leakage

03/17/2022
by Ganesh Del Grosso, et al.

The use of personal data for training machine learning systems comes with a privacy threat, and measuring the level of privacy of a model is one of the major challenges in machine learning today. Identifying training data from a trained model is a standard way of measuring the privacy risk the model induces. We develop a novel approach to membership inference against pattern recognition models that relies on information provided by adversarial examples. The strategy we propose measures the magnitude of the perturbation necessary to build an adversarial example from a given input; we argue that this quantity reflects the likelihood that the input belongs to the training data. Extensive numerical experiments on multivariate data and an array of state-of-the-art target models show that our method performs comparably to, or even outperforms, state-of-the-art strategies, without requiring any additional training samples.
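The core signal can be sketched in a few lines: for a candidate input, run an iterative gradient attack until the predicted label flips and record the size of the resulting perturbation. A larger perturbation suggests the model is more confident around that input, which the abstract argues correlates with membership in the training set. The sketch below is a minimal, hypothetical PyTorch illustration; the attack used here (a small-step L2 gradient ascent), the function names, and the parameters `step` and `max_iters` are assumptions for illustration, not the authors' exact procedure.

```python
import torch
import torch.nn.functional as F

def adversarial_perturbation_norm(model, x, y, step=0.01, max_iters=100):
    """Run a simple iterative gradient attack on a single example (batch size 1)
    and return the L2 norm of the perturbation at the first point where the
    predicted label flips. Hypothetical sketch, not the paper's exact attack."""
    model.eval()
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(max_iters):
        logits = model(x_adv)
        if logits.argmax(dim=1).item() != y.item():
            break  # label flipped: an adversarial example has been reached
        loss = F.cross_entropy(logits, y)
        grad, = torch.autograd.grad(loss, x_adv)
        # take a small L2-normalized step that increases the loss
        x_adv = (x_adv + step * grad / (grad.norm() + 1e-12)).detach().requires_grad_(True)
    return (x_adv.detach() - x).norm().item()

def membership_score(model, x, y):
    # Larger perturbation needed -> higher score -> more likely a training member.
    return adversarial_perturbation_norm(model, x, y)
```

A usage sketch: `membership_score(model, x.unsqueeze(0), y.unsqueeze(0))` for a single input/label pair; thresholding this score then gives a member / non-member decision.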

Related research

07/27/2022 · Membership Inference Attacks via Adversarial Examples
The rise of machine learning and deep learning led to significant impro...

05/24/2019 · Privacy Risks of Securing Machine Learning Models against Adversarial Examples
The arms race between attacks and defenses for machine learning models h...

08/17/2022 · On the Privacy Effect of Data Enhancement via the Lens of Memorization
Machine learning poses severe privacy concerns as it is shown that the l...

08/26/2018 · Adversarially Regularising Neural NLI Models to Integrate Logical Background Knowledge
Adversarial examples are inputs to machine learning models designed to c...

11/18/2019 · Privacy Leakage Avoidance with Switching Ensembles
We consider membership inference attacks, one of the main privacy issues...

02/23/2021 · Measuring Data Leakage in Machine-Learning Models with Fisher Information
Machine-learning models contain information about the data they were tra...

06/08/2020 · Provable trade-offs between private and robust machine learning
Historically, machine learning methods have not been designed with secur...
