On the Vulnerability of Data Points under Multiple Membership Inference Attacks and Target Models

10/28/2022
by   Mauro Conti, et al.

Membership Inference Attacks (MIAs) infer whether a data point belongs to the training data of a machine learning model. This is a privacy threat whenever membership in the training data is itself sensitive information about the data point. An MIA correctly infers some data points as members or non-members of the training data; intuitively, the data points it detects accurately are vulnerable. Since such data points may appear in the training data of different target models and be exposed to multiple MIAs, their vulnerability under multiple MIAs and target models is worth exploring. This paper defines new metrics that reflect the actual vulnerability of individual data points and capture vulnerable data points under multiple MIAs and target models. Our analysis shows that an MIA tends to target certain data points even when its overall inference performance is low. Additionally, to support the analysis, we implement 54 MIAs, whose average attack accuracy ranges from 0.5 to 0.9, on our scalable and flexible platform, the Membership Inference Attacks Platform (VMIAP). We further find that previous methods are unsuitable for identifying vulnerable data points under multiple MIAs and different target models. Finally, we observe that vulnerability is not an intrinsic characteristic of a data point but depends on the MIA and the target model.
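To make the idea of per-point vulnerability concrete, the sketch below shows one plausible way to score each data point by how often it is correctly inferred across a grid of attacks and target models. This is an illustrative assumption, not the paper's actual metric: the function name per_point_vulnerability, the array shapes, and the thresholding comment are all hypothetical.

```python
# Hypothetical sketch (not the paper's metric): score each data point by the
# fraction of (attack, target model) pairs that correctly infer its membership.
import numpy as np

def per_point_vulnerability(predictions, membership):
    """
    predictions: shape (n_attacks, n_models, n_points), binary member/non-member
                 guess of each MIA against each target model.
    membership:  shape (n_models, n_points), true membership of each point with
                 respect to each target model's training set.
    Returns shape (n_points,): per-point fraction of correct inferences.
    """
    correct = predictions == membership[np.newaxis, :, :]  # broadcast over attacks
    return correct.mean(axis=(0, 1))

# Toy usage: 3 attacks, 2 target models, 4 data points.
rng = np.random.default_rng(0)
membership = rng.integers(0, 2, size=(2, 4))
predictions = rng.integers(0, 2, size=(3, 2, 4))
print(per_point_vulnerability(predictions, membership))
# Under this assumed definition, points with scores near 1.0 would be
# flagged as vulnerable across the attack/model grid.
```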


