Information Leakage from Data Updates in Machine Learning Models

09/20/2023
by Tian Hui, et al.

In this paper we consider the setting where machine learning models are retrained on updated datasets in order to incorporate the most up-to-date information or reflect distribution shifts. We investigate whether one can infer information about these updates to the training data (e.g., changes to attribute values of records). Here, the adversary has access to snapshots of the machine learning model before and after the change in the dataset occurs. Contrary to the existing literature, we assume that an attribute of one or more training data points is changed, rather than entire data records being removed or added. We propose attacks based on the difference in prediction confidence between the original model and the updated model. We evaluate our attack methods on two public datasets using multi-layer perceptron and logistic regression models. We validate that access to two snapshots of the model results in higher information leakage than access to only the updated model. Moreover, we observe that data records with rare values are more vulnerable to attacks, which points to disparate vulnerability to privacy attacks in the update setting. When multiple records with the same original attribute value are updated to the same new value (i.e., repeated changes), the attacker is more likely to correctly guess the updated values, since repeated changes leave a larger footprint on the trained model. These observations point to the vulnerability of machine learning models to attribute inference attacks in the update setting.
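The two-snapshot attack described above can be illustrated with a minimal sketch. This is an assumed setup, not the paper's exact method: synthetic data, a logistic regression model, and a simple scoring rule where the adversary tries candidate values for the changed attribute and picks the one for which the updated model's confidence grows most relative to the original model. The dataset, the `attack` helper, and the candidate set are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: column 0 plays the role of the sensitive attribute.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Snapshot 1: model trained on the original dataset.
model_before = LogisticRegression().fit(X, y)

# Update: change the sensitive attribute of one record, then retrain.
target = 0
true_new_value = 3.0
X_updated = X.copy()
X_updated[target, 0] = true_new_value

# Snapshot 2: model retrained on the updated dataset.
model_after = LogisticRegression().fit(X_updated, y)

def attack(before, after, record, candidates):
    """Guess the updated attribute value: score each candidate by how much
    the updated model's confidence on the probed record exceeds the
    original model's confidence, and return the highest-scoring one."""
    scores = []
    for v in candidates:
        probe = record.copy()
        probe[0] = v
        p_before = before.predict_proba([probe]).max()
        p_after = after.predict_proba([probe]).max()
        scores.append(p_after - p_before)
    return candidates[int(np.argmax(scores))]

candidates = [-3.0, -1.0, 0.0, 1.0, 3.0]
guess = attack(model_before, model_after, X[target].copy(), candidates)
print("adversary's guess for the updated value:", guess)
```

With a single changed record the confidence gap is small, which matches the paper's observation that repeated changes (many records updated to the same value) leave a larger footprint and make the guess more reliable.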


