How to Combine Membership-Inference Attacks on Multiple Updated Models

05/12/2022
by Matthew Jagielski, et al.

A large body of research has shown that machine learning models are vulnerable to membership inference (MI) attacks that violate the privacy of the participants in the training data. Most MI research focuses on the case of a single standalone model, while production machine learning platforms often update models over time, on data that often shifts in distribution, giving the attacker more information. This paper proposes new attacks that take advantage of one or more model updates to improve MI. A key part of our approach is to leverage rich information from standalone MI attacks mounted separately against the original and updated models, and to combine this information in specific ways to improve attack effectiveness. We propose a set of combination functions and tuning methods for each, and present both analytical and quantitative justification for various options. Our results on four public datasets show that our attacks are effective at using update information to give the adversary a significant advantage not only over attacks on standalone models, but also over a prior MI attack that takes advantage of model updates in a related machine-unlearning setting. We perform the first measurements of the impact of distribution shift on MI attacks with model updates, and show that a more drastic distribution shift results in significantly higher MI risk than a gradual shift. Our code is available at https://www.github.com/stanleykywu/model-updates.
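The abstract does not spell out the paper's combination functions, but the general idea of combining standalone MI signals from the original and updated models can be sketched as follows. This is a hedged illustration, not the authors' method: the loss-threshold score and the `sum`/`max` combiners here are simple stand-ins, and all function names are hypothetical.

```python
import numpy as np

def mi_score(loss, threshold=0.5):
    """Standalone MI score: lower loss on a sample suggests membership.
    Returns 1.0 for samples whose loss falls below the threshold, else 0.0."""
    return (loss < threshold).astype(float)

def combine_scores(score_old, score_new, how="sum"):
    """Combine standalone MI scores from the original and updated models.
    These combiners are illustrative placeholders for the paper's
    combination functions."""
    if how == "sum":
        return score_old + score_new
    if how == "max":
        return np.maximum(score_old, score_new)
    raise ValueError(f"unknown combiner: {how}")

# Toy example: per-sample losses of three query points under each model version.
loss_old = np.array([0.1, 0.9, 0.4])
loss_new = np.array([0.2, 0.8, 0.3])
combined = combine_scores(mi_score(loss_old), mi_score(loss_new), how="sum")
# A sample scoring high under both model versions is a stronger
# membership signal than either standalone score alone.
```

The design choice the paper motivates is exactly this: an update gives the attacker two (or more) correlated views of the same training point, and combining per-model scores can outperform thresholding either model's score in isolation.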


Related research

09/20/2023  Information Leakage from Data Updates in Machine Learning Models
In this paper we consider the setting where machine learning models are ...

06/08/2022  Can Backdoor Attacks Survive Time-Varying Models?
Backdoors are powerful attacks against deep neural networks (DNNs). By p...

05/26/2022  Membership Inference Attack Using Self Influence Functions
Member inference (MI) attacks aim to determine if a specific data sample...

05/21/2020  Revisiting Membership Inference Under Realistic Assumptions
Membership inference attacks on models trained using machine learning ha...

11/30/2020  TransMIA: Membership Inference Attacks Using Transfer Shadow Training
Transfer learning has been widely studied and gained increasing populari...

06/07/2021  Formalizing Distribution Inference Risks
Property inference attacks reveal statistical properties about a trainin...

10/19/2022  Canary in a Coalmine: Better Membership Inference with Ensembled Adversarial Queries
As industrial applications are increasingly automated by machine learnin...
