TransMIA: Membership Inference Attacks Using Transfer Shadow Training

by Seira Hidano, et al.

Transfer learning has been widely studied and has gained popularity as a way to improve the accuracy of machine learning models by transferring knowledge acquired in training on a different task. However, no prior work has pointed out that transfer learning can strengthen privacy attacks on machine learning models. In this paper, we propose TransMIA (Transfer Learning-based Membership Inference Attacks), which uses transfer learning to perform membership inference attacks on the source model when the adversary can access the parameters of the transferred model. In particular, we propose a transfer shadow training technique, in which the adversary employs the parameters of the transferred model to construct shadow models, significantly improving the performance of membership inference when only a limited amount of shadow training data is available to the adversary. We evaluate our attacks on two real datasets and show that they outperform the state-of-the-art attack, which does not use our transfer shadow training technique. We also compare four combinations of the learning-based/entropy-based and fine-tuning/freezing approaches, all of which employ our transfer shadow training technique. Finally, we examine the performance of these four approaches based on the distributions of confidence values and discuss possible countermeasures against our attacks.
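The two ingredients of the attack described above can be sketched in a few lines: an adversary initializes shadow models from the transferred model's parameters, and an entropy-based variant decides membership from the confidence vector the target model returns. The function names, the Gaussian perturbation used in the shadow-model initializer, and the fixed entropy threshold below are illustrative assumptions for this sketch, not details taken from the paper:

```python
import numpy as np

def prediction_entropy(confidences):
    """Entropy of a model's confidence (softmax) vector.

    Low entropy (a confident, peaked prediction) is a common signal
    that the queried sample was a training-set member.
    """
    p = np.clip(confidences, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def init_shadow_from_transfer(transferred_weights, noise_scale=0.01, seed=None):
    """Transfer shadow training, fine-tuning variant (illustrative sketch).

    The adversary copies the transferred model's parameters and perturbs
    them slightly so that several distinct shadow models can be fine-tuned
    on the limited shadow data the adversary holds.
    """
    rng = np.random.default_rng(seed)
    return {name: w + noise_scale * rng.standard_normal(w.shape)
            for name, w in transferred_weights.items()}

def entropy_attack(confidences, threshold):
    """Entropy-based membership decision: 'member' when entropy < threshold.

    In practice the threshold would be calibrated on the shadow models'
    outputs; here it is simply passed in.
    """
    return prediction_entropy(confidences) < threshold
```

A confident prediction such as `[0.99, 0.005, 0.005]` has entropy near 0 and is flagged as a member, while a near-uniform one such as `[0.34, 0.33, 0.33]` has entropy near log 3 and is not; the learning-based variant would instead train a classifier on the shadow models' confidence vectors.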


