Membership Inference with Privately Augmented Data Endorses the Benign while Suppresses the Adversary

by   Da Yu, et al.

Membership inference (MI) in machine learning decides whether a given example is in a target model's training set. It cuts both ways: adversaries use it to steal private membership information, while legitimate users can use it to verify whether a trained model has forgotten their data. MI is therefore a double-edged sword for privacy-preserving machine learning. In this paper, we propose using privately augmented data to sharpen its good side while blunting its bad side. To sharpen the good side, we exploit the data augmentation used during training to boost the accuracy of membership inference. Specifically, we compose a set of augmented instances for each sample and formulate membership inference as a set classification problem, i.e., classifying a set of augmented data points instead of a single point. We design permutation-invariant features based on the losses of the augmented instances. Our approach significantly improves MI accuracy over existing algorithms. To blunt the bad side, we apply a different data augmentation method to each legitimate user and keep the augmented data secret. We show that malicious adversaries cannot benefit from our algorithms if they are ignorant of the augmented data used in training. Extensive experiments demonstrate the superior efficacy of our algorithms. Our source code is available at an anonymous GitHub page <>.
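The set-classification idea above can be sketched in a few lines: collect the loss of each augmented instance of a sample, then map that set of losses to a feature vector that does not depend on the order of the instances. The helper below is a hypothetical illustration (sorting plus summary statistics are one common choice of permutation-invariant map), not the authors' implementation.

```python
import numpy as np

def permutation_invariant_features(losses):
    """Map a set of per-augmentation losses to an order-invariant
    feature vector for the set classifier.

    Sorting is one simple permutation-invariant map; summary
    statistics (mean, std, min, max) are another. Both are combined
    here for illustration.
    """
    losses = np.asarray(losses, dtype=float)
    sorted_losses = np.sort(losses)
    stats = np.array([losses.mean(), losses.std(),
                      losses.min(), losses.max()])
    return np.concatenate([sorted_losses, stats])

# Two orderings of the same loss set yield identical features,
# so the downstream membership classifier sees the set, not a sequence.
a = permutation_invariant_features([0.9, 0.1, 0.4])
b = permutation_invariant_features([0.4, 0.9, 0.1])
```

The resulting feature vectors can then be fed to any binary classifier that predicts member vs. non-member.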


Canary in a Coalmine: Better Membership Inference with Ensembled Adversarial Queries

As industrial applications are increasingly automated by machine learnin...

Use the Spear as a Shield: A Novel Adversarial Example based Privacy-Preserving Technique against Membership Inference Attacks

Recently, the membership inference attack poses a serious threat to the ...

Membership Inference via Backdooring

Recently issued data privacy regulations like GDPR (General Data Protect...

RelaxLoss: Defending Membership Inference Attacks without Losing Utility

As a long-term threat to the privacy of training data, membership infere...

Membership Inference Attacks on Sequence-to-Sequence Models

Data privacy is an important issue for "machine learning as a service" p...

Membership Encoding for Deep Learning

Machine learning as a service (MLaaS), and algorithm marketplaces are on...

Augmentation Pathways Network for Visual Recognition

Data augmentation is practically helpful for visual recognition, especia...