Evaluating Membership Inference Through Adversarial Robustness

05/14/2022
by   Zhaoxi Zhang, et al.

Deep learning is being adopted in a growing number of applications. Owing to its strong performance, it is now used not only in conventional settings but also in security- and privacy-sensitive ones. A key ingredient of deep learning's efficacy is abundant data, which often includes highly sensitive and private records, and this in turn fuels public wariness toward deep learning. Membership inference attacks are considered particularly harmful because they can determine whether a given data sample belongs to a model's training dataset, leaking information about the training data and its characteristics. To highlight the significance of such attacks, we propose an enhanced membership inference attack based on adversarial robustness, which adjusts the directions of adversarial perturbations through label smoothing in a white-box setting. We evaluate the proposed method on three datasets: Fashion-MNIST, CIFAR-10, and CIFAR-100. Experimental results show that our method outperforms the existing adversarial-robustness-based attack on normally trained models; compared with state-of-the-art metric-based membership inference methods, it also performs better on adversarially trained models. The code for reproducing the results of this work is available at <https://github.com/plll4zzx/Evaluating-Membership-Inference-Through-Adversarial-Robustness>.
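To make the underlying signal concrete: robustness-based membership inference uses the size of the minimal adversarial perturbation as a membership score, on the intuition that a model is typically more robust on its training samples than on unseen ones. The sketch below is a simplified, hypothetical illustration for a linear binary classifier, where the minimal L2 perturbation is just the distance to the decision hyperplane; the function names and threshold are illustrative and not taken from the paper, and the paper's actual method additionally steers perturbation directions via label smoothing, which this sketch omits.

```python
import numpy as np

def min_adv_perturbation(w, b, x):
    """Minimal L2 adversarial perturbation for a linear classifier
    sign(w.x + b): the distance from x to the decision hyperplane."""
    return abs(float(w @ x) + b) / float(np.linalg.norm(w))

def infer_membership(w, b, x, tau):
    """Metric-based membership inference: guess 'member' when the model
    is more robust on x, i.e. x lies farther from the decision boundary
    than the threshold tau (tau would be calibrated on shadow data)."""
    return min_adv_perturbation(w, b, x) >= tau

# Toy example: hyperplane 3*x1 + 4*x2 = 0, query point (1, 0).
w, b = np.array([3.0, 4.0]), 0.0
x = np.array([1.0, 0.0])
print(min_adv_perturbation(w, b, x))   # |3| / 5 = 0.6
print(infer_membership(w, b, x, 0.5))  # 0.6 >= 0.5 -> member
```

For deep networks the minimal perturbation has no closed form, so attacks of this kind estimate it with iterative gradient-based attacks; the thresholding logic stays the same.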

Related research

11/17/2019  Smoothed Inference for Adversarially-Trained Models
Deep neural networks are known to be vulnerable to inputs with malicious...

11/27/2020  Use the Spear as a Shield: A Novel Adversarial Example based Privacy-Preserving Technique against Membership Inference Attacks
Recently, the membership inference attack poses a serious threat to the ...

12/03/2018  Comprehensive Privacy Analysis of Deep Learning: Stand-alone and Federated Learning under Passive and Active White-box Inference Attacks
Deep neural networks are susceptible to various inference attacks as the...

05/26/2022  Membership Inference Attack Using Self Influence Functions
Member inference (MI) attacks aim to determine if a specific data sample...

05/20/2018  Improving Adversarial Robustness by Data-Specific Discretization
A recent line of research proposed (either implicitly or explicitly) gra...

09/25/2019  Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks
It has been widely recognized that adversarial examples can be easily cr...

10/19/2022  Canary in a Coalmine: Better Membership Inference with Ensembled Adversarial Queries
As industrial applications are increasingly automated by machine learnin...
