l-Leaks: Membership Inference Attacks with Logits

05/13/2022
by Shuhao Li, et al.

Machine Learning (ML) has made unprecedented progress over the past several decades. However, because models memorize their training data, ML is susceptible to various attacks, particularly Membership Inference Attacks (MIAs), whose objective is to infer whether a given sample was part of the model's training data. So far, most membership inference attacks against ML classifiers rely on a shadow model with the same structure as the target model. Empirical results show, however, that such attacks can be easily mitigated when the adversary does not know the target model's network architecture. In this paper, we present an attack based on black-box access to the target model, which we name l-Leaks. l-Leaks follows the intuition that if an established shadow model is sufficiently similar to the target model, the adversary can leverage the shadow model's information to predict a target sample's membership. The logits of the trained target model contain valuable knowledge about the samples, so we build the shadow model by learning the target model's logits, making the shadow model more similar to the target. The shadow model then exhibits high confidence on the target model's member samples. We also discuss how different network structures for the shadow model affect the attack results. Experiments over different networks and datasets demonstrate that our attack achieves strong performance.
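The core idea described above — training a shadow model to imitate the target's logits, then scoring membership by the shadow model's confidence — can be sketched as follows. This is a minimal illustrative toy, not the paper's actual method: the linear "target" and "shadow" models, the query set, and all shapes and hyperparameters are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical black-box target model: the adversary can only query it
# and observe the returned logits.
W_target = rng.normal(size=(5, 3))
def target_logits(x):
    return x @ W_target

# Shadow model with its own parameters. It is trained to imitate the
# target's soft predictions (derived from its logits) on an auxiliary
# query set -- a simple distillation-style objective.
W_shadow = np.zeros((5, 3))
X_query = rng.normal(size=(200, 5))
soft_labels = softmax(target_logits(X_query))

lr = 0.5
for _ in range(300):
    p = softmax(X_query @ W_shadow)
    # Gradient of cross-entropy between shadow predictions and soft labels.
    grad = X_query.T @ (p - soft_labels) / len(X_query)
    W_shadow -= lr * grad

# Membership score: the shadow model's top softmax confidence on a
# candidate sample; higher confidence suggests the sample was a member.
def membership_score(x):
    return softmax(x @ W_shadow).max(axis=1)
```

In practice the shadow and target would be neural networks and the decision would use a threshold or an auxiliary attack classifier over the confidence scores, but the sketch shows the two stages the abstract describes: logit imitation, then confidence-based membership prediction.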

