Reconciling Utility and Membership Privacy via Knowledge Distillation

06/15/2019
by Virat Shejwalkar, et al.

Large-capacity machine learning models are prone to membership inference attacks, in which an adversary aims to infer whether a particular data sample is a member of the target model's training dataset. Such membership inferences can lead to serious privacy violations, as machine learning models are often trained on privacy-sensitive data such as medical records and controversial user opinions. Defenses against membership inference attacks have recently been developed, in particular based on differential privacy and adversarial regularization; unfortunately, these defenses severely degrade the classification accuracy of the underlying machine learning models. In this work, we present a new defense against membership inference attacks that preserves the utility of the target machine learning models significantly better than prior defenses. Our defense, called distillation for membership privacy (DMP), leverages knowledge distillation, a model compression technique, to train machine learning models with membership privacy. DMP combines several techniques to maximize membership privacy with only minor degradation in utility, and it is effective against attackers with either whitebox or blackbox access to the target model. We evaluate DMP through extensive experiments on different deep neural networks and various benchmark datasets, and show that it provides a significantly better tradeoff between inference resilience and classification performance than state-of-the-art membership inference defenses. For instance, a DMP-trained DenseNet achieves 65.3% classification accuracy at a 54.4% (54.7%) blackbox (whitebox) membership inference attack accuracy, while an adversarially regularized DenseNet achieves only 53.7% classification accuracy at a much worse 68.7% (69.5%) blackbox (whitebox) attack accuracy.
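The abstract leaves the DMP training pipeline implicit; the sketch below illustrates the general knowledge-distillation recipe it builds on: train an unprotected teacher model on the private data, then train the released student model only on the teacher's soft predictions over separate, non-private reference data. This is a minimal PyTorch sketch under those assumptions, not the authors' implementation; all names here (private_loader, reference_inputs, the temperature T, and the hyperparameters) are hypothetical placeholders.

```python
# Minimal sketch of a distillation-based membership-privacy defense.
# Hypothetical names throughout; not the paper's actual code.
import torch
import torch.nn.functional as F

def train_teacher(model, private_loader, epochs=50, lr=0.1):
    """Standard supervised training on the privacy-sensitive data."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for x, y in private_loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
    return model

@torch.no_grad()
def soft_labels(teacher, reference_inputs, T=4.0):
    """Teacher's tempered softmax outputs on non-private reference data.
    The temperature T smooths the distribution the student learns from."""
    teacher.eval()
    return F.softmax(teacher(reference_inputs) / T, dim=1)

def train_student(student, reference_inputs, targets, epochs=50, lr=0.1, T=4.0):
    """Fit the student only to the teacher's soft predictions; full-batch
    updates here for brevity. The student never touches the private set."""
    opt = torch.optim.SGD(student.parameters(), lr=lr, momentum=0.9)
    student.train()
    for _ in range(epochs):
        opt.zero_grad()
        log_p = F.log_softmax(student(reference_inputs) / T, dim=1)
        loss = F.kl_div(log_p, targets, reduction="batchmean")
        loss.backward()
        opt.step()
    return student
```

Because the student is trained only on soft predictions over non-member reference data, its parameters and outputs carry less per-sample membership signal than a model fit directly to the private records. The additional techniques DMP uses to maximize membership privacy (for example, how the reference data is chosen) are detailed in the full paper.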

Related research

11/02/2021 · Knowledge Cross-Distillation for Membership Privacy
03/02/2022 · MIAShield: Defending Membership Inference Attacks via Preemptive Exclusion of Members
09/01/2020 · Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries
10/31/2019 · Reducing audio membership inference attack accuracy to chance: 4 defenses
07/06/2020 · Sharing Models or Coresets: A Study based on Membership Inference Attack
12/07/2018 · Privacy Partitioning: Protecting User Data During the Deep Learning Inference Phase
03/14/2021 · Membership Inference Attacks on Machine Learning: A Survey
