One Parameter Defense – Defending against Data Inference Attacks via Differential Privacy

03/13/2022
by   Dayong Ye, et al.

Machine learning models are vulnerable to data inference attacks, such as membership inference and model inversion attacks. In these attacks, an adversary uses the confidence score vector predicted by the target model to infer whether a data record was a member of the training dataset, or even to reconstruct the record itself. However, most existing defense methods protect only against membership inference attacks, and methods that can combat both types of attack require a new model to be trained, which may not be time-efficient. In this paper, we propose a differentially private defense method that handles both types of attack in a time-efficient manner by tuning only one parameter, the privacy budget. The central idea is to modify and normalize the confidence score vectors with a differential privacy mechanism that preserves privacy and obscures both membership and reconstructed data. Moreover, this method preserves the order of scores in the vector, so classification accuracy is not affected. Experimental results show the method to be an effective and timely defense against both membership inference and model inversion attacks, with no reduction in accuracy.
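The abstract's central idea can be illustrated with a minimal sketch: perturb a confidence score vector with noise calibrated to a privacy budget epsilon, re-normalize it, and then restore the original rank order so the predicted label (and the full ranking) is unchanged. This is not the authors' exact mechanism; the Laplace noise, the clipping constant, and the rank-restoration step here are illustrative assumptions.

```python
import numpy as np

def perturb_confidence_scores(scores, epsilon, rng=None):
    """Illustrative sketch of an order-preserving, noise-based
    perturbation of a confidence score vector.

    Assumptions (not from the paper): Laplace noise with scale
    1/epsilon, clipping to keep scores positive, and explicit
    restoration of the original rank order.
    """
    rng = np.random.default_rng() if rng is None else rng
    scores = np.asarray(scores, dtype=float)

    # Add noise scaled inversely to the privacy budget: a smaller
    # epsilon means more noise and stronger obfuscation.
    noisy = scores + rng.laplace(scale=1.0 / epsilon, size=scores.shape)

    # Clip to keep scores positive, then normalize back to a
    # probability distribution.
    noisy = np.clip(noisy, 1e-8, None)
    noisy /= noisy.sum()

    # Restore the original ordering: place the sorted noisy values
    # at the positions that held the sorted original values, so the
    # argmax (and the whole ranking) matches the unperturbed vector.
    order = np.argsort(scores)
    out = np.empty_like(noisy)
    out[order] = np.sort(noisy)
    return out
```

A smaller epsilon moves the released vector further from the true one while the rank-restoration step guarantees the classifier's top-1 prediction never changes, which is the property the abstract credits for avoiding any loss in classification accuracy.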


Related research

- 10/11/2021 — Generalization Techniques Empirically Outperform Differential Privacy against Membership Inference
- 07/28/2020 — Label-Only Membership Inference Attacks
- 12/01/2022 — Purifier: Defending Data Inference Attacks via Transforming Confidence Scores
- 03/16/2021 — The Influence of Dropout on Membership Inference in Differentially Private Models
- 06/01/2022 — Privacy for Free: How does Dataset Condensation Help Privacy?
- 09/11/2020 — Improving Robustness to Model Inversion Attacks via Mutual Information Regularization
- 06/11/2021 — A Shuffling Framework for Local Differential Privacy
