NeuGuard: Lightweight Neuron-Guided Defense against Membership Inference Attacks

06/11/2022
by Nuo Xu, et al.

Membership inference attacks (MIAs) against machine learning models can pose serious privacy risks for the dataset used to train the model. In this paper, we propose NeuGuard, a novel and effective Neuron-Guided Defense against MIAs. We identify a key weakness in existing defense mechanisms: they cannot simultaneously defend against the two commonly used neural-network-based MIAs, which indicates that both attacks should be evaluated separately to ensure a defense is effective. NeuGuard jointly controls the output and inner neurons' activations, with the objective of guiding the model's outputs on the training set and the test set toward close distributions. It consists of class-wise variance minimization, which restricts the final output neurons, and layer-wise balanced output control, which constrains the inner neurons of each layer. We evaluate NeuGuard against two neural-network-based MIAs and five of the strongest metric-based MIAs, including the recently proposed label-only MIA, on three benchmark datasets, and compare it with state-of-the-art defenses. Results show that NeuGuard outperforms these defenses by offering a much better utility-privacy trade-off, greater generality, and lower overhead.
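To make the joint regularization idea concrete, the sketch below is only an illustrative approximation, not the paper's implementation. It assumes a PyTorch classifier that also returns its hidden-layer activations; the helper names (class_wise_variance, layer_balance, neuguard_style_loss) and the weights alpha and beta are hypothetical. It combines cross-entropy with (1) the variance of the softmax outputs computed per class within a mini-batch and (2) a penalty that pushes each hidden layer's activations toward their layer mean.

```python
# Hedged sketch of a NeuGuard-style training loss (assumptions noted above).
import torch
import torch.nn.functional as F


def class_wise_variance(probs, labels, num_classes):
    """Mean variance of softmax outputs, computed separately for each class."""
    var_sum, n_groups = 0.0, 0
    for c in range(num_classes):
        group = probs[labels == c]
        if group.shape[0] > 1:                      # variance needs >= 2 samples
            var_sum = var_sum + group.var(dim=0, unbiased=False).mean()
            n_groups += 1
    return var_sum / max(n_groups, 1)


def layer_balance(activations):
    """Penalize hidden activations that deviate from their per-layer mean,
    a rough stand-in for 'layer-wise balanced output control'."""
    loss = 0.0
    for act in activations:                          # list of hidden-layer tensors
        flat = act.flatten(start_dim=1)
        loss = loss + (flat - flat.mean(dim=1, keepdim=True)).pow(2).mean()
    return loss / max(len(activations), 1)


def neuguard_style_loss(logits, activations, labels, num_classes,
                        alpha=0.1, beta=0.01):       # alpha/beta: illustrative weights
    """Cross-entropy plus the two regularizers described in the abstract."""
    probs = F.softmax(logits, dim=1)
    ce = F.cross_entropy(logits, labels)
    return (ce
            + alpha * class_wise_variance(probs, labels, num_classes)
            + beta * layer_balance(activations))
```

Both regularizers nudge the model's outputs on training samples toward the tighter, more uniform distributions typically seen on unseen data, which is the intuition behind guiding training-set and test-set output distributions to be close.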


