Comprehensive Privacy Analysis of Deep Learning: Stand-alone and Federated Learning under Passive and Active White-box Inference Attacks

12/03/2018 · by Milad Nasr, et al.

Deep neural networks are susceptible to various inference attacks because they remember information about their training data. We perform a comprehensive analysis of white-box privacy inference attacks on deep learning models, measuring privacy leakage through the final model parameters as well as the parameter updates observed during training and fine-tuning. We design the attacks for both the stand-alone and federated settings, with respect to passive and active inference attackers, and under different assumptions about the adversary's prior knowledge. We design and evaluate novel white-box membership inference attacks against deep learning algorithms to measure their training-data membership leakage. We show that a straightforward extension of the known black-box attacks to the white-box setting (analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting that exploit the privacy vulnerabilities of the stochastic gradient descent (SGD) algorithm, which is widely used to train deep neural networks. By analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset, we show that even well-generalized models are significantly susceptible to white-box membership inference attacks. We also show how adversarial participants in a federated learning setting can run active membership inference attacks against other participants, even when the global model achieves high prediction accuracy.
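To make the gradient-based signal concrete, here is a minimal sketch of the kind of white-box feature the abstract describes, assuming a trained PyTorch classifier. The paper's attack feeds per-example gradients (together with losses and activations) into a learned attack model; this sketch illustrates only the core SGD-based signal with a simple gradient-norm threshold, and all names (`model`, `grad_norm_feature`, `infer_membership`, `threshold`) are illustrative, not the authors' API.

import torch
import torch.nn.functional as F

def grad_norm_feature(model, x, y):
    """L2 norm of the loss gradient w.r.t. all trainable parameters for one
    labeled example (x, y). Because SGD drives this gradient toward zero on
    training points, members tend to yield smaller norms than non-members."""
    loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    return torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()

def infer_membership(model, x, y, threshold):
    """Predict 'member' when the white-box gradient signal falls below a
    threshold calibrated on data with known membership status."""
    return grad_norm_feature(model, x, y) < threshold

In the full attack this scalar would be replaced by the layer-wise gradient tensors themselves, processed by a dedicated attack network rather than a fixed threshold.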
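The active federated attack mentioned at the end of the abstract can be sketched in the same spirit: an adversarial participant crafts its uploaded update by running gradient ascent on target records, so that if a target is in another participant's training set, the global model's subsequent descent on it produces a distinctive, detectable reaction. The sketch below assumes the adversary locally holds the current global `model`; `targets` and `gamma` are illustrative names.

import torch
import torch.nn.functional as F

def craft_malicious_update(model, targets, gamma=1.0):
    """Ascend the loss on the target records in place, then return the
    parameter vector the adversarial participant would upload to the server."""
    xs = torch.stack([x for x, _ in targets])
    ys = torch.tensor([y for _, y in targets])
    model.zero_grad()
    F.cross_entropy(model(xs), ys).backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.add_(gamma * p.grad)  # ascent: deliberately increase loss on targets
    return [p.detach().clone() for p in model.parameters()]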


Related research

05/06/2021 · Membership Inference Attacks on Deep Regression Models for Neuroimaging
Ensuring the privacy of research participants is vital, even more so in ...

06/08/2023 · Re-aligning Shadow Models can Improve White-box Membership Inference Attacks
Machine learning models have been shown to leak sensitive information ab...

04/16/2021 · Membership Inference Attack Susceptibility of Clinical Language Models
Deep Neural Network (DNN) models have been shown to have high empirical ...

05/14/2022 · Evaluating Membership Inference Through Adversarial Robustness
The usage of deep learning is being escalated in many applications. Due ...

08/01/2022 · On the Evaluation of User Privacy in Deep Neural Networks using Timing Side Channel
Recent Deep Learning (DL) advancements in solving complex real-world tas...

05/20/2023 · Stability, Generalization and Privacy: Precise Analysis for Random and NTK Features
Deep learning models can be vulnerable to recovery attacks, raising priv...

07/27/2022 · Membership Inference Attacks via Adversarial Examples
The raise of machine learning and deep learning led to significant impro...
