How Does a Deep Learning Model Architecture Impact Its Privacy?

10/20/2022
by Guangsheng Zhang, et al.

As a booming research area of the past decade, deep learning has been driven by big data collected and processed on an unprecedented scale. However, sensitive information in the collected training data raises privacy concerns. Recent research has shown that deep learning models are vulnerable to various privacy attacks, including membership inference attacks, attribute inference attacks, and gradient inversion attacks. Notably, the performance of these attacks varies from model to model. In this paper, we conduct empirical analyses to answer a fundamental question: Does model architecture affect model privacy? We investigate several representative model architectures, from CNNs to Transformers, and show that Transformers are generally more vulnerable to privacy attacks than CNNs. We further demonstrate that the micro design of activation layers, stem layers, and bias parameters is a major reason why CNNs are more resilient to privacy attacks than Transformers, and that the presence of attention modules is another reason why Transformers are more vulnerable. We hope our findings shed new light on how to defend against the investigated privacy attacks and help the community build privacy-friendly model architectures.
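To make the membership inference attack mentioned above concrete, the sketch below simulates the classic confidence-thresholding variant: because an overfit model tends to be more confident on its training members than on unseen samples, an attacker guesses "member" whenever the model's confidence exceeds a threshold. All distributions, means, and the threshold here are illustrative assumptions, not numbers from the paper.

```python
import random

random.seed(0)

# Hypothetical confidence scores: members (training samples) draw from a
# higher, tighter distribution than non-members, mimicking overfitting.
member_conf = [min(1.0, random.gauss(0.92, 0.05)) for _ in range(1000)]
nonmember_conf = [min(1.0, random.gauss(0.75, 0.10)) for _ in range(1000)]

def attack(confidence, threshold=0.85):
    """Guess 'member' when the model's confidence exceeds the threshold."""
    return confidence > threshold

# Attack accuracy: fraction of correct member/non-member guesses.
correct = sum(attack(c) for c in member_conf) + \
          sum(not attack(c) for c in nonmember_conf)
accuracy = correct / (len(member_conf) + len(nonmember_conf))
print(f"attack accuracy: {accuracy:.2f}")  # well above the 0.5 random baseline
```

In the paper's setting, the gap between the two confidence distributions, and hence the attack's accuracy, is what differs across architectures; a model whose member and non-member confidences overlap more is harder to attack.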

research
06/07/2022

Can CNNs Be More Robust Than Transformers?

The recent success of Vision Transformers is shaking the long dominance ...
research
09/05/2017

The Unintended Consequences of Overfitting: Training Data Inference Attacks

Machine learning algorithms that are applied to sensitive data pose a di...
research
02/05/2019

Disguised-Nets: Image Disguising for Privacy-preserving Deep Learning

Due to the high training costs of deep learning, model developers often ...
research
12/15/2022

Holistic risk assessment of inference attacks in machine learning

As machine learning expands its applications, there are more and more unign...
research
02/08/2021

Quantifying and Mitigating Privacy Risks of Contrastive Learning

Data is the key factor to drive the development of machine learning (ML)...
research
04/07/2023

Deepfake Detection with Deep Learning: Convolutional Neural Networks versus Transformers

The rapid evolvement of deepfake creation technologies is seriously thre...
research
05/25/2022

Breaking the Chain of Gradient Leakage in Vision Transformers

User privacy is of great concern in Federated Learning, while Vision Tra...