Robustness Threats of Differential Privacy

12/14/2020
by Nurislam Tursynbek et al.

Differential privacy is the gold-standard framework for measuring and guaranteeing privacy in data analysis. It is well known that differential privacy reduces model accuracy; however, it is unclear how it affects the model's security from a robustness point of view. In this paper, we empirically observe an interesting trade-off between differential privacy and the security of neural networks. Standard neural networks are vulnerable to input perturbations, whether adversarial attacks or common corruptions. We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts. To explore this, we extensively study several robustness measures, including FGSM and PGD adversaries, distance to linear decision boundaries, curvature profile, and performance on a corrupted dataset. Finally, we study how the main ingredients of differentially private training, gradient clipping and noise addition, affect (decrease or increase) the robustness of the model.
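The two "main ingredients" the abstract refers to come from the DP-SGD aggregation step: each per-example gradient is clipped to a fixed norm, the clipped gradients are summed, and calibrated Gaussian noise is added before averaging. A minimal numpy sketch of that step is below; the function name, the 1e-12 stabilizer, and the numpy-based setup are illustrative assumptions, not code from the paper.

```python
import numpy as np

def dp_sgd_gradient(per_example_grads, clip_norm, noise_multiplier, rng):
    """DP-SGD aggregation: clip each per-example gradient to clip_norm,
    sum the clipped gradients, add Gaussian noise scaled by
    noise_multiplier * clip_norm, and average over the batch."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only if the gradient exceeds the clipping norm.
        scale = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append(g * scale)
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

# Example: one gradient gets clipped (norm 5 -> 1), the other is untouched.
rng = np.random.default_rng(0)
grads = [np.array([3.0, 4.0]), np.array([0.0, 0.5])]
noisy_avg = dp_sgd_gradient(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```

Clipping bounds each example's influence on the update (the sensitivity), which is exactly what the paper probes when asking how clipping and noise separately change robustness.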


Related research

05/17/2021 | Gradient Masking and the Underestimated Robustness Threats of Differential Privacy in Deep Learning
An important problem in deep learning is the privacy and security of neu...

10/30/2019 | Fault Tolerance of Neural Networks in Adversarial Settings
Artificial Intelligence systems require a through assessment of differen...

01/06/2022 | Learning to be adversarially robust and differentially private
We study the difficulties in learning that arise from robust and differe...

07/04/2021 | Certifiably Robust Interpretation via Renyi Differential Privacy
Motivated by the recent discovery that the interpretation maps of CNNs c...

03/18/2021 | Super-convergence and Differential Privacy: Training faster with better privacy guarantees
The combination of deep neural networks and Differential Privacy has bee...

05/18/2019 | Quantifying Differential Privacy of Gossip Protocols in General Networks
In this work, we generalize the study of quantifying the differential pr...

06/19/2019 | A unified view on differential privacy and robustness to adversarial examples
This short note highlights some links between two lines of research with...
