Differentially Private Adversarial Robustness Through Randomized Perturbations

09/27/2020
by   Nan Xu, et al.

Deep Neural Networks, despite their great success in diverse domains, are provably sensitive to small perturbations of correctly classified examples, which lead to erroneous predictions. Recently, it was proposed that this behavior can be combated by optimizing the worst-case loss over all possible substitutions of training examples. However, this approach is prone to weighting unlikely substitutions more heavily, which limits the accuracy gain. In this paper, we study adversarial robustness through randomized perturbations, which has two immediate advantages: (1) by ensuring that the likelihood of a substitution is weighted by its proximity to the original word, we circumvent optimizing worst-case guarantees and achieve performance gains; and (2) the calibrated randomness yields differentially private model training, which further improves robustness against adversarial attacks on the model outputs. Our approach uses a novel density-based mechanism based on truncated Gumbel noise, which ensures training on substitutions of both rare and dense words in the vocabulary while maintaining semantic similarity for model robustness.
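The core idea in the abstract is to replace worst-case substitution with substitutions sampled under noise calibrated to word proximity. The sketch below illustrates that flavor of mechanism using the standard Gumbel-max trick over embedding distances; the function name, the top-k restriction standing in for truncation, and the temperature parameter are illustrative assumptions, not the paper's exact density-based mechanism.

```python
import numpy as np

def sample_substitution(word_idx, embeddings, temperature=1.0, top_k=20, rng=None):
    """Sample a replacement word via the Gumbel-max trick (illustrative only).

    Utilities are negative embedding distances, so nearby (semantically
    similar) words are more likely to be chosen; adding Gumbel(0, 1) noise
    and taking the argmax draws from the softmax over those utilities,
    i.e. an exponential-mechanism-style randomized selection.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Proximity-based utility: closer words receive higher scores.
    dists = np.linalg.norm(embeddings - embeddings[word_idx], axis=1)
    utilities = -dists / temperature

    # Restrict to the nearest candidates (an assumed stand-in for the
    # paper's truncation, keeping substitutions semantically close even
    # for rare words with few near neighbors).
    candidates = np.argsort(dists)[:top_k]

    # Gumbel-max: argmax of utility + Gumbel noise samples from the
    # softmax distribution over the candidate utilities.
    gumbel = rng.gumbel(size=candidates.shape)
    return int(candidates[np.argmax(utilities[candidates] + gumbel)])
```

In such a scheme, each training token would be independently replaced by a sampled neighbor before the loss is computed, so the model trains on likely substitutions rather than the single worst-case one.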


Related research

06/17/2020 · Smoothed Analysis of Online and Differentially Private Learning
Practical and pervasive needs for robustness and privacy in algorithms h...

06/14/2023 · Augment then Smooth: Reconciling Differential Privacy with Certified Robustness
Machine learning models are susceptible to a variety of attacks that can...

03/02/2021 · DPlis: Boosting Utility of Differentially Private Deep Learning via Randomized Smoothing
Deep learning techniques have achieved remarkable performance in wide-ra...

03/19/2020 · RAB: Provable Robustness Against Backdoor Attacks
Recent studies have shown that deep neural networks (DNNs) are vulnerabl...

08/15/2023 · Enhancing the Antidote: Improved Pointwise Certifications against Poisoning Attacks
Poisoning attacks can disproportionately influence model behaviour by ma...

09/03/2019 · Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation
Neural networks are part of many contemporary NLP systems, yet their emp...
