Theoretical evidence for adversarial robustness through randomization: the case of the Exponential family

02/04/2019
by   Rafael Pinot, et al.

This paper investigates the theory of robustness against adversarial attacks. It focuses on the family of randomization techniques that consist in injecting noise into the network at inference time. These techniques have proven effective in many contexts, but lack theoretical justification. We close this gap by presenting a theoretical analysis of these approaches, hence explaining why they perform well in practice. More precisely, we provide the first result relating the randomization rate to robustness against adversarial attacks. This result applies to the general family of exponential distributions, and thus extends and unifies previous approaches. We support our theoretical claims with a set of experiments.
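To make the setting concrete, here is a minimal sketch of inference-time randomization, assuming a toy linear classifier and Gaussian noise as one member of the Exponential family; the names `predict`, `randomized_predict`, `sigma`, and `n_samples` are illustrative only and do not reflect the authors' implementation.

```python
import numpy as np

def predict(x, weights):
    # Hypothetical deterministic classifier: one linear layer + softmax.
    logits = x @ weights
    e = np.exp(logits - logits.max())
    return e / e.sum()

def randomized_predict(x, weights, sigma=0.25, n_samples=50, rng=None):
    """Inference-time randomization: average predictions over copies of the
    input perturbed with exponential-family noise (Gaussian here; Laplace
    is another common choice)."""
    rng = np.random.default_rng() if rng is None else rng
    probs = np.zeros(weights.shape[1])
    for _ in range(n_samples):
        noisy_x = x + rng.normal(scale=sigma, size=x.shape)  # inject noise
        probs += predict(noisy_x, weights)
    return probs / n_samples

# Usage: a larger sigma (higher randomization rate) tends to trade clean
# accuracy for robustness to small adversarial perturbations.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3))
x = rng.normal(size=8)
print(randomized_predict(x, w, sigma=0.25, rng=rng))
```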

