Are Adversarial Robustness and Common Perturbation Robustness Independent Attributes?

09/04/2019
by Alfred Laugros, et al.

Neural networks have been shown to be sensitive to common perturbations such as blur, Gaussian noise, and rotations. They are also vulnerable to artificial malicious corruptions called adversarial examples. The study of adversarial examples has recently become so popular that the term "robustness" is sometimes reduced to "adversarial robustness". Yet we do not know to what extent adversarial robustness is related to robustness in a broader sense. Similarly, we do not know whether robustness to various common perturbations, such as translations or contrast losses, could help against adversarial corruptions. We intend to study the links between the robustness of neural networks to both kinds of perturbation. With our experiments, we provide one of the first benchmarks designed to estimate the robustness of neural networks to common perturbations. We show that increasing robustness to carefully selected common perturbations can make neural networks more robust to unseen common perturbations. We also show that adversarial robustness and robustness to common perturbations are independent. These results make us believe that neural network robustness should be addressed in a broader sense.
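The distinction between the two kinds of corruption can be illustrated on a toy model. The sketch below is purely illustrative and is not the paper's code: it trains a small linear classifier with NumPy and compares its accuracy under a common perturbation (clipped Gaussian noise) and an adversarial perturbation (an FGSM-style step) of the same L-infinity budget. All names and parameter values here are assumptions chosen for the example.

```python
import numpy as np

# Hypothetical toy setup (not from the paper): two Gaussian blobs
# classified by a linear model fitted in closed form.
rng = np.random.default_rng(0)
n = 200
X = np.concatenate([rng.normal(-1.0, 0.5, (n, 2)),
                    rng.normal(+1.0, 0.5, (n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Least-squares fit of weights w and bias b on targets in {-1, +1}.
A = np.hstack([X, np.ones((2 * n, 1))])
w_b = np.linalg.lstsq(A, 2 * y - 1, rcond=None)[0]
w, b = w_b[:2], w_b[2]

def predict(X):
    return (X @ w + b > 0).astype(float)

def accuracy(X):
    return float(np.mean(predict(X) == y))

eps = 0.5  # shared L-infinity perturbation budget

# Common perturbation: isotropic Gaussian noise clipped to the budget.
X_noise = X + np.clip(rng.normal(0.0, eps, X.shape), -eps, eps)

# Adversarial perturbation: FGSM-style worst-case step within the same
# budget, moving each point against its margin y_pm * (x @ w + b).
y_pm = 2 * y - 1
grad = -y_pm[:, None] * w[None, :]  # gradient of the negative margin w.r.t. x
X_adv = X + eps * np.sign(grad)

print("clean:", accuracy(X))
print("noise:", accuracy(X_noise))
print("adversarial:", accuracy(X_adv))
```

With the same perturbation budget, the adversarial step typically degrades accuracy more than the random one, which is why robustness to the two cannot be assumed to coincide.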

