Adversarial and Natural Perturbations for General Robustness

10/03/2020
by Sadaf Gulshad, et al.

In this paper we explore the general robustness of neural network classifiers using adversarial as well as natural perturbations. Unlike previous works, which mainly focus on the robustness of neural networks against adversarial perturbations, we also evaluate their robustness to natural perturbations before and after robustification. After standardizing the comparison between adversarial and natural perturbations, we show that although adversarial training improves performance against adversarial perturbations, it causes a drop in performance on naturally perturbed samples as well as on clean samples. In contrast, training with natural perturbations such as elastic deformations, occlusions, and waves not only improves performance against natural perturbations, but also against adversarial perturbations, without reducing accuracy on clean images.
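As a concrete illustration of one of the natural perturbations mentioned above, the sketch below applies a random elastic deformation to a 2-D image, in the spirit of the classic smoothed-displacement-field recipe. This is an illustrative sketch, not the authors' implementation; the function name and the `alpha`/`sigma` parameters are assumptions chosen for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image, alpha=34.0, sigma=4.0, seed=0):
    """Elastically deform a 2-D image (illustrative sketch).

    Random per-pixel displacements are smoothed with a Gaussian
    filter (sigma controls smoothness) and scaled by alpha
    (controls magnitude), then the image is resampled along the
    displaced coordinates with bilinear interpolation.
    """
    rng = np.random.default_rng(seed)
    # Smoothed random displacement fields in x and y.
    dx = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    # Displaced sampling grid.
    y, x = np.meshgrid(np.arange(image.shape[0]),
                       np.arange(image.shape[1]), indexing="ij")
    coords = np.array([y + dy, x + dx])
    return map_coordinates(image, coords, order=1, mode="reflect")
```

Such a transform can be dropped into a training pipeline as a data-augmentation step, analogous to how adversarially perturbed samples are injected during adversarial training.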

Related research:

- Natural Perturbed Training for General Robustness of Neural Network Classifiers (03/21/2021): We focus on the robustness of neural networks for classification. To per...
- Invariance vs. Robustness of Neural Networks (02/26/2020): We study the performance of neural network models on random geometric tr...
- Improving the robustness of ImageNet classifiers using elements of human visual cognition (06/20/2019): We investigate the robustness properties of image recognition models equ...
- Evaluating Adversarial Robustness with Expected Viable Performance (09/18/2023): We introduce a metric for evaluating the robustness of a classifier, wit...
- Neural Population Geometry Reveals the Role of Stochasticity in Robust Perception (11/12/2021): Adversarial examples are often cited by neuroscientists and machine lear...
- Built-in Elastic Transformations for Improved Robustness (07/20/2021): We focus on building robustness in the convolutions of neural visual cla...
- A Little Fog for a Large Turn (01/16/2020): Small, carefully crafted perturbations called adversarial perturbations ...