Natural Perturbed Training for General Robustness of Neural Network Classifiers

03/21/2021
by   Sadaf Gulshad, et al.

We focus on the robustness of neural networks for classification. To permit a fair comparison between methods to achieve robustness, we first introduce a standard based on measuring a classifier's degradation. Then, we propose natural perturbed training to robustify the network. Natural perturbations will be encountered in practice: the difference between two images of the same object may be approximated by an elastic deformation (when they have slightly different viewing angles), by occlusions (when they hide differently behind objects), or by saturation changes, Gaussian noise, etc. Training some fraction of the epochs on random versions of such variations helps the classifier learn better. We conduct extensive experiments on six datasets of varying sizes and granularity. Natural perturbed training shows better and much faster performance than adversarial training on clean, adversarial, and natural perturbed images. It even improves general robustness on perturbations not seen during training. For CIFAR-10 and STL-10, natural perturbed training even improves the accuracy on clean data and reaches state-of-the-art performance. Ablation studies verify the effectiveness of natural perturbed training.
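The abstract describes training some fraction of the epochs on randomly drawn natural perturbations. Below is a minimal sketch of that idea in PyTorch, assuming torchvision transforms for the perturbations; the perturbation strengths, the `perturb_fraction` parameter, and the per-batch sampling are illustrative assumptions, not the authors' exact training schedule.

```python
# Sketch of natural perturbed training (assumptions: images are float tensors in [0, 1],
# perturbation settings and the per-batch sampling scheme are hypothetical).
import random
import torch
import torch.nn.functional as F
from torchvision import transforms

# Candidate natural perturbations; strengths below are illustrative, not from the paper.
natural_perturbations = [
    transforms.ElasticTransform(alpha=50.0),                    # elastic deformation
    transforms.RandomErasing(p=1.0, scale=(0.05, 0.2)),         # occlusion
    transforms.ColorJitter(saturation=(2.0, 2.0)),              # saturation change
    lambda x: (x + 0.1 * torch.randn_like(x)).clamp(0.0, 1.0),  # Gaussian noise
]

def train_epoch(model, loader, optimizer, device, perturb_fraction=0.5):
    """Train one epoch; a fraction of batches is replaced by a random natural perturbation."""
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        if random.random() < perturb_fraction:
            perturb = random.choice(natural_perturbations)
            images = perturb(images)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        optimizer.step()
```

The sketch keeps the labels unchanged and only perturbs the inputs, so the classifier is pushed to map clean and naturally perturbed views of an image to the same class.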

Related research

10/03/2020  Adversarial and Natural Perturbations for General Robustness
In this paper we aim to explore the general robustness of neural network...

07/20/2021  Built-in Elastic Transformations for Improved Robustness
We focus on building robustness in the convolutions of neural visual cla...

11/18/2021  Wiggling Weights to Improve the Robustness of Classifiers
Robustness against unwanted perturbations is an important aspect of depl...

05/15/2023  Exploiting Frequency Spectrum of Adversarial Images for General Robustness
In recent years, there has been growing concern over the vulnerability o...

01/29/2019  A Push-Pull Layer Improves Robustness of Convolutional Neural Networks
We propose a new layer in Convolutional Neural Networks (CNNs) to increa...

09/29/2020  Inverse Classification with Limited Budget and Maximum Number of Perturbed Samples
Most recent machine learning research focuses on developing new classifi...

01/22/2023  Provable Unrestricted Adversarial Training without Compromise with Generalizability
Adversarial training (AT) is widely considered as the most promising str...
