Invariance vs. Robustness of Neural Networks

02/26/2020
by Sandesh Kamath et al.

We study the performance of neural network models under random geometric transformations and adversarial perturbations. Invariance means that the model's prediction remains unchanged when a geometric transformation is applied to the input. Adversarial robustness means that the model's prediction remains unchanged after a small adversarial perturbation of the input. In this paper, we show a quantitative trade-off between rotation invariance and robustness. We empirically study two cases: (a) how adversarial robustness changes as we improve only the invariance of equivariant models via training augmentation, and (b) how invariance changes as we improve only adversarial robustness via adversarial training. We observe that the rotation invariance of equivariant models (StdCNNs and GCNNs) improves with training augmentation using progressively larger random rotations, but their adversarial robustness drops progressively as it does, and very significantly so on MNIST. We take adversarially trained LeNet and ResNet models, which have good L_∞ adversarial robustness on MNIST and CIFAR-10 respectively, and observe that adversarial training with progressively larger perturbations results in a progressive drop in their rotation invariance profiles. Analogous to the accuracy vs. robustness trade-off known from previous work, we give a theoretical justification for the invariance vs. robustness trade-off observed in our experiments.
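The adversarial training referenced above typically generates L_∞-bounded perturbations with projected gradient descent (PGD) in its inner loop. As a minimal illustrative sketch (not the paper's code), the snippet below runs a PGD attack on a toy logistic model in NumPy; the model, loss, and all parameter values (`eps`, `alpha`, `steps`) are assumptions chosen only to show the mechanics of the sign-gradient step and the L_∞ projection.

```python
import numpy as np

def pgd_linf(x, y, grad_fn, eps=0.1, alpha=0.02, steps=10):
    """Craft an L_inf-bounded adversarial example by projected gradient ascent.

    x: clean input, y: label, grad_fn: gradient of the loss w.r.t. the input.
    Each step moves along the sign of the gradient (maximizing the loss),
    then projects back into the L_inf ball of radius eps around x.
    """
    x_adv = x.copy()
    for _ in range(steps):
        g = x_adv = x_adv + alpha * np.sign(grad_fn(x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # L_inf projection
    return x_adv

# Toy logistic model: loss L(x) = log(1 + exp(-y * w.x)).
w = np.array([1.0, -2.0, 0.5])

def loss(x, y):
    return np.log1p(np.exp(-y * np.dot(w, x)))

def grad_fn(x, y):
    # dL/dx = -y * w / (1 + exp(y * w.x))
    return -y * w / (1.0 + np.exp(y * np.dot(w, x)))

x = np.array([0.2, 0.1, -0.3])
y = 1.0
x_adv = pgd_linf(x, y, grad_fn, eps=0.1)
```

Adversarial training would then minimize the model's loss on such `x_adv` instead of (or in addition to) the clean `x`; the paper's observation is that making a model robust to these perturbations at progressively larger `eps` degrades its rotation invariance.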


Related research

10/03/2020 · Adversarial and Natural Perturbations for General Robustness
In this paper we aim to explore the general robustness of neural network...

06/08/2020 · On Universalized Adversarial and Invariant Perturbations
Convolutional neural networks or standard CNNs (StdCNNs) are translation...

10/30/2022 · FI-ODE: Certified and Robust Forward Invariance in Neural ODEs
We study how to certifiably enforce forward invariance properties in neu...

12/07/2017 · A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations
Recent work has shown that neural network-based vision classifiers exhib...

06/13/2022 · Towards Alternative Techniques for Improving Adversarial Robustness: Analysis of Adversarial Training at a Spectrum of Perturbations
Adversarial training (AT) and its variants have spearheaded progress in ...

09/24/2022 · A Simple Strategy to Provable Invariance via Orbit Mapping
Many applications require robustness, or ideally invariance, of neural n...

08/15/2021 · Deep Adversarially-Enhanced k-Nearest Neighbors
Recent works have theoretically and empirically shown that deep neural n...
