On Universalized Adversarial and Invariant Perturbations

06/08/2020
by   Sandesh Kamath, et al.

Convolutional neural networks, or standard CNNs (StdCNNs), are translation-equivariant models that achieve translation invariance when trained on data augmented with sufficient translations. Recent work on models equivariant to a given group of transformations (e.g., rotations) has led to group-equivariant convolutional neural networks (GCNNs); GCNNs trained on data augmented with sufficient rotations achieve rotation invariance. Recent work (arXiv:2002.11318) studies a trade-off between invariance and robustness to adversarial attacks. In another related work (arXiv:2005.08632), given any model and any input-dependent attack satisfying a certain spectral property, the authors propose a universalization technique, SVD-Universal, that produces a universal adversarial perturbation from very few test examples. In this paper, we study the effectiveness of SVD-Universal on GCNNs as they gain rotation invariance through a higher degree of training augmentation. We observe empirically that as GCNNs gain rotation invariance through training augmented with larger rotations, the fooling rate of SVD-Universal increases. To understand this phenomenon, we introduce universal invariant directions and study their relation to the universal adversarial direction produced by SVD-Universal.
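The core idea behind the universalization described above can be sketched in a few lines: stack the input-dependent attack directions from a handful of test examples into a matrix and take its top right singular vector as a single universal perturbation direction. The sketch below is a minimal illustration under assumptions, not the paper's exact procedure; the synthetic "attack directions" and the function name `svd_universal` are hypothetical stand-ins for gradients produced by a real attack on a real model.

```python
import numpy as np

def svd_universal(attack_directions, eps=0.05):
    """Hypothetical sketch of SVD-Universal: given one flattened
    input-dependent attack direction per test example, return the top
    right singular vector of the stacked matrix, scaled to L2 norm eps,
    as a single universal perturbation."""
    # Normalize each per-example direction so no single example dominates.
    M = np.stack([d.ravel() / np.linalg.norm(d) for d in attack_directions])
    # The top right singular vector captures the dominant shared direction.
    _, _, vt = np.linalg.svd(M, full_matrices=False)
    v = vt[0]
    return eps * v / np.linalg.norm(v)

# Toy usage with synthetic "gradients" from a handful of test examples:
# each direction is a common component plus small per-example noise.
rng = np.random.default_rng(0)
shared = rng.normal(size=64)
dirs = [shared + 0.1 * rng.normal(size=64) for _ in range(8)]
u = svd_universal(dirs, eps=0.05)
# u has shape (64,) and L2 norm eps, and (up to sign) aligns with the
# shared component across the few examples.
```

In practice the rows of the matrix would be attack perturbations (e.g., gradient directions) computed on a small sample of test inputs, and the resulting single vector is then applied to all inputs.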


Related research

02/26/2020 · Invariance vs. Robustness of Neural Networks
We study the performance of neural network models on random geometric tr...

05/18/2020 · Universalization of any adversarial attack using very few test examples
Deep learning models are known to be vulnerable not only to input-depend...

03/18/2021 · Stride and Translation Invariance in CNNs
Convolutional Neural Networks have become the standard for image classif...

01/31/2019 · Improving Model Robustness with Transformation-Invariant Attacks
Vulnerability of neural networks under adversarial attacks has raised se...

03/03/2021 · Shift Invariance Can Reduce Adversarial Robustness
Shift invariance is a critical property of CNNs that improves performanc...

06/29/2023 · Restore Translation Using Equivariant Neural Networks
Invariance to spatial transformations such as translations and rotations...

12/07/2017 · A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations
Recent work has shown that neural network-based vision classifiers exhib...
