Shift Invariance Can Reduce Adversarial Robustness

03/03/2021 · by Songwei Ge, et al.

Shift invariance is a critical property of CNNs that improves classification performance. However, we show that invariance to circular shifts can also lead to greater sensitivity to adversarial attacks. We first characterize the margin between classes when a shift-invariant linear classifier is used, and show that the margin can depend only on the DC component of the signals. Then, using results about infinitely wide networks, we show that in some simple cases, fully connected and shift-invariant neural networks produce linear decision boundaries. Using this, we prove that shift invariance in neural networks produces adversarial examples for a simple two-class case in which each class consists of a single image with a black or white dot on a gray background. This is more than a curiosity: we show empirically that with real datasets and realistic architectures, shift invariance reduces adversarial robustness. Finally, we describe initial experiments using synthetic data to probe the source of this connection.
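The DC-component claim admits a short numerical sketch (not from the paper; variable names are illustrative). If a linear classifier w·x is invariant to every circular shift of its input, then w itself must be shift-invariant, i.e., constant, so the score depends only on the signal's mean (its DC component). Projecting an arbitrary weight vector onto the shift-invariant subspace by averaging over all circular shifts makes this visible:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Arbitrary linear classifier weights, projected onto the shift-invariant
# subspace by averaging over all n circular shifts.
w = rng.normal(size=n)
w_inv = np.mean([np.roll(w, k) for k in range(n)], axis=0)

# The projection is a constant vector, so the score is determined entirely
# by the DC component (sum / mean) of the input signal.
x = rng.normal(size=n)
score = w_inv @ x
dc_only = w.mean() * x.sum()

assert np.allclose(w_inv, w.mean())
assert np.allclose(score, dc_only)

# Robustness consequence: any zero-mean perturbation leaves the score
# unchanged, so the entire margin lies along the single DC direction,
# which a small mean shift can cross.
delta = rng.normal(size=n)
delta -= delta.mean()
assert np.allclose(w_inv @ (x + delta), score)
```

Because the margin collapses onto one direction, an adversary only needs to perturb the image's mean brightness, which is consistent with the black-dot/white-dot construction described in the abstract.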

Related research

- 11/01/2018 · Excessive Invariance Causes Adversarial Vulnerability: "Despite their impressive performance, deep neural networks exhibit strik..."
- 10/02/1998 · A Linear Shift Invariant Multiscale Transform: "This paper presents a multiscale decomposition algorithm. Unlike standar..."
- 03/14/2023 · Alias-Free Convnets: Fractional Shift Invariance via Polynomial Activations: "Although CNNs are believed to be invariant to translations, recent works..."
- 06/08/2020 · On Universalized Adversarial and Invariant Perturbations: "Convolutional neural networks or standard CNNs (StdCNNs) are translation..."
- 07/19/2022 · Assaying Out-Of-Distribution Generalization in Transfer Learning: "Since out-of-distribution generalization is a generally ill-posed proble..."
- 11/09/2020 · What Does CNN Shift Invariance Look Like? A Visualization Study: "Feature extraction with convolutional neural networks (CNNs) is a popula..."
- 07/05/2023 · GIT: Detecting Uncertainty, Out-Of-Distribution and Adversarial Samples using Gradients and Invariance Transformations: "Deep neural networks tend to make overconfident predictions and often re..."
