
Improving Model Robustness with Transformation-Invariant Attacks

01/31/2019
by Houpu Yao, et al. (Arizona State University, Beihang University)

The vulnerability of neural networks to adversarial attacks has raised serious concerns and prompted extensive research. Recent studies suggest that model robustness relies on the use of robust features, i.e., features strongly correlated with labels, and that data dimensionality and distribution affect the learning of robust features. On the other hand, experiments show that human vision, which is robust against adversarial attacks, is invariant to natural input transformations. Drawing on these findings, this paper investigates whether constraints on transformation invariance, including image cropping, rotation, and zooming, can force image classifiers to learn and use robust features and in turn acquire better robustness. Experiments on MNIST and CIFAR10 show that transformation invariance alone has limited effect. Nonetheless, models adversarially trained on cropping-invariant attacks, in particular, can (1) extract more robust features, (2) achieve significantly better robustness than state-of-the-art adversarially trained models, and (3) require less training data.
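To make the idea of a transformation-invariant attack concrete, below is a minimal sketch of a PGD-style attack whose gradient step averages the loss over random crops, so the perturbation must remain effective under cropping. This is an illustrative reconstruction, not the paper's implementation; the function names, crop sizes, and hyperparameters are hypothetical.

```python
# Hypothetical sketch of a cropping-invariant PGD attack: the perturbation is
# optimized to fool the classifier on average over random crops, in the spirit
# of the transformation-invariant attacks described in the abstract.
# All names and hyperparameters are illustrative, not taken from the paper.
import torch
import torch.nn.functional as F


def random_crop_resize(x, crop=24, size=32):
    """Randomly crop a batch to crop x crop pixels, then resize back to size x size."""
    _, _, h, w = x.shape
    top = torch.randint(0, h - crop + 1, (1,)).item()
    left = torch.randint(0, w - crop + 1, (1,)).item()
    patch = x[:, :, top:top + crop, left:left + crop]
    return F.interpolate(patch, size=(size, size), mode="bilinear", align_corners=False)


def crop_invariant_pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10, n_crops=4):
    """PGD where each gradient step averages the loss over several random crops."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = 0.0
        for _ in range(n_crops):
            logits = model(random_crop_resize(x + delta))
            loss = loss + F.cross_entropy(logits, y)
        (loss / n_crops).backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()        # ascend the crop-averaged loss
            delta.clamp_(-eps, eps)                   # stay inside the L_inf ball
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep the adversarial image valid
        delta.grad.zero_()
    return (x + delta).detach()
```

Adversarial training on such attacks, as studied in the paper, would simply replace clean minibatches with the output of an attack like this during training.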

