Adversarial Vulnerability of Neural Networks Increases With Input Dimension

Over the past four years, neural networks have proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions. We show that adversarial vulnerability increases with the gradients of the training objective when seen as a function of the inputs. For most current network architectures, we prove that the ℓ_1-norm of these gradients grows as the square root of the input size. These nets therefore become increasingly vulnerable with growing image size. Over the course of our analysis we rediscover and generalize double-backpropagation, a technique that penalizes large gradients in the loss surface to reduce adversarial vulnerability and increase generalization performance. We show that this regularization scheme is equivalent at first order to training with adversarial noise. Finally, we demonstrate that replacing strided layers by average-pooling layers decreases adversarial vulnerability. Our proofs rely on the network's weight distribution at initialization, but extensive experiments confirm their conclusions after training.
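
At first order, the link between gradient norms and vulnerability follows from duality: for a perturbation δ with ℓ_∞-norm at most ε, the loss changes by roughly ⟨∇_x L, δ⟩, which is at most ε times the ℓ_1-norm of the input gradient; if that norm grows like the square root of the input dimension, so does the worst-case first-order loss increase. Below is a minimal sketch of the gradient-penalty ("double-backpropagation") regularizer described in the abstract, written in PyTorch; the names model, loss_fn and lambda_reg are illustrative assumptions, not the authors' released code.

import torch

def double_backprop_loss(model, loss_fn, x, y, lambda_reg=0.01):
    # Standard task loss, with gradients tracked on the input.
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), y)

    # Gradient of the loss w.r.t. the input; create_graph=True keeps the
    # graph so the penalty itself can be backpropagated ("double backprop").
    grad_x, = torch.autograd.grad(loss, x, create_graph=True)

    # Penalize the ℓ_1-norm of the input gradient, which at first order
    # bounds the loss change under an ℓ_∞-bounded perturbation.
    penalty = grad_x.abs().flatten(start_dim=1).sum(dim=1).mean()

    return loss + lambda_reg * penalty

# Illustrative usage (hypothetical model and data):
#   model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
#   loss = double_backprop_loss(model, torch.nn.functional.cross_entropy, x, y)
#   loss.backward()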
