Adversarial Examples in Multi-Layer Random ReLU Networks

06/23/2021
by   Peter L. Bartlett, et al.

We consider the phenomenon of adversarial examples in ReLU networks with independent Gaussian parameters. For networks of constant depth and with a large range of widths (for instance, it suffices if the width of each layer is polynomial in that of any other layer), small perturbations of input vectors lead to large changes of outputs. This generalizes results of Daniely and Schacham (2020) for networks of rapidly decreasing width and of Bubeck et al. (2021) for two-layer networks. The proof shows that adversarial examples arise in these networks because the functions they compute are very close to linear. Bottleneck layers in the network play a key role: the minimal width up to some point in the network determines the scales and sensitivities of the mappings computed up to that point. The main result is for networks of constant depth, but we also show that some constraint on depth is necessary for a result of this kind, because there are suitably deep networks that, with constant probability, compute a function close to constant.
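The near-linearity claim above can be illustrated with a small numerical sketch. The network below uses independent Gaussian weights with He-style variance scaling (an illustrative choice, not necessarily the paper's exact normalization), and the widths, depth, and perturbation size are arbitrary picks for the demo. Because the function is close to linear near a random input, a small step in the gradient direction changes the output far more than a random step of the same norm:

```python
import numpy as np

rng = np.random.default_rng(0)
d, width, depth = 500, 500, 3  # illustrative sizes, not from the paper

# Independent Gaussian weights; variance 2/fan_in roughly preserves norms.
weights = [rng.normal(0.0, np.sqrt(2.0 / d), (width, d))]
weights += [rng.normal(0.0, np.sqrt(2.0 / width), (width, width))
            for _ in range(depth - 2)]
w_out = rng.normal(0.0, np.sqrt(1.0 / width), width)

def value_and_grad(x):
    """Forward pass plus manual backprop through the ReLU masks."""
    h, masks = x, []
    for W in weights:
        z = W @ h
        masks.append((W, z > 0))  # active-neuron mask for backprop
        h = np.maximum(z, 0.0)
    y = w_out @ h
    g = w_out.copy()
    for W, m in reversed(masks):
        g = W.T @ (g * m)  # chain rule through the ReLU layer
    return y, g

x = rng.normal(size=d)
x /= np.linalg.norm(x)
y0, g = value_and_grad(x)

eps = 0.1  # perturbation norm, small relative to ||x|| = 1
adv = eps * g / np.linalg.norm(g)       # step along the gradient
rnd = rng.normal(size=d)
rnd = eps * rnd / np.linalg.norm(rnd)   # random step of the same norm

y_adv, _ = value_and_grad(x + adv)
y_rnd, _ = value_and_grad(x + rnd)
print(abs(y_adv - y0), abs(y_rnd - y0))
```

For a roughly linear function, the gradient-direction change is larger than the random-direction change by a factor on the order of the square root of the input dimension, which is the geometry behind the adversarial perturbations studied in the paper.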


Related research

- Adversarial Examples in Random Neural Networks with General Activations (03/31/2022)
- A Boundary Tilting Persepective on the Phenomenon of Adversarial Examples (08/27/2016)
- A simple geometric proof for the benefit of depth in ReLU networks (01/18/2021)
- Most ReLU Networks Suffer from ℓ^2 Adversarial Perturbations (10/28/2020)
- Width is Less Important than Depth in ReLU Neural Networks (02/08/2022)
- Probabilistic bounds on data sensitivity in deep rectifier networks (07/13/2020)
- Verifying the Causes of Adversarial Examples (10/19/2020)
