A single gradient step finds adversarial examples on random two-layers neural networks

04/08/2021
by Sébastien Bubeck, et al.

Daniely and Schacham recently showed that gradient descent finds adversarial examples on random undercomplete two-layer ReLU neural networks. The term "undercomplete" refers to the fact that their proof holds only when the number of neurons is a vanishing fraction of the ambient dimension. We extend their result to the overcomplete case, where the number of neurons is larger than the dimension (while remaining subexponential in it). In fact, we prove that a single step of gradient descent suffices. We also establish this result for random neural networks of any subexponential width with a smooth activation function.
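The single-gradient-step attack the abstract describes can be illustrated on a toy random network. The sketch below is not the paper's construction: the Gaussian weights, sign second layer, 1/√d and 1/√m scalings, and step-size rule are illustrative assumptions. Since a bias-free ReLU network is piecewise linear, a step of length 2|f(x)|/‖∇f(x)‖² against the sign of the output changes the sign of f exactly in the linearized model; the only error comes from ReLU activations that flip along the way.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 100, 1000  # ambient dimension and width (overcomplete: m > d)

# Random two-layer ReLU network f(x) = a . relu(W x). The initialization
# and scalings here are illustrative choices, not the paper's.
W = rng.normal(size=(m, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)

def f(x):
    return a @ np.maximum(W @ x, 0.0)

def grad_f(x):
    # Gradient of f at x (where defined): sum_i a_i * 1{w_i . x > 0} * w_i.
    active = (W @ x > 0.0).astype(float)
    return (a * active) @ W

# A random input on the unit sphere.
x = rng.normal(size=d)
x /= np.linalg.norm(x)

# One gradient step against the sign of the output; the step size makes
# the linearized output exactly flip sign.
g = grad_f(x)
eta = 2.0 * abs(f(x)) / np.dot(g, g)
x_adv = x - np.sign(f(x)) * eta * g

print(f(x), f(x_adv), np.linalg.norm(x_adv - x))
```

In high dimension the step is typically short compared to ‖x‖ = 1 (here on the order of |f(x)|/‖g‖ ≈ d^{-1/2}), which is what makes the perturbation adversarial rather than a wholesale change of input.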


Related research:

- 10/28/2020: Most ReLU Networks Suffer from ℓ^2 Adversarial Perturbations
  We consider ReLU networks with random weights, in which the dimension de...

- 06/15/2018: Random depthwise signed convolutional neural networks
  Random weights in convolutional neural networks have shown promising res...

- 03/31/2022: Adversarial Examples in Random Neural Networks with General Activations
  A substantial body of empirical work documents the lack of robustness in...

- 08/13/2023: Separable Gaussian Neural Networks: Structure, Analysis, and Function Approximations
  The Gaussian-radial-basis function neural network (GRBFNN) has been a po...

- 11/03/2021: Regularization by Misclassification in ReLU Neural Networks
  We study the implicit bias of ReLU neural networks trained by a variant ...

- 07/18/2023: Can Neural Network Memorization Be Localized?
  Recent efforts at explaining the interplay of memorization and generaliz...

- 06/07/2022: Adversarial Reprogramming Revisited
  Adversarial reprogramming, introduced by Elsayed, Goodfellow, and Sohl-D...
