On the (Non-)Robustness of Two-Layer Neural Networks in Different Learning Regimes

03/22/2022
by Elvis Dohmatob, et al.

Neural networks are known to be highly sensitive to adversarial examples. These may arise from different factors, such as random initialization or spurious correlations in the learning problem. To better understand these factors, we provide a precise study of robustness and generalization in different scenarios, from initialization to the end of training in different regimes, as well as intermediate scenarios where initialization still plays a role due to "lazy" training. We consider over-parameterized networks in high dimensions with quadratic targets and infinite samples. Our analysis allows us to identify new trade-offs between generalization and robustness, whereby robustness can only get worse when generalization improves, and vice versa. We also show how linearized lazy training regimes can worsen robustness due to improperly scaled random initialization. Our theoretical results are illustrated with numerical experiments.
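The mechanism highlighted at the end of the abstract invites a small numerical illustration. The following is a minimal sketch, not the paper's exact setting: the width m, input dimension d, output scale alpha, and the use of the average input-gradient norm as a robustness proxy are all assumptions made for illustration. It measures the sensitivity of a randomly initialized two-layer ReLU network f(x) = (alpha/sqrt(m)) * sum_j a_j relu(w_j . x) to input perturbations; since lazy training keeps the network close to its linearization around initialization, an overly large initialization scale carries over to the trained model.

```python
# Minimal robustness sketch (illustration only, not the paper's exact setup).
# We measure a common robustness proxy, the average input-gradient norm
# E_x ||grad_x f(x)||, of a random two-layer ReLU network
#   f(x) = (alpha / sqrt(m)) * sum_j a_j * relu(w_j . x)
# for two choices of the (hypothetical) output scale alpha.
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 200, 1000, 500                      # input dim, width, test points

W = rng.standard_normal((m, d)) / np.sqrt(d)  # first-layer weights at init
a = rng.choice([-1.0, 1.0], size=m)           # second-layer signs at init
X = rng.standard_normal((n, d))               # random high-dimensional inputs

def avg_grad_norm(alpha):
    """Average ||grad_x f(x)|| over the test inputs."""
    pre = X @ W.T                    # (n, m) pre-activations w_j . x
    act = (pre > 0).astype(float)    # ReLU derivative 1{w_j . x > 0}
    # grad_x f(x) = (alpha / sqrt(m)) * sum_j a_j * 1{w_j . x > 0} * w_j
    grads = (alpha / np.sqrt(m)) * (act * a) @ W   # (n, d)
    return np.linalg.norm(grads, axis=1).mean()

for alpha in (1.0, 10.0):
    print(f"alpha = {alpha:4.1f}   E||grad_x f(x)|| ~ {avg_grad_norm(alpha):.3f}")
```

Because f is linear in alpha, the measured gradient norm scales linearly with it: a ten-fold larger initialization scale makes the model ten times more sensitive to worst-case input perturbations. This mirrors, in toy form, the abstract's point that an improperly scaled random initialization can worsen robustness in the linearized lazy regime.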
