Uniform Convergence, Adversarial Spheres and a Simple Remedy

05/07/2021
by Gregor Bachmann, et al.

Previous work has cast doubt on the general framework of uniform convergence and its ability to explain generalization in neural networks. By considering a specific dataset, it was observed that a neural network completely misclassifies a projection of the training data (the adversarial set), rendering any existing generalization bound based on uniform convergence vacuous. We provide an extensive theoretical investigation of this previously studied data setting through the lens of infinitely-wide models. We prove that the Neural Tangent Kernel (NTK) suffers from the same phenomenon and we uncover its origin. We highlight the important role of the output bias and show, both theoretically and empirically, how a sensible choice completely mitigates the problem. We identify sharp phase transitions in the accuracy on the adversarial set and study their dependence on the training sample size, which allows us to characterize critical sample sizes beyond which the effect disappears. Moreover, we decompose a neural network into clean and noisy parts via its canonical eigenfunction decomposition and show empirically that when the bias is too small, the adversarial phenomenon persists.
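The "projection of the training data" underlying the adversarial set can be illustrated in a concentric-spheres setting: each training point on one sphere is projected radially onto the other sphere, where the true label flips. The sketch below is a hypothetical toy construction, not the paper's exact setup; the radii `R_INNER`/`R_OUTER`, the 2D points, and the helper `project_to_outer` are all illustrative assumptions.

```python
import math

# Hypothetical sphere radii for a toy concentric-spheres dataset.
R_INNER, R_OUTER = 1.0, 1.3

def project_to_outer(x, r_outer=R_OUTER):
    """Radially project a point onto the sphere of radius r_outer."""
    norm = math.sqrt(sum(c * c for c in x))
    return tuple(c * r_outer / norm for c in x)

# A few training points on the inner sphere (true label -1).
train = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]

# The "adversarial set": the same points pushed onto the outer sphere,
# where the true label flips to +1. A model that memorizes the training
# points with label -1 can misclassify every projected point, which is
# the failure mode the abstract describes.
adversarial = [project_to_outer(x) for x in train]
```

Evaluating a trained classifier on `adversarial` (with the flipped labels) then measures exactly the quantity whose phase transitions the paper studies.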

