Rethink the Connections among Generalization, Memorization and the Spectral Bias of DNNs

04/29/2020
by Xiao Zhang, et al.

Over-parameterized deep neural networks (DNNs) with sufficient capacity to memorize random noise can achieve excellent generalization performance on normal datasets, challenging the bias-variance trade-off in classical learning theory. Recent studies have claimed that DNNs first learn simple patterns and then memorize noise; other works have shown that DNNs exhibit a spectral bias, learning target functions from low to high frequencies during training. These findings suggest a connection among generalization, memorization and the spectral bias of DNNs: the low-frequency components in the input space represent the patterns that generalize, whereas the high-frequency components represent the noise that must be memorized. However, we show that this is not the case: under the experimental setup of deep double descent, the high-frequency components of DNNs begin to diminish during the second descent, while the examples with random labels are still being memorized. Moreover, we find that the spectrum of DNNs can be used to monitor test behavior; for example, it can indicate when the second descent of the test error starts, even though the spectrum is computed from the training set only.
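The spectrum discussed in the abstract is the frequency content of the network's learned function, estimated from the training inputs. Below is a minimal sketch of the general idea, not the authors' exact protocol: it trains a small fully connected network on a noisy 1D regression problem and tracks how much of the prediction function's energy lies below versus above an illustrative frequency cutoff, so the low-to-high-frequency progression of spectral bias can be observed. The architecture, cutoff, learning rate, and toy dataset are all assumptions chosen for illustration.

```python
# Minimal sketch (illustrative only): track the frequency content of a small
# DNN's learned function on a 1D toy problem during training.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy training set: a low-frequency signal plus label noise on a uniform 1D grid.
x = torch.linspace(-1, 1, 256).unsqueeze(1)
y = torch.sin(2 * np.pi * x) + 0.3 * torch.randn_like(x)

model = nn.Sequential(
    nn.Linear(1, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def low_high_energy(preds, cutoff=5):
    """Split the spectral energy of the predicted function (sampled on the
    uniform training grid) into components below and above `cutoff`."""
    power = np.abs(np.fft.rfft(preds.squeeze())) ** 2
    return power[:cutoff].sum(), power[cutoff:].sum()

for step in range(5001):
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            low, high = low_high_energy(model(x).numpy())
        print(f"step {step:5d}  train loss {loss.item():.4f}  "
              f"low-freq energy {low:.1f}  high-freq energy {high:.1f}")
```

On such a toy problem, the low-frequency energy typically stabilizes early in training, while the high-frequency energy grows only as the label noise starts to be fit, mirroring the low-to-high frequency learning order described above.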


Related research

01/19/2022
Overview frequency principle/spectral bias in deep learning
Understanding deep learning is increasingly emergent as it penetrates mo...

11/21/2019
Neural Network Memorization Dissection
Deep neural networks (DNNs) can easily fit a random labeling of the trai...

10/06/2021
Spectral Bias in Practice: The Role of Function Frequency in Generalization
Despite their ability to represent highly expressive functions, deep lea...

03/16/2022
Understanding robustness and generalization of artificial neural networks through Fourier masks
Despite the enormous success of artificial neural networks (ANNs) in man...

05/24/2019
Explicitizing an Implicit Bias of the Frequency Principle in Two-layer Neural Networks
It remains a puzzle that why deep neural networks (DNNs), with more para...

03/23/2022
Deep Frequency Filtering for Domain Generalization
Improving the generalization capability of Deep Neural Networks (DNNs) i...

06/18/2020
Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
We show that passing input points through a simple Fourier feature mappi...
