
On the Spectral Bias of Deep Neural Networks

06/22/2018 · by Nasim Rahaman, et al.

It is well known that over-parametrized deep neural networks (DNNs) are an overly expressive class of functions that can memorize even random data with 100% training accuracy. This raises the question of why they do not easily overfit real data. To answer it, we study deep networks using Fourier analysis. We show that deep networks with finite weights (or trained for a finite number of steps) are inherently biased towards representing smooth functions over the input space. Specifically, the magnitude of the frequency-k component of a deep ReLU network's function decays at least as fast as O(k^-2), with width and depth helping polynomially and exponentially (respectively) in modeling higher frequencies. This shows, for instance, why DNNs cannot perfectly memorize peaky, delta-like functions. We also show that DNNs can exploit the geometry of low-dimensional data manifolds to approximate functions that are complex along the manifold with functions that are simple when viewed in the input space. As a consequence, we find that all samples (including adversarial samples) classified by a network as belonging to a certain class are connected by a path along which the network's prediction does not change. Finally, we find that DNN parameters corresponding to functions with higher frequency components occupy a smaller volume in the parameter space.
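The O(k^-2) rate can be made plausible with a short calculation (a sketch of the standard integration-by-parts argument, not necessarily the paper's exact derivation). A ReLU network computes a continuous piecewise-linear function f, so its second derivative is, in the distributional sense, a sum of point masses at the breakpoints x_j, weighted by the jumps in slope. Over a bounded interval, and ignoring boundary terms,

\hat{f}(k) = \int f(x)\, e^{-ikx}\, dx
           = \frac{1}{(ik)^2} \int f''(x)\, e^{-ikx}\, dx
           = -\frac{1}{k^2} \sum_j \Delta f'(x_j)\, e^{-ikx_j},

so that

|\hat{f}(k)| \le \frac{1}{k^2} \sum_j |\Delta f'(x_j)| = O(k^{-2}).

Width and depth enter through the number of breakpoints (linear regions), which grows polynomially with width and exponentially with depth, matching the rates stated above.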
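The bias also shows up during training: low-frequency components of a target are fit long before high-frequency ones. Below is a minimal, self-contained sketch of that experiment (not the authors' code; the network size, target frequencies, and optimizer settings are arbitrary illustrative choices), written in PyTorch:

import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# Target: a superposition of three sinusoids on [0, 1].
freqs = [1, 5, 10]
x = torch.linspace(0.0, 1.0, 512).unsqueeze(1)
y = sum(torch.sin(2 * np.pi * k * x) for k in freqs)

# A small ReLU MLP (sizes are arbitrary for this sketch).
net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5001):
    opt.zero_grad()
    loss = ((net(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        # Magnitude of the network output's DFT at the target frequencies:
        # the k=1 component is typically matched well before k=5 and k=10.
        spec = np.abs(np.fft.rfft(net(x).detach().numpy().ravel()))
        print(step, {k: round(float(spec[k]), 1) for k in freqs})

One would expect the k = 1 magnitude to saturate within a few hundred steps while the k = 10 magnitude is still growing thousands of steps later, mirroring the decay result above.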


Related research:

10/08/2020 · Approximating smooth functions by deep neural networks with sigmoid activation function
We study the power of deep neural networks (DNNs) with sigmoid activatio...

06/16/2016 · Exponential expressivity in deep neural networks through transient chaos
We combine Riemannian geometry with the mean field theory of high dimens...

08/05/2019 · Efficient Approximation of Deep ReLU Networks for Functions on Low Dimensional Manifolds
Deep neural networks have revolutionized many real world applications, d...

12/29/2021 · Deep neural network approximation theory for high-dimensional functions
The purpose of this article is to develop machinery to study the capacit...

01/19/2019 · Frequency Principle: Fourier Analysis Sheds Light on Deep Neural Networks
We study the training process of Deep Neural Networks (DNNs) from the Fo...

12/09/2019 · Depth-Width Trade-offs for ReLU Networks via Sharkovsky's Theorem
Understanding the representational power of Deep Neural Networks (DNNs) ...

11/26/2018 · A Differential Topological View of Challenges in Learning with Feedforward Neural Networks
Among many unsolved puzzles in theories of Deep Neural Networks (DNNs), ...