ResNet with one-neuron hidden layers is a Universal Approximator

06/28/2018
by Hongzhou Lin, et al.

We demonstrate that a very deep ResNet built from stacked modules, each with one neuron per hidden layer and ReLU activation functions, can uniformly approximate any Lebesgue-integrable function in d dimensions, i.e., any function in ℓ_1(R^d). Because of the identity mapping inherent to ResNets, our network has alternating layers of dimension one and d. This stands in sharp contrast to fully connected networks, which are not universal approximators when their width equals the input dimension d [Lu et al., 2017]. Hence, our result implies that the ResNet architecture increases the representational power of narrow deep networks.
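As a concrete illustration of the architecture described in the abstract, here is a minimal sketch of such a network in Python/NumPy, assuming residual blocks of the form x ↦ x + v·ReLU(u·x + b) with u, v in R^d and scalar b. The parameter names u, b, and v are our own illustrative choice, not necessarily the paper's notation. Each block routes the d-dimensional signal through a single ReLU neuron and adds the result back through the identity skip, which is why layer widths alternate between one and d.

    import numpy as np

    def one_neuron_block(x, u, b, v):
        # Identity skip plus a rank-one update through a single
        # ReLU neuron: the hidden layer has width exactly one.
        # u @ x + b is a scalar; v lifts it back to R^d.
        return x + v * np.maximum(u @ x + b, 0.0)

    def narrow_resnet(x, params):
        # Stack the one-neuron blocks; the skip connection keeps
        # the signal in R^d between blocks.
        for u, b, v in params:
            x = one_neuron_block(x, u, b, v)
        return x

    # Example: a random 5-block network acting on inputs in R^3.
    rng = np.random.default_rng(0)
    d, depth = 3, 5
    params = [(rng.standard_normal(d), float(rng.standard_normal()),
               rng.standard_normal(d)) for _ in range(depth)]
    print(narrow_resnet(rng.standard_normal(d), params))

The universal approximation result concerns the existence of parameters for a sufficiently deep stack of such blocks; this sketch only shows the shape of the computation, not how to choose or train the parameters.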



Related research

06/16/2020 · Minimum Width for Universal Approximation
The universal approximation property of width-bounded networks has been ...

11/22/2018 · Universal Approximation by a Slim Network with Sparse Shortcut Connections
Over recent years, deep learning has become a mainstream method in machi...

10/25/2020 · Neural Network Approximation: Three Hidden Layers Are Enough
A three-hidden-layer neural network with super approximation power is in...

07/09/2017 · Deepest Neural Networks
This paper shows that a long chain of perceptrons (that is, a multilayer...

09/15/2020 · ResNet-like Architecture with Low Hardware Requirements
One of the most computationally intensive parts in modern recognition sy...

08/07/2013 · A Note on Topology Preservation in Classification, and the Construction of a Universal Neuron Grid
It will be shown that according to theorems of K. Menger, every neuron g...

04/03/2022 · Correlation Functions in Random Fully Connected Neural Networks at Finite Width
This article considers fully connected neural networks with Gaussian ran...