Minimum Width for Universal Approximation

06/16/2020
by Sejun Park, et al.

The universal approximation property of width-bounded networks has been studied as a dual of classical universal approximation results on depth-bounded networks. However, the critical width enabling universal approximation has not been exactly characterized in terms of the input dimension d_x and the output dimension d_y. In this work, we provide the first definitive result in this direction for networks using the ReLU activation function: the minimum width required for universal approximation of L^p functions is exactly max{d_x+1, d_y}. We also prove that the same conclusion does not hold for uniform approximation with ReLU, but does hold with an additional threshold activation function. Our proof technique can also be used to derive a tighter upper bound on the minimum width required for universal approximation using networks with general activation functions.
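The result bounds width, not depth. As a minimal illustrative sketch (not the authors' construction), the PyTorch snippet below builds a deep ReLU network whose every hidden layer has width max(d_x+1, d_y), the critical value from the paper; the depth parameter is a free choice introduced here for illustration.

import torch
import torch.nn as nn

def narrow_relu_network(d_x: int, d_y: int, depth: int = 8) -> nn.Sequential:
    """ReLU MLP with every hidden layer of width max(d_x + 1, d_y)."""
    width = max(d_x + 1, d_y)  # minimum width for L^p universal approximation
    layers = [nn.Linear(d_x, width), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, d_y))
    return nn.Sequential(*layers)

# Example: a width-4 network mapping R^3 -> R^2, since max(3 + 1, 2) = 4.
net = narrow_relu_network(d_x=3, d_y=2)
print(net(torch.randn(5, 3)).shape)  # torch.Size([5, 2])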


Related research

09/19/2023: Minimum width for universal approximation using ReLU networks on compact domain
The universal approximation property of width-bounded networks has been ...

09/23/2022: Achieve the Minimum Width of Neural Networks for Universal Approximation
The universal approximation property (UAP) of neural networks is fundame...

11/10/2020: Expressiveness of Neural Networks Having Width Equal or Below the Input Dimension
The expressiveness of deep neural networks of bounded width has recently...

11/25/2022: Minimal Width for Universal Property of Deep RNN
A recurrent neural network (RNN) is a widely used deep-learning network ...

07/12/2020: Universal Approximation Power of Deep Neural Networks via Nonlinear Control Theory
In this paper, we explain the universal approximation capabilities of de...

05/25/2023: Data Topology-Dependent Upper Bounds of Neural Network Widths
This paper investigates the relationship between the universal approxima...

06/28/2018: ResNet with one-neuron hidden layers is a Universal Approximator
We demonstrate that a very deep ResNet with stacked modules with one neu...
