Neural Network with Unbounded Activation Functions is Universal Approximator

05/14/2015
by Sho Sonoda, et al.

This paper investigates the approximation properties of neural networks with unbounded activation functions, such as the rectified linear unit (ReLU), the current de facto standard in deep learning. The ReLU network can be analyzed via the ridgelet transform with respect to Lizorkin distributions. By deriving three reconstruction formulas using the Fourier slice theorem, the Radon transform, and Parseval's relation, it is shown that a neural network with unbounded activation functions still satisfies the universal approximation property. As an additional consequence, the ridgelet transform, or the backprojection filter in the Radon domain, is what the network learns through backpropagation. Subject to a constructive admissibility condition, the trained network can be obtained by simply discretizing the ridgelet transform, without backpropagation. Numerical examples not only support the consistency of the admissibility condition but also suggest that some non-admissible cases result in low-pass filtering.
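The idea of obtaining a network by discretizing an integral representation, rather than by backpropagation, can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy, not the paper's ridgelet construction: hidden-layer parameters are drawn at random instead of from the ridgelet transform, and only the output weights are fitted by least squares. The target function `f` and all parameter ranges are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target function to approximate on [-1, 1].
def f(x):
    return np.sin(np.pi * x)

# Discretize an integral representation f(x) ~ sum_j c_j * ReLU(a_j x - b_j):
# sample hidden parameters (a_j, b_j) at random (NOT the ridgelet transform),
# then fit only the output weights c_j by least squares -- no backpropagation.
n_hidden = 200
a = rng.uniform(-5.0, 5.0, n_hidden)
b = rng.uniform(-5.0, 5.0, n_hidden)

x = np.linspace(-1.0, 1.0, 400)
Phi = np.maximum(a[None, :] * x[:, None] - b[None, :], 0.0)  # ReLU features
c, *_ = np.linalg.lstsq(Phi, f(x), rcond=None)               # output weights

approx = Phi @ c
max_err = np.max(np.abs(approx - f(x)))
print(f"max abs error: {max_err:.4f}")
```

Even with random hidden parameters, a few hundred ReLU units fit this smooth target closely, which is the flavor of the universal approximation property; the paper's contribution is a principled choice of hidden parameters via the ridgelet transform.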

