Deepest Neural Networks

07/09/2017
by Raul Rojas

This paper shows that a long chain of perceptrons (that is, a multilayer perceptron, or MLP, with many hidden layers of width one) can be a universal classifier. The classification procedure is not necessarily computationally efficient, but the construction sheds light on the kinds of computations possible with narrow, deep MLPs.
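
As a rough illustration of the flavor of computation available to such narrow, deep chains, the sketch below composes two width-one threshold perceptrons to recognize the interval [0, 1] on the real line. It assumes a cascade-style wiring in which each unit sees the raw input plus the previous unit's output; this skip connection, the function name, and the large veto weight big are illustrative assumptions of this sketch, not necessarily the paper's exact construction.

    import numpy as np

    def step(z):
        # Heaviside threshold: the classical perceptron activation.
        return (z >= 0).astype(float)

    def cascade_interval_classifier(x, big=1e6):
        # Hypothetical cascade of two width-one perceptrons that
        # fires iff 0 <= x <= 1. Each unit receives the raw input x
        # together with the previous unit's output.
        h1 = step(x)                           # unit 1 fires iff x >= 0
        # Unit 2 checks x <= 1; the large weight on (h1 - 1) vetoes
        # the output whenever unit 1 did not fire.
        h2 = step(1.0 - x + big * (h1 - 1.0))
        return h2

    x = np.array([-0.5, 0.0, 0.3, 1.0, 1.7])
    print(cascade_interval_classifier(x))      # [0. 1. 1. 1. 0.]

Each additional unit in such a chain can carve one more half-space decision into the running output, which suggests how depth can substitute for width in this setting, at the cost of very long chains rather than computational efficiency.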

Related research

02/07/2022 · Neural Tangent Kernel Analysis of Deep Narrow Neural Networks
The tremendous recent progress in analyzing the training dynamics of ove...

07/26/2023 · Understanding Deep Neural Networks via Linear Separability of Hidden Layers
In this paper, we measure the linear separability of hidden layer output...

02/04/2022 · A note on the complex and bicomplex valued neural networks
In this paper we first write a proof of the perceptron convergence algor...

11/25/2022 · Minimal Width for Universal Property of Deep RNN
A recurrent neural network (RNN) is a widely used deep-learning network ...

06/28/2018 · ResNet with one-neuron hidden layers is a Universal Approximator
We demonstrate that a very deep ResNet with stacked modules with one neu...

01/18/2017 · Multilayer Perceptron Algebra
Artificial Neural Networks (ANN) has been phenomenally successful on vari...

02/07/2020 · Universal Equivariant Multilayer Perceptrons
Group invariant and equivariant Multilayer Perceptrons (MLP), also known...
