Universal Approximation of Markov Kernels by Shallow Stochastic Feedforward Networks

03/24/2015
by   Guido Montufar, et al.

We establish upper bounds for the minimal number of hidden units for which a binary stochastic feedforward network with sigmoid activation probabilities and a single hidden layer is a universal approximator of Markov kernels. We show that each possible probabilistic assignment of the states of n output units, given the states of k≥1 input units, can be approximated arbitrarily well by a network with 2^(k-1)(2^(n-1)-1) hidden units.
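As a rough illustration of the setting (not the paper's construction), the sketch below builds a single-hidden-layer binary stochastic feedforward network with sigmoid activation probabilities, evaluates the abstract's bound 2^(k-1)(2^(n-1)-1) for small k and n, and estimates a conditional output distribution by sampling. The weights are random and the function names are assumptions made for this example.

```python
# Minimal sketch, assuming random weights; illustrative only, not the
# construction used in the paper.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hidden_units_bound(k, n):
    """Bound from the abstract: 2^(k-1) * (2^(n-1) - 1) hidden units
    suffice to approximate any Markov kernel from k binary inputs
    to n binary outputs arbitrarily well."""
    return 2 ** (k - 1) * (2 ** (n - 1) - 1)

def sample_outputs(x, W1, b1, W2, b2, rng):
    """One forward sample: binary hidden states, then binary output states."""
    p_h = sigmoid(W1 @ x + b1)                        # hidden activation probabilities
    h = (rng.random(p_h.shape) < p_h).astype(float)   # Bernoulli hidden states
    p_y = sigmoid(W2 @ h + b2)                        # output activation probabilities
    return (rng.random(p_y.shape) < p_y).astype(float)

if __name__ == "__main__":
    k, n = 2, 2                              # number of input / output units
    m = hidden_units_bound(k, n)             # = 2 hidden units for k = n = 2
    print(f"bound for k={k}, n={n}: {m} hidden units")

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(m, k)), rng.normal(size=m)
    W2, b2 = rng.normal(size=(n, m)), rng.normal(size=n)

    # Estimate the conditional distribution p(y | x) for one input by sampling.
    x = np.array([1.0, 0.0])
    counts = {}
    for _ in range(5000):
        y = tuple(sample_outputs(x, W1, b1, W2, b2, rng).astype(int))
        counts[y] = counts.get(y, 0) + 1
    for y, c in sorted(counts.items()):
        print(y, c / 5000)
```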


Related research

09/06/2020
Multi-Activation Hidden Units for Neural Networks with Random Weights
Single layer feedforward networks with random weights are successful in ...

10/22/2019
Stochastic Feedforward Neural Networks: Universal Approximation
In this chapter we take a look at the universal approximation question f...

04/29/2009
Adaptive Learning with Binary Neurons
An efficient incremental learning algorithm for classification tasks, cal...

08/18/2022
Quantitative Universal Approximation Bounds for Deep Belief Networks
We show that deep belief networks with binary hidden units can approxima...

08/24/2020
Efficient Design of Neural Networks with Random Weights
Single layer feedforward networks with random weights are known for thei...

06/11/2014
Techniques for Learning Binary Stochastic Feedforward Neural Networks
Stochastic binary hidden units in a multi-layer perceptron (MLP) network...

01/03/2023
Operator theory, kernels, and Feedforward Neural Networks
In this paper we show how specific families of positive definite kernels...
