Maximal Information Divergence from Statistical Models defined by Neural Networks

03/01/2013
by Guido Montufar, et al.

We review recent results about the maximal values of the Kullback-Leibler information divergence from statistical models defined by neural networks, including naive Bayes models, restricted Boltzmann machines, deep belief networks, and various classes of exponential families. We illustrate approaches to compute the maximal divergence from a given model starting from simple sub- or super-models. We give a new result for deep and narrow belief networks with finite-valued units.
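As a brief reminder of the quantity involved, here is a sketch using the standard definitions; the notation D, \mathcal{M}, and X below is ours and is not taken from the paper itself.

% Kullback-Leibler divergence between distributions p and q on a finite set X
D(p \,\|\, q) = \sum_{x \in X} p(x) \log \frac{p(x)}{q(x)}

% divergence of p from a model \mathcal{M} (a set of distributions on X)
D(p \,\|\, \mathcal{M}) = \inf_{q \in \mathcal{M}} D(p \,\|\, q)

% maximal divergence from \mathcal{M}, the quantity whose maximal values are reviewed
D_{\mathcal{M}} = \sup_{p} D(p \,\|\, \mathcal{M})

A model with small D_{\mathcal{M}} can approximate every distribution on X well in the Kullback-Leibler sense, which is why bounds on this quantity are used to compare the expressive power of the network classes listed above.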
