Integrating Probabilistic Rules into Neural Networks: A Stochastic EM Learning Algorithm

03/20/2013
by Gerhard Paaß, et al.

The EM algorithm is a general procedure for obtaining maximum likelihood estimates when some observations of a network's variables are missing. In this paper a stochastic version of the algorithm is adapted to probabilistic neural networks describing the associative dependency of variables. These networks have a probability distribution that is a special case of the distribution generated by probabilistic inference networks. Hence both types of networks can be combined, allowing probabilistic rules as well as unspecified associations to be integrated in a sound way. The resulting network may have a number of interesting features, including cycles of probabilistic rules, hidden 'unobservable' variables, and uncertain and contradictory evidence.
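
To make the stochastic E-step concrete, the following is a minimal sketch of stochastic EM on a toy latent-variable problem: a two-component Gaussian mixture whose component labels play the role of the missing observations. The mixture model, the unit variances, and all variable names are illustrative assumptions for exposition only, not the probabilistic neural networks treated in the paper. Instead of computing expected sufficient statistics as in standard EM, the E-step samples the missing values from their conditional distribution under the current parameters; the M-step then maximizes the complete-data likelihood on the imputed data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy incomplete data: a two-component 1-D Gaussian mixture with unit
# variance; the component labels are the "missing" observations.
# (Illustrative stand-in, not the networks from the paper.)
true_means = np.array([-2.0, 3.0])
true_labels = rng.integers(0, 2, size=500)
x = rng.normal(true_means[true_labels], 1.0)

# Initial parameter guesses (illustrative).
mu = np.array([-1.0, 1.0])
pi = np.array([0.5, 0.5])

for step in range(200):
    # Stochastic E-step: sample the missing labels z from
    # p(z | x, theta) instead of taking their expectation.
    log_post = np.log(np.maximum(pi, 1e-12)) \
        - 0.5 * (x[:, None] - mu[None, :]) ** 2   # guard empty components
    post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)
    z = (rng.random(x.size) < post[:, 1]).astype(int)

    # M-step: maximize the complete-data likelihood given the sampled
    # labels (closed form for this model).
    for k in (0, 1):
        if np.any(z == k):
            mu[k] = x[z == k].mean()
    pi = np.array([(z == 0).mean(), (z == 1).mean()])

print("estimated means:", np.sort(mu))   # should be near -2 and 3
```

Unlike standard EM, the sampled imputations inject Monte Carlo noise, so the parameter sequence fluctuates around the maximum likelihood estimate rather than converging deterministically; averaging the later iterates is a common way to extract a stable estimate.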
