Linear-Nonlinear-Poisson Neuron Networks Perform Bayesian Inference On Boltzmann Machines

10/31/2012
by Louis Yuanlong Shao, et al.

One conjecture shared by deep learning and the classical connectionist viewpoint is that the biological brain implements certain kinds of deep networks as its back-end. To our knowledge, however, a detailed correspondence has not yet been established, even though such a correspondence is important for bridging neuroscience and machine learning. Recent research has emphasized the biological plausibility of the Linear-Nonlinear-Poisson (LNP) neuron model. We show that, under neurally plausible settings, a network of LNP neurons can represent any Boltzmann machine and perform a semi-stochastic Bayesian inference algorithm that lies between Gibbs sampling and variational inference.
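The paper's LNP construction is not reproduced here, but one endpoint of the spectrum the abstract describes, plain Gibbs sampling on a Boltzmann machine, can be sketched as follows (a minimal illustration; function and variable names are our own, not the paper's):

```python
import numpy as np

def gibbs_sample_boltzmann(W, b, n_steps=1000, rng=None):
    """Run one Gibbs-sampling chain on a Boltzmann machine with binary
    units s in {0, 1}, symmetric weights W (zero diagonal), and biases b.
    Each unit is resampled in turn from its full conditional
    p(s_i = 1 | s_-i) = sigmoid(b_i + sum_j W_ij * s_j)."""
    rng = np.random.default_rng(rng)
    n = len(b)
    s = rng.integers(0, 2, size=n).astype(float)  # random initial state
    for _ in range(n_steps):
        for i in range(n):
            activation = b[i] + W[i] @ s  # W[i, i] == 0, so s[i] drops out
            p_on = 1.0 / (1.0 + np.exp(-activation))
            s[i] = float(rng.random() < p_on)
    return s

# Usage: a 3-unit machine with one strong positive coupling between
# units 0 and 1, which makes them tend to agree in equilibrium samples.
W = np.array([[0.0, 2.0, 0.0],
              [2.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
b = np.zeros(3)
sample = gibbs_sample_boltzmann(W, b, n_steps=200, rng=0)
```

Variational inference would instead replace the stochastic unit updates with deterministic mean-field updates of firing probabilities; the abstract's semi-stochastic algorithm sits between these two extremes.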
