On numerical approximation schemes for expectation propagation

11/14/2016
by Alexis Roche, et al.

Several numerical approximation strategies for the expectation-propagation algorithm are studied in the context of large-scale learning: the Laplace method, a faster variant of it, Gaussian quadrature, and a deterministic version of variational sampling (i.e., combining quadrature with variational approximation). Experiments in training linear binary classifiers show that the expectation-propagation algorithm converges best using variational sampling, while it also converges well using Laplace-style methods with smooth factors but tends to be unstable with non-differentiable ones. Gaussian quadrature yields unstable behavior or convergence to a sub-optimal solution in most experiments.
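The quadrature variant studied in the abstract boils down to computing the moments of a "tilted" distribution (Gaussian prior times a likelihood factor) at each expectation-propagation update. As a minimal sketch of that step, not of the paper's actual code, the snippet below uses Gauss-Hermite quadrature for a scalar site with a logistic factor, the typical choice in linear binary classification; the helper name `tilted_moments` and the 20-point rule are illustrative assumptions.

```python
import numpy as np

def tilted_moments(m, v, factor, n_points=20):
    """Approximate normalizer, mean and variance of the tilted
    distribution p(x) ~ N(x; m, v) * factor(x), the quantity an
    EP site update needs, via Gauss-Hermite quadrature."""
    # Nodes/weights for integrals against exp(-t^2).
    t, w = np.polynomial.hermite.hermgauss(n_points)
    # Change of variables so the Gaussian N(m, v) becomes the weight.
    x = m + np.sqrt(2.0 * v) * t
    fx = factor(x)
    sqrt_pi = np.sqrt(np.pi)
    Z = np.sum(w * fx) / sqrt_pi                   # zeroth moment
    mean = np.sum(w * fx * x) / (sqrt_pi * Z)      # first moment
    second = np.sum(w * fx * x**2) / (sqrt_pi * Z) # second moment
    return Z, mean, second - mean**2

# Logistic (sigmoid) likelihood factor, smooth in the paper's sense.
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
Z, mu, var = tilted_moments(0.0, 1.0, sigmoid)
```

With a non-differentiable factor such as a step function, the same quadrature rule can behave poorly, which is consistent with the instability the abstract reports for Gaussian quadrature.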


Related research

Stochastic Expectation Propagation for Large Scale Gaussian Process Classification (11/10/2015)
Gaussian Probabilities and Expectation Propagation (11/29/2011)
How to sample if you must: on optimal functional sampling (08/12/2012)
Expectation Propagation (09/22/2014)
Expectation Propagation for approximate Bayesian inference (01/10/2013)
Heteroscedastic Relevance Vector Machine (01/10/2013)
Gradient Q(σ, λ): A Unified Algorithm with Function Approximation for Reinforcement Learning (09/06/2019)
