Black-box α-divergence Minimization

11/10/2015
by José Miguel Hernández-Lobato et al.

Black-box alpha (BB-α) is a new approximate inference method based on the minimization of α-divergences. BB-α scales to large datasets because it can be implemented using stochastic gradient descent. BB-α can be applied to complex probabilistic models with little effort, since it only requires as input the likelihood function and its gradients, which can be easily obtained using automatic differentiation. By changing the divergence parameter α, the method interpolates between variational Bayes (VB) (α → 0) and an algorithm similar to expectation propagation (EP) (α = 1). Experiments on probit regression and on neural network regression and classification problems show that BB-α with non-standard settings of α, such as α = 0.5, usually produces better predictions than α → 0 (VB) or α = 1 (EP).
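Because the method only needs the likelihood and its gradients, the objective fits in a few lines of autodiff code. The sketch below is a minimal illustration, not the authors' reference implementation: it assumes a mean-field Gaussian q over the weights, a logistic likelihood as a stand-in for probit, and a commonly used simplified (tied-factor) form of the BB-α energy, E_α(q) = KL(q ∥ p0) − (1/α) Σ_n log E_q[p(y_n | w)^α], estimated with K reparameterized Monte Carlo samples.

```python
# Minimal BB-alpha sketch (assumptions: mean-field Gaussian q, logistic
# likelihood, simplified tied-factor energy). Not the paper's reference code.
import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp

def log_lik(w, x, y):
    # Bernoulli log-likelihood with a logistic link; y takes values in {-1, +1}.
    logits = x @ w
    return -jnp.logaddexp(0.0, -y * logits)  # = log sigmoid(y * logits)

def kl_gauss(mu, log_sig, prior_var=1.0):
    # KL(q || p0) between diagonal Gaussian q and a zero-mean isotropic prior.
    var = jnp.exp(2.0 * log_sig)
    return 0.5 * jnp.sum(var / prior_var + mu**2 / prior_var
                         - 1.0 - 2.0 * log_sig + jnp.log(prior_var))

def bb_alpha_energy(params, x, y, key, alpha=0.5, K=16):
    mu, log_sig = params
    eps = jax.random.normal(key, (K, mu.shape[0]))
    ws = mu + jnp.exp(log_sig) * eps                # K reparameterized samples
    ll = jax.vmap(lambda w: log_lik(w, x, y))(ws)   # shape (K, N)
    # Monte Carlo estimate of (1/alpha) * log E_q[p(y_n | w)^alpha] per point,
    # computed stably with logsumexp over the K samples.
    site = (logsumexp(alpha * ll, axis=0) - jnp.log(K)) / alpha
    return kl_gauss(mu, log_sig) - jnp.sum(site)

# One stochastic gradient step on a toy problem.
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (100, 5))
y = jnp.sign(x @ jnp.ones(5))
params = (jnp.zeros(5), jnp.full(5, -1.0))          # (mu, log_sigma)
grads = jax.grad(bb_alpha_energy)(params, x, y, jax.random.PRNGKey(1))
params = jax.tree_util.tree_map(lambda p, g: p - 1e-2 * g, params, grads)
```

The interpolation described in the abstract is visible in this objective: as α → 0, (1/α) log E_q[exp(α log p)] tends to E_q[log p], recovering the (negative) VB evidence lower bound, while α = 1 gives an EP-like energy. Replacing the sum over n with a minibatch estimate yields the stochastic gradient descent scheme that lets BB-α scale to large datasets.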

12/31/2013

Black Box Variational Inference

Variational inference has become a widely used method to approximate pos...
09/30/2019

Min-Max Optimization without Gradients: Convergence and Applications to Adversarial ML

In this paper, we study the problem of constrained robust (min-max) opti...
09/21/2017

Perturbative Black Box Variational Inference

Black box variational inference (BBVI) with reparameterization gradients...
04/13/2020

Adversarial Likelihood-Free Inference on Black-Box Generator

Generative Adversarial Network (GAN) can be viewed as an implicit estima...
07/20/2017

Learning to Draw Samples with Amortized Stein Variational Gradient Descent

We propose a simple algorithm to train stochastic neural networks to dra...
06/27/2018

Empirical Risk Minimization and Stochastic Gradient Descent for Relational Data

Empirical risk minimization is the principal tool for prediction problem...
09/14/2022

Modifying Squint for Prediction with Expert Advice in a Changing Environment

We provide a new method for online learning, specifically prediction wit...