An Introduction to Variational Inference

08/30/2021
by Ankush Ganguly et al.

Approximating complex probability densities is a core problem in modern statistics. In this paper, we introduce the concept of Variational Inference (VI), a popular method in machine learning that uses optimization techniques to estimate complex probability densities. Casting inference as an optimization problem allows VI to converge faster than classical sampling methods such as Markov Chain Monte Carlo (MCMC). Conceptually, VI works by choosing a family of probability density functions and then finding the member of that family closest to the true probability density, typically using the Kullback-Leibler (KL) divergence as the optimization objective. We introduce the Evidence Lower Bound (ELBO), which makes this optimization tractable, and we review the ideas behind mean-field variational inference. Finally, we discuss applications of VI to variational auto-encoders (VAEs) and the VAE-Generative Adversarial Network (VAE-GAN). With this paper, we aim to explain the concept of VI and to assist future research involving this approach.
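
As a brief illustration of how the quantities named above fit together (standard material, not specific to this paper), the ELBO arises from expanding the KL divergence between an approximating density q(z) and the intractable posterior p(z | x):

\mathrm{KL}\big(q(z)\,\|\,p(z \mid x)\big) = \mathbb{E}_{q}[\log q(z)] - \mathbb{E}_{q}[\log p(z, x)] + \log p(x),

which rearranges to

\log p(x) = \underbrace{\mathbb{E}_{q}[\log p(x, z)] - \mathbb{E}_{q}[\log q(z)]}_{\mathrm{ELBO}(q)} + \mathrm{KL}\big(q(z)\,\|\,p(z \mid x)\big).

Because the KL term is non-negative, the ELBO is a lower bound on the log evidence \log p(x), and maximizing the ELBO over the chosen family is equivalent to minimizing the KL divergence to the true posterior, without ever evaluating the intractable evidence.

The sketch below is a minimal, hypothetical example, not taken from the paper: mean-field VI with a Gaussian variational family q(mu) = N(m, s^2) for a toy conjugate model whose exact posterior is known, so the result can be checked. The model, variable names, and learning rate are illustrative assumptions.

# Minimal VI sketch (illustrative, not from the paper):
#   prior:      mu ~ N(0, 1)
#   likelihood: x_i | mu ~ N(mu, 1)
# Variational family: q(mu) = N(m, s^2). For this model the ELBO is available
# in closed form, so we maximize it by gradient ascent on (m, log s) and
# compare with the exact posterior N(sum(x) / (n + 1), 1 / (n + 1)).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=50)   # synthetic observations
n = x.size

m, log_s = 0.0, 0.0                           # variational parameters
lr = 0.01                                     # learning rate (assumed)

for _ in range(2000):
    s = np.exp(log_s)
    # Analytic gradients of the closed-form ELBO
    #   ELBO(m, s) = -0.5 * sum((x - m)^2 + s^2) - 0.5 * (m^2 + s^2) + log(s) + const
    grad_m = np.sum(x - m) - m                # d ELBO / d m
    grad_s = -(n + 1) * s + 1.0 / s           # d ELBO / d s
    m += lr * grad_m
    log_s += lr * grad_s * s                  # chain rule: d ELBO / d log(s)

print("VI estimate:     mean=%.4f  std=%.4f" % (m, np.exp(log_s)))
print("Exact posterior: mean=%.4f  std=%.4f" % (x.sum() / (n + 1), (1.0 / (n + 1)) ** 0.5))

In this conjugate setting the optimum of the ELBO coincides with the exact posterior; in the non-conjugate problems VI is actually used for, the same objective is optimized but the bound is no longer tight.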

