Variational f-divergence Minimization

07/27/2019
by Mingtian Zhang, et al.

Probabilistic models are often trained by maximum likelihood, which corresponds to minimizing a specific f-divergence between the model and data distribution. In light of recent successes in training Generative Adversarial Networks, alternative non-likelihood training criteria have been proposed. Whilst not necessarily statistically efficient, these alternatives may better match user requirements such as sharp image generation. A general variational method for training probabilistic latent variable models using maximum likelihood is well established; however, how to train latent variable models using other f-divergences is comparatively unknown. We discuss a variational approach that, when combined with the recently introduced Spread Divergence, can be applied to train a large class of latent variable models using any f-divergence.
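
For readers skimming the abstract, the standard definitions it relies on can be sketched as follows (background material, not quoted from the paper; the critic T and the noise kernel K are illustrative symbols): an f-divergence is generated by a convex function f with f(1) = 0, maximum likelihood corresponds to the Kullback-Leibler case, the variational lower bound popularized by f-GAN replaces the intractable density ratio with a learned critic, and the Spread Divergence smooths both distributions with a noise kernel so that the divergence remains well defined even when the model and data distributions do not share support.

\[
D_f(p \,\|\, q) = \mathbb{E}_{x \sim q}\!\left[f\!\left(\tfrac{p(x)}{q(x)}\right)\right], \qquad f \text{ convex},\ f(1)=0
\]
\[
\mathrm{KL}(p \,\|\, q) = D_f(p \,\|\, q)\ \text{with}\ f(t)=t\log t \quad \text{(the case minimized by maximum likelihood)}
\]
\[
D_f(p \,\|\, q) \;\ge\; \sup_{T}\ \mathbb{E}_{x\sim p}[T(x)] - \mathbb{E}_{x\sim q}\!\left[f^{*}(T(x))\right], \qquad f^{*}\ \text{the convex conjugate of}\ f
\]
\[
\widetilde{D}_f(p \,\|\, q) = D_f(\tilde{p} \,\|\, \tilde{q}), \qquad \tilde{p}(y)=\int K(y\mid x)\,p(x)\,dx,\quad \tilde{q}(y)=\int K(y\mid x)\,q(x)\,dx
\]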

Related research

02/13/2018 | Leveraging the Exact Likelihood of Deep Latent Variable Models
Deep latent variable models combine the approximation abilities of deep ...

03/24/2022 | Bi-level Doubly Variational Learning for Energy-based Latent Variable Models
Energy-based latent variable models (EBLVMs) are more expressive than co...

06/02/2016 | f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization
Generative neural samplers are probabilistic models that implement sampl...

02/27/2021 | A Brief Introduction to Generative Models
We introduce and motivate generative modeling as a central task for mach...

08/04/2017 | A Latent Variable Model for Two-Dimensional Canonical Correlation Analysis and its Variational Inference
Describing the dimension reduction (DR) techniques by means of probabili...

10/16/2020 | Variational (Gradient) Estimate of the Score Function in Energy-based Latent Variable Models
The learning and evaluation of energy-based latent variable models (EBLV...

10/07/2020 | Learning Deep-Latent Hierarchies by Stacking Wasserstein Autoencoders
Probabilistic models with hierarchical-latent-variable structures provid...
