A Brief Introduction to Generative Models

02/27/2021
by Alex Lamb, et al.

We introduce and motivate generative modeling as a central task for machine learning and provide a critical view of the algorithms that have been proposed for solving it. We describe how generative modeling can be defined mathematically as the problem of making an estimating distribution match an unknown ground-truth distribution, a goal that can be quantified by the value of a statistical divergence between the two distributions. We outline the maximum likelihood approach and show how it can be interpreted as minimizing KL-divergence. We survey a number of approaches in the maximum likelihood family and discuss their limitations. Finally, we explore the alternative adversarial approach, which involves studying the differences between an estimating distribution and a real data distribution. We discuss how this approach can give rise to new divergences and to methods that are necessary to make adversarial learning successful, as well as the new evaluation metrics it requires.
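The equivalence between maximum likelihood and KL-divergence minimization mentioned in the abstract can be sketched as follows (notation is ours, not the paper's: $p_{\text{data}}$ denotes the unknown ground-truth distribution and $p_\theta$ the estimating distribution):

```latex
\mathrm{KL}\left(p_{\text{data}} \,\|\, p_\theta\right)
  = \mathbb{E}_{x \sim p_{\text{data}}}\left[\log p_{\text{data}}(x)\right]
  - \mathbb{E}_{x \sim p_{\text{data}}}\left[\log p_\theta(x)\right]
```

The first term does not depend on $\theta$, so minimizing the KL-divergence over $\theta$ is equivalent to maximizing $\mathbb{E}_{x \sim p_{\text{data}}}[\log p_\theta(x)]$. Replacing the expectation with an average over a finite sample $x_1, \dots, x_N$ drawn from $p_{\text{data}}$ gives $\frac{1}{N}\sum_{i=1}^{N} \log p_\theta(x_i)$, which is exactly the (average) log-likelihood maximized in maximum likelihood estimation.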


Related research

07/27/2019 · Variational f-divergence Minimization
Probabilistic models are often trained by maximum likelihood, which corr...

07/13/2020 · Bridging Maximum Likelihood and Adversarial Learning via α-Divergence
Maximum likelihood (ML) and adversarial learning are two popular approac...

04/25/2021 · Deep Probabilistic Graphical Modeling
Probabilistic graphical modeling (PGM) provides a framework for formulat...

05/19/2022 · Why GANs are overkill for NLP
This work offers a novel theoretical perspective on why, despite numerou...

02/24/2022 · Clarifying MCMC-based training of modern EBMs: Contrastive Divergence versus Maximum Likelihood
The Energy-Based Model (EBM) framework is a very general approach to gen...

06/28/2017 · Generative Bridging Network in Neural Sequence Prediction
Maximum Likelihood Estimation (MLE) suffers from data sparsity problem i...