
A Brief Introduction to Generative Models

by Alex Lamb, et al.

We introduce and motivate generative modeling as a central task for machine learning and provide a critical view of the algorithms that have been proposed for solving it. We describe how generative modeling can be defined mathematically as making an estimating distribution match an unknown ground-truth distribution, a goal that can be quantified by the value of a statistical divergence between the two distributions. We outline the maximum likelihood approach and how it can be interpreted as minimizing KL divergence. We explore a number of approaches in the maximum likelihood family and discuss their limitations. Finally, we explore the alternative adversarial approach, which involves studying the differences between an estimating distribution and the real data distribution. We discuss how this approach can give rise to new divergences and to methods that are necessary to make adversarial learning successful. We also discuss the new evaluation metrics required by the adversarial approach.
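The link between maximum likelihood and KL-divergence minimization mentioned above can be illustrated with a minimal sketch. The example below (a toy Bernoulli model, not from the paper) fits a single parameter by grid search and checks that the parameter maximizing the average log-likelihood is the same one minimizing KL(p_data || p_model):

```python
import math
from collections import Counter

# Toy dataset, assumed drawn from an unknown ground-truth distribution over {0, 1}.
data = [0, 1, 1, 0, 1, 1, 1, 0, 1, 1]

# Empirical data distribution p_data.
counts = Counter(data)
p_data = {x: counts[x] / len(data) for x in (0, 1)}

def avg_log_likelihood(theta):
    """Mean log-likelihood of the data under a Bernoulli(theta) model."""
    return sum(math.log(theta if x == 1 else 1 - theta) for x in data) / len(data)

def kl_to_model(theta):
    """KL(p_data || p_model) for p_model = Bernoulli(theta)."""
    p_model = {1: theta, 0: 1 - theta}
    return sum(p * math.log(p / p_model[x]) for x, p in p_data.items() if p > 0)

# Scan candidate parameters: the maximum-likelihood parameter is also
# the KL-minimizing one, as the identity predicts.
thetas = [i / 100 for i in range(1, 100)]
mle = max(thetas, key=avg_log_likelihood)
kl_min = min(thetas, key=kl_to_model)
assert mle == kl_min
print(mle)  # the empirical frequency of 1s in the data
```

This works because the average log-likelihood differs from -KL(p_data || p_model) only by the entropy of p_data, a constant in theta, so the two objectives share the same optimum.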

