An Introduction to Deep Generative Modeling

03/09/2021
by Lars Ruthotto, et al.

Deep generative models (DGMs) are neural networks with many hidden layers trained to approximate complicated, high-dimensional probability distributions using a large number of samples. When trained successfully, a DGM can be used to estimate the likelihood of each observation and to create new samples from the underlying distribution. Developing DGMs has become one of the most active research areas in artificial intelligence in recent years. The literature on DGMs has become vast and is growing rapidly. Some advances have even reached the public sphere, for example, the recent successes in generating realistic-looking images, voices, or movies, so-called deep fakes. Despite these successes, several mathematical and practical issues limit the broader use of DGMs: given a specific dataset, it remains challenging to design and train a DGM and even more challenging to find out why a particular model is or is not effective. To help advance the theoretical understanding of DGMs, we provide an introduction to DGMs and a concise mathematical framework for modeling the three most popular approaches: normalizing flows (NF), variational autoencoders (VAE), and generative adversarial networks (GAN). We illustrate the advantages and disadvantages of these basic approaches using numerical experiments. Our goal is to enable and motivate the reader to contribute to this proliferating research area. Our presentation also emphasizes relations between generative modeling and optimal transport.
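
As a toy illustration of the modeling idea shared by these approaches, the NumPy sketch below uses a single fixed, invertible affine map as a stand-in for a normalizing flow: samples are generated by pushing Gaussian noise through the map, and exact likelihoods are recovered with the change-of-variables formula. The map, its parameters, and the function names are illustrative assumptions for this note, not code from the paper.

```python
import numpy as np

# Minimal (assumed) normalizing-flow sketch: a single invertible affine map
# x = A z + b applied to a standard Gaussian base distribution. Real flows
# compose many learnable invertible layers, but sampling and exact likelihood
# evaluation work the same way.

rng = np.random.default_rng(0)
d = 2

# Fixed invertible parameters (stand-ins for trained flow weights).
A = np.array([[1.5, 0.3],
              [0.0, 0.8]])
b = np.array([1.0, -2.0])

def sample(n):
    """Draw z ~ N(0, I) and push it through the map to obtain samples x."""
    z = rng.standard_normal((n, d))
    return z @ A.T + b

def log_likelihood(x):
    """Exact log p(x) via the change-of-variables formula:
    log p(x) = log N(A^{-1}(x - b); 0, I) - log|det A|."""
    z = np.linalg.solve(A, (x - b).T).T      # invert the affine map
    log_base = -0.5 * np.sum(z**2, axis=1) - 0.5 * d * np.log(2.0 * np.pi)
    _, logabsdet = np.linalg.slogdet(A)
    return log_base - logabsdet

if __name__ == "__main__":
    x = sample(5)
    print("samples:\n", x)
    print("log-likelihoods:", log_likelihood(x))
```

In a VAE, the exact inverse and determinant are replaced by an approximate encoder and a variational bound on the likelihood, while a GAN never evaluates the likelihood and instead trains the generator adversarially; these are the kinds of trade-offs among the three approaches that the abstract refers to.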


