Gaussian mixture models with Wasserstein distance

06/12/2018
by Benoit Gaujac, et al.

Generative models with both discrete and continuous latent variables are strongly motivated by the structure of many real-world data sets. They are, however, subtle to train, and this often manifests as the discrete latent variable being under-leveraged. In this paper, we show that such models are more amenable to training within the Optimal Transport framework of Wasserstein Autoencoders. We find that the discrete latent variable is fully leveraged by the trained model, without any modification to the objective function or significant fine-tuning. Our model generates samples comparable to those of other approaches while using relatively simple neural networks, since the discrete latent variable carries much of the descriptive burden. Furthermore, the discrete latent provides significant control over generation.
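To make the setup concrete, the sketch below is a rough illustration, not the authors' code: it pairs a discrete latent (the mixture component index) with a continuous code drawn from a Gaussian mixture prior, and computes the RBF-kernel MMD penalty that a WAE-MMD-style objective would add to the reconstruction cost. The dimensions, mixture weights, and all variable names are assumptions made for the example.

```python
import numpy as np

# A minimal sketch (assumed shapes and names, not the paper's implementation):
# the latent is a pair (k, z) with a discrete component index k and a
# continuous code z drawn from the corresponding Gaussian mixture component.
K, D = 10, 8                          # number of mixture components, latent dimension (assumed)
rng = np.random.default_rng(0)
weights = np.full(K, 1.0 / K)         # uniform mixing weights (assumed)
means = rng.normal(scale=3.0, size=(K, D))

def sample_prior(n):
    """Sample (k, z) from the mixture prior: k ~ Cat(weights), z | k ~ N(means[k], I)."""
    k = rng.choice(K, size=n, p=weights)
    z = means[k] + rng.normal(size=(n, D))
    return k, z

def mmd_rbf(x, y, sigma=1.0):
    """Biased (V-statistic) estimate of the squared RBF-kernel MMD between sample sets x and y."""
    def gram(a, b):
        sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-sq_dists / (2.0 * sigma ** 2))
    return gram(x, x).mean() + gram(y, y).mean() - 2.0 * gram(x, y).mean()

# A WAE-MMD-style objective would minimise
#     reconstruction cost + lambda * MMD^2(Q_Z, P_Z),
# where Q_Z holds encoded data points and P_Z holds samples from the mixture prior.
_, p_z = sample_prior(256)
q_z = rng.normal(size=(256, D))       # stand-in for encoder outputs on a data batch
print("MMD^2 penalty between aggregate posterior and mixture prior:", mmd_rbf(q_z, p_z))
```

The MMD penalty is one common choice of divergence in the Wasserstein Autoencoder framework; it only requires samples from the prior, which is what makes a mixture prior with an explicit discrete component straightforward to plug in.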


Related research

- 10/07/2020 · Learning Deep-Latent Hierarchies by Stacking Wasserstein Autoencoders: Probabilistic models with hierarchical-latent-variable structures provid...
- 10/04/2019 · Stacked Wasserstein Autoencoder: Approximating distributions over complicated manifolds, such as natural ...
- 10/04/2021 · A moment-matching metric for latent variable generative models: It can be difficult to assess the quality of a fitted model when facing ...
- 01/07/2020 · Paraphrase Generation with Latent Bag of Words: Paraphrase generation is a longstanding important problem in natural lan...
- 05/12/2018 · Gaussian Mixture Latent Vector Grammars: We introduce Latent Vector Grammars (LVeGs), a new framework that extend...
- 12/01/2021 · Structural Sieves: This paper explores the use of deep neural networks for semiparametric e...
- 05/26/2018 · Revisiting Reweighted Wake-Sleep: Discrete latent-variable models, while applicable in a variety of settin...
