Statistical Regeneration Guarantees of the Wasserstein Autoencoder with Latent Space Consistency

10/08/2021
by   Anish Chakrabarty, et al.

The introduction of the Variational Autoencoder (VAE) marked a breakthrough in the history of representation learning models. Beyond its own merits, the VAE set off a series of successors, among them the Wasserstein Autoencoder (WAE), which inherits the VAE's strengths while offering generative performance that rivals generative adversarial networks (GANs). Recent years have witnessed a remarkable resurgence in statistical analyses of GANs; comparable examinations of autoencoders, despite their diverse applicability and notable empirical performance, remain largely absent. To close this gap, we investigate the statistical properties of the WAE. First, using Vapnik-Chervonenkis (VC) theory, we provide statistical guarantees that the WAE achieves the target distribution in the latent space. The main result consequently ensures regeneration of the input distribution, harnessing Optimal Transport of measures under the Wasserstein metric. In turn, this study hints at the class of distributions the WAE can reconstruct after compression into a latent law.
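For context, a minimal sketch of the objective the WAE minimizes (following Tolstikhin et al., 2018): a reconstruction cost plus a penalty that pushes the encoded (aggregated posterior) distribution Q_Z toward the latent prior P_Z. The notation below is illustrative and not taken from this paper itself.

% Schematic WAE objective: G is the decoder (generator), Q(Z|X) the encoder,
% c a reconstruction cost (e.g. squared Euclidean distance), D_Z a divergence
% on the latent space (GAN- or MMD-based), and lambda > 0 a penalty weight.
\begin{equation*}
  D_{\mathrm{WAE}}(P_X, P_G)
    = \inf_{Q(Z \mid X)}
      \mathbb{E}_{X \sim P_X}\, \mathbb{E}_{Z \sim Q(Z \mid X)}
      \big[ c\big(X, G(Z)\big) \big]
      + \lambda\, D_Z\big(Q_Z, P_Z\big).
\end{equation*}

Read this way, the latent-space consistency result corresponds to Q_Z approaching the target prior P_Z, while the regeneration guarantee corresponds to the generated distribution approaching the input law P_X under the Wasserstein metric.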


Related research

04/05/2018  Sliced-Wasserstein Autoencoder: An Embarrassingly Simple Generative Model
In this paper we study generative modeling via autoencoders while using ...

02/03/2019  Adversarial Networks and Autoencoders: The Primal-Dual Relationship and Generalization Bounds
Since the introduction of Generative Adversarial Networks (GANs) and Var...

09/15/2023  Quantifying Credit Portfolio sensitivity to asset correlations with interpretable generative neural networks
In this research, we propose a novel approach for the quantification of ...

05/23/2018  Cramer-Wold AutoEncoder
We propose a new generative model, Cramer-Wold Autoencoder (CWAE). Follo...

02/10/2020  Statistical Guarantees of Generative Adversarial Networks for Distribution Estimation
Generative Adversarial Networks (GANs) have achieved great success in un...

10/07/2022  Adversarial network training using higher-order moments in a modified Wasserstein distance
Generative-adversarial networks (GANs) have been used to produce data cl...
