
Adversarial Networks and Autoencoders: The Primal-Dual Relationship and Generalization Bounds

by   Hisham Husain, et al.
Australian National University

Since the introduction of Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), the literature on generative modelling has witnessed an overwhelming resurgence. The impressive yet elusive empirical performance of GANs has led to many GAN-VAE hybrids that aim for GAN-level performance together with the additional benefits of VAEs, such as an encoder for feature reduction, which GANs do not offer. Recently, the Wasserstein Autoencoder (WAE) was proposed and achieves performance similar to that of GANs, yet it remains unclear whether the two models are fundamentally different or can be improved further into a unified model. In this work, we study the f-GAN and WAE models and make two main discoveries. First, we find that the f-GAN objective is equivalent to an autoencoder-like objective, which is closely linked to, and in some cases equivalent to, the WAE objective; we refer to this as the f-WAE. This equivalence allows us to explain the success of WAE. Second, the equivalence result allows us to prove, for the first time, generalization bounds for autoencoder models (WAE and f-WAE), a pertinent problem in the theoretical analysis of generative models. Furthermore, we show that the f-WAE objective is related to other statistical quantities such as the f-divergence and, in particular, is upper bounded by the Wasserstein distance, which allows us to tap into existing efficient (regularized) OT solvers to minimize the f-WAE objective. Our findings thus recommend the f-WAE as a tighter alternative to WAE, comment on generalization abilities, and take a step towards unifying these models.
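The abstract's suggestion of reusing "efficient (regularized) OT solvers" typically refers to entropic regularization solved by Sinkhorn iterations. As a minimal, self-contained sketch (not the authors' implementation; all names and parameter choices here are illustrative), the following computes the entropic-regularized optimal transport cost between two empirical point clouds:

```python
import numpy as np


def sinkhorn_cost(x, y, reg=0.5, n_iters=200):
    """Entropic-regularized OT cost between uniform empirical measures.

    x, y: arrays of shape (n, d) and (m, d) of sample points.
    reg:  entropic regularization strength (illustrative default).
    """
    n, m = len(x), len(y)
    # Squared-Euclidean ground cost matrix C[i, j] = ||x_i - y_j||^2.
    C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    K = np.exp(-C / reg)  # Gibbs kernel
    a = np.full(n, 1.0 / n)  # uniform weights on x
    b = np.full(m, 1.0 / m)  # uniform weights on y
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):  # Sinkhorn fixed-point iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]  # approximate transport plan
    return float(np.sum(P * C))


rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(50, 2))
# Transporting onto a shifted copy of the samples costs more than
# transporting onto the samples themselves.
print(sinkhorn_cost(x, x), sinkhorn_cost(x, x + 2.0))
```

In practice one would use a dedicated library such as POT (Python Optimal Transport) rather than this toy loop; the point is only that the regularized cost, which upper-bounds quantities like the f-WAE objective per the result above, is cheap to evaluate and differentiable in the samples.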


