AS-IntroVAE: Adversarial Similarity Distance Makes Robust IntroVAE

06/28/2022
by Changjie Lu, et al.

Recently, introspective models such as IntroVAE and S-IntroVAE have excelled at image generation and reconstruction tasks. The principal characteristic of introspective models is adversarial learning within the VAE, where the encoder attempts to distinguish between real and fake (i.e., synthesized) images. However, because no effective metric is available for evaluating the difference between real and fake images, posterior collapse and the vanishing-gradient problem persist, reducing the fidelity of the synthesized images. In this paper, we propose a new variation of IntroVAE called Adversarial Similarity Distance Introspective Variational Autoencoder (AS-IntroVAE). We theoretically analyze the vanishing-gradient problem and construct a new Adversarial Similarity Distance (AS-Distance) using the 2-Wasserstein distance and the kernel trick. With weight annealing on the AS-Distance and the KL-Divergence, AS-IntroVAE is able to generate stable and high-quality images. The posterior collapse problem is addressed by transforming images per batch, rather than per image, so that they better fit the prior distribution in the latent space. Compared with the per-image approach, this strategy fosters more diverse distributions in the latent space, allowing our model to produce images of great diversity. Comprehensive experiments on benchmark datasets demonstrate the effectiveness of AS-IntroVAE on image generation and reconstruction tasks.
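As a rough illustration of two ingredients named in the abstract, the sketch below shows the closed-form 2-Wasserstein distance between diagonal Gaussians and a simple weight-annealing schedule. This is a minimal sketch, not the authors' implementation: the function names, the diagonal-covariance assumption, and the linear schedule are illustrative choices, and the full AS-Distance (including the kernel-based similarity term) is defined only in the paper itself.

```python
import torch

def gaussian_w2_squared(mu1, logvar1, mu2, logvar2):
    # Closed-form squared 2-Wasserstein distance between two diagonal
    # Gaussians N(mu1, diag(sigma1^2)) and N(mu2, diag(sigma2^2)):
    #   W2^2 = ||mu1 - mu2||^2 + ||sigma1 - sigma2||^2
    sigma1 = torch.exp(0.5 * logvar1)
    sigma2 = torch.exp(0.5 * logvar2)
    return ((mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2).sum(dim=-1)

def annealed_weight(step, total_steps, w_start=0.0, w_end=1.0):
    # Linear annealing from w_start to w_end over training. The paper
    # anneals the weights on the AS-Distance and the KL-Divergence; the
    # exact schedule here is only an assumed, illustrative choice.
    t = min(step / max(total_steps, 1), 1.0)
    return w_start + t * (w_end - w_start)
```

A distance of this general form, weighted against the KL term by an annealed coefficient, captures the rough shape of the training objective described above; the precise AS-Distance construction and schedule are given in the paper.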


