CR-VAE: Contrastive Regularization on Variational Autoencoders for Preventing Posterior Collapse

The Variational Autoencoder (VAE) is known to suffer from posterior collapse, a phenomenon in which the latent representations produced by the model become independent of the inputs. The result is degenerate representations of the input, a failure mode attributed to limitations of the VAE's objective function. In this work, we propose a novel solution to this issue: the Contrastive Regularization for Variational Autoencoders (CR-VAE). The core of our approach is to augment the original VAE with a contrastive objective that maximizes the mutual information between the representations of similar visual inputs. This strategy keeps the information flow between the input and its latent representation high, effectively avoiding posterior collapse. We evaluate our method on a series of visual datasets and demonstrate that CR-VAE outperforms state-of-the-art approaches in preventing posterior collapse.
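To make the idea concrete, the sketch below shows one plausible way to combine a standard VAE objective (reconstruction + KL) with an InfoNCE-style contrastive term computed between the latents of two augmented views of the same image. The architecture, loss weighting (`gamma`), and temperature are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CRVAE(nn.Module):
    """Minimal VAE whose latents are additionally contrastively regularized."""
    def __init__(self, in_dim=784, hidden=256, z_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim))

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

def info_nce(z1, z2, tau=0.1):
    """Contrastive term: latents of two views of the same image are the
    positive pair; all other pairings in the batch act as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                      # (B, B) similarity matrix
    labels = torch.arange(z1.size(0))               # positives on the diagonal
    return F.cross_entropy(logits, labels)

def cr_vae_loss(model, x_view1, x_view2, gamma=1.0):
    """ELBO (reconstruction + KL) plus a weighted contrastive regularizer."""
    mu1, lv1 = model.encode(x_view1)
    mu2, _ = model.encode(x_view2)
    z1 = model.reparameterize(mu1, lv1)
    recon = model.dec(z1)
    rec = F.mse_loss(recon, x_view1, reduction="sum") / x_view1.size(0)
    kl = -0.5 * torch.mean(torch.sum(1 + lv1 - mu1**2 - lv1.exp(), dim=1))
    return rec + kl + gamma * info_nce(mu1, mu2)
```

Because the contrastive term is only small when each input's latent is distinguishable from every other input's latent, the encoder cannot collapse all posteriors onto the prior without paying a large InfoNCE penalty.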



Related research:

- VMI-VAE: Variational Mutual Information Maximization Framework for VAE With Discrete and Continuous Priors (05/28/2020)
- Noise Contrastive Variational Autoencoders (07/23/2019)
- Forget-me-not! Contrastive Critics for Mitigating Posterior Collapse (07/19/2022)
- Covariate-informed Representation Learning with Samplewise Optimal Identifiable Variational Autoencoders (02/09/2022)
- AAVAE: Augmentation-Augmented Variational Autoencoders (07/26/2021)
- On the Importance of the Kullback-Leibler Divergence Term in Variational Autoencoders for Text Generation (09/30/2019)
- Preventing Posterior Collapse Induced by Oversmoothing in Gaussian VAE (02/17/2021)
