Consistency Regularization for Variational Auto-Encoders

05/31/2021
by Samarth Sinha, et al.

Variational auto-encoders (VAEs) are a powerful approach to unsupervised learning: they enable scalable approximate posterior inference in latent-variable models through variational inference (VI). A VAE posits a variational family parameterized by a deep neural network, called an encoder, that takes data as input. Because this encoder is shared across all observations, the cost of inference is amortized. However, the encoder of a VAE has the undesirable property that it maps a given observation and a semantics-preserving transformation of it to different latent representations. This "inconsistency" of the encoder lowers the quality of the learned representations, especially for downstream tasks, and also negatively affects generalization. In this paper, we propose a regularization method to enforce consistency in VAEs. The idea is to minimize the Kullback-Leibler (KL) divergence between the variational distribution when conditioning on an observation and the variational distribution when conditioning on a random semantics-preserving transformation of that observation. This regularization is applicable to any VAE. In our experiments, we apply it to four different VAE variants on several benchmark datasets and find that it not only improves the quality of the learned representations but also leads to better generalization. In particular, when applied to the Nouveau Variational Auto-Encoder (NVAE), our regularization method yields state-of-the-art performance on MNIST and CIFAR-10. We also apply our method to 3D data and find that it learns representations of superior quality, as measured by accuracy on a downstream classification task.
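The consistency term the abstract describes can be illustrated concretely. Below is a minimal NumPy sketch, assuming the common case of an encoder that outputs the mean and log-variance of a diagonal Gaussian q(z|x); the `encoder` and `transform` arguments and the `weight` hyperparameter are hypothetical placeholders, not the paper's actual implementation, and the closed-form KL between two diagonal Gaussians is standard.

```python
import numpy as np

def kl_diag_gaussians(mu_p, logvar_p, mu_q, logvar_q):
    """Closed-form KL( N(mu_p, diag(exp(logvar_p))) || N(mu_q, diag(exp(logvar_q))) )."""
    var_p = np.exp(logvar_p)
    var_q = np.exp(logvar_q)
    return 0.5 * np.sum(logvar_q - logvar_p + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def consistency_penalty(encoder, x, transform, weight=1.0):
    """Regularizer sketched in the abstract: KL between q(z|x) and q(z|t(x)),
    where t is a semantics-preserving transformation (e.g. a small crop or flip).
    `encoder(x)` is assumed to return (mu, logvar) of a diagonal Gaussian."""
    mu_x, logvar_x = encoder(x)
    mu_t, logvar_t = encoder(transform(x))
    return weight * kl_diag_gaussians(mu_x, logvar_x, mu_t, logvar_t)
```

In training, this penalty would be added to the usual evidence lower bound objective; when the transformation leaves the encoder's output unchanged, the penalty is exactly zero.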


Related research

- 05/28/2022: Improving VAE-based Representation Learning. Latent variable models like the Variational Auto-Encoder (VAE) are commo...
- 08/16/2022: Training Latent Variable Models with Auto-encoding Variational Bayes: A Tutorial. Auto-encoding Variational Bayes (AEVB) is a powerful and general algorit...
- 07/28/2021: Unsupervised Learning of Neurosymbolic Encoders. We present a framework for the unsupervised learning of neurosymbolic en...
- 04/06/2018: Monocular Semantic Occupancy Grid Mapping with Convolutional Variational Auto-Encoders. In this work, we research and evaluate the usage of convolutional variat...
- 03/04/2020: Variational Auto-Encoder: not all failures are equal. We claim that a source of severe failures for Variational Auto-Encoders ...
- 09/30/2021: Towards Better Data Augmentation using Wasserstein Distance in Variational Auto-encoder. VAE, or variational auto-encoder, compresses data into latent attributes...
- 08/11/2023: Learning Distributions via Monte-Carlo Marginalization. We propose a novel method to learn intractable distributions from their ...
