Use of Student's t-Distribution for the Latent Layer in a Coupled Variational Autoencoder

11/21/2020
by Kevin R. Chen, et al.

A Coupled Variational Autoencoder, which incorporates both a generalized loss function and a generalized latent-layer distribution, improves the accuracy and robustness of generated replicas of MNIST numerals. The latent layer uses a Student's t-distribution to incorporate heavy-tailed decay. The loss function uses a coupled logarithm, which increases the penalty on images with outlier likelihoods. The generalized mean of the generated images' likelihoods is used to measure the algorithm's decisiveness, accuracy, and robustness.
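As a rough sketch of two of these ingredients, the NumPy code below draws a heavy-tailed latent code using the standard Student's t reparameterization (a Gaussian draw divided by the square root of a scaled chi-squared draw) and computes a generalized (power) mean of per-image likelihoods. The function names, the degrees of freedom nu, the power p, and the example values are illustrative assumptions, not settings taken from the paper, and the coupled-logarithm loss itself is not reproduced here.

import numpy as np

def sample_student_t_latent(loc, scale, nu, rng):
    # Reparameterized draw from a Student's t latent layer: a Student's t
    # variable with nu degrees of freedom equals Z / sqrt(V / nu), where
    # Z is standard normal and V is chi-squared with nu degrees of freedom.
    z = rng.standard_normal(loc.shape)        # light-tailed Gaussian draw
    v = rng.chisquare(nu, size=loc.shape)     # chi-squared mixing variable
    t = z / np.sqrt(v / nu)                   # heavy-tailed standard t draw
    return loc + scale * t                    # shift/scale to the latent code

def generalized_mean(likelihoods, p):
    # Power mean M_p = (mean(x**p))**(1/p); smaller (or negative) p weights
    # low-likelihood outlier images more heavily, larger p weights
    # high-likelihood images, and the p -> 0 limit is the geometric mean
    # (not handled explicitly here).
    x = np.asarray(likelihoods, dtype=float)
    return np.mean(x ** p) ** (1.0 / p)

# Illustrative usage; nu, p, and the likelihood values are made up.
rng = np.random.default_rng(0)
mu, sigma = np.zeros(8), np.ones(8)           # stand-ins for encoder outputs
z = sample_student_t_latent(mu, sigma, nu=3.0, rng=rng)
robust_score = generalized_mean([0.8, 0.7, 0.05, 0.9], p=-1.0)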


Related research

06/03/2019 - Coupled VAE: Improved Accuracy and Robustness of a Variational Autoencoder
  We present a coupled Variational Auto-Encoder (VAE) method that improves...

07/11/2017 - Least Square Variational Bayesian Autoencoder with Regularization
  In recent years Variational Autoencoders have become one of the most popul...

11/15/2021 - An adaptive dimension reduction algorithm for latent variables of variational autoencoder
  Constructed by the neural network, variational autoencoder has the overf...

01/25/2019 - Diffusion Variational Autoencoders
  A standard Variational Autoencoder, with a Euclidean latent space, is st...

01/28/2022 - Geometric instability of out of distribution data across autoencoder architecture
  We study the map learned by a family of autoencoders trained on MNIST, a...

11/24/2021 - 3D Shape Variational Autoencoder Latent Disentanglement via Mini-Batch Feature Swapping for Bodies and Faces
  Learning a disentangled, interpretable, and structured latent representa...

02/06/2022 - Enhancing variational generation through self-decomposition
  In this article we introduce the notion of Split Variational Autoencoder...
