Disentangled Representation Learning Using (β-)VAE and GAN

Given a dataset of images of objects that vary in features such as shape, size, rotation, and x-y position, the task of interest in this paper was to learn a disentangled encoding of these features in the latent vector of a Variational Autoencoder (VAE). The dSprites dataset provided the desired features for the experiments in this research. After training the VAE combined with a Generative Adversarial Network (GAN), each dimension of the latent vector was perturbed individually to probe how well the features were disentangled across dimensions. Note that the GAN was used to improve the quality of the reconstructed output images.
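As a rough illustration of the two ideas at play here, a minimal NumPy sketch is given below: the β-weighted VAE objective (reconstruction error plus a KL term scaled by β, the weighting used by β-VAE to encourage a factorised latent code) and a per-dimension latent traversal of the kind used to probe disentanglement. This is an illustrative sketch, not the paper's implementation; the function names and the Gaussian-likelihood reconstruction term are assumptions.

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Illustrative beta-VAE objective: reconstruction error + beta * KL.

    Setting beta > 1 pressures the encoder toward a factorised
    (disentangled) latent code, at some cost to reconstruction quality.
    """
    # Squared reconstruction error summed over the image
    # (corresponds to a Gaussian likelihood assumption).
    recon = np.sum((x - x_recon) ** 2)
    # Closed-form KL divergence between N(mu, exp(log_var)) and the
    # standard normal prior, summed over latent dimensions.
    kl = -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))
    return recon + beta * kl

def traverse_latent(z, dim, values):
    """Latent traversal: copy the code z and sweep one dimension.

    Decoding each perturbed code and inspecting how the output image
    changes reveals which generative factor (if any) that dimension
    encodes -- the probe described in the abstract.
    """
    codes = np.tile(z, (len(values), 1))
    codes[:, dim] = values
    return codes
```

For a perfect reconstruction with a code matching the prior (mu = 0, log σ² = 0), the loss is exactly zero; `traverse_latent` returns one latent code per swept value, differing only in the chosen dimension.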
