β-VAEs can retain label information even at high compression

12/06/2018
by Emily Fertig et al.

In this paper, we investigate the degree to which the encoding of a β-VAE captures label information, across multiple architectures, on Binary Static MNIST and Omniglot. We demonstrate that, even though it is trained in a completely unsupervised manner, a β-VAE can retain a large amount of label information, even when asked to learn a highly compressed representation.
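For readers unfamiliar with the objective: a β-VAE weights the KL term of the standard VAE evidence lower bound by a factor β, so larger β enforces a more compressed latent code (β = 1 recovers the standard VAE). Below is a minimal sketch of the per-batch loss, assuming a diagonal-Gaussian encoder and a Bernoulli decoder (a natural fit for binary data such as Binary Static MNIST); the function name and the default β are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """Per-batch beta-VAE loss: reconstruction + beta * KL.

    Assumes binary inputs in [0, 1] and a diagonal-Gaussian encoder
    q(z|x) = N(mu, exp(logvar)). beta > 1 pushes toward a more
    compressed latent code; beta = 1 recovers the standard VAE.
    """
    # Reconstruction term: negative Bernoulli log-likelihood of the data.
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    # KL(q(z|x) || N(0, I)) in closed form for a diagonal Gaussian.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```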


Related research

05/25/2017
Investigation of Using VAE for i-Vector Speaker Verification
New system for i-vector speaker recognition based on variational autoenc...

10/24/2021
Discrete acoustic space for an efficient sampling in neural text-to-speech
We present an SVQ-VAE architecture using a split vector quantizer for NT...

11/21/2020
SHOT-VAE: Semi-supervised Deep Generative Models With Label-aware ELBO Approximations
Semi-supervised variational autoencoders (VAEs) have obtained strong res...

11/26/2019
A Preliminary Study of Disentanglement With Insights on the Inadequacy of Metrics
Disentangled encoding is an important step towards a better representati...

06/18/2020
A Tutorial on VAEs: From Bayes' Rule to Lossless Compression
The Variational Auto-Encoder (VAE) is a simple, efficient, and popular d...

11/26/2021
A model of semantic completion in generative episodic memory
Many different studies have suggested that episodic memory is a generati...

07/19/2023
Impact of Disentanglement on Pruning Neural Networks
Deploying deep learning neural networks on edge devices, to accomplish t...
