On the Latent Space of Wasserstein Auto-Encoders

02/11/2018
by Paul K. Rubenstein, et al.

We study the role of latent space dimensionality in Wasserstein auto-encoders (WAEs). Through experimentation on synthetic and real datasets, we argue that random encoders should be preferred over deterministic encoders. We highlight the potential of WAEs for representation learning with promising results on a benchmark disentanglement task.
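
The abstract contrasts deterministic encoders with random (stochastic) ones. As a rough illustration of that distinction, below is a minimal sketch (not the authors' code) of a WAE trained with an MMD penalty and a Gaussian random encoder in PyTorch; the layer sizes, RBF bandwidth, and penalty weight lam are illustrative assumptions. A deterministic encoder would instead output z directly from the network and skip the sampling step.

import torch
import torch.nn as nn

latent_dim, data_dim, lam = 8, 64, 10.0  # illustrative choices, not from the paper

class RandomEncoder(nn.Module):
    """Gaussian (random) encoder: predicts mean and log-variance, then samples z."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 2 * latent_dim))

    def forward(self, x):
        mu, logvar = self.net(x).chunk(2, dim=1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterised sample

decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                        nn.Linear(128, data_dim))

def rbf_mmd(z, z_prior, bandwidth=2.0 * latent_dim):
    """Biased RBF-kernel estimate of MMD^2 between encoded codes and prior samples."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / bandwidth)
    return k(z, z).mean() + k(z_prior, z_prior).mean() - 2 * k(z, z_prior).mean()

encoder = RandomEncoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

x = torch.randn(256, data_dim)             # stand-in for a real data batch
z = encoder(x)                              # codes from the random encoder
recon = ((decoder(z) - x) ** 2).mean()      # reconstruction cost
penalty = rbf_mmd(z, torch.randn_like(z))   # push encoded codes toward the N(0, I) prior
(recon + lam * penalty).backward()
opt.step()

Varying latent_dim in a setup of this kind is the sort of experiment the abstract refers to when it studies the role of latent space dimensionality.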


Related research

12/20/2019 - Chart Auto-Encoders for Manifold Structured Data
Auto-encoding and generative models have made tremendous successes in im...

02/18/2021 - On the advantages of stochastic encoders
Stochastic encoders have been used in rate-distortion theory and neural ...

03/31/2020 - Cross Scene Prediction via Modeling Dynamic Correlation using Latent Space Shared Auto-Encoders
This work addresses the following problem: given a set of unsynchroni...

04/18/2019 - Catch Me If You Can
As advances in signature recognition have reached a new plateau of perfo...

09/22/2022 - Assessing Robustness of EEG Representations under Data-shifts via Latent Space and Uncertainty Analysis
The recent availability of large datasets in bio-medicine has inspired t...

07/11/2020 - Relation-Guided Representation Learning
Deep auto-encoders (DAEs) have achieved great success in learning data r...

07/17/2023 - A benchmark of categorical encoders for binary classification
Categorical encoders transform categorical features into numerical repre...
