
On the Latent Space of Wasserstein Auto-Encoders

02/11/2018
by Paul K. Rubenstein, et al.

We study the role of latent space dimensionality in Wasserstein auto-encoders (WAEs). Through experimentation on synthetic and real datasets, we argue that random encoders should be preferred over deterministic encoders. We highlight the potential of WAEs for representation learning with promising results on a benchmark disentanglement task.
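For readers unfamiliar with the setup the abstract refers to, the sketch below illustrates a Wasserstein auto-encoder with a random (Gaussian) encoder, trained with an MMD penalty that pushes the encoded distribution towards the prior. It is a minimal illustration, not the authors' implementation: the layer sizes, the 8-dimensional latent space, the RBF kernel bandwidth, and the penalty weight are all assumptions made for this example.

```python
# Minimal WAE-MMD sketch with a random (Gaussian) encoder.
# All sizes and hyperparameters are illustrative assumptions,
# not the configuration used in the paper.
import torch
import torch.nn as nn

LATENT_DIM = 8  # assumed latent dimensionality, for illustration only

class RandomEncoder(nn.Module):
    """Maps x to a Gaussian q(z|x) and returns a sample from it."""
    def __init__(self, in_dim=784, latent_dim=LATENT_DIM):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.log_var = nn.Linear(256, latent_dim)

    def forward(self, x):
        h = self.net(x)
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterised sample: this is what makes the encoder "random".
        return mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)

class Decoder(nn.Module):
    def __init__(self, out_dim=784, latent_dim=LATENT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

def rbf_mmd(z, z_prior, sigma=1.0):
    """Biased MMD estimate with an RBF kernel between encoded and prior samples."""
    def kernel(a, b):
        d = torch.cdist(a, b) ** 2
        return torch.exp(-d / (2 * sigma ** 2))
    return kernel(z, z).mean() + kernel(z_prior, z_prior).mean() - 2 * kernel(z, z_prior).mean()

# One illustrative training step on a stand-in batch of random data.
enc, dec = RandomEncoder(), Decoder()
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
x = torch.rand(64, 784)
z = enc(x)
x_rec = dec(z)
z_prior = torch.randn_like(z)  # samples from the Gaussian prior p(z)
loss = ((x_rec - x) ** 2).mean() + 10.0 * rbf_mmd(z, z_prior)  # penalty weight 10 is assumed
opt.zero_grad(); loss.backward(); opt.step()
```

A deterministic encoder would simply return mu; drawing z from q(z|x) instead is what the abstract means by a random encoder.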


Related Research

12/20/2019 · Chart Auto-Encoders for Manifold Structured Data
Auto-encoding and generative models have made tremendous successes in im...

02/18/2021 · On the advantages of stochastic encoders
Stochastic encoders have been used in rate-distortion theory and neural ...

03/31/2020 · Cross Scene Prediction via Modeling Dynamic Correlation using Latent Space Shared Auto-Encoders
This work addresses the following problem: given a set of unsynchroni...

04/18/2019 · Catch Me If You Can
As advances in signature recognition have reached a new plateau of perfo...

07/11/2020 · Relation-Guided Representation Learning
Deep auto-encoders (DAEs) have achieved great success in learning data r...

09/22/2022 · Assessing Robustness of EEG Representations under Data-shifts via Latent Space and Uncertainty Analysis
The recent availability of large datasets in bio-medicine has inspired t...

Code Repositories

Wasserstein-Auto-Encoders

Contains code relating to this arXiv paper: https://arxiv.org/abs/1802.03761

