Wasserstein Auto-Encoders
Contains code accompanying the arXiv paper https://arxiv.org/abs/1802.03761
We study the role of latent space dimensionality in Wasserstein auto-encoders (WAEs). Through experimentation on synthetic and real datasets, we argue that random encoders should be preferred over deterministic encoders. We highlight the potential of WAEs for representation learning with promising results on a benchmark disentanglement task.
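To make the random-vs-deterministic encoder distinction concrete, here is a minimal, illustrative sketch in pure Python (not the paper's implementation): a 1-D latent, a toy Gaussian "random encoder", and the kind of MMD penalty WAEs use to match the aggregate posterior to the prior. The encoder, kernel bandwidth, and sample sizes are all assumptions chosen for illustration.

```python
import math
import random

def rbf(a, b, gamma=1.0):
    """RBF kernel between two scalar latent codes."""
    return math.exp(-gamma * (a - b) ** 2)

def mmd(xs, ys, gamma=1.0):
    """Biased (V-statistic) estimate of squared MMD between two samples."""
    n, m = len(xs), len(ys)
    k_xx = sum(rbf(a, b, gamma) for a in xs for b in xs) / (n * n)
    k_yy = sum(rbf(a, b, gamma) for a in ys for b in ys) / (m * m)
    k_xy = sum(rbf(a, b, gamma) for a in xs for b in ys) / (n * m)
    return k_xx + k_yy - 2.0 * k_xy

random.seed(0)

# Hypothetical 1-D encoder: a random encoder samples around a mean code,
# a deterministic encoder would return the mean itself.
def random_encode(mu, sigma=0.5):
    return mu + sigma * random.gauss(0.0, 1.0)

prior = [random.gauss(0.0, 1.0) for _ in range(200)]  # prior p(z) = N(0, 1)

# Aggregate posterior roughly matching the prior vs. one far from it.
codes_match = [random_encode(random.gauss(0.0, 0.85)) for _ in range(200)]
codes_far = [random_encode(3.0) for _ in range(200)]

mmd_match = mmd(prior, codes_match)  # small: distributions overlap
mmd_far = mmd(prior, codes_far)      # large: aggregate posterior far from prior
print(mmd_match, mmd_far)
```

During WAE training this MMD term (or an adversarial divergence) is added to the reconstruction loss, pushing the encoded distribution toward the prior; a random encoder spreads each input's code over a region of latent space, which is the behavior the paper argues for over deterministic encoding.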