Gromov-Wasserstein Autoencoders

09/15/2022
by Nao Nakagawa, et al.

Learning concise data representations without supervisory signals is a fundamental challenge in machine learning. A prominent approach to this goal is likelihood-based models, such as variational autoencoders (VAEs), which learn latent representations based on a meta-prior: a general premise assumed beneficial for downstream tasks (e.g., disentanglement). However, such approaches often deviate from the original likelihood architecture in order to apply the introduced meta-prior, causing undesirable changes in their training. In this paper, we propose a novel representation learning method, Gromov-Wasserstein Autoencoders (GWAE), which directly matches the latent and data distributions. Instead of a likelihood-based objective, GWAE models have a trainable prior optimized by minimizing the Gromov-Wasserstein (GW) metric. The GW metric measures the distance-structure-oriented discrepancy between distributions supported on incomparable spaces, e.g., spaces with different dimensionalities. By restricting the family of the trainable prior, we can introduce meta-priors to control latent representations for downstream tasks. An empirical comparison with existing VAE-based methods shows that GWAE models can learn representations based on meta-priors by changing the prior family alone, without further modifying the GW objective.


Related research:
- 11/24/2019: dpVAEs: Fixing Sample Generation for Regularized VAEs
- 05/28/2022: Improving VAE-based Representation Learning
- 07/14/2020: Failure Modes of Variational Autoencoders and Their Effects on Downstream Tasks
- 12/12/2018: Recent Advances in Autoencoder-Based Representation Learning
- 02/07/2020: Learning Autoencoders with Relational Regularization
- 07/19/2023: Symmetric Equilibrium Learning of VAEs
- 09/10/2019: Learning Priors for Adversarial Autoencoders
