Autoencoding beyond pixels using a learned similarity metric

We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder with a generative adversarial network, we can use learned feature representations in the GAN discriminator as a basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance to, e.g., translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.
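The two ideas in the abstract, a feature-wise reconstruction error computed in a discriminator's representation space, and attribute editing by latent arithmetic, can be sketched in a few lines of NumPy. Everything here is an illustrative stand-in, not the paper's architecture: the "discriminator feature map" is a single random linear layer, and `attr_dir`, `z_with`, `z_without` are hypothetical names for toy latent codes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an intermediate discriminator layer Dis_l(x): a fixed random
# linear map followed by a nonlinearity. A real model would use the
# activations of a trained convolutional discriminator.
W = rng.normal(size=(16, 64))

def disc_features(x):
    """Map a 64-d 'image' to a 16-d feature vector."""
    return np.tanh(W @ x)

def pixel_loss(x, x_rec):
    # Element-wise squared error: the standard VAE reconstruction term.
    return np.mean((x - x_rec) ** 2)

def feature_loss(x, x_rec):
    # Feature-wise error: compare discriminator features of the original
    # and the reconstruction instead of raw pixels.
    return np.mean((disc_features(x) - disc_features(x_rec)) ** 2)

x = rng.normal(size=64)
x_shift = np.roll(x, 1)  # a small "translation" of the input
print(pixel_loss(x, x_shift), feature_loss(x, x_shift))

# Latent arithmetic (toy): the mean code of examples with an attribute minus
# the mean code without it gives a direction that can be added to any code.
z_with = rng.normal(size=(10, 8)) + 1.0  # hypothetical "wearing glasses" codes
z_without = rng.normal(size=(10, 8))
attr_dir = z_with.mean(axis=0) - z_without.mean(axis=0)
z_edited = rng.normal(size=8) + attr_dir  # decoding z_edited would add the attribute
```

Both losses vanish on a perfect reconstruction; the paper's point is that, with features from a trained discriminator, the feature-wise loss is less sensitive than the pixel loss to perceptually minor changes such as small translations.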



Code Repositories

VAEGAN: Variational autoencoder using a similarity metric learned by a generative adversarial network.

autoencoder-vaegan: Keras/TensorFlow implementation of Larsen et al., https://arxiv.org/abs/1512.09300
