Autoencoding beyond pixels using a learned similarity metric

We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder (VAE) with a generative adversarial network (GAN), we can use the learned feature representations in the GAN discriminator as the basis for the VAE reconstruction objective. We thereby replace element-wise errors with feature-wise errors, which better capture the data distribution while offering invariance to, e.g., translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.
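To make the objective concrete, the sketch below is a minimal PyTorch illustration of the idea, not the paper's exact method: the network sizes, the choice of discriminator layer used as the feature representation, and plain MSE on features (standing in for the paper's Gaussian feature likelihood) are all illustrative assumptions.

```python
# Minimal VAE-GAN sketch (PyTorch). Shapes assume 3x64x64 inputs; all sizes
# and the feature layer used as the similarity metric are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
        )
        self.mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.logvar = nn.Linear(64 * 16 * 16, latent_dim)

    def forward(self, x):
        h = self.conv(x)
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),  # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),   # 32x32 -> 64x64
        )

    def forward(self, z):
        return self.deconv(self.fc(z).view(-1, 64, 16, 16))

class Discriminator(nn.Module):
    """Returns the real/fake logit and an intermediate feature map; the
    feature map serves as the learned similarity metric."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
        )
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(64 * 16 * 16, 1))

    def forward(self, x):
        f = self.features(x)
        return self.classifier(f), f

def vaegan_losses(enc, dec, disc, x):
    """Compute the three loss components for one batch of images x."""
    mu, logvar = enc(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
    x_rec = dec(z)

    # KL divergence between q(z|x) and the standard normal prior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()

    # Feature-wise reconstruction: compare discriminator features of the
    # input and its reconstruction rather than raw pixels.
    logit_real, f_real = disc(x)
    _, f_rec = disc(x_rec)
    rec = F.mse_loss(f_rec, f_real.detach())

    # Standard GAN discriminator loss on real images, reconstructions,
    # and samples decoded from the prior.
    logit_rec, _ = disc(x_rec.detach())
    logit_prior, _ = disc(dec(torch.randn_like(mu)).detach())
    ones, zeros = torch.ones_like(logit_real), torch.zeros_like(logit_real)
    gan = (F.binary_cross_entropy_with_logits(logit_real, ones)
           + F.binary_cross_entropy_with_logits(logit_rec, zeros)
           + F.binary_cross_entropy_with_logits(logit_prior, zeros))

    # In a full training loop: the encoder minimizes kl + rec, the
    # discriminator minimizes gan, and the decoder's adversarial term is
    # recomputed without .detach() so its gradient can flow to the decoder.
    return kl, rec, gan
```

With one optimizer per network, each update step combines only its own terms, mirroring the three-network training implied by the abstract: element-wise pixel errors never appear, and the discriminator's feature space supplies the similarity metric.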
