Generative adversarial interpolative autoencoding: adversarial training on latent space interpolations encourages convex latent distributions

07/17/2018
by Tim Sainburg, et al.

We present a neural network architecture based upon the Autoencoder (AE) and Generative Adversarial Network (GAN) that promotes a convex latent distribution by training adversarially on latent space interpolations. By using an AE as both the generator and the discriminator of a GAN, we pass a pixel-wise error function across the discriminator, yielding an AE that produces non-blurry samples matching both high- and low-level features of the original images. Interpolations between images in this space remain within the latent-space distribution of real images as learned by the discriminator, and therefore preserve realistic resemblances to the network inputs.
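The sketch below illustrates the training scheme described in the abstract: both the generator and the discriminator are autoencoders, the discriminator is scored by pixel-wise reconstruction error, and the generator is additionally trained on decoded interpolations between latent codes so that points between real examples also look realistic. The network definition (SimpleAE), the midpoint interpolation, and the unweighted loss terms are simplifying assumptions for illustration, not the authors' exact architecture or loss balancing.

```python
# Minimal PyTorch sketch of adversarial training on latent interpolations
# with autoencoders as both generator and discriminator (illustrative only).
import torch
import torch.nn as nn

class SimpleAE(nn.Module):
    """Toy fully-connected autoencoder standing in for the paper's networks (assumption)."""
    def __init__(self, dim=784, latent=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, dim), nn.Sigmoid())

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

gen, disc = SimpleAE(), SimpleAE()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
pixel_loss = nn.L1Loss()

def train_step(x):
    # Generator pass: reconstruct inputs and decode midpoint interpolations
    # between shuffled pairs of latent codes.
    x_rec, z = gen(x)
    z_interp = 0.5 * (z + z[torch.randperm(z.size(0))])
    x_interp = gen.dec(z_interp)

    # Discriminator is itself an AE: reconstruct real images well, and
    # generated / interpolated images poorly (pixel-wise error as the signal).
    d_real, _ = disc(x)
    d_rec, _ = disc(x_rec.detach())
    d_int, _ = disc(x_interp.detach())
    loss_d = (pixel_loss(d_real, x)
              - pixel_loss(d_rec, x_rec.detach())
              - pixel_loss(d_int, x_interp.detach()))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator: match the input pixels and fool the discriminator-AE, i.e.
    # make both reconstructions and interpolations easy for it to reconstruct.
    d_rec2, _ = disc(x_rec)
    d_int2, _ = disc(x_interp)
    loss_g = (pixel_loss(x_rec, x)
              + pixel_loss(d_rec2, x_rec)
              + pixel_loss(d_int2, x_interp))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_g.item(), loss_d.item()
```

Because the discriminator is an autoencoder rather than a binary classifier, its pixel-wise reconstruction error can be passed back to the generator, which is what allows the generated samples to match low-level image detail instead of being blurred.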
