Learning Inverse Mappings with Adversarial Criterion

02/13/2018
by Jiyi Zhang, et al.

We propose a flipped-Adversarial AutoEncoder (FAAE) that simultaneously trains a generative model G, which maps an arbitrary latent code distribution to a data distribution, and an encoder E, which embodies an "inverse mapping" that encodes a data sample into a latent code vector. Unlike previous hybrid approaches that leverage an adversarial training criterion in constructing autoencoders, FAAE minimizes re-encoding errors in the latent space and exploits the adversarial criterion in the data space. Experimental evaluations demonstrate that the proposed framework produces sharper reconstructed images while at the same time enabling inference that captures rich semantic representations of the data.
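To make the described objective concrete, below is a minimal sketch of one FAAE-style training step, assuming small PyTorch MLPs, a Gaussian latent prior, flattened image vectors, and standard GAN losses; the networks, hyperparameters, and exact update schedule are illustrative assumptions, not the authors' implementation. The key point it shows is that the discriminator operates in the data space while the reconstruction term is a re-encoding error ||z - E(G(z))||^2 in the latent space.

```python
# Hypothetical FAAE-style training step (sketch, not the paper's code).
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 784  # assumed sizes

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())        # generator: z -> x
E = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(),
                  nn.Linear(256, latent_dim))                  # encoder: x -> z
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))                           # discriminator on data space

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_e = torch.optim.Adam(E.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_x):
    batch = real_x.size(0)
    z = torch.randn(batch, latent_dim)  # sample from the latent prior

    # 1) Discriminator: adversarial criterion applied in the data space.
    fake_x = G(z).detach()
    d_loss = bce(D(real_x), torch.ones(batch, 1)) + \
             bce(D(fake_x), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator: try to fool the data-space discriminator.
    g_loss = bce(D(G(z)), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # 3) Encoder: minimize the re-encoding error in the latent space,
    #    ||z - E(G(z))||^2, rather than a pixel-space reconstruction.
    #    (Whether G is also updated with this term is an assumption here;
    #    this sketch updates only E.)
    recon_loss = ((E(G(z).detach()) - z) ** 2).mean()
    opt_e.zero_grad(); recon_loss.backward(); opt_e.step()

    return d_loss.item(), g_loss.item(), recon_loss.item()
```

In this sketch the re-encoding loss gives E a well-defined target in the latent space, while image sharpness comes from the adversarial term on G, which matches the division of labor the abstract describes.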
