Learning Inverse Mappings with Adversarial Criterion
We propose a flipped-Adversarial AutoEncoder (FAAE) that simultaneously trains a generative model G, which maps an arbitrary latent code distribution to a data distribution, and an encoder E that embodies an "inverse mapping" encoding a data sample into a latent code vector. Unlike previous hybrid approaches that leverage an adversarial training criterion in constructing autoencoders, FAAE minimizes re-encoding errors in the latent space and exploits the adversarial criterion in the data space. Experimental evaluations demonstrate that the proposed framework produces sharper reconstructed images while at the same time enabling inference that captures a rich semantic representation of the data.
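The objective sketched in the abstract combines an adversarial loss applied in the data space with a re-encoding loss measured in the latent space. Below is a minimal PyTorch sketch of one training step under that objective; the network definitions for G, E, and the data-space discriminator D, the dimensions, and the weighting term lambda_rec are illustrative assumptions rather than the paper's exact architecture or hyperparameters.

```python
# Minimal sketch of an FAAE-style training step (assumed architecture and weights).
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # assumed sizes for illustration

# Assumed toy networks: generator G, encoder E, and data-space discriminator D.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim), nn.Tanh())
E = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_ge = torch.optim.Adam(list(G.parameters()) + list(E.parameters()), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
lambda_rec = 1.0  # assumed weight on the latent re-encoding term

def train_step(real_x):
    batch = real_x.size(0)
    z = torch.randn(batch, latent_dim)  # sample from the latent code distribution
    fake_x = G(z)

    # Discriminator update: adversarial criterion applied in the data space.
    d_loss = bce(D(real_x), torch.ones(batch, 1)) + \
             bce(D(fake_x.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator + encoder update: fool D in the data space while
    # minimizing the re-encoding error ||E(G(z)) - z||^2 in the latent space.
    adv_loss = bce(D(fake_x), torch.ones(batch, 1))
    rec_loss = ((E(fake_x) - z) ** 2).mean()
    ge_loss = adv_loss + lambda_rec * rec_loss
    opt_ge.zero_grad(); ge_loss.backward(); opt_ge.step()
    return d_loss.item(), ge_loss.item()
```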