Y-Autoencoders: disentangling latent representations via sequential-encoding

07/25/2019
by Massimiliano Patacchiola, et al.

In the last few years there have been important advancements in generative models, with the two dominant approaches being Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). However, standard Autoencoders (AEs) and closely related structures have remained popular because they are easy to train and adapt to different tasks. An interesting question is whether we can achieve state-of-the-art performance with AEs while retaining their good properties. We propose an answer to this question by introducing a new model called Y-Autoencoder (Y-AE). The structure and training procedure of a Y-AE split the representation into an implicit and an explicit part. The implicit part is similar to the output of an autoencoder, while the explicit part is strongly correlated with labels in the training set. The two parts are separated in the latent space by splitting the output of the encoder into two paths (forming a Y shape) before decoding and re-encoding. We then impose a number of losses, such as a reconstruction loss and a loss on the dependence between the implicit and explicit parts. Additionally, the projection onto the explicit manifold is monitored by a predictor that is embedded in the encoder and trained end-to-end, with no adversarial losses. We provide significant experimental results on various domains, such as separation of style and content, image-to-image translation, and inverse graphics.
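To make the two-branch training procedure concrete, here is a minimal PyTorch sketch of a single Y-AE training step as described above. The layer sizes, the one-hot explicit code, the softmax on the explicit branch, and the unweighted sum of losses are illustrative assumptions for this sketch, not the paper's exact architecture or hyperparameters.

```python
# Minimal sketch of a Y-AE training step (assumed architecture, not the
# paper's exact one): the encoder output is split into an implicit code
# and an explicit (label-correlated) code, decoded along two branches,
# and the swapped branch is re-encoded for consistency losses.
import torch
import torch.nn as nn
import torch.nn.functional as F

class YAutoencoder(nn.Module):
    def __init__(self, x_dim=784, implicit_dim=16, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(x_dim, 256), nn.ReLU(),
            nn.Linear(256, implicit_dim + n_classes),
        )
        self.decoder = nn.Sequential(
            nn.Linear(implicit_dim + n_classes, 256), nn.ReLU(),
            nn.Linear(256, x_dim), nn.Sigmoid(),
        )
        self.implicit_dim = implicit_dim

    def encode(self, x):
        h = self.encoder(x)
        # Split the latent code: implicit part + explicit (label) logits.
        return h[:, :self.implicit_dim], h[:, self.implicit_dim:]

def training_step(model, x, y, y_swap):
    z_imp, z_exp = model.encode(x)

    # Left branch of the Y: decode with the original explicit code.
    x_rec = model.decoder(torch.cat([z_imp, F.softmax(z_exp, dim=1)], dim=1))
    loss_rec = F.mse_loss(x_rec, x)

    # Embedded predictor: the explicit code must match the label,
    # trained end-to-end with no adversarial losses.
    loss_cls = F.cross_entropy(z_exp, y)

    # Right branch: decode with a swapped explicit code, then re-encode.
    e_swap = F.one_hot(y_swap, z_exp.size(1)).float()
    x_gen = model.decoder(torch.cat([z_imp, e_swap], dim=1))
    z_imp2, z_exp2 = model.encode(x_gen)

    # Consistency: the implicit part must survive the swap,
    # while the explicit part must follow it.
    loss_imp = F.mse_loss(z_imp2, z_imp.detach())
    loss_exp = F.cross_entropy(z_exp2, y_swap)

    return loss_rec + loss_cls + loss_imp + loss_exp
```

In a full training loop, y_swap would be a batch of randomly re-sampled labels, so the decoder learns to change the explicit attribute while the re-encoding losses push the implicit code to stay invariant to that change.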


Related research

03/12/2018 · Learning the Base Distribution in Implicit Generative Models
Popular generative model learning methods such as Generative Adversarial...

06/12/2019 · Copulas as High-Dimensional Generative Models: Vine Copula Autoencoders
We propose a vine copula autoencoder to construct flexible generative mo...

07/07/2020 · Gradient Origin Networks
This paper proposes a new type of implicit generative model that is able...

06/05/2018 · Training Generative Reversible Networks
Generative models with an encoding component such as autoencoders curren...

02/20/2022 · Disentangling Autoencoders (DAE)
Noting the importance of factorizing or disentangling the latent space, ...

02/24/2023 · 3D Generative Model Latent Disentanglement via Local Eigenprojection
Designing realistic digital humans is extremely complex. Most data-drive...

01/21/2018 · Decoupled Learning for Conditional Adversarial Networks
Incorporating encoding-decoding nets with adversarial nets has been wide...
