Plug-in Factorization for Latent Representation Disentanglement

05/27/2019
by Jee Seok Yoon, et al.

In this work, we propose a Factorized Disentangler-Entangler Network (FDEN) that learns to decompose a latent representation into two mutually independent factors, namely, identity and style. Given a latent representation, the proposed framework draws a set of interpretable factors aligned with the identity of the observed data and learns to maximize the independence between these factors. Our work introduces a plug-in method for disentangling the latent representations of already-trained deep models without affecting the models themselves. In doing so, it opens the possibility of extending state-of-the-art models to new tasks while preserving their performance on the original task. Thus, FDEN can naturally perform multiple tasks, such as few-shot learning and image-to-image translation, within a single framework. We demonstrate the effectiveness of our approach to disentangling a latent representation in two parts. First, to evaluate how well a factor aligns with identity, we perform few-shot learning using only the identity factor. Then, to evaluate the quality of the decomposition and to show that the plug-in method does not degrade the underlying model's performance, we perform image-to-image style transfer by mixing the factors of different images. These evaluations show, qualitatively and quantitatively, that our proposed framework can indeed disentangle a latent representation.
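To make the plug-in idea concrete, below is a minimal PyTorch sketch of a disentangler/entangler pair operating on the latent code of a frozen, pretrained encoder. The layer sizes, the HSIC penalty standing in for the paper's independence objective, and all names (Disentangler, Entangler, hsic) are illustrative assumptions, not the authors' exact FDEN architecture or losses.

```python
# Minimal sketch: a plug-in disentangler/entangler over a frozen encoder's
# latent code. Sizes, losses, and names are assumptions for illustration only.
import torch
import torch.nn as nn

class Disentangler(nn.Module):
    """Splits a latent code z into an identity factor and a style factor."""
    def __init__(self, z_dim=256, f_dim=64):
        super().__init__()
        self.to_identity = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, f_dim))
        self.to_style = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, f_dim))

    def forward(self, z):
        return self.to_identity(z), self.to_style(z)

class Entangler(nn.Module):
    """Recombines (identity, style) factors back into a latent code."""
    def __init__(self, z_dim=256, f_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * f_dim, 128), nn.ReLU(), nn.Linear(128, z_dim))

    def forward(self, f_id, f_style):
        return self.net(torch.cat([f_id, f_style], dim=-1))

def hsic(x, y, sigma=1.0):
    """Hilbert-Schmidt independence criterion with RBF kernels, used here as a
    stand-in independence penalty between the two factors."""
    n = x.size(0)
    def rbf(a):
        d = torch.cdist(a, a) ** 2
        return torch.exp(-d / (2 * sigma ** 2))
    k, l = rbf(x), rbf(y)
    h = torch.eye(n) - torch.ones(n, n) / n    # centering matrix
    return torch.trace(k @ h @ l @ h) / (n - 1) ** 2

if __name__ == "__main__":
    z = torch.randn(32, 256)                    # latent codes from a frozen encoder
    dis, ent = Disentangler(), Entangler()
    f_id, f_style = dis(z)                      # factorize the latent code
    z_rec = ent(f_id, f_style)                  # entangle the factors back
    loss = nn.functional.mse_loss(z_rec, z) + hsic(f_id, f_style)
    loss.backward()                             # only the plug-in modules receive gradients

    # Style transfer by mixing factors of different images' latents:
    z_mix = ent(f_id.roll(1, dims=0), f_style)  # identity of one image, style of another
```

Because the encoder's latent codes are consumed as fixed inputs, only the disentangler and entangler are trained, which is what allows the factorization to be attached to an existing model without altering its original behavior.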


Related research:

- StyleAlign: Analysis and Applications of Aligned StyleGAN Models (10/21/2021)
- Smoothing the Disentangled Latent Style Space for Unsupervised Image-to-Image Translation (06/16/2021)
- A Cyclically-Trained Adversarial Network for Invariant Representation Learning (06/21/2019)
- ManiFest: Manifold Deformation for Few-shot Image Translation (11/26/2021)
- Fine-grained Image-to-Image Transformation towards Visual Recognition (01/12/2020)
- What can we learn about a generated image corrupting its latent representation? (10/12/2022)
- Learned Spatial Representations for Few-shot Talking-Head Synthesis (04/29/2021)
