DSRGAN: Explicitly Learning Disentangled Representation of Underlying Structure and Rendering for Image Generation without Tuple Supervision

09/30/2019
by   Guang-Yuan Hao, et al.

We focus on explicitly learning a disentangled representation for natural image generation, in which the underlying spatial structure and the rendering applied to that structure can be controlled independently, without using tuple supervision. This setting matters because tuple supervision is costly and sometimes unavailable; however, the task is highly unconstrained and thus ill-posed. To address this, we introduce an auxiliary domain that shares a common underlying-structure space with the target domain, and we make a partially shared latent space assumption. The key idea is to encourage the partially shared latent variable to represent the underlying spatial structures common to both domains, while the two domain-specific latent variables are forced to capture the renderings of their respective domains. This is achieved by designing two parallel generative networks with a common Progressive Rendering Architecture (PRA), which constrains both networks to model the shared underlying structure and the spatially dependent relation between rendering and structure. We instantiate this method as DSRGAN (GANs for Disentangling Underlying Structure and Rendering). We also propose a quantitative criterion, the Normalized Disentanglability, to measure disentanglability. Comparison with state-of-the-art methods shows that DSRGAN significantly outperforms them in disentanglability.
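To make the partially shared latent space and the two parallel generators concrete, here is a minimal PyTorch sketch. All module names, layer sizes, and the simplified rendering branch are illustrative assumptions: they do not reproduce the authors' Progressive Rendering Architecture, and the adversarial and disentangling training objectives are omitted.

```python
import torch
import torch.nn as nn

# Sketch of the partially shared latent space idea: a shared structure
# code z_s drives both generators, while each domain keeps its own
# rendering code z_r. Names and sizes are illustrative assumptions,
# not the authors' implementation.

class StructureGenerator(nn.Module):
    """Maps the shared latent code to a spatial structure map."""
    def __init__(self, z_s_dim=64, feat=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_s_dim, feat, 4, 1, 0), nn.ReLU(),
            nn.ConvTranspose2d(feat, feat, 4, 2, 1), nn.ReLU(),
        )

    def forward(self, z_s):
        return self.net(z_s.view(z_s.size(0), -1, 1, 1))


class Renderer(nn.Module):
    """Renders an image from the structure map and a domain-specific
    rendering code (same architecture, separate weights per domain)."""
    def __init__(self, z_r_dim=64, feat=128, out_ch=3):
        super().__init__()
        self.fuse = nn.Conv2d(feat + z_r_dim, feat, 3, 1, 1)
        self.net = nn.Sequential(
            nn.ReLU(),
            nn.ConvTranspose2d(feat, feat // 2, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(feat // 2, out_ch, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, structure, z_r):
        # Broadcast the rendering code over spatial positions so the
        # rendering stays spatially tied to the underlying structure.
        b, _, h, w = structure.shape
        z_map = z_r.view(b, -1, 1, 1).expand(b, z_r.size(1), h, w)
        return self.net(self.fuse(torch.cat([structure, z_map], dim=1)))


# Two parallel generators: the structure branch is shared, the
# renderers are domain-specific.
shared_structure = StructureGenerator()
render_target, render_aux = Renderer(), Renderer()

z_s = torch.randn(8, 64)                                # shared structure code
z_r_t, z_r_a = torch.randn(8, 64), torch.randn(8, 64)   # domain-specific codes

s = shared_structure(z_s)
x_target = render_target(s, z_r_t)   # sample in the target domain
x_aux = render_aux(s, z_r_a)         # sample in the auxiliary domain
```

In this sketch, varying z_s changes the spatial layout of both generated samples in the same way, while varying a domain-specific code only changes that domain's rendering; this is the kind of behavior the paper's Normalized Disentanglability criterion is designed to quantify.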


Related research

04/28/2020 · Neural Hair Rendering
In this paper, we propose a generic neural-based hair rendering pipeline...

01/24/2019 · Learning Disentangled Representations with Reference-Based Variational Autoencoders
Learning disentangled representations from visual data, where different ...

12/22/2020 · Learning Disentangled Semantic Representation for Domain Adaptation
Domain adaptation is an important but challenging task. Most of the exis...

02/12/2019 · Density Estimation and Incremental Learning of Latent Vector for Generative Autoencoders
In this paper, we treat the image generation task using the autoencoder,...

06/22/2018 · Variational Bi-domain Triplet Autoencoder
We investigate deep generative models, which allow us to use training da...

02/26/2020 · NestedVAE: Isolating Common Factors via Weak Supervision
Fair and unbiased machine learning is an important and active field of r...

08/17/2021 · Orthogonal Jacobian Regularization for Unsupervised Disentanglement in Image Generation
Unsupervised disentanglement learning is a crucial issue for understandi...
