Face Identity Disentanglement via Latent Space Mapping

05/15/2020
by Yotam Nitzan, et al.

Learning disentangled representations of data is a fundamental problem in artificial intelligence. Specifically, disentangled latent representations allow generative models to control and compose the disentangled factors in the synthesis process. Current methods, however, require extensive supervision and training, or instead noticeably compromise quality. In this paper, we present a method that learns how to represent data in a disentangled way, with minimal supervision, manifested solely using available pre-trained networks. Our key insight is to decouple the processes of disentanglement and synthesis by employing a leading pre-trained unconditional image generator, such as StyleGAN. By learning to map into its latent space, we leverage both its state-of-the-art generative quality and its rich and expressive latent space, without the burden of training it. We demonstrate our approach on the complex and high-dimensional domain of human heads. We evaluate our method qualitatively and quantitatively, and exhibit its success with de-identification operations and with temporal identity coherency in image sequences. Through this extensive experimentation, we show that our method successfully disentangles identity from other facial attributes, surpassing existing methods, even though they require more training and supervision.
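The decoupling idea described above can be sketched in a few lines: two separate encoders produce an identity code and an attribute code, and only a small mapping network, which projects their concatenation into the latent space of a frozen pre-trained generator, would be trained. The following is a minimal toy sketch of that data flow; the encoder names, dimensions, and random linear maps are illustrative stand-ins, not the paper's actual networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for pre-trained networks (hypothetical names and sizes).
# encode_identity  : image -> 256-d identity code (e.g. a face-recognition embedding)
# encode_attributes: image -> 256-d attribute code (pose, expression, lighting, ...)
# map_to_latent    : concatenated codes -> 512-d latent of a frozen generator
W_id = rng.standard_normal((256, 1024)) * 0.01
W_attr = rng.standard_normal((256, 1024)) * 0.01
W_map = rng.standard_normal((512, 512)) * 0.01

def encode_identity(img):
    # Frozen identity encoder: captures *who* is in the image.
    return np.tanh(W_id @ img)

def encode_attributes(img):
    # Frozen attribute encoder: captures everything except identity.
    return np.tanh(W_attr @ img)

def map_to_latent(z_id, z_attr):
    # Only this mapping would be learned; the generator itself stays fixed,
    # so synthesis quality comes for free from the pre-trained model.
    return W_map @ np.concatenate([z_id, z_attr])

# Two "images" as flat 1024-d vectors: one supplies the identity,
# the other supplies the remaining facial attributes.
img_identity = rng.standard_normal(1024)
img_attributes = rng.standard_normal(1024)

w = map_to_latent(encode_identity(img_identity),
                  encode_attributes(img_attributes))
print(w.shape)  # (512,) -- latent fed to the frozen generator
```

Because the generator is never updated, swapping `img_identity` while holding `img_attributes` fixed changes only the identity component of the synthesized result, which is the disentanglement property the abstract claims.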


Related research

- 03/30/2022: High-resolution Face Swapping via Latent Semantics Disentanglement
- 08/25/2021: Heredity-aware Child Face Image Generation with Latent Space Disentanglement
- 12/04/2018: A Spectral Regularizer for Unsupervised Disentanglement
- 07/15/2021: StyleVideoGAN: A Temporal Generative Model using a Pretrained StyleGAN
- 08/26/2019: Learning Disentangled Representations via Independent Subspaces
- 11/29/2019: Transflow Learning: Repurposing Flow Models Without Retraining
- 12/10/2018: Disentangled Dynamic Representations from Unordered Data
