StyleMask: Disentangling the Style Space of StyleGAN2 for Neural Face Reenactment

09/27/2022
by Stella Bounareli, et al.

In this paper, we address the problem of neural face reenactment: given a pair of source and target facial images, we transfer the target's pose (defined as the head pose and facial expressions) to the source image while preserving the source's identity characteristics (e.g., facial shape, hairstyle, etc.), even in the challenging case where the source and target faces belong to different identities. In doing so, we address several limitations of state-of-the-art works, namely that a) they depend on paired training data (i.e., source and target faces having the same identity), b) they rely on labeled data during inference, and c) they do not preserve identity under large head pose changes. More specifically, we propose a framework that, using unpaired, randomly generated facial images, learns to disentangle the identity characteristics of the face from its pose by incorporating the recently introduced style space 𝒮 of StyleGAN2, a latent representation space that exhibits remarkable disentanglement properties. Capitalizing on this, we learn to successfully mix a pair of source and target style codes using supervision from a 3D model. The resulting latent code, subsequently used for reenactment, consists of latent units corresponding only to the facial pose of the target and units corresponding only to the identity of the source, leading to a notable improvement in reenactment performance over recent state-of-the-art methods. We show quantitatively and qualitatively that the proposed method produces higher-quality results than the state of the art, even under extreme pose variations. Finally, we report results on real images by first embedding them into the latent space of the pretrained generator. We make the code and pretrained models publicly available at: https://github.com/StelaBou/StyleMask
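The mixing step described in the abstract can be sketched as a channel-wise blend of the two style codes, where a mask selects which channels of 𝒮 follow the target's pose and which keep the source's identity. This is a minimal illustrative sketch, not the paper's implementation: the toy dimensionality, the hand-set mask, and the function name `mix_style_codes` are all assumptions (in the actual method the mask is learned with 3D-model supervision over StyleGAN2's full style space).

```python
# Toy dimensionality for illustration only; StyleGAN2's style space S
# has thousands of channels in practice.
STYLE_DIM = 8

def mix_style_codes(source_s, target_s, mask):
    """Channel-wise mix of two style codes.

    Channels where mask == 1 take the target's values (intended to carry
    pose), channels where mask == 0 keep the source's values (intended
    to carry identity).
    """
    return [m * t + (1.0 - m) * s for s, t, m in zip(source_s, target_s, mask)]

# Stand-ins for a source (identity) and target (pose) style code.
source_s = [0.0] * STYLE_DIM
target_s = [1.0] * STYLE_DIM

# Hand-set mask: pretend the first two channels encode pose.
mask = [1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]

mixed = mix_style_codes(source_s, target_s, mask)
# Channels 0-1 now come from the target, channels 2-7 from the source.
```

Feeding such a mixed code back through the pretrained generator is what produces the reenacted image; the quality of the result hinges on how cleanly the learned mask separates pose channels from identity channels.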



Related research

- HyperReenact: One-Shot Reenactment via Jointly Learning to Refine and Retarget Faces (07/20/2023)
- VariTex: Variational Neural Face Textures (04/13/2021)
- Learned Spatial Representations for Few-shot Talking-Head Synthesis (04/29/2021)
- Learning Typographic Style (03/13/2016)
- Face Swapping as A Simple Arithmetic Operation (11/19/2022)
- Style Your Hair: Latent Optimization for Pose-Invariant Hairstyle Transfer via Local-Style-Aware Hair Alignment (08/16/2022)
- TransMoMo: Invariance-Driven Unsupervised Video Motion Retargeting (03/31/2020)
