HyperReenact: One-Shot Reenactment via Jointly Learning to Refine and Retarget Faces

07/20/2023
by Stella Bounareli, et al.

In this paper, we present our method for neural face reenactment, called HyperReenact, which aims to generate realistic talking head images of a source identity, driven by a target facial pose. Existing state-of-the-art face reenactment methods train controllable generative models that learn to synthesize realistic facial images, yet they either produce reenacted faces prone to significant visual artifacts, especially under the challenging condition of extreme head pose changes, or require expensive few-shot fine-tuning to better preserve the source identity characteristics. We propose to address these limitations by leveraging the photorealistic generation ability and the disentangled properties of a pretrained StyleGAN2 generator: we first invert the real images into its latent space and then use a hypernetwork to perform (i) refinement of the source identity characteristics and (ii) facial pose re-targeting, thus eliminating the dependence on external editing methods that typically produce artifacts. Our method operates in the one-shot setting (i.e., using a single source frame) and allows for cross-subject reenactment, without requiring any subject-specific fine-tuning. We compare our method both quantitatively and qualitatively against several state-of-the-art techniques on the standard benchmarks of VoxCeleb1 and VoxCeleb2, demonstrating the superiority of our approach in producing artifact-free images and its remarkable robustness even under extreme head pose changes. We make the code and the pretrained models publicly available at: https://github.com/StelaBou/HyperReenact
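
Below is a minimal PyTorch sketch of the pipeline described above, intended only to illustrate the data flow: a frozen inversion encoder embeds the single source frame and the driving frame, and a hypernetwork conditioned on both predicts offsets to the frozen generator's weights, so that one forward pass jointly refines the source identity and retargets the head pose. The encoder and generator here are stand-in stubs; all module names, dimensions, and interfaces are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Toy dimensions (assumptions for illustration only).
FEAT_DIM = 512      # per-frame feature size
N_LAYERS = 18       # generator layers receiving weight offsets
LAYER_PARAMS = 512  # parameters modulated per layer in this toy version

class StubEncoder(nn.Module):
    """Stand-in for the pretrained, frozen GAN-inversion encoder."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, FEAT_DIM))

    def forward(self, img):
        feats = self.backbone(img)
        return feats, feats  # toy: reuse the features as the latent code w

class StubGenerator(nn.Module):
    """Stand-in for the pretrained, frozen StyleGAN2 generator."""
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(FEAT_DIM + N_LAYERS * LAYER_PARAMS, 3 * 64 * 64)

    def forward(self, w, weight_offsets):
        x = torch.cat([w, weight_offsets.flatten(1)], dim=-1)
        return self.head(x).view(-1, 3, 64, 64)

class HyperNetwork(nn.Module):
    """Maps fused source/target features to per-layer weight offsets."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * FEAT_DIM, 1024),
            nn.ReLU(),
            nn.Linear(1024, N_LAYERS * LAYER_PARAMS),
        )

    def forward(self, f_source, f_target):
        fused = torch.cat([f_source, f_target], dim=-1)
        return self.mlp(fused).view(-1, N_LAYERS, LAYER_PARAMS)

def reenact(encoder, generator, hypernet, source_img, target_img):
    """One-shot reenactment: source supplies identity, target supplies pose."""
    with torch.no_grad():                      # encoder and generator stay frozen
        w_src, f_src = encoder(source_img)
        _, f_tgt = encoder(target_img)
    offsets = hypernet(f_src, f_tgt)           # refine identity + retarget pose
    return generator(w_src, offsets)           # synthesize the reenacted frame

if __name__ == "__main__":
    enc, gen, hyper = StubEncoder(), StubGenerator(), HyperNetwork()
    src = torch.randn(1, 3, 64, 64)  # single source frame (one-shot setting)
    tgt = torch.randn(1, 3, 64, 64)  # driving frame providing the target pose
    print(reenact(enc, gen, hyper, src, tgt).shape)  # torch.Size([1, 3, 64, 64])
```

The key design choice the sketch tries to convey is that the hypernetwork adapts the generator weights per source/target pair, rather than editing the inverted latent code with an external editing method, which is what allows identity refinement and pose retargeting to happen in a single, artifact-free generation step.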


Related research

09/27/2022 · StyleMask: Disentangling the Style Space of StyleGAN2 for Neural Face Reenactment
In this paper we address the problem of neural face reenactment, where, ...

04/29/2021 · Learned Spatial Representations for Few-shot Talking-Head Synthesis
We propose a novel approach for few-shot talking-head synthesis. While r...

01/31/2022 · Finding Directions in GAN's Latent Space for Neural Face Reenactment
This paper is on face/head reenactment where the goal is to transfer the...

04/22/2021 · Pose-Controllable Talking Face Generation by Implicitly Modularized Audio-Visual Representation
While accurate lip synchronization has been achieved for arbitrary-subje...

08/07/2021 · Learning Facial Representations from the Cycle-consistency of Face
Faces manifest large variations in many aspects, such as identity, expre...

09/27/2021 · WarpedGANSpace: Finding non-linear RBF paths in GAN latent space
This work addresses the problem of discovering, in an unsupervised manne...

08/03/2022 · Free-HeadGAN: Neural Talking Head Synthesis with Explicit Gaze Control
We present Free-HeadGAN, a person-generic neural talking head synthesis ...
