Head2Head: Video-based Neural Head Synthesis

05/22/2020
by Mohammad Rami Koujan, et al.

In this paper, we propose a novel machine learning architecture for facial reenactment. In particular, contrary to model-based approaches or recent frame-based methods that use Deep Convolutional Neural Networks (DCNNs) to generate individual frames, we propose a novel method that (a) exploits the special structure of facial motion (paying particular attention to mouth motion) and (b) enforces temporal consistency. We demonstrate that the proposed method can transfer the facial expressions, pose, and gaze of a source actor to a target video in a photo-realistic fashion, more accurately than state-of-the-art methods.
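The abstract does not specify how temporal consistency is enforced; a common approach in video synthesis is to penalize large differences between consecutive generated frames during training. The sketch below is a hypothetical illustration of such a temporal consistency term (the function name and formulation are assumptions, not the authors' actual loss):

```python
import numpy as np

def temporal_consistency_loss(frames):
    """Hypothetical temporal consistency term: mean absolute difference
    between consecutive frames of a generated video.

    frames: array-like of shape (T, H, W, C) or (T, D) -- a sequence of
    T generated frames. A perfectly static sequence yields a loss of 0;
    flickering between frames increases the loss.
    """
    frames = np.asarray(frames, dtype=np.float64)
    # Differences between each pair of consecutive frames along time axis.
    diffs = np.abs(np.diff(frames, axis=0))
    return diffs.mean()
```

In practice such a term would be weighted and added to the main reconstruction/adversarial objective, and warping by optical flow is often applied before differencing so that legitimate motion is not penalized.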


