FACIAL: Synthesizing Dynamic Talking Face with Implicit Attribute Learning

08/18/2021
by   Chenxu Zhang, et al.

In this paper, we propose a talking face generation method that takes an audio signal as input and a short target video clip as reference, and synthesizes a photo-realistic video of the target face with natural lip motions, head poses, and eye blinks that are in sync with the input audio signal. We note that the synthetic face attributes include not only explicit ones such as lip motions that have high correlations with speech, but also implicit ones such as head poses and eye blinks that have only weak correlation with the input audio. To model such complicated relationships between different face attributes and the input audio, we propose a FACe Implicit Attribute Learning Generative Adversarial Network (FACIAL-GAN), which integrates phonetics-aware, context-aware, and identity-aware information to synthesize 3D face animation with realistic motions of lips, head poses, and eye blinks. Then, our Rendering-to-Video network takes the rendered face images and the attention map of eye blinks as input to generate the photo-realistic output video frames. Experimental results and user studies show that our method can generate realistic talking face videos with not only synchronized lip motions, but also natural head movements and eye blinks, with better quality than the results of state-of-the-art methods.
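To make the two-stage structure described above concrete, here is a minimal PyTorch-style sketch of how a phonetics-, context-, and identity-aware generator followed by a rendering-to-video network could be wired together. All module names, feature dimensions, parameter splits, and the simple convolutional renderer are assumptions for illustration only; they are not the authors' implementation.

```python
# Hypothetical sketch of the FACIAL pipeline described in the abstract.
# Module names, feature sizes, and output splits are assumptions.
import torch
import torch.nn as nn

class FacialGANGenerator(nn.Module):
    """Maps per-frame audio features to 3D face animation parameters:
    explicit attributes (lip-related expression) plus implicit ones
    (head pose, eye blinks)."""
    def __init__(self, audio_dim=128, id_dim=64, hidden=256):
        super().__init__()
        # phonetics-aware branch: per-frame audio encoding
        self.phonetic = nn.Linear(audio_dim, hidden)
        # context-aware branch: temporal context over the audio sequence
        self.context = nn.GRU(hidden, hidden, batch_first=True)
        # identity-aware branch: embedding of the target speaker
        self.identity = nn.Linear(id_dim, hidden)
        # heads for explicit and implicit attributes (sizes are assumed)
        self.expression_head = nn.Linear(3 * hidden, 64)  # e.g. 3DMM expression coeffs
        self.pose_head = nn.Linear(3 * hidden, 6)          # head rotation + translation
        self.blink_head = nn.Linear(3 * hidden, 1)         # eye-blink signal

    def forward(self, audio_feats, id_embed):
        # audio_feats: (B, T, audio_dim), id_embed: (B, id_dim)
        phon = torch.relu(self.phonetic(audio_feats))       # (B, T, hidden)
        ctx, _ = self.context(phon)                         # (B, T, hidden)
        ident = torch.relu(self.identity(id_embed))         # (B, hidden)
        ident = ident.unsqueeze(1).expand(-1, phon.size(1), -1)
        fused = torch.cat([phon, ctx, ident], dim=-1)       # (B, T, 3*hidden)
        return {
            "expression": self.expression_head(fused),
            "head_pose": self.pose_head(fused),
            "eye_blink": torch.sigmoid(self.blink_head(fused)),
        }

class RenderingToVideo(nn.Module):
    """Translates rendered 3D face frames plus an eye-blink attention map
    into photo-realistic output frames (image-to-image translation)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(),  # 3 RGB channels + 1 attention map
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, rendered_rgb, blink_attention):
        # rendered_rgb: (B, 3, H, W), blink_attention: (B, 1, H, W)
        return self.net(torch.cat([rendered_rgb, blink_attention], dim=1))

# Example usage with random tensors (2 clips, 100 audio frames each)
gen = FacialGANGenerator()
params = gen(torch.randn(2, 100, 128), torch.randn(2, 64))
renderer = RenderingToVideo()
frame = renderer(torch.randn(2, 3, 256, 256), torch.rand(2, 1, 256, 256))
```

In this sketch the generator predicts attribute trajectories from audio, a separate 3D renderer (not shown) would turn those parameters into face images, and the rendering-to-video network refines the rendered frames into photo-realistic output.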


Related research:

- Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion (07/20/2021)
- MusicFace: Music-driven Expressive Singing Face Synthesis (03/24/2023)
- That's What I Said: Fully-Controllable Talking Face Generation (04/06/2023)
- Mask-off: Synthesizing Face Images in the Presence of Head-mounted Displays (10/26/2016)
- Personalized Speech2Video with 3D Skeleton Regularization and Expressive Body Poses (07/17/2020)
- DFA-NeRF: Personalized Talking Head Generation via Disentangled Face Attributes Neural Rendering (01/03/2022)
- Live Speech Portraits: Real-Time Photorealistic Talking-Head Animation (09/22/2021)
