That's What I Said: Fully-Controllable Talking Face Generation

04/06/2023
by Youngjoon Jang et al.

The goal of this paper is to synthesise talking faces with controllable facial motions. To achieve this goal, we propose two key ideas. The first is to establish a canonical space where every face has the same motion patterns but different identities. The second is to navigate a multimodal motion space that represents only motion-related features while eliminating identity information. To disentangle identity and motion, we introduce an orthogonality constraint between the two latent spaces. As a result, our method can generate natural-looking talking faces with fully controllable facial attributes and accurate lip synchronisation. Extensive experiments demonstrate that our method achieves state-of-the-art results in terms of both visual quality and lip-sync score. To the best of our knowledge, we are the first to develop a talking face generation framework that can accurately manifest full target facial motions, including lip, head pose, and eye movements, in the generated video without any supervision beyond RGB video with audio.
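The abstract's orthogonality constraint between the identity and motion latent spaces can be illustrated with a minimal sketch. The function below is a hypothetical formulation (the paper's exact loss is not given here): it L2-normalises a batch of identity embeddings and a batch of motion embeddings, then penalises the squared pairwise cosine similarities between them, which is zero exactly when every identity vector is orthogonal to every motion vector.

```python
import numpy as np

def orthogonality_loss(id_feats, motion_feats):
    """Hypothetical disentanglement penalty between two latent spaces.

    id_feats:     (batch, dim) identity embeddings
    motion_feats: (batch, dim) motion embeddings
    Returns the squared Frobenius norm of the cross-space Gram matrix
    of the L2-normalised features, i.e. the sum of squared pairwise
    cosine similarities. Minimising it pushes motion features to carry
    no identity information.
    """
    id_n = id_feats / np.linalg.norm(id_feats, axis=1, keepdims=True)
    mo_n = motion_feats / np.linalg.norm(motion_feats, axis=1, keepdims=True)
    gram = id_n @ mo_n.T  # pairwise cosine similarities
    return float(np.sum(gram ** 2))

# Orthogonal directions incur zero loss; aligned directions are penalised.
identity = np.array([[1.0, 0.0]])
motion_ortho = np.array([[0.0, 1.0]])
motion_same = np.array([[1.0, 0.0]])
```

In a full training loop such a term would be added to the reconstruction and lip-sync losses, so the motion space the generator navigates stays free of identity cues.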


Related research

- FACIAL: Synthesizing Dynamic Talking Face with Implicit Attribute Learning (08/18/2021)
- Pose-Controllable Talking Face Generation by Implicitly Modularized Audio-Visual Representation (04/22/2021)
- Makeup like a superstar: Deep Localized Makeup Transfer Network (04/25/2016)
- Controllable Radiance Fields for Dynamic Face Synthesis (10/11/2022)
- LipSync3D: Data-Efficient Learning of Personalized 3D Talking Faces from Video using Pose and Lighting Normalization (06/08/2021)
- Audio-Visual Face Reenactment (10/06/2022)
- MetaHead: An Engine to Create Realistic Digital Head (04/03/2023)
