Visual Speech-Aware Perceptual 3D Facial Expression Reconstruction from Videos

by Panagiotis P. Filntisis, et al.

The recent state of the art in monocular 3D face reconstruction from image data has made impressive advancements thanks to the advent of deep learning. However, it has mostly focused on input from a single RGB image, overlooking two important factors: a) the vast majority of facial image data of interest nowadays originate not from single images but from videos, which contain rich dynamic information; b) these videos typically capture individuals in some form of verbal communication (public talks, teleconferences, audiovisual human-computer interactions, interviews, monologues/dialogues in movies, etc.). When existing 3D face reconstruction methods are applied to such videos, the artifacts in the reconstructed shape and motion of the mouth area are often severe, since they do not match the speech audio well. To overcome these limitations, we present the first method for visual speech-aware perceptual reconstruction of 3D mouth expressions. We do this by proposing a "lipread" loss, which guides the fitting process so that the perception elicited by the 3D reconstructed talking head resembles that of the original video footage. Interestingly, we demonstrate that the lipread loss is better suited to 3D reconstruction of mouth movements than traditional landmark losses, and even direct 3D supervision. Furthermore, the devised method does not rely on any text transcriptions or corresponding audio, rendering it ideal for training on unlabeled datasets. We verify the effectiveness of our method through exhaustive objective evaluations on three large-scale datasets, as well as subjective evaluation with two web-based user studies.
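To make the idea of a perceptual "lipread" loss concrete, here is a minimal sketch of the general pattern the abstract describes: extract features from the mouth region of both the rendered 3D head and the original video with a lip-reading feature extractor, and penalize the distance between them. This is an illustrative assumption, not the paper's implementation: the function names are hypothetical, and a toy fixed linear projection stands in for the pretrained lip-reading network that an actual system would use.

```python
import numpy as np

def lipread_features(mouth_frames, W):
    """Stand-in for a pretrained lip-reading network's feature extractor.

    mouth_frames: array of shape (T, H, W_px) -- a sequence of mouth crops.
    W: projection matrix of shape (H * W_px, D) -- a toy substitute for
       the learned weights of a real lipreader (hypothetical).
    """
    x = mouth_frames.reshape(mouth_frames.shape[0], -1)
    return np.tanh(x @ W)

def lipread_loss(rendered_mouths, original_mouths, W):
    """Perceptual loss: cosine distance between lip-reading features of the
    rendered talking head and the original footage, averaged over frames."""
    f_r = lipread_features(rendered_mouths, W)
    f_o = lipread_features(original_mouths, W)
    num = (f_r * f_o).sum(axis=1)
    den = np.linalg.norm(f_r, axis=1) * np.linalg.norm(f_o, axis=1) + 1e-8
    return float(np.mean(1.0 - num / den))
```

In a fitting pipeline, this scalar would be added to the optimization objective alongside (or, per the abstract's findings, in preference to) landmark terms, pulling the reconstructed mouth toward shapes that a lipreader "perceives" the same way as the real video.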



