Speech2Lip: High-fidelity Speech to Lip Generation by Learning from a Short Video

09/09/2023
by   Xiuzhe Wu, et al.

Synthesizing realistic talking-head videos from a given speech remains an open challenge. Previous works have suffered from inaccurate lip-shape generation and poor image quality. The key observation is that the input speech mainly drives motion and appearance in limited facial areas (e.g., the lip region); directly learning a mapping from speech to the entire head image is therefore prone to ambiguity, particularly when only a short video is available for training. We thus propose a decomposition-synthesis-composition framework named Speech to Lip (Speech2Lip) that disentangles speech-sensitive from speech-insensitive motion and appearance, enabling effective learning from limited training data and the generation of natural-looking videos. First, given a fixed head pose (i.e., the canonical space), we present a speech-driven implicit model for lip-image generation that concentrates on learning speech-sensitive motion and appearance. Next, to model the dominant speech-insensitive motion (i.e., head movement), we introduce a geometry-aware mutual explicit mapping (GAMEM) module that establishes geometric correspondences between different head poses. This allows us to paste lip images generated in the canonical space onto head images with arbitrary poses and thus synthesize talking videos with natural head movements. In addition, a Blend-Net and a contrastive sync loss are introduced to enhance overall synthesis quality. Quantitative and qualitative results on three benchmarks demonstrate that our model can be trained on a video of just a few minutes and achieves state-of-the-art performance in both visual quality and speech-visual synchronization. Code: https://github.com/CVMI-Lab/Speech2Lip.
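The decomposition-synthesis-composition idea above can be sketched in code. This is a minimal, illustrative mock-up, not the authors' implementation: every function name, tensor shape, and operation here is a hypothetical stand-in (e.g., an integer shift standing in for the GAMEM geometric mapping, and a masked alpha blend standing in for Blend-Net).

```python
# Hypothetical sketch of the Speech2Lip pipeline described in the abstract.
# All names and shapes are illustrative assumptions, not the paper's API.
import numpy as np

def generate_canonical_lip(speech_feat: np.ndarray) -> np.ndarray:
    """Stand-in for the speech-driven implicit model: maps a speech
    feature to a lip image rendered in the fixed canonical head pose."""
    h = w = 8  # toy resolution
    # deterministic toy "renderer": broadcast the pooled speech feature
    return np.tanh(speech_feat.mean() + np.zeros((h, w)))

def gamem_warp(lip_canonical: np.ndarray, pose: np.ndarray) -> np.ndarray:
    """Stand-in for the geometry-aware mutual explicit mapping (GAMEM):
    warps the canonical-space lip image to the target head pose.
    A simple integer shift plays the role of the geometric mapping."""
    dy, dx = int(pose[0]), int(pose[1])
    return np.roll(np.roll(lip_canonical, dy, axis=0), dx, axis=1)

def blend(head_img: np.ndarray, lip_posed: np.ndarray,
          mask: np.ndarray) -> np.ndarray:
    """Stand-in for Blend-Net: composites the posed lip region onto the
    full head image with a soft mask."""
    return mask * lip_posed + (1.0 - mask) * head_img

# Toy end-to-end pass over one frame
speech_feat = np.full(16, 0.5)                 # dummy speech feature
head_img = np.zeros((8, 8))                    # dummy head frame
mask = np.zeros((8, 8)); mask[4:, 2:6] = 1.0   # lip region of the mask
pose = np.array([1.0, 0.0])                    # small head motion

lip = generate_canonical_lip(speech_feat)      # speech-sensitive part
lip = gamem_warp(lip, pose)                    # speech-insensitive motion
frame = blend(head_img, lip, mask)             # final composited frame
print(frame.shape)  # (8, 8)
```

The point of the decomposition is visible even in this toy: only `generate_canonical_lip` depends on the speech input, while head motion is handled entirely by the explicit warp, so the speech-to-image mapping is learned over a much smaller, pose-free space.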


Related research

07/16/2020  Talking-head Generation with Rhythmic Head Motion
When people deliver a speech, they naturally move heads, and this rhythm...

08/09/2021  AnyoneNet: Synchronized Speech and Talking Head Generation for Arbitrary Person
Automatically generating videos in which synthesized speech is synchroni...

02/05/2020  Prediction of Head Motion from Speech Waveforms with a Canonical-Correlation-Constrained Autoencoder
This study investigates the direct use of speech waveforms to predict he...

10/26/2022  Naturalistic Head Motion Generation from Speech
Synthesizing natural head motion to accompany speech for an embodied con...

07/19/2023  Implicit Identity Representation Conditioned Memory Compensation Network for Talking Head Video Generation
Talking head video generation aims to animate a human face in a still im...

08/01/2023  Context-Aware Talking-Head Video Editing
Talking-head video editing aims to efficiently insert, delete, and subst...

09/22/2021  Live Speech Portraits: Real-Time Photorealistic Talking-Head Animation
To the best of our knowledge, we first present a live system that genera...
