Talking Head Generation with Probabilistic Audio-to-Visual Diffusion Priors

12/07/2022
by Zhentao Yu, et al.

In this paper, we introduce a simple and novel framework for one-shot audio-driven talking head generation. Unlike prior works that require additional driving sources for controlled synthesis in a deterministic manner, we instead probabilistically sample all the holistic, lip-irrelevant facial motions (e.g., pose, expression, blink, and gaze) to semantically match the input audio while still maintaining both photo-realistic audio-lip synchronization and overall naturalness. This is achieved by our newly proposed audio-to-visual diffusion prior, trained on top of the mapping between audio and disentangled non-lip facial representations. Thanks to the probabilistic nature of the diffusion prior, a key advantage of our framework is that it can synthesize diverse facial motion sequences from the same audio clip, which is highly desirable for many real-world applications. Through comprehensive evaluations on public benchmarks, we conclude that (1) our diffusion prior significantly outperforms an auto-regressive prior on almost all the metrics of interest; and (2) our overall system is competitive with prior works in terms of audio-lip synchronization, while effectively sampling rich, natural-looking lip-irrelevant facial motions that remain semantically harmonized with the audio input.
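As a rough illustration of how such an audio-conditioned diffusion prior could be sampled at inference time, the sketch below implements standard DDPM ancestral sampling over a sequence of non-lip motion coefficients. It is a minimal sketch only: the denoiser interface and all names (sample_motion, audio_feat, n_coeffs, the linear noise schedule) are hypothetical placeholders, not the authors' actual architecture or API.

    import torch

    # Standard DDPM noise schedule (hypothetical hyperparameters, not the paper's).
    T = 1000
    betas = torch.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    @torch.no_grad()
    def sample_motion(denoiser, audio_feat, n_frames, n_coeffs=64):
        """Draw one sequence of lip-irrelevant motion coefficients
        (pose/expression/blink/gaze) conditioned on audio features.
        `denoiser` is any network predicting the added noise given the
        noisy sequence, the timestep, and the audio conditioning."""
        x = torch.randn(n_frames, n_coeffs)  # start from pure Gaussian noise
        for t in reversed(range(T)):
            eps = denoiser(x, torch.tensor([t]), audio_feat)  # predicted noise
            coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
            mean = (x - coef * eps) / torch.sqrt(alphas[t])   # posterior mean
            noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
            x = mean + torch.sqrt(betas[t]) * noise           # ancestral step
        return x  # motion coefficients, decoded downstream into video frames

Because the reverse chain starts from fresh Gaussian noise on every call, repeated sampling with the same audio clip yields distinct yet plausible motion sequences, which is the diversity property the abstract emphasizes.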

