Facial Keypoint Sequence Generation from Audio

11/02/2020
by Prateek Manocha, et al.

Whenever we speak, our voice is accompanied by facial movements and expressions. Several recent works have demonstrated the synthesis of highly photo-realistic videos of talking faces, but they either require a source video to drive the target face or only generate videos with a fixed head pose. This lack of facial movement arises because most of these works focus on lip movement in sync with the audio while assuming the remaining facial keypoints stay fixed. To address this, a unique audio-keypoint dataset of over 150,000 videos at 224p and 25 fps is introduced that relates facial keypoint movement to the given audio. This dataset is then used to train Audio2Keypoint, a novel approach for synthesizing facial keypoint movement to accompany the audio. Given a single image of the target person and an audio sequence (in any language), Audio2Keypoint generates a plausible keypoint movement sequence in sync with the input audio, conditioned on the input image to preserve the target person's facial characteristics. To the best of our knowledge, this is the first work that proposes an audio-keypoint dataset and learns a model to output a plausible keypoint sequence for audio of arbitrary length. Audio2Keypoint generalizes across unseen people with different facial structures, allowing the sequence to be generated with the voice from any source, or even with synthetic voices. Instead of learning a direct mapping from the audio to the video domain, this work learns an audio-keypoint mapping that allows for in-plane and out-of-plane head rotations while preserving the person's identity using a Pose Invariant (PIV) Encoder.
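The abstract describes mapping an audio sequence, plus a single identity image, to a per-frame facial keypoint sequence at the dataset's 25 fps. The sketch below illustrates only the data plumbing of such a pipeline with NumPy: aligning audio windows to video frames and producing one keypoint set per frame. The sample rate (16 kHz), the 68-point landmark convention, and the `toy_audio2keypoint` stand-in are assumptions for illustration; the actual Audio2Keypoint model is a learned network conditioned on a PIV identity embedding, not a random linear map.

```python
import numpy as np

SAMPLE_RATE = 16_000   # assumed audio sample rate (not stated in the abstract)
FPS = 25               # video frame rate from the dataset description
SAMPLES_PER_FRAME = SAMPLE_RATE // FPS  # 640 audio samples per keypoint frame
NUM_KEYPOINTS = 68     # assumed 68-point facial landmark convention


def frame_audio(audio: np.ndarray) -> np.ndarray:
    """Split a 1-D audio signal into 25 fps windows aligned with video frames."""
    n_frames = len(audio) // SAMPLES_PER_FRAME
    return audio[: n_frames * SAMPLES_PER_FRAME].reshape(n_frames, SAMPLES_PER_FRAME)


def toy_audio2keypoint(audio_frames: np.ndarray, identity_kp: np.ndarray,
                       rng: np.random.Generator) -> np.ndarray:
    """Placeholder generator: identity keypoints plus an audio-driven offset.

    Stands in for the learned model purely to show the tensor shapes involved;
    the real model conditions on a pose-invariant identity embedding instead of
    adding offsets to raw keypoints.
    """
    W = rng.normal(scale=1e-3, size=(audio_frames.shape[1], NUM_KEYPOINTS * 2))
    offsets = audio_frames @ W                   # (T, 136) per-frame 2-D offsets
    return identity_kp.reshape(1, -1) + offsets  # broadcast identity over time


rng = np.random.default_rng(0)
audio = rng.normal(size=SAMPLE_RATE * 2)         # 2 s of synthetic audio
frames = frame_audio(audio)                      # shape (50, 640)
identity = rng.uniform(size=(NUM_KEYPOINTS, 2))  # keypoints from a single image
seq = toy_audio2keypoint(frames, identity, rng)  # shape (50, 136)
```

Because the framing step accepts audio of any length, the output sequence length simply scales with the input duration, matching the paper's claim of handling arbitrary-length audio.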

Related research

- Neural Voice Puppetry: Audio-driven Facial Reenactment (12/11/2019)
- Audio-Driven Talking Face Generation with Diverse yet Realistic Facial Animations (04/18/2023)
- A Keypoint Based Enhancement Method for Audio Driven Free View Talking Head Synthesis (10/07/2022)
- Everybody's Talkin': Let Me Talk as You Want (01/15/2020)
- Multi Modal Adaptive Normalization for Audio to Video Generation (12/14/2020)
- One Shot Audio to Animated Video Generation (02/19/2021)
- Robust One Shot Audio to Video Generation (12/14/2020)
