Audio-Visual Face Reenactment

10/06/2022
by   Madhav Agarwal, et al.

This work proposes a novel method to generate realistic talking head videos using audio and visual streams. We animate a source image by transferring head motion from a driving video through a dense motion field generated from learnable keypoints. We improve the quality of lip sync by using audio as an additional input, helping the network attend to the mouth region. We incorporate additional priors from face segmentation and face meshes to improve the structure of the reconstructed faces. Finally, we improve the visual quality of the generations with a carefully designed identity-aware generator module, which takes the source image and the warped motion features as input to produce a high-quality output with fine-grained details. Our method produces state-of-the-art results and generalizes well to unseen faces, languages, and voices. We comprehensively evaluate our approach using multiple metrics and outperform current techniques both qualitatively and quantitatively. Our work opens up several applications, including enabling low-bandwidth video calls. We release a demo video and additional information at http://cvit.iiit.ac.in/research/projects/cvit-projects/avfr.
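The keypoint-driven warping described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a first-order-motion-style formulation in which the per-keypoint displacement between source and driving keypoints is blended into a dense motion field via Gaussian heatmaps, and the source image is then warped along that field. Function names, the `sigma` parameter, and the nearest-neighbor sampling are illustrative choices.

```python
import numpy as np

def dense_motion_field(src_kp, drv_kp, h, w, sigma=0.1):
    """Blend per-keypoint translations into a dense motion field.

    src_kp, drv_kp: (K, 2) keypoint (x, y) coordinates in [0, 1].
    Returns an (h, w, 2) grid of sampling positions into the source image.
    """
    ys, xs = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w), indexing="ij")
    grid = np.stack([xs, ys], axis=-1)                       # (h, w, 2)
    diff = grid[None] - drv_kp[:, None, None, :]             # (K, h, w, 2)
    heat = np.exp(-(diff ** 2).sum(-1) / (2 * sigma ** 2))   # (K, h, w) heatmaps
    heat = heat / (heat.sum(0, keepdims=True) + 1e-8)        # soft assignment
    shift = (src_kp - drv_kp)[:, None, None, :]              # per-keypoint motion
    flow = (heat[..., None] * shift).sum(0)                  # (h, w, 2) dense flow
    return grid + flow

def warp(img, sample_grid):
    """Warp img (h, w[, c]) by nearest-neighbor sampling at sample_grid."""
    h, w = img.shape[:2]
    xs = np.clip(np.round(sample_grid[..., 0] * (w - 1)).astype(int), 0, w - 1)
    ys = np.clip(np.round(sample_grid[..., 1] * (h - 1)).astype(int), 0, h - 1)
    return img[ys, xs]
```

In the full pipeline, the warped motion features (rather than raw pixels) would be passed, together with the source image, to the identity-aware generator; the audio stream and face segmentation/mesh priors condition that generator rather than the motion field itself.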


