CPNet: Exploiting CLIP-based Attention Condenser and Probability Map Guidance for High-fidelity Talking Face Generation

05/23/2023
by   Jingning Xu, et al.

Recently, talking face generation has drawn ever-increasing attention from the computer vision research community due to its challenging nature and widespread application scenarios, e.g., movie animation and virtual anchors. Although persistent efforts have been made to enhance the fidelity and lip-sync quality of generated talking face videos, there is still large room for improvement in both synthesis quality and efficiency. In particular, prior attempts largely overlook fine-grained feature extraction and integration, as well as the consistency between probability distributions of landmarks, thereby incurring blurred local details and degraded fidelity. To mitigate these dilemmas, in this paper, a novel CLIP-based Attention and Probability Map Guided Network (CPNet) is designed for inferring high-fidelity talking face videos. Specifically, to meet the demands of fine-grained feature recalibration, a CLIP-based attention condenser is exploited to transfer knowledge with rich semantic priors from the prevailing CLIP model. Moreover, to guarantee consistency in probability space and suppress landmark ambiguity, we propose the density map of facial landmarks as an auxiliary supervisory signal to guide the landmark distribution learning of the generated frames. Extensive experiments on widely-used benchmark datasets demonstrate the superiority of our CPNet over state-of-the-art methods in terms of image and lip-sync quality. In addition, a series of ablation studies is conducted to assess the impact of each pivotal component.
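The density-map supervision described above can be illustrated with a minimal sketch: each facial landmark is rendered as an isotropic Gaussian, and the per-point responses are summed and normalized into a probability map over the image plane. The function name, grid size, and bandwidth `sigma` below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def landmark_density_map(landmarks, size=(96, 96), sigma=2.0):
    """Render 2D facial landmarks as a normalized Gaussian density map.

    landmarks: (N, 2) array of (x, y) pixel coordinates.
    size: (H, W) of the output map; sigma: Gaussian bandwidth in pixels.
    Returns an (H, W) map whose entries sum to 1 (a probability map).
    """
    h, w = size
    ys, xs = np.mgrid[0:h, 0:w]          # pixel coordinate grids
    density = np.zeros((h, w), dtype=np.float64)
    for x, y in landmarks:
        # accumulate an isotropic Gaussian centered on each landmark
        density += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
    total = density.sum()
    return density / total if total > 0 else density

# Example: three hypothetical landmarks on a 96x96 face crop
pts = np.array([[20.0, 30.0], [48.0, 48.0], [70.0, 60.0]])
dmap = landmark_density_map(pts)
```

A map like this, computed from ground-truth landmarks, could then serve as the auxiliary target against which the landmark distribution of a generated frame is matched (e.g., with an L2 or KL-style loss on the two maps).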


