MusicFace: Music-driven Expressive Singing Face Synthesis

03/24/2023
by Pengfei Liu, et al.

Synthesizing a vivid, realistic singing face driven by a music signal remains an interesting and challenging problem. In this paper, we present a method for this task that produces natural motions of the lips, facial expression, head pose, and eye states. Because a typical music audio signal couples the human voice with background music, we design a decouple-and-fuse strategy to tackle this challenge. We first decompose the input music audio into a human-voice stream and a background-music stream. Since the correlation between these two streams and the dynamics of facial expressions, head motions, and eye states is implicit and complicated, we model their relationship with an attention scheme in which the effects of the two streams are fused seamlessly. Furthermore, to improve the expressiveness of the generated results, we decompose head-movement generation into speed generation and direction generation, and decompose eye-state generation into short-time eye-blinking generation and long-time eye-closing generation, modeling each separately. We also build a novel SingingFace Dataset to support training and evaluation for this task and to facilitate future work on this topic. Extensive experiments and a user study show that our method synthesizes vivid singing faces and outperforms state-of-the-art methods both qualitatively and quantitatively.
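To make the decouple-and-fuse idea concrete, below is a minimal sketch of how the two decomposed audio streams could be fused with attention to drive per-frame facial-parameter heads. It is an illustrative assumption, not the authors' implementation: the module names, feature dimensions, recurrent encoders, and the specific output heads (expression coefficients, head speed/direction, blink and long eye-closing states) are all hypothetical.

```python
# Hypothetical sketch of a two-stream, attention-fused animation predictor.
# Module names, dimensions, and heads are assumptions, not the paper's code.
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    """Fuse a vocal stream and a background-music stream with cross-attention,
    then predict per-frame facial animation parameters."""
    def __init__(self, feat_dim=128, n_heads=4,
                 expr_dim=64, pose_dim=4, eye_dim=2):
        super().__init__()
        self.vocal_enc = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.music_enc = nn.GRU(feat_dim, feat_dim, batch_first=True)
        # Vocal features attend to background-music features (cross-attention).
        self.fuse = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)
        self.expr_head = nn.Linear(feat_dim, expr_dim)  # expression coefficients
        self.pose_head = nn.Linear(feat_dim, pose_dim)  # head speed (1) + direction (3)
        self.eye_head = nn.Linear(feat_dim, eye_dim)    # blink + long eye-closing states

    def forward(self, vocal_feats, music_feats):
        v, _ = self.vocal_enc(vocal_feats)          # (B, T, D)
        m, _ = self.music_enc(music_feats)          # (B, T, D)
        fused, _ = self.fuse(query=v, key=m, value=m)
        fused = fused + v                           # residual on the vocal stream
        return self.expr_head(fused), self.pose_head(fused), self.eye_head(fused)

if __name__ == "__main__":
    B, T, D = 2, 100, 128
    model = TwoStreamFusion()
    expr, pose, eyes = model(torch.randn(B, T, D), torch.randn(B, T, D))
    print(expr.shape, pose.shape, eyes.shape)  # (2,100,64) (2,100,4) (2,100,2)
```

In this sketch the vocal stream serves as the attention query, on the assumption that lip and expression dynamics follow the singing voice most closely, while the background-music stream supplies rhythmic context through the key and value projections; the paper's actual fusion scheme may differ.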


research · 08/18/2021
FACIAL: Synthesizing Dynamic Talking Face with Implicit Attribute Learning
In this paper, we propose a talking face generation method that takes an...

research · 04/16/2021
Write-a-speaker: Text-based Emotional and Rhythmic Talking-head Generation
In this paper, we propose a novel text-based talking-head video generati...

research · 02/02/2020
Music2Dance: DanceNet for Music-driven Dance Generation
Synthesizing human motions from music, i.e., music to dance, is appealing ...

research · 04/18/2023
Audio-Driven Talking Face Generation with Diverse yet Realistic Facial Animations
Audio-driven talking face generation, which aims to synthesize talking f...

research · 02/24/2023
Pose-Controllable 3D Facial Animation Synthesis using Hierarchical Audio-Vertex Attention
Most of the existing audio-driven 3D facial animation methods suffered f...

research · 09/16/2020
ChoreoNet: Towards Music to Dance Synthesis with Choreographic Action Unit
Dance and music are two highly correlated artistic forms. Synthesizing d...

research · 10/26/2016
Mask-off: Synthesizing Face Images in the Presence of Head-mounted Displays
A head-mounted display (HMD) could be an important component of augmente...
