Video synthesis of human upper body with realistic face

08/19/2019
by Zhaoxiang Liu et al.

This paper presents a generative adversarial learning-based approach to human upper-body video synthesis: given a source video, it generates an upper-body video of a target person whose body motion, facial expression, and pose are consistent with those of the person in the source video. We use upper-body keypoints, facial action units, and poses as intermediate representations between the source video and the target video. Instead of transferring the source video to the target video directly, we first map the source person's facial action units and poses to the target person's facial landmarks, then combine the normalized upper-body keypoints with the generated facial landmarks, applying spatio-temporal smoothing, to generate the corresponding frames of the target video. Experimental results demonstrate the effectiveness of our method.
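The per-frame transfer pipeline described in the abstract can be pictured roughly as in the sketch below. This is a minimal illustration only, assuming hypothetical stand-ins for the learned components (keypoint detector, action-unit/pose estimator, landmark mapping network, adversarially trained generator); none of the function names, shapes, or parameters below come from the paper.

    # Hedged sketch of the per-frame transfer pipeline. All names
    # (extract_keypoints, au_pose_to_landmarks, generator, alpha, ...)
    # are hypothetical stand-ins, not the authors' code.
    import numpy as np

    def extract_keypoints(frame):
        """Stand-in for an upper-body keypoint detector."""
        return np.zeros((14, 2))          # 14 joints, (x, y)

    def extract_au_and_pose(frame):
        """Stand-in for a facial action-unit and pose estimator."""
        return np.zeros(17), np.zeros(3)  # AU intensities, pose angles

    def normalize_keypoints(src_kps, src_stats, tgt_stats):
        """Rescale and shift source keypoints to the target person's proportions."""
        scale = tgt_stats["height"] / src_stats["height"]
        return (src_kps - src_stats["root"]) * scale + tgt_stats["root"]

    def au_pose_to_landmarks(aus, pose):
        """Stand-in for the learned mapping from source AUs/poses to target facial landmarks."""
        return np.zeros((68, 2))          # 68 facial landmarks

    def smooth(prev, cur, alpha=0.8):
        """Simple exponential smoothing of the intermediate representation over time."""
        return cur if prev is None else alpha * prev + (1 - alpha) * cur

    def generator(kps, landmarks):
        """Stand-in for the adversarially trained image generator (intermediate map -> frame)."""
        return np.zeros((256, 256, 3), dtype=np.uint8)

    def transfer(source_frames, src_stats, tgt_stats):
        prev_kps, prev_lms, outputs = None, None, []
        for frame in source_frames:
            kps = normalize_keypoints(extract_keypoints(frame), src_stats, tgt_stats)
            aus, pose = extract_au_and_pose(frame)
            lms = au_pose_to_landmarks(aus, pose)
            # Spatio-temporal smoothing of both intermediate representations
            prev_kps = smooth(prev_kps, kps)
            prev_lms = smooth(prev_lms, lms)
            outputs.append(generator(prev_kps, prev_lms))
        return outputs

The point of the sketch is the ordering the abstract describes: the source video is never mapped to the target directly; only the normalized keypoints and the generated facial landmarks, after smoothing, are fed to the image generator.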

