Delving into the Frequency: Temporally Consistent Human Motion Transfer in the Fourier Space

09/01/2022, by Guang Yang, et al.

Human motion transfer refers to synthesizing photo-realistic and temporally coherent videos that enable one person to imitate the motion of another. However, current synthetic videos suffer from temporal inconsistency across sequential frames, which significantly degrades video quality and remains far from solved by existing pixel-domain methods. Recently, some works on DeepFake detection have tried to distinguish natural and synthetic images in the frequency domain, exploiting the frequency insufficiency of image synthesis methods. Nonetheless, no work has studied the temporal inconsistency of synthetic videos from the perspective of the frequency-domain gap between natural and synthetic videos. In this paper, we propose to delve into the frequency space for temporally consistent human motion transfer. First, we conduct the first comprehensive analysis of natural and synthetic videos in the frequency domain, revealing a frequency gap in both the spatial dimension of individual frames and the temporal dimension of the video. To close this gap, we propose a novel Frequency-based human MOtion TRansfer framework, named FreMOTR, which effectively mitigates the spatial artifacts and temporal inconsistency of synthesized videos. FreMOTR introduces two novel frequency-based regularization modules: 1) Frequency-domain Appearance Regularization (FAR), which improves the appearance of the person in individual frames, and 2) Temporal Frequency Regularization (TFR), which enforces temporal consistency between adjacent frames. Finally, comprehensive experiments demonstrate that FreMOTR not only yields superior performance on temporal consistency metrics but also improves the frame-level visual quality of synthetic videos. In particular, the temporal consistency metrics are improved by nearly 30% over the state-of-the-art model.
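To make the idea of frequency-domain temporal regularization concrete, below is a minimal sketch, assuming PyTorch and its torch.fft module, of how a frequency-space consistency penalty between adjacent frames could be computed. The loss form, the use of amplitude spectra, and the frame pairing are illustrative assumptions on my part, not the actual FreMOTR/TFR formulation described in the paper.

# Minimal sketch of a frequency-domain temporal-consistency penalty,
# loosely inspired by the FAR/TFR idea above. The exact loss form and
# normalization here are assumptions, not the paper's implementation.
import torch


def amplitude_spectrum(frames: torch.Tensor) -> torch.Tensor:
    """Amplitude spectrum of a batch of frames with shape (B, C, H, W)."""
    freq = torch.fft.fft2(frames, norm="ortho")  # 2D FFT over H and W
    return torch.abs(freq)


def temporal_frequency_loss(fake_t: torch.Tensor,
                            fake_t1: torch.Tensor,
                            real_t: torch.Tensor,
                            real_t1: torch.Tensor) -> torch.Tensor:
    """Penalize mismatch between the frequency-domain change of adjacent
    synthetic frames and that of the corresponding natural frames."""
    delta_fake = amplitude_spectrum(fake_t1) - amplitude_spectrum(fake_t)
    delta_real = amplitude_spectrum(real_t1) - amplitude_spectrum(real_t)
    return torch.mean(torch.abs(delta_fake - delta_real))


if __name__ == "__main__":
    # Toy example with random frames standing in for video data.
    b, c, h, w = 2, 3, 64, 64
    fake_t, fake_t1 = torch.rand(b, c, h, w), torch.rand(b, c, h, w)
    real_t, real_t1 = torch.rand(b, c, h, w), torch.rand(b, c, h, w)
    print(temporal_frequency_loss(fake_t, fake_t1, real_t, real_t1).item())

In such a setup, the penalty would be added to the usual pixel-domain and adversarial objectives, so that the generator is discouraged from producing frame-to-frame spectral changes that differ from those observed in natural video.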


Related research

08/12/2019
Multi-Frame Content Integration with a Spatio-Temporal Attention Mechanism for Person Video Motion Transfer
Existing person video generation methods either lack the flexibility in ...

08/10/2022
A Detection Method of Temporally Operated Videos Using Robust Hashing
SNS providers are known to carry out the recompression and resizing of u...

10/09/2021
Temporally Consistent Video Colorization with Deep Feature Propagation and Self-regularization Learning
Video colorization is a challenging and highly ill-posed problem. Althou...

12/11/2020
Intrinsic Temporal Regularization for High-resolution Human Video Synthesis
Temporal consistency is crucial for extending image processing pipelines...

07/02/2023
Bidirectional Temporal Diffusion Model for Temporally Consistent Human Animation
We introduce a method to generate temporally coherent human animation fr...

12/20/2020
High-Fidelity Neural Human Motion Transfer from Monocular Video
Video-based human motion transfer creates video animations of humans fol...

01/15/2022
Learning Temporally and Semantically Consistent Unpaired Video-to-video Translation Through Pseudo-Supervision From Synthetic Optical Flow
Unpaired video-to-video translation aims to translate videos between a s...
