Dancing Avatar: Pose and Text-Guided Human Motion Videos Synthesis with Image Diffusion Model

08/15/2023
by Bosheng Qin, et al.

The rising demand for lifelike avatars in the digital realm has increased the need for high-quality human videos generated from textual descriptions and poses. We propose Dancing Avatar, a method that synthesizes human motion videos driven by pose sequences and textual cues. Our approach employs a pretrained text-to-image (T2I) diffusion model to generate each video frame in an autoregressive fashion; the core challenge is producing frames successively while preserving contextual relevance. We address the difficulties of keeping the human character and clothing consistent across varying poses and of maintaining a continuous background throughout diverse human movements. To ensure consistent human appearance across the entire video, we devise an intra-frame alignment module that injects text-guided knowledge of the synthesized human character into the pretrained T2I diffusion model, drawing on insights from ChatGPT. To preserve background continuity, we propose a background alignment pipeline that combines Segment Anything with image inpainting. Furthermore, we introduce an inter-frame alignment module, inspired by an autoregressive pipeline, that strengthens temporal consistency between adjacent frames by letting the preceding frame guide the synthesis of the current one. Comparisons with state-of-the-art methods show that Dancing Avatar generates human videos of markedly higher quality in terms of human fidelity, background fidelity, and temporal coherence.
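To make the pipeline described above concrete, the following is a minimal, hypothetical sketch of the autoregressive frame loop. It assumes ControlNet-style pose conditioning on Stable Diffusion, Segment Anything for the human mask, and a Stable Diffusion inpainting model for the background; these are stand-ins for the paper's intra-frame alignment, background alignment, and inter-frame alignment modules, whose exact implementations are not described in the abstract. The checkpoint names, the center-point SAM prompt, and the person_mask/synthesize_video helpers are illustrative assumptions, not the authors' code. Reusing the previous frame as the img2img starting point is a crude proxy for the inter-frame alignment module; inpainting the person out of the first frame once and reusing that static background is a crude proxy for the background alignment pipeline.

```python
# Hypothetical sketch of the autoregressive, pose-guided frame synthesis loop
# described in the abstract. Model checkpoints and the ControlNet/img2img/
# SAM/inpainting combination are assumptions, not the authors' implementation.
import numpy as np
import torch
from PIL import Image
from diffusers import (ControlNetModel,
                       StableDiffusionControlNetImg2ImgPipeline,
                       StableDiffusionInpaintPipeline)
from segment_anything import SamPredictor, sam_model_registry

device = "cuda"

# Pose-conditioned T2I model (ControlNet-style pose guidance is an assumption).
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16)
frame_pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to(device)

# Inpainting model used to remove the person and obtain a reusable background.
inpaint_pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16).to(device)

# Segment Anything predictor for extracting the human foreground mask.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam.to(device))


def person_mask(frame: Image.Image) -> Image.Image:
    """Rough foreground mask from SAM, prompted with the image center (assumption)."""
    predictor.set_image(np.array(frame))
    masks, _, _ = predictor.predict(
        point_coords=np.array([[frame.width // 2, frame.height // 2]]),
        point_labels=np.array([1]))
    return Image.fromarray((masks[0] * 255).astype(np.uint8))


def synthesize_video(prompt: str, pose_maps: list,
                     bg_prompt: str = "empty scene, same background"):
    """Generate frames autoregressively from OpenPose-style pose renderings."""
    frames, prev, background = [], None, None
    for pose in pose_maps:
        # Inter-frame alignment stand-in: start denoising from the previous
        # frame so adjacent frames stay coherent; the first frame starts fresh.
        init = prev if prev is not None else Image.new("RGB", pose.size, "gray")
        strength = 0.6 if prev is not None else 1.0
        frame = frame_pipe(prompt, image=init, control_image=pose,
                           strength=strength, num_inference_steps=30).images[0]

        if background is None:
            # Background alignment stand-in: inpaint the person out of the
            # first frame once, then reuse that static background throughout.
            background = inpaint_pipe(prompt=bg_prompt, image=frame,
                                      mask_image=person_mask(frame)).images[0]

        # Composite the newly synthesized person onto the fixed background.
        frame = Image.composite(frame, background.resize(frame.size),
                                person_mask(frame))
        frames.append(frame)
        prev = frame
    return frames
```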

Related research

04/12/2023 · DreamPose: Fashion Image-to-Video Synthesis via Stable Diffusion
We present DreamPose, a diffusion-based method for generating animated f...

11/01/2021 · Render In-between: Motion Guided Video Synthesis for Action Interpolation
Upsampling videos of human activity is an interesting yet challenging ta...

06/13/2023 · Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation
Large text-to-image diffusion models have exhibited impressive proficien...

05/29/2022 · Feature-Aligned Video Raindrop Removal with Temporal Constraints
Existing adherent raindrop removal methods focus on the detection of the...

08/25/2023 · Direction-aware Video Demoireing with Temporal-guided Bilateral Learning
Moire patterns occur when capturing images or videos on screens, severel...

08/07/2023 · DiffSynth: Latent In-Iteration Deflickering for Realistic Video Synthesis
In recent years, diffusion models have emerged as the most powerful appr...

11/11/2019 · Similarity-DT: Kernel Similarity Embedding for Dynamic Texture Synthesis
Dynamic texture (DT) exhibits statistical stationarity in the spatial do...
