Denoising Diffusion Probabilistic Models for Styled Walking Synthesis

09/29/2022
by Edmund J. C. Findlay, et al.

Generating realistic motions for digital humans is time-consuming in many graphics applications. Data-driven motion synthesis approaches built on deep generative models have made solid progress in recent years; they produce high-quality motions but typically offer limited motion style diversity. We propose, for the first time, a framework that uses the denoising diffusion probabilistic model (DDPM) to synthesize styled human motions, integrating the two tasks of motion synthesis and style transfer into one pipeline and achieving greater style diversity than traditional motion synthesis methods. Experimental results show that our system generates high-quality, diverse walking motions.
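
For intuition, the sketch below shows the standard DDPM training objective and ancestral sampler (Ho et al., 2020) that such a framework builds on, with a discrete style label as an extra conditioning signal. It is a minimal, hypothetical illustration: the `EpsNet` noise predictor, the pose-tensor shapes, and the linear noise schedule are placeholder assumptions, not the architecture or hyperparameters from the paper.

```python
# Minimal style-conditioned DDPM sketch for motion sequences.
# `EpsNet`, pose_dim, and the schedule are illustrative assumptions,
# not the model described in the paper.
import torch
import torch.nn as nn

T = 1000                                   # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)      # linear noise schedule
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)   # cumulative product, \bar{alpha}_t

class EpsNet(nn.Module):
    """Hypothetical noise predictor: pose sequence + timestep + style id."""
    def __init__(self, pose_dim=63, n_styles=8, hidden=256):
        super().__init__()
        self.style_emb = nn.Embedding(n_styles, hidden)
        self.time_emb = nn.Embedding(T, hidden)
        self.net = nn.Sequential(
            nn.Linear(pose_dim + 2 * hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, x_t, t, style):
        # x_t: (batch, frames, pose_dim); broadcast conditioning to every frame.
        cond = torch.cat([self.time_emb(t), self.style_emb(style)], dim=-1)
        cond = cond[:, None, :].expand(-1, x_t.shape[1], -1)
        return self.net(torch.cat([x_t, cond], dim=-1))

def training_loss(model, x0, style):
    """Sample t, noise x0 to x_t via q(x_t | x_0), regress the added noise."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    eps = torch.randn_like(x0)
    ab = alpha_bar[t].view(b, 1, 1)
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * eps
    return nn.functional.mse_loss(model(x_t, t, style), eps)

@torch.no_grad()
def sample(model, style, frames=60, pose_dim=63):
    """Ancestral sampling: start from Gaussian noise, denoise step by step."""
    x = torch.randn(style.shape[0], frames, pose_dim)
    for t in reversed(range(T)):
        tt = torch.full((x.shape[0],), t, dtype=torch.long)
        eps = model(x, tt, style)
        coef = betas[t] / (1 - alpha_bar[t]).sqrt()
        x = (x - coef * eps) / alphas[t].sqrt()   # posterior mean
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)
    return x
```

Under these assumptions, a call like `sample(model, torch.tensor([style_id]))` would produce one walking clip steered toward the requested style; conditioning on a learned style embedding is only one simple choice, and richer conditioning or guidance schemes are common in practice.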

