Controllable Motion Diffusion Model

06/01/2023
by Yi Shi, et al.

Generating realistic and controllable motions for virtual characters is a challenging task in computer animation, with implications for games, simulations, and virtual reality. Recent studies have drawn inspiration from the success of diffusion models in image generation, demonstrating their potential for this task. However, most of these studies have been limited to offline applications that generate an entire sequence at once. To enable real-time motion synthesis with diffusion models in response to time-varying control signals, we propose the Controllable Motion Diffusion Model (COMODO) framework. Our framework begins with an auto-regressive motion diffusion model (A-MDM), which generates motion sequences frame by frame. In this way, using the standard DDPM algorithm without any additional complexity, our framework can generate high-fidelity motion sequences over extended periods under different types of control signals. We then propose a reinforcement learning-based controller and controlling strategies on top of the A-MDM model, so that our framework can steer the motion synthesis process across multiple tasks, including target reaching, joystick-based control, goal-oriented control, and trajectory following. The proposed framework enables the real-time generation of diverse motions that react adaptively to user commands on the fly, thereby enhancing the overall user experience. Moreover, it is compatible with inpainting-based editing methods and can predict far more diverse motions without additional fine-tuning of the underlying motion generation models. We conduct comprehensive experiments to evaluate the effectiveness of our framework across various tasks and compare its performance against state-of-the-art methods.
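The core idea of the A-MDM stage described above can be sketched in a few lines: each motion frame is produced by running a full DDPM reverse chain conditioned on the previous frame, and the frames are chained auto-regressively. The sketch below is a toy illustration, not the paper's implementation: `toy_denoiser`, the 24-dimensional pose, and the 10-step noise schedule are all hypothetical stand-ins for the learned network and the real configuration.

```python
import numpy as np

def toy_denoiser(x_t, t, prev_pose):
    # Hypothetical stand-in for the learned noise predictor; the real
    # A-MDM conditions a neural network on the previous pose (and, in
    # COMODO, on control signals from the RL controller).
    return 0.1 * x_t + 0.05 * prev_pose

def ddpm_step(x_t, t, prev_pose, betas):
    """One standard DDPM reverse step, conditioned on the previous frame."""
    beta_t = betas[t]
    alpha_t = 1.0 - beta_t
    alpha_bar_t = np.prod(1.0 - betas[: t + 1])
    eps_hat = toy_denoiser(x_t, t, prev_pose)
    mean = (x_t - beta_t / np.sqrt(1.0 - alpha_bar_t) * eps_hat) / np.sqrt(alpha_t)
    if t > 0:  # no noise is added at the final step
        mean = mean + np.sqrt(beta_t) * np.random.randn(*x_t.shape)
    return mean

def sample_next_pose(prev_pose, betas):
    """Generate one motion frame by running the full reverse diffusion chain."""
    x = np.random.randn(*prev_pose.shape)   # start from Gaussian noise
    for t in reversed(range(len(betas))):   # t = T-1, ..., 0
        x = ddpm_step(x, t, prev_pose, betas)
    return x

def rollout(init_pose, num_frames, betas):
    """Auto-regressive rollout: each generated frame conditions the next."""
    poses = [init_pose]
    for _ in range(num_frames):
        poses.append(sample_next_pose(poses[-1], betas))
    return np.stack(poses)

betas = np.linspace(1e-4, 0.02, 10)  # short toy noise schedule
motion = rollout(np.zeros(24), num_frames=5, betas=betas)  # 24-D toy pose
```

Because each frame is sampled independently given its predecessor, a controller can inject or modify conditioning signals between frames, which is what makes the step-by-step formulation suitable for real-time, time-varying control.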


Related research

- 09/01/2022, FLAME: Free-form Language-based Motion Synthesis & Editing: Text-based motion generation models are drawing a surge of interest for ...
- 12/06/2022, Pretrained Diffusion Models for Unified Human Motion Synthesis: Generative modeling of human motion has broad applications in computer a...
- 03/02/2023, Human Motion Diffusion as a Generative Prior: In recent months, we witness a leap forward as denoising diffusion model...
- 08/23/2023, Dance with You: The Diversity Controllable Dancer Generation via Diffusion Models: Recently, digital humans for interpersonal interaction in virtual enviro...
- 06/15/2023, R2-Diff: Denoising by diffusion as a refinement of retrieved motion for image-based motion prediction: Image-based motion prediction is one of the essential techniques for rob...
- 06/20/2023, Reinforcement Learning-based Virtual Fixtures for Teleoperation of Hydraulic Construction Machine: The utilization of teleoperation is a crucial aspect of the construction...
- 05/16/2019, MoGlow: Probabilistic and controllable motion synthesis using normalising flows: Data-driven modelling and synthesis of motion data is an active research...
