MoCapAct: A Multi-Task Dataset for Simulated Humanoid Control

08/15/2022 · by Nolan Wagener, et al.

Simulated humanoids are an appealing research domain due to their physical capabilities. Nonetheless, they are also challenging to control, as a policy must drive an unstable, discontinuous, and high-dimensional physical system. One widely studied approach is to utilize motion capture (MoCap) data to teach the humanoid agent low-level skills (e.g., standing, walking, and running) that can then be re-used to synthesize high-level behaviors. However, even with MoCap data, controlling simulated humanoids remains very hard, as MoCap data offers only kinematic information. Finding physical control inputs to realize the demonstrated motions requires computationally intensive methods like reinforcement learning. Thus, despite the publicly available MoCap data, its utility has been limited to institutions with large-scale compute. In this work, we dramatically lower the barrier for productive research on this topic by training and releasing high-quality agents that can track over three hours of MoCap data for a simulated humanoid in the dm_control physics-based environment. We release MoCapAct (Motion Capture with Actions), a dataset of these expert agents and their rollouts, which contain proprioceptive observations and actions. We demonstrate the utility of MoCapAct by using it to train a single hierarchical policy capable of tracking the entire MoCap dataset within dm_control and show the learned low-level component can be re-used to efficiently learn downstream high-level tasks. Finally, we use MoCapAct to train an autoregressive GPT model and show that it can control a simulated humanoid to perform natural motion completion given a motion prompt. Videos of the results and links to the code and dataset are available at https://microsoft.github.io/MoCapAct.
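The rollouts described above pair proprioceptive observations with expert actions, which is exactly the supervision needed for behavioral cloning of a tracking policy. The sketch below illustrates that idea on synthetic data; the shapes, the linear policy, and the least-squares fit are illustrative assumptions, not the paper's actual architecture or the MoCapAct API.

```python
import numpy as np

# Hypothetical sketch of behavioral cloning on (observation, action) pairs,
# the kind of data MoCapAct rollouts contain. Dimensions are made up.
rng = np.random.default_rng(0)
obs_dim, act_dim, n = 10, 4, 5000

# Stand-in "expert": a fixed linear map from proprioceptive features to actions.
W_true = rng.normal(size=(obs_dim, act_dim))
observations = rng.normal(size=(n, obs_dim))                  # proprioceptive features
actions = observations @ W_true + 0.01 * rng.normal(size=(n, act_dim))  # expert actions

# Behavioral cloning here reduces to least squares: fit a linear policy
# that maps observations to the expert's actions.
W_fit, *_ = np.linalg.lstsq(observations, actions, rcond=None)

mse = np.mean((observations @ W_fit - actions) ** 2)
print(f"cloning MSE: {mse:.5f}")
```

In the actual dataset the policy is a neural network and the observations come from dm_control, but the training signal has the same supervised form.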


