Composite Motion Learning with Task Control

05/05/2023
by Pei Xu, et al.

We present a deep learning method for composite and task-driven motion control of physically simulated characters. In contrast to existing data-driven approaches that use reinforcement learning to imitate full-body motions, we learn decoupled motions for specific body parts from multiple reference motions simultaneously and directly, by leveraging multiple discriminators in a GAN-like setup. In this process, no manual work is needed to produce composite reference motions for learning; instead, the control policy explores on its own how the composite motions can be combined automatically. We further account for multiple task-specific rewards and train a single, multi-objective control policy. To this end, we propose a novel framework for multi-objective learning that adaptively balances the learning of disparate motions from multiple sources and of multiple goal-directed control objectives. In addition, as composite motions are typically augmentations of simpler behaviors, we introduce a sample-efficient method for training composite control policies incrementally, where we reuse a pre-trained policy as the meta policy and train a cooperative policy that adapts the meta policy to new composite tasks. We demonstrate the applicability of our approach on a variety of challenging multi-objective tasks involving both composite motion imitation and multiple goal-directed control.
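
As a rough illustration of the multi-discriminator idea described in the abstract, the sketch below scores each body part's observations with its own discriminator and converts the score into a per-part style reward. The names (PartDiscriminator, composite_style_rewards), the network sizes, and the least-squares-GAN reward shaping are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class PartDiscriminator(nn.Module):
    """Scores how closely one body part's motion matches its reference dataset."""

    def __init__(self, obs_dim, hidden_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, part_obs):
        return self.net(part_obs)


def composite_style_rewards(discriminators, part_obs):
    """Compute one style reward per body-part discriminator.

    `discriminators` maps a part name (e.g. "upper_body") to its
    PartDiscriminator; `part_obs` maps the same names to observation
    tensors covering that part only.
    """
    rewards = {}
    for name, disc in discriminators.items():
        score = disc(part_obs[name])
        # Least-squares-GAN style reward clipped to [0, 1]; the exact
        # reward shaping used by the paper may differ.
        rewards[name] = torch.clamp(1.0 - 0.25 * (score - 1.0) ** 2, 0.0, 1.0)
    return rewards


if __name__ == "__main__":
    # Toy usage: two decoupled part discriminators with different observation sizes.
    discs = {
        "upper_body": PartDiscriminator(obs_dim=32),
        "lower_body": PartDiscriminator(obs_dim=28),
    }
    obs = {
        "upper_body": torch.randn(4, 32),
        "lower_body": torch.randn(4, 28),
    }
    print(composite_style_rewards(discs, obs))
```

In this setup, each per-part reward can then be combined with the task-specific rewards by the multi-objective balancing scheme the abstract describes.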

