MCP: Learning Composable Hierarchical Control with Multiplicative Compositional Policies

05/23/2019
by   Xue Bin Peng, et al.

Humans are able to perform a myriad of sophisticated tasks by drawing upon skills acquired through prior experience. For autonomous agents to have this capability, they must be able to extract reusable skills from past experience that can be recombined in new ways for subsequent tasks. Furthermore, when controlling complex high-dimensional morphologies, such as humanoid bodies, tasks often require coordination of multiple skills simultaneously. Learning discrete primitives for every combination of skills quickly becomes prohibitive. Composable primitives that can be recombined to create a large variety of behaviors are better suited to modeling this combinatorial explosion. In this work, we propose multiplicative compositional policies (MCP), a method for learning reusable motor skills that can be composed to produce a range of complex behaviors. Our method factorizes an agent's skills into a collection of primitives, where multiple primitives can be activated simultaneously via multiplicative composition. This flexibility allows the primitives to be transferred and recombined to elicit new behaviors as necessary for novel tasks. We demonstrate that MCP is able to extract composable skills for highly complex simulated characters from pre-training tasks, such as motion imitation, and then reuse these skills to solve challenging continuous control tasks, such as dribbling a soccer ball to a goal, and picking up an object and transporting it to a target location.
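
As a rough illustration of the multiplicative composition described above, the sketch below combines Gaussian primitives by taking a weighted product of their densities, which again yields a Gaussian whose per-dimension precision is the weighted sum of the primitives' precisions. This is only a minimal sketch: the function name, primitive count, and fixed gating weights are illustrative assumptions, standing in for the learned gating network and primitive policies.

```python
# Minimal sketch of multiplicative composition of Gaussian primitives.
# Assumes per-dimension (diagonal) Gaussian primitives and a stubbed-out
# gating output; not the authors' exact implementation.
import numpy as np

def compose_gaussian_primitives(mus, sigmas, weights):
    """Combine k Gaussian primitives into one Gaussian action distribution.

    mus:     (k, d) primitive means over a d-dimensional action space
    sigmas:  (k, d) per-dimension standard deviations
    weights: (k,) non-negative gating weights w_i(s, g)

    The weighted product of Gaussians is itself Gaussian, with precision
    (inverse variance) equal to the weighted sum of primitive precisions.
    """
    w = weights[:, None]                       # (k, 1), broadcast over action dims
    precision = np.sum(w / sigmas**2, axis=0)  # composite precision per dimension
    var = 1.0 / precision
    mu = var * np.sum(w * mus / sigmas**2, axis=0)
    return mu, np.sqrt(var)

# Usage: three primitives over a 2-D action space, with fixed gating weights
# standing in for the gating network's output.
mus = np.array([[0.5, -0.2], [1.0, 0.0], [-0.3, 0.4]])
sigmas = np.array([[0.2, 0.3], [0.5, 0.2], [0.3, 0.3]])
weights = np.array([0.7, 0.2, 0.1])
mu, sigma = compose_gaussian_primitives(mus, sigmas, weights)
action = np.random.normal(mu, sigma)           # sample an action from the composite
```

Because the composition happens in the space of distributions rather than by picking a single primitive, several primitives can shape the action at once, which is what allows the skills to be recombined for new tasks.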

Related Research

03/04/2021 · Toward Robust Long Range Policy Transfer
Humans can master a new task within a few trials by drawing upon skills ...

11/24/2020 · CoMic: Complementary Task Learning & Mimicry for Reusable Skills
Learning to control complex bodies and reuse learned behaviors is a long...

12/17/2022 · Cascaded Compositional Residual Learning for Complex Interactive Behaviors
Real-world autonomous missions often require rich interaction with nearb...

11/29/2020 · Self-supervised Visual Reinforcement Learning with Object-centric Representations
Autonomous agents need large repertoires of skills to act reasonably on ...

11/30/2017 · Learning to Compose Skills
We present a differentiable framework capable of learning a wide variety...

11/28/2018 · Neural probabilistic motor primitives for humanoid control
We focus on the problem of learning a single motor module that can flexi...

12/09/2021 · Learning Transferable Motor Skills with Hierarchical Latent Mixture Policies
For robots operating in the real world, it is desirable to learn reusabl...
