MetaMorph: Learning Universal Controllers with Transformers

03/22/2022
by Agrim Gupta, et al.

Multiple domains like vision, natural language, and audio are witnessing tremendous progress by leveraging Transformers for large-scale pre-training followed by task-specific fine-tuning. In contrast, in robotics we primarily train a single robot for a single task. Modular robot systems now allow for the flexible combination of general-purpose building blocks into task-optimized morphologies, but given the exponentially large number of possible robot morphologies, training a controller for each new design is impractical. In this work, we propose MetaMorph, a Transformer-based approach to learning a universal controller over a modular robot design space. MetaMorph is based on the insight that robot morphology is simply another modality on which we can condition the output of a Transformer. Through extensive experiments we demonstrate that large-scale pre-training on a variety of robot morphologies yields policies with combinatorial generalization capabilities, including zero-shot generalization to unseen robot morphologies. We further show that our pre-trained policy enables sample-efficient transfer to entirely new robot morphologies and tasks.
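To make the "morphology as a modality" idea concrete, here is a minimal sketch (not the authors' code) of a Transformer policy that treats each limb of a modular robot as one token. Because attention operates over a variable-length token sequence, a single set of weights can control robots with different limb counts. All names here (`LimbTransformerPolicy`, `obs_dim`, `act_dim`, the layer sizes) are illustrative assumptions, not the paper's actual hyperparameters.

```python
# Sketch: one shared Transformer policy over many robot morphologies,
# with each limb's local observation encoded as one token.
import torch
import torch.nn as nn

class LimbTransformerPolicy(nn.Module):
    def __init__(self, obs_dim=16, act_dim=2, d_model=64, n_layers=2, n_heads=4):
        super().__init__()
        # Per-limb observation (e.g. joint state plus limb geometry) -> token.
        self.embed = nn.Linear(obs_dim, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Decode each limb token back to that limb's action (e.g. joint torques).
        self.head = nn.Linear(d_model, act_dim)

    def forward(self, limb_obs, pad_mask=None):
        # limb_obs: (batch, num_limbs, obs_dim). num_limbs varies per
        # morphology, so batches mixing robots are zero-padded and the
        # padding tokens are masked out via pad_mask.
        x = self.embed(limb_obs)
        x = self.encoder(x, src_key_padding_mask=pad_mask)
        return self.head(x)  # (batch, num_limbs, act_dim)

# Two morphologies with different limb counts share the same weights.
policy = LimbTransformerPolicy()
obs_a = torch.randn(1, 5, 16)  # a robot with 5 limbs
obs_b = torch.randn(1, 8, 16)  # a robot with 8 limbs
assert policy(obs_a).shape == (1, 5, 2)
assert policy(obs_b).shape == (1, 8, 2)
```

Conditioning on per-limb geometry in the observation is what lets the same network specialize its output to each morphology, which is the property the pre-training and zero-shot results above rely on.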


