Language-Driven Representation Learning for Robotics

02/24/2023
by Siddharth Karamcheti, et al.

Recent work in visual representation learning for robotics demonstrates the viability of learning from large video datasets of humans performing everyday tasks. Leveraging methods such as masked autoencoding and contrastive learning, these representations exhibit strong transfer to policy learning for visuomotor control. But robot learning encompasses a diverse set of problems beyond control, including grasp affordance prediction, language-conditioned imitation learning, and intent scoring for human-robot collaboration, amongst others. First, we demonstrate that existing representations yield inconsistent results across these tasks: masked autoencoding approaches pick up on low-level spatial features at the cost of high-level semantics, while contrastive learning approaches capture the opposite. We then introduce Voltron, a framework for language-driven representation learning from human videos and associated captions. Voltron trades off language-conditioned visual reconstruction to learn low-level visual patterns, and visually-grounded language generation to encode high-level semantics. We also construct a new evaluation suite spanning five distinct robot learning problems: a unified platform for holistically evaluating visual representations for robotics. Through comprehensive, controlled experiments across all five problems, we find that Voltron's language-driven representations outperform the prior state-of-the-art, especially on targeted problems requiring higher-level features.
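To make the trade-off concrete, here is a minimal sketch of how a dual objective like the one described above could be wired up: a shared encoder feeds one head that reconstructs masked image content conditioned on the caption, and another head that generates the caption conditioned on the visual features, with a single weight balancing the two losses. The class and argument names, the MSE/cross-entropy choices, and the `alpha` weighting are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a language-driven dual objective (assumed interfaces).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualObjectiveModel(nn.Module):
    """Toy model with two heads over a shared vision-language encoder:
    (1) language-conditioned reconstruction of masked frames (low-level),
    (2) visually-grounded caption generation (high-level semantics)."""

    def __init__(self, encoder, recon_head, lang_head, alpha=0.5):
        super().__init__()
        self.encoder = encoder        # shared encoder over frames + caption tokens (assumed)
        self.recon_head = recon_head  # predicts the masked-out pixels/patches
        self.lang_head = lang_head    # predicts caption token logits
        self.alpha = alpha            # trade-off between the two objectives

    def forward(self, frames, masked_frames, caption_ids):
        # Language-conditioned visual reconstruction loss.
        latents = self.encoder(masked_frames, caption_ids)
        recon_loss = F.mse_loss(self.recon_head(latents), frames)

        # Visually-grounded language generation loss (per-token cross-entropy).
        logits = self.lang_head(self.encoder(frames, caption_ids))  # (B, T, vocab)
        gen_loss = F.cross_entropy(logits.flatten(0, 1), caption_ids.flatten())

        # Single scalar loss trading off low-level and high-level objectives.
        return self.alpha * recon_loss + (1.0 - self.alpha) * gen_loss
```

Under this reading, `alpha` is the knob the abstract alludes to: pushing it toward 1 emphasizes low-level spatial features, pushing it toward 0 emphasizes high-level semantics.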
