Open-World Object Manipulation using Pre-trained Vision-Language Models

03/02/2023
by Austin Stone, et al.

For robots to follow instructions from people, they must be able to connect the rich semantic information in human vocabulary, e.g. "can you get me the pink stuffed whale?", to their sensory observations and actions. This poses a notably difficult challenge for robots: while robot learning approaches allow robots to learn many different behaviors from first-hand experience, it is impractical for robots to have first-hand experiences that span all of this semantic information. We would like a robot's policy to be able to perceive and pick up the pink stuffed whale even if it has never seen any data of interaction with a stuffed whale before. Fortunately, static data on the internet contains vast semantic information, and this information is captured in pre-trained vision-language models. In this paper, we study whether we can interface robot policies with these pre-trained models, with the aim of allowing robots to complete instructions involving object categories the robot has never seen first-hand. We develop a simple approach, which we call Manipulation of Open-World Objects (MOO), which leverages a pre-trained vision-language model to extract object-identifying information from the language command and image, and conditions the robot policy on the current image, the instruction, and the extracted object information. In a variety of experiments on a real mobile manipulator, we find that MOO generalizes zero-shot to a wide range of novel object categories and environments. In addition, we show how MOO generalizes to other, non-language-based ways of specifying the object of interest, such as finger pointing, and how it can be further extended to enable open-world navigation and manipulation. The project's website and evaluation videos can be found at https://robot-moo.github.io/
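
The interface described above (extract object-identifying information with a pre-trained vision-language model, then condition the policy on the image, the instruction, and that information) can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes OWL-ViT via the Hugging Face transformers API as the open-vocabulary detector, encodes the extracted object information as a single-pixel mask channel appended to the image, and uses a hypothetical `policy.act` placeholder for the learned manipulation policy.

```python
# Sketch of a MOO-style interface: a pre-trained vision-language detector
# localizes the object named in the instruction, and the policy is conditioned
# on the image, the instruction, and the extracted object location.
# Assumptions: OWL-ViT (Hugging Face transformers) as the detector;
# `policy` is a hypothetical stand-in for the learned manipulation policy.
import numpy as np
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
detector = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

def extract_object_pixel(image: Image.Image, object_phrase: str):
    """Return the (x, y) center of the highest-scoring detection for the phrase."""
    inputs = processor(text=[[object_phrase]], images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = detector(**inputs)
    target_sizes = torch.tensor([image.size[::-1]])  # PIL size is (w, h); we need (h, w)
    results = processor.post_process_object_detection(
        outputs=outputs, target_sizes=target_sizes, threshold=0.05
    )[0]
    if len(results["scores"]) == 0:
        return None  # object not found in the current frame
    best = results["scores"].argmax()
    x0, y0, x1, y1 = results["boxes"][best].tolist()
    return (x0 + x1) / 2.0, (y0 + y1) / 2.0

def object_mask_channel(image: Image.Image, pixel) -> np.ndarray:
    """Encode the object location as a single-pixel mask channel."""
    mask = np.zeros((image.height, image.width), dtype=np.float32)
    if pixel is not None:
        x = int(np.clip(round(pixel[0]), 0, image.width - 1))
        y = int(np.clip(round(pixel[1]), 0, image.height - 1))
        mask[y, x] = 1.0
    return mask

image = Image.open("camera_frame.png")          # current robot observation
instruction = "pick up the pink stuffed whale"  # natural-language command
pixel = extract_object_pixel(image, "pink stuffed whale")
obs = np.concatenate(
    [np.asarray(image, dtype=np.float32) / 255.0,   # RGB channels
     object_mask_channel(image, pixel)[..., None]], # object-location channel
    axis=-1,
)
# action = policy.act(obs, instruction)  # hypothetical policy interface
```

A design note on why this shape of interface supports the generalization claimed in the abstract: because the object is handed to the policy as an image-space location rather than a category label, any modality that yields a pixel (a detector run on a language query, or a pointed finger localized in the image) can feed the same channel, without retraining the policy on new object categories.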


