LLM-Grounder: Open-Vocabulary 3D Visual Grounding with Large Language Model as an Agent

09/21/2023
by Jianing Yang, et al.

3D visual grounding is a critical skill for household robots, enabling them to navigate, manipulate objects, and answer questions based on their environment. While existing approaches often rely on extensive labeled data or exhibit limitations in handling complex language queries, we propose LLM-Grounder, a novel zero-shot, open-vocabulary, Large Language Model (LLM)-based 3D visual grounding pipeline. LLM-Grounder utilizes an LLM to decompose complex natural language queries into semantic constituents and employs a visual grounding tool, such as OpenScene or LERF, to identify objects in a 3D scene. The LLM then evaluates the spatial and commonsense relations among the proposed objects to make a final grounding decision. Our method does not require any labeled training data and can generalize to novel 3D scenes and arbitrary text queries. We evaluate LLM-Grounder on the ScanRefer benchmark and demonstrate state-of-the-art zero-shot grounding accuracy. Our findings indicate that LLMs significantly improve the grounding capability, especially for complex language queries, making LLM-Grounder an effective approach for 3D vision-language tasks in robotics. Videos and interactive demos can be found on the project website https://chat-with-nerf.github.io/ .
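The pipeline described above — decompose the query with an LLM, propose candidates with a visual grounding tool, then reason over spatial relations — can be sketched in miniature. This is a toy illustration, not the paper's implementation: `decompose_query` stands in for the LLM parsing step, `ground_candidates` stands in for an open-vocabulary tool such as OpenScene or LERF, and the "near" relation is scored by plain Euclidean distance rather than LLM reasoning. All names and the toy scene are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    label: str
    center: tuple  # (x, y, z) object center in scene coordinates

def decompose_query(query: str) -> dict:
    """Stand-in for the LLM step that splits a complex query into
    semantic constituents (target object, relation, landmark)."""
    # A real system would prompt an LLM; here we hard-code one parse.
    return {"target": "chair", "relation": "near", "landmark": "table"}

def ground_candidates(label: str, scene: list) -> list:
    """Stand-in for an open-vocabulary 3D grounding tool (e.g.
    OpenScene or LERF) proposing objects that match a noun phrase."""
    return [c for c in scene if c.label == label]

def distance(a: tuple, b: tuple) -> float:
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def llm_grounder(query: str, scene: list) -> Candidate:
    """Decompose the query, propose candidates, then select the target
    candidate best satisfying the spatial relation to the landmark."""
    parts = decompose_query(query)
    targets = ground_candidates(parts["target"], scene)
    landmarks = ground_candidates(parts["landmark"], scene)
    # The paper has the LLM evaluate spatial/commonsense relations;
    # this sketch scores "near" as minimum distance to any landmark.
    return min(
        targets,
        key=lambda t: min(distance(t.center, l.center) for l in landmarks),
    )

scene = [
    Candidate("chair", (0.0, 0.0, 0.0)),
    Candidate("chair", (5.0, 0.0, 0.0)),
    Candidate("table", (4.5, 0.5, 0.0)),
]
best = llm_grounder("the chair near the table", scene)
print(best.center)  # the chair closest to the table wins
```

Because no component is trained on the target scene, swapping in a stronger grounding tool or a better LLM parse improves the whole pipeline — this modularity is what makes the approach zero-shot and open-vocabulary.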


Related research

- GRILL: Grounded Vision-language Pre-training via Aligning Text and Image Regions (05/24/2023)
- LERF: Language Embedded Radiance Fields (03/16/2023)
- OpenScene: 3D Scene Understanding with Open Vocabularies (11/28/2022)
- Semantic Abstraction: Open-World 3D Scene Understanding from 2D Vision-Language Models (07/23/2022)
- Jump Operator Planning: Goal-Conditioned Policy Ensembles and Zero-Shot Transfer (07/06/2020)
- Interactive Reinforcement Learning for Object Grounding via Self-Talking (12/02/2017)
- DesCo: Learning Object Recognition with Rich Language Descriptions (06/24/2023)
