Cross3DVG: Baseline and Dataset for Cross-Dataset 3D Visual Grounding on Different RGB-D Scans

05/23/2023
by Taiki Miyanishi et al.

We present Cross3DVG, a novel task of cross-dataset visual grounding in 3D scenes that exposes the limitations of existing 3D visual grounding models, which rely on restricted 3D resources and therefore tend to overfit to a specific 3D dataset. To facilitate Cross3DVG, we created a large-scale 3D visual grounding dataset containing more than 63k diverse, human-annotated descriptions of 3D objects within 1,380 indoor RGB-D scans from 3RScan, paired with the existing 52k descriptions of ScanRefer. In Cross3DVG, a model is trained on a source 3D visual grounding dataset and then evaluated, without using target labels, on a target dataset constructed in different ways (e.g., with different sensors, 3D reconstruction methods, and language annotators). We conduct comprehensive experiments with established visual grounding models, as well as a CLIP-based 2D-3D integration method designed to bridge the gaps between 3D datasets. These experiments show that (i) cross-dataset 3D visual grounding performs significantly worse than training and evaluating on a single dataset, suggesting substantial room for improvement in the cross-dataset generalization of 3D visual grounding; (ii) better object detectors and transformer-based localization modules benefit 3D grounding performance; and (iii) fusing 2D and 3D data with CLIP yields further gains. The Cross3DVG task will provide a benchmark for developing robust 3D visual grounding models capable of handling diverse 3D scenes while leveraging deep language understanding.


