Contextual Modeling for 3D Dense Captioning on Point Clouds

10/08/2022
by Yufeng Zhong, et al.

3D dense captioning, an emerging vision-language task, aims to identify and locate each object in a point cloud and generate a distinctive natural language sentence describing each located object. However, existing methods mainly focus on mining inter-object relationships while ignoring contextual information, especially the non-object details and background environment within the point cloud, thus leading to low-quality descriptions, such as inaccurate relative position information. In this paper, we make the first attempt to utilize point cloud clustering features as contextual information that supplies the non-object details and background environment of the point cloud, and incorporate them into the 3D dense captioning task. We propose two separate modules, namely Global Context Modeling (GCM) and Local Context Modeling (LCM), that perform contextual modeling of the point cloud in a coarse-to-fine manner. Specifically, the GCM module captures the inter-object relationships among all objects together with global contextual information to obtain more complete scene information for the whole point cloud. The LCM module exploits the influence of the neighboring objects of the target object and local contextual information to enrich the object representations. With such global and local contextual modeling strategies, our proposed model can effectively characterize object representations and contextual information and thereby generate comprehensive and detailed descriptions of the located objects. Extensive experiments on the ScanRefer and Nr3D datasets demonstrate that our proposed method sets a new record on the 3D dense captioning task and verify the effectiveness of our proposed contextual modeling of point clouds.
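A minimal PyTorch sketch of how such coarse-to-fine contextual modeling could be structured is given below. The module names follow the abstract (GCM attends over all context/cluster features; LCM attends only over each object's nearest neighbours), but the attention formulation, feature dimensions, and the k-nearest-neighbour selection by centre distance are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn


class GlobalContextModeling(nn.Module):
    """Sketch of GCM: every object proposal attends to all context
    (point-cloud clustering) features of the scene."""

    def __init__(self, d_model=128, nhead=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, obj_feats, ctx_feats):
        # obj_feats: (B, N_obj, D) object proposal features
        # ctx_feats: (B, N_ctx, D) clustering / context features
        attended, _ = self.attn(obj_feats, ctx_feats, ctx_feats)
        return self.norm(obj_feats + attended)


class LocalContextModeling(nn.Module):
    """Sketch of LCM: each object attends only to its k nearest neighbours
    (other objects and context clusters), chosen by centre distance."""

    def __init__(self, d_model=128, nhead=4, k=8):
        super().__init__()
        self.k = k
        self.attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, obj_feats, obj_centers, ctx_feats, ctx_centers):
        # obj_feats: (B, N_obj, D), obj_centers: (B, N_obj, 3)
        # ctx_feats: (B, N_ctx, D), ctx_centers: (B, N_ctx, 3)
        feats = torch.cat([obj_feats, ctx_feats], dim=1)         # (B, N, D)
        centers = torch.cat([obj_centers, ctx_centers], dim=1)   # (B, N, 3)
        dists = torch.cdist(obj_centers, centers)                # (B, N_obj, N)
        knn_idx = dists.topk(self.k, largest=False).indices      # (B, N_obj, k)

        B, n_obj, d = obj_feats.shape
        # Gather the k neighbour features for every object.
        neighbours = torch.gather(
            feats.unsqueeze(1).expand(B, n_obj, feats.size(1), d),
            2,
            knn_idx.unsqueeze(-1).expand(-1, -1, -1, d),
        )                                                        # (B, N_obj, k, D)

        query = obj_feats.reshape(B * n_obj, 1, d)
        kv = neighbours.reshape(B * n_obj, self.k, d)
        attended, _ = self.attn(query, kv, kv)
        attended = attended.reshape(B, n_obj, d)
        return self.norm(obj_feats + attended)


if __name__ == "__main__":
    # Toy usage: coarse global enrichment first, then fine local enrichment.
    gcm, lcm = GlobalContextModeling(), LocalContextModeling()
    obj, obj_xyz = torch.randn(2, 16, 128), torch.randn(2, 16, 3)
    ctx, ctx_xyz = torch.randn(2, 32, 128), torch.randn(2, 32, 3)
    enriched = lcm(gcm(obj, ctx), obj_xyz, ctx, ctx_xyz)
    print(enriched.shape)  # torch.Size([2, 16, 128])
```

The coarse-to-fine ordering mirrors the abstract: the global pass injects scene-wide background cues into every object, and the local pass then refines each object with its immediate surroundings before the enriched features are fed to a caption decoder.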

