VL-Fields: Towards Language-Grounded Neural Implicit Spatial Representations

05/21/2023
by Nikolaos Tsagkas, et al.

We present Visual-Language Fields (VL-Fields), a neural implicit spatial representation that enables open-vocabulary semantic queries. Our model encodes and fuses the geometry of a scene with vision-language latent features by distilling information from a language-driven segmentation model. VL-Fields is trained without requiring any prior knowledge of the scene object classes, which makes it a promising representation for robotics. Our model outperformed the similar CLIP-Fields model in the task of semantic segmentation by almost 10%.
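To make the idea concrete, below is a minimal sketch (not the authors' code) of a language-grounded neural field: an MLP maps a 3D point to a density value (geometry) and a unit-norm feature aligned with a vision-language embedding space. The feature branch is trained by distilling per-pixel features that a language-driven segmentation model (e.g. LSeg) would produce; here the targets and text embeddings are random placeholders, and the class `VLField`, the constant `FEATURE_DIM`, and all hyperparameters are illustrative assumptions. Open-vocabulary queries are then answered by cosine similarity between field features and CLIP-style text embeddings.

```python
# Hypothetical sketch of a vision-language neural field with feature distillation.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEATURE_DIM = 512  # assumed size of the vision-language embedding space


class VLField(nn.Module):
    """Illustrative MLP field: 3D point -> (density, vision-language feature)."""

    def __init__(self, hidden: int = 256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)            # geometry branch
        self.feature_head = nn.Linear(hidden, FEATURE_DIM)  # language branch

    def forward(self, xyz: torch.Tensor):
        h = self.trunk(xyz)
        density = F.softplus(self.density_head(h))
        feature = F.normalize(self.feature_head(h), dim=-1)
        return density, feature


field = VLField()
optimizer = torch.optim.Adam(field.parameters(), lr=1e-3)

# One distillation step: the targets stand in for per-pixel features from a
# language-driven segmentation model associated with the sampled 3D points.
points = torch.rand(1024, 3)
target_features = F.normalize(torch.randn(1024, FEATURE_DIM), dim=-1)
_, pred_features = field(points)
loss = (1.0 - F.cosine_similarity(pred_features, target_features, dim=-1)).mean()
loss.backward()
optimizer.step()

# Open-vocabulary query: score field features against text embeddings
# (random stand-ins for CLIP text encodings of prompts like "chair", "table").
text_embeddings = F.normalize(torch.randn(4, FEATURE_DIM), dim=-1)
with torch.no_grad():
    _, feats = field(points)
    class_ids = (feats @ text_embeddings.T).argmax(dim=-1)  # per-point labels
```

Because no class list is baked into the field, new queries only require encoding a new text prompt at inference time, which is what makes this kind of representation attractive for open-ended robotics tasks.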


Related research

- Neural Implicit Vision-Language Feature Fields (03/20/2023)
- NeSF: Neural Semantic Fields for Generalizable Semantic Segmentation of 3D Scenes (11/25/2021)
- CLIP-Fields: Weakly Supervised Semantic Fields for Robotic Memory (10/11/2022)
- Learning Neural Acoustic Fields (04/04/2022)
- USA-Net: Unified Semantic and Affordance Representations for Robot Memory (04/24/2023)
- HRTF Field: Unifying Measured HRTF Magnitude Representation with Neural Fields (10/27/2022)
- Pretraining on Interactions for Learning Grounded Affordance Representations (07/05/2022)
