Feature-Realistic Neural Fusion for Real-Time, Open Set Scene Understanding

10/06/2022
by Kirill Mazur, et al.

General scene understanding for robotics requires a flexible semantic representation, so that novel objects and structures which may not have been known at training time can be identified, segmented and grouped. We present an algorithm which fuses general learned features from a standard pre-trained network into a highly efficient 3D geometric neural field representation during real-time SLAM. The fused 3D feature maps inherit the coherence of the neural field's geometry representation. As a result, tiny amounts of human labelling interaction at runtime enable objects, or even parts of objects, to be robustly and accurately segmented in an open-set manner.
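The workflow the abstract describes — averaging per-pixel features from a pre-trained network into a shared 3D representation, then segmenting from a handful of user-supplied labels — can be sketched in a simplified form. The paper fuses into a neural field; this illustrative sketch substitutes a flat voxel array, and the function names, shapes, and cosine-similarity labelling rule are assumptions, not the paper's implementation:

```python
import numpy as np

def fuse_features(grid, counts, voxel_ids, feats):
    """Running-average fusion of per-pixel 2D features into 3D cells.

    grid:      (V, D) fused feature per voxel (updated in place)
    counts:    (V,) number of observations per voxel (updated in place)
    voxel_ids: iterable of voxel indices hit by each pixel's back-projected ray
    feats:     (N, D) per-pixel features from a pre-trained network
    """
    for v, f in zip(voxel_ids, feats):
        counts[v] += 1
        # incremental mean: coherent features accumulate, noise averages out
        grid[v] += (f - grid[v]) / counts[v]

def segment(grid, prototypes):
    """Open-set labelling: assign each voxel to its nearest labelled prototype.

    prototypes: (K, D) fused features at the few voxels a user clicked/labelled
    returns:    (V,) class index per voxel, by cosine similarity
    """
    g = grid / (np.linalg.norm(grid, axis=1, keepdims=True) + 1e-8)
    p = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8)
    return np.argmax(g @ p.T, axis=1)

# Toy run: two voxels, three observations, two user-labelled prototypes.
grid = np.zeros((2, 3))
counts = np.zeros(2)
fuse_features(grid, counts, [0, 0, 1],
              np.array([[1., 0., 0.], [1., 0., 0.], [0., 1., 0.]]))
labels = segment(grid, np.array([[1., 0., 0.], [0., 1., 0.]]))
```

Because the averaged features inherit the smoothness of the underlying geometry, even a single labelled prototype per class can propagate to every voxel whose fused feature lies nearby in feature space, which is the mechanism behind the "tiny amounts of human labelling" claim.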
