Contextually Guided Semantic Labeling and Search for 3D Point Clouds

11/22/2011
by Abhishek Anand, et al.

RGB-D cameras, which provide an RGB image together with depths, are becoming increasingly popular for robotic perception. In this paper, we address the task of detecting commonly found objects in the 3D point clouds of indoor scenes obtained from such cameras. Our method uses a graphical model that captures various features and contextual relations, including local visual appearance and shape cues, object co-occurrence relationships, and geometric relationships. With a large number of object classes and relations, the model's parsimony becomes important, and we address that by using multiple types of edge potentials. We train the model using a maximum-margin learning approach. In our experiments over a total of 52 3D scenes of homes and offices (composed from about 550 views), we achieve a performance of 84.06% and 73.38% in labeling office and home scenes respectively, for 17 object classes each. We also present a method for a robot to search for an object using the learned model and the contextual information available from the current labelings of the scene. We applied this algorithm successfully on a mobile robot for the task of finding 12 object classes in 10 different offices, achieving a precision of 97.56% with 78.43% recall.
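To make the idea of a graphical model with node and edge potentials concrete, below is a minimal sketch in Python of the kind of scoring function such a model assigns to a candidate labeling of scene segments. All names (labeling_score, w_node, w_edge), dimensions, and the random placeholder data are hypothetical illustrations, not the paper's actual features or learned weights; in the paper the weights are learned with a maximum-margin method.

```python
import numpy as np

def labeling_score(labels, node_feats, edges, edge_feats, w_node, w_edge):
    """Score a candidate labeling of scene segments under a pairwise model.

    labels:     (n_segments,) int array of class ids
    node_feats: (n_segments, d_node) per-segment appearance/shape features
    edges:      list of (i, j) pairs of geometrically related segments
    edge_feats: (n_edges, d_edge) per-edge relational features
    w_node:     (n_classes, d_node) node weights
    w_edge:     (n_classes, n_classes, d_edge) edge weights per class pair
    """
    # Node potentials: how well each segment's features match its label.
    score = sum(node_feats[i] @ w_node[labels[i]] for i in range(len(labels)))
    # Edge potentials: how compatible the labels of related segments are,
    # given their relational (e.g. geometric) features.
    for e, (i, j) in enumerate(edges):
        score += edge_feats[e] @ w_edge[labels[i], labels[j]]
    return score

# Toy usage with random placeholder data: 5 segments, 3 classes.
rng = np.random.default_rng(0)
n_seg, n_cls, d_node, d_edge = 5, 3, 8, 4
node_feats = rng.normal(size=(n_seg, d_node))
edges = [(0, 1), (1, 2), (3, 4)]
edge_feats = rng.normal(size=(len(edges), d_edge))
w_node = rng.normal(size=(n_cls, d_node))
w_edge = rng.normal(size=(n_cls, n_cls, d_edge))

labels = np.array([0, 1, 1, 2, 0])
print("score:", labeling_score(labels, node_feats, edges, edge_feats, w_node, w_edge))
```

Inference would then seek the labeling that maximizes this score; the contextual object search described in the abstract can likewise reuse the edge terms to predict where an unseen object class is likely to be, given the segments already labeled.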


research
11/30/2017

3DContextNet: K-d Tree Guided Hierarchical Learning of Point Clouds Using Local Contextual Cues

3D data such as point clouds and meshes are becoming more and more avail...
research
06/19/2019

Neural Point-Based Graphics

We present a new point-based approach for modeling complex scenes. The a...
research
10/20/2022

Object Goal Navigation Based on Semantics and RGB Ego View

This paper presents an architecture and methodology to empower a service...
research
02/28/2020

Indoor Scene Recognition in 3D

Recognising in what type of environment one is located is an important p...
research
11/21/2020

Object Rearrangement Using Learned Implicit Collision Functions

Robotic object rearrangement combines the skills of picking and placing ...
research
01/25/2023

Implicit Shape Model Trees: Recognition of 3-D Indoor Scenes and Prediction of Object Poses for Mobile Robots

For a mobile robot, we present an approach to recognize scenes in arrang...
research
11/29/2017

A Generative Model of 3D Object Layouts in Apartments

Understanding indoor scenes is an important task in computer vision. Thi...
