3D-RelNet: Joint Object and Relational Network for 3D Prediction

06/06/2019
by Nilesh Kulkarni, et al.

We propose an approach to predict the 3D shape and pose of the objects present in a scene. Existing learning-based methods that pursue this goal make independent predictions per object and do not leverage the relationships amongst them. We argue that reasoning about these relationships is crucial, and present an approach to incorporate them into a 3D prediction framework. In addition to independent per-object predictions, we predict pairwise relations in the form of relative 3D pose, and demonstrate that these can be easily incorporated to improve object-level estimates. We report performance across different datasets (SUNCG, NYUv2), and show that our approach significantly improves over independent prediction approaches while also outperforming alternate implicit reasoning methods.
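To make the fusion step concrete, below is a minimal sketch of how pairwise relative-pose predictions could be combined with independent per-object estimates. It is not the authors' implementation: it handles only translations (the paper's method also covers rotation, scale, and shape), and all names (`refine_translations`, `unary_t`, `pairwise_rel_t`) are hypothetical.

```python
import numpy as np

def refine_translations(unary_t, pairwise_rel_t, weight=0.5):
    """Fuse per-object translation estimates with pairwise relative translations.

    unary_t:        (N, 3) array of independently predicted object translations.
    pairwise_rel_t: dict mapping (i, j) to the predicted translation of object j
                    relative to object i, i.e. an estimate of t_j - t_i.
    weight:         how much to trust relation-derived estimates vs. unary ones.
    """
    n_objects = unary_t.shape[0]
    refined = np.copy(unary_t)
    for j in range(n_objects):
        # Estimates of object j's translation implied by each relation (i, j).
        relational = [unary_t[i] + rel
                      for (i, jj), rel in pairwise_rel_t.items() if jj == j]
        if relational:
            # Average the relation-derived estimates, then blend with the
            # independent prediction for object j.
            rel_mean = np.mean(relational, axis=0)
            refined[j] = (1 - weight) * unary_t[j] + weight * rel_mean
    return refined
```

The idea this illustrates is the one stated in the abstract: each pairwise relative pose, composed with another object's estimate, yields an additional hypothesis for an object's pose, and aggregating these hypotheses with the independent prediction improves the final object-level estimate.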

