BB8: A Scalable, Accurate, Robust to Partial Occlusion Method for Predicting the 3D Poses of Challenging Objects without Using Depth

03/31/2017
by   Mahdi Rad, et al.

We introduce a novel method for 3D object detection and pose estimation from color images only. We first use segmentation to detect the objects of interest in 2D, even in the presence of partial occlusions and cluttered background. By contrast with recent patch-based methods, we rely on a "holistic" approach: we apply to the detected objects a Convolutional Neural Network (CNN) trained to predict their 3D poses in the form of the 2D projections of the corners of their 3D bounding boxes. This, however, is not sufficient for handling objects from the recent T-LESS dataset: these objects exhibit an axis of rotational symmetry, and the similarity of two images of such an object under two different poses makes training the CNN challenging. We solve this problem by restricting the range of poses used for training, and by introducing a classifier to identify the range of a pose at run-time before estimating it. We also use an optional additional step that refines the predicted poses. We improve the state-of-the-art on the LINEMOD dataset from 73.7% to 89.3% of correctly registered RGB frames, and we are the first to report results on the Occlusion dataset using color images only. We obtain 54% of frames passing the Pose 6D criterion on average on several sequences of the T-LESS dataset, compared to the 67% of the state-of-the-art on the same sequences, which uses both color and depth. The full approach is also scalable, as a single network can be trained for multiple objects simultaneously.
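To make the core idea concrete, below is a minimal sketch (not the authors' code) of the final step implied by the abstract: once a network like BB8 has predicted the 2D projections of the eight corners of an object's 3D bounding box, the 6D pose can be recovered with a standard PnP solver given the camera intrinsics. The corner coordinates, box extents, and intrinsics below are placeholder values chosen only for illustration.

```python
import numpy as np
import cv2

# 3D corners of the object's bounding box in the object coordinate frame
# (placeholder half-extents; in practice these come from the object model).
ex, ey, ez = 0.05, 0.04, 0.03  # meters
corners_3d = np.array(
    [[sx * ex, sy * ey, sz * ez]
     for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)],
    dtype=np.float64,
)

# 2D projections of those corners as the CNN would predict them
# (placeholder pixel coordinates standing in for network output).
corners_2d = np.array(
    [[320.0, 240.0], [350.0, 238.0], [318.0, 270.0], [352.0, 268.0],
     [300.0, 250.0], [330.0, 248.0], [298.0, 280.0], [332.0, 278.0]],
    dtype=np.float64,
)

# Pinhole camera intrinsics (placeholder focal lengths and principal point).
K = np.array([[572.4, 0.0, 325.3],
              [0.0, 573.6, 242.0],
              [0.0, 0.0, 1.0]])

# Estimate rotation and translation from the 3D-2D corner correspondences.
ok, rvec, tvec = cv2.solvePnP(corners_3d, corners_2d, K, None,
                              flags=cv2.SOLVEPNP_EPNP)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix of the estimated pose
    print("R =", R)
    print("t =", tvec.ravel())
```

Predicting the corner projections rather than rotation angles directly keeps the CNN output in image space, which is one reason this "holistic" formulation handles partial occlusion and multiple objects with a single network.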

