What can I do here? Leveraging Deep 3D saliency and geometry for fast and scalable multiple affordance detection

12/03/2018
by Eduardo Ruiz, et al.

This paper develops and evaluates a novel method for detecting affordances on visually recovered pointclouds in a scalable, multiple-instance manner. Our approach has several advantages over alternative methods: it is based on highly parallelizable, one-shot learning and is fast on commodity hardware. The approach is hybrid in that it combines a geometric representation with a state-of-the-art deep learning method capable of identifying 3D scene saliency. The geometric component allows for a compact and efficient representation, boosting the performance of the deep network architecture, which proved insufficient on its own. Moreover, our approach predicts not only whether an input scene affords the interactions or not, but also the pose of the objects that allow these interactions to take place. Our predictions align well with crowd-sourced human judgment: they are preferred in 87% of cases, achieve almost four times (4x) better performance than a deep learning-only baseline, and run seven times (7x) faster than previous art.
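
To make the hybrid idea concrete, the sketch below illustrates one plausible reading of the pipeline described in the abstract: a per-point saliency score (standing in for the deep 3D saliency network) narrows the scene to promising locations, and a compact geometric descriptor learned from a single example (one-shot) is matched at those locations to propose affordance candidates and their positions. This is an illustrative sketch only, not the authors' implementation; all function names, parameters, and the descriptor choice are hypothetical placeholders.

import numpy as np

def geometric_descriptor(points, center, radius=0.3, bins=16):
    # Hypothetical compact descriptor: histogram of distances from `center`
    # to the points that fall within `radius` of it.
    d = np.linalg.norm(points - center, axis=1)
    local = d[d < radius]
    if local.size == 0:
        return np.zeros(bins)
    hist, _ = np.histogram(local, bins=bins, range=(0.0, radius), density=True)
    return hist

def detect_affordances(scene_points, saliency_scores, query_descriptor,
                       top_k=50, match_threshold=0.5):
    # Keep only the most salient points, as a deep saliency network would suggest.
    salient_idx = np.argsort(saliency_scores)[-top_k:]
    candidates = []
    for i in salient_idx:
        desc = geometric_descriptor(scene_points, scene_points[i])
        # Simple descriptor similarity; each candidate is scored independently,
        # so this loop is trivially parallelizable.
        denom = np.abs(desc).sum() + np.abs(query_descriptor).sum() + 1e-9
        sim = 1.0 - np.abs(desc - query_descriptor).sum() / denom
        if sim > match_threshold:
            # The matched location doubles as a crude estimate of where the
            # interaction could take place.
            candidates.append((scene_points[i], sim))
    return candidates

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.uniform(-1, 1, size=(2000, 3))   # stand-in for a visually recovered pointcloud
    saliency = rng.uniform(0, 1, size=2000)      # stand-in for deep 3D saliency output
    query = geometric_descriptor(scene, scene[0])  # one-shot "training" descriptor
    print(len(detect_affordances(scene, saliency, query)), "candidate affordance locations")

In this reading, the deep network prunes the search space while the geometric descriptor provides the compact, pose-aware representation; the actual paper should be consulted for the real architecture and descriptor.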
