One-Shot Transfer of Affordance Regions? AffCorrs!

09/15/2022
by Denis Hadjivelichkov, et al.

In this work, we tackle one-shot visual search of object parts. Given a single reference image of an object with annotated affordance regions, we segment semantically corresponding parts within a target scene. We propose AffCorrs, an unsupervised model that combines pre-trained DINO-ViT image descriptors with cyclic correspondences. We use AffCorrs to find corresponding affordances for both intra- and inter-class one-shot part segmentation. This task is more difficult than its supervised alternatives, but it enables future work such as learning affordances via imitation and assisted teleoperation.
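Since the abstract names two ingredients, dense descriptors from a pre-trained DINO-ViT and cyclic correspondences, a rough illustration of how such a pairing can transfer an annotated part mask may be useful. The sketch below is not the authors' AffCorrs implementation: the dino_vits8 torch.hub entry point is the publicly released DINO backbone, while patch_descriptors, cyclic_matches, transfer_part_mask, and the toy reference mask are illustrative names and data invented for this example.

import torch
import torch.nn.functional as F

def patch_descriptors(model, image):
    # Per-patch descriptors from the last ViT block; drop the CLS token and L2-normalise.
    with torch.no_grad():
        tokens = model.get_intermediate_layers(image, n=1)[0]  # [1, 1+N, D]
    return F.normalize(tokens[0, 1:, :], dim=-1)               # [N, D]

def cyclic_matches(ref_feats, tgt_feats):
    # Cyclic (mutual nearest-neighbour) correspondences between patch sets.
    sim = ref_feats @ tgt_feats.T                  # cosine similarity, [N_ref, N_tgt]
    ref_to_tgt = sim.argmax(dim=1)                 # best target patch for each reference patch
    tgt_to_ref = sim.argmax(dim=0)                 # best reference patch for each target patch
    cyclic = tgt_to_ref[ref_to_tgt] == torch.arange(ref_feats.shape[0])
    return ref_to_tgt, cyclic                      # keep only matches that cycle back

def transfer_part_mask(ref_mask_patches, ref_feats, tgt_feats):
    # ref_mask_patches: [N_ref] integer part labels per reference patch (0 = background).
    # Returns a [N_tgt] label map over target patches via cycle-consistent matches.
    ref_to_tgt, cyclic = cyclic_matches(ref_feats, tgt_feats)
    tgt_labels = torch.zeros(tgt_feats.shape[0], dtype=torch.long)
    for ref_idx in torch.nonzero(cyclic & (ref_mask_patches > 0)).flatten():
        tgt_labels[ref_to_tgt[ref_idx]] = ref_mask_patches[ref_idx]
    return tgt_labels

if __name__ == "__main__":
    # Pre-trained, self-supervised DINO ViT-S/8 backbone from facebookresearch/dino.
    model = torch.hub.load("facebookresearch/dino:main", "dino_vits8").eval()
    ref = torch.rand(1, 3, 224, 224)   # stand-ins for preprocessed reference/target images
    tgt = torch.rand(1, 3, 224, 224)
    ref_feats = patch_descriptors(model, ref)
    tgt_feats = patch_descriptors(model, tgt)
    # Hypothetical reference annotation: one affordance part covering the first 100 patches.
    ref_mask_patches = torch.zeros(ref_feats.shape[0], dtype=torch.long)
    ref_mask_patches[:100] = 1
    print(transfer_part_mask(ref_mask_patches, ref_feats, tgt_feats).shape)  # torch.Size([784])

Keeping only cycle-consistent matches is what filters out one-directional, spurious nearest neighbours; in practice a spatial prior or clustering over the matched patches would be needed to produce clean region-level segmentations rather than scattered patch labels.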
