One-Shot Transfer of Affordance Regions? AffCorrs!

09/15/2022
by Denis Hadjivelichkov, et al.

In this work, we tackle one-shot visual search of object parts. Given a single reference image of an object with annotated affordance regions, we segment semantically corresponding parts within a target scene. We propose AffCorrs, an unsupervised model that combines the properties of pre-trained DINO-ViT image descriptors with cyclic correspondences. We use AffCorrs to find corresponding affordances for both intra- and inter-class one-shot part segmentation. This task is more difficult than its supervised alternatives, but it enables future work such as learning affordances via imitation and assisted teleoperation.
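As a rough illustration of the cyclic-correspondence idea (not the AffCorrs implementation itself), the sketch below extracts patch descriptors with a pre-trained DINO ViT-S/8 from torch.hub, keeps only mutually nearest-neighbour (cycle-consistent) matches between reference and target patches, and transfers the annotated affordance mask through those matches. The helper names (`patch_descriptors`, `transfer_mask`), the choice of backbone, and the use of mutual nearest neighbours as a simplified stand-in for the paper's cyclic correspondences are all assumptions.

```python
# Hedged sketch: cycle-consistent patch matching with DINO-ViT descriptors.
# This is NOT the AffCorrs method; it only illustrates transferring an
# annotated affordance mask via mutually nearest-neighbour patch matches.
import torch
import torch.nn.functional as F

# Assumption: a frozen DINO ViT-S/8 backbone from torch.hub provides patch tokens.
model = torch.hub.load("facebookresearch/dino:main", "dino_vits8")
model.eval()

@torch.no_grad()
def patch_descriptors(img):
    """Return L2-normalised patch tokens for an image tensor.

    img: (1, 3, H, W), ImageNet-normalised, H and W divisible by the patch size (8).
    """
    tokens = model.get_intermediate_layers(img, n=1)[0]  # (1, 1 + N, D)
    feats = tokens[0, 1:, :]                             # drop the [CLS] token
    return F.normalize(feats, dim=-1)                    # (N, D)

@torch.no_grad()
def transfer_mask(ref_img, ref_mask, tgt_img):
    """Transfer a per-patch reference mask to the target via cyclic matches.

    ref_mask: (num_ref_patches,) bool tensor marking the annotated affordance
    patches (assumed to be rasterised onto the ViT patch grid beforehand).
    Returns a (num_tgt_patches,) bool tensor for the target image.
    """
    ref = patch_descriptors(ref_img)   # (Nr, D)
    tgt = patch_descriptors(tgt_img)   # (Nt, D)
    sim = ref @ tgt.T                  # cosine similarities, (Nr, Nt)

    fwd = sim.argmax(dim=1)            # reference -> target nearest neighbours
    bwd = sim.argmax(dim=0)            # target -> reference nearest neighbours

    # Cycle consistency: keep reference patch i only if bwd[fwd[i]] == i.
    cyclic = bwd[fwd] == torch.arange(ref.shape[0])

    tgt_mask = torch.zeros(tgt.shape[0], dtype=torch.bool)
    keep = ref_mask & cyclic
    tgt_mask[fwd[keep]] = True         # mark matched target patches
    return tgt_mask
```

In practice the per-patch target mask would be upsampled back to pixel resolution and refined; AffCorrs builds richer correspondences than this single mutual-nearest-neighbour check, which is shown here only to make the cyclic-matching intuition concrete.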

Related research

- One-Shot Instance Segmentation (11/28/2018): We tackle one-shot visual search by example for arbitrary object categor...
- Deep ViT Features as Dense Visual Descriptors (12/10/2021): We leverage deep features extracted from a pre-trained Vision Transforme...
- Class Correlation affects Single Object Localization using Pre-trained ConvNets (10/26/2017): The problem of object localization has become one of the mainstream prob...
- CRNet: Cross-Reference Networks for Few-Shot Segmentation (03/24/2020): Over the past few years, state-of-the-art image segmentation algorithms ...
- Masked Momentum Contrastive Learning for Zero-shot Semantic Understanding (08/22/2023): Self-supervised pretraining (SSP) has emerged as a popular technique in ...
- OPDMulti: Openable Part Detection for Multiple Objects (03/24/2023): Openable part detection is the task of detecting the openable parts of a...
- A Parts Based Registration Loss for Detecting Knee Joint Areas (06/30/2023): In this paper, a parts based loss is considered for finetune registering...
