Where2Explore: Few-shot Affordance Learning for Unseen Novel Categories of Articulated Objects

09/14/2023
by   Chuanruo Ning, et al.

Articulated object manipulation is a fundamental yet challenging task in robotics. Because of significant geometric and semantic variation across object categories, previous manipulation models struggle to generalize to novel categories. Few-shot learning is a promising way to alleviate this issue, since it lets robots perform a few interactions with unseen objects. However, existing approaches often require costly and inefficient test-time interactions with each unseen instance. We observe that, despite their distinct overall shapes, different categories often share local geometries that are essential for manipulation, such as pullable handles and graspable edges, a commonality largely underexploited by previous few-shot learning work. To harness it, we introduce 'Where2Explore', an affordance learning framework that explores novel categories effectively with minimal interactions on a limited number of instances. Our framework explicitly estimates geometric similarity across categories, identifying local areas that differ from shapes seen during training for efficient exploration, while simultaneously transferring affordance knowledge to similar parts of the objects. Extensive experiments in simulated and real-world environments demonstrate our framework's capacity for efficient few-shot exploration and generalization.
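The exploration-and-transfer loop the abstract describes can be pictured with a small sketch. This is an illustrative approximation, not the paper's actual method or API: the function names (`select_query_points`, `propagate_affordance`), the cosine-similarity proxy for "similar local geometry", and the threshold value are all assumptions made for the example.

```python
import numpy as np

def select_query_points(similarity: np.ndarray, budget: int) -> np.ndarray:
    """Pick the `budget` surface points whose estimated similarity to the
    training categories is lowest, i.e. the most novel local geometries,
    where a few interactions are most informative (hypothetical criterion)."""
    return np.argsort(similarity)[:budget]

def propagate_affordance(affordance: np.ndarray,
                         features: np.ndarray,
                         queried: np.ndarray,
                         outcomes: np.ndarray,
                         sim_threshold: float = 0.9) -> np.ndarray:
    """Transfer each observed interaction outcome to all points whose local
    feature is similar to the queried point (cosine similarity above a
    threshold), standing in for the paper's affordance transfer step."""
    updated = affordance.copy()
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    for idx, outcome in zip(queried, outcomes):
        sims = f @ f[idx]  # cosine similarity of every point to the queried one
        updated[sims > sim_threshold] = outcome
    return updated

# Toy run: 3 surface points, interact with the one least similar to training
# shapes, then spread the observed outcome to geometrically similar points.
similarity = np.array([0.9, 0.1, 0.5])       # per-point similarity estimates
features = np.array([[1.0, 0.0],             # points 0 and 1 share geometry
                     [1.0, 0.0],
                     [0.0, 1.0]])
queried = select_query_points(similarity, budget=1)
affordance = propagate_affordance(np.zeros(3), features, queried,
                                  outcomes=np.array([1.0]))
```

The key design point this illustrates is the division of labor: low-similarity regions drive where to interact, while high-similarity regions receive the learned affordance for free, which is why only a handful of interactions on a few instances can suffice.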


Related research

- Context-Aware Zero-Shot Recognition (04/19/2019): We present a novel problem setting in zero-shot learning, zero-shot obje...
- Meta-Transfer Networks for Zero-Shot Learning (09/08/2019): Zero-Shot Learning (ZSL) aims at recognizing unseen categories using som...
- Zero-shot Point Cloud Segmentation by Transferring Geometric Primitives (10/18/2022): We investigate transductive zero-shot point cloud semantic segmentation ...
- One-Shot Affordance Detection (06/28/2021): Affordance detection refers to identifying the potential action possibil...
- AdaAfford: Learning to Adapt Manipulation Affordance for 3D Articulated Objects via Few-shot Interactions (12/01/2021): Perceiving and interacting with 3D articulated objects, such as cabinets...
- Structure from Action: Learning Interactions for Articulated Object 3D Structure Discovery (07/19/2022): Articulated objects are abundant in daily life. Discovering their parts,...
- Detecting Robotic Affordances on Novel Objects with Regional Attention and Attributes (09/12/2019): This paper presents a framework for predicting affordances of object par...
