Dexterous Robotic Grasping with Object-Centric Visual Affordances

09/03/2020
by Priyanka Mandikal, et al.

Dexterous robotic hands are appealing for their agility and human-like morphology, yet their high number of degrees of freedom makes learning to manipulate them challenging. We introduce an approach for learning dexterous grasping. Our key idea is to embed an object-centric visual affordance model within a deep reinforcement learning loop to learn grasping policies that favor the same object regions favored by people. Unlike traditional approaches that learn from human demonstration trajectories (e.g., hand joint sequences captured with a glove), the proposed prior is object-centric and image-based, allowing the agent to anticipate useful affordance regions for objects unseen during policy learning. We demonstrate our idea with a 30-DoF five-fingered robotic hand simulator on 40 objects from two datasets, where it successfully and efficiently learns policies for stable grasps. Our affordance-guided policies are significantly more effective, generalize better to novel objects, and train 3× faster than the baselines. Our work offers a step towards manipulation agents that learn by watching how people use objects, without requiring state and action information about the human body. Project website: http://vision.cs.utexas.edu/projects/graff-dexterous-affordance-grasp
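The core idea, an affordance prior shaping the reward inside an RL grasping loop, can be sketched as follows. This is a minimal illustration under assumed interfaces, not the paper's actual implementation: the function name `affordance_reward`, the contact-point representation, and the weight `w_aff` are all hypothetical, and the affordance heatmap stands in for the output of a pretrained object-centric affordance model.

```python
import numpy as np

def affordance_reward(contact_points, affordance_heatmap, grasp_success, w_aff=0.5):
    """Shaped reward: base grasp reward plus a bonus for touching
    high-affordance object regions (hypothetical interface).

    contact_points: (N, 2) integer (col, row) pixel coordinates of
        hand-object contacts, projected into the affordance map frame.
    affordance_heatmap: (H, W) array in [0, 1], e.g. the prediction of
        a pretrained affordance model for the current object image.
    grasp_success: 1.0 if the object is held stably this step, else 0.0.
    """
    if len(contact_points) == 0:
        aff_bonus = 0.0
    else:
        rows = np.clip(contact_points[:, 1], 0, affordance_heatmap.shape[0] - 1)
        cols = np.clip(contact_points[:, 0], 0, affordance_heatmap.shape[1] - 1)
        # Average affordance value under the current contacts.
        aff_bonus = float(affordance_heatmap[rows, cols].mean())
    return grasp_success + w_aff * aff_bonus

# Toy example: a 64x64 map with one high-affordance patch, and two
# contacts of which one lands on that patch.
heatmap = np.zeros((64, 64))
heatmap[30:34, 30:34] = 1.0
contacts = np.array([[31, 31], [5, 5]])  # (col, row) pixels
r = affordance_reward(contacts, heatmap, grasp_success=1.0)  # 1.0 + 0.5 * 0.5
```

In this sketch the policy is rewarded not just for lifting the object but for making contact where people tend to grasp, which is what steers exploration toward human-preferred regions.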


Related research

- DexVIP: Learning Dexterous Grasping with Human Hand Pose Priors from Video (02/01/2022)
  Dexterous multi-fingered robotic hands have a formidable action space, y...
- Efficient Representations of Object Geometry for Reinforcement Learning of Interactive Grasping Policies (11/20/2022)
  Grasping objects of different shapes and sizes - a foundational, effortl...
- Affordance Learning from Play for Sample-Efficient Policy Learning (03/01/2022)
  Robots operating in human-centered environments should have the ability ...
- A Grasp Pose is All You Need: Learning Multi-fingered Grasping with Deep Reinforcement Learning from Vision and Touch (06/06/2023)
  Multi-fingered robotic hands could enable robots to perform sophisticate...
- Shaping embodied agent behavior with activity-context priors from egocentric video (10/14/2021)
  Complex physical tasks entail a sequence of object interactions, each wi...
- Learning Generalizable Tool Use with Non-rigid Grasp-pose Registration (07/31/2023)
  Tool use, a hallmark feature of human intelligence, remains a challengin...
- Split Deep Q-Learning for Robust Object Singulation (09/17/2019)
  Extracting a known target object from a pile of other objects in a clutt...
