DexVIP: Learning Dexterous Grasping with Human Hand Pose Priors from Video

02/01/2022
by Priyanka Mandikal, et al.

Dexterous multi-fingered robotic hands have a formidable action space, yet their morphological similarity to the human hand holds immense potential to accelerate robot learning. We propose DexVIP, an approach to learn dexterous robotic grasping from human-object interactions present in in-the-wild YouTube videos. We do this by curating grasp images from human-object interaction videos and imposing a prior over the agent's hand pose when learning to grasp with deep reinforcement learning. A key advantage of our method is that the learned policy is able to leverage free-form in-the-wild visual data. As a result, it can easily scale to new objects, and it sidesteps the standard practice of collecting human demonstrations in a lab – a much more expensive and indirect way to capture human expertise. Through experiments on 27 objects with a 30-DoF simulated robot hand, we demonstrate that DexVIP compares favorably to existing approaches that lack a hand pose prior or rely on specialized tele-operation equipment to obtain human demonstrations, while also being faster to train. Project page: https://vision.cs.utexas.edu/projects/dexvip-dexterous-grasp-pose-prior
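
As a rough illustration of what imposing a hand pose prior during deep reinforcement learning might look like, the sketch below adds a pose-deviation penalty to the environment's grasping reward. The function names, the quadratic penalty, the weighting, and the 30-dimensional joint vector are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def pose_prior_reward(hand_joints, prior_joints, weight=0.1):
    """Penalize deviation of the robot hand's joint configuration from a
    human grasp pose prior estimated from video.

    hand_joints, prior_joints: 1-D arrays of joint angles (e.g. 30-DoF).
    The quadratic form and the weight are illustrative choices only.
    """
    return -weight * float(np.sum((hand_joints - prior_joints) ** 2))

def shaped_reward(task_reward, hand_joints, prior_joints, weight=0.1):
    """Combine the task reward (e.g. successful object lift) with the
    hand pose prior term used to shape exploration."""
    return task_reward + pose_prior_reward(hand_joints, prior_joints, weight)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    current_pose = rng.uniform(-1.0, 1.0, size=30)  # simulated 30-DoF hand
    prior_pose = rng.uniform(-1.0, 1.0, size=30)    # pose inferred from video
    print(shaped_reward(task_reward=1.0,
                        hand_joints=current_pose,
                        prior_joints=prior_pose))
```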

Related research

09/03/2020 | Dexterous Robotic Grasping with Object-Centric Visual Affordances
Dexterous robotic hands are appealing for their agility and human-like m...

06/06/2023 | A Grasp Pose is All You Need: Learning Multi-fingered Grasping with Deep Reinforcement Learning from Vision and Touch
Multi-fingered robotic hands could enable robots to perform sophisticate...

10/14/2021 | Shaping embodied agent behavior with activity-context priors from egocentric video
Complex physical tasks entail a sequence of object interactions, each wi...

05/25/2023 | Look Ma, No Hands! Agent-Environment Factorization of Egocentric Videos
The analysis and use of egocentric videos for robotic tasks is made chal...

04/23/2021 | H2O: A Benchmark for Visual Human-human Object Handover Analysis
Object handover is a common human collaboration behavior that attracts a...

12/06/2021 | DemoGrasp: Few-Shot Learning for Robotic Grasping with Human Demonstration
The ability to successfully grasp objects is crucial in robotics, as it ...

07/31/2023 | Deep Reinforcement Learning of Dexterous Pre-grasp Manipulation for Human-like Functional Categorical Grasping
Many objects such as tools and household items can be used only if grasp...