SHOWMe: Benchmarking Object-agnostic Hand-Object 3D Reconstruction

09/19/2023
by Anilkumar Swamy et al.

Recent hand-object interaction datasets show limited real object variability and rely on fitting the MANO parametric model to obtain groundtruth hand shapes. To go beyond these limitations and spur further research, we introduce the SHOWMe dataset, which consists of 96 videos annotated with real and detailed hand-object 3D textured meshes. Following recent work, we consider a rigid hand-object scenario, in which the pose of the hand with respect to the object remains constant throughout the video sequence. This assumption allows us to register sub-millimetre-precise groundtruth 3D scans to the image sequences in SHOWMe. Although simpler, this hypothesis makes sense for applications where high accuracy and level of detail are important, e.g., object hand-over in human-robot collaboration, object scanning, or manipulation and contact point analysis. Importantly, the rigidity of the hand-object system allows us to tackle video-based 3D reconstruction of unknown hand-held objects using a 2-stage pipeline consisting of a rigid registration step followed by a multi-view reconstruction (MVR) part. We carefully evaluate a set of non-trivial baselines for these two stages and show that it is possible to achieve promising object-agnostic 3D hand-object reconstructions by employing an SfM toolbox or a hand pose estimator to recover the rigid transforms, together with off-the-shelf MVR algorithms. However, these methods remain sensitive to the initial camera pose estimates, which may be imprecise due to a lack of texture on the objects or heavy occlusion of the hands, leaving room for improvement in the reconstruction. Code and dataset are available at https://europe.naverlabs.com/research/showme
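The first stage of the pipeline described above recovers per-frame rigid transforms before multi-view reconstruction. As a minimal illustration of what rigid registration between two corresponding 3D point sets involves, the sketch below implements the classic Kabsch/SVD alignment with NumPy. This is a standard textbook method, not necessarily the exact registration procedure used by the SHOWMe baselines (which rely on an SfM toolbox or a hand pose estimator); it assumes known point correspondences and no noise handling.

```python
import numpy as np

def rigid_registration(src, dst):
    """Estimate the rotation R and translation t that best map
    src onto dst, i.e. minimize ||R @ src_i + t - dst_i||^2
    (Kabsch algorithm via SVD of the cross-covariance matrix).

    src, dst: (N, 3) arrays of corresponding 3D points.
    Returns (R, t) with R a (3, 3) rotation and t a (3,) vector.
    """
    # Center both point sets on their centroids.
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    src_c = src - src_mean
    dst_c = dst - dst_mean

    # Cross-covariance and its SVD.
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)

    # Correct for a possible reflection so det(R) = +1.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T

    # Translation follows from the centroids.
    t = dst_mean - R @ src_mean
    return R, t
```

In a rigid hand-object setting, a transform like this (estimated per frame, e.g. from SfM or hand-pose keypoints) is what brings every frame's observations into a common object-centric coordinate frame before the MVR step.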


