First-Person Hand Action Benchmark with RGB-D Videos and 3D Hand Pose Annotations

In this work we study the use of 3D hand poses to recognize first-person hand actions interacting with 3D objects. Towards this goal, we collected RGB-D video sequences comprising more than 100K frames of 45 daily hand action categories, involving 25 different objects in several hand grasp configurations. To obtain high-quality hand pose annotations from real sequences, we used our own mo-cap system, which automatically infers the location of each of the 21 joints of the hand via six magnetic sensors on the fingertips and the inverse kinematics of a hand model. To the best of our knowledge, this is the first benchmark of RGB-D hand action sequences with 3D hand poses. Additionally, we recorded the 6D object poses (i.e., 3D rotations and locations) and provide 3D object models for a subset of hand-object interaction sequences. We present extensive experimental evaluations of RGB-D and pose-based action recognition with 18 baseline and state-of-the-art methods. We measure the impact of using appearance features, poses, and their combinations, and evaluate different training/testing protocols, including cross-person splits. Finally, we assess how ready current hand pose estimation is when hands are severely occluded by objects in egocentric views, and its influence on action recognition. The results show clear benefits of using hand pose as a cue for action recognition compared to other data modalities. Our dataset and experiments can be of interest to the communities of 6D object pose estimation, robotics, and 3D hand pose estimation, as well as action recognition.
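To make the annotation format concrete, the sketch below shows one plausible way to represent a 21-joint hand pose (one wrist joint plus four joints per finger) and to pull out the five fingertip locations, which are the points a fingertip-mounted magnetic sensor would observe most directly. The joint naming and ordering here are assumptions for illustration; the benchmark's actual annotation layout may differ.

```python
import numpy as np

# Hypothetical layout for a 21-joint hand skeleton:
# 1 wrist joint + 4 joints (MCP, PIP, DIP, TIP) per finger x 5 fingers.
# The ordering used by the actual benchmark may differ.
FINGERS = ["thumb", "index", "middle", "ring", "pinky"]
JOINT_NAMES = ["wrist"] + [f"{finger}_{part}"
                           for finger in FINGERS
                           for part in ("mcp", "pip", "dip", "tip")]

def pose_to_dict(pose_63):
    """Reshape a flat 63-vector (21 joints x 3D) into named 3D joints."""
    joints = np.asarray(pose_63, dtype=float).reshape(21, 3)
    return dict(zip(JOINT_NAMES, joints))

def fingertip_positions(pose_63):
    """Extract the five fingertip locations from a flat pose vector."""
    joints = pose_to_dict(pose_63)
    return {finger: joints[f"{finger}_tip"] for finger in FINGERS}
```

With a representation like this, an inverse-kinematics fit would adjust the hand model's joint angles so that the predicted fingertip positions match the six sensor readings.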


