Egocentric Hand Detection Via Dynamic Region Growing

11/10/2017
by Shao Huang, et al.

Egocentric videos, which mainly record the activities carried out by the users of wearable cameras, have drawn much research attention in recent years. Because the recorded content is typically lengthy, a large number of ego-related applications have been developed to abstract the captured videos. As users are accustomed to interacting with target objects using their own hands, and their hands usually appear within their visual fields during the interaction, egocentric hand detection is a key step in tasks such as gesture recognition, action recognition, and social interaction understanding. In this work, we propose a dynamic region growing approach for hand region detection in egocentric videos that jointly considers hand-related motion and egocentric cues. We first determine seed regions that most likely belong to the hand by analyzing motion patterns across successive frames. The hand regions are then located by growing outward from the seed regions, according to scores computed for the adjacent superpixels. These scores are derived from four egocentric cues: contrast, location, position consistency, and appearance continuity. We also discuss how to apply the proposed method in real-life scenarios, where multiple hands appear and disappear irregularly throughout the videos. Experimental results on public datasets show that the proposed method achieves superior performance compared with state-of-the-art methods, especially in complicated scenarios.
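The core step described in the abstract is a greedy growth from motion-derived seed superpixels: a neighbouring superpixel is absorbed whenever its combined cue score is high enough. The sketch below is not the authors' implementation; it only illustrates that growth loop over a superpixel adjacency graph, and it assumes the four cues have already been fused into a single per-superpixel score (the data structures, score fusion, and threshold are all placeholders).

```python
def grow_hand_region(adjacency, seed_superpixels, cue_scores, threshold=0.5):
    """Greedy region growing over a superpixel adjacency graph.

    adjacency        : dict {superpixel_id: set of neighbouring superpixel ids}
    seed_superpixels : ids whose motion pattern marks them as likely hand regions
    cue_scores       : dict {superpixel_id: combined egocentric-cue score in [0, 1]}
                       (assumed to already fuse contrast, location, position
                       consistency and appearance continuity; the fusion rule
                       is an assumption, not the paper's exact formulation)
    threshold        : minimum score for a neighbour to be absorbed (illustrative)
    """
    hand = set(seed_superpixels)
    frontier = list(seed_superpixels)
    while frontier:
        current = frontier.pop()
        for neighbour in adjacency.get(current, ()):
            if neighbour not in hand and cue_scores.get(neighbour, 0.0) >= threshold:
                hand.add(neighbour)         # absorb the neighbour into the hand region
                frontier.append(neighbour)  # and keep growing from it
    return hand

# Toy usage: a chain of 4 superpixels where 0 is the motion-derived seed.
adjacency = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
scores = {0: 0.9, 1: 0.8, 2: 0.7, 3: 0.2}
print(grow_hand_region(adjacency, {0}, scores))  # {0, 1, 2}
```

Using a frontier stack rather than repeatedly rescanning the whole frame keeps the growth step linear in the number of superpixel adjacencies.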

Related research

Detecting Hands in Egocentric Videos: Towards Action Recognition (09/08/2017)
Recently, there has been a growing interest in analyzing human daily act...

Learning Visual Affordance Grounding from Demonstration Videos (08/12/2021)
Visual affordance grounding aims to segment all possible interaction reg...

Word-level Sign Language Recognition with Multi-stream Neural Networks Focusing on Local Regions (06/30/2021)
In recent years, Word-level Sign Language Recognition (WSLR) research ha...

Egocentric Hand Track and Object-based Human Action Recognition (05/02/2019)
Egocentric vision is an emerging field of computer vision that is charac...

A Hand Motion-guided Articulation and Segmentation Estimation (05/07/2020)
In this paper, we present a method for simultaneous articulation model e...

Action Recognition: From Static Datasets to Moving Robots (01/18/2017)
Deep learning models have achieved state-of-the-art performance in reco...

CrowdCam: Dynamic Region Segmentation (11/28/2018)
We consider the problem of segmenting dynamic regions in CrowdCam images...
