Sign Language Recognition Analysis using Multimodal Data

09/24/2019
by Al Amin Hosain, et al.

Voice-controlled personal and home assistants (such as the Amazon Echo and Apple Siri) are becoming increasingly popular for a variety of applications. However, the benefits of these technologies are not readily accessible to Deaf or Hard-of-Hearing (DHH) users. The objective of this study is to develop and evaluate a multimodal sign recognition system that DHH signers can use to interact with voice-controlled devices. With the advancement of depth sensors, skeletal data is increasingly used for applications such as video analysis and activity recognition. Yet, despite its similarity to the well-studied problem of human activity recognition, 3D skeleton data is rarely used for sign language recognition, because, unlike activity recognition, sign language depends heavily on hand shape patterns. In this work, we investigate the feasibility of using skeletal and RGB video data for sign language recognition with a combination of different deep learning architectures. We validate our results on GMUASL51, a large-scale American Sign Language (ASL) dataset of 13,107 samples performed by 12 users across 51 signs. We collected the dataset over six months and will release it publicly in the hope of spurring further machine learning research toward improved accessibility for digital assistants.
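The abstract does not specify the exact network architectures, so the following is only a minimal, hypothetical sketch (in PyTorch) of the kind of two-stream multimodal fusion it describes: a recurrent encoder for 3D skeleton joint sequences, a small 3D CNN for RGB clips, and concatenation of the two feature vectors before a 51-way sign classifier. All module names, layer sizes, and the 25-joint assumption are illustrative, not the authors' implementation.

    # Hypothetical two-stream fusion sketch for skeleton + RGB sign recognition.
    import torch
    import torch.nn as nn

    class SkeletonEncoder(nn.Module):
        def __init__(self, num_joints=25, hidden=128):
            super().__init__()
            # each frame is the flattened (x, y, z) coordinates of all joints
            self.gru = nn.GRU(num_joints * 3, hidden, batch_first=True)

        def forward(self, x):                  # x: (batch, frames, joints*3)
            _, h = self.gru(x)
            return h[-1]                       # (batch, hidden)

    class RGBEncoder(nn.Module):
        def __init__(self, feat=128):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),       # global spatio-temporal pooling
            )
            self.fc = nn.Linear(16, feat)

        def forward(self, x):                  # x: (batch, 3, frames, H, W)
            return self.fc(self.conv(x).flatten(1))

    class FusionSignClassifier(nn.Module):
        def __init__(self, num_classes=51):
            super().__init__()
            self.skel = SkeletonEncoder()
            self.rgb = RGBEncoder()
            self.head = nn.Linear(128 + 128, num_classes)

        def forward(self, skel_seq, rgb_clip):
            # concatenate per-modality features, then classify the sign
            feats = torch.cat([self.skel(skel_seq), self.rgb(rgb_clip)], dim=1)
            return self.head(feats)            # logits over the 51 signs

    if __name__ == "__main__":
        model = FusionSignClassifier()
        skel = torch.randn(2, 40, 25 * 3)      # 40 frames, 25 joints per frame
        clip = torch.randn(2, 3, 40, 64, 64)   # matching RGB clip
        print(model(skel, clip).shape)         # torch.Size([2, 51])

Late (feature-level) fusion is used here only because it is the simplest way to combine modalities with different encoders; the paper may well use different backbones or a different fusion strategy.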


Related research

11/05/2021  BBC-Oxford British Sign Language Dataset
In this work, we introduce the BBC-Oxford British Sign Language (BOBSL) ...

01/12/2010  A Topological derivative based image segmentation for sign language recognition system using isotropic filter
The need of sign language is increasing radically especially to hearing ...

01/07/2017  Sign Language Recognition Using Temporal Classification
Devices like the Myo armband available in the market today enable us to ...

05/14/2019  American Sign Language Alphabet Recognition using Deep Learning
Tremendous headway has been made in the field of 3D hand pose estimation...

09/29/2017  Impact of Three-Dimensional Video Scalability on Multi-View Activity Recognition using Deep Learning
Human activity recognition is one of the important research topics in co...

02/03/2022  Exploring Sub-skeleton Trajectories for Interpretable Recognition of Sign Language
Recent advances in tracking sensors and pose estimation software enable ...

12/03/2018  MS-ASL: A Large-Scale Data Set and Benchmark for Understanding American Sign Language
Computer Vision has been improved significantly in the past few decades....
