American Sign Language Alphabet Recognition using Deep Learning

05/14/2019
by Nikhil Kasukurthi, et al.

Tremendous headway has been made in 3D hand pose estimation, but the 3D depth cameras it relies on are usually inaccessible. We propose a model that recognizes the American Sign Language alphabet from plain RGB images. The training images were resized and pre-processed before being fed to the deep neural network. The model uses a SqueezeNet architecture so that it can run on mobile devices, and achieves an accuracy of 83.29%.
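As a rough illustration of the pipeline described above, the sketch below resizes and normalizes RGB images and fine-tunes a SqueezeNet classifier on ASL alphabet images. The framework choice (PyTorch/torchvision), the 24-class output (static letters, excluding J and Z, which involve motion), the dataset folder layout, and all hyperparameters are assumptions made for illustration and are not taken from the paper.

```python
# Minimal sketch: resize/pre-process RGB images and fine-tune SqueezeNet
# for ASL alphabet classification. All names and settings below are
# illustrative assumptions, not the authors' published configuration.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 24  # assumed: static ASL letters (J and Z require motion)

# Resize and normalize the RGB training images.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: one sub-directory per letter.
train_data = datasets.ImageFolder("asl_alphabet/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained SqueezeNet and swap the final 1x1
# convolution so the classifier outputs NUM_CLASSES scores.
model = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
model.classifier[1] = nn.Conv2d(512, NUM_CLASSES, kernel_size=1)
model.num_classes = NUM_CLASSES

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training pass over the data (a real run would use multiple epochs
# and a held-out validation split).
model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

SqueezeNet is a natural fit for the stated goal: its small parameter count (roughly 1.2 million for squeezenet1_1) keeps the exported model light enough for on-device inference on mobile hardware.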
