Vision-Based American Sign Language Classification Approach via Deep Learning

04/08/2022
by   Nelly Elsayed, et al.

Hearing impairment is the partial or total loss of hearing, a disability that creates significant barriers to communication with other people in society. American Sign Language (ASL) is one of the sign languages most commonly used by hearing-impaired communities to communicate with each other. In this paper, we propose a simple deep learning model that classifies American Sign Language letters, as a step toward removing disability-related communication barriers.
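The paper does not detail its architecture in this abstract, but a letter classifier of this kind is typically a small convolutional network. The following is a minimal, illustrative numpy sketch of such a forward pass (conv, ReLU, max-pool, dense, softmax), not the authors' model: the input size (28x28 grayscale), filter count, and the 24-class output (static ASL letters, since J and Z involve motion) are all assumptions, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernels):
    """Valid 2D convolution: x (H, W), kernels (K, kH, kW) -> (K, H-kH+1, W-kW+1)."""
    K, kH, kW = kernels.shape
    H, W = x.shape
    out = np.empty((K, H - kH + 1, W - kW + 1))
    for k in range(K):
        for i in range(H - kH + 1):
            for j in range(W - kW + 1):
                out[k, i, j] = np.sum(x[i:i + kH, j:j + kW] * kernels[k])
    return out

def max_pool(x, size=2):
    """Non-overlapping 2x2 max pooling over each feature map."""
    K, H, W = x.shape
    return x[:, :H - H % size, :W - W % size] \
        .reshape(K, H // size, size, W // size, size).max(axis=(2, 4))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical sizes: a 28x28 grayscale hand image, 8 random 3x3 filters,
# and 24 output classes (static ASL letters, excluding J and Z).
image = rng.random((28, 28))
kernels = rng.standard_normal((8, 3, 3)) * 0.1
W_dense = rng.standard_normal((24, 8 * 13 * 13)) * 0.01
b_dense = np.zeros(24)

# conv -> ReLU -> pool -> flatten -> dense -> softmax
features = max_pool(np.maximum(conv2d(image, kernels), 0.0))  # (8, 13, 13)
logits = W_dense @ features.ravel() + b_dense
probs = softmax(logits)

pred = int(np.argmax(probs))
print("predicted class index:", pred)
```

With trained weights in place of the random ones, `probs` would give a probability over letter classes for each input hand image; the untrained sketch only demonstrates the data flow and shapes.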

Related research

08/14/2022
BDSL 49: A Comprehensive Dataset of Bangla Sign Language
Language is a method by which individuals express their thoughts. Each l...

03/22/2018
Training a Neural Network for the Recognition of Sign Language Images Captured with Depth Sensors
Due to the growth of the population with hearing problems, devices have ...

11/27/2019
Towards improving the e-learning experience for deaf students: e-LUX
Deaf people are more heavily affected by the digital divide than many wo...

10/18/2017
Using Deep Convolutional Networks for Gesture Recognition in American Sign Language
In the realm of multimodal communication, sign language is, and continue...

01/06/2022
ASL-Skeleton3D and ASL-Phono: Two Novel Datasets for the American Sign Language
Sign language is an essential resource enabling access to communication ...

10/12/2017
Sign-Constrained Regularized Loss Minimization
In practical analysis, domain knowledge about analysis target has often ...

01/27/2023
Semantic Network Model for Sign Language Comprehension
In this study, the authors propose a computational cognitive model for s...
