Training a Neural Network for the Recognition of Sign Language Images Captured with Depth Sensors

03/22/2018
by   Rivas P. Pedro E., et al.

Due to the growth of the population with hearing problems, devices have been developed to facilitate the inclusion of deaf people in society, using technology such as vision systems as a communication tool. A solution to this problem is presented using neural networks and autoencoders to classify American Sign Language images. As a result, a classification accuracy of 99.5% and an error of 0.01684 were obtained.
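As a rough illustration of the pipeline the abstract describes, the sketch below trains a one-hidden-layer autoencoder on flattened image vectors so that a classifier could later operate on the compact latent codes. The layer sizes, learning rate, and random "depth image" data are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in for depth-sensor sign images: 64 samples, 20x20, flattened.
X = rng.random((64, 400))

n_in, n_hidden = 400, 50
W_enc = rng.normal(0.0, 0.1, (n_in, n_hidden))
b_enc = np.zeros(n_hidden)
W_dec = rng.normal(0.0, 0.1, (n_hidden, n_in))
b_dec = np.zeros(n_in)

def forward(X):
    H = sigmoid(X @ W_enc + b_enc)      # latent code (compressed features)
    X_hat = sigmoid(H @ W_dec + b_dec)  # reconstruction of the input
    return H, X_hat

def mse(X, X_hat):
    return np.mean((X - X_hat) ** 2)

_, X_hat = forward(X)
loss_before = mse(X, X_hat)

lr = 0.5
for _ in range(300):
    H, X_hat = forward(X)
    # Backpropagate the mean-squared reconstruction error.
    dX_hat = 2.0 * (X_hat - X) / X.size
    dZ_dec = dX_hat * X_hat * (1.0 - X_hat)
    dW_dec = H.T @ dZ_dec
    db_dec = dZ_dec.sum(axis=0)
    dH = dZ_dec @ W_dec.T
    dZ_enc = dH * H * (1.0 - H)
    dW_enc = X.T @ dZ_enc
    db_enc = dZ_enc.sum(axis=0)
    W_dec -= lr * dW_dec; b_dec -= lr * db_dec
    W_enc -= lr * dW_enc; b_enc -= lr * db_enc

H, X_hat = forward(X)
loss_after = mse(X, X_hat)
print(loss_before, loss_after)
```

After training, each row of `H` is a 50-dimensional code that a downstream classifier (e.g. a softmax layer over the sign classes) would consume; the reconstruction error decreases over the gradient-descent steps.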


Related research:

- 04/08/2022, Vision-Based American Sign Language Classification Approach via Deep Learning. "Hearing-impaired is the disability of partial or total hearing loss that..."
- 03/08/2022, A New 27 Class Sign Language Dataset Collected from 173 Individuals. "After the interviews, it has been comprehended that speech-impaired indi..."
- 11/16/2020, A New Dataset and Proposed Convolutional Neural Network Architecture for Classification of American Sign Language Digits. "In our interviews with people who work with speech impaired persons, we ..."
- 09/11/2018, Solving Sinhala Language Arithmetic Problems using Neural Networks. "A methodology is presented to solve Arithmetic problems in Sinhala Langu..."
- 01/06/2022, ASL-Skeleton3D and ASL-Phono: Two Novel Datasets for the American Sign Language. "Sign language is an essential resource enabling access to communication ..."
- 11/25/2019, SWift – A SignWriting improved fast transcriber. "We present SWift (SignWriting improved fast transcriber), an advanced ed..."
- 09/01/2009, A theory of intelligence: networked problem solving in animal societies. "A society's single emergent, increasing intelligence arises partly from ..."
