Context Matters: Self-Attention for Sign Language Recognition

01/12/2021
by   Fares Ben Slimane, et al.

This paper proposes an attentional network for the task of Continuous Sign Language Recognition. The proposed approach exploits co-dependent streams of data to model the sign language modalities. These different channels of information can share a complex temporal structure with one another. For that reason, we apply attention to synchronize the streams and help capture the entangled dependencies between the different sign language components. Even though Sign Language is multi-channel, handshapes represent the central entities in sign interpretation: seeing a handshape in its correct context defines the meaning of a sign. Taking that into account, we utilize the attention mechanism to efficiently aggregate the hand features with their appropriate spatio-temporal context for better sign recognition. We found that, by doing so, the model is able to identify the essential Sign Language components that revolve around the dominant hand and the face areas. We test our model on the benchmark dataset RWTH-PHOENIX-Weather 2014, yielding competitive results.
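To illustrate the idea of aggregating hand features with their spatio-temporal context via attention, here is a minimal PyTorch sketch, not the authors' released code: the hand stream provides the queries, and full-frame context features provide the keys and values. The module name, feature dimensions, and the choice of nn.MultiheadAttention are assumptions made for illustration only.

```python
# Illustrative sketch (hypothetical names): hand-crop features attend over
# full-frame spatio-temporal context features, so each hand representation
# is fused with the context that disambiguates it.
import torch
import torch.nn as nn


class HandContextAttention(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, hand_feats: torch.Tensor, context_feats: torch.Tensor) -> torch.Tensor:
        # hand_feats:    (batch, T, d_model) per-frame dominant-hand features
        # context_feats: (batch, T, d_model) per-frame full-frame context features
        attended, _ = self.attn(query=hand_feats, key=context_feats, value=context_feats)
        # Residual connection keeps the original hand representation,
        # enriched by whatever context the attention found relevant.
        return self.norm(hand_feats + attended)


if __name__ == "__main__":
    # Toy usage: 2 videos, 16 frames, 512-dim features per stream.
    hands = torch.randn(2, 16, 512)
    context = torch.randn(2, 16, 512)
    fused = HandContextAttention()(hands, context)
    print(fused.shape)  # torch.Size([2, 16, 512])
```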

Related research

12/06/2021
Skeletal Graph Self-Attention: Embedding a Skeleton Inductive Bias into Sign Language Production
Recent approaches to Sign Language Production (SLP) have adopted spoken ...

10/09/2016
Spatial Relationship Based Features for Indian Sign Language Recognition
In this paper, the task of recognizing signs made by hearing impaired pe...

10/08/2022
ArabSign: A Multi-modality Dataset and Benchmark for Continuous Arabic Sign Language Recognition
Sign language recognition has attracted the interest of researchers in r...

10/03/2022
Hierarchical I3D for Sign Spotting
Most of the vision-based sign language research to date has focused on I...

01/12/2010
A Topological derivative based image segmentation for sign language recognition system using isotropic filter
The need of sign language is increasing radically especially to hearing ...

08/22/2020
Quantitative Survey of the State of the Art in Sign Language Recognition
This work presents a meta study covering around 300 published sign langu...

07/27/2021
PiSLTRc: Position-informed Sign Language Transformer with Content-aware Convolution
Since the superiority of Transformer in learning long-term dependency, t...
