Transferring Cross-domain Knowledge for Video Sign Language Recognition

03/08/2020
by   Dongxu Li, et al.

Word-level sign language recognition (WSLR) is a fundamental task in sign language interpretation: it requires models to recognize isolated sign words from videos. However, annotating WSLR data requires expert knowledge, which limits the acquisition of WSLR datasets. In contrast, abundant subtitled sign news videos are available on the internet. Since these videos have no word-level annotations and exhibit a large domain gap from isolated signs, they cannot be used directly to train WSLR models. We observe that, despite this large domain gap, isolated and news signs share the same visual concepts, such as hand gestures and body movements. Motivated by this observation, we propose a novel method that learns domain-invariant visual concepts and strengthens WSLR models by transferring knowledge from subtitled news signs to them. To this end, we extract news signs using a base WSLR model and then design a classifier jointly trained on news and isolated signs to coarsely align the features of these two domains. To learn domain-invariant features within each class and suppress domain-specific features, our method further resorts to an external memory that stores the class centroids of the aligned news signs. We then design a temporal attention based on the learnt descriptor to improve recognition performance. Experimental results on standard WSLR datasets show that our method significantly outperforms previous state-of-the-art methods. We also demonstrate the effectiveness of our method on automatically localizing signs in sign news videos, achieving an AP@0.5 of 28.1.
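The abstract describes an external memory of class centroids, built from coarsely aligned news-sign features, that drives a temporal attention over isolated-sign frames. Below is a minimal PyTorch sketch of that idea; all module and variable names (e.g. `CentroidMemoryAttention`, `update_memory`) are illustrative assumptions, not the authors' implementation.

```python
# Sketch: an external memory of per-class centroids (from aligned news-sign
# features) is soft-read to form a descriptor, which then weights the frames
# of an isolated-sign clip via temporal attention. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CentroidMemoryAttention(nn.Module):
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        # External memory: one centroid per sign class, estimated from
        # news-sign features and updated by a moving average.
        self.register_buffer("memory", torch.zeros(num_classes, feat_dim))
        # Projection used to score each frame against the retrieved descriptor.
        self.query_proj = nn.Linear(feat_dim, feat_dim)

    @torch.no_grad()
    def update_memory(self, feats, labels, momentum: float = 0.9):
        """Update class centroids with a batch of aligned news-sign features."""
        for c in labels.unique():
            class_mean = feats[labels == c].mean(dim=0)
            self.memory[c] = momentum * self.memory[c] + (1 - momentum) * class_mean

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        """frame_feats: (B, T, D) per-frame features of an isolated-sign clip."""
        # Soft-read the memory with the clip's average feature
        # (a simple stand-in for the paper's learnt descriptor).
        clip_feat = frame_feats.mean(dim=1)                        # (B, D)
        read_weights = F.softmax(clip_feat @ self.memory.t(), -1)  # (B, C)
        descriptor = read_weights @ self.memory                    # (B, D)

        # Temporal attention: score each frame against the descriptor,
        # then pool the frames with the resulting weights.
        queries = self.query_proj(frame_feats)                     # (B, T, D)
        scores = torch.einsum("btd,bd->bt", queries, descriptor)
        attn = F.softmax(scores / frame_feats.size(-1) ** 0.5, dim=1)
        return torch.einsum("bt,btd->bd", attn, frame_feats)       # (B, D)
```

Keeping the centroids in a buffer (rather than as trainable parameters) reflects the intuition stated above: the memory summarizes domain-invariant, per-class evidence from news signs, while the attention decides which frames of an isolated sign match that evidence.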


