
Boosting Continuous Sign Language Recognition via Cross Modality Augmentation

by   Junfu Pu, et al.

Continuous sign language recognition (SLR) deals with unaligned video-text pairs and uses the word error rate (WER), i.e., edit distance, as its main evaluation metric. Since WER is not differentiable, the learning model is usually optimized instead with the connectionist temporal classification (CTC) loss, which maximizes the posterior probability over sequential alignments. Because of this optimization gap, the predicted sentence with the highest decoding probability may not be the best choice under the WER metric. To tackle this issue, we propose a novel architecture with cross modality augmentation. Specifically, we first augment cross-modal data by simulating the calculation procedure of WER, i.e., applying substitution, deletion and insertion to both the text label and its corresponding video. With these real and generated pseudo video-text pairs, we propose multiple loss terms to minimize the cross modality distance between a video and its ground truth label, and to make the network distinguish real from pseudo modalities. The proposed framework can be easily extended to other existing CTC based continuous SLR architectures. Extensive experiments on two continuous SLR benchmarks, i.e., RWTH-PHOENIX-Weather and CSL, validate the effectiveness of our proposed method.
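The abstract hinges on two computable ideas: WER as a word-level edit distance (substitution, deletion, insertion), and pseudo-label generation by simulating those same three edit operations on a gloss sequence. The sketch below is illustrative only, assuming a standard dynamic-programming edit distance and a hypothetical `perturb` helper for the label side of the augmentation (the paper also edits the corresponding video segments, which is omitted here):

```python
import random

def wer(ref, hyp):
    """Word error rate: Levenshtein edit distance over words,
    normalized by the reference length."""
    r, h = ref.split(), hyp.split()
    # d[i][j] = min edits turning the first i reference words into the first j hypothesis words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(h) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])  # substitution (or match)
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)  # deletion, insertion
    return d[len(r)][len(h)] / len(r)

def perturb(label, vocab, rng):
    """Hypothetical pseudo-label generator: apply one random WER-style
    edit (substitution, deletion, or insertion) to a gloss sequence."""
    out = list(label)
    op = rng.choice(["sub", "del", "ins"])
    i = rng.randrange(len(out))
    if op == "sub":
        out[i] = rng.choice(vocab)
    elif op == "del":
        del out[i]
    else:
        out.insert(i, rng.choice(vocab))
    return out
```

A pseudo pair produced by `perturb` is exactly one edit away from the ground truth, so its WER against the original label is nonzero; the contrastive loss terms described above can then push the network to separate such pairs from real ones.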




Related Research

Self-Sufficient Framework for Continuous Sign Language Recognition

The goal of this work is to develop self-sufficient framework for Contin...

XKD: Cross-modal Knowledge Distillation with Domain Alignment for Video Representation Learning

We present XKD, a novel self-supervised framework to learn meaningful re...

Sign Language Video Retrieval with Free-Form Textual Queries

Systems that can efficiently search collections of sign language videos ...

Cross-modal Image Retrieval with Deep Mutual Information Maximization

In this paper, we study the cross-modal image retrieval, where the input...

Universal Weighting Metric Learning for Cross-Modal Matching

Cross-modal matching has been a highlighted research topic in both visio...

Video-based Sign Language Recognition without Temporal Segmentation

Millions of hearing impaired people around the world routinely use some ...

Disentangled Representation Learning for Text-Video Retrieval

Cross-modality interaction is a critical component in Text-Video Retriev...