TSPNet: Hierarchical Feature Learning via Temporal Semantic Pyramid for Sign Language Translation

10/12/2020
by Dongxu Li, et al.

Sign language translation (SLT) aims to interpret sign video sequences into text-based natural language sentences. Sign videos consist of continuous sequences of sign gestures with no clear boundaries in between. Existing SLT models usually represent sign visual features in a frame-wise manner so as to avoid explicitly segmenting the videos into isolated signs. However, these methods neglect the temporal information of signs and lead to substantial ambiguity in translation. In this paper, we explore the temporal semantic structures of sign videos to learn more discriminative features. To this end, we first present a novel sign video segment representation which takes into account multiple temporal granularities, thus alleviating the need for accurate video segmentation. Taking advantage of the proposed segment representation, we develop a novel hierarchical sign video feature learning method via a temporal semantic pyramid network, called TSPNet. Specifically, TSPNet introduces an inter-scale attention to evaluate and enhance local semantic consistency of sign segments and an intra-scale attention to resolve semantic ambiguity by using non-local video context. Experiments show that our TSPNet outperforms the state-of-the-art with significant improvements on the BLEU score (from 9.58 to 13.41) and ROUGE score (from 31.80 to 34.96) on the largest commonly-used SLT dataset. Our implementation is available at https://github.com/verashira/TSPNet.
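The multi-granularity segment representation described above can be illustrated with a minimal sketch (not the authors' implementation; window sizes and stride are illustrative assumptions): the video is covered by overlapping windows at several temporal scales, so no single hard segmentation has to be chosen.

```python
# Minimal sketch of multi-granularity temporal segmentation.
# Window sizes (8, 12, 16) and stride 2 are illustrative assumptions,
# not values taken from the paper.

def multi_granularity_segments(num_frames, window_sizes=(8, 12, 16), stride=2):
    """Return, per window size, the (start, end) frame spans of
    overlapping candidate sign segments. End index is exclusive."""
    pyramid = {}
    for w in window_sizes:
        pyramid[w] = [(s, s + w) for s in range(0, num_frames - w + 1, stride)]
    return pyramid

pyramid = multi_granularity_segments(32)
# Each granularity yields overlapping candidate segments; a model such as
# TSPNet can then attend over segments within and across scales rather
# than committing to one fixed segmentation of the sign video.
```

In this sketch the inter-scale and intra-scale attention of TSPNet would operate over the features extracted from these spans; the sketch only shows how the candidate spans at multiple granularities are enumerated.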

