Gloss-free Sign Language Translation: Improving from Visual-Language Pretraining

07/27/2023
by   Benjia Zhou, et al.

Sign Language Translation (SLT) is a challenging task due to its cross-domain nature, involving the translation of visual-gestural language into text. Many previous methods employ an intermediate representation, i.e., gloss sequences, to facilitate SLT, thus transforming it into a two-stage pipeline of sign language recognition (SLR) followed by gloss-to-text translation. However, the scarcity of gloss-annotated sign language data, combined with the information bottleneck introduced by the mid-level gloss representation, has hindered further development of SLT. To address this challenge, we propose a novel Gloss-Free SLT method based on Visual-Language Pretraining (GFSLT-VLP), which improves SLT by inheriting language-oriented prior knowledge from pre-trained models, without any gloss annotation assistance. Our approach involves two stages: (i) integrating Contrastive Language-Image Pre-training (CLIP) with masked self-supervised learning to create pretext tasks that bridge the semantic gap between visual and textual representations and restore masked sentences, and (ii) constructing an end-to-end architecture with an encoder-decoder-like structure that inherits the parameters of the pre-trained Visual Encoder and Text Decoder from the first stage. The seamless combination of these designs yields a robust sign language representation and significantly improves gloss-free sign language translation. In particular, we achieve unprecedented improvements in BLEU-4 score on the PHOENIX14T dataset (>+5) and the CSL-Daily dataset (>+3) compared to state-of-the-art gloss-free SLT methods. Furthermore, our approach also achieves competitive results on PHOENIX14T when compared with most gloss-based methods. Our code is available at https://github.com/zhoubenjia/GFSLT-VLP.
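To make the two-stage recipe in the abstract concrete, the PyTorch sketch below illustrates one plausible form of the Stage-1 pretraining objective: a CLIP-style symmetric contrastive loss aligning pooled sign-video and sentence features, combined with a masked-sentence restoration loss decoded from the visual features. This is a minimal sketch, not the official GFSLT-VLP implementation; the class name VLPPretrainer, the encoder/decoder call signatures, and dimensions such as dim=512 are illustrative assumptions rather than values taken from the released code.

```python
# A minimal sketch (not the official GFSLT-VLP implementation) of a Stage-1
# pretraining objective as described above: CLIP-style contrastive alignment
# of sign-video and sentence features plus masked-sentence restoration.
# Module names, call signatures, and dimensions are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VLPPretrainer(nn.Module):
    def __init__(self, visual_encoder, text_encoder, text_decoder,
                 dim=512, vocab_size=32000, temperature=0.07):
        super().__init__()
        self.visual_encoder = visual_encoder   # sign video  -> (B, T, dim)
        self.text_encoder = text_encoder       # masked text -> (B, L, dim)
        self.text_decoder = text_decoder       # (tgt, memory) -> (B, L, dim)
        self.logit_scale = nn.Parameter(torch.tensor(1.0 / temperature).log())
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(self, videos, masked_tokens, target_tokens, pad_id=0):
        v_seq = self.visual_encoder(videos)         # (B, T, dim)
        t_seq = self.text_encoder(masked_tokens)    # (B, L, dim)
        v = F.normalize(v_seq.mean(dim=1), dim=-1)  # pooled video feature
        t = F.normalize(t_seq.mean(dim=1), dim=-1)  # pooled sentence feature

        # CLIP-style symmetric InfoNCE: matching video/sentence pairs lie on
        # the diagonal of the similarity matrix.
        logits = self.logit_scale.exp() * v @ t.t()            # (B, B)
        labels = torch.arange(logits.size(0), device=logits.device)
        loss_vlp = 0.5 * (F.cross_entropy(logits, labels) +
                          F.cross_entropy(logits.t(), labels))

        # Masked-sentence restoration: the decoder reconstructs the original
        # sentence from the masked text, conditioned on the visual features.
        dec_states = self.text_decoder(t_seq, v_seq)           # (B, L, dim)
        lm_logits = self.lm_head(dec_states)                   # (B, L, vocab)
        loss_mask = F.cross_entropy(lm_logits.transpose(1, 2),
                                    target_tokens, ignore_index=pad_id)

        return loss_vlp + loss_mask
```

Under this reading, Stage 2 would initialize an end-to-end translation model from the pretrained visual_encoder and text_decoder weights, consistent with the abstract's statement that the SLT architecture inherits the Visual Encoder and Text Decoder parameters, while the contrastive text encoder is only needed during pretraining.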

Related research

- 07/03/2022 · M-Adapter: Modality Adaptation for End-to-End Speech-to-Text Translation
  End-to-end speech-to-text translation models are often initialized with ...
- 05/22/2023 · Gloss-Free End-to-End Sign Language Translation
  In this paper, we tackle the problem of sign language translation (SLT) ...
- 05/02/2023 · SLTUNET: A Simple Unified Model for Sign Language Translation
  Despite recent successes with neural models for sign language translatio...
- 08/16/2023 · High-Fidelity Lake Extraction via Two-Stage Prompt Enhancement: Establishing a Novel Baseline and Benchmark
  The extraction of lakes from remote sensing images is a complex challeng...
- 04/11/2022 · ConSLT: A Token-level Contrastive Framework for Sign Language Translation
  Sign language translation (SLT) is an important technology that can brid...
- 05/26/2021 · Improving Sign Language Translation with Monolingual Data by Sign Back-Translation
  Despite existing pioneering works on sign language translation (SLT), th...
- 02/02/2020 · Neural Sign Language Translation by Learning Tokenization
  Sign Language Translation has attained considerable success recently, ra...
