Sign Language Translation from Instructional Videos

04/13/2023
by Laia Tarres et al.

Advances in automatic sign language translation (SLT) into spoken languages have mostly been benchmarked on datasets of limited size and restricted domains. Our work advances the state of the art by providing the first baseline results on How2Sign, a large and broad dataset. We train a Transformer over I3D video features, using the reduced BLEU score as the reference metric for validation instead of the widely used BLEU. We report a BLEU score of 8.03 and publish the first open-source implementation of its kind to promote further advances.
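The abstract contrasts validating on a "reduced" BLEU against standard BLEU. The paper defines the exact reduction; as a rough illustration only, the sketch below implements a simplified sentence-level BLEU (add-one smoothing, brevity penalty) and a hypothetical reduced variant that drops a stopword list before scoring. The `STOPWORDS` set and the `reduced_bleu` name are assumptions for illustration, not the authors' definition; a real evaluation would use a library such as sacreBLEU.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(hypothesis, reference, max_n=4):
    """Simplified sentence-level BLEU with add-one smoothing and brevity penalty."""
    hyp, ref = hypothesis.split(), reference.split()
    if not hyp:
        return 0.0
    log_prec = 0.0
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hyp, n))
        ref_counts = Counter(ngrams(ref, n))
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        # add-one smoothing keeps the geometric mean finite for short sentences
        log_prec += math.log((overlap + 1) / (total + 1))
    brevity = min(1.0, math.exp(1 - len(ref) / len(hyp)))
    return brevity * math.exp(log_prec / max_n)

# Hypothetical filter list -- stands in for whatever reduction the paper applies.
STOPWORDS = {"the", "a", "an", "and", "to", "of"}

def reduced_bleu(hypothesis, reference):
    """BLEU computed after removing filtered words from both sides."""
    strip = lambda s: " ".join(w for w in s.split() if w.lower() not in STOPWORDS)
    return bleu(strip(hypothesis), strip(reference))
```

Validating on the filtered score focuses model selection on content words rather than frequent function words, which can otherwise dominate n-gram overlap on small, noisy SLT outputs.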


Related research

12/02/2022 | Tackling Low-Resourced Sign Language Translation: UPC at WMT-SLT 22
This paper describes the system developed at the Universitat Politècnica...

04/01/2020 | Sign Language Translation with Transformers
Sign Language Translation (SLT) first uses a Sign Language Recognition (...

10/24/2022 | Clean Text and Full-Body Transformer: Microsoft's Submission to the WMT22 Shared Task on Sign Language Translation
This paper describes Microsoft's submission to the first shared task on ...

08/18/2023 | Is context all you need? Scaling Neural Sign Language Translation to Large Domains of Discourse
Sign Language Translation (SLT) is a challenging task that aims to gener...

12/06/2022 | SignNet: Single Channel Sign Generation using Metric Embedded Learning
A true interpreting agent not only understands sign language and transla...

05/11/2023 | The First Parallel Corpora for Kurdish Sign Language
Kurdish Sign Language (KuSL) is the natural language of the Kurdish Deaf...

08/08/2023 | Gloss Alignment Using Word Embeddings
Capturing and annotating Sign language datasets is a time consuming and ...
