Watch, read and lookup: learning to spot signs from multiple supervisors
The focus of this work is sign spotting - given a video of an isolated sign, our task is to identify whether and where it has been signed in a continuous, co-articulated sign language video. To achieve this sign spotting task, we train a model using multiple types of available supervision by: (1) watching existing sparsely labelled footage; (2) reading associated subtitles (readily available translations of the signed content) which provide additional weak-supervision; (3) looking up words (for which no co-articulated labelled examples are available) in visual sign language dictionaries to enable novel sign spotting. These three tasks are integrated into a unified learning framework using the principles of Noise Contrastive Estimation and Multiple Instance Learning. We validate the effectiveness of our approach on low-shot sign spotting benchmarks. In addition, we contribute a machine-readable British Sign Language (BSL) dictionary dataset of isolated signs, BSLDict, to facilitate study of this task. The dataset, models and code are available at our project page.
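The abstract describes combining weak supervision sources through Noise Contrastive Estimation and Multiple Instance Learning. The following is a minimal sketch, not the authors' released code, of how such an MIL-NCE style spotting loss could look: the continuous video is treated as a bag of temporal positions, the best-matching position scores each dictionary sign, and the sign known (e.g. from subtitles or sparse labels) to occur in the video acts as the positive against the remaining dictionary entries. All names and parameters (window scoring by per-frame features, temperature, etc.) are illustrative assumptions.

```python
# Hedged sketch of an MIL + NCE sign-spotting objective (assumed design,
# not the paper's implementation).
import torch
import torch.nn.functional as F

def mil_nce_spotting_loss(video_feats, dict_embeds, positive_idx, temperature=0.07):
    """video_feats : (T, D) embeddings of the continuous, co-articulated video.
    dict_embeds : (K, D) embeddings of isolated dictionary signs (e.g. BSLDict),
                  one of which (positive_idx) is weakly known to occur in the video.
    """
    video_feats = F.normalize(video_feats, dim=-1)
    dict_embeds = F.normalize(dict_embeds, dim=-1)

    # Similarity of every temporal position to every dictionary sign: (T, K).
    sims = video_feats @ dict_embeds.t() / temperature

    # Multiple Instance Learning: we only know the sign appears somewhere,
    # so score each dictionary entry by its best-matching temporal position.
    bag_scores = sims.max(dim=0).values  # (K,)

    # Noise Contrastive Estimation: the weakly labelled sign is the positive,
    # all other dictionary entries serve as negatives.
    target = torch.tensor([positive_idx])
    return F.cross_entropy(bag_scores.unsqueeze(0), target)

# Example with random features: T=64 clips, D=256 dims, K=100 dictionary signs.
video = torch.randn(64, 256, requires_grad=True)
dictionary = torch.randn(100, 256)
loss = mil_nce_spotting_loss(video, dictionary, positive_idx=3)
loss.backward()
print(loss.item())
```

The max over temporal positions is what lets the model localise the sign (spotting) even though the supervision only says that the sign occurs somewhere in the video.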