Keyword localisation in untranscribed speech using visually grounded speech models

02/02/2022
by Kayode Olaleye, et al.

Keyword localisation is the task of finding where in a speech utterance a given query keyword occurs. We investigate to what extent keyword localisation is possible using a visually grounded speech (VGS) model. VGS models are trained on unlabelled images paired with spoken captions. These models are therefore self-supervised: trained without any explicit textual label or location information. To obtain training targets, we first tag training images with soft text labels using a pretrained visual classifier with a fixed vocabulary. This enables a VGS model to predict the presence of a written keyword in an utterance, but not its location. We consider four ways to equip VGS models with localisation capabilities. Two of these, a saliency approach and input masking, can be applied to an arbitrary prediction model after training, while the other two, attention and a score aggregation approach, are incorporated directly into the structure of the model. Masking-based localisation gives some of the best reported localisation scores from a VGS model: an accuracy of 57% when the system knows that a keyword occurs in an utterance and must predict its location. In a setting where localisation is performed after detection, it achieves an F1 of 25%, and in a setting where a keyword spotting ranking pass is first performed, a localisation P@10 of 32%. While these scores are modest compared to an idealised setting with unordered bag-of-words supervision (from transcriptions), these models receive no textual or location supervision at all. Further analyses show that the models are limited by the first detection or ranking pass. Moreover, individual keyword localisation performance is correlated with the tagging performance of the visual classifier. We also show qualitatively how and where semantic mistakes occur, e.g. the model locates "surfer" when queried with "ocean".
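To make the masking approach concrete, below is a minimal sketch of how input masking can localise a keyword using nothing but an utterance-level detector: slide a mask over the speech frames and pick the segment whose removal most reduces the keyword's detection score. Everything here is illustrative, not the paper's exact setup; `detect_fn` stands in for the VGS model's utterance-level keyword detector, and the zero-valued mask, window, and stride are assumed defaults.

```python
import numpy as np

def masking_localisation(utterance, keyword_index, detect_fn,
                         window=30, stride=10):
    """Localise a keyword by input masking (illustrative sketch).

    utterance: array of shape (num_frames, feat_dim).
    detect_fn: maps an input of that shape to per-keyword detection
    probabilities for the whole utterance; hypothetical interface.
    """
    num_frames = len(utterance)
    base_score = detect_fn(utterance)[keyword_index]

    best_start, best_drop = 0, -np.inf
    for start in range(0, max(1, num_frames - window + 1), stride):
        masked = utterance.copy()
        masked[start:start + window] = 0.0  # silence out one segment
        drop = base_score - detect_fn(masked)[keyword_index]
        if drop > best_drop:
            best_start, best_drop = start, drop

    # The keyword is taken to lie in the segment whose masking
    # hurts the detection score the most.
    return best_start, best_start + window
```

The saliency approach is similar in spirit, scoring input regions post hoc for an already-trained model, whereas attention and score aggregation instead build frame-level keyword scores directly into the network's structure.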


