
Learning to Recognise Words using Visually Grounded Speech

by Sebastiaan Scholten, et al.
Radboud Universiteit
Delft University of Technology

We investigated word recognition in a Visually Grounded Speech model. The model was trained on pairs of images and spoken captions to create visually grounded embeddings that can be used for speech-to-image retrieval and vice versa. We investigate whether such a model can be used to recognise words by embedding isolated words and using them to retrieve images of their visual referents. We examine the time course of word recognition using a gating paradigm, and perform a statistical analysis to test whether well-known word competition effects in human speech processing also influence word recognition in the model. Our experiments show that the model is able to recognise words, and the gating paradigm reveals that words can be recognised from partial input as well, and that recognition is negatively influenced by competition from the word-initial cohort.
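The retrieval step described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual model or data: it assumes spoken words and images have already been mapped into a shared embedding space, uses random vectors as stand-ins for those embeddings, and scores candidates by cosine similarity, counting a word as recognised when an image of its referent ranks highest.

```python
# Hypothetical sketch of word recognition via speech-to-image retrieval.
# Embeddings are random stand-ins, not outputs of the trained model.
import numpy as np

rng = np.random.default_rng(0)

def cosine_similarity(query, candidates):
    """Cosine similarity between a query vector and each row of candidates."""
    query = query / np.linalg.norm(query)
    candidates = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    return candidates @ query

# Stand-in image embeddings: 5 images in a 32-dimensional shared space.
image_embeddings = rng.standard_normal((5, 32))

# Stand-in embedding of an isolated spoken word. For illustration it is a
# noisy copy of image 2's embedding, so retrieval of image 2 should succeed.
word_embedding = image_embeddings[2] + 0.1 * rng.standard_normal(32)

scores = cosine_similarity(word_embedding, image_embeddings)
best = int(np.argmax(scores))
print(best)  # index of the retrieved image
```

In the gating paradigm, the same retrieval would be repeated with progressively longer truncated prefixes of the spoken word, tracking at which gate the correct referent first wins.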

