Representations of language in a model of visually grounded speech signal

02/07/2017
by Grzegorz Chrupała, et al.

We present a visually grounded model of speech perception which projects spoken utterances and images to a joint semantic space. We use a multi-layer recurrent highway network to model the temporal nature of spoken speech, and show that it learns to extract both form and meaning-based linguistic knowledge from the input signal. We carry out an in-depth analysis of the representations used by different components of the trained model and show that encoding of semantic aspects tends to become richer as we go up the hierarchy of layers, whereas encoding of form-related aspects of the language input tends to initially increase and then plateau or decrease.
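
The abstract describes an architecture that maps spoken utterances and images into a joint semantic space, with a multi-layer recurrent network over the speech signal. The sketch below (PyTorch) illustrates the general shape of such a model; it is an illustration under stated assumptions rather than the authors' released code: stacked GRU layers stand in for the paper's recurrent highway network, and temporal mean pooling and the margin-based ranking loss over in-batch negatives are common choices for joint-embedding training, not details taken from the abstract. All dimensions and names are illustrative.

```python
# Minimal sketch of a joint speech-image embedding model (assumed PyTorch
# implementation, not the authors' code). Stacked GRU layers stand in for
# the paper's multi-layer recurrent highway network.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpeechEncoder(nn.Module):
    """Encodes a sequence of acoustic features (e.g. MFCC frames) into the joint space."""

    def __init__(self, feat_dim=13, hidden_dim=512, embed_dim=512, num_layers=4):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden_dim, num_layers=num_layers, batch_first=True)
        self.proj = nn.Linear(hidden_dim, embed_dim)

    def forward(self, feats):                        # feats: (batch, time, feat_dim)
        states, _ = self.rnn(feats)                  # (batch, time, hidden_dim)
        pooled = states.mean(dim=1)                  # temporal mean pooling (one common choice)
        return F.normalize(self.proj(pooled), dim=-1)


class ImageEncoder(nn.Module):
    """Projects precomputed image features (e.g. CNN activations) into the joint space."""

    def __init__(self, img_dim=4096, embed_dim=512):
        super().__init__()
        self.proj = nn.Linear(img_dim, embed_dim)

    def forward(self, img_feats):                    # img_feats: (batch, img_dim)
        return F.normalize(self.proj(img_feats), dim=-1)


def ranking_loss(speech_emb, image_emb, margin=0.2):
    """Margin-based ranking loss over in-batch negatives (a common objective, assumed here)."""
    scores = speech_emb @ image_emb.t()              # cosine similarities, shape (batch, batch)
    pos = scores.diag().unsqueeze(1)                 # similarity of each matched pair
    cost_s = (margin + scores - pos).clamp(min=0)    # speech anchors vs. negative images
    cost_i = (margin + scores - pos.t()).clamp(min=0)  # image anchors vs. negative speech
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    return cost_s.masked_fill(mask, 0).mean() + cost_i.masked_fill(mask, 0).mean()
```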

Related Research

06/12/2017  Encoding of phonology in a recurrent neural model of grounded speech
We study the representation and encoding of phonemes in a recurrent neur...

09/09/2019  Language learning using Speech to Image retrieval
Humans learn language by interaction with their environment and listenin...

04/27/2021  Visually grounded models of spoken language: A survey of datasets, architectures and evaluation techniques
This survey provides an overview of the evolution of visually grounded m...

06/01/2020  A Neural Network Model of Lexical Competition during Infant Spoken Word Recognition
Visual world studies show that upon hearing a word in a target-absent vi...

02/25/2022  Learning English with Peppa Pig
Attempts to computationally simulate the acquisition of spoken language ...

05/05/2021  ADAM: A Sandbox for Implementing Language Learning
We present ADAM, a software system for designing and running child langu...

Code Repositories

visually-grounded-speech
Representations of language in a model of visually grounded speech signal.

