A computational model of early language acquisition from audiovisual experiences of young infants

06/24/2019
by Okko Räsänen, et al.

Earlier research has suggested that human infants might use statistical dependencies between speech and non-linguistic multimodal input to bootstrap their language learning before they know how to segment words from running speech. However, the feasibility of this hypothesis in terms of real-world infant experiences has remained unclear. This paper takes a step towards a more realistic test of the multimodal bootstrapping hypothesis by describing a neural network model that can learn word segments and their meanings from referentially ambiguous acoustic input. The model is tested on recordings of real infant-caregiver interactions, using utterance-level labels for concrete visual objects that the infant attended to while the caregiver spoke an utterance containing the object's name, and using random visual labels for utterances produced in the absence of such attention. The results show that the beginnings of lexical knowledge may indeed emerge from individually ambiguous learning scenarios. In addition, the hidden layers of the network show gradually increasing selectivity to phonetic categories as a function of layer depth, resembling models trained for phone recognition in a supervised manner.
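To make the weakly supervised setup concrete, the sketch below shows one way such a model can be trained: a convolutional encoder maps an acoustic feature sequence to an utterance-level embedding, which is then classified against the visual-object label attached to the whole utterance. This is a minimal illustration, not the authors' implementation; the layer sizes, the choice of log-mel input features, and all names (e.g., AudioVisualWordLearner) are assumptions made for the example.

```python
# Minimal sketch of weakly supervised audiovisual word learning.
# NOT the authors' exact architecture: sizes, features, and names are assumed.
import torch
import torch.nn as nn

class AudioVisualWordLearner(nn.Module):
    def __init__(self, n_mels: int = 40, n_objects: int = 10):
        super().__init__()
        # 1-D convolutions over time; input channels span the mel-frequency axis.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_mels, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=5, padding=2), nn.ReLU(),
        )
        # Utterance-level prediction: pool over time, then classify which
        # visual object was attended (the weak, referentially ambiguous label).
        self.classifier = nn.Linear(256, n_objects)

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        # mel: (batch, n_mels, time)
        h = self.encoder(mel)            # (batch, 256, time)
        pooled = h.mean(dim=-1)          # temporal mean pooling -> (batch, 256)
        return self.classifier(pooled)   # utterance-level object logits

# One training step: a single cross-entropy update per (utterance, label) pair.
model = AudioVisualWordLearner()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

mel = torch.randn(8, 40, 300)           # batch of 8 utterances, 300 frames each
labels = torch.randint(0, 10, (8,))     # attended-object labels (may be noisy)
loss = loss_fn(model(mel), labels)
opt.zero_grad()
loss.backward()
opt.step()
```

The key property of this setup is that no word boundaries or word-label alignments are given: supervision arrives only once per utterance, and only sometimes refers to anything actually said, so lexical structure must emerge cross-situationally.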

research · 09/29/2021
Can phones, syllables, and words emerge as side-products of cross-situational audiovisual learning? – A computational investigation
Decades of research have studied how language-learning infants learn to d...

research · 10/21/2019
Disambiguating Speech Intention via Audio-Text Co-attention Framework: A Case of Prosody-semantics Interface
Understanding the intention of an utterance is challenging for some pros...

research · 02/02/2018
Order matters: Distributional properties of speech to young children bootstraps learning of semantic representations
Some researchers claim that language acquisition is critically dependent...

research · 06/30/2023
What do self-supervised speech models know about words?
Many self-supervised speech models (S3Ms) have been introduced over the ...

research · 05/14/2023
Self-supervised Neural Factor Analysis for Disentangling Utterance-level Speech Representations
Self-supervised learning (SSL) speech models such as wav2vec and HuBERT ...

research · 06/04/2020
A Computational Model of Early Word Learning from the Infant's Point of View
Human infants have the remarkable ability to learn the associations betw...

research · 04/06/2021
An Initial Investigation for Detecting Partially Spoofed Audio
All existing databases of spoofed speech contain attack data that is spo...
