A Neural Network Architecture for Learning Word-Referent Associations in Multiple Contexts

05/20/2019
by Hansenclever F. Bassani, et al.

This article proposes a biologically inspired neurocomputational architecture that learns associations between words and referents in different contexts, drawing on evidence from the Psycholinguistics and Neurolinguistics literature. The multi-layered architecture takes as input raw images of objects (referents) and streams of the phonemes of words (labels), builds adequate representations, recognizes the current context, and associates labels with referents incrementally. It does so by employing a Self-Organizing Map that creates new association nodes (prototypes) as needed, adjusts existing prototypes to better represent the input stimuli, and removes prototypes that become obsolete or unused. The model takes the current context into account to retrieve the correct meaning of words with multiple meanings. Simulations show that the model can handle up to 78 learning situations and approximates well the human learning rates reported by three different authors in five Cross-Situational Word Learning experiments, also displaying similar learning patterns across the different learning conditions.
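The growing-prototype mechanism described above can be pictured with a small sketch. The class below is a hypothetical simplification, not the authors' implementation: it assumes word and referent stimuli arrive as fixed-length feature vectors, and it collapses the paper's multi-layered architecture and context recognition into a single map governed by an assumed vigilance threshold, learning rate, and idle-age limit.

```python
import numpy as np

class GrowingAssociativeMap:
    """Toy growing, SOM-like associative memory (illustrative sketch only)."""

    def __init__(self, vigilance=0.5, lr=0.1, max_idle=50):
        self.vigilance = vigilance   # distance threshold for creating a new prototype
        self.lr = lr                 # learning rate for adjusting the winning prototype
        self.max_idle = max_idle     # presentations of inactivity before pruning
        self.prototypes = []         # each entry: [joint word-referent vector, idle counter]

    def present(self, word_vec, referent_vec):
        """Present one word-referent pair; return the index of the winning prototype."""
        # remove prototypes that have stayed unused for too long
        self.prototypes = [n for n in self.prototypes if n[1] <= self.max_idle]

        x = np.concatenate([word_vec, referent_vec])
        if not self.prototypes:
            self.prototypes.append([x.copy(), 0])
            return 0

        dists = [np.linalg.norm(x - p) for p, _ in self.prototypes]
        winner = int(np.argmin(dists))
        if dists[winner] > self.vigilance:
            # no existing prototype matches well enough: create a new association node
            self.prototypes.append([x.copy(), 0])
            winner = len(self.prototypes) - 1
        else:
            # adjust the winning prototype toward the current stimulus
            p = self.prototypes[winner][0]
            self.prototypes[winner][0] = p + self.lr * (x - p)

        # age bookkeeping: reset the winner's idle counter, age all other prototypes
        for i, node in enumerate(self.prototypes):
            node[1] = 0 if i == winner else node[1] + 1
        return winner
```

As a usage example under the same assumptions, `GrowingAssociativeMap(vigilance=0.8).present(phoneme_embedding, image_embedding)` would either adjust the closest existing word-referent prototype or allocate a new one, mirroring the incremental node creation, adjustment, and pruning described in the abstract.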

Related research

11/14/2015
Learning to Represent Words in Context with Multilingual Supervision
We present a neural network architecture based on bidirectional LSTMs to...

03/12/2020
Learning word-referent mappings and concepts from raw inputs
How do children learn correspondences between the language and the world...

11/27/2017
Language Bootstrapping: Learning Word Meanings From Perception-Action Association
We address the problem of bootstrapping language acquisition for an arti...

03/17/2016
Self-organization of vocabularies under different interaction orders
Traditionally, the formation of vocabularies has been studied by agent-b...

08/04/2021
An analytical study of content and contexts of keywords on physics
This paper analysed author-assigned and title keywords into constituent ...

07/19/2019
An Unsupervised Character-Aware Neural Approach to Word and Context Representation Learning
In the last few years, neural networks have been intensively used to dev...

07/07/2016
Representing Verbs with Rich Contexts: an Evaluation on Verb Similarity
Several studies on sentence processing suggest that the mental lexicon k...
