Language Bootstrapping: Learning Word Meanings From Perception-Action Association

11/27/2017
by Giampiero Salvi, et al.

We address the problem of bootstrapping language acquisition for an artificial system, similarly to what is observed in experiments with human infants. Our method works by associating meanings with words in manipulation tasks, as a robot interacts with objects and listens to verbal descriptions of the interactions. The model is based on an affordance network, i.e., a mapping between robot actions, robot perceptions, and the perceived effects of these actions upon objects. We extend the affordance model to incorporate spoken words, which allows us to ground the verbal symbols in the execution of actions and the perception of the environment. The model takes verbal descriptions of a task as input and uses temporal co-occurrence to create links between speech utterances and the involved objects, actions, and effects. We show that the robot is able to form useful word-to-meaning associations, even without considering grammatical structure in the learning process and in the presence of speech recognition errors. These word-to-meaning associations are embedded in the robot's own understanding of its actions; thus, they can be used directly to instruct the robot to perform tasks, and they also allow context to be incorporated into the speech recognition task. We believe these encouraging results suggest that our approach may afford robots the capacity to acquire language descriptors in their operating environment, as well as shed some light on how this challenging process unfolds in human infants.
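The core idea of linking words to perceptual symbols through temporal co-occurrence can be illustrated with a minimal sketch. This is not the authors' affordance-network model (which is probabilistic and learned from robot experience); it is a simplified co-occurrence counter, with hypothetical episode data, that associates each heard word with the action/object/effect symbol it most often appears alongside:

```python
from collections import Counter, defaultdict

def learn_associations(episodes):
    """Map each word to the meaning symbol it most frequently co-occurs with.

    episodes: list of (words, symbols) pairs, where `words` are the tokens
    heard during one interaction and `symbols` are the action/object/effect
    symbols the robot perceived in that same interaction.
    """
    word_counts = Counter()
    pair_counts = defaultdict(Counter)
    for words, symbols in episodes:
        for w in set(words):
            word_counts[w] += 1
            for s in set(symbols):
                pair_counts[w][s] += 1
    # For each word, keep the symbol maximizing P(symbol | word)
    return {
        w: max(pair_counts[w], key=lambda s: pair_counts[w][s] / word_counts[w])
        for w in pair_counts
    }

# Hypothetical interaction episodes (words heard, symbols perceived)
episodes = [
    ({"the", "robot", "taps", "ball"}, {"tap", "ball", "moves"}),
    ({"the", "robot", "taps", "box"}, {"tap", "box", "still"}),
    ({"the", "robot", "grasps", "ball"}, {"grasp", "ball", "rises"}),
]
assoc = learn_associations(episodes)
```

With these toy episodes, "taps" ends up associated with the `tap` action symbol and "ball" with the `ball` object symbol, without any grammatical analysis, mirroring the paper's observation that co-occurrence alone yields useful word-to-meaning links. Function words such as "the" co-occur with everything and therefore receive no reliable association, which is the expected behavior of a pure co-occurrence scheme.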

