Embodying Pre-Trained Word Embeddings Through Robot Actions

04/17/2021
by   Minori Toyoda, et al.

We propose a neural network model for acquiring a grounded representation of robot actions and their linguistic descriptions. Properly responding to various linguistic expressions, including polysemous words, is an important ability for robots that interact with people through linguistic dialogue. Previous studies have shown that robots can use words not included in action-description paired datasets by exploiting pre-trained word embeddings. However, word embeddings trained under the distributional hypothesis are not grounded, as they are derived purely from a text corpus. In this letter, we transform pre-trained word embeddings into embodied ones by using the robot's sensory-motor experiences. We extend a bidirectional translation model between actions and descriptions by incorporating non-linear layers that retrofit the word embeddings. By training the retrofit layer and the bidirectional translation model alternately, our proposed model adapts the pre-trained word embeddings to a paired action-description dataset. Our results demonstrate that the embeddings of synonyms form a semantic cluster that reflects the robot's experiences (actions and environments). These embeddings allow the robot to properly generate actions from unseen words that are not paired with actions in the dataset.
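The alternating training scheme described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' implementation): a non-linear retrofit layer maps frozen pre-trained embeddings to embodied ones, and it is trained in alternation with a toy stand-in for the bidirectional description-action translation model. All module names, dimensionalities, and the random stand-in data are assumptions for illustration only.

```python
import torch
from torch import nn

EMB_DIM = 300  # e.g. word2vec/GloVe dimensionality (assumed)
ACT_DIM = 32   # dimensionality of an encoded robot action (assumed)

class RetrofitLayer(nn.Module):
    """Non-linear transform of pre-trained embeddings into embodied ones."""
    def __init__(self, dim=EMB_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim)
        )

    def forward(self, w):
        return self.net(w)

class TranslationModel(nn.Module):
    """Toy stand-in for the bidirectional description<->action model."""
    def __init__(self):
        super().__init__()
        self.desc_to_act = nn.Linear(EMB_DIM, ACT_DIM)
        self.act_to_desc = nn.Linear(ACT_DIM, EMB_DIM)

    def forward(self, emb):
        act = self.desc_to_act(emb)
        return act, self.act_to_desc(act)

retrofit, trans = RetrofitLayer(), TranslationModel()
opt_r = torch.optim.Adam(retrofit.parameters(), lr=1e-3)
opt_t = torch.optim.Adam(trans.parameters(), lr=1e-3)

words = torch.randn(8, EMB_DIM)    # stand-in pre-trained embeddings
actions = torch.randn(8, ACT_DIM)  # stand-in paired action encodings

for step in range(10):
    # Alternate: even steps update the retrofit layer, odd steps the
    # translation model; only the chosen optimizer's parameters move.
    opt = opt_r if step % 2 == 0 else opt_t
    opt.zero_grad()
    emb = retrofit(words)
    act_pred, emb_back = trans(emb)
    loss = (nn.functional.mse_loss(act_pred, actions)
            + nn.functional.mse_loss(emb_back, emb.detach()))
    loss.backward()
    opt.step()
```

The key design point mirrored here is that the retrofit layer and the translation model are optimized in alternation rather than jointly, so the embeddings adapt to the paired dataset without the translation model simply absorbing all the error.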


Related research

10/25/2020 · Autoencoding Improves Pre-trained Word Embeddings
Prior work investigating the geometry of pre-trained word embeddings hav...

03/08/2022 · Learning Bidirectional Translation between Descriptions and Actions with Small Paired Data
This study achieved bidirectional translation between descriptions and a...

04/07/2021 · Combining Pre-trained Word Embeddings and Linguistic Features for Sequential Metaphor Identification
We tackle the problem of identifying metaphors in text, treated as a seq...

01/27/2020 · The POLAR Framework: Polar Opposites Enable Interpretability of Pre-Trained Word Embeddings
We introduce POLAR - a framework that adds interpretability to pre-train...

08/03/2019 · Word2vec to behavior: morphology facilitates the grounding of language in machines
Enabling machines to respond appropriately to natural language commands ...

01/28/2019 · Analogies Explained: Towards Understanding Word Embeddings
Word embeddings generated by neural network methods such as word2vec (W2...

04/01/2020 · Adversarial Transfer Learning for Punctuation Restoration
Previous studies demonstrate that word embeddings and part-of-speech (PO...
