Learning Bidirectional Action-Language Translation with Limited Supervision and Incongruent Input

01/09/2023
by   Ozan Özdemir, et al.

Human infants learn by exploring their environment, interacting with objects, and casually listening to and repeating utterances, which is analogous to unsupervised learning. Only occasionally does a learning infant receive a matching verbal description of an action it is performing, which is similar to supervised learning. Such a learning mechanism can be mimicked with deep learning. We model this weakly supervised paradigm with our Paired Gated Autoencoders (PGAE) model, which combines an action autoencoder and a language autoencoder. After observing a performance drop when the proportion of supervised training is reduced, we introduce the Paired Transformed Autoencoders (PTAE) model, which uses Transformer-based crossmodal attention. PTAE achieves significantly higher accuracy in language-to-action and action-to-language translation, particularly in the realistic but difficult case when only a few supervised training samples are available. We also test whether the trained model behaves plausibly when given conflicting multimodal input. In accordance with the concept of incongruence in psychology, conflict deteriorates the model's output. Conflicting action input has a more severe impact than conflicting language input, and more conflicting features lead to greater interference. PTAE can be trained on mostly unlabelled data when labelled data is scarce, and it behaves plausibly when tested with incongruent input.
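To make the crossmodal-attention idea concrete, here is a minimal sketch in PyTorch of one way two modalities can be fused with Transformer-style attention: language tokens act as queries that attend over action features. This is a hypothetical simplification for illustration, not the authors' PTAE implementation; the class name, dimensions, and residual/normalization choices are assumptions.

```python
import torch
import torch.nn as nn

class CrossmodalAttention(nn.Module):
    """Hypothetical sketch: fuse language and action features via attention.

    Language tokens are the queries; action features supply keys and values,
    so each language token gathers the action context relevant to it.
    """

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, lang: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        # lang:   (batch, n_lang_tokens,  dim) -- queries
        # action: (batch, n_action_steps, dim) -- keys and values
        fused, _ = self.attn(query=lang, key=action, value=action)
        # Residual connection plus layer norm, as in a standard Transformer block.
        return self.norm(lang + fused)

# Example: fuse 5 language tokens with 20 action timesteps.
fusion = CrossmodalAttention(dim=64, heads=4)
lang = torch.randn(2, 5, 64)
action = torch.randn(2, 20, 64)
out = fusion(lang, action)
print(out.shape)  # torch.Size([2, 5, 64])
```

The output keeps the language sequence length but now carries action information in each token, which is the kind of shared representation a bidirectional translation model can decode into either modality.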


