Imitation and Mirror Systems in Robots through Deep Modality Blending Networks

06/15/2021
by   M. Y. Seker, et al.

Learning to interact with the environment not only equips an agent with manipulation capability but also generates information that facilitates the development of action understanding and imitation capabilities. This appears to be a strategy adopted by biological systems, in particular primates, as evidenced by the existence of mirror neurons, which seem to be involved in multi-modal action understanding. How robots can benefit from their own interaction experience to understand the actions and goals of other agents remains a challenging question. In this study, we propose a novel method, deep modality blending networks (DMBN), which creates a common latent space from the multi-modal experience of a robot by blending multi-modal signals with a stochastic weighting mechanism. We show for the first time that deep learning, when combined with a novel modality blending scheme, can facilitate action recognition and produce structures that sustain anatomical and effect-based imitation capabilities. Our proposed system can be conditioned on any desired sensory/motor value at any time-step, and can generate a complete multi-modal trajectory consistent with the desired conditioning in parallel, avoiding the accumulation of prediction errors. We further show that, given desired images from different perspectives, i.e. images generated by the observation of other robots placed on different sides of the table, our system can generate image and joint angle sequences that correspond to either anatomical or effect-based imitation behavior. Overall, the proposed DMBN architecture not only serves as a computational model for sustaining mirror neuron-like capabilities, but also stands as a powerful machine learning architecture for high-dimensional multi-modal temporal data, with robust retrieval capabilities that operate on partial information in one or multiple modalities.
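The core idea of blending multi-modal signals into a common latent space with a stochastic weighting mechanism can be illustrated with a minimal sketch. Here the encoders are stand-ins (simple latent vectors), and the function name, shapes, and the uniform weight distribution are illustrative assumptions, not the authors' implementation: the point is that a randomly drawn convex weight decides how much each modality contributes to the shared code at each step, so a downstream decoder must learn to reconstruct all modalities from any mixture.

```python
import numpy as np

rng = np.random.default_rng(0)

def blend_modalities(z_image, z_joints, rng):
    """Blend two modality latents with a stochastic convex weight.

    A hypothetical sketch of stochastic modality blending: a weight
    w ~ U(0, 1) is drawn per step, and the shared latent code is the
    convex combination of the per-modality latent codes.
    """
    w = rng.uniform(0.0, 1.0)          # stochastic blending weight
    return w * z_image + (1.0 - w) * z_joints

# toy latent codes standing in for an image encoder and a
# proprioception (joint-angle) encoder
z_img = rng.standard_normal(8)
z_jnt = rng.standard_normal(8)

# shared latent code fed to the multi-modal decoder
z = blend_modalities(z_img, z_jnt, rng)
```

Because the weight is a convex coefficient, every component of the blended code lies between the corresponding components of the two modality latents; setting `w` to 0 or 1 recovers a single-modality code, which is what allows conditioning on only one modality at inference time.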


