Investigation of Multimodal and Agential Interactions in Human-Robot Imitation, based on frameworks of Predictive Coding and Active Inference

02/05/2020
by Wataru Ohata, et al.

This study proposes a model for multimodal imitative interaction between agents, based on the frameworks of predictive coding and active inference and implemented with a variational Bayes recurrent neural network. The model dynamically predicts visual sensation and proprioception simultaneously through generative processes that associate the two modalities. It also updates its internal state and generates actions by maximizing the evidence lower bound. A key feature of the model is that the complexity of each modality, as well as that of the entire network, can be regulated independently. We hypothesize that regulation of complexity offers a common perspective on two distinct properties of embodied agents: coordination among multiple modalities and the strength of an agent's intention or belief in social interactions. We evaluate this hypothesis in experiments on imitative human-robot interaction in two scenarios. First, the strength of complexity regulation was set differently for the vision module and the proprioception module during learning. The results showed that the complexity of the vision module should be regulated more strongly than that of the proprioception module because visual sensation involves greater randomness. Second, the strength of complexity regulation of the robot's entire network was varied during imitation tests after learning. We found that this significantly affects human-robot interaction. With weaker regulation of complexity, the robot tends to move more egocentrically, without adapting to its human counterpart; with stronger regulation, it tends to follow the human counterpart by adapting its internal state. Our study concludes that the strength with which complexity is regulated strongly shapes the dynamic interactions both between modalities and between individual agents in a social setting.
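For a concrete picture of what regulating complexity per modality can mean, below is a minimal sketch of a complexity-weighted free energy (the negative of the lower bound mentioned above). The abstract does not spell out the exact objective, so the squared-error accuracy terms, the diagonal-Gaussian latent variables, and the weights w_vision and w_proprio (standing in for the modality-specific regulation strength) are assumptions made purely for illustration.

```python
import numpy as np

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL divergence between diagonal Gaussians q and p, summed over dimensions."""
    return 0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0
    )

def free_energy(pred_vision, obs_vision, pred_proprio, obs_proprio,
                kl_vision, kl_proprio, w_vision=0.005, w_proprio=0.05):
    """Complexity-weighted free energy (negative lower bound); illustrative only.

    accuracy:   squared prediction error for each modality
    complexity: KL between approximate posterior and prior latents of each
                module, scaled by a hypothetical modality-specific weight
    """
    accuracy = (np.sum((pred_vision - obs_vision) ** 2)
                + np.sum((pred_proprio - obs_proprio) ** 2))
    complexity = w_vision * kl_vision + w_proprio * kl_proprio
    return accuracy + complexity

# Toy usage with random predictions, observations, and latent statistics.
rng = np.random.default_rng(0)
kl_v = gaussian_kl(rng.normal(size=16), np.zeros(16), np.zeros(16), np.zeros(16))
kl_p = gaussian_kl(rng.normal(size=4), np.zeros(4), np.zeros(4), np.zeros(4))
loss = free_energy(rng.normal(size=64), rng.normal(size=64),
                   rng.normal(size=8), rng.normal(size=8), kl_v, kl_p)
print(loss)
```

In this picture, the learning experiment corresponds roughly to choosing different weights for the vision and proprioception modules, and the test experiment corresponds to varying the overall weighting while the robot optimizes the same kind of objective with respect to its internal state during interaction.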

Related research

03/03/2021
Controlling the Sense of Agency in Dyadic Robot Interaction: An Active Inference Approach
This study investigated how social interaction among robotic agents chan...

05/26/2020
Learning Whole-Body Human-Robot Haptic Interaction in Social Contexts
This paper presents a learning-from-demonstration (LfD) framework for te...

10/14/2019
Imitating by generating: deep generative models for imitation of interactive tasks
To coordinate actions with an interaction partner requires a constant ex...

03/01/2017
Learning Social Affordance Grammar from Videos: Transferring Human Interactions to Human-Robot Interactions
In this paper, we present a general framework for learning social afford...

12/03/2022
Learning and Blending Robot Hugging Behaviors in Time and Space
We introduce an imitation learning-based physical human-robot interactio...

07/04/2019
Multimodal Uncertainty Reduction for Intention Recognition in Human-Robot Interaction
Assistive robots can potentially improve the quality of life and persona...

09/16/2016
Model-based Test Generation for Robotic Software: Automata versus Belief-Desire-Intention Agents
Robotic code needs to be verified to ensure its safety and functional co...
