Online Motion Generation with Sensory Information and Instructions by Hierarchical RNN

12/14/2017
by Kanata Suzuki, et al.

This paper proposes an approach that uses neuro-dynamical models to enable robots to perform co-working tasks alongside humans. The proposed model comprises two components: an autoencoder and a hierarchical recurrent neural network (RNN). We trained the hierarchical RNN with various sensory-motor sequences and instructions. To give the model the interactive ability to switch and combine appropriate motions according to visual information and external instructions, we embedded cyclic neuronal dynamics in the network. To evaluate the model, we designed a cloth-folding task that consists of four short folding motions and three instruction patterns that indicate the direction of each short motion. The results show that the robot can perform the task by switching or combining the short motions in response to instructions and visual information. We also show that the proposed model acquired relationships between the instructions and the sensory-motor information in its internal neuronal dynamics. Supplementary video: https://www.youtube.com/watch?v=oUBTJNpXW4A
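The abstract describes a two-part architecture: an autoencoder that compresses visual input into low-dimensional features, and a hierarchical RNN whose layers combine those features with an instruction signal to produce motor commands. The following is a minimal sketch of that data flow, not the authors' implementation; all layer sizes, class names, and the fast/slow wiring are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyAutoencoder:
    """Encoder half of an autoencoder: compress a visual frame to features."""
    def __init__(self, n_in=64, n_feat=8):
        self.We = rng.standard_normal((n_in, n_feat)) * 0.1
        self.be = np.zeros(n_feat)

    def encode(self, x):
        return np.tanh(x @ self.We + self.be)

class HierarchicalRNN:
    """Two-level recurrent network: a fast layer reacts to sensory
    features, while a slow layer integrates the instruction signal
    and feeds context back down (a common hierarchical-RNN pattern)."""
    def __init__(self, n_feat=8, n_instr=3, n_fast=16, n_slow=8, n_motor=4):
        s = rng.standard_normal
        self.Wf = s((n_feat + n_slow + n_fast, n_fast)) * 0.1
        self.Ws = s((n_fast + n_instr + n_slow, n_slow)) * 0.1
        self.Wm = s((n_fast, n_motor)) * 0.1
        self.fast = np.zeros(n_fast)
        self.slow = np.zeros(n_slow)

    def step(self, feat, instr):
        # Fast layer: visual features plus top-down slow context.
        self.fast = np.tanh(
            np.concatenate([feat, self.slow, self.fast]) @ self.Wf)
        # Slow layer: mixes the fast state with the instruction each step.
        self.slow = np.tanh(
            np.concatenate([self.fast, instr, self.slow]) @ self.Ws)
        # Motor command read out from the fast layer.
        return self.fast @ self.Wm

ae = TinyAutoencoder()
rnn = HierarchicalRNN()
frame = rng.standard_normal(64)   # stand-in for a camera image
instr = np.eye(3)[1]              # one-hot instruction (one of 3 patterns)
motors = [rnn.step(ae.encode(frame), instr) for _ in range(5)]
print(np.array(motors).shape)     # 5 timesteps of 4 motor commands
```

Switching the one-hot `instr` vector mid-sequence would change the slow layer's state and thus which motion the fast layer drives, which mirrors the interactive switching behavior the paper evaluates.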

Related research:

12/31/2017  Neurally Plausible Model of Robot Reaching Inspired by Infant Motor Babbling
In this paper we present a neurally plausible model of robot reaching in...

11/01/2016  Detecting Affordances by Visuomotor Simulation
The term "affordance" denotes the behavioral meaning of objects. We prop...

03/04/2019  Automated Generation of Reactive Programs from Human Demonstration for Orchestration of Robot Behaviors
Social robots or collaborative robots that have to interact with people ...

02/26/2022  Initialization of Latent Space Coordinates via Random Linear Projections for Learning Robotic Sensory-Motor Sequences
Robot kinematics data, despite being a high dimensional process, is high...

02/25/2021  Neuroevolution of a Recurrent Neural Network for Spatial and Working Memory in a Simulated Robotic Environment
Animals ranging from rats to humans can demonstrate cognitive map capabi...

06/27/2023  Style-transfer based Speech and Audio-visual Scene Understanding for Robot Action Sequence Acquisition from Videos
To realize human-robot collaboration, robots need to execute actions for...

11/01/2016  Learning recurrent representations for hierarchical behavior modeling
We propose a framework for detecting action patterns from motion sequenc...
