From virtual demonstration to real-world manipulation using LSTM and MDN

03/12/2016
by Rouhollah Rahmatizadeh et al.

Robots assisting the disabled or elderly must perform complex manipulation tasks and must adapt to the home environment and preferences of their user. Learning from demonstration is a promising approach that would allow a non-technical user to teach the robot different tasks. However, collecting demonstrations in the home environment of a disabled user is time consuming, disruptive to the comfort of the user, and presents safety challenges. It would therefore be desirable to perform the demonstrations in a virtual environment. In this paper we describe a solution to the challenging problem of transferring behavior from virtual demonstrations to a physical robot. The virtual demonstrations are used to train a deep neural network based controller, which uses a Long Short-Term Memory (LSTM) recurrent neural network to generate trajectories. The training process uses a Mixture Density Network (MDN) to compute an error signal suited to the multimodal nature of the demonstrations. The controller learned in the virtual environment is transferred to a physical robot (a Rethink Robotics Baxter). An off-the-shelf vision component substitutes for the geometric knowledge available in the simulation, and an inverse kinematics module allows the Baxter to enact the trajectory. Our experimental studies validate the three contributions of the paper: (1) the controller learned from virtual demonstrations can successfully perform the manipulation tasks on a physical robot, (2) the LSTM+MDN architecture outperforms alternatives such as feedforward networks and mean-squared-error training signals, and (3) including imperfect demonstrations in the training set also allows the controller to learn how to correct its manipulation mistakes.
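The following is a minimal sketch of the LSTM+MDN idea described in the abstract, not the authors' implementation: it assumes PyTorch, and the names and dimensions (LSTMMDN, mdn_nll, state_dim=7, hidden=64, n_mix=5) are illustrative. An LSTM consumes the state at each timestep and an MDN head outputs a mixture of Gaussians over the next waypoint; training minimizes the mixture negative log-likelihood rather than mean-squared error, so demonstrations that solve the task in several distinct ways are not averaged into an invalid "mean" trajectory.

```python
import math
import torch
import torch.nn as nn

class LSTMMDN(nn.Module):
    """LSTM trajectory generator with a mixture-density output head (illustrative)."""
    def __init__(self, state_dim=7, hidden=64, n_mix=5):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden, batch_first=True)
        # Per timestep: mixture logits, component means, isotropic std-devs.
        self.pi = nn.Linear(hidden, n_mix)
        self.mu = nn.Linear(hidden, n_mix * state_dim)
        self.log_sigma = nn.Linear(hidden, n_mix)
        self.n_mix, self.state_dim = n_mix, state_dim

    def forward(self, states):                     # states: (B, T, state_dim)
        h, _ = self.lstm(states)                   # h: (B, T, hidden)
        log_pi = torch.log_softmax(self.pi(h), dim=-1)
        mu = self.mu(h).view(h.shape[0], h.shape[1], self.n_mix, self.state_dim)
        sigma = self.log_sigma(h).exp()            # strictly positive widths
        return log_pi, mu, sigma

def mdn_nll(log_pi, mu, sigma, target):
    """Negative log-likelihood of the next waypoint under the mixture;
    this is the error signal that replaces mean-squared error."""
    diff = target.unsqueeze(2) - mu                # (B, T, K, D)
    d = target.shape[-1]
    log_prob = (-0.5 * (diff / sigma.unsqueeze(-1)).pow(2).sum(-1)
                - d * torch.log(sigma)
                - 0.5 * d * math.log(2 * math.pi))
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

# One training step on a batch of trajectories: the network predicts
# each waypoint from the waypoints that precede it.
model = LSTMMDN()
demos = torch.randn(8, 50, 7)                      # synthetic stand-in data
log_pi, mu, sigma = model(demos[:, :-1])
loss = mdn_nll(log_pi, mu, sigma, demos[:, 1:])
loss.backward()
```

The key design point is the loss: a mean-squared error would pull predictions toward the average of the demonstrated modes, whereas the logsumexp over mixture components lets the network keep the modes separate and commit to one of them when generating a trajectory.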


