Modular Deep Q Networks for Sim-to-real Transfer of Visuo-motor Policies

10/21/2016
by Fangyi Zhang, et al.

While deep learning has had significant successes in computer vision thanks to the abundance of visual data, collecting sufficiently large real-world datasets for robot learning can be costly. To increase the practicality of these techniques on real robots, we propose a modular deep reinforcement learning method capable of transferring models trained in simulation to a real-world robotic task. We introduce a bottleneck between perception and control, enabling the networks to be trained independently and then merged and fine-tuned in an end-to-end manner. On a canonical, planar, visually-guided robot reaching task, an accuracy of 1.6 pixels is achieved after fine-tuning, a significant improvement over naive transfer (17.5 pixels), showing the potential for more complicated and broader applications. Our method provides a technique for more efficiently improving hand-eye coordination on a real robotic system without relying entirely on large real-world robot datasets.
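The modular idea described above — a perception network that compresses camera images into a low-dimensional scene description (the bottleneck), feeding a separate Q-network for control — can be sketched as follows. This is an illustrative NumPy sketch, not the paper's implementation: the layer sizes, the 2-D bottleneck, the 3-joint configuration, and the 9 discrete actions are all assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    # He-style initialization for a fully connected layer (illustrative)
    return rng.normal(0.0, np.sqrt(2.0 / in_dim), (in_dim, out_dim)), np.zeros(out_dim)

class PerceptionModule:
    """Maps a flattened camera image to a low-dimensional scene description
    (here assumed to be a 2-D target position): the bottleneck representation.
    This module can be trained on its own, e.g. with supervised labels."""
    def __init__(self, img_dim=64 * 64, hidden=128, bottleneck=2):
        self.w1, self.b1 = linear(img_dim, hidden)
        self.w2, self.b2 = linear(hidden, bottleneck)

    def forward(self, img):
        h = np.maximum(img @ self.w1 + self.b1, 0.0)  # ReLU hidden layer
        return h @ self.w2 + self.b2                  # bottleneck output

class ControlModule:
    """Deep Q-network: maps the bottleneck (plus joint angles) to Q-values
    over discrete actions. Trainable independently in simulation using
    ground-truth state instead of the perception module's estimate."""
    def __init__(self, bottleneck=2, joints=3, hidden=64, actions=9):
        self.w1, self.b1 = linear(bottleneck + joints, hidden)
        self.w2, self.b2 = linear(hidden, actions)

    def forward(self, z, q):
        x = np.concatenate([z, q], axis=-1)
        h = np.maximum(x @ self.w1 + self.b1, 0.0)
        return h @ self.w2 + self.b2  # one Q-value per discrete action

# After independent training, the two modules are merged into a single
# end-to-end pipeline (image -> bottleneck -> Q-values) for fine-tuning:
perception = PerceptionModule()
control = ControlModule()
image = rng.random(64 * 64)   # flattened camera frame (assumed 64x64)
joint_angles = np.zeros(3)    # current robot configuration
q_values = control.forward(perception.forward(image), joint_angles)
action = int(np.argmax(q_values))  # greedy action selection
```

Because the bottleneck gives both modules a shared, semantically meaningful interface, each can be trained where data is cheap (the controller in simulation, the perception module on labelled images) before the merged network is fine-tuned end-to-end on a small amount of real robot data.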


