Modular Deep Q Networks for Sim-to-real Transfer of Visuo-motor Policies
While deep learning has had significant successes in computer vision thanks to the abundance of visual data, collecting sufficiently large real-world datasets for robot learning can be costly. To increase the practicality of these techniques on real robots, we propose a modular deep reinforcement learning method capable of transferring models trained in simulation to a real-world robotic task. We introduce a bottleneck between perception and control, enabling the networks to be trained independently, then merged and fine-tuned in an end-to-end manner. On a canonical, planar, visually guided robot reaching task, the fine-tuned network achieves an accuracy of 1.6 pixels, a significant improvement over naive transfer (17.5 px), demonstrating the potential of the approach for broader and more complex applications. Our method offers a more efficient way to improve hand-eye coordination on a real robotic system without relying entirely on large real-world robot datasets.
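To make the modular idea concrete, below is a minimal sketch (not the authors' code) of the architecture the abstract describes: a perception network maps raw images to a low-dimensional bottleneck, assumed here to be the target's image-plane coordinates, and a control network maps that bottleneck to Q-values over discrete actions. The two modules can be trained independently, then merged and fine-tuned end-to-end. All layer sizes, the bottleneck dimension, the action count, and the input resolution are illustrative assumptions.

```python
import torch
import torch.nn as nn


class PerceptionNet(nn.Module):
    """CNN mapping an input image to a low-dimensional bottleneck
    (assumed: predicted target position in pixels); trainable on its
    own with a supervised regression loss."""
    def __init__(self, bottleneck_dim: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.LazyLinear(bottleneck_dim)  # infers flattened size

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(image))


class ControlNet(nn.Module):
    """MLP Q-network over the bottleneck; trainable in simulation,
    where the true low-dimensional state is available directly."""
    def __init__(self, bottleneck_dim: int = 2, n_actions: int = 9):
        super().__init__()
        self.q = nn.Sequential(
            nn.Linear(bottleneck_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.q(state)


class ModularDQN(nn.Module):
    """Merged network: perception feeds control through the bottleneck,
    so the full visuo-motor policy can be fine-tuned end-to-end."""
    def __init__(self, perception: PerceptionNet, control: ControlNet):
        super().__init__()
        self.perception = perception
        self.control = control

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.control(self.perception(image))


# Usage: pretrain the modules separately, merge, then fine-tune jointly.
policy = ModularDQN(PerceptionNet(), ControlNet())
q_values = policy(torch.randn(1, 3, 84, 84))  # one 84x84 RGB frame (assumed size)
action = q_values.argmax(dim=1)               # greedy action selection
```

The bottleneck is the key design choice: because it is a small, interpretable state (here, a pixel coordinate), the control module can be trained cheaply in simulation while the perception module is adapted to real images, and merging the two still leaves a differentiable path for end-to-end fine-tuning.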