A Practical Approach to Insertion with Variable Socket Position Using Deep Reinforcement Learning

10/02/2018
by Mel Večerík, et al.

Insertion is a challenging haptic and visual control problem with significant practical value for manufacturing. Existing approaches in the model-based robotics community can be highly effective when task geometry is known, but are complex and cumbersome to implement, and must be tailored to each individual problem by a qualified engineer. Within the learning community there is a long history of insertion research, but existing approaches are typically either too sample-inefficient to run on real robots, or assume access to high-level object features, e.g. socket pose. In this paper we show that relatively minor modifications to an off-the-shelf Deep-RL algorithm (DDPG), combined with a small number of human demonstrations, allow the robot to quickly learn to solve these tasks efficiently and robustly. Our approach requires no modeling or simulation, no parameterized search or alignment behaviors, no vision system aside from raw images, and no reward shaping. We evaluate our approach on a narrow-clearance peg-insertion task and a deformable clip-insertion task, both of which include variability in the socket position. Our results show that these tasks can be solved reliably on the real robot in less than 10 minutes of interaction time, and that the resulting policies are robust to variance in the socket position and orientation.
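
To give a concrete sense of the "minor modifications" the abstract alludes to, the sketch below shows one common way to combine DDPG with a small number of human demonstrations: transitions recorded from the demonstrations are kept permanently in the replay buffer and mixed into every minibatch used for critic/actor updates. This is a minimal, hypothetical illustration under those assumptions, not the authors' implementation; the class and parameter names (DemoSeededReplayBuffer, demo_fraction) are invented for the example.

    # Hedged sketch: a replay buffer seeded with human demonstration transitions,
    # in the spirit of DDPG-from-demonstrations-style setups. Not the paper's code.
    import random
    from collections import deque

    class DemoSeededReplayBuffer:
        def __init__(self, capacity, demo_transitions):
            # Demonstration transitions are kept permanently; agent experience is FIFO.
            self.demos = list(demo_transitions)
            self.agent = deque(maxlen=capacity)

        def add(self, transition):
            """Store one (state, action, reward, next_state, done) tuple from the agent."""
            self.agent.append(transition)

        def sample(self, batch_size, demo_fraction=0.25):
            """Sample a minibatch that mixes demonstration and agent transitions."""
            n_demo = min(int(batch_size * demo_fraction), len(self.demos))
            n_agent = min(batch_size - n_demo, len(self.agent))
            batch = random.sample(self.demos, n_demo) + random.sample(list(self.agent), n_agent)
            random.shuffle(batch)
            return batch

    # Usage: seed the buffer with a handful of demonstrated transitions, then
    # alternate environment steps with DDPG updates on the mixed minibatches.
    demos = [((0.0,), (0.1,), 0.0, (0.1,), False)]   # placeholder demo transition
    buffer = DemoSeededReplayBuffer(capacity=100_000, demo_transitions=demos)
    buffer.add(((0.1,), (0.2,), 1.0, (0.3,), True))  # placeholder agent transition
    minibatch = buffer.sample(batch_size=2)

Keeping the demonstrations in the buffer for the entire run (rather than letting them be overwritten) is the design choice that lets a handful of human examples keep shaping the policy even after the agent has collected far more of its own experience.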

