Deep Reinforcement Learning for Contact-Rich Skills Using Compliant Movement Primitives
In recent years, industrial robots have been installed across various industries to handle advanced manufacturing and high-precision tasks. However, further integration of industrial robots is hampered by their limited flexibility, adaptability, and decision-making skills compared to human operators. Assembly tasks are especially challenging for robots since they are contact-rich and sensitive to even small uncertainties. While reinforcement learning (RL) offers a promising framework for learning contact-rich control policies from scratch, its applicability to high-dimensional continuous state-action spaces remains limited due to brittleness and high sample complexity. To address these issues, we propose several pruning methods that facilitate convergence and generalization. In particular, we divide the task into free-space and contact-rich sub-tasks, perform control in Cartesian rather than joint space, and parameterize the control policy. These pruning methods are naturally implemented within the framework of dynamic movement primitives (DMP). To handle contact-rich tasks, we extend the DMP framework with a coupling term that acts like the human wrist, providing active compliance under contact with the environment. We demonstrate that the proposed method can learn insertion skills that are invariant to position, size, and shape, generalize to closely related scenarios, and handle large uncertainties. Finally, we demonstrate that the learned policy can be easily transferred from simulation to the real world, achieving similar performance on a UR5e robot.
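To make the DMP extension concrete, below is a minimal sketch of a one-dimensional DMP transformation system with an additive coupling term that reacts to a sensed contact force, in the spirit of the active compliance described above. The function name `dmp_rollout`, the gains (`alpha_z`, `beta_z`, `alpha_x`), and the admittance-style coupling gain `kc` are illustrative assumptions, not the paper's actual formulation or parameter values.

```python
import numpy as np

def dmp_rollout(y0, g, tau=1.0, alpha_z=25.0, beta_z=25.0 / 4,
                alpha_x=1.0, kc=0.1, dt=0.001, T=1.0,
                forcing=None, ext_force=None):
    """Euler-integrate a 1-D DMP with an additive force-coupling term.

    Standard transformation system (Ijspeert-style):
        tau * dz = alpha_z * (beta_z * (g - y) - z) + f(x) + C_t
        tau * dy = z
    with canonical system tau * dx = -alpha_x * x.
    C_t = kc * F_ext is an assumed admittance-like coupling that pushes
    the trajectory along the sensed contact force (illustrative only).
    """
    forcing = forcing or (lambda x: 0.0)      # learned forcing term f(x)
    ext_force = ext_force or (lambda t: 0.0)  # sensed contact force F_ext(t)

    n_steps = int(T / dt)
    y, z, x = float(y0), 0.0, 1.0             # position, scaled velocity, phase
    traj = np.empty(n_steps)
    for i in range(n_steps):
        t = i * dt
        C = kc * ext_force(t)                 # compliance coupling term (assumption)
        zdot = (alpha_z * (beta_z * (g - y) - z) + forcing(x) + C) / tau
        ydot = z / tau
        xdot = -alpha_x * x / tau
        y, z, x = y + ydot * dt, z + zdot * dt, x + xdot * dt
        traj[i] = y
    return traj
```

With zero forcing and no external force, the critically damped second-order dynamics drive the trajectory to the goal `g`; a nonzero `ext_force` deflects it in proportion to `kc`, mimicking a compliant wrist yielding under contact.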