Reaching, Grasping and Re-grasping: Learning Fine Coordinated Motor Skills

02/11/2020 ∙ by Wenbin Hu, et al.

The ability to adapt to uncertainties, recover from failures, and coordinate the hand and fingers in a sensorimotor fashion are essential skills for fully autonomous robotic grasping. In this paper, we use model-free Deep Reinforcement Learning to obtain a unified control policy that governs both the finger actions and the motion of the hand, accomplishing the seamlessly combined tasks of reaching, grasping, and re-grasping. We design a task-oriented reward function to guide policy exploration, and we analyze and demonstrate the effectiveness of each reward term. To acquire a robust re-grasping motion, we deploy different initial states during training so that the policy experiences the potential failures a robot would encounter during grasping due to inaccurate perception or external disturbances. The performance of the learned policy is evaluated on three tasks: grasping a static target, grasping a dynamic target, and re-grasping. The quality of the learned grasping policy is measured by success rates across scenarios and by the recovery time from failures. The results indicate that the learned policy achieves stable grasps of both static and moving objects. Moreover, the policy can adapt to environmental changes on the fly and execute a collision-free re-grasp after a failed attempt, even in difficult configurations and within a short recovery time.
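The abstract's task-oriented reward can be illustrated with a minimal sketch. The function below is hypothetical: the term structure (reaching distance, finger contacts, lifting bonus) and the weights are illustrative assumptions, not the paper's actual reward formulation.

```python
import numpy as np

def task_oriented_reward(hand_pos, obj_pos, finger_contacts, obj_lifted,
                         w_reach=1.0, w_contact=0.5, w_lift=5.0):
    """Hypothetical task-oriented reward combining reaching, grasping,
    and lifting terms; term choices and weights are illustrative only."""
    # Reaching term: negative Euclidean distance between hand and object.
    reach = -np.linalg.norm(np.asarray(hand_pos) - np.asarray(obj_pos))
    # Grasping term: fraction of fingers currently in contact with the object.
    contact = float(np.mean(finger_contacts))
    # Lifting bonus: sparse reward once the object is held off the surface.
    lift = 1.0 if obj_lifted else 0.0
    return w_reach * reach + w_contact * contact + w_lift * lift
```

In such a shaped reward, the dense reaching term guides early exploration, while the sparse lifting bonus dominates once a grasp succeeds; re-grasping robustness would come from resetting episodes in failure states rather than from the reward itself.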
