Reaching, Grasping and Re-grasping: Learning Multimode Grasping Skills

02/11/2020, by Wenbin Hu et al.

The ability to adapt to uncertainties, recover from failures, and coordinate between hand and fingers are essential sensorimotor skills for fully autonomous robotic grasping. In this paper, we study a unified feedback control policy for generating the finger actions and the motion of the hand to accomplish seamlessly coordinated tasks of reaching, grasping and re-grasping. We propose a set of quantified metrics for task-orientated rewards to guide the policy exploration, and we analyze and demonstrate the effectiveness of each reward term. To acquire a robust re-grasping motion, we deploy different initial states in training to expose the robot to failures it would encounter during grasping due to inaccurate perception or disturbances. The performance of the learned policy is evaluated on three different tasks: grasping a static target, grasping a dynamic target, and re-grasping. The quality of the learned grasping policy is evaluated based on success rates in different scenarios and the recovery time from failures. The results indicate that the learned policy is able to achieve stable grasps of a static or moving object. Moreover, the policy can adapt to new environmental changes on the fly and execute a collision-free re-grasp after a failed attempt within a short recovery time, even in difficult configurations.


I Introduction

Reactive adaptation to changes and recovery from failures are important features of any control policy for real-world robot applications. Autonomous grasping is a fundamental capability for many robotic manipulation tasks. However, the combined control of reaching, grasping and re-grasping in a dynamically changing, non-stationary environment remains a challenge.

In traditional planning approaches, reaching and grasping are inherently different and usually planned separately and deployed sequentially. For grasping a moving ball, vision and proximity sensors have been used from a top view [2015_Suzuki]. Marturi et al. developed an approach for planning a pre-grasp posture online and tracking a moving object, where the grasp motion was determined by a human operator [2019_Marturi]. Planning the complete reaching and grasping motion is quite time-consuming and is often implemented in an open-loop or only partially reactive manner [2010_KROEMER]. Generally, current planning-based methods achieve good results on the reaching [2015_Suzuki] or grasping problem [2017_Hang] individually, but the switch between controllers is designed manually. For the next level of performance and robustness, reaching, grasping and even re-grasping should be addressed simultaneously by one unified policy.

Machine learning methods provide a promising option for autonomous control: they alleviate the requirements of manual design and prior knowledge, since they can autonomously explore the whole operation space and react in potential corner cases. Recently, vision-based data-driven methods have shown prominent performance [2014_Bohg, 2018_Lu] in the optimization of grasping static objects. Compared with classical grasp synthesis, learning-based approaches dramatically improve the performance of grasping unknown objects [2015_Kappler, 2015_Pinto, 2016_Levine, 2017_Mahler]. However, training the model requires very large amounts of data, either collected from simulation [2015_Kappler] or from time-consuming self-supervised real-robot experiments [2015_Pinto]. Furthermore, sampling and ranking grasp candidates often takes long computation time [2015_Pinto, 2017_Mahler], which limits the capability of reactive control. The success of a grasp strongly relies on precise object perception and accurate hardware control. In case of a failure, no recovery strategy is deployed; instead of an on-line, reactive adjustment, the whole pipeline is reset and another attempt is made [2015_Kappler].

Fig. 1: Grasping a moving object.

Recent research in Deep Reinforcement Learning (DRL) has shown promising capabilities of solving continuous control tasks with high-dimensional state and action spaces, such as pouring liquids [2015_Hangl], multi-finger grasping [2019_ICRA_mbkrb], in-hand manipulation [2018_OpenAI] or bipedal locomotion tasks [2018_Yang]. In a Reinforcement Learning framework, an agent learns a policy from scratch by maximizing the expected cumulative return from autonomous interactions with the environment. In contrast to other Machine Learning techniques, such as unsupervised and supervised learning, no pre-collected training data is required, as the agent autonomously generates the training data by interacting with the environment and infers the quality of its states and actions through reward signals. Not requiring pre-generated training data is especially useful in large continuous action and state spaces, because labelling whether one action under the current state is good or bad is infeasible due to the infinite number of possible combinations.

The focus of this paper is to study a unified control policy for reaching, grasping and re-grasping, which requires synergetic behaviours and fine coordination between the hand and fingers. This unified control policy is obtained by training an agent through DRL without any human interference or hard-coded control architecture. Although the task spaces for the hand and fingers are different, they have to observe each other's state and coordinate properly, especially when the hand is approaching the object, otherwise unwanted collisions might happen. In this work, we are neither aiming to benchmark against the reaching ability of planning methods nor against the grasp quality of cutting-edge data-driven methods. Instead, we intend to learn a unified policy with coordinated motor skills for the entire grasping loop. Most importantly, the policy should be capable of re-grasping quickly in case of failure, a problem which has only been partially addressed by the aforementioned methods. Real-time failure recovery and re-grasping can significantly increase robustness and efficiency, and further enable real-world applications, compared to methods which simply restart the whole pipeline.

The main contributions of this paper are threefold:

  • A unified policy of reaching, grasping and re-grasping learned from Deep Reinforcement Learning.

  • A task-orientated reward function and special initial states for learning a robust policy.

  • A learned policy that is able to achieve robust grasps of static or moving objects, adjust its motion on-line under sudden changes, and re-grasp quickly after failures, even in some challenging configurations, as shown in Fig. 1.

In the remainder of this paper, we first discuss the related work in the next section. We elaborate on the reinforcement learning algorithm in Section III. The details of the simulation design for policy learning are presented in Section IV. The results of the learned policies are analyzed and evaluated in Section V. Finally, we summarize the paper and suggest future work in Section VI.

II Related Work

Visual servoing methods [1993_vs] offer a proper solution to the integration of reaching and grasping. The real-time action is determined by the current vision input, so that the visual servoing methods inherently have the capability of reactive control. However, most applications require large amounts of prior knowledge about the environment and the task [1997_won], or complex hierarchical control architectures [2003_akio].

Depending on the grasping task, force [2011_Pastor] or tactile information [2018_Roberto, 2018_Francois, 2016_Chebotar] has been used as feedback to close the control loop and guide the re-grasping motion. These approaches, however, only modify the applied finger forces or the hand posture for a static target; they do not address the problem of handling a moving object, or the coordination between hand and fingers.

Some works solve the reaching and grasping problem through a combination of trajectory planning with policy learning [2010_KROEMER] or imitation learning [2007_Ratliff]. These methods achieved good results on grasping static novel targets, but the requirements of prior knowledge and a hand-crafted control architecture limit the capability of handling environmental changes, such as a sudden movement of the object.

In order to reduce the reliance on knowledge about the system and the task's solution, Lampe et al. [2013_Lampe] combine the classic visual servoing method with DRL. The controller is learned from scratch, from success or failure signals, to control the robot arm to reach and grasp a moving bowl on a table from the top down. The combined system is split into two parts: a long-range controller mainly for reaching, and a short-range controller for more precise reaching and grasping motions. The two controllers are generated differently and use different cameras as vision input, and the switch between them is triggered by a hand-crafted condition. Compared to [2013_Lampe], instead of manually partitioning the controller, our approach learns reaching and grasping in a holistic manner as one policy.

One major deficiency of learning-based grasp detection methods is the long computation time caused by large CNNs and individually sampled and ranked grasp candidates [2015_Pinto, 2017_Mahler]. Morrison et al. overcome this problem by proposing a lightweight network structure that enables reactive closed-loop control [2018_Morrison]. The learned controller can dynamically track and grasp novel objects in clutter and achieve high success rates. However, this approach does not yet consider the re-grasp problem in case of a failed attempt. In our approach, we introduce challenging configurations that cause failed grasps as additional initial states to train the collision-free re-grasp motion.

III Preliminaries

In this section we briefly introduce deep reinforcement learning and the proximal policy optimization algorithm that we use for problem formulation and policy learning.

III-A Reinforcement Learning (RL)

The task of reaching and grasping an object is considered as a finite-horizon discounted Markov decision process (MDP), consisting of a state space $\mathcal{S}$, an action space $\mathcal{A}$, a distribution of initial states $p(s_0)$, the state transition dynamics $p(s_{t+1} \mid s_t, a_t)$, a reward function $r(s_t, a_t)$, and a discount factor $\gamma \in [0, 1]$. Every learning episode starts with a sampled initial state $s_0 \sim p(s_0)$. Thereafter, at every timestep $t$, the agent chooses one action $a_t$ based on the current state $s_t$ and the policy $\pi(a_t \mid s_t)$ to be executed. After execution, the agent receives a reward $r_t$ and the next state observation $s_{t+1}$ from the environment. The goal of the agent is to maximize the expected discounted sum of rewards $\mathbb{E}\left[\sum_{t} \gamma^{t} r_t\right]$.
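For illustration, a minimal sketch of this agent-environment interaction and the discounted return, assuming a generic Gym-style environment with `reset()` and `step()` (the interface names are placeholders, not part of the paper):

```python
import numpy as np

def rollout(env, policy, gamma=0.99, horizon=400):
    """Run one episode and accumulate the discounted return (illustrative only)."""
    s = env.reset()                      # sample s_0 ~ p(s_0)
    discounted_return, discount = 0.0, 1.0
    for t in range(horizon):
        a = policy(s)                    # a_t ~ pi(a_t | s_t)
        s, r, done, _ = env.step(a)      # environment transition and reward r_t
        discounted_return += discount * r
        discount *= gamma
        if done:
            break
    return discounted_return
```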

III-B Proximal Policy Optimization (PPO)

In this work, we use an on-policy deep reinforcement learning algorithm named Proximal Policy Optimization (PPO) [2017_Schulman] for policy optimization. We implement PPO in an actor-critic fashion, with the actor consisting of a policy $\pi_\theta$ parameterized by $\theta$ and a critic consisting of an estimated value function $V_\phi$ parameterized by $\phi$.

The objective function of PPO is

$L^{CLIP}(\theta) = \hat{\mathbb{E}}_t\left[\min\left(r_t(\theta)\hat{A}_t,\ \mathrm{clip}\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t\right)\right]$   (1)

where $r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}$ denotes the probability ratio, and $\hat{A}_t$ denotes the estimate of the advantage value, suggesting whether the action $a_t$ is better or worse than the average action the policy takes at $s_t$. $\epsilon$ is a hyperparameter designed to clip the probability ratio and constrain the policy update. This objective function allows the policy to update towards action distributions with positive advantage while avoiding excessively large policy changes.
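As a concrete sketch of Eq. 1, the clipped surrogate can be computed from log-probabilities and advantage estimates as follows (NumPy is used for brevity; $\epsilon = 0.2$ is a commonly used default, not a value reported in this paper):

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, epsilon=0.2):
    """Clipped surrogate objective of Eq. 1, averaged over a batch of samples."""
    ratio = np.exp(logp_new - logp_old)                      # r_t(theta)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantages
    return np.mean(np.minimum(unclipped, clipped))           # maximized by gradient ascent
```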

The goal of PPO is to maximize the objective function $L^{CLIP}(\theta)$, therefore $\theta$ is updated by gradient ascent w.r.t. Eq. 1. The estimated value function $V_\phi$ is trained by minimizing the loss function:

$L^{V}(\phi) = \left(V_\phi(s_t) - \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'}\right)^2$   (2)

where $\sum_{t'=t}^{T} \gamma^{t'-t} r_{t'}$ is the discounted reward from timestep $t$, $\gamma$ is the discount factor and $T$ is the total number of timesteps of the sampled path. $\phi$ is updated by gradient descent w.r.t. Eq. 2. Both the policy and the value function are parameterized with fully-connected neural networks with two hidden layers of 64 units.
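A sketch of such a 64-unit two-hidden-layer network and the value loss of Eq. 2 (PyTorch and tanh activations are assumptions for illustration; the paper does not specify the framework or activation):

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=64):
    """Fully-connected network with two hidden layers of 64 units."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.Tanh(),
        nn.Linear(hidden, hidden), nn.Tanh(),
        nn.Linear(hidden, out_dim),
    )

def value_loss(value_net, states, discounted_returns):
    """Squared error between V_phi(s_t) and the discounted return (Eq. 2)."""
    values = value_net(states).squeeze(-1)
    return ((values - discounted_returns) ** 2).mean()
```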

IV Policy Learning of Dynamic Grasping

In this section, we present the details of learning a unified policy for reaching, grasping and re-grasping in simulation as demonstrated in Algorithm 1. First, we introduce the simulation environment. Afterwards we describe the control framework. Then, we explain the definition of the state and action space, as well as the design of the reward. Finally, we introduce the structure of training episodes.

1: for each training iteration do
2:     if normal training episode then
3:         Normal initial state with random object position
4:     else
5:         One special initial state in Fig. 6
6:     end if
7:     for t = 1, ..., T do
8:         Get the current state s_t
9:         Run policy π_θ and get the action a_t
10:        Execute a_t based on low-level controller in Fig. 4
11:        Compute reward r_t with Eq. 3
12:        Collect tuple (s_t, a_t, r_t)
13:    end for
14:    Estimate the advantages Â_t
15:    Update θ by stochastic gradient ascent w.r.t. Eq. 1
16:    Update φ by stochastic gradient descent w.r.t. Eq. 2
17: end for
Algorithm 1 Policy learning for dynamic grasping.

IV-A Simulation Setup

For simulating stable, realistic contacts and dynamics, we use the physics simulation engine MuJoCo [2012_Todorov]. We use the Barrett Hand as the end-effector, attached to the Franka Emika robot arm. For policy training, the target object is a cube; the simulation setup is shown in Fig. 2.

In simulation, the agent learns the proper motion of the robot hand, including the translational and rotational hand movement and the finger actions, to accomplish the combined task of reaching, grasping and re-grasping an object. The action space only involves the motion of the end-effector. The motion of the arm is generated by an off-the-shelf inverse kinematics solver and motion planner. Since we set early termination signals for self-collision and hitting joint limits, and the object is placed within a limited workspace, the learned motion is safe and collision-free.

In this paper, for policy training, we use a cube as the grasping target. We utilize geometry key points, which can be seen as a sparse point cloud, to convey the object surface information. The key points consist of the vertices, the centres of the facets, the centres of the edges and the geometry centre, totaling 27 points. The geometry centre is estimated from the other key points. For the testing objects displayed in Fig. 3, we utilize the geometry key points of their bounding boxes as the observation information. Acquisition of the bounding box and geometry key points can be achieved with Computer Vision methods [2017_Han]. Considering the difficulty of obtaining a complete point cloud in real experiments, we only utilize a partial point cloud of the object. We also introduce sensor noise into the positions of the geometry key points. More details are in Section V-G.
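For illustration, the 27 key points of an axis-aligned cube (8 vertices, 12 edge centres, 6 facet centres and the geometry centre) can be enumerated as below; this is only a sketch of the observation encoding, with the cube assumed axis-aligned for simplicity:

```python
import itertools
import numpy as np

def cube_key_points(center, edge_length):
    """Return the 27 geometry key points of an axis-aligned cube as a (27, 3) array."""
    half = edge_length / 2.0
    # All 3^3 = 27 offset combinations: 8 vertices (no zero coordinate),
    # 12 edge centres (one zero), 6 facet centres (two zeros), 1 geometry centre.
    offsets = itertools.product((-half, 0.0, half), repeat=3)
    return np.array([np.asarray(center, dtype=float) + off for off in offsets])

key_points = cube_key_points(center=(0.4, 0.0, 0.03), edge_length=0.06)
assert key_points.shape == (27, 3)
```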

Fig. 2: Barrett Hand attached to the Franka Emika arm and the target object placed on the ground. Blue points: geometry key points of the object's bounding box. Green points: vertices forming the hand convex hull. Red regions on the hand: virtual sites for detecting contact and attaching force sensors.
Fig. 3: The training cube and testing objects with virtual bounding boxes.
Fig. 4: Block diagram of the control framework.

IV-B Control Framework

The control framework is designed in a hierarchical architecture, consisting of a high-level and a low-level loop, as shown in Fig. 4. The sensory feedback processor filters the raw robot states and extracts the geometry key points from the object's bounding box, estimating the object's geometry centre. The high-level controller is responsible for producing actions based on the processed environment state, at a frequency of 50 Hz. Given the actions output by the high-level controller, the 500 Hz low-level proportional-derivative (PD) controller computes the target hand pose and velocity, as well as the finger joint torques, and feeds them into the physics simulation.
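A minimal sketch of this two-rate loop (one 50 Hz policy step per ten 500 Hz PD steps); the `sim` and `pd_controller` interfaces are hypothetical placeholders, not the authors' code:

```python
def control_loop(policy, pd_controller, sim, seconds=8.0,
                 high_rate_hz=50, low_rate_hz=500):
    """Run the hierarchical control loop: high-level actions, low-level PD tracking."""
    substeps = low_rate_hz // high_rate_hz              # 10 low-level steps per action
    for _ in range(int(seconds * high_rate_hz)):
        state = sim.get_processed_state()               # filtered robot + key-point state
        action = policy(state)                          # hand velocities and finger torques
        for _ in range(substeps):
            torques = pd_controller(action, sim.get_raw_state())
            sim.step(torques)                           # advance the physics at 500 Hz
```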

IV-C State and Action Space

We use the term hand for the execution entity of the reaching motion and fingers for the entity of grasping. To learn the synergy between hand and fingers, the agent needs full awareness of the hand and finger states. Therefore it takes as input a state composed of: the three-dimensional object position, which is estimated from the object's bounding box; the hand rotation angle around the vertical axis; the finger joint angles; the distances between each fingertip and the nearest object key point, which we find helpful when the hand is close to the object; and the contact force measurements. Leveraging contact force feedback in learning can improve the performance of the learned grasp [2019_ICRA_mbkrb]. The force sensors are attached to the inner side of the hand as shown in Fig. 2 and return the magnitude of the contact force.

The grasp detection problem concerns choosing the proper grasp pose and contact points based on the object's shape, which is beyond the scope of this paper. Therefore, to focus on the reactive control of reaching, grasping and re-grasping, we limit the DoF of the end-effector so that it can only approach and grasp the object laterally. We set the palm to always face laterally, and constrain the translational motion of the end-effector to a two-dimensional plane at a certain height. Only the rotation around the vertical axis is allowed. Hence, the action consists of the hand translational velocities, the rotational velocity around the vertical axis, and the finger torques.

Considering the implementability of the policy in the real world, the workspace is bounded within the reach of the robot arm. The maximum end-effector velocity is bounded within 1 m/s and the finger torque is bounded within 2 Nm.
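A sketch of how the observation vector and the bounded action could be assembled; the field names and ordering are illustrative assumptions, only the bounds come from the text above:

```python
import numpy as np

MAX_HAND_VEL = 1.0        # m/s (and, as an assumption, rad/s) bound on hand velocities
MAX_FINGER_TORQUE = 2.0   # Nm bound on finger torques

def build_state(obj_pos, hand_yaw, finger_angles, tip_distances, contact_forces):
    """Concatenate the observation terms described above into one state vector."""
    return np.concatenate([obj_pos, [hand_yaw], finger_angles,
                           tip_distances, contact_forces])

def split_and_clip_action(raw_action):
    """Split the policy output into planar velocities, yaw rate and finger torques."""
    hand_vel = np.clip(raw_action[:3], -MAX_HAND_VEL, MAX_HAND_VEL)          # vx, vy, yaw rate
    torques = np.clip(raw_action[3:], -MAX_FINGER_TORQUE, MAX_FINGER_TORQUE)
    return hand_vel, torques
```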

(a) Reaching.
(b) Grasping.
Fig. 5: Visualization of computing the topology reward $r_{topo}$ in reaching and grasping. Blue points: object geometric key points. Green area: the hand convex hull. In Fig. 5(b), four object key points are inside the convex hull.

IV-D Reward Design

Reward design is one of the most important aspects of learning a good policy. With a poorly designed reward function, the learning may not converge to the desired policy and may lead to bad performance and safety issues [2016_Amodei]. Reward design is a way to guide the policy search with the researchers' prior knowledge. From our experience of how we reach and grasp objects such as mugs on a table, and from observations of how infants learn to reach and grasp, we propose the following reward function, which is a linear combination of multiple positive reward terms and negative penalty terms with corresponding weights $w_1, \dots, w_6 > 0$:

$r = w_1 r_{dist} + w_2 r_{ori} + w_3 r_{topo} + w_4 r_{con} - w_5 p_{col} - w_6 p_{vel}$   (3)

The different terms are computed by the following equations.

The term $r_{dist}$ penalizes the distance between the hand and the object and guides the agent to learn to approach the object. Here $h_i$, $i = 1, \dots, 4$, refers to the positions of the hand key points (the three fingertips and the centre of the palm), $M$ refers to the number of object key points and $o_j$ refers to their positions:

$r_{dist} = -\frac{1}{4M} \sum_{i=1}^{4} \sum_{j=1}^{M} \left\lVert h_i - o_j \right\rVert$   (4)

In $r_{ori}$, the vector $\hat{u}_i$ refers to the unit vector pointing from hand key point $h_i$ to the estimated object geometry centre $o_c$, and $\hat{n}_i$ refers to the normal vector of hand key point $h_i$. The dot product leads the hand to learn to face towards the object and grasp it in a proper direction:

$r_{ori} = \sum_{i=1}^{4} \hat{u}_i \cdot \hat{n}_i, \qquad \hat{u}_i = \frac{o_c - h_i}{\lVert o_c - h_i \rVert}$   (5)

In the topology reward $r_{topo}$, $M$ refers to the total number of object key points observed and $N_{in}$ refers to the number of points which are inside the three-dimensional convex hull formed by the hand and fingers. The convex hull is formed by multiple points including the fingertips, the finger joints and the four corners of the square palm. Fig. 5 shows the convex hull formed by the hand and fingers during the reaching and grasping motions:

$r_{topo} = \frac{N_{in}}{M}$   (6)
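The in-hull test behind $r_{topo}$ can be implemented, for instance, with a Delaunay triangulation of the hand points; this is a sketch, not the authors' implementation:

```python
import numpy as np
from scipy.spatial import Delaunay

def topology_reward(hull_points, object_key_points):
    """Fraction of observed object key points lying inside the hand convex hull."""
    hull = Delaunay(hull_points)                        # fingertips, finger joints, palm corners
    inside = hull.find_simplex(object_key_points) >= 0  # -1 means outside the hull
    return inside.sum() / len(object_key_points)
```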

In addition to $r_{topo}$, a contact term $r_{con}$ is added to encourage power (enveloping) grasping, which is more stable than a precision (fingertip) grasp:

$r_{con} = N_{c}$   (7)

where $N_{c}$ refers to the number of contact points between the object and the inner part of the hand. This term encourages more contact points of the hand with the object during the grasp, under the assumption that the more contact points there are, the more stable the grasp is. Note that only the contact points on the inner part of the hand are counted in $r_{con}$.

If the hand contacts the object with the outer side of the fingers, the contact points are counted in $N_{out}$, which is regarded as the penalty term:

$p_{col} = N_{out}$   (8)

A penalty on the object translational velocity $v_{obj}$ is added to prevent the hand from pushing the object away and to encourage a gentle grasping behavior:

$p_{vel} = \lVert v_{obj} \rVert$   (9)
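Putting the terms together, the total reward of Eq. 3 could be computed as in the sketch below (the weights and the exact functional forms are assumptions consistent with the descriptions above, not the authors' reported values; `topology_reward` is the helper sketched earlier):

```python
import numpy as np

def total_reward(hand_points, hand_normals, hull_points, object_key_points,
                 object_center, n_inner_contacts, n_outer_contacts, object_velocity,
                 weights=(1.0, 1.0, 1.0, 1.0, 1.0, 1.0)):
    """Weighted combination of the reward and penalty terms (Eq. 3)."""
    w1, w2, w3, w4, w5, w6 = weights
    # r_dist: negative mean distance from the 4 hand key points to the object key points
    dists = np.linalg.norm(hand_points[:, None, :] - object_key_points[None, :, :], axis=-1)
    r_dist = -dists.mean()
    # r_ori: alignment of hand-point normals with the directions towards the object centre
    to_center = object_center - hand_points
    to_center /= np.linalg.norm(to_center, axis=-1, keepdims=True)
    r_ori = np.sum(to_center * hand_normals, axis=-1).sum()
    # r_topo: fraction of object key points inside the hand convex hull
    r_topo = topology_reward(hull_points, object_key_points)
    # r_con / p_col: inner-hand contacts are rewarded, outer-finger contacts penalized
    r_con, p_col = n_inner_contacts, n_outer_contacts
    # p_vel: magnitude of the object translational velocity
    p_vel = np.linalg.norm(object_velocity)
    return w1*r_dist + w2*r_ori + w3*r_topo + w4*r_con - w5*p_col - w6*p_vel
```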
(a) Potential collision.
(b) Unreliable grasp points.
Fig. 6: Two challenging initial states during training.

IV-E Learning Episode

A single learning path lasts for 400 time steps in simulation. An episode starts by randomizing the object position: the cube is placed at a random location on the ground within the operating range. In order to enable the hand to acquire the ability to grasp moving targets, we add random disturbances to the object. The candidate disturbances consist of forces applied to the centre of mass of the object in four lateral directions. There is one disturbance period in each episode, happening at any time in the first half of it and lasting for 0.3 seconds, so the object slides in the force direction if it is not being grasped already.

With randomly added disturbances, the agent can gather enough trials to learn to track and grasp a moving target, but it usually fails to achieve the re-grasp motion if the object slips away during the grasp. Randomly distributed disturbances do not provide sufficient data points for the agent to learn to re-grasp. The re-grasp requires synergistic motion of the fingers and hand: to avoid collision between the outer side of the fingers and the object, the hand sometimes needs to move backwards to make sure there is enough space for opening the fingers. Thus, apart from the normal training episodes, we designed two initial states with special finger joints and object positions to train the re-grasp policy, as shown in Fig. 6.
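A sketch of how an episode could be initialized and the disturbance scheduled; the sampling ratio between normal and special initial states and the simulator interface are assumptions for illustration:

```python
import numpy as np

EPISODE_STEPS = 400
DISTURBANCE_STEPS = 15            # 0.3 s at the 50 Hz high-level rate
LATERAL_DIRECTIONS = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

def init_episode(sim, rng, special_state_prob=0.2):
    """Reset the simulation and schedule this episode's disturbance window."""
    if rng.random() < special_state_prob:
        sim.load_special_initial_state(rng.integers(2))   # one of the two states in Fig. 6
    else:
        sim.reset_with_random_object_position()
    start = rng.integers(0, EPISODE_STEPS // 2)           # disturbance in the first half
    direction = LATERAL_DIRECTIONS[rng.integers(len(LATERAL_DIRECTIONS))]
    return {"start": start, "end": start + DISTURBANCE_STEPS, "direction": direction}
```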

V Evaluation and Analysis

In this section, we present the results of the learned policy in simulation, and evaluate the capabilities of reaching, grasping and re-grasping with different metrics and tasks. Moreover, we discuss the necessity and effect of each reward term and of the initial training states. The evaluation tasks are: (1) static grasp with a random object position; (2) dynamic grasp, where a force with random direction is applied on the object for a certain duration; (3) re-grasp starting from the initial configurations shown in Fig. 6; and (4) dynamic re-grasp, where the object is moved out of the hand during the first grasp attempt.

                          Grasp   Lift   Shake   Recover [s]
Static Target Grasp        97%    90%    69%        -
Dynamic Grasp ()           98%    88%    74%        -
Dynamic Grasp ()           78%    78%    60%        -
Close Fingers Re-grasp    100%    92%    56%       0.92
Shallow Grasp Re-grasp    100%   100%    91%       0.69
Dynamic Re-grasp           83%    79%    56%       1.48
TABLE I: The evaluation of the policy in different tasks.
(a) Torque command of each finger.
(b) Translational velocities of the hand.
Fig. 7: The output actions of learned policy during one canonical trial of grasping a static object placed at a random position.

V-A Evaluation Metrics

We evaluate the performance with two metrics: the success rate in lift-and-shake tests, and the recovery time, used specifically for the re-grasp motion.

In the lifting test, the robot first lifts up the hand, and if the object can be held for ten seconds, we regard the test as a success. In the shaking test, the robot first lifts the hand and then disturbance forces with random directions are applied on the object. If the grasp lasts for ten seconds, the trial is marked as a success. Two different force magnitudes are used. We record the time that the policy spends to recover from a failure and accomplish a successful re-grasp attempt. The recovery time is the average time required for the agent to achieve a robust grasp which passes the lift-and-shake tests.
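A sketch of the lift-and-shake test logic; the timing constants match the description above, while the simulator interface is a hypothetical placeholder:

```python
def lift_and_shake_test(sim, shake_force=None, hold_seconds=10.0, dt=0.02):
    """Return True if the object stays grasped while lifting (and optionally shaking)."""
    sim.lift_hand()
    for step in range(int(hold_seconds / dt)):
        if shake_force is not None and step % 50 == 0:   # periodic random-direction pushes
            sim.apply_random_force_on_object(shake_force)
        sim.step()
        if not sim.object_still_grasped():               # e.g. contact lost, object dropped
            return False
    return True
```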

In Table I we present the evaluation of different tasks with different metrics. The results indicate that our approach can generate a robust control policy which can react to changes rapidly and execute re-grasp in case of failures, even under difficult configurations as shown in Fig. 6. The achieved grasps are stable for lifting in most cases and can resist external disturbances to some extent.

V-B Static Target

In this test, the object is placed at a random position, and the task is to reach and grasp it. The first row of Table I indicates that the learned policy is able to achieve a stable grasp for a static target.

Fig. 7 demonstrates the actions in a typical static grasp task. When the hand is relatively far from the object, the finger torques are negative, which means the fingers are extending and opening, enlarging the grasping area. The moving velocity of the hand reaches its maximum at the beginning and converges to zero at the end, indicating that the agent learns to slow down when the hand approaches the object. This behaviour coincides with how humans would grasp an object.

V-C Moving Target

In reinforcement learning, at every control step, the agent takes the current observation of the environment as input and outputs the corresponding actions. In this paper, the agent has no perception of the object's motion status such as velocity or acceleration, and is therefore unable to predict the object's future position. Without knowing the object velocity and acceleration, the agent cannot learn an optimal policy for grasping a moving object. However, thanks to the randomization of the object position and the disturbances applied to the object during training, combined with a sufficiently high control frequency, the learned sub-optimal policy has decent tracking capability and is able to dynamically re-adjust to grasp a moving object, as long as the object's velocity is within the agent's operational velocity.

Fig. 1 demonstrates the motion of grasping a moving object. In this test setting, the object is located at a random position. A force pointing in a random direction within the plane is applied on the object's centre for 0.3 seconds, so the object starts moving at the beginning of the trial. According to Table I, the learned policy is able to adjust online based on the change of the object position.

(a) Collision-free re-grasping from the initial configuration where the object is close to the fingers.
(b) Collision-free re-grasping with accurate finger-hand coordination from the initial configuration of an unreliable grasp.
(c) Collision-free re-grasping of a dynamically moving object.
Fig. 8: The snapshots of re-grasping motions generated from the learned policy.

V-D Re-grasp Test

In this test setting, in order to evaluate the ability to re-open the fingers and apply another grasp attempt, the object is moved out of the hand at the moment when the distance between the fingertips and the object falls below a threshold. There are two potential consequences. First, the object moves completely out of the cage formed by the fingers and palm. Then the fingers stop closing and re-open. If the object blocks the fingers from opening, the hand needs to retract backwards to create enough space before re-grasping. In the second situation, the object is caught by the fingertips while moving. This causes an unreliable shallow grasp, so the agent learns to release and re-grasp the object.

Fig. 8(a) and Fig. 8(b) display the learned re-grasping motions from the two aforementioned challenging configurations. In Fig. 8(c), the object suddenly moves away to the right from the second snapshot onwards, and as a responsive coordination, the fingers re-open and the hand chases the object until executing another grasping attempt in close proximity. The agent learns an effective manner of returning to a proper pre-grasp posture without any collision or redundant motion. According to the results listed in Table I, the learned policy is able to recover from failures within 2 seconds, which is far more efficient than resetting the whole control pipeline, and achieves a stable grasp which can pass the lift-and-shake tests with high success rates.

Fig. 9: The learning curves in a typical learning process. The blue curve is the sum of all reward terms.

V-E Ablation Study

In order to show the necessity and effectiveness of each reward term, we remove them from the reward function one at a time, and compare the policies learned with and without them. Fig. 9 shows the learning curves of each reward term, from which we can tell that every term contributes to the policy learning. Table II demonstrates the capabilities of the different policies trained with incomplete reward functions.

The terms $r_{dist}$ and $r_{ori}$ affect the learning first and guide the agent to approach the target. Without $r_{dist}$, the hand cannot learn to reach the object. $r_{ori}$ guides the fingers to stay open when approaching the target; after removing it, the agent fails to learn the proper grasping motion.

The terms $r_{topo}$ and $r_{con}$ come into effect afterwards and mainly guide the learning of the grasping motion. $r_{topo}$ plays an important role in the early stage of learning to grasp, after the hand has learned to approach the object. It guides the fingers to wrap around the object, and also compensates the penalty on object velocity caused by the random actions from the policy search. After removing $r_{topo}$, the hand learns to stay at a certain distance from the object with the fingers open, in order to avoid the penalty on object velocity. $r_{con}$ encourages the palm and fingers to contact the object and leads to a firm grasp. After removing $r_{con}$, the hand learns to stay at a closer distance from the object and the fingers form a cage around it, instead of contacting and grasping it.

The penalty on object velocity $p_{vel}$ is responsible for achieving a gentle grasp motion and preventing the hand from moving the object. The agent can still learn to reach and grasp without this term, but the hand will move the object randomly after a successful grasp. The collision penalty $p_{col}$ and the two special initial states in Fig. 6 are essential for learning collision-free re-grasping. With $p_{col}$ and the special initial states, the hand moves backwards to create enough space for the fingers to open. After removing them, the hand does not move backwards and the outer part of the fingers collides with the object while opening.

TABLE II: Capability tests (Reach, Grasp, Re-grasp, Lift) of policies learned with incomplete reward functions, i.e. with $r_{dist}$, $r_{ori}$, $r_{topo}$, $r_{con}$, $p_{vel}$, $p_{col}$ or the special initial states removed in turn.

V-F Evaluation on unseen objects

In order to evaluate the policy's robustness and generalization ability, we repeat the static grasping task with the four different unseen objects in Fig. 3: a cylinder, a can, a mustard bottle and a wood block. The meshes of the objects are imported from the YCB database [2015_YCB]. Since the policy takes as input the geometry key points of the processed bounding box of the target object, as well as the in-hand contact forces, it shows promising capability of grasping columnar objects with irregular shapes and various sizes.

                          Peak Finger Joint Torque [Nm]                  Peak Hand Velocity
                          Finger 1 Base  Finger 2 Base  Finger 3 Base   X [m/s]  Y [m/s]  Yaw [rad/s]
Limit                          2              2              2            1        1         1
Static Grasp                  1.81           1.01           1.52         0.86     0.81      0.72
Dynamic Grasp ()              1.59           0.77           1.08         0.75     1         0.95
Dynamic Grasp ()              1.69           0.78           0.96         0.78     0.96      0.81
Close Fingers Re-grasp        1.49           1.91           1.71         0.75     1         0.91
Shallow Grasp Re-grasp        1.12           0.91           0.99         0.81     0.62      0.54
TABLE III: Peak finger torques and hand velocities for different scenarios.
Fig. 10: Comparison of ideal and realistic noisy partial observations of the object's key points: (a) complete, noise-free point cloud; (b-d) randomized noisy partial observations used in simulation for training and testing.

V-G Robustness on Partial Visual Observation

In real-world experiments, it is hard to obtain a noise-free, complete point cloud of the target. Therefore, considering the sim-to-real transfer, during the policy training we only utilize half of the geometry key points to encode the object's surface information and estimate the object position. We also add sensor noise, sampled from a normal distribution, to the key point positions. Fig. 10 shows the comparison of the complete point cloud of the cube and examples of the noisy partial point clouds applied in the policy training. The results suggest that the learned policy is robust to partial, noisy visual input.
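A sketch of this randomized partial, noisy key-point observation; the noise scale below is a placeholder, since the standard deviation is not given in the extracted text:

```python
import numpy as np

def observe_key_points(key_points, rng, keep_fraction=0.5, noise_std=0.005):
    """Randomly keep half of the key points and add Gaussian noise (noise_std assumed)."""
    n_keep = max(1, int(len(key_points) * keep_fraction))
    idx = rng.choice(len(key_points), size=n_keep, replace=False)
    observed = key_points[idx] + rng.normal(0.0, noise_std, size=(n_keep, 3))
    estimated_center = observed.mean(axis=0)     # rough estimate of the object geometry centre
    return observed, estimated_center
```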

V-H Feasibility of Hardware Experiment

Although the training and evaluation tests are conducted in simulation, we take the realism of the generated policy into consideration by discouraging aggressive motions through penalties and by restricting the hand velocities and finger torques. We also purposely introduce sensor noise in the state observation and only use partial key points of the object for the feedback. These realistic settings in training pave the way for the sim-to-real transfer. Table III displays the average peak finger joint torques and hand velocities in the different evaluation tasks. The table shows that the motions generated by the learned policy are within the constraints and have enough safety margin. For implementation on a different hardware platform, we only need to match the limits used in simulation with those of the real robots.

VI Conclusion

In this paper, we used model-free reinforcement learning to acquire a combined reactive control policy for reaching and grasping. In simulation, we used a cube as the grasping target and a three-fingered robot hand as the end-effector. The agent explores and optimizes the policy through trial and error, guided by a well-defined reward function. Apart from the initial learning state with a random object configuration, we also incorporate two challenging initial states to train the re-grasping ability by inducing failed attempts. The training results show that the learned agent can reach and grasp a static target, grasp a moving object, and generate a collision-free re-grasp online after a failure. Through ablation experiments, we demonstrated how each reward term and the special initial states improve the capability and robustness of the learned policy.

As for future work, we will use a real robotic hand mounted on the Franka robot arm to transfer the learned policy into reality, which we already took into account in the training phase by applying the constraints on the hand motion imposed by the Franka arm. Further research will also be done on robot perception to employ more complicated visual feedback in the reinforcement learning framework.

References