ContactGrasp: Functional Multi-finger Grasp Synthesis from Contact

04/07/2019 ∙ by Samarth Brahmbhatt, et al.

Grasping and manipulating objects is an important human skill. Since most objects are designed to be manipulated by human hands, anthropomorphic hands can enable richer human-robot interaction. Desirable grasps are not only stable, but also functional: they enable post-grasp actions with the object. However, functional grasp synthesis for high-DOF anthropomorphic hands from object shape alone is challenging. We present ContactGrasp, a framework that allows functional grasp synthesis from object shape and contact on the object surface. Contact can be manually specified or obtained through demonstrations. Our contact representation is object-centric and allows functional grasp synthesis even for hand models different from the one used for demonstration. Using a dataset of contact demonstrations from humans grasping diverse household objects, we synthesize functional grasps for three hand models and two functional intents.


I Introduction

Household objects are designed for use and manipulation by human hands. Humans excel at grasping and then performing actions with these objects. Enabling this skill for robots has the potential to unlock more productive and natural human-robot interaction. We take a step towards this by proposing ContactGrasp, a framework to synthesize functional grasps. Using object shape and contact demonstrations, ContactGrasp allows functional grasp synthesis for kinematically diverse hand models.

Recent work on robotic grasping of household objects focuses on using large amounts of data collected by trying random grasping actions, often with parallel-jaw or suction-cup end effectors [1, 2, 3, 4]. This approach generates abundant self-supervised data that enables training of robust grasp policies. However, the simplicity of the end effectors is largely what enables this data-collection strategy. In addition, such end effectors are mostly suited to pick-and-place tasks and do not allow performing a post-grasp action (e.g. handing an object off, clicking a camera shutter button, switching on a flashlight). Functionality of a grasp is important because 1) it enables more natural collaboration in activities with humans, and 2) most household objects are designed with specific functional grasps in mind (e.g. a spray bottle has a contoured neck and trigger, a hammer has a long handle).

A large body of work addresses grasp synthesis from object geometry. These approaches model the hand as a kinematic tree of rigid mesh parts and intelligently sample its configuration space to synthesize a set of stable grasps [5, 6, 7]. However, these grasps lack the notion of functionality and most of them are far from how a human would grasp the object (see Figure 3 for examples).

Other approaches have utilized human demonstrations to close this gap. The human hand pose (captured by data gloves in [8], and by visual recognition in [9]) is mapped by hand-engineered transformations to a similar robot end-effector pose. Kinesthetic teaching [10] can deliver demonstrations directly in the target hand model space. However, such kinematic re-targeting methods are tied to a specific end-effector and require careful analysis to develop mappings to new end-effectors. Approaches that record the human hand pose suffer from the additional difficulty of orienting the hand pose w.r.t. the object (which requires embedding an additional 6-DOF magnetic tracker in the object [11]). In addition, they lack a clear objective to reproduce the demonstrated contact, as we show in Section VI. Analysis of contact during human grasping has shown that humans prefer to contact specific areas during functional grasping [12].

This motivates the development of an object-centric approach that is not tied to a specific end-effector, and that emphasizes the reproduction of demonstrated contact. Towards this end, we propose ContactGrasp, a framework which synthesizes grasps from both object geometry and contact on the object surface. Contact can be specified manually, through human demonstrations (e.g. ContactDB [12]), or a combination of both. We show in Section VI that ContactGrasp can be used to synthesize grasps that reproduce the demonstrated contact for multiple different hand models. Grasp synthesis from contact has the drawback that multiple hand configurations can sometimes result in the same contact pattern. To address this, ContactGrasp adopts a sample-and-rank approach which outputs a ranked set of grasps. We show qualitatively and quantitatively in Section VI that the desired grasp can be found among the top ranked grasps in this set.

To summarize, we make the following contributions in this paper:

  • Develop a multi-point contact representation that supports efficient grasp synthesis.

  • Propose a sample-and-rank approach for functional grasp synthesis from object shape and contact that can work with multiple hand models.

II Related Work

Grasp synthesis has been widely studied from many perspectives. Analytic approaches [13, 14, 15, 16, 17, 18, 19, 20] synthesize grasp(s) from object shape, which usually guarantee stability within their set of assumptions. These assumptions include perfect object models, rigid body approximation of the hand, Coulomb friction, simplified contact models, etc. While such approaches contribute valuable theoretical analysis of grasping, their simplifying assumptions are often quite restrictive. This makes purely analytic algorithms difficult to deploy in the real world.

In contrast, data-driven approaches rely on machine learning techniques to learn features in some representation of the object (RGB image, 3D mesh, etc.) that can be used to predict grasps. For example, Li et al. [21] compute shape features for both objects and grasping hands, and use nearest-neighbor retrieval to synthesize grasps for novel objects. Newer deep-learning based algorithms typically require large amounts of training data, which is expensive to collect and label. In addition, they are limited to simple end effectors like parallel-jaw grippers to enable efficient training data collection. For example, [22, 23, 24] predict grasps for a parallel-jaw gripper from a dataset of manually annotated images. Some recent works like Pinto et al. [4] have used self-supervision to automate data collection, again for a simple parallel-jaw gripper. We refer the reader to [25] for an extensive survey of data-driven grasp synthesis.

Hybrid approaches use analytic criteria (e.g. [26, 27]) to sample a large number of grasps in a simulator like GraspIt! [28]. Real-world demonstrations in various forms are then used to process these samples: filter them, or train a machine learning algorithm to predict success. For example, Mahler et al. [1, 2, 3] and Pinto et al. [4] execute sampled grasps with a robot and record success/failure. Song et al. [29] learn a Bayes net to jointly model the post-grasp task and a discretized hand pose. Synthetic data generated using a grasp planner is labeled manually for post-grasp task suitability. However, the algorithm is not aware of contact, and the pose discretization often results in predictions that are not in contact with the object.

Grasp Synthesis from Contact: The work of Hamer et al. [30] is perhaps closest to ours. They record human demonstrations of grasping by in-hand scanning to obtain both object and hand pose. Contact is approximated as a single point per fingertip. Contact points are aggregated on the object surface and used to form a prior for hand pose synthesis and tracking. Varley et al. [31] replace human demonstrations with synthetic data by running a grasp planner in simulation and recording the fingertip contact points. Ye et al. [32] develop an algorithm to sample physically plausible contact points between the hand and the object, given the wrist and object pose from a motion-capture system. These sampled points are then used to synthesize realistic-looking hand poses. In contrast, ContactGrasp allows realistic multi-point contact (see Figure 1), allows using kinematically diverse hand models, and demonstrates good performance across more numerous and complex objects.

III Contact Model and Human Demonstrations

As mentioned in Section I, ContactGrasp can leverage human demonstrations of grasp contact to synthesize similar grasps for various hand models. It has been shown that humans contact grasped objects not only with the fingertips, but also with the palm and non-tip areas of the fingers. Hence our contact model and grasp synthesis algorithm support multi-point contact.

We define the contact map as a set of points sampled uniformly at random on the object surface, each labelled with a binary contact value: attractive or repulsive. Points contacted in the demonstration are marked attractive, while all others are marked repulsive. Repulsive points allow ContactGrasp to exploit negative information [33]. Figure 1 shows an example contact map for the ‘flashlight’ object, with attractive points in green and repulsive points in red.

Fig. 1: Contact map construction for the ‘flashlight’ object from ContactDB human demonstration. Points are randomly sampled on the object surface. green: attractive, red: repulsive.

III-A Human Contact Demonstrations

The contact model specified above supports manual specification. However, most of our experiments are performed using real-world contact maps from the ContactDB dataset [12]. ContactDB uses a thermal camera to observe the thermal after-prints left by heat transfer from hand to object during grasping, and is thus able to texture the object mesh surface with high-resolution contact maps. Participants grasp the objects with one of two post-grasp functional intents: handoff or use. Since ContactDB contact maps have continuous values, we threshold them at a value τ to determine whether a point is attractive or repulsive (τ = 0.3 in our experiments; see Section VI). Figure 1 shows an example of a human contact demonstration from the ContactDB dataset.

c(p) = attractive if the demonstrated contact value at p is at least τ; repulsive otherwise    (1)
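The thresholding step above can be sketched in a few lines. This is a minimal illustration, assuming contact intensities are given as a NumPy array; the function and variable names are hypothetical, not from the authors' code:

```python
import numpy as np

def build_contact_map(points, contact_values, tau=0.3):
    """Label sampled surface points attractive (True) or repulsive (False).

    points: (N, 3) points sampled uniformly at random on the object surface.
    contact_values: (N,) continuous contact intensities from a demonstration.
    tau: threshold on the contact value (0.3 in the paper's experiments).
    """
    attractive = contact_values >= tau
    return points, attractive

# Toy usage: four sampled points, two of which were touched in the demo.
pts = np.zeros((4, 3))
vals = np.array([0.9, 0.1, 0.5, 0.0])
_, labels = build_contact_map(pts, vals)
print(labels.tolist())  # [True, False, True, False]
```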

IV Hand Models

Fig. 2: Hand models used in our experiments, along with joint axes and special thumb contact point. Left: HumanHand [28], middle: Allegro Hand [34], right: Barrett Hand [35]

Given the contact map on the object surface, an articulated hand model is used to set up an optimization problem that seeks a hand pose agreeing with the observed contact map. We use three kinematically diverse hand models to show that the object-centric contact representation of ContactGrasp allows grasp synthesis with diverse hand models (see Table I and Figure 2). Each model is a kinematic tree, with parts modelled by rigid meshes. In addition to the articulation DOFs, the overall placement of the hand in 3D space is defined by a 6-DOF rigid body transform T. We denote the full pose of the hand as θ ∈ R^(6+d), where d is the number of articulation DOFs of the hand. We compute a signed distance field (SDF) around each hand part and attach it to the part’s local coordinate system to support hand pose optimization. See Section V for further details.

Model Fingers Joints Articulation DOFs
HumanHand [28] 4 fingers, 1 thumb 15 20
Allegro [34] 3 fingers, 1 thumb 12 16
Barrett [35] 3 digits 7 4
TABLE I: Hand models used in our experiments.
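The pose parameterization described above can be made concrete with a small sketch. The class below is hypothetical and only illustrates the 6 + d DOF layout; the actual implementation builds on DART's kinematic trees:

```python
import numpy as np

class HandPose:
    """Full hand pose: a 6-DOF rigid palm transform plus d articulation DOFs."""

    def __init__(self, num_articulation_dofs):
        self.palm_pose = np.eye(4)             # rigid transform T as a 4x4 matrix
        self.joints = np.zeros(num_articulation_dofs)

    @property
    def dim(self):
        """Total number of DOFs the optimizer searches over."""
        return 6 + self.joints.size

# The three models of Table I differ only in their articulation DOFs.
for name, dofs in [("HumanHand", 20), ("Allegro", 16), ("Barrett", 4)]:
    print(name, HandPose(dofs).dim)  # 26, 22, and 10 total DOFs respectively
```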

V ContactGrasp: Grasp Synthesis

Fig. 3: Overview of the ContactGrasp algorithm. GraspIt! [5] is used to sample random grasps for the object geometry. These are then refined and ranked for agreement with a human-demonstrated contact map to synthesize functional grasps.

In this section, we describe the algorithm to synthesize grasps from object geometry and a contact map. Grasp synthesis consists of estimating the full configuration θ of the hand, which includes the 6-DOF ‘palm’ pose as well as the joint values. This is done through an appropriately initialized nonlinear optimization. Figure 3 shows an overview of the entire algorithm.

V-A Grasp Optimization

Intuitively, the objective of this optimization is to encourage contact at attractive points and discourage contact at repulsive points in the contact map. The full objective function consists of three terms, inspired by [36] (see Figure 5 for a visual example of each term):

V-A1 Grasp Term

Fig. 4: Geometry of the activation function for a repulsive point in the contact map of an object.
Fig. 5: Top: various factors involved in grasp optimization. Bottom: optimized result.

The grasp term attracts (resp. repels) the closest hand segment to every attractive (resp. repulsive) point in the contact map. For a given contact map C and hand pose θ, the term is stated as:

(2)

where s(p) is the index of the hand segment closest to point p, φ_s is the signed distance function (SDF) associated with hand segment s, and 1[·] is the indicator function. λ_attr and λ_rep are hyperparameters controlling the strength of attractive and repulsive points. However, Eq. 2 unnecessarily penalizes some hand poses. As shown in Figure 4, we want to repel a hand part away from a repulsive point p only if the part is directly above p, i.e. the vector connecting p to the nearest point on the hand is parallel to the surface normal n_p at p. Since the SDF measures Euclidean distance, it penalizes a nearby hand part even if that part is not directly above p. Hence we modify the activation condition for repulsive points in Eq. 2 as follows, taking the surface normal at p into consideration (see also Figure 4):

(3)

where

(4)
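The modified activation condition can be sketched as follows: a repulsive point contributes to the objective only when the nearest hand point lies within a distance band and roughly along the point's surface normal. This is an illustrative reading of Eqs. 3-4; the thresholds and function names are assumptions, not the paper's values:

```python
import numpy as np

def repulsive_active(p, n_p, nearest_hand_pt, dist_thresh=0.01, angle_thresh_deg=30.0):
    """Activate the repulsive penalty only if the hand part is 'directly above' p.

    p: repulsive point on the object surface; n_p: unit surface normal at p;
    nearest_hand_pt: closest point on the hand surface to p.
    """
    v = nearest_hand_pt - p                       # vector from p to the hand (cf. Eq. 4)
    dist = np.linalg.norm(v)
    if dist < 1e-9:
        return True                               # hand touches p: definitely penalize
    if dist > dist_thresh:
        return False                              # too far away to matter
    cos_angle = float(np.dot(v / dist, n_p))      # parallel to the normal => cos ~ 1
    return cos_angle >= np.cos(np.deg2rad(angle_thresh_deg))

p = np.zeros(3)
n = np.array([0.0, 0.0, 1.0])                     # surface normal at p
print(repulsive_active(p, n, np.array([0.0, 0.0, 0.005])))  # directly above: True
print(repulsive_active(p, n, np.array([0.005, 0.0, 0.0])))  # beside, not above: False
```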

V-A2 Thumb Contact Term

It is well known that the thumb is especially important in human grasps [37]. To account for this, we specify a point p_t on the thumb (or on the part most closely resembling a thumb) of each hand model (yellow dots in Figure 2). The thumb contact term encourages p_t to be in contact with the object:

(5)

where φ_obj is the signed distance function associated with the object, and the hyperparameter λ_thumb controls the strength of this term.

V-A3 Intersection Term

The intersection term discourages intersection of the hand model with the object as well as self-intersection among segments of the hand, and is the same as the one used in [36]. Its strength is controlled by the hyperparameter λ_inter.

V-A4 Optimization

The full objective function for grasp optimization is given by

E(θ; C) = E_grasp(θ; C) + E_thumb(θ) + E_inter(θ)    (6)

where the three terms are the grasp, thumb contact, and intersection terms defined above.

We use Dense Articulated Real-time Tracking (DART) [33] to minimize Eq. 6 and obtain the optimized hand pose. Specifically, we modify the Contact Prior mechanism in DART [36] to 1) support repulsive points, and 2) remove the depth-map observation term. DART approximately minimizes Eq. 6 by running the Levenberg-Marquardt algorithm (see [33] for more details).
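Eq. 6 simply sums the three terms described above. A minimal sketch follows; the individual energy functions are placeholders standing in for Sections V-A1 to V-A3, not DART's actual interface:

```python
def total_energy(theta, contact_map, e_grasp, e_thumb, e_inter):
    """Eq. 6: total objective = grasp term + thumb contact term + intersection term.

    Each e_* argument is a callable returning that term's (already weighted) energy.
    """
    return e_grasp(theta, contact_map) + e_thumb(theta) + e_inter(theta)

# Toy usage with constant stand-in energies.
print(total_energy(None, None,
                   lambda t, c: 1.0,   # grasp term
                   lambda t: 2.0,      # thumb contact term
                   lambda t: 3.0))     # intersection term -> total 6.0
```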

V-B Initializing the Grasp Optimization

Since the Levenberg-Marquardt grasp optimization is local, and the search in the high-dimensional hand pose space has many local minima, providing good initialization to the optimizer is important. Towards this end, we develop an algorithm to sample diverse grasps for the object geometry, agnostic to the contact map, using the publicly available GraspIt! grasp planner [28, 5]. These are later ranked for agreement with the contact map using the residual after minimizing Eq. 6.

Fig. 6: Functional HumanHand grasps synthesized by ContactGrasp. Top: Use, bottom: Hand-off.
Fig. 7: Functional Allegro hand grasps synthesized by ContactGrasp. Top: Use, bottom: Hand-off.
Fig. 8: Functional Barrett hand grasps synthesized by ContactGrasp. Top: Use, bottom: Hand-off.
Fig. 9: Top-ranked grasps from GraspIt! [5], which are agnostic to the contact map and hence not functional, especially for the ‘use’ intent. For example, the flashlight button is not accessible by the thumb, a finger touches the knife blade, and the cellphone screen and wineglass opening are blocked by the palm. Top: HumanHand, bottom: Barrett Hand.

Specifically, we seed the planner with a coarse grasp (a, ρ, d), where a is the approach point on the object surface, ρ is the roll angle around the approach vector, and d is the distance from the object surface. We use the negated surface normal at the approach point as the approach vector, making the hand approach anti-parallel to the object surface normal (similar to [38]). Approach points are sampled uniformly at random over the object surface, whereas ρ and d are set from the discrete sets {0°, 90°, 180°, 270°} and {0 cm, 1 cm, 2 cm, 3 cm} respectively. GraspIt!’s Simulated Annealing planner then runs for 45K iterations for each seed to optimize the Contact Energy cost function [5], exploring in a cone around the specified approach vector. We pick the top 2 grasps after each run of the planner and add them to the set G of sampled diverse grasps. Note that G contains full grasps, since GraspIt! converts coarse grasps to full grasps through its planner. Figure 3 shows some grasps sampled in this manner for a cellphone.
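The seeding scheme above can be sketched as an enumeration over approach points, roll angles, and surface offsets. The deterministic first-k choice of approach points below is for illustration only (the paper samples them uniformly at random), and all names are hypothetical:

```python
import itertools
import numpy as np

def make_seeds(surface_points, surface_normals, num_approach=8):
    """Enumerate coarse grasp seeds (approach point, approach vector, roll, offset)."""
    rolls_deg = [0, 90, 180, 270]        # discrete roll angles around the approach vector
    offsets_cm = [0, 1, 2, 3]            # discrete distances from the object surface
    seeds = []
    for i in range(min(num_approach, len(surface_points))):
        a, n = surface_points[i], surface_normals[i]
        approach_vec = -n                # approach anti-parallel to the surface normal
        for roll, off in itertools.product(rolls_deg, offsets_cm):
            seeds.append((a, approach_vec, roll, off))
    return seeds

pts = np.random.rand(8, 3)               # stand-in for points sampled on the mesh
normals = np.tile([0.0, 0.0, 1.0], (8, 1))
print(len(make_seeds(pts, normals)))     # 8 approach points x 4 rolls x 4 offsets = 128
```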

The last step is to refine and rank the grasps in G by running the grasp optimization on each of them, and treating the residual after convergence (Eq. 6) as the negative score. Most of the grasps in G result in large residuals because they are agnostic to the contact map and hence can lie outside the basin of convergence of the local optimization. This ranking process therefore causes the ‘correct’ hand pose to appear among the top-ranked grasps, from which it can be easily identified.
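The refine-and-rank step can be sketched as below, with `refine` standing in for a run of the DART optimization that returns the refined grasp and its converged residual (names hypothetical):

```python
def rank_by_residual(grasps, refine):
    """Refine each sampled grasp and sort by converged residual (lower = better)."""
    refined = [refine(g) for g in grasps]            # (refined_grasp, residual) pairs
    return sorted(refined, key=lambda pair: pair[1])

# Toy usage: fixed residuals stand in for optimization outcomes.
fake_residuals = {"g0": 5.0, "g1": 0.7, "g2": 2.1}
ranked = rank_by_residual(["g0", "g1", "g2"], lambda g: (g, fake_residuals[g]))
print([g for g, _ in ranked])  # ['g1', 'g2', 'g0']
```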

To summarize, the process described in this section recovers the full hand pose that agrees with a given contact map on an object. Manually annotating the full hand pose would be prohibitively expensive because of its high dimensionality.

VI Results

We use the ContactGrasp algorithm (Section V) to synthesize grasps for a diverse set of household objects for two different post-grasp functional intents: using the object, and handing it off. Grasps are synthesized for the three hand models described in Section IV. We use human contact demonstrations of functional grasps from the ContactDB dataset [12]. 25 objects in ContactDB have demonstrations both for using the object and for handing it off. From these, we select a subset of 19 objects (see the supplementary video for a list) for which bi-manual grasps were not observed. Note that the grasp optimization strategy described in Section V supports bi-manual grasps; however, we focus on single-handed grasps in this paper owing to the lack of left-handed hand models and the need for a more complex initialization strategy for the optimization.

We set the threshold τ on the contact map value to 0.3, the attractive contact point strength λ_attr to 150.0, the repulsive contact point strength λ_rep to 20.0, the thumb contact point strength λ_thumb to 25.0, and the intersection term strength λ_inter to 100.0.

VI-A Qualitative results

Figures 6, 7, and 8 show the synthesized grasps for the HumanHand, Allegro hand, and Barrett hand respectively, for 8 objects and 2 functional intents (see the supplementary video for other objects). They demonstrate functional grasps: for the ‘use’ intent, the flashlight button is easily accessible by the thumb, fingers rest on the mouse click buttons, and the knife is held by the handle. In contrast, the top-ranked grasps from GraspIt! are shown in Figure 9. They are stable but not functional, especially for the ‘use’ intent: the flashlight button is not accessible by the thumb, a finger touches the knife blade, and the cellphone screen and wineglass opening are blocked by the palm.

VI-B Quantitative results

Table II shows the median rank (across all objects) of the synthesized grasp, when grasps are ranked according to 1) the DART residual, and 2) GraspIt!’s Contact Energy metric [5]. The median rank is significantly lower for the former. This indicates that approaches like [5] that consider only object geometry cannot guarantee functional contact. ContactGrasp can effectively leverage these approaches to synthesize functional grasps using demonstrations.

Fig. 10: Failure cases for ContactGrasp. Left: Random grasp sampling is unlikely to produce a grasp with fingers in narrow holes for objects like scissors. Contact-based local optimization is then likely to get stuck in local minima. Right: Some end effectors lack the structure to produce complicated functional grasps e.g. using a mouse by resting a finger on the click buttons.
Intent End-effector ContactGrasp rank GraspIt! rank
use HumanHand [28] 3 (0.03%) 4484 (34.41%)
Allegro [34] 22 (0.69%) 1289 (40.28%)
Barrett [35] 6 (0.19%) 2362 (74.09%)
handoff HumanHand [28] 1 (0.01%) 3751 (38.37%)
Allegro [34] 22 (0.56%) 1258 (38.20%)
Barrett [35] 24 (0.75%) 830 (25.94%)
TABLE II: Median rank of the correct grasp (lower is better).

Table III shows further quantitative results in the form of the grasp term value (see Eq. 2) for the synthesized grasp as well as for the top-ranked grasp by GraspIt!’s Contact Energy metric. The grasp term measures the grasp’s disagreement with the demonstrated contact map, and is significantly lower for ContactGrasp grasps. This verifies that ContactGrasp produces grasps that are significantly closer to the human demonstrations than GraspIt!, which is agnostic to the demonstrations.

In addition, we implement the kinematic retargeting algorithm of Tosun et al. [39] for mapping the synthesized human hand grasps to the Allegro model. Each finger is treated as a separate kinematic chain (sampled with 50 points) to be re-targeted; the human pinky finger is discarded. The grasp term values (Eq. 2) for these mapped grasps are higher than those for the ContactGrasp Allegro grasps as well as the human grasps. This shows that deterministic mapping of grasps between hand models does not reliably reproduce contact, supporting our motivation for developing ContactGrasp.

Intent End-effector ContactGrasp GraspIt!
use HumanHand [28] -0.07 0.17
Allegro [34] -0.06 0.32
Human - Allegro [39] 0.08 0.12
Barrett [35] -0.03 0.19
handoff HumanHand [28] -0.14 0.19
Allegro [34] -0.11 0.41
Human - Allegro [39] 0.05 0.15
Barrett [35] -0.04 0.22
TABLE III: Disagreement of the ContactGrasp and GraspIt! grasps with the human-demonstrated contact, measured by the grasp term (see Eq. 2; lower is better). Human - Allegro indicates grasps mapped from the human hand to the Allegro model using [39].

VI-C Failure Cases

Figure 10 shows failure cases of ContactGrasp. These occur when the target hand model does not possess the geometry needed to achieve a complicated contact pattern, or when the grasp sampling stage is not exhaustive enough to sample fine manipulation behaviors like inserting fingers through holes.

VII Conclusion

To summarize, we develop a multi-point contact model in this paper, which can be used in plug-and-play fashion with kinematically diverse hand models to synthesize grasps. These grasps can be modulated by hand-object contact demonstrations to be functional, supporting post-grasp actions like using the object or handing it off. We show that our approach, ContactGrasp, which directly optimizes for contact, is superior to other approaches that kinematically re-target observed human grasps to the target hand model. We demonstrate the effectiveness of ContactGrasp by synthesizing functional grasps for 3 significantly diverse hand models, 19 household objects, and 2 functional intents.

References

  • [1] J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg, “Dex-net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics,” 2017.
  • [2] J. Mahler, M. Matl, X. Liu, A. Li, D. Gealy, and K. Goldberg, “Dex-net 3.0: Computing robust robot suction grasp targets in point clouds using a new analytic model and deep learning,” arXiv preprint arXiv:1709.06670, 2017.
  • [3] J. Mahler, M. Matl, V. Satish, M. Danielczuk, B. DeRose, S. McKinley, and K. Goldberg, “Learning ambidextrous robot grasping policies,” Science Robotics, vol. 4, no. 26, p. eaau4984, 2019.
  • [4] L. Pinto and A. Gupta, “Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours,” in 2016 IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2016, pp. 3406–3413.
  • [5] M. Ciocarlie, C. Goldfeder, and P. Allen, “Dimensionality reduction for hand-independent dexterous robotic grasping,” in 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems.
  • [6] M. A. Roa, M. J. Argus, D. Leidner, C. Borst, and G. Hirzinger, “Power grasp planning for anthropomorphic robot hands,” in 2012 IEEE International Conference on Robotics and Automation.   IEEE, 2012, pp. 563–569.
  • [7] M. Przybylski, T. Asfour, and R. Dillmann, “Planning grasps for robotic hands using a novel object representation based on the medial axis transform,” in 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems.   IEEE, 2011, pp. 1781–1788.
  • [8] S. Ekvall and D. Kragic, “Learning and evaluation of the approach vector for automatic grasp generation and planning,” in Proceedings 2007 IEEE International Conference on Robotics and Automation.   IEEE, 2007, pp. 4715–4720.
  • [9] J. Romero, H. Kjellstrom, and D. Kragic, “Modeling and evaluation of human-to-robot mapping of grasps,” in 2009 International Conference on Advanced Robotics.   IEEE, 2009, pp. 1–6.
  • [10] A. Herzog, P. Pastor, M. Kalakrishnan, L. Righetti, T. Asfour, and S. Schaal, “Template-based learning of grasp selection,” in 2012 IEEE International Conference on Robotics and Automation.   IEEE, 2012, pp. 2379–2384.
  • [11] G. Garcia-Hernando, S. Yuan, S. Baek, and T.-K. Kim, “First-person hand action benchmark with rgb-d videos and 3d hand pose annotations,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
  • [12] S. Brahmbhatt, C. Ham, C. C. Kemp, and J. Hays, “ContactDB: Analyzing and Predicting Grasp Contact via Thermal Imaging,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019. [Online]. Available: https://contactdb.cc.gatech.edu
  • [13] A. Bicchi and V. Kumar, “Robotic grasping and contact: A review,” in Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No. 00CH37065), vol. 1.   IEEE, 2000, pp. 348–353.
  • [14] D. Prattichizzo, M. Malvezzi, M. Gabiccini, and A. Bicchi, “On the manipulability ellipsoids of underactuated robotic hands with compliance,” Robotics and Autonomous Systems, vol. 60, no. 3, pp. 337–346, 2012.
  • [15] C. Rosales, R. Suárez, M. Gabiccini, and A. Bicchi, “On the synthesis of feasible and prehensile robotic grasps,” in 2012 IEEE International Conference on Robotics and Automation.   IEEE, 2012, pp. 550–556.
  • [16] V.-D. Nguyen, “Constructing force-closure grasps,” The International Journal of Robotics Research, vol. 7, no. 3, pp. 3–16, 1988.
  • [17] M. A. Roa and R. Suárez, “Computation of independent contact regions for grasping 3-d objects,” IEEE Transactions on Robotics, vol. 25, no. 4, pp. 839–850, 2009.
  • [18] R. Krug, D. Dimitrov, K. Charusta, and B. Iliev, “On the efficient computation of independent contact regions for force closure grasps,” in 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems.   IEEE, 2010, pp. 586–591.
  • [19] A. Rodriguez, M. T. Mason, and S. Ferry, “From caging to grasping,” The International Journal of Robotics Research, vol. 31, no. 7, pp. 886–900, 2012.
  • [20] J. Seo, S. Kim, and V. Kumar, “Planar, bimanual, whole-arm grasping,” in 2012 IEEE International Conference on Robotics and Automation.   IEEE, 2012, pp. 3271–3277.
  • [21] Y. Li and N. Pollard, “A shape matching algorithm for synthesizing humanlike enveloping grasps,” in 5th IEEE-RAS International Conference on Humanoid Robots, 2005.   IEEE, 2005, pp. 442–449.
  • [22] I. Lenz, H. Lee, and A. Saxena, “Deep learning for detecting robotic grasps,” The International Journal of Robotics Research, vol. 34, no. 4-5, pp. 705–724, 2015.
  • [23] Y. Jiang, S. Moseson, and A. Saxena, “Efficient grasping from rgbd images: Learning using a new rectangle representation,” in 2011 IEEE International Conference on Robotics and Automation.   IEEE, 2011, pp. 3304–3311.
  • [24] J. Redmon and A. Angelova, “Real-time grasp detection using convolutional neural networks,” CoRR, vol. abs/1412.3128, 2014. [Online]. Available: http://arxiv.org/abs/1412.3128
  • [25] J. Bohg, A. Morales, T. Asfour, and D. Kragic, “Data-driven grasp synthesis—a survey,” IEEE Transactions on Robotics, vol. 30, no. 2, pp. 289–309, 2014.
  • [26] A. T. Miller and P. K. Allen, “Examples of 3d grasp quality computations,” in Proceedings 1999 IEEE International Conference on Robotics and Automation (Cat. No. 99CH36288C), vol. 2.   IEEE, 1999, pp. 1240–1246.
  • [27] C. Ferrari and J. Canny, “Planning optimal grasps,” in Proceedings 1992 IEEE International Conference on Robotics and Automation.   IEEE, pp. 2290–2295.
  • [28] A. T. Miller and P. K. Allen, “Graspit! a versatile simulator for robotic grasping,” IEEE Robotics & Automation Magazine, vol. 11, no. 4, pp. 110–122, 2004.
  • [29] D. Song, C. H. Ek, K. Huebner, and D. Kragic, “Multivariate discretization for bayesian network structure learning in robot grasping,” in 2011 IEEE International Conference on Robotics and Automation.   IEEE, 2011, pp. 1944–1950.
  • [30] H. Hamer, J. Gall, T. Weise, and L. Van Gool, “An object-dependent hand pose prior from sparse training data,” in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition.   IEEE, 2010, pp. 671–678.
  • [31] J. Varley, J. Weisz, J. Weiss, and P. Allen, “Generating multi-fingered robotic grasps via deep learning,” in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2015, pp. 4415–4420.
  • [32] Y. Ye and C. K. Liu, “Synthesis of detailed hand manipulations using contact sampling,” ACM Transactions on Graphics (TOG), vol. 31, no. 4, p. 41, 2012.
  • [33] T. Schmidt, R. Newcombe, and D. Fox, “DART: Dense Articulated Real-Time Tracking,” in Proceedings of Robotics: Science and Systems, Berkeley, USA, July 2014.
  • [34] SimLab. Allegro Hand. [Online]. Available: http://www.simlab.co.kr/Allegro-Hand.htm
  • [35] Barrett Technology. Barrett Hand. [Online]. Available: https://www.barrett.com/about-barrethand
  • [36] T. Schmidt, K. Hertkorn, R. Newcombe, Z. Marton, M. Suppa, and D. Fox, “Depth-based tracking with physical constraints for robot manipulation,” in Robotics and Automation (ICRA), 2015 IEEE International Conference on.   IEEE, 2015, pp. 119–126.
  • [37] M. R. Cutkosky and R. D. Howe, “Human grasp choice and robotic grasp analysis,” in Dextrous robot hands.   Springer, 1990, pp. 5–31.
  • [38] R. Pelossof, A. Miller, P. Allen, and T. Jebara, “An svm learning approach to robotic grasping,” in IEEE International Conference on Robotics and Automation, 2004. Proceedings. ICRA’04. 2004, vol. 4.   IEEE, 2004, pp. 3512–3518.
  • [39] T. Tosun, R. Mead, and R. Stengel, “A general method for kinematic retargeting: Adapting poses between humans and robots,” in ASME 2014 International Mechanical Engineering Congress and Exposition.   American Society of Mechanical Engineers, 2014, pp. V04AT04A027–V04AT04A027.