Towards Complex and Continuous Manipulation: A Gesture Based Anthropomorphic Robotic Hand Design

12/20/2020 ∙ by Li Tian, et al. ∙ Nanyang Technological University

Most current anthropomorphic robotic hands realize only part of human hand functions, particularly object grasping. However, due to the complexity of the human hand, few current designs target daily object manipulation, even for simple actions like rotating a pen. To tackle this problem, we introduce a gesture based framework, which adopts the widely-used 33 grasping gestures of Feix as the bases for hand design and the implementation of manipulations. In the proposed framework, we first measure the motion ranges of human fingers for each gesture, and based on the results, we propose a simple yet dexterous robotic hand design with 13 degrees of freedom. Furthermore, we adopt a frame interpolation based method, in which we consider the base gestures as the key frames to represent a manipulation task, and use a simple linear interpolation strategy to accomplish the manipulation. To demonstrate the effectiveness of our framework, we define a three-level benchmark, which includes not only 62 test gestures from previous research, but also multiple complex and continuous actions. Experimental results on this benchmark validate the dexterity of the proposed design, and our video is available at <https://entuedu-my.sharepoint.com/:v:/g/personal/hanhui_li_staff_main_ntu_edu_sg/Ean2GpnFo6JPjIqbKy1KHMEBftgCkcDhnSX-9uLZ6T0rUg?e=ppCGbC>


I Introduction

Fig. 1: Demonstration of the proposed gesture based framework. We consider 33 Feix’s Grasping gestures as the bases, and measure the ROM of the human hand for achieving each gesture. With the results, we design a simplified robotic hand with the necessary ROMs. Consequently, given a target manipulation, we can represent and complete it with multiple gestures.

Artificial hands remain one of the hardest problems in robotics [17], due to the lack of a comprehensive understanding of the actuation and sensory systems of the human hand. Earlier studies [15, 19, 2] tried to fully replicate the functions of the human hand via complicated mechanical structures and actuation systems. Although these robotic hands have ranges of motion (ROM) and degrees of freedom (DOF) similar to those of human hands, and can even complete astounding manipulation tasks like solving a Rubik's cube [2], they are costly to fabricate and their dexterity can still be improved.

Therefore, recent robotic hand designs focus more on simplifying mechanical structures and realizing partial functions of human hands. To the best of our knowledge, there are two notable development directions for this purpose. The first is based on the concept of the anatomically correct robotic hand [7, 27, 9, 24], which imitates the critical biomechanical structures of the human hand, such as bones, joints and tendons. This strategy does lower the complexity of the design process, yet maintaining the balance between dexterity and complexity remains a challenge. For instance, one of the latest anatomically correct robotic hands [24] uses 30 servo motors to realize the 33 grasping gestures of Feix [10] and the Kapandji test [11].

The other direction is soft robotic hands [20], which utilize soft materials and actuators, such as gas/liquid pressure [6, 18, 30] and artificial muscles [13, 12, 8]. Robotic hands of this type have remarkable deformability, and hence can accomplish object grasping tasks safely [22, 28, 16, 1]. However, their mechanical structures and kinematic models differ significantly from those of the human hand, making it difficult for them to realize natural and human-like manipulations.

In our opinion, the performance of current methods on object grasping is promising. Nevertheless, few current methods can tackle daily object manipulation, even for a simple action like spinning a pen, because they are highly specialized for their particular purposes and lack adaptability. For example, a soft robotic hand [30] targeting object grasping might not realize abduction and adduction for all fingers. This problem should not be ignored, as manipulating common objects in a human-like way is one of the basic requirements for social robots and humanoids.

To address the above issues, we propose a novel framework, in which the design of robotic hands and object manipulations depend on a set of grasping gestures. The intuition behind our framework is that most daily object manipulation tasks can be represented by the set of grasping gestures. Such a strategy is plausible, as the pioneering research by Bullock et al. [5] has classified common manipulation tasks at a coarse level. Meanwhile, we can measure the ROM of each base gesture, so that our robotic hand design can be simplified to provide only the necessary ROMs. In this way, the proposed design achieves a good balance between simplicity and dexterity. Besides, since there are extensive studies on achieving various grasping gestures, we can avoid building our solution from scratch, while current methods can be augmented with our framework to tackle the object manipulation problem.

Consequently, based on the proposed framework, we design a 13-DOF anthropomorphic hand, which is much simpler yet more versatile compared with previous methods [24, 23]. Besides, we propose a key-frame interpolation method, in which we select multiple base gestures as the key frames, and then adopt simple linear interpolation to control the process of manipulating an object. To fully evaluate the dexterity of our design, we further introduce a complex and continuous manipulation (CCM) benchmark with tasks of three difficulty levels.

In summary, this paper makes noteworthy contributions to anthropomorphic robotic hand design as follows:

•We introduce a novel framework that provides previous grasping-oriented methods with adaptability to object manipulation, and sheds light on designing highly dexterous robotic hands.

•We collect ROMs of a set of base gestures, which are valuable data for robotic hand design and algorithm development for object manipulation.

•We propose a simplified dexterous hand model of 13 DOFs and a key-frame interpolation method, which are validated on our three-level benchmark for object grasping and manipulation.

II The Gesture Based Framework

| Abbreviation | Definition |
|---|---|
| Abd/Add | Abduction/adduction |
| DIP | Distal interphalangeal |
| PIP | Proximal interphalangeal |
| MCP | Metacarpophalangeal |
| CMC | Carpometacarpal |
| LUM | Lumbrical muscles |
| DOF | Degree of freedom |
| ROM | Range of motion |
| CCM | Complex and continuous manipulation |

TABLE I: Terms and abbreviations used in this paper.

In this section, we introduce the proposed gesture based framework in detail. As demonstrated in Fig. 1, we first select a set of grasping gestures as the bases. We then measure the ROM of the human hand for each gesture, so that we can design a robotic hand with the necessary dexterity to accomplish the base gestures. Finally, given a manipulation task, we choose multiple gestures as the key frames, and adopt the interpolation method to control our robotic hand.

Due to the complexity of the human hand anatomy, we summarize the terms and abbreviations related to the human hand structures in Table I for easy understanding and reading.

Fig. 2: Measuring angles of joint rotation for exemplar gestures. DV and LV stand for the dorsal view (top) and the lateral view (bottom), respectively.

II-A The Base Gestures

Selecting the base gestures allows us to understand the essential functions that need to be implemented. In this paper, we choose the grasp taxonomy defined by Feix et al. [10], which organizes common human hand configurations into 33 gestures. Other gestures can be included, but we find these 33 gestures adequate for various in-hand manipulation tasks.

Given the selected base gestures, we consider the human hand as the template for measuring the ROMs. Previous studies [21, 30] suggest that each finger can be characterized by a kinematic model of 3 DOFs (3 joints with flexion and extension only). Hence, for each gesture, we manually annotate 15 landmarks from two views, as demonstrated in Fig. 2. We use the angles of joint rotation in a fixed coordinate system as the metric of ROM. To obtain a precise template, we take abduction and adduction into account besides flexion and extension, and hence record 4 joint rotation angles for each finger. Each gesture is measured 3 times, and we use the average rotation angles as its final representation.
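To make the measurement concrete, below is a minimal sketch (our own illustration in Python, not the annotation tool used in this work) of computing a joint rotation angle from annotated landmarks; the landmark coordinates are hypothetical.

```python
import numpy as np

def joint_angle(p_prev, p_joint, p_next):
    """Rotation angle (degrees) at p_joint, formed by the two phalanx
    segments (p_prev -> p_joint) and (p_joint -> p_next)."""
    u = np.asarray(p_prev, float) - np.asarray(p_joint, float)
    v = np.asarray(p_next, float) - np.asarray(p_joint, float)
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # 180 degrees corresponds to a fully extended joint, so we report
    # flexion as the deviation from a straight segment pair.
    return 180.0 - np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Hypothetical index-finger landmarks (mm) in the lateral view:
mcp, pip, dip, tip = (0, 0), (40, 5), (65, 20), (80, 35)
print(joint_angle(mcp, pip, dip))  # PIP flexion
print(joint_angle(pip, dip, tip))  # DIP flexion
```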

Fig. 3: The 13-DOF kinematic model (left) and the fabricated hand of our gesture based design (right).

II-B Design of the Proposed Hand

The base gestures allow us to simplify the robotic hand design significantly. For example, we notice that in the abduction and adduction motions of the ring finger, the joint angle stays within a small range ([-18°, 6°], see Table III) for all grasping gestures, which is negligible compared with that of other fingers. Therefore, we can safely remove the tendons for controlling the abduction and adduction of the ring finger.

Specifically, our robotic hand design is a simplified system with 13 DOFs, as demonstrated in Fig. 3. Following the intra-finger constraint proposed in [14], i.e., that the distal interphalangeal (DIP) and proximal interphalangeal (PIP) joints always bend together, we use 1 Bowden cable to control these 2 joints simultaneously (strictly speaking, these two joints of the thumb are the interphalangeal and metacarpophalangeal joints, but for convenience we still refer to them as DIP and PIP in this paper). Besides, since we do not need to implement the abduction and adduction of the ring finger, we further simplify the design by using 1 Bowden cable to control the DIP, PIP and metacarpophalangeal (MCP) joints of the ring finger together. For the remaining 3 MCP joints, 2 Bowden cables are attached to each of them to realize flexion/extension and abduction/adduction. The same mechanism is applied to the carpometacarpal (CMC) joint of the thumb.
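For readability, the cable-to-joint coupling described above can be summarized as a simple mapping. The sketch below is our own illustrative encoding under the stated routing (cable names are hypothetical), not code from the actual control system.

```python
# One entry per actuated Bowden cable (13 in total, one servo each);
# joints listed together are mechanically coupled and bend with that
# single cable. Two cables per MCP/CMC joint jointly realize
# flexion/extension and abduction/adduction.
CABLE_TO_JOINTS = {
    "thumb_flex":   ["thumb_DIP", "thumb_PIP"],           # coupled pair
    "thumb_cmc_a":  ["thumb_CMC"],
    "thumb_cmc_b":  ["thumb_CMC"],
    "index_flex":   ["index_DIP", "index_PIP"],
    "index_mcp_a":  ["index_MCP"],
    "index_mcp_b":  ["index_MCP"],
    "middle_flex":  ["middle_DIP", "middle_PIP"],
    "middle_mcp_a": ["middle_MCP"],
    "middle_mcp_b": ["middle_MCP"],
    "ring_flex":    ["ring_DIP", "ring_PIP", "ring_MCP"],  # fully coupled
    "little_flex":  ["little_DIP", "little_PIP"],
    "little_mcp_a": ["little_MCP"],
    "little_mcp_b": ["little_MCP"],
}
assert len(CABLE_TO_JOINTS) == 13  # matches the 13 servo motors
```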

| | Biomimetic Hand [27] | HR-hand [9] | Ours |
|---|---|---|---|
| Bone | 3D printed, ABS | Molded, resin | 3D printed, resin |
| Actuator | Servo: MX-12W, AX-12A | Pneumatic McKibben actuators | Servo: MG996R |
| Tendon and Muscle | FDP, EDC | FDP, FDS, EDC, LUM, INT | FDP, LUM, INT |
| Driven Cable | Soft cable | Soft cable | Bowden cable |
| Ligament | Crocheted ligament | Silicone ligament | Fishing wire |
| Tendon Pulley | Laser-cut elastic silicone | Polyethylene tubes | 3D printed with bones |
| Tissue and Skin | Fingertip cap only | N.A. | Elastic tissue, silicone skin |
| Evaluation Method | 15 gestures | ROM of finger | 62 gestures and CCM |

TABLE II: Comparison of dexterous hand designs. Our mechanical structure is simpler yet able to complete more gestures.

Our design preserves the advantage of anatomically correct robotic hands in resembling the biomechanical structures of the human hand. Moreover, as it meets the minimum ROM requirements of the base grasping gestures, the dexterity of the fabricated hand is guaranteed.

II-C Materials and Fabrication

Fig. 4: Examples of the mechanical structures of our fingers, including the thumb (left) and the index finger (right).

Our design can be fabricated efficiently by leveraging the advantages of 3D printing in fast manufacturing. Based on our previous studies on 3D modeling of hands [26, 25], we construct a printable 3D model consisting of three layers, i.e., the skin, tissues, and bones. Note that the tissue layer is a unique soft deformable structure, which enables the 3D model to complete various grasping gestures successfully.

The mechanical structures of the 3D model are shown in Fig. 4. As mentioned above, Bowden cables are used as tendons and muscles to drive the phalanges. For the thumb, the flexor pollicis longus controls the flexion and extension of the DIP and PIP joints, while the flexor pollicis brevis does so for the CMC joint, and the adductor pollicis handles the abduction and adduction of the thumb. For the index, middle and little fingers, the flexor digitorum tendons control the flexion and extension of the DIP and PIP joints, while the lumbrical muscles (LUM) do so for the MCP joints; the abduction and adduction motions of these fingers are controlled by the interossei muscles. The structure of the ring finger is omitted, as all its three joints are controlled by the same Bowden cable. All Bowden cables used in this paper have strands with a diameter of 0.75 mm, except those mimicking the LUM, whose strands have a diameter of 1.25 mm and a higher modulus of rigidity.

Besides, the flexor pulleys are printed together with the bones to maintain the apposition of tendons and bones. Two types of plastic tendon sheaths are inserted into the bones to aid the routing of tendons. To connect the bones, we use fishing wire as the collateral ligaments, which has a diameter of 0.34 mm, a straight tension of 17 lb and a knot tension of 15.5 lb. Sphere-shaped articular cartilages are used to reduce the friction between bones.

As to the 3D printing materials, we use the same materials as in [25]. Our actuation system is composed of 13 MG996R servo motors (torque: 11 kg·cm). In Table II, we summarize the configuration of our robotic hand and compare it with two state-of-the-art methods. From this table, we can see that the complexity of our robotic hand is moderate, and as we will demonstrate in the experimental section, the proposed design is robust enough to handle different grasping and manipulation tasks.

II-D Key-frame Interpolation for Object Manipulation

Last but not least, we introduce a simple key-frame interpolation method to demonstrate the potential of the gesture based framework in tackling daily manipulation tasks. More advanced techniques, such as deep reinforcement learning [2], can be applied within the proposed framework, but this is beyond the scope of this paper.

Given a target manipulation, e.g., rotating a pen 180 degrees, we first represent it by multiple base gestures, which we refer to as the key frames. These key frames can be selected manually, or via heuristic strategies, e.g., we can consider gestures as states in a Markov chain and learn to sample optimal states [3]. Here we assume both the initial gesture and the end gesture are palmar pinch (with the index finger and the thumb), and the 4 selected key frames are adduction grip, tripod, palmar pinch (with the middle finger and the thumb), and prismatic 2 finger, as demonstrated in Fig. 1. Without loss of generality, let $\mathbf{r}_0$ denote the 20 ROM values of one key frame, and $\mathbf{r}_1$ those of the next key frame. Then, given an interval of $n$ frames, we use simple linear interpolation to obtain the ROM of the intermediate gesture at the $t$-th frame as follows:

$$\mathbf{r}_t = (1 - \lambda)\,\mathbf{r}_0 + \lambda\,\mathbf{r}_1, \qquad (1)$$

where $\lambda = t/n$ and $t \in \{1, \dots, n\}$. Such a method is practical for two reasons: First, the 33 base gestures of Feix are static and stable per se, which indicates that if the per-frame change in ROM is small enough, the object can be considered relatively stable with respect to the robotic hand. Second, within the selection process of the key frames, we can impose heuristic constraints, e.g., the motion of the robotic hand should not be obstructed by the object. In this way, we can decompose various daily manipulation tasks into gestures and perform them with our robotic hand.
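As a minimal sketch of Eq. (1), the following Python snippet interpolates between consecutive key frames; the 20-D ROM vectors, the step count, and the mapping from joint angles to servo commands are assumptions for illustration only.

```python
import numpy as np

def interpolate_gestures(key_frames, n_steps):
    """Linear interpolation (Eq. 1) between consecutive key-frame
    gestures; yields one 20-D joint-angle vector per frame."""
    for r0, r1 in zip(key_frames[:-1], key_frames[1:]):
        r0, r1 = np.asarray(r0, float), np.asarray(r1, float)
        for t in range(1, n_steps + 1):
            lam = t / n_steps                  # lambda = t / n
            yield (1.0 - lam) * r0 + lam * r1

# Hypothetical key frames: 20 ROM values (4 angles x 5 fingers) each.
palmar_pinch = np.zeros(20)
tripod = np.full(20, 30.0)
trajectory = list(interpolate_gestures([palmar_pinch, tripod], n_steps=10))
# Each vector in `trajectory` would then be mapped to servo commands.
```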

III Experiments

To validate the dexterity and quality of our gesture based robotic hand, we conduct extensive experiments. We first measure multiple kinematic quantities of our robotic hand, and then perform qualitative analysis on a three-level benchmark for object manipulation.

III-A Kinematics Analysis

To begin with, we measure the relationship between tendon excursion and joint rotation angle of our robotic hand. As suggested in [4], the tendon excursion $e$ and the joint angle $\theta$ of the human finger can be described by the following linear model:

$$e = r\,\theta, \qquad (2)$$

where $r$ is the instantaneous moment arm. This indicates that if the relationship between $e$ and $\theta$ of our robotic hand also follows a similar linear function, then our design replicates the movements of tendons precisely. Hence we record these data of our fabricated index finger for evaluation. The lengths of the distal, middle and proximal phalanges of the index finger are 22.2 mm, 24.2 mm and 41.9 mm, respectively. The results are reported in Fig. 5 and demonstrate an obvious linear pattern. We further fit these data with a linear regression model (i.e., $e = a\theta + b$), and obtain the coefficient of determination $R^2$ for all three joints. All $R^2$ values are larger than 0.96, which suggests that the linear model fits these data well and hence validates our design.
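As an illustration of this fitting procedure, the sketch below fits the linear model and computes $R^2$ with NumPy; the measurements are made-up placeholders, not our recorded data.

```python
import numpy as np

# Placeholder measurements for one joint: rotation angle (degrees)
# versus tendon excursion (mm).
theta = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0])
e = np.array([0.0, 1.1, 2.3, 3.4, 4.4, 5.6, 6.7])

# Fit e = a * theta + b; the slope a is proportional to the
# instantaneous moment arm of Eq. (2).
a, b = np.polyfit(theta, e, deg=1)

# Coefficient of determination R^2 of the fit.
e_hat = a * theta + b
r2 = 1.0 - np.sum((e - e_hat) ** 2) / np.sum((e - e.mean()) ** 2)
print(f"slope = {a:.3f} mm/deg, R^2 = {r2:.3f}")
```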

Following [27], we also adopt the fingertip trajectory as a measure of motion. As presented in Fig. 6, the fingertip trajectories of our robotic hand show that it can reach a large area. These results suggest that our design has notable flexibility and the potential to complete various gestures.

Fig. 5: The relationship between tendon excursion (mm) and joint rotation angle (degree) of the index finger. Best viewed on a high-resolution screen.
Fig. 6: Fingertip trajectories of our robotic hand.
| Finger | Joint | Human ROM | Grasping ROM | Ours |
|---|---|---|---|---|
| Thumb | IP | [0, 90] | [0, 84] | [0, 70] |
| | MCP | [0, 70] | [0, 70] | [0, 75] |
| | CMC | [0, 53] | [0, 48] | [0, 61] |
| | Abd/Add | [-40, 50] | [0, 50] | [0, 50] |
| Index | DIP | [0, 80] | [0, 70] | [0, 70] |
| | PIP | [0, 120] | [0, 100] | [0, 100] |
| | MCP | [0, 90] | [0, 90] | [-15, 82] |
| | Abd/Add | [-20, 25] | [-20, 22] | [-8, 26] |
| Middle | DIP | [0, 80] | [0, 80] | [0, 90] |
| | PIP | [0, 120] | [0, 106] | [0, 80] |
| | MCP | [0, 90] | [0, 90] | [-7, 95] |
| | Abd/Add | [-20, 25] | [-20, 20] | [-6, 15] |
| Ring | DIP | [0, 80] | [0, 73] | [0, 70] |
| | PIP | [0, 120] | [0, 120] | [0, 90] |
| | MCP | [0, 90] | [0, 90] | [-5, 95] |
| | Abd/Add | [-20, 25] | [-18, 6] | - |
| Little | DIP | [0, 80] | [0, 80] | [0, 69] |
| | PIP | [0, 120] | [0, 110] | [0, 100] |
| | MCP | [0, 90] | [0, 90] | [-4, 90] |
| | Abd/Add | [-20, 25] | [-14, 20] | [-9, 28] |

TABLE III: ROM (in degrees) of the human hand, the ROM measured for completing the 33 grasping gestures, and the ROM of our robotic hand.

Finally, we summarize the ROMs of the human hand (data from https://www.verywellhealth.com and https://www.orthopaedicsone.com), our measured ROMs for completing the 33 grasping gestures, and the ROMs of our robotic hand in Table III. These results show that it is not necessary to design a robotic hand that fully reaches the maximum ROMs of the human hand, since the ROMs of the grasping gestures are actually smaller. Besides, our design duplicates most of the human hand ROMs and nearly all of the grasping ROMs, which supports its ability to tackle manipulation tasks.

III-B Complex and Continuous Manipulation

We perform a series of object grasping and manipulation tasks to demonstrate the effectiveness of the proposed framework. Unlike conventional methods that only utilize a single benchmark (e.g., Feix's grasping gestures) for evaluation, here we introduce the idea of complex and continuous manipulation, which organizes daily manipulation tasks into the following three difficulty levels:

Fig. 7: 62 test gestures achieved by the proposed design: (a) 33 grasping gestures of Feix. (b) 11 Kapandji scores. (c) Translations and rotations along the x, y, z axes, with 3 objects of different shapes (cylinder, cuboid and sphere). Best viewed on a high-resolution screen.
Fig. 8: CCM completed by the proposed design. (a) Palmar pinching and rotating a pen. (b) Tripod-based rotation of a ping-pong ball. (c) Crawling. (d) Flicking and spinning a pen. (e) Flicking a balloon. (f) Climbing up. (g) Rotating the top layer and right layer of a Rubik's cube, as well as overturning it. Best viewed on a high-resolution screen.
Fig. 9: Displacements of joints during rotating a pen. Poses 1-5 are Palmar Pinch, Adduction Grip, Tripod, Palmar Pinch and Prismatic 2 Finger.

Level 1: Single Grasping/Manipulation. Tasks at this level can be tackled by a single gesture, such as the grasping gestures of Feix. As far as we know, most current methods are tested at this level.

Level 2: Complex Manipulation. Tasks at this level require more than one gesture. For example, in the aforementioned task of rotating a pen, 5 gestures are used in total. The key to completing tasks at this level is to ensure high reproducibility of the base gestures and stable transitions between gestures.

Level 3: Complex and Continuous Manipulation (CCM). For tasks at this level, we further impose the continuity constraint that the manipulation must be completed within a limited time or with a limited number of gestures. Under this constraint, the mechanical design and the manipulation strategy must be optimized to remove redundant structures and gestures. We believe that CCM is one of the long-term targets of robotic hand research.

Our proposed design completes tasks of Level 1 successfully. We select 3 widely used benchmarks, 62 gestures in total, for testing our robotic hand: the 33 grasping gestures of Feix, the 11 Kapandji scores [11], and translation and rotation tests with 3 objects of different shapes (cylinder, cuboid and sphere) [29], as demonstrated in Fig. 7. Each test is performed 10 times, and the hand achieves a high average success rate. This result provides a solid foundation for handling tasks of Levels 2 and 3.

Since there is no standard benchmark for object manipulation, we propose 7 tasks that cover as many base gestures as possible. As demonstrated in Fig. 8, these tasks are (a) palmar pinching and rotating a pen; (b) tripod-based rotation of a ping-pong ball along arbitrary directions; (c) crawling; (d) flicking and spinning a pen; (e) flicking a balloon; (f) climbing up; and (g) rotating the top layer and right layer of a Rubik's cube, as well as overturning the cube. With the proposed gesture based framework, we are able to complete all of these tasks. Interested readers can refer to the supplemental video for more details.

Finally, we also record the time and gestures for completing each of the 7 tasks, which could serve as a Level 3 baseline for future research. As demonstrated in Fig. 8, most of these tasks are completed within a few seconds. Note that the task of rotating a pen takes 23 seconds, as demonstrated in Fig. 9, because we use 5 gestures as the key frames. We expect that with more advanced learning techniques, fewer gestures might be needed and the processing time could be further reduced.

IV Conclusion

In this paper, we propose the gesture based framework, the core of which is a set of base gestures and their ROMs. Given the widely-used grasping gestures as the bases, we devise a simplified 13-DOF robotic hand that provides the essential functions for achieving them. We also demonstrate that complex manipulation tasks can be subdivided into multiple base gestures, and adopt a key-frame interpolation method to complete the tasks. The proposed hand is evaluated on our CCM benchmark and demonstrates remarkable dexterity. In summary, we hope that the gesture based framework, as well as the three-level CCM benchmark, can provide a solid baseline for future studies.

References

  • [1] S. Abondance, C. B. Teeple, and R. J. Wood (2020) A dexterous soft robotic hand for delicate in-hand manipulation. IEEE Robotics and Automation Letters 5 (4), pp. 5502–5509. Cited by: §I.
  • [2] I. Akkaya, M. Andrychowicz, M. Chociej, M. Litwin, B. McGrew, A. Petron, A. Paino, M. Plappert, G. Powell, R. Ribas, et al. (2019) Solving Rubik's cube with a robot hand. arXiv:1910.07113. Cited by: §I, §II-D.
  • [3] R. Alterovitz, T. Siméon, and K. Y. Goldberg (2007) The stochastic motion roadmap: a sampling framework for planning with Markov motion uncertainty. In Robotics: Science and Systems, Vol. 3, pp. 233–241. Cited by: §II-D.
  • [4] K. An, Y. Ueba, E. Chao, W. Cooney, and R. Linscheid (1983) Tendon excursion and moment arm of index finger muscles. Journal of biomechanics 16 (6), pp. 419–425. Cited by: §III-A.
  • [5] I. M. Bullock, R. R. Ma, and A. M. Dollar (2012) A hand-centric classification of human and robot dexterous manipulation. IEEE Transactions on Haptics 6 (2), pp. 129–144. Cited by: §I.
  • [6] R. Deimel and O. Brock (2016) A novel type of compliant and underactuated robotic hand for dexterous grasping. The International Journal of Robotics Research 35 (1-3), pp. 161–185. Cited by: §I.
  • [7] A. D. Deshpande, Z. Xu, M. J. V. Weghe, B. H. Brown, J. Ko, L. Y. Chang, D. D. Wilkinson, S. M. Bidic, and Y. Matsuoka (2011) Mechanisms of the anatomically correct testbed hand. IEEE/ASME Transactions on Mechatronics 18 (1), pp. 238–250. Cited by: §I.
  • [8] R. S. Diteesawat, T. Helps, M. Taghavi, and J. Rossiter (2020) Characteristic analysis and design optimization of bubble artificial muscles. Soft Robotics. Cited by: §I.
  • [9] A. A. M. Faudzi, J. Ooga, T. Goto, M. Takeichi, and K. Suzumori (2017) Index finger of a human-like robotic hand using thin soft muscles. IEEE Robotics and Automation Letters 3 (1), pp. 92–99. Cited by: §I, TABLE II.
  • [10] T. Feix, J. Romero, H. Schmiedmayer, A. M. Dollar, and D. Kragic (2015) The grasp taxonomy of human grasp types. IEEE Transactions on human-machine systems 46 (1), pp. 66–77. Cited by: §I, §II-A.
  • [11] A. Kapandji (1986) Clinical test of apposition and counter-apposition of the thumb. Annales de chirurgie de la main: organe officiel des societes de chirurgie de la main 5 (1), pp. 67. Cited by: §I, §III-B.
  • [12] S. Kurumaya, H. Nabae, G. Endo, and K. Suzumori (2017) Design of thin mckibben muscle and multifilament structure. Sensors and Actuators A: Physical 261, pp. 66–74. Cited by: §I.
  • [13] C. Laschi, M. Cianchetti, B. Mazzolai, L. Margheri, M. Follador, and P. Dario (2012) Soft robot arm inspired by the octopus. Advanced Robotics 26 (7), pp. 709–727. Cited by: §I.
  • [14] J. Lin, Y. Wu, and T. S. Huang (2000) Modeling the constraints of human hand motion. In Proceedings workshop on human motion, pp. 121–126. Cited by: §II-B.
  • [15] C. Lovchik and M. A. Diftler (1999) The robonaut hand: a dexterous robot hand for space. In IEEE International Conference on Robotics and Automation, Vol. 2, pp. 907–912. Cited by: §I.
  • [16] Q. Lu and N. Rojas (2019) On soft fingertips for in-hand manipulation: modeling and implications for robot hand design. IEEE Robotics and Automation Letters 4 (3), pp. 2471–2478. Cited by: §I.
  • [17] C. Piazza, G. Grioli, M. Catalano, and A. Bicchi (2019) A century of robotic hands. Annual Review of Control, Robotics, and Autonomous Systems 2, pp. 1–32. Cited by: §I.
  • [18] P. Polygerinos, Z. Wang, K. C. Galloway, R. J. Wood, and C. J. Walsh (2015) Soft robotic glove for combined assistance and at-home rehabilitation. Robotics and Autonomous Systems 73, pp. 135–143. Cited by: §I.
  • [19] F. Rothling, R. Haschke, J. J. Steil, and H. Ritter (2007) Platform portable anthropomorphic grasping with the bielefeld 20-dof shadow and 9-dof tum hand. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2951–2956. Cited by: §I.
  • [20] D. Rus and M. T. Tolley (2015) Design, fabrication and control of soft robots. Nature 521 (7553), pp. 467–475. Cited by: §I.
  • [21] J. K. Salisbury and J. J. Craig (1982) Articulated hands: force control and kinematic issues. The International Journal of Robotics Research 1 (1), pp. 4–17. Cited by: §II-A.
  • [22] A. J. Spiers, B. Calli, and A. M. Dollar (2018) Variable-friction finger surfaces to enable within-hand manipulation via gripping and sliding. IEEE Robotics and Automation Letters 3 (4), pp. 4116–4123. Cited by: §I.
  • [23] S. Takamuku, A. Fukuda, and K. Hosoda (2008) Repetitive grasping with anthropomorphic skin-covered hand enables robust haptic recognition. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3212–3217. Cited by: §I.
  • [24] B. J. Tasi, M. Koller, and G. Cserey (2019) Design of the anatomically correct, biomechatronic hand. arXiv:1909.07966. Cited by: §I, §I.
  • [25] L. Tian, H. Li, M. F. K. B. A. Halil, N. M. Thalmann, D. Thalmann, and J. Zheng (2020) Fast 3d modeling of anthropomorphic robotic hands based on a multi-layer deformable design. arXiv:2011.03742. Cited by: §II-C, §II-C.
  • [26] L. Tian, N. Magnenat-Thalmann, D. Thalmann, and J. Zheng (2018) A methodology to model and simulate customized realistic anthropomorphic robotic hands. In Proceedings of Computer Graphics International, pp. 153–162. Cited by: §II-C.
  • [27] Z. Xu and E. Todorov (2016) Design of a highly biomimetic anthropomorphic robotic hand towards artificial limb regeneration. In IEEE International Conference on Robotics and Automation, pp. 3485–3492. Cited by: §I, TABLE II, §III-A.
  • [28] J. Zhou, X. Chen, U. Chang, J. Lu, C. C. Y. Leung, Y. Chen, Y. Hu, and Z. Wang (2019) A soft-robotic approach to anthropomorphic robotic hand dexterity. IEEE Access 7, pp. 101483–101495. Cited by: §I.
  • [29] J. Zhou, Y. Chen, D. C. F. Li, Y. Gao, Y. Li, S. S. Cheng, F. Chen, and Y. Liu (2020) 50 benchmarks for anthropomorphic hand function-based dexterity classification and kinematics-based hand design. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 9159–9165. Cited by: §III-B.
  • [30] J. Zhou, J. Yi, X. Chen, Z. Liu, and Z. Wang (2018) BCL-13: a 13-dof soft robotic hand for dexterous grasping and in-hand manipulation. IEEE Robotics and Automation Letters 3 (4), pp. 3379–3386. Cited by: §I, §I, §II-A.