Whole-Body Geometric Retargeting for Humanoid Robots

by Kourosh Darvish, et al.

Humanoid robot teleoperation allows humans to integrate their cognitive capabilities with the apparatus to perform tasks that need high strength, manoeuvrability and dexterity. This paper presents a framework for teleoperation of humanoid robots using a novel approach for motion retargeting through inverse kinematics over the robot model. The proposed method enhances scalability for retargeting, i.e., it allows teleoperating different robots by different human users with minimal changes to the proposed system. Our framework enables an intuitive and natural interaction between the human operator and the humanoid robot at the configuration space level. We validate our approach by demonstrating whole-body retargeting with multiple robot models. Furthermore, we present experimental validation through teleoperation experiments using two state-of-the-art whole-body controllers for humanoid robots.




I Introduction

Teleoperation stands for operating at a distance: it extends the human capability to operate a robot remotely, where the human is unable to be present owing to time and space constraints or the dangers posed by hazardous environments [1]. Moreover, the perception and decision-making capabilities of current robotic systems are still limited, preventing them from acting autonomously outside laboratory settings and in real-world conditions. Teleoperation plays an important role in a wide range of applications including manipulation in hazardous environments [2], [3], telepresence [4], telesurgery [5], and space exploration [6]. Teleoperation is a type of human-robot interaction at a distance, where for an effective and efficient mission, bilateral communication is paramount [7]. From this perspective, the human and the robot establish a team in which the goal of the teleoperated robot is the same as that of the human operator. Furthermore, teleoperation brings together the excellent cognitive capabilities of humans and the physical strength of the robotic system [8].

Fig. 1: Whole-body retargeting example scenario

Humanoid robots are designed based on the idea of anthropomorphism and, unlike serial manipulators, they have higher manoeuvrability and manipulation capabilities [9]. Hence, they facilitate higher capabilities during teleoperation. At the same time, the complexity of humanoid robots poses additional challenges for teleoperation, particularly in unstructured dynamic environments designed for humans. The level of autonomy, team organization, and the information exchange between the operator and the robot are some of the vital aspects of teleoperation performance that ensure successful task completion [10, 11]. The level of autonomy ranges from a semi-autonomous robot at the symbolic or action level (high-level teleoperation) [7, 1] to complete control of the robot at the kinematic and dynamic level (low-level teleoperation), either in the robot's configuration space or task space. A core component of a low-level teleoperation system is the retargeting of human motion to the robot. An example scenario of whole-body retargeting of human motion to a humanoid robot is shown in Fig. 1, where each limb of the robot mimics the motion of the corresponding human limb.

Two of the most studied teleoperation paradigms in the literature are: 1) master-slave systems; and 2) bilateral systems. Under the master-slave paradigm, the flow of information is unidirectional from the human to the robot, while under the bilateral paradigm there is an exchange of information between the human and the robot, in particular haptic feedback from the robot to the human [12, 13]. Teleoperation systems that involve humans in the control loop at the kinematic and dynamic level should have the prime objectives of situational awareness and transparency, i.e., the human operator experiencing the remote environment of the teleoperated robot as holistically as possible, while maintaining the stability of the closed-loop system [14, 1]. Delays and information loss are some of the crucial problems with this approach that greatly affect the transparency and stability of teleoperation [1, 14]. Different approaches, such as Lyapunov stability analysis [15, 16] and passivity-based control [16], have been employed to address these limitations. However, these methods have been studied extensively with manipulators, and stability measures for humanoid robot teleoperation are not well established [17].

The research on teleoperation of humanoid robots can be broadly classified into three categories: upper-body teleoperation, lower-body teleoperation, and whole-body teleoperation. In upper-body teleoperation, the mapping of the human motion to the robot motion is considered at the kinematic level. Inverse kinematics (IK) and nonlinear optimization approaches are the common methods for teleoperation scenarios. Using inverse kinematics methods, human joint angles or velocities are computed and mapped in configuration space to the corresponding joints of the humanoid robot, taking into account the robot limitations [18, 19, 20]. Nonlinear optimization methods map the human's hand motions in task space to the desired trajectory of the humanoid robot end-effector motions [21, 18]. Alternatively, a data-driven mapping between the human and the robot arm is proposed in [22]. In these cases, some works consider the effect of the change of the center of mass (CoM) on the robot lower-body motion and the robot's balance [21], while others do not, and therefore the risk of the robot falling increases. Concerning the lower-body teleoperation of humanoid robots, the aspects of stability and locomotion take precedence over retargeting of all the lower limbs. A more detailed description of such methods is given in [23, 24].

Coming to whole-body teleoperation of humanoid robots, the key challenge is to control the robot such that it does not fall while keeping its manoeuvrability and manipulability high, so that the human-robot team can successfully perform a given task. The balance of the robot is achieved either by keeping the center of mass (CoM) inside the support polygon or by maintaining the net momentum about the Center of Pressure (CoP) at zero [25, 9]. Although a set of safety limitations is considered to maintain the robot's stability, multi-link dynamic contacts are not considered in [9]. Therefore, those methods cannot handle tasks that need force exchange with the environment or compensate for external disturbances. An attempt to solve this problem is presented in [17] with simulations that use the natural frequencies of human and robot models in the feedback law and synchronize their motions to compute the robot balancing and stepping strategies. Differently from the described methods, a data-driven approach for whole-body retargeting in a physics-based animation environment is proposed by the authors of [26].

One of the obvious shortcomings of the teleoperation systems proposed in the literature is the lack of ability to easily adapt the system to different human users and to humanoid robots with different geometries, kinematics, and dynamics. The possibility to perform human motion retargeting without major design changes is limited. The system designer often has to spend time and effort finding a new model of the human to be used during the motion retargeting step, e.g., in IK-based approaches; therefore, the usability and scalability of the proposed teleoperation system decrease.

This paper presents a novel framework for whole-body retargeting and teleoperation of a humanoid robot that enhances the scalability to multiple human operators or multiple robot models. Our approach provides anthropomorphic references for humanoid robot joints in real-time based on the human limb motion measures, independently from the human body dimensions, by directly using the robot model. The proposed approach is validated by extensive whole-body retargeting and teleoperation experiments.

The rest of the paper is organized as follows: Section II introduces the basic notations, robot modelling, and an overview of motion retargeting. Section III presents our whole-body retargeting architecture. Section IV describes the whole-body retargeting experiments and highlights the results validating our approach. Section V shows the experiments and results of whole-body teleoperation with two state-of-the-art whole-body controllers for humanoid robots. Section VI provides the conclusions and hints at our future work.

II Background

II-A Notation & Modeling

The inertial frame of reference is denoted by $\mathcal{I}$. Given two frames $A$ and $B$, ${}^{A}R_{B} \in SO(3)$ represents the rotation matrix between the frames, i.e., given two vectors ${}^{A}p$ and ${}^{B}p$ respectively expressed in $A$ and $B$, the rotation matrix is such that ${}^{A}p = {}^{A}R_{B}\,{}^{B}p$. The skew-symmetric operation of a matrix is defined as $\mathrm{sk}(A) = \frac{1}{2}(A - A^{\top})$, and the vee operator $(\cdot)^{\vee}$ maps a skew-symmetric matrix from $\mathfrak{so}(3)$ to $\mathbb{R}^{3}$. Humans and humanoid robots are considered to be multibody floating-base systems, i.e., none of the links has an a priori constant pose with respect to the inertial frame [27, 28]. Superscripts $H$ and $R$ correspond to a quantity of the human and the robot, respectively. The configuration of the system is determined by the triplet $q = ({}^{\mathcal{I}}p_{B}, {}^{\mathcal{I}}R_{B}, s)$ that contains the position and orientation of the base frame $B$ and the vector of joint values $s$ that defines the shape of the robot. The velocity of the multibody system is represented by the triplet $\nu = ({}^{\mathcal{I}}\dot{p}_{B}, {}^{\mathcal{I}}\omega_{B}, \dot{s})$ composed of the linear and angular velocity of the base frame with respect to the inertial frame along with the vector of joint velocities $\dot{s}$. The Jacobian $J_{A}(q)$ is the map between the robot velocity $\nu$ and the linear and angular velocities ${}^{\mathcal{I}}\mathrm{v}_{A} = ({}^{\mathcal{I}}\dot{p}_{A}, {}^{\mathcal{I}}\omega_{A})$ of the frame $A$, i.e.:

$$ {}^{\mathcal{I}}\mathrm{v}_{A} = J_{A}(q)\,\nu . \qquad (1) $$

The Jacobian matrix is composed of a linear part $J_{A}^{\mathrm{lin}}$ and an angular part $J_{A}^{\mathrm{ang}}$. The velocity vector of a frame is made up of the linear part ${}^{\mathcal{I}}\dot{p}_{A}$ and the angular part ${}^{\mathcal{I}}\omega_{A}$.
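As an illustration of the velocity map in (1), the following sketch (hypothetical dimensions and a deliberately simplified Jacobian, not the paper's implementation) shows how a floating-base velocity vector is mapped to the stacked linear and angular velocity of a frame:

```python
import numpy as np

n_joints = 4  # illustrative number of joints

# Floating-base velocity nu: base linear (3) + base angular (3) + joint rates (n)
nu = np.concatenate([
    [0.1, 0.0, 0.0],      # base linear velocity
    [0.0, 0.0, 0.2],      # base angular velocity
    np.zeros(n_joints),   # joint velocities
])

# Frame Jacobian J_A(q), shape (6, 6 + n): rows 0-2 linear, rows 3-5 angular.
# Simplified case: frame A coincides with the base origin, joints at rest.
J_A = np.zeros((6, 6 + n_joints))
J_A[:3, :3] = np.eye(3)   # base translation moves the frame directly
J_A[3:, 3:6] = np.eye(3)  # base rotation rotates the frame directly

v_A = J_A @ nu                   # stacked velocity of frame A, eq. (1)
lin_A, ang_A = v_A[:3], v_A[3:]  # linear and angular parts
```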

II-B Kinematic Motion Retargeting

The two main methods for the retargeting of human motions at the kinematic level are the configuration space retargeting and the task space retargeting.

II-B1 Configuration space retargeting

The architecture shown in Fig. 2 represents a typical configuration space retargeting scheme [19, 25]. The measurements of the human motion are given as input to an inverse kinematics based method along with the human model. On the output side we retrieve the human joint angles and velocities $s^{H}, \dot{s}^{H}$. Later, a mapping step morphs them into the robot joint angles and velocities $s^{R}, \dot{s}^{R}$.

Fig. 2: Typical configuration space retargeting scheme.

Some of the key limitations of this approach are: i) finding a customized mapping: we should apply the constraints of the robot joints to $s^{H}, \dot{s}^{H}$ and find a customized offset and scaling factor for each robot joint with respect to the corresponding human joint; ii) dissimilarity of the human and the robot kinematics: the robot kinematics can differ from the human's; for example, the human shoulder is a spherical joint, while a robot's shoulder is typically three revolute joints in some order. Therefore, at this step we should use forward kinematics to find the relative rotation between the chest link frame and the upper-arm link frame of the human, and then apply inverse kinematics to find the robot's joint angles and velocities; iii) different human kinematics: different human subjects have different physical properties, which results in different human models.
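The per-joint mapping of limitation (i) can be sketched as an affine map with joint-limit clamping (the offset, scale, and limits below are illustrative values, not taken from the paper):

```python
import numpy as np

def map_joint(q_human, offset, scale, q_min, q_max):
    """Affine retargeting of one human joint angle to a robot joint,
    clamped to the robot's joint limits (angles in radians)."""
    return float(np.clip(offset + scale * q_human, q_min, q_max))

# e.g., a robot elbow with a smaller range of motion than the human elbow
q_robot = map_joint(q_human=2.0, offset=0.0, scale=1.0,
                    q_min=0.0, q_max=1.6)   # clamped to the 1.6 rad limit
```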

II-B2 Task space retargeting

The architecture shown in Fig. 3 represents a typical task space retargeting scheme. In this approach, the human link measurements in Cartesian space are mapped to the robot's Cartesian space at the first step [21, 9]. A common choice for such a mapping is a fixed proportion between the human and the robot geometry, e.g., the human's wrist rotation is mapped equally to the robot end-effector orientation, and the robot's end-effector position is obtained as ${}^{\mathcal{I}}p^{R*} = \alpha\,{}^{\mathcal{I}}p^{H}$, where $\alpha$ is a scaling factor. A heuristic to find $\alpha$ is provided in [21]. Later, the optimization problem, i.e., inverse kinematics, is solved with the robot's model.
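A minimal sketch of the fixed-proportion position mapping (the arm-length-ratio choice of $\alpha$ here is only an assumed heuristic, not necessarily the one of [21]):

```python
import numpy as np

def retarget_position(p_human, human_reach, robot_reach):
    """Scale a human hand position into the robot workspace
    by a fixed proportion alpha of the two reach lengths."""
    alpha = robot_reach / human_reach   # assumed heuristic scaling factor
    return alpha * np.asarray(p_human, dtype=float)

# orientation is passed through unchanged; position is scaled
p_des = retarget_position([0.60, 0.10, 0.30],
                          human_reach=0.75, robot_reach=0.30)
```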

Fig. 3: Typical task space retargeting scheme.

Some of the key limitations of this approach are: i) workspace or precision limits: the workspace of the robot may be narrowed for reaching some of the far points, or precision may be lost in fine manipulation tasks; ii) robot internal configuration dissimilarity: the robot internal configuration may not be similar to the human's, i.e., the degrees-of-freedom problem. This causes psychological discomfort, as the user or the people interacting with the robot may not be able to predict the robot's motions because of non-anthropomorphic motions that depend on the parameters of the optimization problem [18]. Moreover, precise control of the internal configuration becomes essential when the robot acts in cluttered environments to avoid obstacles.

III Methods

Fig. 4: The architecture of the whole-body teleoperation with active human motion retargeting.

III-A Whole-Body Teleoperation Architecture

We propose a whole-body teleoperation architecture as shown in Fig. 4. The human user receives visual feedback from the robot environment by streaming the robot camera images through the Oculus Headset. The robot hands are controlled via the Joypads. The human locomotion information, i.e., the linear and angular velocities, is obtained from the Cyberith Virtualizer VR Treadmill. Additionally, the human wears a sensorized full-body suit from Xsens technologies to obtain the kinematic information of the various human links with respect to the inertial frame of reference.

III-B Kinematic Whole-Body Human Motion Retargeting

We perform the whole-body retargeting by geometrically mapping anthropomorphic motions of human links to the corresponding robot links. Fig. 5 shows our proposed method for the whole-body retargeting of human motions. Similar to the task space retargeting introduced in Section II-B, we formulate the retargeting problem as an inverse kinematics problem given only the rotation and angular velocity of the human links and the robot's URDF model. In our case, the customized mapping between each link of the human and the robot is done with a constant rotation ${}^{H}R_{R}$, applied directly on the robot's URDF model for ease of implementation. An additional benefit of this approach is that the robot's link properties and joint types are taken into account through the URDF model. Therefore, a change of the human subject or the robot geometry does not affect the retargeting of the human motions to the robot, i.e., the proposed retargeting method increases scalability, enabling application to different human subjects or robots with minimal effort.

Fig. 5: Block diagram of kinematic whole-body motion retargeting.

In this formalization, the frame in which an individual human link's rotation and angular velocity measurements are expressed should coincide with the corresponding robot link's frame. As an example, the link frame definitions of both the human links and the robot links are highlighted in Fig. 4. The frame equivalence from the human to the robot links is indicated by the numbering. In this case, by identifying manually the relative rotation between the human link frames and the corresponding robot link frames, we obtain the robot's desired motion in the frame attached to the robot. Given the rotation from the human link frames to the inertial frame, ${}^{\mathcal{I}}R_{H_i}$, and the constant rotation from the robot link frames to the human link frames, ${}^{H_i}R_{R_i}$, equation (2) provides the rotation from the desired robot link frames to the inertial frame:

$$ {}^{\mathcal{I}}R_{R_i}^{*} = {}^{\mathcal{I}}R_{H_i}\,{}^{H_i}R_{R_i} . \qquad (2) $$

The fixed rotation ${}^{H_i}R_{R_i}$ is computed offline by positioning both the robot and the human models in a similar joint configuration, as highlighted in Fig. 4.
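A minimal sketch of this calibration and mapping step (rotation matrices as 3×3 NumPy arrays; the calibration poses are illustrative): the constant offset is identified once, with both models posed alike, and then reused online.

```python
import numpy as np

def calibrate_offset(R_I_H, R_I_R):
    """Offline: with human and robot posed in the same configuration,
    compute the constant rotation from robot link frame to human link frame."""
    return R_I_H.T @ R_I_R            # {}^{H}R_{R}

def retarget_orientation(R_I_H, R_H_R):
    """Online: desired robot link orientation, eq. (2)."""
    return R_I_H @ R_H_R              # {}^{I}R_{R}* = {}^{I}R_{H} {}^{H}R_{R}

# calibration pose: human link rotated 90 deg about z, robot link at identity
R_z90 = np.array([[0., -1., 0.],
                  [1.,  0., 0.],
                  [0.,  0., 1.]])
R_offset = calibrate_offset(R_z90, np.eye(3))

# online: the same human pose maps back to the robot's calibration pose
R_des = retarget_orientation(R_z90, R_offset)
```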

Once the motion of the robot links is correctly extracted, the robot joint positions $s^{R}$ are found by formulating the inverse kinematics as an optimization problem [29, 28]. To solve the inverse kinematics problem, we benefit from a dynamical optimization method that ensures the convergence of the frame orientation errors to a minimum [30]. We define the following dynamical system:

$$ e_{\omega} + K\,e_{R} = 0 \qquad (3) $$

in which $K$ is the positive-definite gain matrix, and $e_{R}$ and $e_{\omega}$ are respectively the vectors collecting the orientation and angular velocity errors, defined as follows:

$$ e_{R} = \begin{bmatrix} e_{R_1}^{\top} & \dots & e_{R_n}^{\top} \end{bmatrix}^{\top}, \qquad e_{\omega} = \begin{bmatrix} e_{\omega_1}^{\top} & \dots & e_{\omega_n}^{\top} \end{bmatrix}^{\top} \qquad (4) $$

$$ e_{R_i} = \left( \mathrm{sk}\!\left( {}^{\mathcal{I}}R_{R_i}^{*}\,{}^{\mathcal{I}}R_{R_i}^{\top} \right) \right)^{\vee}, \qquad e_{\omega_i} = {}^{\mathcal{I}}\omega_{R_i}^{*} - {}^{\mathcal{I}}\omega_{R_i} \qquad (5) $$

where $e_{R_i}$ and $e_{\omega_i}$ are the errors computed for the $i$-th link, and $R_i$ is the $i$-th link of the robot. Hence, the joint velocities can be found by solving the following optimization problem:

$$ \nu^{*} = \operatorname*{arg\,min}_{\nu}\; \sum_{i} \left\| J_{R_i}^{\mathrm{ang}}(q)\,\nu - \left( {}^{\mathcal{I}}\omega_{R_i}^{*} + K\,e_{R_i} \right) \right\|^{2} + \lambda \left\| \dot{s} \right\|^{2} \quad \text{s.t.}\; A\nu \le b \qquad (6) $$

in which $\lambda \left\| \dot{s} \right\|^{2}$ is the regularization term, and the joint limits are encoded with the linear inequality constraint $A\nu \le b$. Finally, the robot's desired joint positions $s^{R}$ are found by integrating $\dot{s}$. To solve the optimization problem we rely on a quadratic programming (QP) library [31].
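The velocity-level IK loop can be sketched on a toy one-joint robot (plain regularized least squares stands in for the QP of [31]; the gain, step size, and model are illustrative):

```python
import numpy as np

def Rz(theta):
    """Rotation about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])

def rotation_error(R_des, R_cur):
    """Vee of the skew-symmetric part of R_des R_cur^T."""
    E = R_des @ R_cur.T
    S = 0.5 * (E - E.T)
    return np.array([S[2, 1], S[0, 2], S[1, 0]])

# toy robot: a single revolute joint about z; angular Jacobian is constant
J = np.array([[0.], [0.], [1.]])
K, lam, dt = 10.0, 1e-6, 0.01
s = 0.0                    # joint position
R_des = Rz(1.0)            # desired link orientation (1 rad about z)

for _ in range(300):
    e_R = rotation_error(R_des, Rz(s))
    # min ||J s_dot - K e_R||^2 + lam ||s_dot||^2  (regularized least squares)
    s_dot = np.linalg.solve(J.T @ J + lam * np.eye(1), J.T @ (K * e_R))
    s += s_dot[0] * dt     # integrate joint velocity to joint position

# s has converged close to the 1.0 rad target
```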

Our proposed method allows retargeting human motions to a robot even if their kinematics are not similar, i.e., when a humanoid robot's limb and the human's corresponding limb have different degrees of freedom. Moreover, it enables anthropomorphic motion retargeting, as the robot mimics the human's link motions.

IV Retargeting Experiments & Results

The human limbs' rotation and angular velocity are captured in real time using Xsens motion capture technology, which involves several MEMS-based inertial sensors placed on various body parts. The whole-body retargeting experiments are performed with motion data captured from two human subjects. To demonstrate the scalability and usability of our proposed method, we perform kinematic retargeting with robots having different degrees of freedom (DoFs). The robot models we considered are: a) the iCub humanoid robot with 32 DoFs, b) the NAO humanoid robot with 24 DoFs, and c) the Atlas humanoid robot with 30 DoFs. To show that our method is not limited to humanoid robots, we perform a retargeting scenario with the Baxter dual-arm robot with 15 DoFs. Additionally, we show the retargeting with a human model that has 66 DoFs. Fig. 6 highlights kinematic retargeting with different models and human subjects, shown using the Rviz kinematic visualization tool. The first row corresponds to the first subject standing on the right foot, and the second row corresponds to the second subject standing on the left foot. Concerning the Baxter robot, the retargeting is done only for the arms and the head, as it is a fixed-base robot.

Fig. 6: Rviz visualization of whole-body retargeting of human subjects motion to different models: a) Human Model b) Nao c) iCub d) Atlas e) Baxter; top: human subject stands on the right foot, bottom: human subject stands on the left foot.
Fig. 7: Performance of the whole-body retargeting of the human motions to the iCub robot.

The orientation ${}^{\mathcal{I}}R_{H_i}$ of the human links is obtained from the Xsens measurements. The solution of the inverse kinematics problem formulated in Section III-B along with the robot model provides the joint values and velocities of the robot. To compute the robot's achieved link orientation ${}^{\mathcal{I}}R_{R_i}$, we use floating-base forward kinematics employing the robot's joint values. Fig. 7, on the top, shows the human's and the robot's right arm elbow joint values. Indeed, when the user moves his elbow into a configuration that is not feasible for the robot, as can be seen at the indicated time instant, the inverse kinematics finds a feasible solution that tries to minimize the rotation error between the human frames and the iCub robot frames, see (6). Moreover, according to the robot model, the robot's kinematics may not resemble the human's between two consecutive links, e.g., the robot has fewer DoFs than the human, or the order/orientation of the joints between two consecutive links is different. In this case, we use an error measure to evaluate the discrepancy between the robot's link orientation and the human's. Fig. 7, on the bottom, shows the human's right lower-leg link rotation matrix and the corresponding one of the iCub robot computed through kinematic whole-body retargeting. For the sake of comprehension, the rotation matrix is parametrized using Euler angles expressed as a series of intrinsic rotations.
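When the robot cannot reproduce a human link orientation exactly, the residual mismatch can be scored, for instance, with the geodesic angle between the two rotation matrices (an illustrative scalar measure, not necessarily the one used in the paper):

```python
import numpy as np

def geodesic_angle(R_a, R_b):
    """Angle (radians) of the relative rotation R_a R_b^T."""
    cos_theta = (np.trace(R_a @ R_b.T) - 1.0) / 2.0
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

R_human = np.array([[0., -1., 0.],   # human link rotated 90 deg about z
                    [1.,  0., 0.],
                    [0.,  0., 1.]])
err = geodesic_angle(R_human, np.eye(3))   # pi/2 for this pair
```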

V Teleoperation Experiments & Results

Towards demonstrating the capabilities of our whole-body retargeting, we perform teleoperation experiments using two state-of-the-art whole-body controllers for humanoid robots. The whole-body teleoperation experiments are carried out with the 53 degrees of freedom iCub humanoid robot [32]. The controllers and the retargeting application run on a machine with an Intel Core i7 processor and 8 GB of RAM.

V-A Whole-Body Teleoperation with Balancing Controller

Momentum-based control [33, 34] has proved effective for maintaining the robot's stability by controlling the robot's momentum as the primary objective. Additionally, a postural task projected into the nullspace of the primary task can be used to perform additional tasks, like manipulation, while ensuring the stability of the robot. The control problem is formulated as an optimization problem to achieve the two tasks while carefully monitoring and regulating the contact wrenches within their feasible domains, by resorting to quadratic programming (QP) solvers.

We considered one such momentum-based balancing controller [33] and extended the postural task by giving the joint references from whole-body retargeting. Fig. 8 shows snapshots from the experiments of the whole-body retargeting with the balancing controller.

Fig. 8: Whole-body retargeting with balancing controller snapshots

In this experiment the robot balances on the left foot and maintains the stability of its center of mass, as shown in Fig. 9. Additionally, it tracks all the joints with the references coming from whole-body retargeting. The vertical dashed lines correspond to the experimental snapshots indicated in Fig. 8. The references to the horizontal components of the CoM are close to zero to maintain the stability of the robot by keeping the CoM inside the support polygon, and the gains are tuned to achieve good tracking. The CoM motion along the vertical axis does not affect the stability of the robot, and the corresponding gain value is kept lower in order to allow vertical movements of the robot during retargeting. The input joint references from retargeting are smoothed through a minimum-jerk trajectory [35]. A smoothing time parameter is tuned in order to achieve a good balance between postural tracking and stability. Accordingly, joints that the human does not move fast while balancing on the left foot, such as the torso pitch, torso roll, and left knee, achieve good tracking. On the other hand, joints such as the right shoulder pitch, right shoulder roll, and left ankle pitch are moved frequently during retargeting, and hence the tracking is not as close, owing to the delay introduced by the smoothing time of the minimum-jerk joint references. Ideally, the smoothing time can be kept lower considering that we receive continuous joint references from retargeting. At this point, we did not conduct exhaustive tests to find the lower threshold for the smoothing parameter that ensures fast and accurate retargeting of dynamic motions from the human while maintaining the robot's stability.
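The minimum-jerk smoothing of the joint references follows the classical quintic profile with zero boundary velocity and acceleration; a scalar sketch (the smoothing time T stands for the tunable parameter mentioned above):

```python
import numpy as np

def min_jerk(q0, qf, T, t):
    """Minimum-jerk interpolation from q0 to qf over duration T,
    with zero velocity and acceleration at both ends."""
    tau = np.clip(t / T, 0.0, 1.0)
    return q0 + (qf - q0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

# a new joint reference of 0.5 rad reached smoothly over a 1.0 s window
q_mid = min_jerk(0.0, 0.5, T=1.0, t=0.5)   # exactly the midpoint, 0.25 rad
```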

Fig. 9: Tracking of the center of mass and the joint angles during the whole-body retargeting with balancing controller; blue line represents the desired quantity, orange line is the actual robot quantity.

V-B Whole-Body Teleoperation with Walking Controller

Humanoid robot walking is another challenging control paradigm. Divergent-Component-of-Motion (DCM) based control architectures proved promising for humanoid robot locomotion [23, 36]. The architecture typically consists of three layers: 1) Trajectory generation and optimization layer that generates the desired footsteps and the DCM trajectories [36]; 2) Simplified model control layer that implements an instantaneous control law with the objective of stabilizing the unstable DCM dynamics; and 3) Whole-body control layer that guarantees the tracking of the robot’s set of tasks, including the Cartesian tasks and the postural tasks, using the stack-of-tasks paradigm implemented through a quadratic programming (QP) formalism.
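The simplified-model layer can be sketched with the DCM dynamics of the linear inverted pendulum, $\dot{\xi} = \omega_0(\xi - p_{\mathrm{zmp}})$: the instantaneous control law picks the ZMP that makes the DCM error decay (the pendulum frequency and gain below are illustrative, not taken from [23, 36]):

```python
import numpy as np

omega0 = np.sqrt(9.81 / 0.5)   # assumed LIP frequency for a 0.5 m CoM height

def dcm_control(xi, xi_des, xi_dot_des, k_dcm=2.0):
    """Instantaneous DCM control: choose the ZMP so that the closed-loop
    dynamics become xi_dot = xi_dot_des - k_dcm * (xi - xi_des)."""
    return xi - xi_dot_des / omega0 + (k_dcm / omega0) * (xi - xi_des)

# one-dimensional check: with a 0.1 m DCM error, the commanded ZMP
# produces a decaying error through xi_dot = omega0 * (xi - zmp)
zmp = dcm_control(xi=0.1, xi_des=0.0, xi_dot_des=0.0)
xi_dot = omega0 * (0.1 - zmp)   # equals -k_dcm * 0.1 = -0.2
```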

We considered one such DCM based walking controller [23] and extended the postural task by giving the joint references from whole-body retargeting. Fig. 10 shows the three different experimental stages of whole-body retargeting with the walking controller. During the first and the third stages the robot is in double support standstill phase while during the second stage the robot is in walking phase.

Fig. 10: Whole-body retargeting with walking controller experimental stages: 1) first stage, 2) second stage, 3) third stage.

The walking controller's primary objective is to track the desired trajectory of the horizontal components of the center of mass. The overall center of mass tracking is very good for the entire duration of the experiment, as shown in Fig. 11.

Currently, we engage only the upper-body retargeting, and the lower body is controlled by the walking controller. During our experiments we observed that the weights for achieving satisfactory upper-body retargeting of the postural task during the double-support standstill phase and the walking phase are different. Keeping the same retargeting gains while walking leads to uncoordinated movements, eventually compromising the robot's stability. So, we choose higher retargeting gains during the double-support standstill phase, and the gain values are set to zero during the walking phase. The transition between the two sets of weights is achieved smoothly through minimum-jerk trajectories [35]. Fig. 11 highlights the tracking for some of the upper-body joints. The blue line represents the desired joint position provided by human motion retargeting and the orange line is the actual robot joint position. The purple vertical dashed line indicates the start of the second stage, i.e., walking, and the green vertical dashed line indicates the end of the walking phase. During the first stage, human motion retargeting is good and the joint position error is low. Instead, during the second stage, as the robot starts walking, the joint position error is higher because the retargeting gains are set to zero.

Fig. 11: Tracking of the center of mass and the joint angles during the whole-body retargeting with walking controller.

VI Conclusions & Future Work

In this paper, we propose and validate a whole-body teleoperation framework for humanoid robots, leveraging the geometric retargeting of motion from human body parts to the analogous humanoid robot parts. The proposed approach increases the usability by employing solely the robot’s model and by considering the orientation and angular velocity measurements from the human links.

The proposed retargeting approach has been applied to multiple robot models using motion data from multiple human subjects. Furthermore, we performed active retargeting experiments during bipedal balancing and locomotion tasks using two state-of-the-art whole-body controllers for humanoid robots. Our experimental validation strongly supports our proposed framework.

Currently, in the balancing controller the center of mass references are independent from the human while the retargeting is done in the postural space. Additionally, in the walking controller, we restrict ourselves to do only upper-body retargeting in postural space. The center of mass and feet trajectory references are independent from the human. In the future work, we will extend our framework to address the above limitations.


  • [1] P. F. Hokayem and M. W. Spong, “Bilateral teleoperation: An historical survey,” Automatica, vol. 42, no. 12, pp. 2035–2057, 2006.
  • [2] K. B. Shimoga, “A survey of perceptual feedback issues in dexterous telemanipulation. ii. finger touch feedback,” in Proceedings of IEEE Virtual Reality Annual International Symposium, Seattle, WA, USA, 1993.
  • [3] J. Trevelyan, W. R. Hamel, and S.-C. Kang, “Robotics in hazardous applications,” in Springer handbook of robotics.   Springer, 2016, pp. 1521–1548.
  • [4] S. Tachi, H. Arai, and T. Maeda, “Development of an anthropomorphic tele-existence slave robot,” in Proceedings of the International Conference on Advanced Mechatronics (ICAM), vol. 385, May 1989, p. 390.
  • [5] J. Burgner-Kahrs, D. C. Rucker, and H. Choset, “Continuum robots for medical applications: A survey,” IEEE Transactions on Robotics, vol. 31, no. 6, pp. 1261–1280, 2015.
  • [6] L. Pedersen, D. Kortenkamp, D. Wettergreen, I. Nourbakhsh, and D. Korsmeyer, “A survey of space robotics,” in Proceedings of the 7th International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS-03), May 2003.
  • [7] M. A. Goodrich and A. C. Schultz, “Human-robot interaction: a survey,” Foundations and Trends® in Human–Computer Interaction, vol. 1, no. 3, pp. 203–275, 2007.
  • [8] M. Zucker, S. Joo, M. X. Grey, C. Rasmussen, E. Huang, M. Stilman, and A. Bobick, “A general-purpose system for teleoperation of the drc-hubo humanoid robot,” Journal of Field Robotics, vol. 32, no. 3, pp. 336–351, 2015.
  • [9] Y. Ishiguro, K. Kojima, F. Sugai, S. Nozawa, Y. Kakiuchi, K. Okada, and M. Inaba, “High speed whole body dynamic motion experiment with real time master-slave humanoid robot system,” in 2018 IEEE International Conference on Robotics and Automation (ICRA), May 2018, pp. 1–7.
  • [10] J. M. Beer, A. D. Fisk, and W. A. Rogers, “Toward a framework for levels of robot autonomy in human-robot interaction,” Journal of Human-Robot Interaction, vol. 3, no. 2, pp. 74–99, 2014.
  • [11] A. Steinfeld, T. Fong, D. Kaber, M. Lewis, J. Scholtz, A. Schultz, and M. Goodrich, “Common metrics for human-robot interaction,” in Proceedings of the 1st ACM SIGCHI/SIGART conference on Human-robot interaction.   ACM, 2006, pp. 33–40.
  • [12] Y. Ishiguro, K. Kojima, F. Sugai, S. Nozawa, Y. Kakiuchi, K. Okada, and M. Inaba, “Bipedal oriented whole body master-slave system for dynamic secured locomotion with lip safety constraints,” in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sep. 2017, pp. 376–382.
  • [13] A. Wang, J. Ramos, J. Mayo, W. Ubellacker, J. Cheung, and S. Kim, “The hermes humanoid system: A platform for full-body teleoperation with balance feedback,” in 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), Nov 2015.
  • [14] S. Lichiardopol, “A survey on teleoperation,” Technische Universitat Eindhoven, DCT report, 2007.
  • [15] S. Islam, P. X. Liu, A. E. Saddik, and Y. B. Yang, “Bilateral control of teleoperation systems with time delay,” IEEE/ASME Transactions on Mechatronics, vol. 20, no. 1, pp. 1–12, Feb 2015.
  • [16] N. Chopra, M. W. Spong, S. Hirche, and M. Buss, “Bilateral teleoperation over the internet: the time varying delay problem,” in Proceedings of the 2003 American Control Conference, vol. 1, Denver, CO, USA, June 2003, pp. 155–160.
  • [17] J. Ramos and S. Kim, “Humanoid dynamic synchronization through whole-body bilateral feedback teleoperation,” IEEE Transactions on Robotics, vol. 34, no. 4, pp. 953–965, Aug 2018.
  • [18] M. V. Liarokapis, P. Artemiadis, C. Bechlioulis, and K. Kyriakopoulos, “Directions, methods and metrics for mapping human to robot motion with functional anthropomorphism: A review,” School of Mechanical Engineering, National Technical University of Athens, Tech. Rep, 2013.
  • [19] K. Ayusawa and E. Yoshida, “Motion retargeting for humanoid robots based on simultaneous morphing parameter identification and motion optimization,” IEEE Transactions on Robotics, vol. 33, no. 6, pp. 1343–1357, 2017.
  • [20] C. Stanton, A. Bogdanovych, and E. Ratanasena, “Teleoperation of a humanoid robot using full-body motion capture, example movements, and machine learning,” in Proc. Australasian Conference on Robotics and Automation, 2012.
  • [21] M. Elobaid, Y. Hu, G. Romualdi, S. Dafarra, J. Babic, and D. Pucci, “Telexistence and teleoperation for walking humanoid robots,” in Proceedings of SAI Intelligent Systems Conference.   Springer, 2019, pp. 1106–1121.
  • [22] R. M. Pierce and K. J. Kuchenbecker, “A data-driven method for determining natural human-robot motion mappings in teleoperation,” in 2012 4th IEEE RAS &amp; EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), June 2012, pp. 169–176.
  • [23] G. Romualdi, S. Dafarra, Y. Hu, and D. Pucci, “A Benchmarking of DCM Based Architectures for Position and Velocity Controlled Walking of Humanoid Robots,” in 2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids), Nov. 2018, pp. 1–9.
  • [24] S. Feng, E. Whitman, X. Xinjilefu, and C. G. Atkeson, “Optimization-based full body control for the darpa robotics challenge,” Journal of Field Robotics, vol. 32, no. 2, pp. 293–312, 2015.
  • [25] L. Penco, B. Clement, V. Modugno, E. Mingo Hoffman, G. Nava, D. Pucci, N. G. Tsagarakis, J.-B. Mouret, and S. Ivaldi, “Robust real-time whole-body motion retargeting from human to humanoid,” in 2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids), Nov 2018.
  • [26] X. B. Peng, P. Abbeel, S. Levine, and M. van de Panne, “DeepMimic: Example-guided deep reinforcement learning of physics-based character skills,” CoRR, vol. abs/1804.02717, 2018.
  • [27] R. Featherstone, Rigid Body Dynamics Algorithms.   Secaucus, NJ, USA: Springer-Verlag New York, Inc., 2007.
  • [28] C. Latella, M. Lorenzini, M. Lazzaroni, F. Romano, S. Traversaro, M. A. Akhras, D. Pucci, and F. Nori, “Towards real-time whole-body human dynamics estimation through probabilistic sensor fusion algorithms,” Autonomous Robots, pp. 1–13, 2018.
  • [29] L. Sciavicco and B. Siciliano, “A solution algorithm to the inverse kinematic problem for redundant manipulators,” IEEE Journal on Robotics and Automation, vol. 4, no. 4, pp. 403–410, Aug 1988.
  • [30] L. Rapetti, Y. Tirupachuri, K. Darvish, C. Latella, and D. Pucci, “Model-Based Real-Time Motion Tracking using Dynamical Inverse Kinematics,” arXiv e-prints, p. arXiv:1909.07669, Sep 2019.
  • [31] B. Stellato, G. Banjac, P. Goulart, A. Bemporad, and S. Boyd, “OSQP: An operator splitting solver for quadratic programs,” arXiv e-prints, Nov. 2017.
  • [32] L. Natale, C. Bartolozzi, D. Pucci, A. Wykowska, and G. Metta, “iCub: The not-yet-finished story of building a robot child,” Science Robotics, vol. 2, no. 13, p. eaaq1026, 2017.
  • [33] G. Nava, F. Romano, F. Nori, and D. Pucci, “Stability analysis and design of momentum-based controllers for humanoid robots,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2016, pp. 680–687.
  • [34] A. Herzog, L. Righetti, F. Grimminger, P. Pastor, and S. Schaal, “Balancing experiments on a torque-controlled humanoid with hierarchical inverse dynamics,” in 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems.   IEEE, 2014, pp. 981–988.
  • [35] U. Pattacini, F. Nori, L. Natale, G. Metta, and G. Sandini, “An experimental evaluation of a novel minimum-jerk Cartesian controller for humanoid robots,” in 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2010.
  • [36] J. Englsberger, C. Ott, and A. Albu-Schäffer, “Three-Dimensional Bipedal Walking Control Based on Divergent Component of Motion,” IEEE Transactions on Robotics, vol. 31, no. 2, pp. 355–368, 2015.