Haptic teleoperation plays an important role in robotic applications such as robot-assisted surgery and nuclear facility decommissioning. Vision-based teleoperation can be enhanced by haptic rendering, and haptic feedback also aids visual perception when vision systems lose efficacy in special cases; e.g., lidar can produce wrong terrain maps when the terrain is covered by liquid or grass. Exploring the environment with tactile sensors can mitigate such errors in vision system results. A number of haptic teleoperation works with single-arm robots can be found in the literature, whereas teleoperating quadruped robots for manipulation and exploration has received little attention.
Researchers have been working on taking full advantage of multi-legged robots to make them more versatile. For humanoid robots it is natural to manipulate using the arms [jenkins2006uncovering][atkeson2015no]. Equipping a quadruped robot with an additional arm dedicated to manipulation tasks is a common way to enhance locomanipulation [rehman2016towards][spotmini][bellicoso2019alma], because the leg of a quadruped usually has neither enough DoFs nor a sufficiently large workspace to be competent at six-dimensional manipulation in Cartesian space. Whole-body controllers can enhance foot posture control by coordinating the leg and the base, which greatly extends the versatility of quadruped robots in real-world deployment.
A fully optimization-based whole-body control approach called hierarchical quadratic programming (HQP) has been widely used on legged robots [de2010feature][escande2014hierarchical][Herzog2016][DarioBellicoso2016]. Multiple tasks are encoded by sequential, strictly null-space-prioritized QPs, which solve for the torque commands while taking into account the joint space dynamics and physical constraints. Solving several QPs online is still time consuming, although decomposition methods have been developed to reduce the number of decision variables; usually three QPs have to be solved for dynamic feasibility, operational tasks and energy minimization [bellicoso2019alma]. Alternatively, a non-strictly prioritized weighted QP (WQP) avoids solving sequential QPs by stacking the task equations into a weighted quadratic cost function [de2009prioritized][Feng2015].
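To illustrate the WQP idea, the sketch below stacks weighted task equations into one least-squares problem. The task matrices and weights are hypothetical toy values, not any controller from the literature; a real WQP would add the dynamics model and inequality constraints.

```python
import numpy as np

def weighted_task_qp(tasks):
    """Solve min_x sum_i w_i * ||A_i x - b_i||^2 by stacking the
    weighted task equations into a single least-squares problem.
    tasks: list of (A_i, b_i, w_i) tuples."""
    A = np.vstack([np.sqrt(w) * Ai for Ai, bi, w in tasks])
    b = np.concatenate([np.sqrt(w) * bi for Ai, bi, w in tasks])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Two conflicting tasks on one scalar decision variable:
# task 1 wants x = 1 (weight 9), task 2 wants x = 0 (weight 1).
tasks = [(np.array([[1.0]]), np.array([1.0]), 9.0),
         (np.array([[1.0]]), np.array([0.0]), 1.0)]
x = weighted_task_qp(tasks)
print(round(float(x[0]), 3))  # weighted compromise: 0.9
```

Unlike strict HQP, the high-weight task is traded off against the low-weight one rather than being satisfied exactly.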
In [Xin2018], by projecting the dynamic equations into the constraint-free space, we derived the operational space dynamics for floating base robots and then obtained analytical Cartesian impedance controllers. Although a QP optimization is still required to handle inequality constraints, the analytical Cartesian impedance controller gives us the ability to set gains based on the operational space inertia [Angelini2019] and, because the gains are properly scaled, to estimate external forces. In this paper, we apply that analytical method to haptic teleoperation for sensing and manipulation.
Since the hierarchical controller sacrifices low-level tasks to satisfy high-level tasks, the base may move outside the support polygon when the foot is operated as the primary task. To avoid such failures, we need to set boundaries that restrict the teleoperated movement. The commanded end-effector forces should be bounded based on physical criteria, as too high an acting force may cause the supporting feet to slip and/or the robot to lose balance due to the moment around a support polygon edge generated by the acting force. Papers on motion planning have discussed balance maintenance for multi-contact cases [harada2006dynamics][dai2014whole]. This paper presents how to appropriately restrict operator commands by mapping the end-effector feasible wrench polytope (FWP) [ferrolho2019comparing][orsolino2018application] to joystick boundaries.
The contribution of this paper is twofold. Firstly, we extend our hierarchical Cartesian impedance controller to foot posture teleoperation, and experimentally show that the impedance controller is accurate enough to estimate contact forces. Secondly, we bound the teleoperation joystick area with respect to the physical constraints to ensure safe teleoperation. To the best of the authors' knowledge, this is the first haptic teleoperation of foot posture for quadruped robots in the literature.
II Control framework
The joint space dynamics equation of a quadruped robot is

M(q)\ddot{q} + h(q, \dot{q}) = S^T \tau + J_c^T \lambda,    (1)

where q denotes the vector of actuated joint positions and floating base position and orientation, M is the inertia matrix, h is the generalized vector containing Coriolis, centrifugal and gravitational effects, \tau is the vector of torques, J_c is the constraint Jacobian that describes the linearly independent constraints imposed by the supporting feet, \lambda are the constraint forces acting on the supporting feet, and S is the selection matrix built from the identity matrix of actuated-joint dimension.
Eq. (1) is subject to physical constraints on the joint torques and contact forces,
II-A Cartesian impedance controller
In [mistry2012operational], Mistry et al. suggested using a null-space projector P, which annihilates the constraint Jacobian (P J_c^T = 0), to split (1) into two orthogonal subspaces,

P M \ddot{q} + P h = P S^T \tau,    (7)

(I - P)(M \ddot{q} + h) = (I - P) S^T \tau + J_c^T \lambda.    (8)
Eq. (7) represents the motion space, which is not affected by the contact forces of the supporting feet, while Eq. (8) represents the dynamics in the constraint space. The advantage of this separation is that we can derive an analytical Cartesian impedance controller from Eq. (7), and solve a QP optimization based on Eq. (8) to generate an extra torque command satisfying the constraints of Eqs. (4-6). The final torque command is composed of two parts as follows,
where the first right-hand-side term is the Cartesian impedance control and the second is the result of the QP optimization. The Jacobian matrix of the swing foot appears here, and its dimension depends on the control task; since we want to control both the position and orientation of the foot, it has six rows. The impedance controllers for the swing foot and the base are derived from the operational space dynamics. The dynamically consistent null-space projector [mistry2012operational] of the swing foot enforces strictly hierarchical priorities. In the case of foot posture control, the projector deals with the overlap between the foot and base Jacobians, which conveniently allows the base Jacobian to remain a fixed-size matrix. As the torso is controlled in the null-space of the swing foot, the torso is forced to satisfy the swing foot's motion requirement, which results in automatic motion coordination and extends the reachability of the foot.
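The defining property of the dynamically consistent null-space projector can be checked numerically: torques passed through the transposed projector produce no task-space acceleration. The sketch below uses random toy matrices (not the robot's model) to verify this.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 9                              # toy number of generalized coordinates
M = rng.standard_normal((n, n))
M = M @ M.T + n * np.eye(n)        # symmetric positive-definite inertia
J = rng.standard_normal((6, n))    # 6-DoF swing-foot task Jacobian

Minv = np.linalg.inv(M)
Lam = np.linalg.inv(J @ Minv @ J.T)   # operational space inertia
Jbar = Minv @ J.T @ Lam               # dynamically consistent pseudoinverse
N = np.eye(n) - Jbar @ J              # dynamically consistent projector

# Null-space torques cause zero acceleration in the task space:
tau0 = rng.standard_normal(n)
acc_task = J @ Minv @ (N.T @ tau0)
print(np.allclose(acc_task, 0))  # True
```

This is exactly why the torso controller, running through the projector, cannot disturb the swing-foot task.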
The generic equation of the Cartesian impedance control law for the swing foot and the base is

F = \Lambda \ddot{x}_d - D \dot{e} - K e + \mu,    (10)

where \mu represents the operational space gravity compensation vector, \Lambda denotes the operational space inertia matrix, D and K are diagonal matrices containing the desired damping and stiffness gains, e = x - x_d is the pose error of the end-effector (either the base or the swing foot), and \ddot{x}_d denotes the desired acceleration of the end-effector. We refer to [Xin2018] for more details of \Lambda and \mu.
The impedance control law Eq. (10) leads to the following impedance behavior under external disturbances,

\Lambda \ddot{e} + D \dot{e} + K e = F_{ext},    (11)

where F_{ext} is the external wrench acting on the end-effector. Obviously, if there is no external disturbance, the robot will track the desired trajectory accurately under the assumption of a perfect model. Conversely, we can measure the motion error and then estimate the external disturbance,

\hat{F}_{ext} = \Lambda \ddot{e} + D \dot{e} + K e.    (13)
Model error always exists on real robot platforms, leading to an estimate caused by both model error and disturbances. If the model error is much smaller than the disturbances, the estimate is accurate enough to serve as a contact force estimator. In our experiments, we employ this estimate as haptic feedback for teleoperation, and thus do not require a force/torque sensor at the point of contact.
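A minimal sketch of such an estimator follows; the gain values are hypothetical, and with noisy velocity and acceleration signals one may keep only the stiffness term K e.

```python
import numpy as np

def estimate_external_force(K, D, Lam, e, e_dot, e_ddot):
    """Estimate the external wrench from the measured motion error,
    using the impedance behaviour Lam*e_ddot + D*e_dot + K*e."""
    return Lam @ e_ddot + D @ e_dot + K @ e

K = np.diag([500.0, 500.0, 800.0])   # hypothetical stiffness gains [N/m]
D = np.diag([40.0, 40.0, 60.0])      # hypothetical damping gains [Ns/m]
Lam = np.eye(3)                      # placeholder task-space inertia

e = np.array([0.0, 0.0, 0.01])       # 1 cm tracking error along z
F = estimate_external_force(K, D, Lam, e, np.zeros(3), np.zeros(3))
print(F[2])  # 8.0 -> the stiffness term alone contributes 8 N along z
```

Dropping the velocity and acceleration terms trades some accuracy for a much smoother haptic rendering signal.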
Note that the torso motion error does not affect the foot position error, because the torso is controlled in the null-space of the manipulation foot. If the base were fixed, the same external force acting on the foot would result in the same motion error as when running our hierarchical whole-body controller with a floating base; the difference exists only in the joint errors. We can therefore measure only the motion of the foot and use Eq. (13) to estimate the external force, without any interference from torso motion error.
II-B QP optimization
The control structure is depicted in Fig. 1, where the inputs of the QP optimization in constraint space are the motion torques and the state feedback; this implies that the motion can affect the contact forces, as both appear in Eq. (8). We solve the forward dynamics of Eq. (7) to express the accelerations in terms of the motion torques, and then substitute them into Eq. (8) to eliminate the accelerations. Since the projector is not invertible, we resort to a substitution trick in Eq. (7), leading to
Therefore, the QP in constraint space with respect to the constraint torques can be defined as

where the abbreviated term stands for the first right-hand-side term of Eq. (15) and is regarded as a constant vector in Eq. (16). The QP may have no solution, which could be resolved by motion planning that takes the physical constraints into account; in the case of teleoperation, however, the motion is generated by the operator. In particular, when the robot interacts with the environment during a manipulation mission, the operator may push the joystick too far, causing the supporting feet to slip. The next section discusses how to set boundaries on the joystick so that the operator cannot break the physical constraints.
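The equality-constrained core of such a QP can be solved exactly through its KKT system, as the toy sketch below shows; the inequality constraints of Eqs. (4-6) would additionally require an active-set or interior-point solver.

```python
import numpy as np

def solve_eq_qp(W, A, b):
    """Solve min 0.5 * tau^T W tau  subject to  A tau = b
    via the KKT system [[W, A^T], [A, 0]] [tau; mu] = [0; b]."""
    n, m = W.shape[0], A.shape[0]
    KKT = np.block([[W, A.T],
                    [A, np.zeros((m, m))]])
    rhs = np.concatenate([np.zeros(n), b])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n]          # discard the multipliers mu

W = np.eye(3)                      # minimize the torque norm
A = np.array([[1.0, 1.0, 1.0]])    # one toy equality constraint
b = np.array([3.0])
tau = solve_eq_qp(W, A, b)
print(tau)  # minimum-norm solution [1. 1. 1.]
```

Because only one small QP of this kind has to be solved per control cycle, the overall controller stays much cheaper than a full HQP cascade.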
III Feasibility boundaries for teleoperation
To avoid failures, the teleoperated robot should not try to execute physically impossible commands. Instead of simply ignoring such infeasible commands, we propose to set motion boundaries on the joystick in order to give the operator early notification when approaching dynamically infeasible regions.
III-A Boundaries on CoM position
To limit the CoM motion, we look for the maximum range in each direction from the current foot position that keeps the CoM within the support polygon. In theory, 6 DoFs control the foot posture while the remaining 3 DoFs control the torso; if the 6 DoFs assigned to the foot are not enough to achieve the desired motion, the null-space projector automatically uses the torso's 3 DoFs to guarantee the foot trajectory tracking. Solving the forward dynamics and simulating the system over a time horizon would give the predicted robot state, but computing the boundary of the foot in Cartesian space in this way is too time consuming. Instead of directly finding the boundaries of the foot, we shrink the support polygon to obtain a boundary on the CoM. When the CoM approaches this boundary, the joystick stiffness increases to stop the joystick from moving further in that direction.
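The shrunken-polygon check can be sketched as follows; the stance-foot positions and the shrink factor are illustrative values, not the robot's.

```python
import numpy as np

def shrink_polygon(vertices, factor):
    """Shrink a convex polygon towards its centroid by `factor`."""
    c = vertices.mean(axis=0)
    return c + factor * (vertices - c)

def inside_convex_polygon(p, vertices):
    """Point-in-convex-polygon test via edge cross products
    (vertices ordered counter-clockwise)."""
    n = len(vertices)
    for i in range(n):
        a, b = vertices[i], vertices[(i + 1) % n]
        cross = (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0])
        if cross < 0:        # p lies to the right of edge a->b
            return False
    return True

# Hypothetical x/y positions of three stance feet plus one corner
# (square support polygon, counter-clockwise order).
feet = np.array([[0.3, 0.2], [-0.3, 0.2], [-0.3, -0.2], [0.3, -0.2]])
margin = shrink_polygon(feet, 0.8)   # keep a 20 % safety margin

print(inside_convex_polygon(np.array([0.0, 0.0]), margin))   # True
print(inside_convex_polygon(np.array([0.28, 0.0]), margin))  # False
```

The CoM position is tested against the shrunken polygon each cycle; a `False` result triggers the joystick stiffening described above.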
III-B Boundaries for feasible acting forces
Since the joystick sends position commands to the impedance controller, the impedance controller generates acting forces based on the position errors. We can therefore restrict the workspace of the joystick to saturate the acting force. First, we need to compute the set of forces at the end-effector that respect the joint torque limits and the contact force constraints; such a set has been used to enhance motion planning for robot manipulators [ferrolho2019comparing]. The operational-space impedance gains from (11) are used to transform the set of feasible end-effector forces into feasible displacements, which are subsequently mapped to feasible joystick commands. The joystick runs a variable-impedance controller that sharply increases the stiffness when the operator moves the joystick towards an infeasible position.
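For a diagonal stiffness matrix, the quasi-static mapping from a force limit to a displacement limit along a pushing direction reduces to a single division; the gains, direction and force limit below are illustrative.

```python
import numpy as np

def max_displacement(K, u, f_max):
    """Largest displacement d along unit direction u such that the
    quasi-static impedance force K @ (d*u), projected on u, stays
    below f_max (velocity and inertia terms neglected)."""
    return f_max / float(u @ K @ u)

K = np.diag([500.0, 500.0, 800.0])   # hypothetical stiffness [N/m]
u = np.array([0.0, 0.0, 1.0])        # pushing straight along z
print(max_displacement(K, u, 40.0))  # 0.05 -> 5 cm commanded range
```

Applying this per polytope face yields the displacement boundary that is finally scaled onto the joystick workspace.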
To compute the set of feasible forces at the end-effector, we extend the computational framework from [orsolino2018application], which computes the set of feasible wrenches applied at the torso of the robot. As in that framework, we note the structure of the equations of motion (1):
where we have split the inertial forces, the joint torques, and the contact forces into parts associated with the base and the separate legs and feet. Leg number 0 performs the manipulation.
The bottom three rows provide a constraint between joint torques and contact forces for each leg:
These equality constraints map the hypercube of allowed joint torques to a polytope of contact forces. For the stance legs these polytopes are intersected with the friction cone. Any constraint on the contact force during manipulation, for instance when interacting with fragile environments, can be added at this stage as well. This results in a polytope of feasible contact forces for each separate leg.
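Sign conventions aside, for a 3-DoF leg with a point contact the static relation between joint torques and contact force is linear, so the vertices of the torque hypercube map directly to the vertices of the contact-force polytope. The sketch below uses an identity Jacobian purely for illustration.

```python
import numpy as np
from itertools import product

def force_polytope_vertices(J, tau_max):
    """Map the box of joint-torque limits |tau_i| <= tau_max_i
    through the static relation tau = J^T f (3-DoF leg, point
    contact) to the vertices of the contact-force polytope.
    A linear map sends box vertices to polytope vertices."""
    JT_inv = np.linalg.inv(J.T)
    verts = [JT_inv @ (np.array(s) * tau_max)
             for s in product([-1.0, 1.0], repeat=len(tau_max))]
    return np.array(verts)

J = np.eye(3)                          # toy leg Jacobian
tau_max = np.array([40.0, 40.0, 40.0]) # symmetric torque limits [Nm]
V = force_polytope_vertices(J, tau_max)
print(len(V), V.max())                 # 8 vertices, max component 40.0
```

Intersecting this polytope with the friction cone (for stance legs) would then shave off the vertices that violate the contact constraints.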
Finally the top row of (17) combines these polytopes with the dynamics of the robot torso, to form the feasible force polytope for the manipulating leg:
where a sum of polytopes is taken to be their Minkowski sum. The polytope operations are handled using the CDDLib library. Unfortunately, the various transformations require multiple conversions between vertex and halfspace representations, making the computation of the feasible force polytope expensive. As a result, the polytopes are computed offline, with a simple look-up used for online access.
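For convex polygons, a Minkowski sum can be sketched without CDDLib by summing the vertices pairwise and taking the convex hull of the result (a valid construction for convex sets, though far less efficient than dedicated polytope libraries).

```python
import numpy as np

def convex_hull(points):
    """Andrew's monotone chain convex hull for 2-D points (CCW)."""
    pts = sorted(map(tuple, points))
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return np.array(half(pts) + half(pts[::-1]))

def minkowski_sum(P, Q):
    """Minkowski sum of two convex polygons: the convex hull of
    all pairwise vertex sums."""
    return convex_hull(np.array([p + q for p in P for q in Q]))

sq = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])  # unit square
S = minkowski_sum(sq, sq)   # sum of a square with itself
print(len(S), S.max())      # 4 vertices, side length doubled to 2.0
```

The quadratic growth in candidate vertices is one reason the full feasible force polytope is precomputed offline in our pipeline.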
Figure 2 shows the feasible force polytope for a configuration in simulation. The polytope has 37 planar faces; each face is defined in the world frame by its normal vector and the maximum feasible force along that direction. In this configuration the robot can exert little force upwards, as larger forces would shift the centre of pressure beyond the support polygon. In theory, we need to project the contact force onto those 37 directions and compare each projection with the maximum feasible force along that direction; if a projection exceeds the maximum feasible force, the contact force will break the physical constraints. In practice, we shrink the polytope corresponding to the current configuration to obtain the joystick boundary. Instead of checking whether the detected contact force triggers the boundary, we compare the predicted force generated by the impedance controller against the boundary. When any direction is close to the boundary, we generate a force that prevents the joystick from entering the area and informs the operator.
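In halfspace form, the feasibility test is just a set of dot products against the face normals; the two faces and limits below are hypothetical examples, not the 37-face polytope of Fig. 2.

```python
import numpy as np

def feasible(f, normals, f_max):
    """Check a contact force f against a force polytope in halfspace
    form: n_i . f <= f_max_i must hold for every face i."""
    return bool(np.all(normals @ f <= f_max))

# Two hypothetical faces: an upward limit and a forward limit.
normals = np.array([[0.0, 0.0, 1.0],
                    [1.0, 0.0, 0.0]])
f_max = np.array([30.0, 50.0])   # max feasible force per face [N]

print(feasible(np.array([10.0, 0.0, 20.0]), normals, f_max))  # True
print(feasible(np.array([10.0, 0.0, 31.0]), normals, f_max))  # False
```

Running this check on the impedance-predicted force, rather than the measured one, lets the joystick boundary act before the constraint is actually violated.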
The proposed algorithms are implemented on the quadruped robot ANYmal (see http://www.anybotics.com/) and a Sigma.7 haptic joystick (see http://www.forcedimension.com/). In practical manipulation cases the yaw of the foot about the central axis of the shank has little to no effect on the task, so we chose not to control it. The remaining 5 dimensions are controlled by the foot impedance controller as shown in Fig. 1. Not controlling the yaw gives the torso controller more authority in the null-space; otherwise the torso is quite sensitive to disturbances and non-smooth motion commands. The experiment videos can be found at: https://vimeo.com/360099917.
IV-A Haptic joystick setup
The Sigma.7 haptic device shown in Fig. 1 is connected to the quadruped over WiFi through a custom ROS node running on a separate computer from the ANYmal, under Ubuntu 18.04 LTS. The commercial version from Force Dimension comes with an SDK that gives easy access to the pose, twist and wrench of the device end-effector, and also allows a desired wrench to be applied. In this work, the device pose is read and added to the current quadruped end-effector pose to generate the target pose. As the workspace of the haptic device is significantly smaller than that of ANYmal, the Sigma.7 gripper is used to reset the end-effector command. To reduce the mental load on the operator and increase the stability of the quadruped, the authors implemented the following measures:
The haptic device stiffens when the quadruped approaches its feasibility limits, as described in Section III.
As the quadruped might experience high contact forces, the reported contact forces are only applied when the operator presses the device gripper. When the gripper is open, the Sigma.7 device returns to its initial pose in the centre of its workspace.
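The boundary-dependent stiffening of the haptic device can be sketched as a simple ramp; the gains and ramp width are illustrative, not the values used on the Sigma.7.

```python
def joystick_stiffness(dist_to_boundary, k_base=10.0, k_wall=500.0,
                       ramp=0.02):
    """Variable joystick stiffness [N/m]: low inside the feasible
    region, ramping up linearly to a wall-like stiffness within
    `ramp` metres of the feasibility boundary."""
    if dist_to_boundary >= ramp:
        return k_base
    alpha = 1.0 - max(dist_to_boundary, 0.0) / ramp
    return k_base + alpha * (k_wall - k_base)

print(joystick_stiffness(0.05))  # 10.0  -> free motion far from limits
print(joystick_stiffness(0.0))   # 500.0 -> stiff wall at the boundary
```

The ramp, rather than a hard switch, is what gives the operator an early, gradually increasing warning before the command becomes infeasible.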
IV-B Accuracy verification of estimated contact force
The Cartesian impedance controller can estimate the contact force as well as the contact wrench. Usually we do not need wrench feedback, since the foot is a small sphere designed as a point foot. The first experiment verifies how accurate the force estimation based on the impedance controller is. We teleoperated the robot to press the e-stop button of a crane, as shown in Fig. 3. The force sensor inside the foot was enabled to record the sensed contact force for comparison with the force estimated by the impedance controller using Eq. (13). However, the acceleration and velocity signals are noisy, and if the estimated force contains high-frequency components it causes oscillation on the joystick when the feedback forces are rendered. In the end, we used only the position error to estimate the contact forces.
The recorded data is shown in Fig. 4. It should be noted that we set a threshold to filter the estimated force, intending to reject non-contact errors while the foot swings in the air. This threshold explains the spikes in the estimated force, and also why the estimated force stays below the sensed force in this experiment. The force drop visible in Fig. 4 is due to the button being fully pressed: once the button reaches the bottom, the contact force increases until the operator feels that the button is completely pressed down.
Based on Fig. 4, the estimated force is accurate enough to be used as haptic feedback for teleoperation. Additionally, compared to the noisy sensed forces (the sensed signal shown in Fig. 4 has already been filtered by a second-order low-pass filter), the estimated force is easier to project onto the joystick.
IV-C Pipe manipulation
In this experiment we showcase the robot inserting its foot into a pipe via teleoperation, a scenario relevant to industrial use and inspection tasks. The aim is to show that the whole-body controller can control foot posture, since inserting the foot into the pipe demands accurately aligning the foot orientation with the pipe. Fig. 5 shows snapshots of the process; the whole shank ends up inside the pipe. Moreover, the entrance of the pipe is quite high for ANYmal, and the second joint had to rotate through a large angle to reach it. In doing so, the leg passed through a singular configuration, as shown in the second snapshot, and moved fast when overcoming the singularity (as can be seen in the accompanying video). This did not cause major problems, which demonstrates the robustness of our controller.
IV-D Exploring a surface
In real-world deployments, visual perception of solid surfaces may be obscured by liquid, smoke, grass, etc., while the robot's locomotion planner needs an accurate elevation map to plan foot placement. In this experiment the operator teleoperates the robot to explore a surface covered by light packing peanuts. We place the robot on a 20 cm high stage, as shown in Fig. 6, so that the robot needs to stretch its leg to reach the bottom. Benefiting from the hierarchical whole-body controller, the robot automatically lowers its torso to extend the reachability of the leg. The operator then explores the surface by sliding the foot back and forth. By recording the foot trajectory while sliding on the surface, we can fit a geometric plane to update the elevation map.
IV-E Feasible force polytope boundary
In this experiment we verify that setting a boundary on the joystick to restrict the acting force is necessary. The robot was operated to push against a scaffolding structure, as shown in Fig. 7. The feasible force polytope is calculated from the measured configuration while the robot is pushing, as shown in Fig. 8; the polytope for this configuration has 33 faces, and the smallest maximum feasible force occurs along one critical direction. To verify the correctness of the polytope, the operator kept pushing upward and forward in order to trigger the limit, with no boundary set on the joystick. The recorded contact forces are shown in Fig. 9. The right hind leg slipped at the moment the recorded forces jump up in Fig. 9; the data recorded after that point is irrelevant because the system crashed after detecting the large slip. We use the recorded forces to compute the projection along each face direction of the polytope: the projection of the contact force along the critical direction reaches 30.69 at the moment of slipping, slightly greater than the model-predicted maximum feasible value of 30.68 in that direction. This implies that the computed polytope is close to the real limit, considering model error and force estimation error. Subsequently, we repeated the experiment with the boundary enabled on the joystick, shrinking the boundary to 80% for safety. The oscillation of the contact force shown in Fig. 10 occurred when the boundary was triggered; the increased stiffness of the joystick prevented the operator from sending infeasible commands.
A hierarchical whole-body controller has been successfully applied to foot posture teleoperation of the quadruped robot ANYmal. The output torques of the whole-body controller consist of two orthogonal components: 1) motion torques, which are dedicated to executing mission commands without considering physical constraints, and 2) constraint torques, which only satisfy the physical constraints without generating any movement. The motion torques are computed by two analytical Cartesian impedance controllers decoupled by a dynamically consistent null-space projector. Because motion control is handled analytically, only one QP optimization needs to be solved to cope with the inequality constraints. The two impedance controllers for foot posture control and torso control potentially overlap, but the null-space projector imposes a strict priority, which benefits the reachability of the foot. The foot impedance controller is also used to estimate contact forces when the manipulation foot is in contact with the environment, and the experimental results show the estimation is accurate enough for haptic teleoperation. To avoid falling over due to infeasible commands sent by the operator, the CoM and the acting forces are effectively bounded by limiting the joystick movement. These boundaries not only protect the robot, they also guide the operator to move the joystick in feasible directions. Future work will aim to provide predictive online computation of the CoM boundary.
This work has been supported by the following grants: EPSRC UK RAI Hub NCNR (EPR02572X/1) and ORCA (EP/R026173/1), THING project in the EU Horizon 2020 (ICT-2017-1), and by grant EP/L016834/1 for the University of Edinburgh RAS CDT from EPSRC.