Control of a Hexapod Robot Considering Terrain Interaction

12/19/2021
by Marco Zangrandi, et al.
Bio-inspired walking hexapod robots are a relatively young branch of robotics, both in terms of state of the art and of applications. Despite the high degree of flexibility and adaptability derived from their redundant design, the research field that complements their abilities is still lacking. In this paper a state-of-the-art control architecture specific to hexapod robots is proposed, allowing full control over robot speed, body orientation and the walking gait to employ. Furthermore, terrain interaction is investigated in depth, leading to the development of a terrain-adapting control algorithm that allows the robot to react swiftly to terrain shape and asperities such as non-linearities and discontinuities within the workspace. A dynamic model is presented, derived from an interpretation of the hexapod movement analogous to that of base-platform PKM machines, and validated through Matlab SimMechanics™ physics simulation. A feedback control system able to recognize leg–terrain contact and react accordingly to ensure movement stability is then developed. Finally, results from an experimental campaign based on the PhantomX AX Metal Hexapod Mark II robotic platform by Trossen Robotics™ are reported.

I Introduction

Bio-inspired robotics is a fairly young and still developing branch of modern robotics. Control architectures derived from bio-inspired designs are able to perform complex tasks such as walking, swimming, crawling, jumping or even flying [14] thanks to specific forms of movement, known as gaits, which can at times even surpass conventional engineering solutions in efficiency.

For walking robots, the first conventional-design competitor will always be the wheeled robot, widely spread in real applications ranging from planetary exploration rovers and military drones to commercial vehicles for human transportation and materials handling. However, walking robots can have an edge on rough, hostile terrain, where wheels are limited by the need to maintain static friction with the ground at all times and cannot match the manoeuvrability of their leg-equipped counterparts [31], which instead display greater stability and body control over a wider variety of terrains [7]. For this very reason, most applications for legged robots involve moving through rubble and obstacle-filled environments, such as disaster rescue [10] or maintenance and repair in hard-to-traverse mechanical environments [4].

The reason why hexapod solutions are so common among legged rovers is that six legs is the optimal number to obtain a fair set of statically stable walking gaits [34] (quadruped robots get only one, bipedal robots none), providing a good variety of movement options and speeds while allowing the robot to cycle through gaits freely to better adapt to the user's needs. Moreover, the fact that at least three opposed legs support the body at all times ensures that, under normal operating conditions, statically unstable poses are never reached.

The robotic platform selected for this research project is the PhantomX AX Metal Hexapod Mark II from Trossen Robotics [3], shown in Figure 1. It mounts 18 Dynamixel AX-12A smart servomotors [1] that are able to provide feedback on position and load estimates, and that mount large RAM banks to store information, allowing them to be controlled with much more precision than conventional servos.


Fig. 1: PhantomX AX Metal Hexapod Mark II [3]

Modelling the mechanics of a walking robot is a complex task [11, 16, 18]. First, since there is no rigid connection between leg and ground, either an RRR joint is assumed and friction modelling is neglected entirely [19, 24], or a virtual spring–damper physics engine is written to simulate the interaction between legs and ground [15, 32]. In both cases, body-push mechanics and relative displacements are hard to describe in a simple formal way, and either a very complicated model is developed for a narrow range of applications [8] or a third-party physics engine is integrated to aid the model-based control logic [25].

Furthermore, small body-relative displacements and poses are almost never considered in full, with preference given to macro-movements fed to trajectory-finding algorithms and obstacle avoidance systems. Such is the case in [18], where a very complex model is developed to predict movement in a 3D environment, and in [35], where trajectory definition and hexapod obstacle-avoidance abilities are investigated in depth.

I-a Problem Assessment and Objectives

The objective of the proposed work is the development of a model that allows us to compensate for macro terrain deformations and interfaces, to crawl through rough terrain sections and, in general, to perform these tasks automatically with minimal information about the terrain environment and without user aid.

‘NUKE’ [2] is the standard control software for any walking robot that uses AX-series or MX-series servos as main actuators and aims to implement a fast inverse kinematics engine. By asking the user to provide the robot dimensions and to ‘capture’ a ‘neutral’ position of the robot, the program generates custom locomotion software.

NUKE generates a kinematic model that can automatically handle gait generation, robot body pose control and motion trajectories. Once the algorithm is compiled and uploaded to the control board, the robot can be commanded by user input, and NUKE automatically handles the walking motion cycle and servomotor control to react to the user instructions in real time.

While NUKE allows for fast, real-time computation of an inverse kinematics engine, it suffers from the main limitation most multi-legged robot control software carries: while perfectly able to move the robot smoothly and adjust the stride correctly through a gait engine, such software is generally unable to interact with anything other than a flat, continuous, obstacle-free terrain [30].

The objective of this paper is to extend the advantages of typical hexapod robot control software to non-flat, non-continuous, irregular terrain. In particular, the underlying objective is to develop a kinematic model able to account for terrain shape and location, and to extend the analysis to how the robot physics interact with the presence of the ground.

Finalizing a closed-loop control architecture is also essential to ensure robust movement, especially under circumstances where gait stability is not guaranteed. Sloppy movement due to feed-forward control errors and the non-ideal behaviour of the physical system can in fact destabilize the final robot pose.
In most application scenarios, model-based control can be either greatly simplified or skipped entirely by accurately placing pressure sensors on the feet [21, 22] and analysing pressure variation data to identify terrain contact [27]. Furthermore, it was shown in [12] how, in some scenarios, position feedback alone can be sufficient to accurately identify terrain presence and maintain horizontal positioning at all times.
However, since our objectives aspire to greater controllability over the robot pose and to reaching movement stability autonomously under a number of tasks and situations, a full, non-reduced dynamic model will be employed. This means that the control software will be able to analyse full-state feedback for robot torque and use it to give the robot a perception of its surroundings.

Ii Locomotion Control

Locomotion control of the hexapod crawler robot is mainly accomplished through direct command of its leg endpoints. Leg endpoints are defined as the 3-D coordinate points located at the extremities of each leg, as shown in Figure 2.
Since each leg is a 3-DOF system leading to its endpoint, full control of a leg can be attained by imposing its endpoint position and employing an inverse kinematics engine to calculate the associated servomotor angles.


Fig. 2: Robot endpoints in neutral position

The advantage of such a control architecture is the possibility of collapsing the description of each leg configuration to a single point position in space, leading to much simpler and clearer handling of the robot movement.

The position the robot assumes at deployment is called the neutral position, and the associated endpoint coordinates are hardcoded into the robot controller software. This position represents the neutral state the kinematic engine uses as reference to build its walking gait.

Ii-a Endpoints Handling


Fig. 3: The body, leg and global coordinate systems

The coordinate systems employed in the kinematic analysis of the robot are presented in Figure 3. The global coordinate system is fixed at terrain level at the (0,0,0) coordinates and is used to account for the robot position with respect to the terrain and the environment. Therefore, for the first iteration, the transformation matrix from the global to the body coordinate system is as shown in (1).

(1)

Where ‘SP’ stands for the starting position. Note that the corresponding entry should be the initial robot height and must be consistent with the neutral-position endpoint coordinates.
The body coordinate system and the leg coordinate systems instead move alongside the robot body and are used both to describe the robot orientation and to solve the inverse kinematics for endpoint position and servomotor angles.

The transformation matrix that binds the leg c.s. to the body c.s. is defined in (2).

(2)

Where in (2) the translational terms are the joint position in the body c.s., from which it follows that the matrix is unique for each leg, as each leg has a different joint position.

Once the desired endpoint position has been defined, the servomotor angles can be assessed. The convention used in the kinematic analysis is shown in Figure 4, where one angle represents the servomotor orientation and the remaining three the coxa, femur and tibia servomotor angles respectively.
By expressing the desired endpoint position in the leg coordinate system, possibly through (2), the servomotor angles can be calculated through (3)–(7).

Fig. 4: Definition of the coxa, femur and tibia servomotor angles
(3)
(4)
(5)
(6)
(7)

Where in (3)–(7) the endpoint coordinates are those of the desired leg endpoint expressed in the leg coordinate system, and the three lengths are those of the coxa, femur and tibia respectively. Endpoint positions are defined in the body coordinate system, and therefore their meaning derives entirely from the position and orientation of the robot body. This means that every movement the robot body is instructed to perform can be immediately transposed into a relative displacement of the endpoints; the complete kinematic model is built on that assumption.
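
Since the explicit forms of (3)–(7) are not reproduced here, the following sketch shows how such an inverse kinematics step is commonly computed for a coxa–femur–tibia chain; the function and parameter names (leg_ik, L_coxa, L_femur, L_tibia) and the joint-angle conventions are illustrative assumptions, not the paper's own notation.

import numpy as np

def leg_ik(p, L_coxa, L_femur, L_tibia):
    # Inverse kinematics sketch for one 3-DOF leg: p = (x, y, z) is the desired
    # endpoint position in the leg coordinate system (origin assumed at the coxa
    # joint, femur joint offset horizontally by L_coxa).
    x, y, z = p
    theta_coxa = np.arctan2(y, x)                  # yaw of the leg plane
    r = np.hypot(x, y) - L_coxa                    # horizontal reach beyond the coxa
    d = np.hypot(r, z)                             # femur joint to endpoint distance
    # Law of cosines for the tibia (knee) joint, measured from the straight leg.
    cos_knee = (L_femur**2 + L_tibia**2 - d**2) / (2 * L_femur * L_tibia)
    theta_tibia = np.pi - np.arccos(np.clip(cos_knee, -1.0, 1.0))
    # Femur elevation: endpoint direction plus the interior angle of the triangle.
    cos_beta = (L_femur**2 + d**2 - L_tibia**2) / (2 * L_femur * d)
    theta_femur = np.arctan2(z, r) + np.arccos(np.clip(cos_beta, -1.0, 1.0))
    return theta_coxa, theta_femur, theta_tibia

Feeding a body-frame endpoint through a transform such as (2) before calling a routine of this kind completes the endpoint-to-servo-angle pipeline described above.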

A gait engine is a subroutine of the locomotion algorithm that handles leg synchronization: in general, any pedal locomotion assigns each leg either a pushing or a swinging state in a fixed order, so as to allow a repetitive, continuous movement. The gait engine is responsible for assigning the swing and push roles as well as the direction and amount of space to cover at each iteration tick. The specific gait engine employed is not fundamental to the proposed formulation and will not be discussed, being already available in the literature (the gait engine employed in this treatment is the one provided by NUKE). Therefore, from the user input composed of x-speed, y-speed and z-axis rotation speed, the gait engine provides the movement data the robot is supposed to follow; the kinematic problem is to find the related endpoint displacements.

To solve the kinematic problem, it is imperative to describe the robot position and orientation at each iteration step by taking into account all movement data provided by the gait engine and by direct user command (the user is assumed to be able to bypass the gait engine instructions and apply direct control of the robot pose at each iteration step).
This is done by building the transformation matrix at each iteration step, as shown in (8).

(8)

Where in (8):

  • is the translational transformation matrix holding data about the movement instructions coming from the gait engine. It is built as shown in (9) and reconstructs the robot position as if it were just under the influence of the gait engine alone.

    (9)

    In which is the transformation matrix representing the step movement due to the gait engine instructions. It is made of two contributions as shown in (10).

    (10)

    In which is the component related to the translational movement and the one related to z-axis rotation. Those are defined in (11) and (12) respectively.

    (11)
    (12)

    Where in (11)–(12) , and are the gait engine movement instructions for the step x-axis displacement, y-axis displacement and z-axis rotation respectively.

  • is the transformation matrix holding the user-imposed translational movement data of the robot body, defined as in (13).

    (13)
  • , and are the transformation matrices holding user-imposed orientation of the robot body, being the z-axis rotational matrix, y-axis rotational matrix and x-axis rotational matrix respectively.

  • is the terrain-compensated reorientation matrix addressing additional body displacement due to terrain shape. It is dependent on the position of the robot body within the terrain environment and can either be given by user input or computed in real time on the basis of the terrain shape in the robot surroundings. Our objective is to assemble this matrix automatically by employing a terrain-tuning algorithm that takes as its only input the terrain elevation function, which can again either be given as user input (assuming perfect knowledge of the terrain shape) or constructed by an estimation architecture.

    For completely flat terrains, this matrix is the identity.

Therefore, the full movement the robot body undergoes at the iteration step, comprising all contributions, is represented by the transformation matrix calculated as in (14).

(14)
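
As a reading aid for (8)–(14), the sketch below composes the per-step body transform from elementary homogeneous matrices; because the original symbols were lost in extraction, the names (T_gait, user_shift, user_rpy, T_terrain) and the multiplication order are assumptions that simply mirror the textual description of the contributions (gait translation and z-rotation, user-imposed translation and orientation, terrain compensation).

import numpy as np

def trans(dx, dy, dz):
    T = np.eye(4); T[:3, 3] = (dx, dy, dz); return T

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    T = np.eye(4); T[1, 1] = c; T[1, 2] = -s; T[2, 1] = s; T[2, 2] = c; return T

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    T = np.eye(4); T[0, 0] = c; T[0, 2] = s; T[2, 0] = -s; T[2, 2] = c; return T

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    T = np.eye(4); T[0, 0] = c; T[0, 1] = -s; T[1, 0] = s; T[1, 1] = c; return T

def step_transform(gait, user_shift, user_rpy, T_terrain):
    # gait: per-step x/y displacement and z rotation coming from the gait engine.
    T_gait = trans(gait["dx"], gait["dy"], 0.0) @ rot_z(gait["dpsi"])
    T_user = trans(*user_shift)                                            # user-imposed body translation
    R_user = rot_z(user_rpy[2]) @ rot_y(user_rpy[1]) @ rot_x(user_rpy[0])  # user-imposed orientation
    return T_gait @ T_user @ R_user @ T_terrain                            # terrain compensation last

Chaining a transform of this kind onto the previous global-to-body matrix yields the updated pose used in (15) and in the following relations.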

The pushing legs' endpoints are fixed to the terrain by friction, therefore their global position should not change between iterations. This means that, if the body moves as described by (14), the relative position of the endpoints must change as in (15),

(15)

thus solving the kinematic problem for the pushing legs' endpoints. Note that, since (15) is built recursively, it needs starting values: these are the neutral-position endpoints, which is why their coordinates are hardcoded into the control software.
The swinging legs' endpoint positions are defined as in (16).

(16)

Where the coordinates involved are those of the neutral-position endpoint. The reason why (16) makes reference to the neutral-position endpoint coordinates is the core difference between (8) and (14): instead of building the endpoint transformation from the previous endpoint position, the gait engine now needs to displace the leg in such a way that the swinging stride motion is obtained. This means that the movement data involved are no longer increments of movement but effective displacements from the neutral position. The gait engine should in fact return these values when assessing swinging-state leg endpoints.
To account for the presence of the terrain and ensure that the leg always remains above the terrain height while swinging, the correction shown in (17) should be applied.

(17)

Where the function involved is the elevation function of the terrain.
Once the global coordinates of the swinging legs' endpoints are found, the body-centred coordinates are obtained as in (18),

(18)

completing the assessment of the kinematic problem.
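
A simplified reading of the endpoint updates (15)–(18) is sketched below: pushing legs keep their ground position, so their body-frame coordinates move by the inverse of the body step, while swinging legs are displaced from the neutral position and clamped above the terrain elevation. The frame bookkeeping and the clearance value are assumptions of this illustration.

import numpy as np

def update_push_endpoint(p_body_prev, T_step):
    # Pushing leg: the foot stays fixed on the ground, so its body-frame position
    # is the previous one moved by the inverse of the body step transform.
    p = np.append(np.asarray(p_body_prev, float), 1.0)
    return (np.linalg.inv(T_step) @ p)[:3]

def swing_endpoint(p_neutral_body, stride, T_global_body, elevation, clearance=0.03):
    # Swinging leg: displace the neutral endpoint by the gait stride, keep it above
    # the terrain elevation function by a safety clearance, then return it in body
    # coordinates for the inverse kinematics engine.
    p = np.append(np.asarray(p_neutral_body, float) + np.asarray(stride, float), 1.0)
    p_global = T_global_body @ p
    p_global[2] = max(p_global[2], elevation(p_global[0], p_global[1]) + clearance)
    return (np.linalg.inv(T_global_body) @ p_global)[:3]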

Ii-B Terrain Compensation Algorithm

The objective of this section is to develop a way to orient the robot body pose so that, while the robot moves freely on the ground, its body remains ‘isolated’ with respect to the ground itself.
Assuming perfect knowledge of the terrain geometry, in terms of the position of angular and discontinuity interfaces, would be neither realistic nor aligned with our objective of developing an adaptive algorithm. Our purpose is the design of an algorithm able to deal with any terrain relying only on the elevation function (measurable by on-board sensors mounted on the robot itself, though not the purpose of this work [6, 33]).

First of all, it is important to define unequivocally what the ‘body isolation’ condition is. This can be achieved by defining a set of points on the robot body and then considering the height of these points with respect to the ground as a way to evaluate the body position relative to the terrain. A possible approach is to ask these points to maintain a distance to the ground as close as possible to the one defined in (1). If these points of interest are correctly chosen to represent the vertices of the robot body, the entire base should follow the terrain profile and prevent unwanted situations like the ones discussed beforehand.
With the proposed algorithm, the robot body positions itself in such a way that it complies with the terrain geometry, no matter the harshness.
In the algorithm presented, six points are selected at the locations of the robot shoulders, as shown in Figure 5.


Fig. 5: Interest points positioned with robot shoulder joints

The main assumption of the compensation model is that the robot is able to position itself in the optimal pose by moving from its horizontal pose with three degrees of freedom only: a first displacement in the z-direction, followed by a rotation around its body c.s. y-axis and a final relative rotation around its body c.s. x-axis. In terms of transformation matrices this is described by (19).

(19)

Where:

  • is the z-axis rigid translation matrix defined as in (20).

    (20)
  • and are the rotation matrices defined as in (21) and (22).

    (21)
    (22)

    Where in (21)–(22) is the angular rotation around the body c.s. y-axis and is the angular rotation around the body c.s. x-axis.

From the global coordinate system the joint positions after the terrain reorientation are (23).

(23)

Since we want these points to keep relative heights as close as possible to the nominal value, we can build an algorithm that minimizes the total quadratic error on the relative heights. This is accomplished by accounting for each point, as shown in (24) with reference to (25).

(24)
(25)

The problem is a multi-variable optimization problem with equality constraints coming from trigonometric consistency, solvable through the method of Lagrangian multipliers [28]. The Lagrangian function reported in (26) is obtained by adjoining the equality constraints (25) to the cost function (24).

(26)

The necessary conditions for the minimum [28] require the gradient of the Lagrangian to vanish.
The gradient of the Lagrangian function is calculated with the partial derivatives shown in (27)–(33).

(27)
(28)
(29)
(30)
(31)
(32)
(33)

To find the solution of this multi-variable optimization problem, numerical methods need to be employed. Since the cost function is quadratic, the steepest descent algorithm [26] can be employed effectively, finding the solution very quickly and with relatively low computational cost.

Having already defined the gradient of the Lagrangian, the steepest descent algorithm is implemented as in (34).

(34)

The stopping condition is based on a tolerance threshold. In (34), the weight parameter represents the adjustment applied to the increment towards the next-iteration solution; generally a small weight is enough to obtain convergent solutions in a few iterations in most general applications.
This method assures good performance in most terrain situations, experiencing non-convergence problems only in extreme scenarios. As shown in Figure 6, with a reasonable weight and tolerance the solution is generally found very quickly.
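
To make the procedure concrete, the sketch below minimizes the same quadratic height error over the three pose parameters (z displacement, y rotation, x rotation) by steepest descent; unlike the paper's formulation it does not adjoin trigonometric consistency constraints with Lagrange multipliers but works directly on the pose parameters, the gradient is estimated numerically, and the helpers trans, rot_y and rot_x are those of the earlier transform sketch. All names and default values are assumptions.

import numpy as np

def isolation_pose(points_body, T_global_body, elevation, h0,
                   gamma=0.1, tol=1e-4, max_iter=200, fd_step=1e-5):
    # Find (dz, theta_y, theta_x) bringing the height of the shoulder points over
    # the terrain as close as possible to the nominal height h0 (quadratic cost).
    def cost(x):
        dz, ty, tx = x
        T = trans(0.0, 0.0, dz) @ rot_y(ty) @ rot_x(tx)   # pose correction, cf. (19)-(22)
        err = 0.0
        for p in points_body:
            pg = T_global_body @ T @ np.append(np.asarray(p, float), 1.0)
            err += (pg[2] - elevation(pg[0], pg[1]) - h0) ** 2
        return err

    x = np.zeros(3)
    for _ in range(max_iter):
        grad = np.array([(cost(x + fd_step * e) - cost(x - fd_step * e)) / (2 * fd_step)
                         for e in np.eye(3)])              # central-difference gradient
        if np.linalg.norm(grad) < tol:                     # stopping condition on the gradient
            break
        x = x - gamma * grad                               # steepest descent step, cf. (34)
    return x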


Fig. 6: Gaussian distribution of the iterations needed to reach convergence for a typical walking task on the terrain

Since the problem was defined in general terms, the algorithm is capable of running even in situations where rough interfaces are present, moving the robot in such a way as to allow a smooth transition across angular interfaces and navigation over non-continuous terrain. In the scenario of a steep ramp, given a fully horizontal speed input, the robot is able to smoothly go from being completely horizontal to adapting to the terrain inclination, as shown in Figures 7 and 8.


Fig. 7: Robot continuous adaptation to interface and ramp


Fig. 8: Angle variation of robot inclination during ramp interface. Note how it is a continuous distribution until it matches the ramp’s angulation.

Note that, since the algorithm relies on the elevation function only, realistic terrains defined through conventional discrete models [17] can be used without particular adjustments.
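
Since the compensation algorithm only queries an elevation function, a discrete digital terrain model [17] can be wrapped into such a function, for instance by bilinear interpolation as in the sketch below; the grid layout, the cell size parameter and the clamping at the borders are assumptions of this illustration.

import numpy as np

def make_elevation(heightmap, cell_size):
    # Wrap a 2-D array of heights sampled on a regular grid into a continuous
    # elevation function z = f(x, y) through bilinear interpolation.
    H = np.asarray(heightmap, dtype=float)
    ny, nx = H.shape

    def elevation(x, y):
        gx = np.clip(x / cell_size, 0.0, nx - 1.001)   # grid coordinates, clamped inside
        gy = np.clip(y / cell_size, 0.0, ny - 1.001)
        j, i = int(gx), int(gy)
        fx, fy = gx - j, gy - i
        return ((1 - fy) * ((1 - fx) * H[i, j] + fx * H[i, j + 1])
                + fy * ((1 - fx) * H[i + 1, j] + fx * H[i + 1, j + 1]))

    return elevation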

Iii Torque Estimation

Iii-a Dynamic Model

Smart servos generally employed in physical hexapods are able to provide feedback on both position and speed; however, these signals are almost completely insensitive to terrain interaction effects and, in particular, blind to sliding and instability issues. Moreover, they may not be affected by trips or grip losses: even in overstepping cases, the robot simply falls down in a rigid manner while reporting that it is working perfectly.
For this reason, another signal must be taken into account to detect terrain contact. Torque is highly dependent on which legs are supporting the body, and it reacts swiftly to terrain interaction. Servomotor torque can therefore become our way to make the robot inspect its surroundings; by compensating unstable-pose scenarios and repositioning the legs correctly, we can assure robust movement throughout the entire control operation.
However, to correctly interpret the torque values coming from the servos, a full dynamic model of the robot movement is needed, as presented in the following section.

The degrees of freedom of the robot body are defined as in (35).

(35)

Where the DOF follow the order set in (8).

The degrees of freedom of a single leg are defined as the state vector shown in (36), consistently with the previous definitions of the coxa, femur and tibia angles.

(36)

The state vector comprising all legs' degrees of freedom is defined as shown in (37).

(37)

The main problem in modelling the hexapod robot, and legged locomotion in general, is that lightweight dynamic algorithms like the Newton–Euler equations strictly require one and only one grounded joint at all times [13, 20, 9]. Legged locomotion usually does not fall under these requirements, and multiple ground constraints need to be addressed, which adds great complexity to the model.
To avoid this kind of problem, an alternate modelling route is possible: instead of considering the robot positionally constrained to the ground at the pushing legs' endpoints, we consider the robot as having no constraints at all, and adjoin the kinematic constraints to the Lagrangian dynamic equations.
Since the ground contact points act as hinges for the robot, the kinematic constraints are the nullity of these points' linear velocities.

The coxa, femur and tibia coordinate systems, with reference to Figure 9, are identified by the matrices (38), (39) and (40). These are used to find the relative positions of the coxa, femur and tibia joints or of the foot endpoint, e.g. the position of the femur joint in the coxa coordinate system.


Fig. 9: Coxa, femur and tibia coordinate systems
(38)
(39)
(40)

By defining as in (41) the transformation matrix that describes the effects of the body DOF on its pose, the axes of rotation of the coxa, femur and tibia joints can be found under a fixed coordinate system as shown in (42)–(44).

(41)
(42)
(43)
(44)


Fig. 10: Axes of rotations of actuated joints

Where the two axes involved are the (0,0,1,0) and (0,1,0,0) axes in homogeneous coordinates.

From this definition we can write the kinematic constraint for the grounded endpoints of the pushing legs as (45), from which the derivation of (46) is trivial; this shows the relation between the velocities of the actuators and those of the robot body DOF.

(45)
(46)

The definitions of the Jacobian constraint matrices are reported in (47) and (48).

(47)
(48)

In which one term is the matrix that transforms the time derivatives of the robot body DOF into the robot body twist (‘twist’ being the name given by [9] to the kinematic screw, i.e. the velocity vector field composed of the linear velocities and the angular velocities respectively), and the other is the skew-matrix representation of the position vector.

A single constraint equation for all degrees of freedom of the robot requires applying the definitions provided by (49) and (50), which result in the expression (51).

(49)
(50)
(51)

Where the boolean value accounts for whether the leg is in the pushing, constrained state (and therefore its endpoint is grounded) or in the swinging state.

Since the kinematic constraint equation (51) was written with reference to both sets of coordinates, the Lagrange equations consider the full state vector as in (52).

(52)

Given the kinematic constraint equation, the dynamic model is expressed as in (53).

(53)

Where one set of terms collects the contributions coming from the real actuated joints and the other those coming from the body DOF, while the remaining matrix is the left pseudoinverse of the constraint matrix.

By defining the inertia matrix as in (54) and the potential energies as in (55) [9], the two contributions can be calculated as in (56)–(57).

(54)
(55)
(56)
(57)

In which the operator involved is the Kronecker product, following the notation of [29]. Note that, since the actuated-joint coordinates are measurable and the body DOF can be estimated through (14), knowledge of the full state vector is assumed at all times.
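
For reference, the way the two torque contributions of (56)–(57) combine through the constraint matrices into the actuated-joint torques of (53) can be sketched as follows; the function name and argument layout are assumptions, while the expression itself mirrors the update used later in Algorithm 1.

import numpy as np

def actuator_torques(tau_a, tau_b, A_c, B_c):
    # tau_a: contribution of the real actuated joints; tau_b: contribution of the body DOF.
    # A_c, B_c: constraint matrices assembled from the grounded (pushing) legs only,
    # using the boolean flags of (49)-(50).
    J = -np.linalg.pinv(A_c) @ B_c          # velocity relation implied by the constraints, cf. (46)
    return np.asarray(tau_a, float) + J.T @ np.asarray(tau_b, float)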

Iii-B Dynamic Model Validation


Fig. 11: Graphical rendition of the plane-spheres interaction through the Simscape Multibody Contact Forces Library


Fig. 12: Comparison of the dynamic results of a simple walking task from the SimMechanics™ simulation and from the dynamic model. Graphs refer to the right-front leg's coxa, femur and tibia actuators respectively.

The dynamic model was developed under various simplifying assumptions, so its validity is subordinate to those assumptions being small enough to be neglected from a real-application perspective. Following other works that deal with crawler robot physics [5], the dynamic model validation is carried out through direct comparison of results with a Matlab SimMechanics™ simulation.
The SimMechanics™ simulation employs a physics engine and a full 3D model of the hexapod robot, fully simulating friction and terrain physics through the Simscape Multibody Contact Forces Library [23] (the foot–terrain interaction system is shown in Figure 11).

The comparisons shown in Figure 12 clearly show that the dynamic model closely matches the results coming from the SimMechanics™ physics engine: while there are differences in the torque spikes caused by the simulated friction model, the overall behaviour of the torque perceived by the physics engine closely matches the one generated by our dynamic algorithm. This not only means that our algorithm is accurate enough to provide good results, but also that the assumptions made while developing the dynamic model, such as modelling the grounded endpoints through kinematic constraints, the PKM interpretation of the hexapod movement and the simplification of the feet as single-point contacts between leg and terrain, are small enough not to cause mismodelling errors in the final calculations.

Iv Closed-Loop Control

A closed-loop control approach is essential to obtain stable, robust movement throughout the entirety of the movement tasks the robot will be subjected to. A purely feed-forward approach may lead, after a few steps, to positioning errors related mainly to robot–terrain interface mismatches between reality and expected values, which translate into grip losses and other unwanted behaviours in the presence of uneven terrain.

The two major situations to avoid are the case where the feed-forward locomotion control system expects the terrain to be higher than it is, and therefore leaves the leg hanging, and the case where it expects the terrain to be lower, and therefore pushes the body back in an overstepping action that destabilizes the entire robot pose. These situations are bound to happen due either to terrain mismodelling or to desynchronization between the expected behaviour from the simulation environment and the actual robot motion, which is mainly caused by the growing drift between the real and expected robot position as positioning errors accumulate from all the non-ideal behaviours and performances of its hardware and components. Note that both situations occur only in the leg-lowering part of the swing phase, meaning that a feedback control algorithm acting on actual ground touch only needs to be invoked during that movement section.

The feedback control system works by comparing the expected values of the actuator torques coming from the dynamic model with the actual torques sensed by the sensor system.
By reading the actuator positions and speeds and estimating the body movement from them, it is in fact possible at each iteration to build the estimated state vectors, from which the two torque components defined in (56)–(57) can be immediately calculated, being those their only actual dependencies.
Once these two values are stored in memory, the lowering legs' sudden terrain touch can be simulated in the dynamic model simply by acting on the values defined earlier in (49)–(50): the boolean represents the grounded state of the leg's endpoint, therefore by temporarily setting it to 1 and building the constraint matrices it is possible to obtain the hypothetical terrain-touch torques for all actuators through (53). This is an easy, relatively CPU-light way to obtain full references for the legs' terrain-touch situations.
If there is a sufficient correspondence between the expected torques and the sensed ones, the terrain is considered reached for that particular leg and its movement is stopped. This addresses the overstepping part of the problem, while it may still be possible for a leg to reach its desired end position without any terrain having been sensed.
To solve this ‘leg-hanging’ problem, it is only necessary to lower the expected altitude of the desired leg endpoint position (in the employed algorithm, the gait altitude instruction is lowered by 1 cm each time) and recompute the required servomotor angles. By lowering the legs until the terrain is touched and the previous condition is met, legs are effectively assured to reach the ground and stop right before moving on to the push phase.
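
The ‘sufficient correspondence’ test between the model-predicted terrain-touch torques and the sensed ones is left to the implementation; a minimal threshold-based sketch is given below, where the tolerance values are illustrative assumptions that would need tuning on the real platform.

import numpy as np

def correspondence(tau_expected, tau_sensed, rel_tol=0.2, abs_tol=0.05):
    # Per-actuator check with a mixed relative/absolute tolerance: returns True when
    # the sensed torques are close enough to the terrain-touch torques predicted by
    # the dynamic model.
    tau_expected = np.asarray(tau_expected, float)
    tau_sensed = np.asarray(tau_sensed, float)
    err = np.abs(tau_sensed - tau_expected)
    return bool(np.all(err <= rel_tol * np.abs(tau_expected) + abs_tol))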

Finally, since the servomotors' stopping condition is changed to sensed terrain touch, the lowering legs' endpoints will very probably differ from the ones expected by the locomotion system instructions. Therefore, the new positions need to be updated by employing forward kinematics tools, such as the coxa, femur and tibia transformation matrices derived through (38)–(40).
A scheme summarizing the leg-handling system is shown in Figure 13. An implementation example of the closed-loop control solution is presented in Algorithm 1.


Fig. 13: Scheme of the feed-back control algorithm interaction with push and swing phases
tau_a, tau_b = dynamic_model(X_a, X_b, DynValues, TrMatrices);
while lowering_legs is not empty do
      torques = read_motor_torques();                       // refresh the sensed torques each cycle
      for leg in lowering_legs do
            ip[leg] = 1;                                    // temporarily mark the endpoint as grounded
            A_c, B_c = constr_matr(TrMatrices, ip);
            tau = tau_a + (-pinv(A_c) * B_c).T * tau_b;     // expected terrain-touch torques, cf. (53)
            if correspondence(tau, torques) then
                  stop_motors(leg);
                  theta, phi, psi = read_motor_angles(leg);
                  endp_pos[leg] = FK(theta, phi, psi, leg); // update the endpoint via forward kinematics
                  remove leg from lowering_legs;
            else
                  ip[leg] = 0;                              // touch not confirmed: undo the hypothesis
                  if is_motors_stop(leg) then
                        endp_pos[leg].z = endp_pos[leg].z - 10;   // lower the target by 1 cm (mm units assumed)
                  end if
            end if
      end for
end while
Algorithm 1: Lowering-legs control algorithm pseudo-code

V Experimental Results

The developed motion control architecture has been experimentally verified through a series of testing scenarios. The experimental tests aim to stress two important aspects of the presented architecture, to highlight the improvements with respect to a stock controller, and to analyse the contribution of the feed-forward locomotion control and of the feed-back stabilization. In particular, the two scenarios reported are:

  • An obstacle-filled terrain: the robot is fed flat-terrain data and needs to adapt to the presence of terrain ‘invisible’ to the software. The task is considered completed if the robot is able to remain stable despite the risk of overstepping onto obstacles and tripping, completing the traversal correctly.

  • A ramp terrain scenario: the robot is fed the ramp terrain data while the body is asked to remain horizontal throughout the full movement. The task is considered completed if the robot is able to handle the angular interface, to climb the ramp effectively, and to maintain the body horizontal at all times. The difficulty comes from the fact that even a small deviation in trajectory direction, which is quite possible since no positional feedback is enforced on the robotic platform, can cause significant errors between the real and expected position of the terrain.

Fig. 14: The obstacle course setup dimension and position

V-a Obstacle Recognition

The feed-back control algorithm adds robustness to the robot's walking motion, as it allows the robot to sense and react even to unexpected obstacles in the workspace. This means that, when facing a mismodelled obstacle, like a section of the terrain wrongly modelled as flat but actually at a higher altitude, the feed-back logic is expected to compensate the feed-forward error and to update the gait in such a way as to maintain movement stability.

In order to reproduce this condition a solid object was placed within the walk path of the robot. The obstacle and robot setup schematics are shown in Figure 14.

As shown in Figure 17(a), while the stock controller puts the robot in an unstable stride, the novel closed-loop solution is indeed able to keep the body horizontal and, by not overstepping onto the solid object, all legs hold to the ground, assuring equilibrium (Figure 17(b)). These aspects are further highlighted when a payload is mounted on top of the robot, showing how stability of movement transfers to stability of the robot body. The robustness of the walking stride in fact made it possible to completely disregard the presence of the obstacle in the robot's path.

Fig. 15: The ramp climbing task setup dimensions, angulation and position


Fig. 16: Right-middle femur actuator angle comparison between operations. In blue, the operation employing the NUKE controller; in red, the one employing the feed-back control algorithm.

A comparison of the front-right and middle-right femur servo angle variation throughout the entire operation, between the NUKE and the feed-back control logics, is shown in Figure 16. It is evident how, thanks to the feed-back control algorithm, the obstacle was found by the legs in the middle of the stride cycle, which resulted in the respective femur actuator stopping in its tracks.
In Figure 16 it is also possible to observe the ‘blindness’ of the robot while it is moved by the stock controller: despite being in a completely unstable position, the angular feedback of the servomotors still reports no extraordinary values, and therefore the robot keeps attempting to walk as if no obstacle were in its path at all.

V-B Ramp Climbing

In the ramp climbing task, the robot is positioned right in front of the angular interface with the instruction of a constant forward movement. The setup scheme is presented in Figure 15. In order to request the robot body to remain horizontal, the terrain-compensation matrix is built as in (58) rather than (19).

(58)

The most challenging part of the task is the traversal of the interface between the flat plane and the ramp, as that is where desynchronization effects are most present.
This is clearly visible in Figure 18(a): when the control feedback is absent, legs are left hanging quite often, leading to movement unsteadiness and ultimately increased traversal difficulty. By introducing the novel control architecture, as shown in Figure 18(b), the robot is instead able to climb the ramp correctly after successfully traversing the interface, while maintaining a horizontal body pose. The task is performed with such gait steadiness that the robot is able to assure its stability even when carrying a payload.

(a) Stock NUKE controller instabilities

(b) Feed-back control logic aided walking
Fig. 17: Physical applications of obstacle recognition capability. On top are shown performances of NUKE controller, on bottom the closed-loop control logic is used instead.
(a) Feed-forward only controller stability problems

(b) Feed-back control logic performances
Fig. 18: Physical applications of the closed loop control architecture. On top the feed-back algorithm is absent, on bottom the full control logic is used instead.

Vi Conclusions

This paper presents a simple but effective kinematic model based on the manipulation of the hexapod leg endpoints; at any time it is possible to access the legs' positions in space and the robot pose through the use of transformation and pose matrices.
An autonomous terrain-adapting algorithm is developed, able to automatically tune the robot body pose so as to assure its isolation from rough terrain asperities regardless of the terrain type.
A full dynamic model comprising ground-interaction modelling is presented, suitable for real-time implementation while providing accurate estimation of the servomotor torques.
Finally, a terrain-sensing algorithm is presented, able to correct instability situations coming from the hardware's non-ideal behaviour, as well as to guarantee leg–ground contact, based on a feedback architecture that uses the torque estimated by the model and the torque provided by the servomotors as its variables.
The novel control architecture, composed of the terrain-adapting, torque estimation and terrain-sensing algorithms, is evaluated in terms of applicability and performance by means of experimental tests conducted on the PhantomX AX Metal Hexapod Mark II robotic platform, and the results are reported.

References

  • [1] (Website).
  • [2] (Website).
  • [3] (Website).
  • [4] M. Agheli, L. Qu, and S. S. Nestinger (2014) SHeRo: scalable hexapod robot for maintenance, repair, and operations. Robotics and Computer-Integrated Manufacturing 30 (5), pp. 478–488. ISSN 0736-5845.
  • [5] S. Beaber, A. Zaghloul, M. Kamel, and W. Hussein (2018) Dynamic modeling and control of the hexapod robot using Matlab SimMechanics. pp. V04AT06A036.
  • [6] D. Belter and P. Skrzypczynski (2011) Rough terrain mapping and classification for foothold selection in a walking robot. J. Field Robotics 28, pp. 497–528.
  • [7] M. Bjelonic, N. Kottege, and P. Beckerle (2016) Proprioceptive control of an over-actuated hexapod robot in unstructured terrain.
  • [8] A. Bowling (2011) Impact forces and agility in legged robot locomotion. Journal of Vibration and Control 17, pp. 335–346.
  • [9] S. Briot and W. Khalil (2015) Dynamics of parallel robots. Vol. 35.
  • [10] H. Deng, G. Xin, G. Zhong, and M. Mistry (2017) Gait and trajectory rolling planning and control of hexapod robots for disaster rescue applications. Robotics and Autonomous Systems 95, pp. 13–24. ISSN 0921-8890.
  • [11] X. Ding and F. Yang (2014) Study on hexapod robot manipulation using legs. Robotica 34, pp. 1–14.
  • [12] J. Faigl and P. Čížek (2019) Adaptive locomotion control of hexapod walking robot for traversing rough terrains with position feedback only. Robotics and Autonomous Systems 116.
  • [13] R. Featherstone (2008) Rigid body dynamics algorithms.
  • [14] T. Fukuda, F. Chen, and Q. Shi (2018) Special feature on bio-inspired robotics. Applied Sciences 8, pp. 817.
  • [15] Y. Fukuoka, H. Kimura, and A. Cohen (2003) Adaptive dynamic walking of a quadruped robot on irregular terrain based on biological concepts. I. J. Robotic Res. 22, pp. 187–202.
  • [16] H. Gao, Z. Deng, J. Song, Y. Liu, G. Liu, and K. Iagnemma (2013) Foot–terrain interaction mechanics for legged robots: modeling and experimental validation. The International Journal of Robotics Research 32, pp. 1585–1606.
  • [17] C. Hirt (2016) Digital terrain models. Encyclopedia of Geodesy.
  • [18] N. Hu, S. Li, Y. Zhu, and F. Gao (2018) Constrained model predictive control for a hexapod robot walking on irregular terrain. Journal of Intelligent and Robotic Systems 94.
  • [19] E. Kljuno and R. Williams (2010) Humanoid walking robot: modeling, inverse dynamics, and gain scheduling control. Journal of Robotics 2010.
  • [20] G. Legnani (2003) Robotica industriale. ISBN 88-408-1262-8.
  • [21] T. Maiti, Y. Ochi, D. Navarro, M. Miura-Mattausch, and H. J. Mattausch (2018) Walking robot movement on non-smooth surface controlled by pressure sensor. Advanced Materials Letters 9, pp. 123–127.
  • [22] H. J. Mattausch, A. Luo, S. Bhattacharya, S. Dutta, T. K. Maiti, and M. Miura-Mattausch (2020) Force-sensor-based walking-environment recognition of biped robots. In 2020 International Symposium on Devices, Circuits and Systems (ISDCS), pp. 1–4.
  • [23] S. Miller (2020) (Website).
  • [24] J. Pratt, P. Dilworth, and G. Pratt (1997) Virtual model control of a bipedal walking robot. Vol. 1, pp. 193–198. ISBN 0-7803-3612-7.
  • [25] A. Roennau, F. Sutter, G. Heppner, R. Dillmann, and J. Oberländer (2013) Evaluation of physics engines for robotic simulations with a special focus on the dynamics of walking robots.
  • [26] S. Salsa, F. Vegni, A. Zaretti, and P. Zunino (2009) Invito alle equazioni a derivate parziali.
  • [27] M. F. Silva, J. A. T. Machado, and R. S. Barbosa (2006) Complex-order dynamics in hexapod locomotion. Signal Processing 86 (10), pp. 2785–2793. Special Section: Fractional Calculus Applications in Signals and Systems. ISSN 0165-1684.
  • [28] R. Stengel (1994) Optimal control and estimation. Dover.
  • [29] H. Taghirad (2020) Parallel robots: mechanics and control.
  • [30] F. Tedeschi and G. Carbone (2014) Design issues for hexapod walking robots. Robotics 3, pp. 181–206.
  • [31] J. Tenreiro Machado and M. Silva (2006) An overview of legged robots.
  • [32] D. T. Tran, I. Koo, Y. H. Lee, H. Moon, J. Koo, S. Park, and H. Choi (2014) Motion control of a quadruped robot in unknown rough terrain using 3D spring damper leg model. 12, pp. 372–382.
  • [33] K. Walas (2014) Terrain classification and negotiation with a walking robot. Journal of Intelligent and Robotic Systems.
  • [34] D. Wettergreen (1995) Robotic walking on natural terrain: gait planning and behavior-based control for statically-stable walking robots. Ph.D. Thesis, Carnegie Mellon University, Pittsburgh, PA.
  • [35] Y. Zhao, X. Chai, and C. Qi (2018) Obstacle avoidance and motion planning scheme for a hexapod robot Octopus-III. Robotics and Autonomous Systems 103.