Four-Arm Manipulation via Feet Interfaces

09/11/2019 ∙ by Jacob Hernandez Sanchez, et al. ∙ EPFL

We seek to augment human manipulation by enabling humans to control two robotic arms, in addition to their natural arms, using their feet. The hands thereby remain free to perform tasks requiring high dexterity, while the foot-controlled arms perform tasks requiring lower dexterity, such as supporting a load. The robotic arms are teleoperated through two foot interfaces that transmit translation and rotation to the end effector of the manipulator. Haptic feedback is provided so that the human can perceive contact and changes in load and adapt the foot pressure accordingly. Existing foot interfaces have been used primarily for single-foot control and are limited in their range of motion and in the number of degrees of freedom they can control. This paper presents foot interfaces specifically made for bipedal control, with a workspace suitable for two-feet operation and five degrees of freedom each. The paper also presents a position-force teleoperation controller based on impedance control modulated through dynamical systems for trajectory generation. Finally, an initial validation of the platform is presented, whereby a user grasps an object with both feet and generates various disturbances while the object is supported by the feet.


Introduction

There is evidence that the feet could be good candidates for controlling robotic arms. Studies in foot-computer interaction have found the feet to be appropriate for both accurate and non-accurate spatial tasks [HOFFMANN1991][Pakkanen2004], and more recently [Abdi2016] found that having a mental representation of one foot as a third hand in a virtual environment can improve performance in cognitively demanding scenarios.

We investigate the design of a feet interface, namely an interface that can be operated by both feet simultaneously, and show how such an interface can be used to enable four-handed telemanipulation (Fig. 1).

Our use case contrasts with other approaches to Supernumerary Robotic Limbs (SRL), as in [Llorens-Bonilla2012][Bonilla2014][Bright2017], in that the human controls the additional robotic arms using the natural dexterity of their feet. This control can potentially span a spectrum from direct manipulation of the motor commands to shared autonomy that facilitates the task for the human.

Not only does interaction through the feet leave the hands free to perform other tasks, but the haptic link between human and robot also allows the human to supervise the desired motion and force of the task. This may be advantageous in human-robot collaborative scenarios where visual or verbal guidance would compromise the efficiency, responsiveness or quality of the task (e.g. assisted surgery, complex assemblies, etc.).

Figure 1: The user drives two robotic arms with the feet. Each robot is controlled by the ipsilateral foot in Cartesian teleoperation. The forces measured at each robot are fed back to the user through haptic feedback.

Feet Interfaces

Unlike its hand counterpart, a foot interface faces the additional challenge that the leg represents a considerable load, namely the weight of the leg, which depending on the body posture can amount to a sizeable fraction of body mass in healthy adults [Plagenhoef1983]. Since this high payload challenges the mechatronic design, it is not a surprise that, even though multiple foot platforms are reported in the literature (for rehabilitation, locomotion, etc.), not all of them are convenient for teleoperation using both feet. Many of them, like [Otis2008][Iwata2001][Yoon2006a], are cumbersome because of the large electromagnetic actuators required to reflect high forces and to compensate for the weight of the leg when moving up and down. Some machines, like [Farjadian2014][Saglia2013][Wang2013], are limited in degrees of freedom (from one to three) because they were designed for the ankle and not for motion of the whole leg. Moreover, many of them, like [Paradiso2004][Rovers2005][Abdi2017a], are devoid of active haptic feedback. Finally, most of the surveyed interfaces employ parallel kinematics [Girone2001][Saglia2013][Wang2013], which, despite being advantageous in terms of rigidity and low inertia, limits the workspace for linear motions relative to the total footprint.

Our contribution is a mechanical solution that lets the feet move in a large workspace relative to the footprint of the platforms. Since we target the use of both feet simultaneously, the feet should also be able to move close to each other. Additionally, we sought a mechanical implementation that facilitates ad hoc modification of the workspace, by choosing serial kinematics and joints that can easily be constrained.

Figure 2: Denavit-Hartenberg kinematic model with 5 DOF. Two prismatic joints drive the linear motions and three rotational joints control the orientations. The motion command is expressed in the frame of reference of the tip of the pedal. A small offset between the last rotational joints avoids the gimbal-lock problem, so there is no singular configuration within the limits of the workspace.

Kinematic Model

The kinematic model is illustrated in Fig. 2. We reduced the number of degrees of freedom to five to alleviate the high torque requirements of compensating the inertial forces in the up-down motion of the legs.

Let us adopt the convention that the coordinate frames are defined as sets of orthonormal basis vectors, each frame identified by its index $i$.

The notation adopted in this paper for the kinematic formulation departs from the conventional Denavit-Hartenberg (DH) parameterization in that we use intermediate supplementary frames (fixed joints) for convenience and clarity of the solution. This formulation notably allows the three Tait-Bryan angles to be represented in the kinematic chain using DH parameters.

Frame 0 is the inertial reference frame, which is either static or moving with constant velocity.

Let ${}^{i-1}T_{i}$ be the coordinate transformation matrix between frame $i-1$ and frame $i$, obtained as the composition of homogeneous transformations:

${}^{i-1}T_{i} = \mathrm{Rot}_z(\theta_i)\,\mathrm{Trans}_z(d_i)\,\mathrm{Trans}_x(a_i)\,\mathrm{Rot}_x(\alpha_i)$   (1)

where $\mathrm{Trans}$ denotes a pure translation along an arbitrary axis of an arbitrary frame and $\mathrm{Rot}$ denotes a pure rotation around an arbitrary axis of an arbitrary frame. Following the DH convention, $\theta_i$ and $d_i$ are the angle and displacement along the $z$ axis, whereas $\alpha_i$ and $a_i$ are the angle and translation along the $x$ axis, respectively.

Table 1 presents the geometric parameters of the kinematic chain.

Table 1: DH Parameters of the Kinematic Model

The forward kinematic model is obtained as follows:

${}^{0}T_{N-1} = \prod_{i=1}^{N-1} {}^{i-1}T_{i}$   (2)

where $N$ is the number of coordinate frames, in this case 7, and the product denotes the pre-multiplication of the successive transformation matrices. After computing the forward kinematics, the desired motion of the foot is taken from the Cartesian coordinates of the frame at the tip of the pedal, which can be described in the inertial reference frame by the following pose vector:

(3)

where $c$ and $s$ denote the cosine and sine of the corresponding angle, respectively.

The vector in (3) contains the task Cartesian coordinates, while the geometric parameters of the five actuated joints (two prismatic displacements and three rotation angles) correspond to the generalized coordinates of the joint space.
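For illustration, the following minimal Python sketch (not part of the original implementation) evaluates the forward kinematics of a serial chain from DH parameters, following eqs. (1)-(2); the DH rows below are placeholders, not the values of Table 1.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive frames, eq. (1):
    Rot_z(theta) * Trans_z(d) * Trans_x(a) * Rot_x(alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    """Pre-multiply the successive transforms, eq. (2),
    and return the pose of the last frame (pedal tip)."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_rows:
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Placeholder DH table: two prismatic joints (variable d) and three
# revolute joints (variable theta); the values are illustrative only.
dh_rows = [
    (0.0,       0.10, 0.0, -np.pi / 2),   # prismatic X (d varies)
    (-np.pi/2,  0.20, 0.0, -np.pi / 2),   # prismatic Y (d varies)
    (0.3,       0.05, 0.0, -np.pi / 2),   # revolute (yaw)
    (0.1,       0.02, 0.0,  np.pi / 2),   # revolute (pitch)
    (-0.2,      0.00, 0.1,  0.0),         # revolute (roll) + pedal offset
]
T_tip = forward_kinematics(dh_rows)
print("Pedal tip position:", T_tip[:3, 3])
```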

Resulting Workspace

Figure 3: Illustration of the workspace of the feet pedals if the two 5-DoF platforms were placed next to each other. The volume per foot is computed from the forward kinematics at the pedal tips. The small rectangles represent the linear range of motion in XY, to be compared with the net footprints of the platforms, illustrated as the large rectangles at the base.

The workspace of the platform was computed from the forward kinematics of the Denavit-Hartenberg formulation, as illustrated in Fig. 3, considering the ranges of motion of the five degrees of freedom and the geometrical parameters of the links. These values were verified to agree with the lower-limb effective workspace of an adult male of average height, based on [Pheasant1996]. The height of the platform was defined under the technical constraints of the available hardware.
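A sketch of how such a workspace estimate can be obtained numerically: sample the joint ranges and bound the cloud of pedal-tip positions returned by the forward kinematics. The geometry and joint limits below are illustrative assumptions, not the actual design values.

```python
import itertools
import numpy as np

def pedal_tip_position(dx, dy, yaw, pitch, roll, lever=0.18):
    """Simplified stand-in for the forward kinematics of eq. (2):
    two prismatic displacements plus a pedal lever rotated by three
    angles (illustrative geometry, not the actual platform)."""
    rz = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                   [np.sin(yaw),  np.cos(yaw), 0.0],
                   [0.0, 0.0, 1.0]])
    ry = np.array([[np.cos(pitch), 0.0, np.sin(pitch)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(pitch), 0.0, np.cos(pitch)]])
    rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(roll), -np.sin(roll)],
                   [0.0, np.sin(roll),  np.cos(roll)]])
    return np.array([dx, dy, 0.0]) + rz @ ry @ rx @ np.array([lever, 0.0, 0.0])

# Illustrative joint ranges; the real limits are those of Table 2.
ranges = [np.linspace(0.0, 0.30, 13),     # prismatic X [m]
          np.linspace(0.0, 0.20, 13),     # prismatic Y [m]
          np.linspace(-0.5, 0.5, 7),      # yaw [rad]
          np.linspace(-0.7, 0.7, 7),      # pitch [rad]
          np.linspace(-0.5, 0.5, 7)]      # roll [rad]

pts = np.array([pedal_tip_position(*q) for q in itertools.product(*ranges)])
extent = pts.max(axis=0) - pts.min(axis=0)
print("Pedal-tip bounding box [m]:", extent, "volume [m^3]:", np.prod(extent))
```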

Hardware Implementation for Cartesian Control

Figure 4: Illustration of the foot platform for linear Cartesian control. It provides 3D force feedback (X, Y and pitch), highlighted with three different colors. DC motors provide the force and the motion is measured with optical encoders. Two passive joints are fixed to a desired position measured with soft potentiometers. Each encoder is initialized using limit switches. All axes are belt driven. The linear motions are achieved with V-slot aluminium profiles and adjustable rollers. The pulley for the pitch motion was custom made through 3D printing. A 6-axis ATI Mini40 Force/Torque sensor is used to monitor the foot interaction forces.

Figure 5: Block diagram of the hardware and control architecture. In the firmware implementation, homing and centering algorithms are followed by the implicit force control for teleoperation and by the inner current control loop (PI), both running at kHz rates. The teleoperation channel is implemented using the Robot Operating System (ROS).

The first demonstration simplifies the task to a linear Cartesian teleoperation (3D). We built a platform, shown in Fig. 4, following the kinematics presented in Fig. 2, but with the last two passive joints blocked. Hence, each foot can control and receive feedback in 3 DoF. A future implementation will include active force feedback in all 5 DoF.

The final specifications of the platform are listed in Table 2. The ranges of motion were informed by available biomechanical data of the ankle (cf. [Siegler1988][Dettwyler2004]). For the rotations, we allow a larger range of motion than the anatomical constraints of the ankle, given the extra mobility gained when engaging the movement of the entire leg.

Similarly, the dimensioning of the motors was based on the known psychophysics of perceivable forces in the foot's plantar/dorsiflexion. [Southall1985] studied the perception of resistive forces in a vehicle pedal (felt at the tip of the pedal around a lever arm); the results indicate the Weber fraction required for a force difference to be detected by most of the population over a range of background forces. Similarly, [Abbink] found that footwear and the frequency and duration of the signal affect the perception of active force variations in a vehicle pedal. The reported just noticeable difference (JND) when wearing socks agrees with [Ichinose2013], who report the JND in a study of driving assistance through pedal reaction force control.

A recent study by [Geitner2018] on the influence of footwear, pulse duration and amplitude suggested a range of forces over which force reflection is comfortably detected. This agrees with [Abbink] and also with [EDWORTHY1995], who reported that higher intensities startle the driver.

Consequently, we based our design on the recommended values for force reflection in plantar/dorsiflexion, expecting similar perception capabilities in the other foot rotations.

Note that we envision the human operating in a sitting position rather than standing, assuming that sitting provides better body balance for moving both hands and feet in multiple degrees of freedom.

The mechanical structure consists of aluminium frames with V-grooves (V-Slot) that enable smooth, self-centering linear motion using wheel-supported gantry plates driven by a timing belt-pulley transmission. For the rotary motion of the pitch joint (see Fig. 2), a larger pulley was manufactured to amplify the torque.

As illustrated in Fig. 4, the mechatronic design comprises a series of sensors and actuators for motion input and force reflection. DC motors (Faulhaber 38H024CRxx) are driven by servo-controllers (MAXON ESCON 50/5) and measured with incremental differential encoders (IE3-1024L). LS7366-based encoder counters communicate with the microcontroller via the serial peripheral interface (SPI). The quadrature encoders measure the angles of the actuated joints, whereas the passive joints are measured with membrane potentiometers (Spectra Symbol SP-L-0100-103-3%-RH), which provide absolute angle estimation. Limit switches are used to reset (home) the values measured by the incremental encoders. A six-axis force/torque (F/T) sensor (ATI Mini40) measures the interaction forces between the platform and the foot. The control is performed on an ARM Cortex-M4 based microcontroller (STM32F303xx).

The control and hardware architectures are summarized in Fig. 5.

Metric | Design specification | Value
Size | Height | m
 | Footprint | m × m
Range of motion | X, Y | m
 | Pitch |
Transmission | Reduction ratio |
Nominal wrench | Force X and Y | N (nominal), N (peak)
 | Torque (pitch) | Nm (nominal), Nm (peak)
Motion sensing resolution | Linear (X & Y) | µm
 | Angular (pitch) |
F/T sensing resolution [ATI_S2018] | Force | N
 | Torques | Nm
Table 2: Specifications of the Foot Platform

Robot Control

Figure 6: Position-force DS-impedance based teleoperation architecture. The two colors represent the master and the teleoperated device. The newly introduced variable is the foot interaction force, which is monitored to check transparency and is not used for closed-loop control.

The proposed control architecture of the position-force teleoperation is illustrated in Fig. 6. We assume a constant negligible time delay in the communication channel, and also that the robotic arms are torque controlled.

For clarity, the variables related to the robot arm and those of the foot platform are distinguished by superscripts.

For the telemanipulator side, we start with the classical expression for the dynamics of a manipulator in three-dimensional Cartesian space:

$M(x)\ddot{x} + C(x,\dot{x})\dot{x} = F_c + F_{ext}$   (4)

where $x$ denotes the position of the end effector, $M(x)$ the inertia matrix, $C(x,\dot{x})\dot{x}$ the centrifugal and Coriolis forces, while $F_c$ and $F_{ext}$ correspond to the control and external forces, respectively. The control force is obtained from an impedance controller that takes the output of a time-invariant dynamical system as reference velocity $\dot{x}_d$, see [Kronander2016]:

$F_c = g(x) + D(x)\,(\dot{x}_d - \dot{x})$   (5)

where $g(x)$ denotes the gravity compensation forces and $D(x)$ is a varying damping matrix with positive eigenvalues, designed such that its first eigenvector is aligned with $\dot{x}_d$. By manipulating the last two eigenvalues, one can selectively damp perturbations that are orthogonal to $\dot{x}_d$, see [Kronander2016]. This is advantageous to provide selective rigidity in directions that matter for the task (e.g. along the normal to the contact with the object) and hence to handle external disturbances in the teleoperation. One clear advantage of this implementation is safe physical robot interaction with unknown environmental forces acting in directions not relevant to the task. A case in point is when both hands are used alongside the foot-controlled telemanipulators to perform a supernumerary manipulation task; in that case, the robot can be made compliant to these exogenous forces when they are not aligned with the foot commands, so that the feedback does not startle the user.

To generate the desired robot velocity we use a linear dynamical system (DS) whose attractor is obtained by mapping the user's foot position to the robot's workspace: $x_{att} = T_p\, x^P$, where $x^P$ is the foot position in the platform frame (at the pedal tip) and $T_p$ is the telefunctioning matrix mapping the platform's workspace to the robot's one, which takes into account the rotations between both reference frames.
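The following Python sketch illustrates the control law of eq. (5) combined with the linear DS: the attractor is the mapped foot position, the desired velocity points toward it, and the damping matrix is built with its first eigenvector aligned with that desired velocity (cf. [Kronander2016]). The gains, eigenvalues and telefunctioning matrix used here are illustrative assumptions, not the tuned values of the experiment.

```python
import numpy as np

def damping_matrix(v_d, lambda_along=60.0, lambda_orth=180.0):
    """Varying damping matrix D = Q diag(l1, l2, l3) Q^T, with the first
    eigenvector aligned with the desired velocity v_d."""
    n = np.linalg.norm(v_d)
    e1 = v_d / n if n > 1e-6 else np.array([1.0, 0.0, 0.0])
    # Complete e1 into an orthonormal basis using cross products.
    helper = np.array([0.0, 0.0, 1.0]) if abs(e1[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    e2 = np.cross(e1, helper); e2 /= np.linalg.norm(e2)
    e3 = np.cross(e1, e2)
    Q = np.column_stack([e1, e2, e3])
    return Q @ np.diag([lambda_along, lambda_orth, lambda_orth]) @ Q.T

def control_force(x, x_dot, x_foot, T_p, gravity_comp, gain=1.5):
    """Impedance control force of eq. (5) with a linear DS as velocity reference."""
    x_att = T_p @ x_foot              # attractor: foot position mapped to robot workspace
    x_dot_d = -gain * (x - x_att)     # linear DS converging to the attractor
    D = damping_matrix(x_dot_d)
    return gravity_comp + D @ (x_dot_d - x_dot)

# Example call with made-up states (robot at rest, foot displaced 2 cm in X).
T_p = 5.0 * np.eye(3)                 # illustrative 1:5 position scaling
F_c = control_force(x=np.zeros(3), x_dot=np.zeros(3),
                    x_foot=np.array([0.02, 0.0, 0.0]),
                    T_p=T_p, gravity_comp=np.zeros(3))
print(F_c)
```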

Finally, an orientation error for the end effector of the robotic arm is computed from the axis-angle representations of the measured and the desired orientations of the end effector. The rotation target is tracked using a PD controller.
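The exact error convention is not detailed here; as an illustration, a common axis-angle choice together with a PD law can be sketched as follows (the gains are placeholders).

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def orientation_error(R_meas, R_des):
    """Axis-angle rotation taking the measured end-effector orientation
    onto the desired one, expressed in the base frame."""
    return (R.from_matrix(R_des) * R.from_matrix(R_meas).inv()).as_rotvec()

def pd_orientation_torque(R_meas, R_des, omega, kp=8.0, kd=0.8):
    """PD law driving the orientation error to zero (omega: angular velocity)."""
    return kp * orientation_error(R_meas, R_des) - kd * omega

# Example: desired orientation rotated 0.3 rad about z, robot currently at identity.
R_des = R.from_euler("z", 0.3).as_matrix()
print(pd_orientation_torque(np.eye(3), R_des, omega=np.zeros(3)))
```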

On the side of the foot master device, we work in the joint space and consider the dynamics of the haptic device:

$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + F_f(\dot{q}) + g(q) = \tau_c + \tau_d + \tau_h$   (6)

where $q$ represents the joint generalized coordinates, $M(q)$ is the configuration-dependent inertia matrix, $F_f(\dot{q})$ the non-linear velocity-dependent forces (friction), $C(q,\dot{q})\dot{q}$ the centrifugal and Coriolis forces, and $g(q)$ the configuration-dependent static forces (gravity). $\tau_c$ is the actuator commanded force expressed in the joint space, $\tau_d$ is the actuator disturbance projected in the joint space, and $\tau_h$ is the human input (interaction) force expressed in the joint space as $\tau_h = J_v^{T} F_h$, where $F_h$ is the 3D Cartesian force applied on the platform and $J_v$ is the translational submatrix of the geometric Jacobian.

Figure 7: Illustration of the bipedal telemanipulation assisting the work of the hands. The red lines are guides to help understand the relative motion; the video sequences provided as supplementary material are more telling than this description. Phases: a) no action is performed with the feet while bimanual tasks are carried out; b) one robotic arm is used to retrieve a parts container to assist the user; c) to facilitate the task, the user grasps and lifts the container using the two foot-controlled robotic arms; d) the user adds the completed bundles to the container (working on the container); e) the user performs a supernumerary task where, after placing a lid on the container, he holds it with his feet while hammering; f) the user moves the robotic arms away from the working area once the tasks are completed (retreating).
Figure 8: Excerpt of results for the right foot/right arm, expressed in the frame of the right arm. Top: interaction force in the direction of grasping (in the frame of the platform and in the frame of the KUKA LWR), showing the measured and the desired forces. Bottom: X, Y and Z plots of the robotic arm position, comparing the desired position derived from the human input with the measured position.

Furthermore, the desired feedback force is reflected from the telemanipulator as $F_h^{des} = T_f\, F_{ext}$, where $F_{ext}$ is the interaction force between the telemanipulator and the environment and $T_f$ is a telefunctioning matrix defining the desired force relation between the telemanipulator and the foot master device.

The friction is assumed to be low, and we expect the Coriolis and centrifugal forces to be negligible within the motion bandwidth of the leg. Hence, based on (6), the inverse dynamics can be approximated by:

$M(q)\ddot{q} + g(q) \approx \tau_c + \tau_d + \tau_h$   (7)

At the same time, each actuator's commanded torque (an element of $\tau_c$) is controlled through the motor current as $\tau = k_t\, i$, where $k_t$ is the torque constant and $i$ is the current applied to the motor, tracked with a PI controller.
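A sketch of this implicit (open-loop) force reflection on the master side, assuming illustrative values for the torque constant and transmission ratio: the reflected force is projected to joint torques through the translational Jacobian and converted to motor current setpoints.

```python
import numpy as np

def reflected_torques(F_ext_robot, T_f, J_v, g_q):
    """Joint torque command: gravity compensation plus the reflected
    environment force F_d = T_f F_ext, projected with tau = J_v^T F_d."""
    F_d = T_f @ F_ext_robot
    return g_q + J_v.T @ F_d

def current_setpoints(tau_cmd, k_t=0.042, reduction=3.0):
    """Motor current setpoints from joint torques, tau = k_t * i at the motor,
    assuming an illustrative torque constant and belt reduction ratio."""
    return tau_cmd / (k_t * reduction)

# Example: 5 N grasping force measured at the robot, reflected 1:1 in X/Y
# and scaled down 5x in Z (axis of gravity); Jacobian is a placeholder.
T_f = np.diag([1.0, 1.0, 0.2])
J_v = np.eye(3)
tau = reflected_torques(np.array([5.0, 0.0, 2.0]), T_f, J_v, g_q=np.zeros(3))
print(current_setpoints(tau))
```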

Experimental Validation

An experiment with the feet platforms was performed with two KUKA LWR IV+ robots. The goals were to: 1) evaluate the force transparency of the teleoperation (environment force vs. foot interaction force) when the master device is used with implicit force control (no force sensor in the closed loop); 2) assess the DS-modulated impedance control in terms of its effects on position-force tracking in the teleoperation; and 3) check the feasibility of a bipedal grasp in Cartesian motion.

A volunteer from the research group (the main author) performed the test. He familiarized himself with the device for 20 minutes before the task, moving his feet around and observing the behaviour of the robot arms. The calibration of the telefunctioning and robot impedance matrices was based on what the user perceived as comfortable. Namely, the reflected force along the axis of gravity was scaled down five times for the master, while the remaining components were unchanged. Similarly, the platform's position was amplified five times, in all axes, from the master device to the robot's workspace.

The Task

The user was tasked with controlling the two robotic arms through coordinated and uncoordinated maneuvers. The task is illustrated and explained in detail in Fig. 7.

Results

Fig. 8 shows the temporal evolution of the position of the ipsilateral right robotic arm and foot, as well as the interaction force in the environment and at the foot interface (N.B. the left side is not included since, at the time of the experiment, the left platform did not have a force sensor).

The position plots indicate that the tracking error in the direction of grasping was lower in phases a, b and f (i.e. free motion) than in phases c-e (i.e. contact), in terms of the Root Mean Squared Error (RMSE). A likely explanation is that, when trying to squeeze the object (phase e), the moving attractors controlling the position of the robotic arms are virtually pushed further inside the object, while the real walls of the object prevent convergence in this direction, which translates into a larger position tracking error. In contrast, in the orthogonal directions the tracking error was larger in free motion than in contact, because when holding the object in place, convergence to the attractor in these directions is not constrained.

It is clear how the impedance endowed to the robot arm smooths its motion, acting as a low-pass filter on the position tracking. This translates into less jerky movements (see the plot for phase b). The task-aligned damping gain would have to be tuned depending on the task, to find a trade-off between compliance and accuracy. Regarding the directions orthogonal to the task, the damping seems to contribute to the stability of the grasp and to a low startling of the subject (evidenced by the low reactivity of the human motion during phases c-e), especially during the abrupt disturbances (i.e. hammering).

Results show a low error in force reflection during contact (i.e. phases c-e), meaning that the foot platforms are highly transparent. This result can be attributed to the high backdrivability (due to the low gear ratio employed) and to the smooth motion of the joints chosen for the mechanical construction.

To conclude, the task of grasping, lifting, working on an object, and overcoming abrupt perturbations with the feet was found feasible and successfully achieved.

Summary and Outlook

The contribution presented in this paper evaluates the use of the feet for direct control of robotic arms to be used along with the biological hands in a manipulation task. Both an experimental prototype and a control implementation are presented along with a preliminary demonstration. Results show a human being able to perform and maintain a bipedal grasp with high force transparency in the task-aligned direction and rejection of abrupt disturbances in the orthogonal directions to the task. These selectively convenient behaviours were possible thanks to the control strategy adopted (Impedance Control modulated through a Dynamical System).

Despite using an open-loop implicit haptic control with approximate compensation of the dynamics, the force error during contact was small. This is an initial validation of the suitability of the mechanics for haptics and suggests that closed-loop force control may not be necessary. Nevertheless, further characterization of the platforms regarding Z-width and friction identification should be carried out.

We are currently working on evaluations with more participants and on the definition of protocols for training and for determining subject-specific calibration parameters. Additionally, a new version of the platform with the same kinematics, but with fully motorized 5 DoF, is under development.

As next steps, we will focus on defining and testing more complex and concrete manipulation tasks with simultaneous four-arm interactions. Controlling four arms simultaneously is likely to impose an additional cognitive load on the human. To alleviate this, we are investigating autonomy for the robots, so that the two robotic arms can coordinate and synchronize their motion and force. Specifically, moving up the spectrum of shared autonomy, the next immediate step is to combine the direct control presented in this paper with a dynamical-systems approach for motion and force generation in contact tasks (cf. [Billard-RSS-19]).

Acknowledgement

We thank the support of the Hasler Foundation and the European Community Horizon 2020, in particular the robotics program ICT-23-2014 under grant agreement 644727-CogIMon.

References