A Subject-Specific Four-Degree-of-Freedom Foot Interface to Control a Robot Arm

In robotic surgery, the surgeon controls robotic instruments using dedicated interfaces. One critical limitation of current interfaces is that they are designed to be operated by the hands only. This means that the surgeon can control at most two robotic instruments at one time, while many interventions require three instruments. This paper introduces a novel four-degree-of-freedom foot-machine interface which allows the surgeon to control a third robotic instrument with the foot, giving the surgeon a "third hand". This interface is essentially a parallel-serial hybrid mechanism with springs and force sensors. Unlike existing switch-based interfaces that can only generate motion in discrete directions, and unintuitively so, this interface allows intuitive control of a slave robotic arm in continuous directions and speeds, naturally matching the foot movements with dynamic force and position feedback. An experiment with ten naive subjects was conducted to test the system. In view of the significant variance of motion patterns between subjects, a subject-specific mapping from foot movements to command outputs was developed using Independent Component Analysis (ICA). Results showed that the ICA method could accurately identify subjects' foot motion patterns and significantly improve the prediction accuracy of motion directions relative to the 68% achieved with a kinematics-based approach. This foot-machine interface can be applied to the teleoperation of industrial/surgical robots, independently or in coordination with the hands.



I Introduction

Human-machine interfaces (HMI) play an important role in teleoperation systems. A well-built HMI can accurately identify the commands from the user and provide the user with haptic feedback from the remote environment. Many HMIs for hands are available on the market, e.g., PHANToM (3D Systems, USA) and the Delta/Omega/Sigma series (Force Dimension, Switzerland). These HMIs are under the direct control of the user's hands. They can act as master devices telemanipulating remote robotic arms through the slave robotic system.

HMIs have been widely used in robotic surgical systems. For example, the well-known Da Vinci system [1] has a built-in master console through which the surgeon tele-manipulates or switches instruments. The RAVEN [2] and MiroSurge [3] robotic systems offer teleoperation and haptic feedback through commercial hand interfaces PHANToM Omni and Sigma 7, respectively. The flexible endoscopic robotic MASTER [4] system has similar teleoperation mechanisms for the manipulation of two flexible robotic instruments in a flexible endoscope. While these HMIs provide good manipulation control, they are designed and used for the hands so that the surgeon can only control at most two instruments at one time.

These HMIs are too limited for many surgical operations, which require using three instruments simultaneously. In this case a human assistant is needed to help control the additional instrument(s) in coordination with the surgeon. For instance, a camera assistant is needed to adjust the camera field of view during laparoscopic surgery; in robotic flexible endoscopic surgery, the robotic arms controlled by the surgeon can only work in a small workspace, so an endoscopist is often needed to move the endoscope carrying the robotic arms in order to adjust the working area [4]. However, it has been observed that surgical performance and efficiency are often limited by communication delays and errors between the surgeon and the assistant [5], and any mistake may affect the patient's health.

A solution to address this issue is to let the surgeon control the additional instrument(s) [6]. Our vision is that the surgeon should seamlessly control a third robotic arm in conjunction with the natural arms, yielding smoother procedures, faster reactions, increased skill, reduced errors, and reduced manpower (fewer assistants). In fact, this is feasible because humans can naturally control their hands while carrying out other tasks with other parts of the body; e.g., one can manipulate objects with the hands while speaking and walking.

Various hands-free interfaces have been developed to control the camera in laparoscopic surgery using the head [7], voice [8], or switches activated by fingers or feet [9, 10, 11]. Most of these interfaces are intrinsically limited. For instance, voice commands may be affected by noise in the operating room, and several persons cannot send commands together; turning the head away from the area of interest may affect hand movements; finger movements are coupled with hand movements. Thus, the foot is the primary modality for input when the user's hands are busy [12, 13].

Foot-controlled movements are easy to learn [14], can be used in conjunction with hand movements [15, 16], and may provide similar or better skill than hand- or voice-controlled movements [17, 18]. Foot interfaces in commercial surgical robotic systems have mainly been used to control a laparoscopic camera [19, 20, 21], and generally are footswitches or buttons placed closely on a planar base to move the camera at constant speed in the zoom in/out, upward/downward, and right/left directions. Kawai et al. [22] used a pressure sensor sheet to record different foot movement patterns and locally control five degrees of freedom (DOF) of a forceps, and Abdi et al. [23] built an elastic-isometric four-DOF foot interface to control a robotic endoscope holder. These interfaces enable movement in only a few discrete directions, i.e., it is difficult or impossible to command two or more DOFs simultaneously; and some require frequent visual checks to ensure that the foot is placed correctly, especially for novice users. These interfaces also do not provide haptic feedback, which is required for fine control. Furthermore, individual differences in foot operation are not considered in these interfaces.

In this paper we present a foot interface overcoming these limitations: a four-DOF parallel-serial hybrid mechanism with springs and force sensors. It allows intuitive control of a slave robotic arm in continuous directions and speeds, naturally matching the foot movements with dynamic force and position feedback. The passive haptic feedback and automatic homing features of the interface relieve the user from visually checking the foot position. Moreover, the interface is adaptable to the specific movement patterns of different users so as to enable accurate control.

The paper is organized as follows: Section II describes the operation modes of the foot interface and the selected four-DOF foot motions. Section III presents the proposed foot interface mechanism design, followed by the kinematics and statics modeling of the interface in Section IV. Section V introduces a user study to test the developed foot interface. The results exhibit the need to identify user-specific commands corresponding to the users' foot motion patterns. Section VI proposes and compares different mapping approaches for the user-specific motion patterns. Section VII summarizes the paper's contributions and discusses the interface's limitations.

II Feedback and sensing modules

A passive compliant system with serial elastic feedback-sensing modules was selected to capture the continuous four-DOF natural movements of the foot with dynamic force feedback. Rich displacement and force feedback enables the operator to control robotic arms with enhanced intuitiveness, dexterity, and efficiency [24]. Elastic elements in robotic systems [25, 26] can help define the energy distribution of the system in the working range for desired functionality.

Fig. 1: Feedback and sensing module in (a) elastic and (b,c) isometric modes, showing the velocity of the mobile part, the force exerted by the foot, the elastic force, and the reaction force.

II-A Working Principle

The feedback-sensing module consists of an elastic element and a force sensor [27], as shown in Fig. 1. The force sensor, elastic element, and mobile part are connected serially with each other. The mobile part is actuated by foot movements while the elastic element is deformed under the force of the foot, which is detected by the force sensor. The measured forces can then be used to calculate the deformations of the elastic module, which are in turn used to calculate the position and orientation of the mobile part through kinematics. The whole foot operation can be separated into an elastic mode (Fig. 1a) and an isometric mode (Fig. 1b,c). Within the motion range of the elastic element, the foot is free to move (against the reaction forces from the springs), and the position/force of the foot can be used as the output of the interface to control a robotic arm; this is the elastic control mode. Once the elastic element reaches its elastic limit (e.g., the fully compressed compression spring in Fig. 1b) or a mechanical constraint (Fig. 1c), the foot cannot move beyond the corresponding boundary, but the corresponding force can still be changed by the user. This force signal provides the isometric control mode. The external force from the foot can be derived from the spring force or reaction force via the readings of the force sensor if the friction force is ignored. The transition between elastic and isometric modes provides rich proprioceptive information to the user and an unlimited input range (depending only on the operator's capability), which can be used to control the position/rate of the slave robot.
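The elastic/isometric distinction above can be sketched as a simple per-module classifier. This is a minimal illustration, not the authors' implementation: the 0.02 N/cm stiffness and 5.6 N pre-compression come from Section III, the 2 cm travel from Section II, and the classification function itself is our assumption.

```python
# Minimal sketch of per-module elastic/isometric classification (illustrative).
K = 2.0            # spring stiffness: 0.02 N/cm = 2 N/m (Section III)
F_PRE = 5.6        # pre-compression force at the home position, N (Section III)
TRAVEL = 0.02      # usable deflection before the travel limit, m (Section II)

def module_state(f_measured):
    """Return (mode, deflection, force) for one feedback-sensing module.

    In elastic mode the deflection follows Hooke's law from the load-cell
    reading; at or beyond the travel limit the module is isometric: the
    position saturates while the force keeps being measured.
    """
    deflection = (f_measured - F_PRE) / K  # m, relative to the home position
    if deflection >= TRAVEL:
        return "isometric", TRAVEL, f_measured
    return "elastic", deflection, f_measured
```

At the home position the reading equals the pre-compression force and the deflection is zero; the force signal remains meaningful in both modes, which is the point of using force sensors rather than position sensors.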

II-B Degrees of Freedom

Four-DOF specific foot motions were selected as the input to the foot interface system (Fig. 2a): i) foot forward/backward movements (due to knee joint flexion/extension), ii) foot lateral movements (due to hip abduction/adduction), iii) foot lateral/medial axial rotation, and iv) dorsiflexion/plantar flexion of the ankle. These natural foot movements can be carried out comfortably by most users. A relatively small workspace has been set to avoid uncomfortable operation and human fatigue (the above-mentioned motions i) and ii) are limited to 2 cm; motions iii) and iv) are limited to small rotation angles).

Eight feedback and sensing modules are located around the foot, as shown in Fig. 2, to collect the input signals from the foot motions. The first six feedback and sensing modules detect the three-DOF foot motions in the horizontal plane. They are connected in parallel to the foot to provide dynamic and continuous elastic feedback. Two additional feedback and sensing modules are located under the sole and heel to collect the force signal of the fourth-DOF foot motion in dorsiflexion/plantar flexion. The details of the design and modeling of the foot interface mechanism are described in the next two sections.

Fig. 2: Four-DOF foot motions enabled by the interface, and arrangement of feedback and sensing modules to measure these movements.

III Interface design

Fig. 3: Passive four-DOF foot interface. (a) The interface prototype controlled by a user’s foot. (b) The schematic diagram in top view at initial home position (blue dotted lines show a random position of the MF) and (c) side view with mechanical motion limits.

Fig. 3 shows the foot interface prototype and schematic diagrams. The foot is fixed to a foot pedal which is serially connected to a mobile frame (MF) through a pivot shaft, torsion springs, and load cells. The MF connects to the base through a series of linear guides, compression springs, and load cells, forming a parallel structure. The whole system is essentially a parallel-serial hybrid mechanism controlled by the foot. The parallel mechanism provides a simple and compact structure with low inertia through closed-loop kinematics; meanwhile, the serially connected pedal decouples the dorsiflexion/plantar flexion movements of the foot from the other, horizontal movements within a relatively large workspace.

The elastic mode of the system is achieved through an elastic network of eight springs providing dynamic, real-time passive force and position feedback. In addition, these springs are carefully arranged for a singularity-free workspace with a neutral central home position (global minimum of elastic energy), as sketched with solid lines in Fig. 3b,c. When the operator finishes the operation movements and releases the pedal, the interface returns to the home position automatically (assuming zero friction), providing a resting posture for the foot and enabling a quick start for the next operation, without the need for a visual check. The use of force sensors instead of position sensors collects force information of the operator's foot within and even beyond the geometric workspace, enabling the transition between elastic and isometric modes.

Fig. 4: 3D system mechanical structure of the foot interface in (a) perspective top, (b) open side views and zoom views of (c) compression spring feedback and sensing module, (d) torsion spring feedback and sensing module.

The 3D model of the foot interface is detailed in Fig. 4. The interface includes a base fixed on the ground, a mobile frame (MF), a pedal with an adjustable foot fixture, and the feedback and sensing modules with springs and force sensors. The base and the MF form a parallel kinematic structure and are connected by six compression-spring feedback and sensing modules (Fig. 4c) with hinge joints on both sides. The MF is the input component, which can slide in the horizontal plane in two translations and one rotation (2T1R) within the base. To reduce friction and inertia, eight universal wheels are mounted at the bottom of the MF (Fig. 4b), which transform sliding friction into rolling friction in order to minimize the user's fatigue. The pedal plate for the foot is serially mounted in the MF as an extension with a pivot shaft and two torsion-spring feedback and sensing modules (Fig. 4d), acting as a second input component for the pitch rotation.

The pitch rotation of the pedal and the movements of the MF are decoupled, so they do not affect each other. The potential motion coupling between the forward/backward motion and the dorsiflexion/plantar flexion of the ankle was minimized by placing the pitch pivot shaft at a low position, between the height of the spring guides and the motion surface. The driving force from the human foot acting on the pivot shaft, the reaction forces of the springs, and the (low) friction force are generally counterbalanced.

An adjustable foot fixture mechanism is mounted on the foot pedal plate, enabling comfortable but rigid fixation. This fixture is composed of four 3D-printed, foot-shaped guides that can fit different human feet well. Each guide block connects with two guide screws, enabling adjustment in four directions independently or in tandem, for feet of European sizes 35 to 46. The length and width can be easily adjusted by twisting the handles.

compression spring: stiffness, free length, initial length, fully compressed length
torsion spring: stiffness, position angle, pre-tension, operating angle
TABLE I: Springs' specifications

The six compression springs and a pair of torsion springs form a spring network with four DOFs. Springs of stiffness 0.01, 0.02, 0.03, 0.05, and 0.1 N/cm were considered, and the 0.02 N/cm compression spring was selected as not too hard to press while providing sufficient haptic feedback. Compression springs were preferred over extension springs, as they enable force measurement even when fully compressed and are simple to assemble. The selected spring specifications are listed in Table I. A pair of torsion springs was selected for the pitch rotation DOF, mounted symmetrically but in reverse directions on the pivot rotation shaft which supports the pedal plate. For each torsion spring, the fixed arm is integrated into the MF via a fixing block, whereas the mobile arm and moving block rotate with the pedal plate. The twisting angle is mechanically limited, the limit being reached once the moving block touches the vertical plane of the fixing block (refer to the enlarged view in Fig. 3c). The spring mechanism provides real-time force feedback changing monotonically with the foot displacement. Each spring is located outside the base mounting, in series with the corresponding force sensor (LW1025-25 from Interface, Inc., USA). As analyzed in Fig. 6 of Section IV, this avoids singularities inside the workspace and defines an automatic home position.

The forces applied by the operator are translated into electrical signals through the eight force sensors. To ensure that the springs can effectively transmit forces to the load cells in any pose of the pedal, a 5.6 N pre-compression force is applied to each compression spring at the home position. While the pedal's movements are limited by the deflection range of the compression springs (for the DOFs in 2T1R) and a mechanical constraint (for the DOF in pitch rotation), the force detection range is not restrained, i.e., once a compression spring is fully compressed, or the pitch rotation reaches the limit angle, the isometric force simply builds up and is still measured by the load cell.

IV Modeling

IV-A Kinematics

The kinematics of the three DOFs of 2T1R in the horizontal plane (without tilting the pedal) can be regarded as a 6-RPR planar parallel mechanism, as shown in Fig. 3b. The MF is constrained and connected to the base via six spring guides with hinge joints on both sides (represented as points $A_i$ on the base and $B_i$ on the MF). A fixed base reference frame {O} and a mobile reference frame {C} are assigned to the centroids of the base and the MF square plane, respectively.

Fig. 5: (a) Kinematic model of the $i$-th compression spring, (b) statics forces in the horizontal plane, and (c) pitch rotation DOF.

The position vector of point $A_i$ is defined by $\mathbf{a}_i$ expressed in the fixed frame {O}, whereas the position vector of point $B_i$ is defined as $\mathbf{b}_i$ in the mobile frame {C}. They can be represented using the lengths and widths of the base and the MF squares, and the distance between the laterally located linear spring guides. The position of the MF reference frame {C} relative to the base reference frame {O} is defined by the vector $\mathbf{t}$ connecting O to C, while its orientation is defined by the angle $\varphi$ between the $x$ axis of {O} and the $x$ axis of {C}. The length of the $i$-th spring guide between the attachment points on the base and the MF is denoted $l_i$. Finally, let $\mathbf{n}_i$ denote the unit vector along the $i$-th spring guide.

Using these definitions, the closed mechanical chain of Fig. 5a is:

$l_i\,\mathbf{n}_i = \mathbf{t} + \mathbf{R}\,\mathbf{b}_i - \mathbf{a}_i$   (1)

where $\mathbf{R}$ is the rotation matrix from the fixed frame {O} to the mobile frame {C}. For this parallel mechanism, the inverse kinematics, i.e., calculating the guide lengths $l_i$ as a function of the pose translation $\mathbf{t}$ and rotation $\varphi$, can be computed from the closure constraints:

$l_i = \lVert \mathbf{t} + \mathbf{R}\,\mathbf{b}_i - \mathbf{a}_i \rVert, \quad i = 1, \dots, 6$   (2)

For the forward kinematics, given the guide lengths $l_i$, the pose $(x, y, \varphi)$ can be derived by solving the six scalar equations (2), e.g., in least-squares form:

$(x, y, \varphi) = \arg\min \sum_{i=1}^{6} \bigl( \lVert \mathbf{t} + \mathbf{R}\,\mathbf{b}_i - \mathbf{a}_i \rVert - l_i \bigr)^2$   (3)

The magnitude of guide length $l_i$ is derived from the compression spring force change via Hooke's law:

$l_i = l_{0i} - \dfrac{F_i - F_{0i}}{k_i}$   (4)

where $k_i$ is the stiffness constant of the $i$-th compression spring, $l_{0i}$ and $F_{0i}$ are the original length and pre-tension force of the $i$-th spring guide and spring at the home position, and $F_i$ is the $i$-th load cell's reading. The pitch rotation angle is

$\theta = \dfrac{\tau_\theta}{k_\theta}$   (5)

where $\tau_\theta$ and $k_\theta$ are the rotation torque and stiffness constant in the pitch DOF, which will be derived in Eq. (9).
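The inverse and forward kinematics above can be sketched numerically. The anchor-point geometry below is illustrative, not the paper's dimensions, and the solver is a plain Gauss-Newton iteration on the closure residuals, standing in for the paper's closed-form solution:

```python
import numpy as np

# Illustrative guide-anchor geometry in metres — NOT the paper's dimensions.
# Rows: base points a_i; B rows: MF points b_i (in the MF frame).
A = np.array([[-0.15, 0.20], [0.15, 0.20], [0.25, 0.05], [0.25, -0.05],
              [-0.25, -0.05], [-0.25, 0.05]])
B = np.array([[-0.05, 0.10], [0.05, 0.10], [0.10, 0.02], [0.10, -0.02],
              [-0.10, -0.02], [-0.10, 0.02]])

def guide_lengths(pose):
    """Inverse kinematics, Eq. (2): guide lengths for pose (x, y, phi)."""
    x, y, phi = pose
    R = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])
    d = np.array([x, y]) + B @ R.T - A   # t + R b_i - a_i, one row per guide
    return np.linalg.norm(d, axis=1)

def forward_kinematics(l_meas, iters=20):
    """Recover (x, y, phi) from six guide lengths by Gauss-Newton on the
    closure residuals — a numerical stand-in for the closed form of Eq. (3)."""
    x = np.zeros(3)
    eps = 1e-6
    for _ in range(iters):
        r = guide_lengths(x) - l_meas
        J = np.zeros((6, 3))
        for k in range(3):           # central-difference Jacobian
            dx = np.zeros(3)
            dx[k] = eps
            J[:, k] = (guide_lengths(x + dx) - guide_lengths(x - dx)) / (2 * eps)
        x = x - np.linalg.lstsq(J, r, rcond=None)[0]
    return x
```

Because six scalar constraints determine three unknowns, the system is overdetermined and a least-squares step is the natural update; with exact (noise-free) lengths it converges to the true pose.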

IV-B Statics

Elastic statics

Gravity can be neglected in the horizontal plane of Fig. 5b. The resultant force $\mathbf{F}$ and torque $\tau_z$ on the MF can be computed from the compression spring force changes $F_i$. The statics equations are:

$\mathbf{F} = \sum_{i=1}^{6} F_i\,\mathbf{n}_i, \qquad \tau_z = \sum_{i=1}^{6} (\mathbf{R}\,\mathbf{b}_i) \times (F_i\,\mathbf{n}_i)$   (6)

which can be written in matrix form as

$\mathbf{w} = \begin{bmatrix} \mathbf{F} \\ \tau_z \end{bmatrix} = \mathbf{A}\,[F_1, \dots, F_6]^{\top}$   (7)

where $\mathbf{A}$ is the structure matrix of the planar parallel structure. The stiffness matrix $\mathbf{K}$ in the motion workspace can be found by taking the derivative of $\mathbf{w}$ with respect to the pose $\mathbf{x} = (x, y, \varphi)$:

$\mathbf{K} = \dfrac{\partial \mathbf{w}}{\partial \mathbf{x}}$   (8)

The torque for pitch rotation around the center point is obtained through the following equation:

$\tau_\theta = F_7\,d_7 - F_8\,d_8$   (9)

where $F_7$ and $F_8$ are recorded by load cells 7 and 8, placed under the sole and heel, respectively. Theoretically, there is no pre-tension force for the two torsion springs in the balanced state, so the magnitudes of $F_7$ and $F_8$ directly reflect the input force change (Fig. 5c). $d_7$ and $d_8$ are the arm lengths from the pedal plate center to the positions of load cells 7 and 8, which can be modified according to the operator's habitual posture. The stiffness in this DOF is directly given by the stiffness constants of torsion springs 7 and 8.
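The planar statics amount to summing the spring forces along the guide directions and accumulating their moments about the frame origin. A minimal sketch with illustrative inputs (the geometry is an assumption, not the paper's):

```python
import numpy as np

def planar_wrench(f, n, r):
    """Resultant planar force and yaw torque on the MF, in the spirit of
    Eqs. (6)-(7).

    f: spring-force magnitudes, shape (m,)
    n: unit vectors along the guides, shape (m, 2)
    r: attachment points of the guides on the MF, in the base frame, shape (m, 2)
    """
    F = (f[:, None] * n).sum(axis=0)                           # net planar force
    tau = np.sum(r[:, 0] * f * n[:, 1] - r[:, 1] * f * n[:, 0])  # z-axis torque
    return F, tau
```

Stacking the per-guide contributions column-wise would give the structure matrix of Eq. (7); here the sums are written out directly for clarity.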

The wrench and stiffness of the four-DOF structure can be rewritten as:

$\mathbf{w} = \begin{bmatrix} \mathbf{F} \\ \tau_z \\ \tau_\theta \end{bmatrix}, \qquad \mathbf{K} = \dfrac{\partial \mathbf{w}}{\partial (x, y, \varphi, \theta)}$   (10)

where the resultant wrench $\mathbf{w}$ is fed back to the human. From static equilibrium, the sum of the external forces and moments exerted on the pedal equals the resultant external wrench exerted on the human foot.

Isometric statics

When the pedal reaches a boundary of the workspace, the system is in isometric mode. The force/torque can continue to increase and is recorded by the load cells coupled with the fully compressed compression spring(s) or the mechanically constrained torsion springs.

Singularities

Fig. 6a illustrates a previous design with compression springs between the base and the MF. This design yields two singular configurations (Fig. 6a, black and blue lines) at the extreme yaw rotations, where the pedal remains twisted, making the pedal's movement uncontrollable. This issue was addressed by placing the compression springs outside of the base (Fig. 3b). This changes a compressing force exerted on the MF into a pulling force, thus making the springs free to take any orientation independently of the other connections. This structure avoids the mechanical constraints that result from using bars connecting the base and the MF, and the resulting singularities at local energy minima (Fig. 6b, blue line). The elastic energy has a unique global minimum at the home position (Fig. 6b, black line and Fig. 6c).

Fig. 6: (a) Initial design with springs inside the base, between the base and MF. (b,c) show how the elastic energy depends on the yaw rotation angle (b) and on the planar position (c).

V Analysis of aiming error

V-A Experiment

An experiment was conducted with ten subjects (27.3 ± 2.2 years old, right-foot dominant, 4 female) to study the precision with which they can control the interface in given directions using their right foot. The experiment was approved by the Institutional Review Board (IRB) of Nanyang Technological University (IRB-2018-05-051).

Each subject was seated comfortably on a chair in front of a table, and the foot interface was placed under the table at the home position, with the pedal horizontally located at the center of the base. The subjects were asked to place their right foot on the pedal and adjust the positions of the fixture blocks to match their specific foot size.

The task was to move the pedal from the home position along multiple specified directions to the boundary of the workspace. Once at the boundary, the pedal had to be held for one second and then returned to the home position. The task was demonstrated to the subjects by the experimenter prior to data collection. The subjects were also informed of the desired target directions. No visual feedback of the foot posture was provided during the experiment. In this way, we could observe feedforward foot movements corresponding to any desired motion direction. Movements were carried out in single and diagonal directions:

Single directions

Each subject started at the home position. Three trials were conducted in each of the following eight directions, in this order: forward (F), backward (B), left (L), right (R), toe up rotation (TU), toe down rotation (TD), left torsion (LT), and right torsion (RT). This procedure was continuous, without pause, until all the center-out and back movements were completed. Then, the subject lifted his/her foot off the pedal and took a 30-second break. Another group of 24 trials was repeated after that.

Common diagonal directions

Each subject started at the home position. Three trials were conducted in each of the twelve common diagonal directions, which are combinations of two single Cartesian directions, in the order {left & forward (LF), right & forward (RF), left & backward (LB), right & backward (RB), left & toe up (LTU), right & toe up (RTU), left & toe down (LTD), right & toe down (RTD), forward & toe up (FTU), backward & toe up (BTU), forward & toe down (FTD), backward & toe down (BTD)}. In total, 36 trials were conducted in diagonal directions.

V-B Data Analysis

Foot force data from the load cells were recorded at 50 Hz and smoothed offline using a moving average filter with a window size of 9. They were mapped to the pose vector of the center point C of the pedal using the forward kinematics Eqs. (3, 5).

The actual initial position of the foot for each consecutive operation was taken as the calibrated home position. To compare the different components of the pose, a point P on the pedal, defined with respect to {C}, was selected as a reference point (see Fig. 5). The four-DOF position change of point P with respect to the home position comprises the two translations and the yaw and pitch angles (Section V-A); a scaling factor was used to bring the two angles into the same range as the translations. The data were then filtered to remove static position data using a resultant-velocity threshold of 0.005 m/s.
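The preprocessing just described (50 Hz sampling, window-9 moving average, 0.005 m/s static-sample threshold) can be sketched as follows; the column layout of the pose array is our assumption, not stated in the paper:

```python
import numpy as np

FS = 50.0          # load-cell sampling rate, Hz (Section V-B)
WIN = 9            # moving-average window size (Section V-B)
V_THRESH = 0.005   # resultant-velocity threshold, m/s (Section V-B)

def preprocess(pos):
    """Smooth an (N, 4) pose series and drop near-static samples.

    Assumed column layout: [x, y, scaled yaw, scaled pitch]. The resultant
    velocity is computed from the two translational columns.
    """
    kernel = np.ones(WIN) / WIN
    smooth = np.column_stack([np.convolve(pos[:, k], kernel, mode="same")
                              for k in range(pos.shape[1])])
    vel = np.gradient(smooth, 1.0 / FS, axis=0)   # per-axis velocity estimate
    speed = np.linalg.norm(vel[:, :2], axis=1)    # resultant translational speed
    return smooth[speed > V_THRESH]
```

A truly static recording is filtered out entirely, while a moving one passes through almost unchanged apart from the smoothing at the edges.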

Foot-path error was used to quantify the performance in the trials:

$e = \dfrac{1}{N} \sum_{j=1}^{N} \lVert \mathbf{p}_j - \hat{\mathbf{p}}_j \rVert$   (11)

where $\mathbf{p}_j$ are the real trajectory points, $\hat{\mathbf{p}}_j$ the projection of $\mathbf{p}_j$ onto the desired path, and $N$ is the total number of samples for the foot movements in each direction.
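The metric reduces to projecting each sample onto the desired straight-line path through the origin and averaging the orthogonal deviations; a minimal sketch:

```python
import numpy as np

def foot_path_error(traj, direction):
    """Mean deviation of a trajectory from the desired straight-line path
    (the foot-path error of Eq. (11)).

    traj: (N, d) trajectory samples relative to the calibrated home position.
    direction: (d,) desired movement direction (need not be unit length).
    """
    u = direction / np.linalg.norm(direction)
    proj = np.outer(traj @ u, u)   # closest points on the desired path
    return np.linalg.norm(traj - proj, axis=1).mean()
```

A trajectory running exactly along the target direction scores zero; a parallel trajectory offset laterally by 1 unit scores exactly 1.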

V-C Results

Fig. 7 shows the foot-path error over the ten subjects in the various directions, and the variability over the consecutive trials of each subject. We see that in the single directions (Fig. 7a,b), LT and RT have a larger error relative to the other directions, and F/B have the least error. Diagonal directions (Fig. 7c) that combine two translations have less error than the diagonal motions combining a rotation and a translation. In most cases, the deviation between subjects is large, while the deviation over the trials of each subject is relatively small. In addition, the smoothness in the two data sets of single-direction trials and in the data set of diagonal-direction trials was analyzed using spectral metrics [28], which can be used to assess learning [29]. There was no significant difference between the two single-direction data sets (t-test, p > 0.4), meaning that there was no learning effect between these two periods. However, there was a significant difference in smoothness between single and diagonal directions (t-test, p < 0.01), indicating smoother movements in single-axis directions than in diagonal ones.

Fig. 7: Foot-path error and its standard deviation over subjects and over consecutive trials of each subject on (a) data set 1 of single-direction trials, (b) data set 2 of single-direction trials, and (c) data set 3 of diagonal-direction trials.

VI Subject-specific control patterns

The forces measured by the interface reflect the operator's foot motion intention and are used to control a four-DOF device. What is required is a mapping

$g: \mathbf{F} \in \mathbb{R}^{8} \mapsto \mathbf{x} \in \mathbb{R}^{4}$   (12)

from the eight load cells' force signals to a movement in the four DOFs of $\mathbf{x} = (x, y, \varphi, \theta)$, reflecting the pose of the center point C of the pedal in the base frame {O} (Figs. 3b,c). An obvious solution consists of using the forward kinematics Eqs. (3, 5), which directly reflect the foot motion patterns.

As discussed in the last section, the performance of the foot motions differs depending on the direction and the subject. However, the variability over consecutive trials of a single subject is relatively low, as can be seen in Fig. 7. Therefore, an alternative idea for the mapping consists of identifying subject- and direction-specific force patterns that can be used as commands to control a four-DOF device by foot. Independent component analysis (ICA) can separate mixed signals into simpler components. This method was used to find the foot motion patterns of each subject and obtained superior performance compared to the kinematics modeling.

The input to the ICA model is the z-score normalization of the delta force changes of the load cells' readings (the real-time readings minus the spring pre-tensions at the home position). The FastICA algorithm [30] is then applied to this data to derive a subject-specific ICA model:

$\mathbf{x} = \mathbf{W}\,\tilde{\mathbf{F}}$   (13)

where $\mathbf{W}$ is a mapping matrix derived from one subject's force data $\tilde{\mathbf{F}}$. Its components are eight-dimensional arrays for each DOF, derived from the single-axis Cartesian motion data of L & R, F & B, LT & RT, and TU & TD, respectively. They reflect the new basis of the four DOFs for the specific subject as combinations of the eight load-cell readings.

Once a subject-specific model is built using the calibration procedure described in the single-direction task of Section V-A, the subject-specific patterns are used to predict further motor commands matching the subject's motion intention, i.e., mapping the forces to the motion command. To verify this method, we analyse the three sets of data collected in Section V-A, used for modeling (single-axis direction data set 1) and testing (single-axis direction data set 2 and the diagonal-direction data). For comparison purposes, the results from the kinematics and ICA methods are converted to the same range through a min-max normalization. The original positions are calibrated to zero for each consecutive set of data.
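At run time, applying a learned subject-specific model is just a matrix multiplication on the z-scored force changes, per Eq. (13). In this sketch the 4x8 matrix W is assumed to be given (in the paper it is derived with FastICA on the single-direction calibration data):

```python
import numpy as np

def zscore(F):
    """Z-score the (N, 8) delta force changes column-wise, as in Section VI."""
    return (F - F.mean(axis=0)) / F.std(axis=0)

def apply_subject_map(W, f_norm):
    """Map one normalized 8-dim force sample to a 4-DOF command, Eq. (13).

    W: (4, 8) subject-specific mapping matrix — one eight-dimensional row
    per DOF, obtained in the paper via FastICA [30]; here it is an input.
    """
    return W @ f_norm
```

Each row of W plays the role of a subject-specific "basis" for one DOF, replacing the generic forward-kinematics mapping with one fitted to how that subject actually moves.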

Fig. 8: Comparison of foot motion path in directions F, B, L, R of three representative subjects for single Cartesian modeling data using kinematic transformation and ICA.

Fig. 8 illustrates the effect of the subject- and direction-specific ICA transformation compared to kinematics modeling, with three subjects' foot trajectories in the horizontal plane, plotted from the modeling data in the directions F, B, L, R, with three trials per direction. The kinematics model (red traces), corresponding to the actual human foot motions, exhibits different patterns over subjects. In contrast, the ICA modeling yields a subject- and direction-specific mapping minimizing the foot-path error and the performance variability across users.

Fig. 9: Foot motion with kinematic transformation (red lines) and ICA (black lines) on testing data for subject 9 in single Cartesian directions data (a) and diagonal directions (b).

Fig. 9 further illustrates the results of the two mapping methods in four DOFs on the testing data of subject 9. The sequences of motions for single and diagonal directions are labeled at the top of each figure. The ICA method effectively fixes the issue of inaccurate motion coupling across DOFs, for example in the directions F and RT in Fig. 9a, and BTU in Fig. 9b.

To assess the accuracy of the control commands for all subjects and directions, the direction identification accuracy, defined as the ratio of the number of movement samples in the desired direction to the number of movement samples in all directions, was used as a metric to compare the mapping methods. For example, when the forward direction is desired, correct samples are those commanding motion in the forward direction, while the total counts all samples commanding motion in any direction.
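This accuracy metric can be sketched as follows (the exact per-DOF thresholding conditions are not reproduced in the text; the sketch assumes zero-band suppression has already been applied, so a "correct" sample is one whose only active DOF is the target axis with the target sign):

```python
import numpy as np

def direction_accuracy(commands, target_axis, target_sign):
    """Fraction of moving samples identified in the desired direction.
    `commands` is (samples x DOFs) after zero-band thresholding
    (values inside the band already set to 0)."""
    commands = np.asarray(commands, dtype=float)
    moving = np.any(commands != 0, axis=1)                 # any motion at all
    on_target = commands[:, target_axis] * target_sign > 0 # right sign on target DOF
    others = np.delete(commands, target_axis, axis=1)
    only_target = np.all(others == 0, axis=1)              # no coupling into other DOFs
    correct = moving & on_target & only_target
    return correct.sum() / max(moving.sum(), 1)
```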

The diagonal directions can be regarded as rotated single directions and follow similar identification requirements in a transformed coordinate frame. For instance, when the target direction is LF, the data are transformed by an anti-clockwise rotation in the plane, changing the target direction to a negative axis. The remaining DOFs are not counted in the diagonal direction analysis: as they cannot reach their extreme values during diagonal motions, the normalized data lose their relative relationship with the other DOFs. Thus, correct samples for direction LF are identified in the rotated frame, against all movement samples.

A zero band based on the modeling data was set for the control commands in each DOF. The upper and lower limits in translations and rotations were identified as 30% and 40% of the respective maximum and minimum values of the mapping results, for both the kinematics and ICA methods. The resulting zero-band ranges are small when reported in the theoretical kinematic model. For the subsequent testing data, if a mapping's result falls into the zero band, it is regarded as zero output in the corresponding DOF.
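The zero-band computation and suppression might look like the following sketch (the assignment of the 30% fraction to the maxima and the 40% fraction to the minima follows the text; which fraction applies to translations versus rotations is an assumption):

```python
import numpy as np

def zero_band_limits(model_outputs, frac_max=0.3, frac_min=0.4):
    """Per-DOF band limits from the modeling-data mapping results:
    frac_max of each column's maximum and frac_min of its minimum."""
    m = np.asarray(model_outputs, dtype=float)
    return frac_min * m.min(axis=0), frac_max * m.max(axis=0)

def apply_zero_band(commands, lower, upper):
    """Treat any command strictly inside the per-DOF band as zero output."""
    out = np.array(commands, dtype=float)
    out[(out > lower) & (out < upper)] = 0.0
    return out
```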

Fig. 10: Direction identification accuracy using the kinematic model transformation (a,b,c) and ICA model (d,e,f). (a,d) applies on the single Cartesian modeling data, (b,e) on the single Cartesian testing data, and (c,f) on the diagonal directions testing data.

Fig. 10 shows the direction identification accuracy of the kinematic (Fig. 10a-c) and ICA (Fig. 10d-f) mappings for the 10 subjects of the previous experiment. The performance of the kinematic mapping is clearly improved by the subject-specific ICA mapping. However, even the ICA mapping cannot prevent specific problems, such as subject 1 in the RT direction. Furthermore, as expected, the accuracy is lower in the diagonal directions.

VII Conclusion

This paper introduced a four-DOF foot-controlled human-machine interface featuring force and position feedback, continuous control of the output space, and automatic home positioning. The design and modeling of this foot interface were presented, and an experimental study was conducted to quantify its performance. An ICA-based approach was proposed to define a suitable mapping from the foot movement space to the output space.

The experimental data obtained with ten able-bodied subjects exhibited subject-specific movement patterns, with clear variability between subjects but little variability across the repeated trials of each subject. This motivated us to develop a subject-specific mapping from the movement space to an output command space using ICA, which improved the control compared with the common kinematic transformation approach. With the ICA-based transformation, the direction identification accuracy over the ten subjects increased, relative to the kinematic transformation, for all three data sets of single and diagonal directions.

The above results on foot motion direction identification indicate that the foot interface with the built-in ICA model can accurately identify multi-directional foot motion intentions. Nevertheless, the performance of the ICA method largely depends on the calibration procedure and data, which should reflect the habitual motion pattern of the specific subject as closely as possible. This may explain why, e.g., subject 1 could not achieve results as good as the other subjects.

Acknowledgment

The authors would like to thank Jonathan Eden for his suggestions on improving this manuscript and the subjects who participated in this study. This work was funded by the Singapore National Research Foundation through the NRF Investigatorship Award (NRF-NRFI 2016-07).
