A Flexible and Modular Body-Machine Interface for Individuals Living with Severe Disabilities

07/29/2020 · Cheikh Latyr Fall, et al. · Université Laval

This paper presents a control interface that translates the residual body motions of individuals living with severe disabilities into control commands for body-machine interaction. A custom, wireless, wearable multi-sensor network is used to collect motion data from multiple points on the body in real time. The proposed solution successfully leverages electromyography (EMG) gesture recognition techniques for the recognition of inertial measurement unit (IMU)-based commands, without the need for cumbersome and noisy surface electrodes. Motion pattern recognition is performed using a computationally inexpensive classifier (Linear Discriminant Analysis) so that the solution can be deployed onto lightweight embedded platforms. Five participants (three able-bodied and two living with upper-body disabilities) presenting different motion limitations (e.g. spasms, reduced motion range) were recruited. They were asked to perform up to 9 different motion classes, including head, shoulder, finger, and foot motions, with respect to their residual functional capacities. The measured prediction performances show an average accuracy of 99.96% for able-bodied individuals and 91.66% for those living with disabilities. The recorded dataset has also been made available online to the research community. Proof of concept for the real-time use of the system is given through an assembly task replicating activities of daily living, using the JACO arm from Kinova Robotics.


I Introduction

Assistive Technology (AT) devices are tools that aim to provide people living with disabilities with complementary functionalities to compensate for cognitive, sensory or motor impairments. Such tools often require complex user interaction to properly activate all their degrees of freedom (DoFs). Control interfaces (CIs) such as joysticks and user buttons, with or without adaptive functions, are necessary to capture the user’s intent and translate it into action commands. Although considerable efforts have been devoted to improving CIs for people living with severe disabilities, important challenges still need to be addressed, both in terms of the variability of the users’ capacity to interface with AT and the general functionality of these ATs.

Individuals living with limited residual functional capacities (RFCs) have to rely on specialized CIs to provide them with activable DoFs. Devices such as dedicated joysticks and user buttons [35] require mechanical intervention by the user and can fail for those with dexterity issues, lack of mobility or absence of upper extremities. Sip-and-puff tools [4] and head-mounted switches [6] have successfully been used to operate powered wheelchairs. However, they are often cumbersome, counter-intuitive and hardly adaptable to each individual’s ergonomic requirements (e.g. chair, wheelchair, bed) and impairment condition. CIs that translate tongue position into command vectors can provide several DoFs to the severely disabled, but they tend to be invasive, requiring a tongue piercing, and necessitate the help of a third person to place/remove the headset or the necessary intraoral sensing accessory [29, 20]. Brain activity can be used to operate external devices by reading electroencephalography (EEG) and/or electrocorticography (ECoG) signals [27, 28]. Although the results are promising, precise electrode placement and extensive training phases are often required. Furthermore, the implementation cost of these techniques remains high [21]. Eye motion and gaze orientation, sensed using electro-oculography (EOG) [26] or camera and infrared (IR) sensors [39, 3, 23], have also been used to operate, inter alia, an articulated robotic arm [25, 22]. While skin preparation and facial electrode placement are required for EOG measurement, the user has to remain within a limited field of view for camera and IR sensors. Surface electromyography (sEMG), from which muscle activity patterns can be derived, has also been used through amplitude-based detection [11] and/or pattern recognition [8] to implement efficient CIs. However, periodic recalibration of the classification system is required [24, 10]. In addition, sEMG-based control can be impractical in the presence of muscle spasms or when the muscle signal is weak, due to atrophy or low body-motion amplitude (e.g. fingers and toes). In these cases, inertial measurement unit (IMU) sensors, when properly used, can provide much better body-motion measurement resolution.

Body-machine interfaces (BoMIs) that rely on residual motion can be highly beneficial to users by maintaining a certain level of muscular activity and tonus in their mobile body parts. Commercial-grade IMUs have been successfully used to read upper-body gestures and control external devices. Researchers used three [38] and four [1] IMU modules from Xsens Technologies (Enschede, Netherlands, www.xsens.com) to read the shoulder motion of individuals living with spinal cord injuries (SCI) between C2 and C5, and provide proportional control to a powered wheelchair and a computer cursor. In those studies, Principal Component Analysis (PCA) was performed to extract the first two principal components of the motion pattern during calibration and build a body-movement transfer function. However, such approaches limit the system to only two DoFs. Chau et al. proposed a technique consisting of modelling the upper body using a finite model introduced as the Virtual Body Machine (VBM) [7]. Although it was successfully tested using five IMU modules, developed by YOST Labs (Ohio, USA, www.yostlabs.com), to operate a 7-DoF robotic arm, it requires precise upper-body measurements such as head width, torso height and posture. Systems relying on a set of calibrated angular amplitude thresholds (threshold-based approaches) to generate control commands that are proportionally derived from head/shoulder motions have also been developed [14, 12, 13, 19]. Their operating principle is depicted in Figure 1. Despite their high precision, one major drawback of threshold-based approaches is that they can only target a pre-determined, and thus limited, range of functional capacities. For instance, limiting motion to the head and shoulders restricts the applicability and usability to a smaller group of disabilities. Furthermore, this type of control is not suitable for users with conditions that generate spasms.

Fig. 1: Proportional threshold-based head motion control along the Pitch angle, requiring calibrated amplitudes and predefined motion characteristics that severely limit its applicability to individuals with spasms, low motion amplitude, or diverse RFCs on different body parts (e.g. foot, finger, shoulder).

For users to fully benefit from the growing popularity of AT devices, several issues need to be addressed to overcome the limitations of existing BoMI solutions. First, from a functional point of view, existing BoMIs are often highly specific, hardly customizable and cannot accommodate a wide range of disabilities without significant changes to their architecture. Manufacturers often have to build or integrate new hardware and/or control algorithms for each user. Thus, from a design point of view, trying to accommodate new users with an existing system can entail considerable engineering effort and monetary cost. As a large part of the population who need AT devices do not have access to them, due in part to the incurred costs, this issue must be addressed [32]. In particular, an efficient BoMI design should allow the integration of new modalities to most suitably address the RFCs of the user (e.g. IMU, sEMG, voice).

This work proposes a calibration (training) and control algorithm providing both motion classification and amplitude control using IMU sensors. It applies proven signal processing techniques commonly used in EMG pattern recognition applications [37] to the processing of IMU signals for body-machine interaction purposes. The proposed system translates residual body motion from a wide range of body parts (e.g. finger, head, shoulder, foot) into up to a 9-DoF command vector for external device control. Unlike modalities that require the use of electrodes or a direct field of view, IMU sensors can be easily integrated within accessories and garments. The software algorithm is intended to run on a low-cost, readily accessible processing platform (a Raspberry Pi [30] in this case), to be embedded on mobile platforms such as powered wheelchairs. This paper adopts a new approach to addressing the lack of non-invasive BoMIs for severely impaired individuals, by leveraging affordable solutions with the potential of being suitable for a wide range of disabilities. Additionally, the JACO arm from Kinova Robotics (Boisbriand, Canada, www.kinova.ca) [5] is employed as a testbed to prove the functionality of the proposed modular BoMI by performing activities of daily living in real time.

This paper is organized as follows. An overview of the system’s architecture is provided in Section II. Section III describes the dataset recorded for this work alongside the proposed feature extraction method, classification scheme and experiments conducted. Section IV presents the experimental results, including a real-time experiment to assess the usability of the proposed system for the completion of tasks of daily living. Section V concludes the paper.

II System Architecture

In an era where the continuous evolution of technologies and infrastructures is mainly shaped by the capacities of their users, individuals living with cerebral palsy (CP), spinal cord injuries (SCI), congenital absence of limbs or stroke-induced impairments of the upper body usually have limited direct interaction with their environment. Depending on the severity of their condition, these individuals often have RFCs allowing them to move their toes, feet, fingers, shoulders and head. However, these motion abilities tend to weaken if not maintained. One of the main added values of BoMIs is to exploit these voluntary capacities and turn them into efficient control means to operate CIs, while allowing users to more easily retain their motion capabilities.

The objective that motivated the architecture of the proposed system is to provide the user with a flexible sensing system that can capture their RFCs and voluntary motion capacities for translation into commands. The system uses sensors that are worn with accessories or garments, or as patches, to properly measure IMU motion signals. The suitable motion pattern features are then extracted based on the motion characteristics and fed into a classifier for real-time pattern recognition into several classes. While each class is mapped to specific DoFs, the motion amplitude is provided as well to allow for proportional control (speed, position, intensity, etc.), as depicted in Figure 2.

Fig. 2: Functional Diagram of the proposed BoMI system.

The proposed BoMI was specifically designed around a set of requirements to satisfy comfort, affordability, power autonomy, robustness and intuitiveness. The motion capture system utilizes a custom wearable sensor network, made of off-the-shelf electronic components. IMU data fusion and signal processing are performed by this system to provide precise pattern extraction.

II-A Hardware System

The architecture of the custom, wireless, wearable body sensor network used to implement the proposed system is described in [13]. It is part of an ongoing project to provide a flexible framework architecture for the design of BoMIs dedicated to the severely disabled. Within the network, each IMU sensor node integrates the LSM9DS0 inertial sensor from STMicroelectronics, Switzerland, which provides a serial peripheral interface (SPI). The MSP430F5528 microcontroller unit (MCU) from Texas Instruments, USA, is used for its low-power performance. The recorded data is sent wirelessly using the nRF24L01+ 2.4-GHz radio-frequency (RF) chip from Nordic Semiconductor, Norway, which employs a proprietary protocol designed to allow up to 6 pipelines (TX and RX). Therefore, up to 6 IMU sensor nodes, each lying on a 4-cm by 2.5-cm printed circuit board (PCB), can be used simultaneously. They can be worn with accessories (e.g., a headset or a ring), attached to clothes or put directly on the skin as patches (see Figure 3).

Fig. 3: Hardware included with the proposed BoMI. (a) the IMU sensor node worn by users and (b) Raspberry Pi and base station.

The wearable sensors connect wirelessly to a USB base station through a body area network (BAN). The nodes are all independent of each other for flexibility and modularity. The network’s communication uses a star topology. The base station features the TM4C123GH6PM Cortex-M4F MCU from Texas Instruments to gather the data from the network, handle communications and signaling, and transfer the data to the host platform (Raspberry Pi) for real-time data processing and pattern recognition. It is also used to program the sensors (i.e. download the firmware into the embedded MSP430 MCU) and to charge the battery.
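As an illustration of how the host side might consume the base station’s stream, the following Python sketch reads fixed-size frames over a serial link and unpacks the 16-bit raw IMU words. The frame layout, device name and read_frame helper are hypothetical placeholders, not the actual protocol of the base station.

```python
import struct
import serial  # pyserial

# Hypothetical frame: 1 byte node ID + 9 little-endian int16 words
# (3-axis accelerometer, gyroscope and magnetometer), 19 bytes total.
FRAME_SIZE = 1 + 9 * 2

def read_frame(port: serial.Serial):
    """Read one raw IMU frame from the base station (assumed layout)."""
    raw = port.read(FRAME_SIZE)
    if len(raw) != FRAME_SIZE:
        return None  # timeout or partial frame
    node_id = raw[0]
    words = struct.unpack("<9h", raw[1:])
    acc, gyro, mag = words[0:3], words[3:6], words[6:9]
    return node_id, acc, gyro, mag

if __name__ == "__main__":
    # '/dev/ttyACM0' is an assumed device name for the USB base station.
    with serial.Serial("/dev/ttyACM0", 115200, timeout=0.1) as port:
        frame = read_frame(port)
        if frame is not None:
            print("node", frame[0], "acc", frame[1], "gyro", frame[2], "mag", frame[3])
```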

II-B Software and Signal Processing

The 16-bit raw IMU data from each sensor node are sampled at 60 Hz to provide a suitable time resolution, given that body-motion frequency is often below 10 Hz [40]. The 3-axis accelerometer (acc_x, acc_y, acc_z), 3-axis gyroscope (gyro_x, gyro_y, gyro_z) and 3-axis magnetometer (mag_x, mag_y, mag_z) components are processed using a first-order complementary filtering approach [18], as recommended in [13], to retrieve the corresponding Pitch, Roll and Yaw orientation angles. Data fusion provides approximately 1° of precision and a measured angular drift of 18E-4° over time. Prior to regular operation, a proper angular offset rotation is applied during a calibration phase to provide relative motion measurements with respect to an initial neutral position.
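To make the fusion step concrete, here is a minimal first-order complementary filter sketch for the Pitch and Roll angles; the 0.98/0.02 blending coefficient, the axis conventions and the function name are illustrative assumptions, not the exact filter used in this work. Yaw would similarly blend the gyroscope’s z-axis with a magnetometer-derived heading.

```python
import math

ALPHA = 0.98        # assumed blending coefficient (gyro vs. accelerometer)
DT = 1.0 / 60.0     # 60 Hz sampling period

def complementary_update(pitch, roll, acc, gyro):
    """One first-order complementary filter step.

    acc = (ax, ay, az) in g, gyro = (gx, gy, gz) in deg/s (assumed axes).
    Returns the updated (pitch, roll) in degrees.
    """
    ax, ay, az = acc
    gx, gy, _ = gyro

    # Long-term reference from the accelerometer (gravity direction).
    acc_pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    acc_roll = math.degrees(math.atan2(ay, az))

    # Short-term integration of the gyroscope, corrected by the accelerometer.
    pitch = ALPHA * (pitch + gy * DT) + (1.0 - ALPHA) * acc_pitch
    roll = ALPHA * (roll + gx * DT) + (1.0 - ALPHA) * acc_roll
    return pitch, roll

# Example: one update from a neutral estimate.
print(complementary_update(0.0, 0.0, acc=(0.0, 0.0, 1.0), gyro=(0.5, -0.3, 0.0)))
```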

The raw IMU data (3-axis accelerometer, 3-axis gyroscope and 3-axis magnetometer), the calibrated 3D orientation angles (Pitch, Roll and Yaw) and the time-domain features computed from all the sensors worn by the user are all used for motion pattern recognition. A motion amplitude indicator, described by (1), is derived from the measured 3D angles in real time. The maximum motion range value along each of the classifier’s motion classes is captured during a training phase, and the corresponding minimum value is found during the same phase. Thus, along each motion class, a proportional output, computed as described in (2), is provided.

(1)
(2)
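A minimal sketch of forms that (1) and (2) can take, assuming the amplitude indicator A(t) is the Euclidean norm of the relative Pitch, Roll and Yaw angles and the proportional output P_i(t) is a clipped min-max normalization over the per-class range captured during training; the exact expressions may differ.

```latex
% Assumed forms of (1) and (2); A_{\min,i} and A_{\max,i} are the per-class
% motion-range bounds captured during the training phase.
\begin{align*}
A(t) &= \sqrt{\theta_{\text{pitch}}(t)^2 + \theta_{\text{roll}}(t)^2 + \theta_{\text{yaw}}(t)^2} \tag{1}\\
P_i(t) &= \operatorname{clip}\!\left(\frac{A(t) - A_{\min,i}}{A_{\max,i} - A_{\min,i}},\; 0,\; 1\right) \tag{2}
\end{align*}
```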

III Dataset and Method

The BoMI architectures described in [13, 19] employ a set of calibrated thresholds, based on the user’s capacities, to capture motion and infer intent. These systems use head motion measurement to generate control commands. They are built around a set of a priori assumptions about the user’s motion ranges and therefore exclude individuals with specific RFCs or spasms. The BoMI proposed in the current work overcomes these limitations, allowing the user to choose the body parts and motion ranges to use.

III-A Participants

A total of five participants with different motion capacities were recruited to build, test and validate the functionalities of the proposed BoMI. For each individual user, a dedicated transfer function is provided, built from the classifier model generated by the acquired motion patterns (training session). The functionality and reliability of the proposed approach for different motion amplitudes, and over five consecutive days of usage, is also investigated. The architecture of the proposed BoMI is designed to provide portability and comfort.

The experimental protocol was performed in accordance with the ethics guidelines of Université Laval (approbation #2016-277 A-1/31-01-2017A). The complete dataset recorded with said experimental protocol is available for download at github.com/LatyrFall/FlexibleBoMI.

The body motions of interest were chosen with the goal of providing both an intuitive directional control (2D or 3D), similar to a joystick device, and the possibility of emulating at least one user button (see Figure 4). This is consistent with devices such as the JACO arm, which minimally requires a 2D joystick and a single button to be fully controllable [13].

Fig. 4: Example of a 3D joystick controller used to control the JACO arm. Operating the six functionalities of the stick as well as the two user buttons requires a level of dexterity and precision out of reach for individuals with severe disabilities.

During the design phase, three able-bodied participants (P1, P2 and P3) were recruited. The nine different head/shoulder motions presented in Figure 5 are employed to control the JACO arm with the same capability as with the joystick depicted in Figure 4. Three IMU sensors are used and worn as depicted in Figure 6. The motion classes c1 to c8, described in Figure 5, are mapped to the eight joystick functionalities (the six stick directions and the two user buttons), respectively. The class c0 indicates the user’s neutral position.

Fig. 5: Dictionary of the six targeted head motions (Pitch, Roll, Yaw), the two shoulder motions, and their corresponding labels c1 to c8; c0 designates the neutral position.
Fig. 6: Illustration of two sensors worn on the right and left shoulders of a participant, and a third sensor worn with a headband accessory, prior to performing a recording Session.

Two participants living with upper-body disabilities and specific residual motion capacities were then recruited to evaluate the performance of the proposed approach. Both are AT users, possess a JACO arm, and have experience using CIs such as joysticks, dedicated switches, keypads, sip-and-puff and eye-tracking tools. Prior to performing the experiment, the participants filled out a user profile form to provide information about their disability, residual body motion capacities and associated control level (from 1 to 3), and spasm level if any (Low, Medium or High). This information is summarized in Table I.

Participant P4, a 29-year-old male, lives with cerebral palsy. He experiences spasms (see Table I) and his RFCs allow him to perform head and foot movements. The targeted motions, chosen in consultation with the participant, are the four head motions depicted in Figures 5-c), 5-d), 5-e) and 5-f), and knee elevation. These motions allow the user to emulate all the functionalities of a 2D joystick, while the knee elevation also allows the simulation of a user button. Two sensor nodes, worn as depicted in Figure 7-a), are necessary to record the targeted motions.

Fig. 7: (a) Participant P4 and (b) participant P5 wearing their two sensor nodes (a-1, a-2 and b-1, b-2) prior to performing the recording Sessions.

Participant P5, a 46-year-old male, lives with a degenerative muscular dystrophy. He is able to perform head motions and limited left-thumb movements. A 2D joystick control configuration with a user button emulation is again utilized. Two sensor nodes are worn as depicted in Figure 7-b). One sensor is used for thumb motion sensing, as depicted in Figure 8, in order to replicate the 2D joystick control (forward, backward, right, left). Additionally, the head motion depicted in Figure 5-e) is used for user button emulation, with respect to the participant’s functional capacities.

Fig. 8: Finger motions performed by participant P5 with the corresponding labels.
Characteristics            | P4                  | P5
Age                        | 29 y.o.             | 46 y.o.
Gender                     | Male                | Male
Disability
   Diagnosis               | Cerebral palsy      | Muscular dystrophy
   Condition               | -                   | Degenerative
   Spasm level             | High                | Low
   Residual motion         | Head, right foot    | Left thumb, head
Assistive technologies
   Assistive devices       | JACO arm, powered wheelchair, …
   Adaptive CIs            | Joystick (foot), ASBs | Sip-and-puff, joystick, 7 ASBs
Based on the information provided by the participant.
Ability score from 1 to 3 provided by the participant.
ASBs = Adaptive Switch Buttons.
TABLE I: Profile of the participants with upper-body disabilities.

III-B Dataset Recording

The specific targeted motion classes for each user (see Figures 5 and 8) are each recorded for a total of five seconds per motion. Each motion was repeated three times before moving to the next one, starting from the first class. Between repetitions, the user was requested to go back to the neutral class (c0) for a period of five seconds. In this work, this process is referred to as a Sequence. For each user, three such Sequences are recorded to form a Session. This recording process is illustrated in Figure 9. The first two Sequences are employed for training and validation, whereas the last Sequence is reserved for the test set. Only the first two Sequences of the able-bodied participants were used during the classifier design phase to compare the performance of different architectures. In other words, the test sets of P1, P2 and P3, and all the data recorded for P4 and P5, were left completely untouched during the classifier design phase.

Fig. 9: Structure of a recording Session comprised of three Sequences, during which each of the motion classes is repeated three times, separated by returns to the neutral position (c0). Each repetition or motion example lasts 5 s.

The first recorded Sequence from one of the able-bodied participants, using the three IMU sensors, is depicted in Figure 10: Pitch, Roll and Yaw are plotted with the associated class labels. For comparison, the Pitch, Roll and Yaw recorded from participant P4 (see Figure 7-a), who reported a high level of spasms (see Table I), are depicted in Figure 11. The measured in-class head-motion angle variations (up to 10°) due to spasms are clearly visible. Note that the proposed approach relies on the user’s ability to repeat their motion patterns over time. Repeating the different classes during the recording Sequences allows greater in-class variability to be captured and accounted for.

Fig. 10: Pitch, Roll and Yaw recorded from the three IMU sensors during a recording Sequence performed by an able-bodied participant.
Fig. 11: Pitch, Roll and Yaw measured during a Sequence performed by P4. The measured spasms cause amplitude variations of up to 10°, making it challenging to classify the different motion classes.

In addition to precisely discriminating the body motion being performed despite its variability over time, the goal of the proposed system is to provide a proportional output, as described in Section II-B. During an additional recording Session performed by one of the able-bodied participants, the participant was asked to arbitrarily perform three different head-motion amplitudes during the different repetitions of the head-motion classes (see Figure 12). This additional recording Session, referred to as the multi-amplitude example (MAE), was intended to evaluate the robustness of the proposed approach for different motion ranges.

Fig. 12: Pitch, Roll and Yaw recorded from the participant during a single Sequence while performing three different motion amplitudes, one per repetition, for each head-motion class.

In order to evaluate the performance of the classifier for long-term use, one of the able-bodied participants performed daily recording sessions for five consecutive days. Thus, for each day from day 1 to day 5, two recording Sequences intended for classifier training were performed, with no particular attention paid to precise sensor placement. This was followed by another recording period during which the software generated a random sequence of 27 motions taken from the nine shown in Figure 5. Note that the user was only shown one motion of the sequence at a time, and each motion had to be held for five seconds. The collected data was intended to provide insight into the performance of the proposed approach over several days of utilization (see Section IV-C for results).

III-C Classifier Descriptions

An offline classifier exploration, using the Statistics and Machine Learning Toolbox™ from MATLAB™, was performed on the training sets of the able-bodied participants. That is, the first and second Sequences of participants P1, P2 and P3 are used for training and validation, respectively, to find the classifier architecture best suited to discriminate the targeted motion classes. For the real-time control of an external device (e.g. a prosthesis), a latency between 100 and 300 ms is recommended [15, 2, 36]. Consequently, a window size of eight samples (≈133 ms) was selected to enhance the classification accuracy while keeping the processing time at the lower end of the recommended control latency. Windows are created with an overlap of seven samples (≈116 ms) as a form of data augmentation [9, 33]. A Linear Discriminant Analysis (LDA) classifier was selected for the classification task as it is computationally inexpensive, robust and devoid of hyperparameters. Moreover, it has been shown to obtain performance similar to more complex models for biometric pattern recognition [9, 31].
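The sliding-window segmentation described above can be sketched as follows; the array layout (time samples × signal channels), the labelling rule and the function name are illustrative assumptions.

```python
import numpy as np

WINDOW = 8   # 8 samples at 60 Hz -> about 133 ms
STRIDE = 1   # 7-sample overlap -> a new window every sample (about 16.7 ms)

def sliding_windows(signals: np.ndarray, labels: np.ndarray):
    """Cut a (n_samples, n_channels) recording into overlapping windows.

    Returns (windows, window_labels); each window is labelled with the class
    of its last sample, an assumption made for this sketch.
    """
    windows, window_labels = [], []
    for start in range(0, len(signals) - WINDOW + 1, STRIDE):
        end = start + WINDOW
        windows.append(signals[start:end])
        window_labels.append(labels[end - 1])
    return np.stack(windows), np.array(window_labels)

# Example with dummy data: 300 samples (5 s at 60 Hz) of 9 channels.
rng = np.random.default_rng(0)
X, y = sliding_windows(rng.normal(size=(300, 9)), np.zeros(300, dtype=int))
print(X.shape, y.shape)  # (293, 8, 9) (293,)
```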

The offline classifier exploration was also used to identify a suitable feature extraction method to be used as input for the classifier. An important consideration when designing the feature set was to limit, as much as possible, the computational cost of producing a given feature vector so that the solution remains lightweight. As described in Section II-B, the raw inertial data from all the sensors worn by the user is sampled and fused using a complementary filtering approach in order to retrieve the 3D orientation angles. The performance of the following three feature vectors was evaluated: 1) the first consists of the 3D orientation angles (Pitch, Roll, Yaw) from the first sensor, and the Pitch and Roll from the other sensors used (the second and, if available, third sensor); 2) the second consists of the same components as the first, plus the measured gyroscope components from all sensors, which provide additional information regarding user motion characteristics such as spasms; 3) for the third, each window is divided into two sub-windows of length four, and the minimum, maximum, average and absolute sum of each component of the second feature vector are computed over each sub-window to form the feature vector.
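A minimal sketch of the third feature-vector computation and of training the LDA classifier, here using scikit-learn’s LinearDiscriminantAnalysis; the channel ordering and helper names are assumptions made for illustration, not the project’s actual implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def time_domain_features(window: np.ndarray) -> np.ndarray:
    """Third feature vector: split an (8, n_channels) window into two
    4-sample sub-windows, then take the min, max, mean and absolute sum
    of each channel in each sub-window."""
    feats = []
    for sub in (window[:4], window[4:]):
        feats.extend([sub.min(axis=0), sub.max(axis=0),
                      sub.mean(axis=0), np.abs(sub).sum(axis=0)])
    return np.concatenate(feats)

def fit_lda(windows: np.ndarray, labels: np.ndarray) -> LinearDiscriminantAnalysis:
    """Train the LDA on the feature vectors extracted from each window."""
    X = np.stack([time_domain_features(w) for w in windows])
    return LinearDiscriminantAnalysis().fit(X, labels)

# Example with dummy windows: 293 windows of 8 samples x 9 channels, 2 classes.
rng = np.random.default_rng(1)
windows = rng.normal(size=(293, 8, 9))
labels = rng.integers(0, 2, size=293)
clf = fit_lda(windows, labels)
print(clf.predict(np.stack([time_domain_features(w) for w in windows[:5]])))
```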

III-D Real-Time Robotic Arm Control

To evaluate the functionality of the proposed approach in real time, the BoMI was used to control the JACO arm. An able-bodied participant performed an assembly task where, as depicted in Figure 13, two 5-cm cubes were to be moved one by one from position A and stacked at location B. Three repetitions were required and no timeout was defined. The task is considered finished when the participant successfully manages to stack the cubes in a stable manner. For comparison, the experiment was also performed using the joystick controller depicted in Figure 4 (the default control method of the JACO arm). The initial position of the arm, known as the home position, is depicted in Figure 13. The software algorithm is implemented in C++, using the libsubspace library [16, 17] and the Application Programming Interface (API) provided by Kinova Robotics, Canada, to control the robotic arm. It implements a data logger, running while the task is being performed, to record the controller’s output and the robotic arm’s coordinates.

The JACO arm was controlled in 3D mode for this test, with two user buttons required for mode navigation, at a maximum speed of 20 cm/s. One IMU sensor was worn with a headset, and the six head-motion classes depicted in Figure 5 were mapped to the six joystick stick functionalities, which correspond to translations of the robotic arm along its three Cartesian axes (see Figure 13). The two user buttons used with the joystick were emulated with two Switch Click USB buttons from AbleNet (Roseville, USA, www.ablenetinc.com). The LDA classifier is trained during a calibration phase by recording a single MAE training Sequence, in which three different motion amplitudes (minimum, intermediate and maximum range) are performed to allow proportional control.
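A rough sketch of the control loop such a mapping implies: the predicted class selects a Cartesian direction and the proportional output scales the commanded speed. The direction table, the 20 cm/s cap and the send_velocity stub are hypothetical placeholders, not the Kinova API.

```python
import numpy as np

MAX_SPEED = 0.20  # m/s, assumed speed cap used in the experiment

# Hypothetical mapping: class label -> unit translation direction (x, y, z).
DIRECTIONS = {
    1: (+1, 0, 0), 2: (-1, 0, 0),   # forward / backward
    3: (0, +1, 0), 4: (0, -1, 0),   # right / left
    5: (0, 0, +1), 6: (0, 0, -1),   # up / down
}

def send_velocity(vx, vy, vz):
    """Placeholder for the robot command call (not the actual Kinova API)."""
    print(f"cartesian velocity: ({vx:+.3f}, {vy:+.3f}, {vz:+.3f}) m/s")

def control_step(predicted_class: int, proportional_output: float):
    """Turn one classifier decision into a velocity command."""
    if predicted_class == 0:          # neutral class: stop
        send_velocity(0.0, 0.0, 0.0)
        return
    direction = np.array(DIRECTIONS.get(predicted_class, (0, 0, 0)), dtype=float)
    speed = MAX_SPEED * float(np.clip(proportional_output, 0.0, 1.0))
    send_velocity(*(speed * direction))

control_step(predicted_class=3, proportional_output=0.6)  # example decision
```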

Fig. 13: Experimental setup showing the JACO arm in its home position, the two cubes at location A, and location B where they are to be stacked.

IV Results

IV-A Classification Performances

The measured accuracy over the test set of each participant, for each feature vector, is summarized in Table II. For participants P4 and P5, the third feature vector provides an average performance increase of 6.01% and 4.31% compared to the first and second feature vectors, respectively. Consequently, subsequent experiments were conducted using only the third feature vector.

Participant | Feature vector 1 | Feature vector 2 | Feature vector 3
P1          | 100.00%          | 100.00%          | 100.00%
P2          | 99.75%           | 99.78%           | 99.87%
P3          | 100.00%          | 100.00%          | 100.00%
P4          | 89.02%           | 90.92%           | 94.84%
P5          | 82.28%           | 83.78%           | 88.48%
TABLE II: Classification accuracy using the three feature vectors tested, for all participants (P1 to P5).

The confusion matrix corresponding to the measured performance with participant P4 (see Figure 14) shows that the most misclassified class reaches only 64.2% accuracy. Figure 11 reveals that, for this class, the angle variations due to spasms are the highest in comparison to the other classes. This explains the overall measured prediction accuracy of 94.84% (see Table II). For participant P5, Figure 15 reveals that one of the finger motion classes (see Figure 8 for a description of the control motions) is confused 48.8% and 14.2% of the time with two other classes. This is due to the low motion range of P5’s left finger. In addition, the sensor used for the recording Session measures 4.0 cm by 2.5 cm, which slightly hindered the freedom of motion. Note that, for both P4 and P5, the vast majority of the classifier’s mistakes came from predicting the neutral class. For real-time applications, these types of misclassifications do not affect the state of the assistive device (in comparison to other types of misclassifications) and are therefore the easiest to recover from.

Fig. 14: Confusion matrix for participant P4.
Fig. 15: Confusion matrix for participant P5.

IV-B Proportional Control & Reliability

As described in Section III-B, one of the able-bodied participants performed a recording Session, referred to as the MAE, during which three distinct amplitudes were realized for each motion class intended for intuitive directional control: a minimum and a maximum amplitude defining the range, and an intermediate level (see Figure 12). The impact of varying the amplitude during training on the classifier’s performance is assessed by employing two different datasets for training, while testing is done on the third Sequence. The first training was done using the first two Sequences of the MAE dataset; the second was realized with two new Sequences from the same participant, both recorded with a single amplitude and referred to as the single-amplitude example (SAE). Figure 16 shows a comparative view of the measured classification output in these two configurations. While training with the SAE is only 80.76% accurate when the amplitude varies, the proposed training method with the MAE provides 95.76% reliability. In both cases, the misclassified events are only confused with the neutral position, thus minimizing the risk of unexpected behaviour of the controlled device.

Fig. 16: (a) Prediction accuracy when the SAE, as depicted in Figure 10, is used for training; (b) prediction accuracy when the MAE, as depicted in Figure 12, is used for training.

IV-C Performance Reliability Over Several Days

As described in Section III-B, in order to evaluate reliability over several days of usage, motion data from the participant was recorded every day for a five-day period. Two Sequences are intended for training, and subsequent predictions are performed on data recorded from 27 random motions. First, a prediction model generated using the two training Sequences recorded on day one (referred to as the day-1 model) is used to predict the labels for all days (from day one to day five). Second, the experiment is repeated using the d-day models, where a new model is generated each day using the training data of that day. Table III shows that, while the d-day models outperform the day-1 model, the latter is still highly accurate even after five days without recalibration (98.31% test set accuracy).

Fig. 17: Measured performance during the experimental test using the presented interface to control the JACO arm. a) Measured position of JACO’s end-effector using the proposed BoMI, relative to the final position at location B after releasing the last cube; the same result obtained with the joystick controller is provided on the same timeline for comparison. b) Corresponding output class of the controller, with the events identified as misclassified highlighted in red along with their number of outputs. c) Real-time motion amplitude and the proportional output used to set the robotic arm’s speed.
Training model | day 1  | day 2  | day 3  | day 4  | day 5
day-1 model    | 99.58% | 99.93% | 98.14% | 97.50% | 98.31%
d-day models   | 99.58% | 99.60% | 98.10% | 100.0% | 100.0%
TABLE III: Measured prediction accuracy based on the dataset recorded over 5 consecutive days. The day-1 model refers to the performance obtained when training only on the examples of day 1, while the d-day models refer to the accuracy measured when training on each day’s own examples.

IV-D Real-Time Robotic Arm Control Performance

On average, the real-time control task with the JACO arm was completed in 138 s with the BoMI and in 58 s with the joystick. The task durations measured during the three trials are reported in Table IV. Although the participant, who is able-bodied, performed faster using the joystick controller, the assembly task was still successfully completed using the proposed BoMI, which is more accessible to individuals with certain types of upper-body disabilities.

Figure 17 provides an overview of the real-time robotic arm control experiment in action. Figure 17-a) shows the position of the robotic arm’s end-effector from the home position, relative to location B, during the best trial with each controller. Figures 17-b) and 17-c) show the output of the classifier and the amplitude and proportional output (see (1) and (2)), respectively, following the position of the robotic arm depicted in Figure 17-a). The outputs identified as misclassified events are highlighted in red along with their number of occurrences. Comparing with Figure 17-a), one can see that they did not generate any explicit undesired motion of the robotic arm. Furthermore, the maximum number of consecutive misclassifications measured is 18, which corresponds to only 290 ms. The accuracy in real-time operation is evaluated at 99.2%.
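As a quick sanity check of that duration, assuming one classifier decision is produced per incoming sample at 60 Hz (the 7-sample overlap implies a 1-sample stride), 18 consecutive outputs span on the order of 300 ms, of the same order as the reported 290 ms:

```python
# Back-of-the-envelope check (assumes one decision per 60 Hz sample).
decision_rate_hz = 60.0
max_consecutive_errors = 18
print(max_consecutive_errors / decision_rate_hz * 1000.0)  # -> 300.0 ms
```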

Control interface   | Trial 1  | Trial 2  | Trial 3
Proposed BoMI       | 156.58 s | 121.08 s | 135.37 s
Joystick controller | 59.36 s  | 64.64 s  | 49.84 s
TABLE IV: Measured task duration (in seconds) using the proposed BoMI and the joystick controller, for each of the three trials.

V Conclusion

This paper presents an assistive, modular BoMI for people living with disabilities and limited RFCs. A custom wearable and wireless body sensor network is used to measure the residual body motion of the user and classify it to infer the user’s intent for appropriate human-machine interaction. A complementary filtering approach is employed for IMU data fusion to retrieve the 3D orientation angles, while the pattern recognition system uses an LDA classifier. Five participants (three able-bodied subjects and two with disabilities) were recruited to build a dataset, which is made readily available online for download. This dataset was used to evaluate the feasibility of a flexible and modular CI capable of exploiting the motion of different body parts such as the head, shoulders, fingers and foot. The capacities of the participants living with disabilities include spasms and limited motion ranges, making it challenging to discriminate the different targeted motion classes. The measured performances show that the proposed approach can reach 100% accuracy for up to 9 head and shoulder motions when used by able-bodied individuals. Such results are highly relevant as able-bodied participants can have motion control and amplitudes similar to people with absent upper limbs, spinal cord injuries at the C5-C8 levels, post-stroke impairments, etc. In the presence of spasms, head motion classification combined with right foot motion shows 94.84% accuracy. Finally, the discrimination of limited finger motions can achieve 88.48% accuracy.

Compared to threshold-based systems targeting specific body motion types, and to architectures implementing PCA, the proposed BoMI supports 3D motion abilities of different body parts, with different characteristics, using a single architecture. Hence, the outcomes of this feasibility study are promising.

Future work will improve upon the custom hardware by designing smaller sensor nodes capable of measuring the RFCs of different body parts without hindering free motion. Additionally, confidence-based rejection methods will be explored in conjunction with the LDA to further improve real-time usability [34]. Finally, more participants with various types of disabilities and conditions will be recruited to enrich the existing dataset and make it accessible to the research community in the field of body-machine interaction and human capacity empowerment for the severely disabled.

References

  • [1] F. Abdollahi, A. Farshchiansadegh, C. Pierella, I. Seáñez-González, E. Thorp, M. Lee, R. Ranganathan, J. Pedersen, D. Chen, E. Roth, et al. (2017) Body-machine interface enables people with cervical spinal cord injury to control devices with available body movements: proof of concept. Neurorehabilitation and neural repair 31 (5), pp. 487–493. Cited by: §I.
  • [2] U. C. Allard, F. Nougarou, C. L. Fall, P. Giguère, C. Gosselin, F. Laviolette, and B. Gosselin (2016) A convolutional neural network for robotic arm guidance using sEMG based frequency-features. In Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on, pp. 2464–2470. Cited by: §III-C.
  • [3] E. Arbabi, M. Shabani, and A. Yarigholi (2017) A low cost non-wearable gaze detection system based on infrared image processing. arXiv preprint arXiv:1709.03717. Cited by: §I.
  • [4] T. F. Bastos-Filho, F. A. Cheein, S. M. T. Muller, W. C. Celeste, C. de la Cruz, D. C. Cavalieri, M. Sarcinelli-Filho, P. F. S. Amaral, E. Perez, C. M. Soria, et al. (2014) Towards a new modality-independent interface for a robotic wheelchair. IEEE Transactions on Neural Systems and Rehabilitation Engineering 22 (3), pp. 567–584. Cited by: §I.
  • [5] A. Campeau-Lecours, H. Lamontagne, S. Latour, P. Fauteux, V. Maheu, F. Boucher, C. Deguire, and L. C. L’Ecuyer (2017) Kinova modular robot arms for service robotics applications. International Journal of Robotics Applications and Technologies (IJRAT) 5 (2), pp. 49–71. Cited by: §I.
  • [6] P. Carrington, A. Hurst, and S. K. Kane (2014) Wearables and chairables: inclusive design of mobile input and output techniques for power wheelchair users. In Proceedings of the 32nd annual ACM conference on Human factors in computing systems, pp. 3103–3112. Cited by: §I.
  • [7] S. Chau, S. Aspelund, R. Mukherjee, M. Lee, R. Ranganathan, and F. Kagerer (2017) A five degree-of-freedom body-machine interface for children with severe motor impairments. In Intelligent Robots and Systems (IROS), 2017 IEEE/RSJ International Conference on, pp. 3877–3882. Cited by: §I.
  • [8] U. Cote-Allard, C. L. Fall, A. Campeau-Lecours, C. Gosselin, F. Laviolette, and B. Gosselin (2017) Transfer learning for semg hand gestures recognition using convolutional neural networks. In 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 1663–1668. Cited by: §I.
  • [9] U. Côté-Allard, C. L. Fall, A. Drouin, A. Campeau-Lecours, C. Gosselin, K. Glette, F. Laviolette, and B. Gosselin (2019) Deep learning for electromyographic hand gesture signal classification using transfer learning. IEEE Transactions on Neural Systems and Rehabilitation Engineering. Cited by: §III-C.
  • [10] U. Côté-Allard, G. Gagnon-Turcotte, A. Phinyomark, K. Glette, E. Scheme, F. Laviolette, and B. Gosselin (2019) Virtual reality to study the gap between offline and real-time emg-based gesture recognition. arXiv preprint arXiv:1912.09380. Cited by: §I.
  • [11] C. L. Fall, G. Gagnon-Turcotte, J. Dubé, J. S. Gagné, Y. Delisle, A. Campeau-Lecours, C. Gosselin, and B. Gosselin (2017) Wireless semg-based body–machine interface for assistive technology devices. IEEE journal of biomedical and health informatics 21 (4), pp. 967–977. Cited by: §I.
  • [12] C. L. Fall, F. Quevillon, A. Campeau-Lecours, S. Latour, M. Blouin, C. Gosselin, and B. Gosselin (2017) A multimodal adaptive wireless control interface for people with upper-body disabilities. 2017 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1–4. Cited by: §I.
  • [13] C. L. Fall, F. Quevillon, M. Blouin, S. Latour, A. Campeau-Lecours, C. Gosselin, and B. Gosselin (2018) A multimodal adaptive wireless control interface for people with upper-body disabilities. IEEE Transactions on Biomedical Circuits and Systems. Cited by: §I, §II-A, §II-B, §III-A, §III.
  • [14] C. L. Fall, P. Turgeon, A. Campeau-Lecours, V. Maheu, M. Boukadoum, S. Roy, D. Massicotte, C. Gosselin, and B. Gosselin (2015) Intuitive wireless control of a robotic arm for people living with an upper body disability. In Engineering in Medicine and Biology Society (EMBC), 2015 37th Annual International Conference of the IEEE, pp. 4399–4402. Cited by: §I.
  • [15] T. R. Farrell and R. F. Weir (2007) The optimal controller delay for myoelectric prostheses. IEEE Transactions on neural systems and rehabilitation engineering 15 (1), pp. 111–118. Cited by: §III-C.
  • [16] I. Fratric and S. Ribaric (2011) Local binary LDA for face recognition. In European Workshop on Biometrics and Identity Management, pp. 144–155. Cited by: §III-D.
  • [17] I. Fratric (2018)(Website) External Links: Link Cited by: §III-D.
  • [18] T. Islam, M. S. Islam, M. Shajid-Ul-Mahmud, and M. Hossam-E-Haider (2017) Comparison of complementary and Kalman filter based data fusion for attitude heading reference system. In AIP Conference Proceedings, Vol. 1919, pp. 020002. Cited by: §II-B.
  • [19] A. Jackowski, M. Gebhard, and R. Thietje (2018) Head motion and head gesture-based robot control: a usability study. IEEE Transactions on Neural Systems and Rehabilitation Engineering 26 (1), pp. 161–170. Cited by: §I, §III.
  • [20] A. Jafari, N. Buswell, M. Ghovanloo, and T. Mohsenin (2018) A low-power wearable stand-alone tongue drive system for people with severe disabilities. IEEE transactions on biomedical circuits and systems 12 (1), pp. 58–67. Cited by: §I.
  • [21] K. D. Katyal, M. S. Johannes, T. G. McGee, A. J. Harris, R. S. Armiger, A. H. Firpi, D. McMullen, G. Hotson, M. S. Fifer, N. E. Crone, et al. (2013) HARMONIE: a multimodal control framework for human assistive robotics. In Neural Engineering (NER), 2013 6th International IEEE/EMBS Conference on, pp. 1274–1278. Cited by: §I.
  • [22] S. Li, T. Fujiura, and I. Nakanishi (2017) Recording gaze trajectory of wheelchair users by a spherical camera. In Rehabilitation Robotics (ICORR), 2017 International Conference on, pp. 929–934. Cited by: §I.
  • [23] S. Li, X. Zhang, and J. Webb (2017) 3D-gaze-based robotic grasping through mimicking human visuomotor function for people with motion impairments. IEEE Transactions on Biomedical Engineering. Cited by: §I.
  • [24] J. Liu, X. Sheng, D. Zhang, J. He, and X. Zhu (2016) Reduced daily recalibration of myoelectric prosthesis classifiers based on domain adaptation. IEEE journal of biomedical and health informatics 20 (1), pp. 166–176. Cited by: §I.
  • [25] A. López, F. Ferrero, D. Yangüela, C. Álvarez, and O. Postolache (2017) Development of a computer writing system based on eog. Sensors 17 (7), pp. 1505. Cited by: §I.
  • [26] A. López, F. Ferrero, D. Yangüela, C. Álvarez, and O. Postolache (2017) Development of a computer writing system based on eog. Sensors 17 (7), pp. 1505. Cited by: §I.
  • [27] J. Meng, S. Zhang, A. Bekyo, J. Olsoe, B. Baxter, and B. He (2016) Noninvasive electroencephalogram based control of a robotic arm for reach and grasp tasks. Scientific Reports 6, pp. 38565. Cited by: §I.
  • [28] A. Nijholt, C. S. Nam, and F. Lotte (2018) Introduction: evolution of brain–computer interfaces. In Brain–Computer Interfaces Handbook, pp. 27–34. Cited by: §I.
  • [29] L. NS Andreasen Struijk, E. R. Lontis, M. Gaihede, H. A. Caltenco, M. E. Lund, H. Schioeler, and B. Bentsen (2017) Development and functional demonstration of a wireless intraoral inductive tongue computer interface for severely disabled persons. Disability and Rehabilitation: Assistive Technology 12 (6), pp. 631–640. Cited by: §I.
  • [30] R. Pi (2018)(Website) External Links: Link Cited by: §I.
  • [31] E. J. Rechy-Ramirez and H. Hu (2015) Bio-signal based control in assistive robots: a survey. Digital Communications and networks 1 (2), pp. 85–101. Cited by: §III-C.
  • [32] T. M. Research (2018)(Website) External Links: Link Cited by: §I.
  • [33] A. Sakai, Y. Minoda, and K. Morikawa (2017) Data augmentation methods for machine-learning-based classification of bio-signals. In Biomedical Engineering International Conference (BMEiCON), 2017 10th, pp. 1–4. Cited by: §III-C.
  • [34] E. J. Scheme, B. S. Hudgins, and K. B. Englehart (2013) Confidence-based rejection for improved pattern recognition myoelectric control. IEEE Transactions on Biomedical Engineering 60 (6), pp. 1563–1570. Cited by: §V.
  • [35] M. Shibata, C. Zhang, T. Ishimatsu, M. Tanaka, and J. Palomino (2015) Improvement of a joystick controller for electric wheelchair user. Modern Mechanical Engineering 5 (04), pp. 132. Cited by: §I.
  • [36] L. H. Smith, L. J. Hargrove, B. A. Lock, and T. A. Kuiken (2010) Determining the optimal window length for pattern recognition-based myoelectric control: balancing the competing effects of classification error and controller delay. IEEE Transactions on Neural Systems and Rehabilitation Engineering 19 (2), pp. 186–192. Cited by: §III-C.
  • [37] H. Sun, X. Zhang, Y. Zhao, Y. Zhang, X. Zhong, and Z. Fan (2018) A novel feature optimization for wearable human-computer interfaces using surface electromyography sensors. Sensors 18 (3), pp. 869. Cited by: §I.
  • [38] E. B. Thorp, F. Abdollahi, D. Chen, A. Farshchiansadegh, M. Lee, J. P. Pedersen, C. Pierella, E. J. Roth, I. S. Gonzáles, and F. A. Mussa-Ivaldi (2016) Upper body-based power wheelchair control interface for individuals with tetraplegia. IEEE Transactions on Neural Systems and Rehabilitation Engineering 24 (2), pp. 249–260. Cited by: §I.
  • [39] C. Wangwiwattana, X. Ding, and E. C. Larson (2018) PupilNet, measuring task evoked pupillary response using commodity rgb tablet cameras: comparison to mobile, infrared gaze trackers for inferring cognitive load. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 1 (4), pp. 171. Cited by: §I.
  • [40] H. Zeng and Y. Zhao (2011) Sensing movement: microsensors for body motion measurement. Sensors 11 (1), pp. 638–660. Cited by: §II-B.