Stable Electromyographic Sequence Prediction During Movement Transitions using Temporal Convolutional Networks

01/08/2019 ∙ by Joseph L. Betthauser, et al. ∙ Johns Hopkins University

Transient muscle movements influence the temporal structure of myoelectric signal patterns, often leading to unstable prediction behavior from movement-pattern classification methods. We show that temporal convolutional network sequential models leverage the myoelectric signal's history to discover contextual temporal features that aid in correctly predicting movement intentions, especially during interclass transitions. We demonstrate myoelectric classification using temporal convolutional networks to effect 3 simultaneous hand and wrist degrees-of-freedom in an experiment involving nine human subjects. Temporal convolutional networks yield significant (p<0.001) performance improvements over other state-of-the-art methods in terms of both classification accuracy and stability.


I Introduction

Classification of electromyographic (EMG) signals for upper-limb prosthesis activation involves algorithmically learning to distinguish patterns in EMG signals that correlate to discrete hand, wrist, or arm motions [1]. Techniques such as support vector machines (SVM) can discriminate a large number of EMG movement-class patterns under ideal conditions [2]. EMG pattern classification algorithms typically use individual data samples, represented as extracted features from a fixed window of raw EMG, to compute the boundaries that best segment the samples into distinct movement classes.

Fig. 1:

EMG movement-pattern classification strategies can exhibit erratic prediction behavior during transient-states when a subject is switching between classes. Steady-states during class contractions tend to elicit a more reliable, stable classifier response; however, this behavior is not guaranteed and is largely based on the subject’s experience level. We propose temporal convolutional networks to improve both accuracy and stability.

Muscle activations during steady-state contractions are generally more stable, especially with user practice, and the EMG signal patterns tend to reliably fall into established classes [3]. However, transient-state interclass movements pose a challenge for classifiers due to the non-stationary nature of the signal patterns [4][5]. In these cases, the model’s prediction stream can exhibit segments of erratic and incorrect predictions (Fig. 1). Post-processing methods have been proposed to stabilize the prediction stream such as majority filtering and confidence-based rejection [6]. Other methods achieve stable and accurate performance by updating class boundaries adaptively [7][8] and enhancing condition-tolerance [9].

Non-sequential prediction models like SVM can behave erratically during transient-states, in part because they are denied the temporal context provided by the preceding sequence of consecutive inputs. EMG windows or frames are typically predicted independently, with the windowed feature-extraction technique itself serving as a compressed temporal representation of EMG. Much like photographs capture only a portion of the information about a moving subject, these models provide a rough snapshot of a dynamic system frozen in time; critical temporal context is lost in the translation.

Sequential prediction models such as long short-term memory (LSTM) recurrent networks [10] are the state-of-the-art for time-series prediction tasks like speech [11] and activity [12] recognition. At present, recurrent networks are being used for movement prediction from cortical signals [13] and EMG [14]. Herein, we present a temporal convolutional network sequential model [15] for EMG classification that is significantly more accurate and stable than prevailing sequential and non-sequential methods, especially during movement transitions.

II Methods

Fig. 2: Multi-channel EMG features can be fed as fixed-length sequences into a TCN network for classification. TCN achieves good performance by learning convolutions which exploit temporal dependencies within the input sequences. These convolutions may provide a higher degree of regularization than other sequential models, such as the accurate but less-stable LSTM.

II-A Temporal Convolutional Networks

Temporal convolutional networks (TCN) are a class of sequential prediction models designed to learn hidden temporal dependencies within input sequences. The TCN model used herein consists of a single layer of convolutional filters followed by a time-distributed, fully-connected classification layer (Fig. 2). Within the convolution layer, a collection of $N_f = 64$ convolutional filters $W^{(j)} \in \mathbb{R}^{d \times m}$, where $d = 25$ is the duration of the filter and $m = 8$ is the number of input features, are convolved along the temporal dimension of the input sequence $X \in \mathbb{R}^{T \times m}$, where $T$ is the number of 25 ms time-steps in the sequence, to produce temporal feature maps $A \in \mathbb{R}^{T \times N_f}$, where

$$A_{t,j} = \sum_{\tau=1}^{d} \sum_{c=1}^{m} W^{(j)}_{\tau,c}\, X_{t-d+\tau,\,c} \qquad (1)$$

Rectified linear unit (ReLU) activations [16] are applied to each element, giving $\mathbf{a}_t = \mathrm{ReLU}(A_{t,\cdot})$, which is fed to a time-distributed, fully-connected layer for classification. Softmax activation [17] is applied to produce class probabilities for current time $t$:

$$\hat{\mathbf{y}}_t = \mathrm{softmax}\big(W_o \mathbf{a}_t + \mathbf{b}_o\big) \qquad (2)$$

where $W_o$ and $\mathbf{b}_o$ are the output weight matrix and bias, respectively. During preliminary testing, a subject (excluded from results) performed the experiment described in Sec. II-C. TCN and LSTM sequential models were trained from the first 3 min of data and tested on the last 3 min to determine optimal window sizes, input sequence lengths, and TCN dimensions. Performance contours for TCN and LSTM model parameters are shown in Fig. 3.
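To make the single-layer architecture concrete, here is a minimal NumPy sketch (not the authors' implementation; `tcn_forward` and the random weights are purely illustrative) of a bank of temporal filters convolved along the time axis, followed by ReLU and a time-distributed softmax layer:

```python
import numpy as np

def tcn_forward(X, W, W_out, b_out):
    """Single-layer TCN forward pass in the spirit of Eqs. (1)-(2).

    X     : (T, m)     input feature sequence (m channels)
    W     : (Nf, d, m) bank of temporal convolution filters
    W_out : (C, Nf)    output weight matrix for C movement classes
    b_out : (C,)       output bias
    Returns a (T, C) array of class probabilities per time-step.
    """
    Nf, d, m = W.shape
    T = X.shape[0]
    # Causal zero-padding: each output depends only on the preceding d steps.
    Xp = np.vstack([np.zeros((d - 1, m)), X])
    A = np.empty((T, Nf))
    for t in range(T):
        window = Xp[t:t + d]                 # (d, m) slice ending at time t
        A[t] = np.tensordot(W, window, axes=([1, 2], [0, 1]))
    A = np.maximum(A, 0.0)                   # ReLU activation
    logits = A @ W_out.T + b_out             # time-distributed dense layer
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)  # softmax over classes

# Example with the paper's dimensions: m=8 channels, d=25, Nf=64 filters,
# T=60 time-steps, and 27 movement classes (random untrained weights).
rng = np.random.default_rng(0)
probs = tcn_forward(rng.normal(size=(60, 8)),
                    rng.normal(size=(64, 25, 8)) * 0.1,
                    rng.normal(size=(27, 64)) * 0.1,
                    np.zeros(27))
```

With trained weights, taking the argmax of each row of `probs` yields the per-time-step class prediction stream.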

II-B Assessing Prediction Stability

In addition to performance accuracy, we wish to quantify the stability of a model, i.e., how inclined the model is toward erratic class-switching or misclassification during volitional class-to-class movement. Furthermore, a model may cleanly and appropriately switch between classes, but with slight delay or anticipation. The accuracy metric penalizes this generally benign behavior, often more so than erratic behavior.

To complement performance accuracy, we define a stability metric to quantify a model's class-switching behavior relative to ground truth behavior. Given a vector $\mathbf{p} = (p_1, \dots, p_T)$ representing a series of predictions over time, we can count how many times the prediction model switches its class output with

$$S(\mathbf{p}) = \sum_{t=2}^{T} \big(1 - \delta(p_t, p_{t-1})\big) \qquad (3)$$

where $\delta$ is the Kronecker delta function, or equivalence indicator. After computing $S(\mathbf{p})$ as well as $S(\mathbf{y})$ from ground truth label sequence $\mathbf{y}$, our prediction stability metric

$$\eta = \frac{S(\mathbf{y})}{S(\mathbf{p})} \qquad (4)$$

quantifies how varied predictions are relative to ground truth. Ideally, $\eta = 1$, meaning a model's prediction output changes exactly as often as ground truth changes. Together, accuracy and our stability metric provide a fuller understanding of a model's behavior. Our later comparison of TCN and LSTM highlights the potential for disparity between these metrics.
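One straightforward reading of the switch-counting and stability-ratio definitions above can be sketched in Python (`num_switches` and `stability` are illustrative names, not from the paper):

```python
import numpy as np

def num_switches(labels):
    """Count class switches in a label stream (Kronecker-delta form)."""
    labels = np.asarray(labels)
    return int(np.count_nonzero(labels[1:] != labels[:-1]))

def stability(y_true, y_pred):
    """Ratio of ground-truth switches to predicted switches.

    Equals 1 when the model switches exactly as often as ground truth;
    values below 1 indicate erratic extra switching. Assumes the
    prediction stream switches at least once.
    """
    return num_switches(y_true) / num_switches(y_pred)

# Ground truth changes class twice; the erratic model changes six times.
y_true = [0, 0, 0, 1, 1, 1, 2, 2]
y_pred = [0, 1, 0, 1, 2, 1, 2, 2]
print(num_switches(y_true))        # 2
print(num_switches(y_pred))        # 6
print(stability(y_true, y_pred))   # 2/6, i.e. an unstable predictor
```

Note that a model that switches cleanly but slightly early or late still scores well on this metric, which is exactly the behavior the accuracy metric over-penalizes.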

Fig. 3: Performance contours for parametric tuning of TCN to determine optimal sliding window size and sequence length. The TCN model favored a window of 40 samples (200 ms) and sequence length $T = 60$ for classification. Stated differently, with a step-size of 25 ms, TCN accuracy was highest when its input sequence represented the preceding 1.675 s of EMG information. Importantly, the TCN contour shows that, for short sequences, performance is lower and more dependent on window length. For longer sequences, window length is less of a factor.

II-C Experimental Methods

This study was conducted in accordance with protocols approved by the Johns Hopkins Medicine Institutional Review Boards. Nine able-bodied subjects (8 male, 1 female) participated in these experiments (ages 24.1 ± 3.2 years). Most subjects were inexperienced with EMG classification.

II-C1 Data Acquisition, Prediction, and Visualization

Eight channels of raw EMG sampled at 200 Hz were obtained from a Myo Armband (Thalmic Labs, Ontario, Canada) placed around the circumference of the forearm at the point of greatest muscle mass. Hand-grasp position data were recorded with the CyberGlove II (CyberGlove Systems LLC, San Jose, CA). Wrist position data were recorded using two 9-axis MPU-9150 inertial sensors (InvenSense, San Jose, CA) (Fig. 4A).

Fig. 4: (A) Each subject was fitted with a Myo Armband, CyberGlove, and inertial sensors for EMG and hand/wrist positional recording. (B) The vMPL environment provided a real-time display of the subject's hand/wrist orientations. (C) EMG signals were used to predict movement classes along 3 hand/wrist degrees-of-freedom. (D) At each time-step, each DOF is converted from continuous joint position into a ternary class encoding (rest: 0, forward: +1, or reverse: -1), representing one of 27 simultaneous 3-DOF movement classes. (E) Example sequence of 3-DOF joint angles, the corresponding conversion into ground truth class labels, and the class prediction output stream of TCN during this sequence. Three transient prediction problems are evident from this example: lagging, leading, and classification errors. The first two relate to the specific timing of volitional movements, whereas the latter is an unintended movement class. The accuracy metric penalizes all three, whereas our defined stability metric only penalizes unintentional class-switching.

For non-sequential models, every 25 ms we used a 200 ms sliding window to extract time-domain (TD5) features from the raw EMG signals: mean absolute value (MAV), waveform length, variance, slope sign changes, and zero crossings [1]. For sequential models TCN and LSTM, the optimal sliding window sizes were 200 ms and 175 ms, respectively. We observed that sequential models often performed best with only MAV features, trained for 35 epochs. In our results, we compared the TCN model (MAV only) with LSTM (MAV), as well as the following non-sequential prediction models (TD5):

  • k-NN: k-nearest neighbors
  • SVM-RBF: support vector machine with Gaussian radial basis function kernel
  • Forest: random forest
  • ANN: artificial neural network, 3 layers × 5 nodes
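The five time-domain features named above can be sketched as windowed computations over raw EMG. A minimal NumPy sketch follows; note that the zero-crossing and slope-sign-change counts here use a plain zero threshold, whereas practical pipelines often add an amplitude deadband:

```python
import numpy as np

def td5_features(window):
    """Compute the five time-domain (TD5) features for one EMG window."""
    x = np.asarray(window, dtype=float)
    dx = np.diff(x)
    mav = np.mean(np.abs(x))                      # mean absolute value
    wl = np.sum(np.abs(dx))                       # waveform length
    var = np.var(x, ddof=1)                       # variance
    ssc = np.count_nonzero(dx[:-1] * dx[1:] < 0)  # slope sign changes
    zc = np.count_nonzero(x[:-1] * x[1:] < 0)     # zero crossings
    return np.array([mav, wl, var, ssc, zc])

def sliding_td5(emg, win=40, step=5):
    """200 ms windows with 25 ms steps at the 200 Hz sampling rate used here."""
    return np.array([td5_features(emg[i:i + win])
                     for i in range(0, len(emg) - win + 1, step)])

# One second of synthetic single-channel "EMG" (200 samples at 200 Hz).
feats = sliding_td5(np.sin(np.linspace(0, 8 * np.pi, 200)))
```

Each row of `feats` is one TD5 feature vector; stacking these rows across the eight channels yields the per-frame inputs used by the non-sequential classifiers.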

Fig. 5: Comparative analysis of TCN with other classifiers. (A) Example prediction output behaviors of many popular classifiers when predicting 3 classes for a single DOF. Incorrect predictions often occur during interclass transitions. TCN demonstrated resilience in transient-states, but its class-switching was sometimes slightly anticipatory or delayed, informing our decision to devise a stability metric to complement the accuracy metric. (B) Prediction accuracy of 9 able-bodied subjects in our 3-DOF simultaneous experiment. Statistical significance thresholds are denoted. (C) Prediction stability, 9 able-bodied subjects. TCN and LSTM achieved higher accuracy with MAV features than non-sequential methods with TD5 features, though LSTM was less stable than TCN.

A user interface was developed in Python to control the virtual Modular Prosthetic Limb (vMPL) subcomponent of the Johns Hopkins University Applied Physics Laboratory's MiniVIE system [18][19] in order to provide the subject with a real-time display of their current hand/wrist orientations (Fig. 4B).

II-C2 3-DOF Simultaneous Protocol

After an initial 15 min practice session to familiarize the subject with movement classes and contraction consistency, subjects were asked to explore for 40 s their full range of motion in each of the 3 DOFs representing wrist and hand movements: rest, hand open/close, wrist flexion/extension, and radial/ulnar flexion (Fig. 4C). Outer boundary positions and a comfortable interior rest position were determined for each DOF from this calibration period. Class thresholds along each DOF were set at 50% of the distance from the rest position to each outer boundary. In other words, for the ground truth to switch from "rest" to "hand close," the hand must be moved more than half the distance from rest to the closed-hand boundary. Thus, for every time step, each DOF records a ternary encoding (Fig. 4D), allowing for 27 distinct simultaneous 3-DOF movement classes. We determined when the subject was in a transient-state by calculating where joint-velocity magnitude exceeded a threshold (shown as red in Fig. 4E).
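The ternary thresholding described above can be sketched as follows (`encode_dof` and the position values are hypothetical; only the 50%-of-distance rule and the ternary encoding come from the protocol):

```python
from itertools import product

def encode_dof(pos, rest, fwd_bound, rev_bound):
    """Ternary class encoding for one DOF: +1 forward, -1 reverse, 0 rest.

    The class switches once the joint moves more than 50% of the way
    from the rest position toward either outer boundary.
    """
    if abs(pos - rest) > 0.5 * abs(fwd_bound - rest) and \
       (pos - rest) * (fwd_bound - rest) > 0:
        return +1
    if abs(pos - rest) > 0.5 * abs(rev_bound - rest) and \
       (pos - rest) * (rev_bound - rest) > 0:
        return -1
    return 0

# Three DOFs, each ternary, yields 3**3 = 27 simultaneous movement classes.
all_classes = list(product((-1, 0, 1), repeat=3))
print(len(all_classes))  # 27

# Hand-close DOF with rest at 0.2, closed boundary 1.0, open boundary 0.0:
print(encode_dof(0.7, rest=0.2, fwd_bound=1.0, rev_bound=0.0))  # +1 (close)
print(encode_dof(0.3, rest=0.2, fwd_bound=1.0, rev_bound=0.0))  # 0 (rest)
```

Applying this encoding independently to each of the three DOFs at every time-step produces the ground-truth class stream against which predictions are scored.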

Subjects were instructed to explore 3-DOF simultaneous movements held at consistent, repeatable contraction levels in freeform fashion (any order, combination, and duration 5 s) for 6 min while EMG and position data were recorded. In our analyses, the first 3 min of this EMG data were used for model training and the last 3 min for model testing. Since the class boundaries of each DOF were subject-determined, there was no need for movement-cue presentation, nor a guarantee that the subject would attempt all 27 3-DOF combinations.

II-C3 Experiment Analysis and Evaluation

All computations were performed with common Python 3.6.5 modules and the TemporalConvolutionalNetworks [20] open-source package. Statistical p-values were computed from one-way analyses of variance (ANOVA) comparing TCN accuracy and stability with other models. Figure error-bars represent ±1 standard error.

III Results

Aggregated results from our 3-DOF simultaneous experiment, including p-values, are shown in Fig. 5. In general, TCN using only MAV features demonstrated significantly higher transient-state accuracy than the non-sequential models using TD5 features: k-NN, Random Forest, ANN, and SVM-RBF (significance thresholds denoted in Fig. 5B). Furthermore, the stability of TCN was significantly higher than that of all other sequential and non-sequential models tested in both steady-states and transient-states (Fig. 5C). Importantly, the TCN and LSTM sequential models were similarly accurate, but LSTM was one of the least stable models.

For the most stable models (TCN, SVM, Forest, ANN), we observed that steady-state predictions were somewhat less stable on average (though consistently more accurate) than transient-state predictions. These differences were not significant, but could indicate that some models become more unstable during pre-transition activation or post-transition recovery than during the physical transition itself. Examples of pre-transition instability can be seen in Figs. 1 and 5A.

IV Discussion and Conclusions

Instability in the prediction output stream is a well-known problem in EMG classification, particularly during transient interclass movement. Past attempts to mitigate instability, such as majority filters and confidence-based rejection [6], focus primarily on post-processing the output of the classification model. In the specific case of majority filtering, the cost of improved stability is a prediction delay. Coupled with the EMG window length, this delay may be quite perceptible to the user. Confidence-based rejection is highly useful because it introduces almost no time delay and can be appended to improve the stability of any model, including TCN.
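A sliding majority filter of the kind referenced above can be sketched in a few lines (an illustrative implementation, not the one cited; the window length `L` directly sets the stability/delay trade-off):

```python
from collections import Counter, deque

def majority_filter(stream, L=5):
    """Stabilize a prediction stream with a sliding majority vote.

    Each output is the modal class of the last L predictions, so brief
    erratic switches are suppressed at the cost of up to L-1 steps of lag.
    """
    history = deque(maxlen=L)
    out = []
    for p in stream:
        history.append(p)
        out.append(Counter(history).most_common(1)[0][0])
    return out

raw = [0, 0, 1, 0, 0, 2, 2, 2, 2, 2]
print(majority_filter(raw, L=3))
# The spurious 1 is removed; the 0 -> 2 transition arrives one step late.
```

At a 25 ms prediction step, even L=5 adds up to 100 ms of lag on top of the window length, which is the perceptibility concern raised above.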

To address inherent model stability, we hypothesized that sequential models, designed to utilize the temporal context of sequential input data, would significantly improve EMG classification compared with traditional non-sequential models (Fig. 5A). Notably, sequential models yielded better performance accuracy when provided with only the MAV feature for each channel, whereas all non-sequential models preferred the TD5 feature set (Fig. 5B). This indicates that the hidden temporal features learned from MAV sequences are equally or more valuable for EMG classification than non-sequential prediction of TD5 features.

TCN and LSTM perform similarly with respect to classification accuracy (Fig. 5B), but TCN provides significantly more stable output behavior compared to all models evaluated during both steady-states and transient-states (Fig. 5C). Therefore, though some loss in TCN accuracy is due to anticipation or delay in the timing of class-switching, its consistently stable behavior is very desirable for reliable control of upper-limb prostheses.

For natural prosthesis control, it is necessary to accurately predict during dynamic transient-states because our limbs are often in motion, between states, not merely switching discretely between fixed positions. The ability of TCN models to correctly predict during transient-state movements hints at very promising behavior when applied to multi-DOF regression, a research avenue we are currently exploring.

Acknowledgment

We wish to acknowledge Johns Hopkins Applied Physics Laboratory (JHU/APL) for developing and making available the Virtual Integration Environment (VIE), which was developed under the Revolutionizing Prosthetics program. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. N66001-10-C-4056. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of DARPA and JHU/APL. We thank the human subjects who participated in this study and our colleagues Dr. Brock Wester, Robert Armiger, Dr. Colin Lea, Tae Soo Kim, and Dr. Austin Reiter.

References

  • [1] B. Hudgins, P. Parker, R. Scott, “A new strategy for multifunction myoelectric control,” IEEE Trans. Biomed. Eng., vol. 40, no. 1, pp. 82-94, 1993.
  • [2] M. Atzori, M. Cognolato, H. Muller, “Deep learning with convolutional neural networks applied to electromyography data: a resource for the classification of movements for prosthetic hands,” Front. Neurorob., vol. 10, p. 9, 2016.
  • [3] D. Yang, J. Zhao, L. Jiang, H. Liu, “Dynamic hand motion recognition based on transient and steady-state EMG signals,” Int. J. Humanoid Rob., vol. 9, p. 1250007, 2012.
  • [4] G. Kanitz, C. Cipriani, B. Edin, “Classification of transient myoelectric signals for the control of multi-grasp hand prostheses,” IEEE Trans. Neural Syst. and Rehabil. Eng., vol. 26, no. 9, pp. 1756-64, 2018.
  • [5] T. Lorrain, N. Jiang, D. Farina, “Influence of the training set on the accuracy of surface EMG classification in dynamic contractions for the control of multifunction prostheses,” J. NeuroEng. Rehabil., vol. 8, no. 1, p. 25, 2011.
  • [6] E. Scheme, B. Hudgins, K. Englehart, “Confidence-based rejection for improved pattern recognition myoelectric control,” IEEE Trans. Biomed. Eng., vol. 60, no. 6, pp. 1563-70, 2013.
  • [7] S. Amsuss, P. Goebel, N. Jiang, B. Graimann, L. Paredes, D. Farina, “Self-correcting pattern recognition system of surface emg signals for upper limb prosthesis control,” IEEE Trans. Biomed. Eng., vol. 61, no. 4, pp. 1167-76, 2014.
  • [8] X. Zhu, J. Liu, D. Zhang, X. Sheng, N. Jiang, “Cascaded adaptation framework for fast calibration of myoelectric control,” IEEE Trans. Neural Syst. and Rehabil. Eng., vol. 25, no. 3, pp. 254-64, 2017.
  • [9] J. Betthauser, C. Hunt, L. Osborn, M. Masters, G. Lévay, R. Kaliki, N. Thakor, “Limb Position Tolerant Pattern Recognition for Myoelectric Prosthesis Control with Adaptive Sparse Representations from Extreme Learning,” IEEE Trans. Biomed. Eng., vol. 65, no. 4, pp. 770-78, 2018.
  • [10] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Comput., vol. 9, pp. 1735-80, 1997.
  • [11] A. Graves and J. Schmidhuber, “Framewise phoneme classification with bidirectional LSTM and other neural network architectures,” Neural Netw., vol. 18, no. 5, pp. 602-10, 2005.
  • [12] F. Ordonez and D. Roggen, “Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition,” Sensors, vol. 16, p. 115, 2016.
  • [13] J. Gallego, M. Perich, L. Miller, S. Solla, “Neural manifolds for the control of movement,” Neuron, vol. 94, no. 5, pp. 978-84, 2017.
  • [14] P. Xia, J. Hu, Y. Peng, “EMG-based estimation of limb movement using deep learning with recurrent convolutional neural networks,” Artif. Organs, vol. 42, p. E77, 2018.
  • [15] C. Lea, A. Reiter, R. Vidal, G. Hager, “Segmental spatiotemporal CNNs for fine-grained action segmentation,” in Computer Vision, ECCV 2016, pp. 36-52, 2016.
  • [16] R. Hahnloser, R. Sarpeshkar, M. Mahowald, R. Douglas, H. Seung, “Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit,” Nature, vol. 405, pp. 947-51, 2000.
  • [17] R. Luce, Individual Choice Behavior: A Theoretical Analysis. NY, Wiley, 1959.
  • [18] A. Ravitz, M. McLoughlin, J. Beaty, F. Tenore, M. Johannes, S. Swetz, J. Helder, K. Katyal, M. Para, K. Fischer, T. Gion, B. Wester, “Revolutionizing Prosthetics–Phase 3,” Johns Hopkins APL Tech. Dig., vol. 31, no. 4, pp. 366-76, 2013.
  • [19] B. Wester, et al., “Development of virtual integration environment sensing capabilities for the modular prosthetic limb,” Society for Neuroscience, Presentation Abstract, Washington, DC. Nov. 18, 2014.
  • [20] C. Lea, TCN, (2018), GitHub repository. https://github.com/colincsl/TemporalConvolutionalNetworks.