Classification of electromyographic (EMG) signals for upper-limb prosthesis activation involves algorithmically learning to distinguish patterns in EMG signals that correlate to discrete hand, wrist, or arm motions. Techniques such as support vector machines (SVM) can discriminate a large number of EMG movement-class patterns under ideal conditions. EMG pattern classification algorithms typically use individual data samples, represented as extracted features from a fixed window of raw EMG, to compute the boundaries that best segment the samples into distinct movement classes.
Muscle activations during steady-state contractions are generally more stable, especially with user practice, and the EMG signal patterns tend to reliably fall into established classes. However, transient-state interclass movements pose a challenge for classifiers due to the non-stationary nature of the signal patterns. In these cases, the model’s prediction stream can exhibit segments of erratic and incorrect predictions (Fig. 1). Post-processing methods have been proposed to stabilize the prediction stream, such as majority filtering and confidence-based rejection. Other methods achieve stable and accurate performance by updating class boundaries adaptively and enhancing condition-tolerance.
Non-sequential prediction models like SVM can behave erratically during transient-states, in part because they are denied the temporal context provided by the preceding sequence of consecutive inputs. EMG windows or frames are typically predicted independently, with the windowed feature extraction technique itself serving as a compressed temporal representation of EMG. Much like photographs only capture a portion of the information about a moving subject, these models provide a rough snapshot of a dynamic system frozen in time; critical temporal context is lost in translation.
Sequential prediction models such as long short-term memory (LSTM) recurrent networks are the state-of-the-art for time-series prediction tasks like speech and activity recognition. At present, recurrent networks are being used for movement prediction from cortical signals and EMG. Herein, we present a temporal convolutional network sequential model for EMG classification that is significantly more accurate and stable than prevailing sequential and non-sequential methods, especially during movement transitions.
II-A Temporal Convolutional Networks
Temporal convolutional networks (TCN) are a class of sequential prediction models that are designed to learn hidden temporal dependencies within input sequences. The TCN model used herein consists of a single layer of convolutional filters followed by a time-distributed, fully-connected classification layer (Fig. 2). Within the convolution layer, a collection of $N_f = 64$ convolutional filters $W \in \mathbb{R}^{N_f \times d \times m}$, where $d = 25$ is the duration of the filter and $m = 8$ is the number of input features, are convolved along the temporal dimension of input sequence $X \in \mathbb{R}^{T \times m}$, where $T$ is the number of 25 ms time-steps in the sequence, to produce temporal feature maps $F \in \mathbb{R}^{T \times N_f}$, where

$$F = \mathrm{ReLU}(X * W).$$

The classification layer then yields class probabilities for the current time-step $t$,

$$\hat{y}_t = \mathrm{softmax}\!\left(W_o F_t + b_o\right),$$

where $W_o$ and $b_o$ are the output weight matrix and bias, respectively. During preliminary testing, a subject (excluded from results) performed the experiment described in Sec. II-C. TCN and LSTM sequential models were trained from the first 3 min of data and tested on the last 3 min to determine optimal window sizes, input sequence lengths, and TCN dimensions. Performance contours for TCN and LSTM model parameters are shown in Fig. 3.
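The single-layer architecture described here (temporal convolution with ReLU activation, followed by a time-distributed softmax classification layer) can be sketched in NumPy. The causal zero-padding, filter-tensor layout, and variable names below are our illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def tcn_forward(X, W, W_o, b_o):
    """Sketch of a single-layer TCN forward pass.
    X:   (T, m) input feature sequence (m features per 25 ms time-step)
    W:   (N_f, d, m) bank of N_f convolutional filters of duration d
    W_o: (C, N_f) output weight matrix for C classes
    b_o: (C,) output bias
    Returns a (T, C) matrix of per-time-step class probabilities.
    """
    T, m = X.shape
    N_f, d, _ = W.shape
    # Causal convolution: zero-pad the past so each output at time t
    # depends only on current and previous inputs.
    Xp = np.vstack([np.zeros((d - 1, m)), X])
    F = np.empty((T, N_f))
    for t in range(T):
        window = Xp[t:t + d]  # (d, m) input slice ending at time t
        # Correlate every filter with the window, then apply ReLU.
        F[t] = np.maximum(0.0, np.tensordot(W, window, axes=([1, 2], [0, 1])))
    # Time-distributed, fully-connected softmax classification layer.
    logits = F @ W_o.T + b_o  # (T, C)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```

Each row of the returned matrix is a probability distribution over classes for one time-step, so a hard prediction stream is simply the per-row argmax.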
II-B Assessing Prediction Stability
In addition to performance accuracy, we wish to quantify the stability of a model, or how inclined the model is toward erratic class-switching or misclassification during volitional class-to-class movement. Furthermore, a model may cleanly and appropriately switch between classes, but with slight delay or anticipation. The accuracy metric penalizes this generally benign behavior, often more so than erratic behavior.
To complement performance accuracy, we define a stability metric to quantify a model’s class-switching behavior relative to ground truth behavior. Given a vector $\hat{y}$ representing a series of predictions over $T$ time-steps, we can count how many times the prediction model switches its class output with

$$S(\hat{y}) = \sum_{t=2}^{T} \left[\, 1 - \delta\!\left(\hat{y}_t, \hat{y}_{t-1}\right) \right],$$

where $\delta$ is the Kronecker delta function, or equivalence indicator. After computing $S(\hat{y})$ as well as $S(y)$ from ground truth label sequence $y$, our prediction stability metric

$$\sigma = \frac{S(\hat{y})}{S(y)}$$

quantifies how varied predictions are relative to ground truth. Ideally, $\sigma = 1$, meaning a model’s prediction output changes exactly as often as ground truth changes. Together, accuracy and our stability metric provide a fuller understanding of a model’s behavior. Our later comparison of TCN and LSTM highlights the potential for disparity between these metrics.
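A minimal implementation of this switch counting, assuming the metric is the ratio of predicted to ground-truth switch counts (so that the ideal value is 1, per the text):

```python
import numpy as np

def switch_count(labels):
    """Count how many times a label sequence changes class between
    consecutive time-steps (the sum of 1 minus the Kronecker delta
    of adjacent elements)."""
    labels = np.asarray(labels)
    return int(np.sum(labels[1:] != labels[:-1]))

def stability(predictions, ground_truth):
    """Ratio of predicted switches to ground-truth switches.
    A value of 1.0 means the model switches classes exactly as often
    as ground truth; values above 1 indicate erratic class-switching.
    (The exact normalization is our assumption for illustration.)"""
    return switch_count(predictions) / switch_count(ground_truth)
```

For example, a prediction stream that flickers three times against a ground truth with a single transition scores 3.0, flagging instability even if overall accuracy is high.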
II-C Experimental Methods
This study was conducted in accordance with protocols approved by the Johns Hopkins Medicine Institutional Review Boards. Nine able-bodied subjects (8 male, 1 female) participated in these experiments (ages 24.1 ± 3.2 years). Most subjects were inexperienced with EMG classification.
II-C1 Data Acquisition, Prediction, and Visualization
Eight channels of raw EMG sampled at 200 Hz were obtained from a Myo Armband (Thalmic Labs, Ontario, Canada) placed around the forearm at the circumference of greatest muscle mass. Hand grasp position data were recorded with the CyberGlove II (CyberGlove Systems LLC, San Jose, CA). Wrist position data were recorded using two 9-axis MPU-9150 inertial sensors (InvenSense, San Jose, CA) (Fig. 4A).
For non-sequential models, every 25 ms we used a 200 ms sliding window to extract time-domain (TD5) features from the raw EMG signals: mean absolute value (MAV), waveform length, variance, slope sign changes, and zero crossings. For the sequential models TCN and LSTM, the optimal sliding window sizes were 200 ms and 175 ms, respectively. We observed that sequential models often performed best with only MAV features, trained for 35 epochs. In our results, we compared the TCN model (MAV only) with LSTM (MAV), as well as the following non-sequential prediction models (TD5):
-  SVM-RBF: Support vector machine, Gaussian radial basis function kernel
-  ANN: Artificial neural network, 3 layers × 5 nodes
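For reference, the five TD5 features can be computed from a single-channel EMG window as follows. This is a sketch of the standard definitions; amplitude dead-band thresholds that are often applied to slope sign changes and zero crossings to reject noise are omitted here:

```python
import numpy as np

def td5_features(window):
    """Compute the five time-domain (TD5) features from one EMG window.
    window: 1-D array of raw EMG samples (e.g. 40 samples = 200 ms at 200 Hz).
    Returns [MAV, waveform length, variance, slope sign changes, zero crossings].
    """
    w = np.asarray(window, dtype=float)
    d = np.diff(w)                           # sample-to-sample differences
    mav = np.mean(np.abs(w))                 # mean absolute value
    wl = np.sum(np.abs(d))                   # waveform length
    var = np.var(w, ddof=1)                  # sample variance
    ssc = int(np.sum(d[:-1] * d[1:] < 0))    # slope sign changes
    zc = int(np.sum(w[:-1] * w[1:] < 0))     # zero crossings
    return np.array([mav, wl, var, ssc, zc])
```

At the paper's 200 Hz sampling rate, the 200 ms window is 40 samples and the 25 ms prediction increment is a 5-sample slide; the MAV-only sequential inputs correspond to keeping just the first element per channel.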
II-C2 3-DOF Simultaneous Protocol
After an initial 15 min practice session to familiarize the subject with movement classes and contraction consistency, subjects were asked to explore for 40 s their full range of motion in each of the 3 DOFs representing wrist and hand movements: rest, hand open/close, wrist flexion/extension, and radial/ulnar flexion (Fig. 4C). Outer boundary positions $p^{-}$ and $p^{+}$ and a comfortable interior rest position $p_{0}$ were determined for each DOF from this calibration period. Class thresholds along each DOF were set at 50% of the distances from $p_{0}$ to $p^{-}$ and $p^{+}$. In other words, for the ground truth to switch from “rest” to “hand close,” the hand must be moved more than half the distance from rest to the hand-close boundary. Thus, for every time step, each DOF records a ternary encoding (Fig. 4D), allowing for $3^3 = 27$ distinct simultaneous 3-DOF movement classes. We determined when the subject was in a transient-state by calculating where joint-velocity magnitude exceeded a threshold (shown as red in Fig. 4E).
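The thresholding and ternary encoding can be sketched as below. The function names and the particular class-index mapping are illustrative assumptions, consistent with the 50% thresholds and the 27 simultaneous 3-DOF classes of the protocol:

```python
def dof_state(p, rest, lower, upper):
    """Ternary encoding of one DOF: +1, -1, or 0 depending on whether
    position p has crossed 50% of the distance from the rest position
    toward the upper or lower calibration boundary."""
    if p > rest + 0.5 * (upper - rest):
        return 1
    if p < rest - 0.5 * (rest - lower):
        return -1
    return 0

def movement_class(states):
    """Map a ternary tuple (s1, s2, s3), each in {-1, 0, +1}, to one of
    3**3 = 27 distinct simultaneous movement-class indices (base-3)."""
    return sum((s + 1) * 3 ** i for i, s in enumerate(states))
```

With this mapping, full rest (0, 0, 0) lands on the middle index 13, and each simultaneous combination of the three DOFs receives a unique label for classifier training.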
Subjects were instructed to explore 3-DOF simultaneous movements held at consistent, repeatable contraction levels in freeform fashion (any order, combination, and duration 5 s) for 6 min while EMG and position data were recorded. In our analyses, the first 3 min of this EMG data were used for model training and the last 3 min for model testing. Since the boundaries of each DOF were subject-determined, there was no need for movement-cue presentation, nor a guarantee that the subject would attempt all 27 3-DOF combinations.
II-C3 Experiment Analysis and Evaluation
All computations were performed with common Python 3.6.5 modules and the TemporalConvolutionalNetworks open-source package. Statistical $p$-values were computed from one-way analyses of variance (ANOVA) comparing TCN accuracy and stability with other models. Figure error-bars represent 1 standard error.
III Results

Aggregated results from our 3-DOF simultaneous experiment, including $p$-values, are shown in Fig. 5. In general, TCN using only MAV features demonstrated significantly higher transient-state accuracy than the non-sequential models using TD5 features: $k$-NN, Random forest, ANN, and SVM-RBF (Fig. 5B). Furthermore, the stability of TCN was significantly higher than all other sequential and non-sequential models tested in both steady-states and transient-states (Fig. 5C). Importantly, the TCN and LSTM sequential models were similarly accurate, but LSTM was one of the least stable models.
For the most stable models (TCN, SVM, Forest, ANN), we observed that steady-state predictions were somewhat less stable on average (though consistently more accurate) than transient-states. These differences were not significant, but could indicate that some models become more unstable during pre-transition activation or post-transition recovery than during the physical transition itself. Examples of pre-transition instability can be seen in Figs. 1 and 5A.
IV Discussion and Conclusions
Instability in the prediction output stream is a well-known problem in EMG classification, particularly during transient interclass movement. Past attempts to mitigate instability, such as majority filters and confidence-based rejection, focus primarily on post-processing the output of the classification model. In the specific case of majority filtering, the cost for improved stability is a prediction delay. Coupled with the EMG window length, this delay may be quite perceptible to the user. Confidence-based rejection is highly useful because it introduces almost no time delay and can be appended to improve the stability of any model, including TCN.
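As a concrete illustration of the delay trade-off, a causal majority filter over the last $n$ frames can be sketched as follows (a minimal sketch, not the specific filters used in the cited work):

```python
from collections import Counter

def majority_filter(predictions, n=5):
    """Causal majority vote over the last n predictions.
    Suppresses brief erratic class-switching, but shifts every true
    class transition later by up to n-1 frames of delay."""
    out = []
    for t in range(len(predictions)):
        window = predictions[max(0, t - n + 1):t + 1]
        out.append(Counter(window).most_common(1)[0][0])
    return out
```

With n = 3, an isolated one-frame flicker is removed entirely, but a genuine transition is reported one frame late; at a 25 ms frame increment, larger n quickly adds user-perceptible delay on top of the window length itself.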
To address inherent model stability, we hypothesized that sequential models, designed to utilize the temporal context of sequential input data, would significantly improve EMG classification compared with traditional non-sequential models (Fig. 5A). Notably, sequential models yielded better performance accuracy when provided with only the MAV feature for each channel, whereas all non-sequential models preferred the TD5 feature set (Fig. 5B). This indicates that the hidden temporal features learned from MAV sequences are equally or more valuable for EMG classification than non-sequential prediction of TD5 features.
TCN and LSTM perform similarly with respect to classification accuracy (Fig. 5B), but TCN provides significantly more stable output behavior compared to all models evaluated during both steady-states and transient-states (Fig. 5C). Therefore, though some loss in TCN accuracy is due to anticipation or delay in the timing of class-switching, its consistently stable behavior is very desirable for reliable control of upper-limb prostheses.
For natural prosthesis control, it is necessary to accurately predict during dynamic transient-states because our limbs are often in motion, between states, not merely switching discretely between fixed positions. The ability of TCN models to correctly predict during transient-state movements hints at very promising behavior when applied to multi-DOF regression, a research avenue we are currently exploring.
We wish to acknowledge Johns Hopkins Applied Physics Laboratory (JHU/APL) for developing and making available the Virtual Integration Environment (VIE), which was developed under the Revolutionizing Prosthetics program. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. N66001-10-C-4056. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of DARPA and JHU/APL. We thank the human subjects who participated in this study and our colleagues Dr. Brock Wester, Robert Armiger, Dr. Colin Lea, Tae Soo Kim, and Dr. Austin Reiter.
-  B. Hudgins, P. Parker, R. Scott, “A new strategy for multifunction myoelectric control,” IEEE Trans. Biomed. Eng., vol. 40, no. 1, pp. 82-94, 1993.
-  Front. Neurorob., p. 10:9, 2016.
-  D. Yang, J. Zhao, L. Jiang, H. Liu, “Dynamic hand motion recognition based on transient and steady-state EMG signals,” Int. J. Humanoid Rob., vol. 9, p. 9:1250007, 2012.
-  G. Kanitz, C. Cipriani, B. Edin, “Classification of transient myoelectric signals for the control of multi-grasp hand prostheses,” IEEE Trans. Neural Syst. and Rehabil. Eng., vol. 26, no. 9, pp. 1756-64, 2018.
-  T. Lorrain, N. Jiang, D. Farina, “Influence of the training set on the accuracy of surface EMG classification in dynamic contractions for the control of multifunction prostheses,” J. NeuroEng. Rehabil., vol. 8, no. 1, p. 8:25, 2011.
-  E. Scheme, B. Hudgins, K. Englehart, “Confidence-based rejection for improved pattern recognition myoelectric control,” IEEE Trans. Biomed. Eng., vol. 60, no. 6, pp. 1563-70, 2013.
-  S. Amsuss, P. Goebel, N. Jiang, B. Graimann, L. Paredes, D. Farina, “Self-correcting pattern recognition system of surface EMG signals for upper limb prosthesis control,” IEEE Trans. Biomed. Eng., vol. 61, no. 4, pp. 1167-76, 2014.
-  X. Zhu, J. Liu, D. Zhang, X. Sheng, N. Jiang, “Cascaded adaptation framework for fast calibration of myoelectric control,” IEEE Trans. Neural Syst. and Rehabil. Eng., vol. 25, no. 3, pp. 254-64, 2017.
-  J. Betthauser, C. Hunt, L. Osborn, M. Masters, G. Lévay, R. Kaliki, N. Thakor, “Limb Position Tolerant Pattern Recognition for Myoelectric Prosthesis Control with Adaptive Sparse Representations from Extreme Learning,” IEEE Trans. Biomed. Eng., vol. 65, no. 4, pp. 770-78, 2018.
-  S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Comput., vol. 9, pp. 1735-80, 1997.
-  A. Graves and J. Schmidhuber, “Framewise phoneme classification with bidirectional LSTM and other neural network architectures,” Neural Netw., vol. 18, no. 5, pp. 602-10, 2005.
-  F. Ordonez and D. Roggen, “Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition,” Sensors, vol. 16, p. 16:115, 2016.
-  J. Gallego, M. Perich, L. Miller, S. Solla, “Neural manifolds for the control of movement,” Neuron, vol. 94, no. 5, pp. 978-84, 2017.
-  P. Xia, J. Hu, Y. Peng, “EMG-based estimation of limb movement using deep learning with recurrent convolutional neural networks,” Artif. Organs, vol. 42, p. E77, 2018.
-  C. Lea, A. Reiter, R. Vidal, G. Hager, “Segmental spatiotemporal CNNs for fine-grained action segmentation,” in Computer Vision, ECCV 2016, pp. 36-52, 2016.
-  R. Hahnloser, R. Sarpeshkar, M. Mahowald, R. Douglas, H. Seung, “Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit,” Nature, vol. 405, pp. 947-51, 2000.
-  R. Luce, Individual Choice Behavior: A Theoretical Analysis. NY, Wiley, 1959.
-  A. Ravitz, M. McLoughlin, J. Beaty, F. Tenore, M. Johannes, S. Swetz, J. Helder, K. Katyal, M. Para, K. Fischer, T. Gion, B. Wester, “Revolutionizing Prosthetics–Phase 3,” Johns Hopkins APL Tech. Dig., vol. 31, no. 4, pp. 366-76, 2013.
-  B. Wester, et al., “Development of virtual integration environment sensing capabilities for the modular prosthetic limb,” Society for Neuroscience, Presentation Abstract, Washington, DC. Nov. 18, 2014.
-  C. Lea, TCN, (2018), GitHub repository. https://github.com/colincsl/TemporalConvolutionalNetworks.