A novel approach to classify natural grasp actions by estimating muscle activity patterns from EEG signals

02/03/2020 ∙ by Jeong-Hyun Cho, et al. ∙ Korea University

Developing electroencephalogram (EEG) based brain-computer interface (BCI) systems is challenging. In this study, we analyzed natural grasp actions from EEG. Ten healthy subjects participated in the experiment; they executed and imagined three sustained grasp actions. We propose a novel approach that estimates muscle activity patterns from EEG signals to improve the overall classification accuracy. For implementation, we recorded EEG and electromyogram (EMG) signals simultaneously. Measuring the similarity between the pattern estimated from EEG signals and the activity pattern obtained from EMG signals yielded higher classification accuracy than competitive methods. As a result, we obtained average classification accuracies of 63.89% (±7.54) for actual movement and 46.96% (±15.30) for motor imagery, each higher than the result of the corresponding competitive model. This result is encouraging, and the proposed method could potentially be used in future applications, such as BCI-driven robot control for handling various daily-use objects.


I Introduction

Decoding electroencephalogram (EEG) based brain-computer interfaces (BCIs) is a challenging task. Despite this difficulty, BCIs are promising tools for detecting user intention and controlling robotic devices such as upper limb prostheses [18, 11, 10]. Many research groups use EEG-based BCIs because of their cost-effectiveness, convenience [4, 11, 9, 10], and potential [8, 12, 7]. At the same time, improving the decoding accuracy of BCI systems is one of the major interests of many researchers [22, 27]. Among the many motor-related EEG studies, we focused on hand movements. The hands are associated with richer and more dynamic brain activity than the other extremities, so their EEG signals can be acquired in a greater variety of amounts and aspects. Regarding the decoding of hand and upper-extremity movements, three related studies inspired our research. Schwarz et al. [21] tried to decode natural reach-and-grasp actions from human EEG. They attempted to identify three different executed reach-and-grasp actions, namely lateral, pincer, and palmar grasp, utilizing EEG neural correlates.

Other research groups took slightly different approaches. Ofner et al. [16] decoded single upper limb movements from the time domain of low-frequency EEG signals. The primary goal of their experiment was to classify six different actions: elbow flexion, elbow extension, hand grasp, hand spread, wrist twist left, and wrist twist right.

Agashe et al. [1] decoded hand motions with a different approach. They demonstrated that global cortical activity predicts the shape of the hand during grasping. It was an offline study in which they inferred hand joint angular velocities as well as synergistic trajectories from EEG while subjects performed natural reach-to-grasp movements. They also showed real-time closed-loop neuroprosthetic control of grasping by an amputee and the feasibility of decoding brain signals for a variety of hand motions. However, these related studies could not achieve adequate and robust decoding performance on multiple natural hand movement tasks because of the complex characteristics of the brain signals related to the hand and upper limb. Therefore, we tried to overcome this limitation with a new approach and perspective.

The objective of this study is to confirm whether our proposed method, which performs muscle activity pattern matching by creating estimated muscle activity patterns from electromyographic (EMG) and EEG signals, improves the BCI performance of each subject. At the same time, we show the feasibility of classifying various right-hand grasping tasks from EEG signals with the proposed method in both the actual movement and motor imagery (MI) paradigms. Using this new approach, we improved the classification accuracy, and the approach could be applied in further BCI applications, such as controlling a robotic hand. Signals from ten participants were acquired, and we selected only the segments related to muscle activity from the segmented data. With a sufficient number of experimental trials and careful data analysis, we constructed a robust decoding model based on the proposed method.

Fig. 1: Experimental environment and location of EEG and EMG electrodes
Ch. | Target muscle          | Related muscle activity
1   | Extensor carpi ulnaris | Wrist extension and abduction
2   | Extensor digitorum     | Finger extension and abduction
3   | Flexor carpi radialis  | Wrist and hand flexion
4   | Flexor carpi ulnaris   | Palm and finger flexion
5   | Biceps brachii         | Forearm lifting
6   | Triceps brachii        | Forearm extension and retraction
TABLE I: Selected EMG channels and the targeted muscles

II Materials and Methods

II-A Participants

Ten healthy subjects with no history of neurological disease were recruited for the experiment (S1–S10; ages 24–33; six men, four women; all right-handed). This study was reviewed and approved by the Institutional Review Board at Korea University [1040548-KU-IRB-17-172-A-2], and written informed consent was obtained from all participants before the experiments.

II-B Experimental Setup

During a session of the experimental protocol, the subjects sat in a comfortable chair in front of a 24-inch LCD monitor. The screen was installed on the table so that the subjects could see both the objects and the visual cue. Fig. 1 shows the experimental setup and the environment during the entire session. Following each auditory and visual cue, the subjects were asked to perform or imagine one of three specific grasp actions, which are illustrated in Fig. 2 (a). The location of the object setup was randomly changed to reduce the effect of artifacts.

Fig. 2: Experimental protocol for data acquisition and visual cues
Fig. 3: Proposed method for classifying grasp actions with muscle activity pattern matching and comparison
Fig. 4: Average classification accuracy for each EMG activation

II-C Data Acquisition

EEG data were collected at 2,500 Hz using 20 Ag/AgCl electrodes (FC1–6, C1–6, Cz, CP1–6, and CPz), placed according to the international 10/20 system, via a BrainAmp amplifier (Brain Products GmbH) [5, 20, 2]. A 60 Hz notch filter was applied to remove power-frequency interference. FCz and FPz served as the reference and ground electrodes, respectively, and all impedances were maintained below 10 kΩ. The 20 channels covered only the motor cortex to ensure that the recorded EEG signals were highly related to the motor-related potentials elicited by actual movement and MI, as shown in Fig. 1.
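To make the preprocessing concrete, the following is a minimal sketch (not the authors' code) of the 60 Hz notch filtering and the 4–40 Hz band-limiting described here and in Section II-D, using SciPy; the filter orders and the array layout are assumptions of this sketch.

```python
import numpy as np
from scipy.signal import iirnotch, butter, filtfilt

FS = 2500.0  # sampling rate reported above (Hz)

def preprocess_eeg(eeg, fs=FS):
    """Notch out 60 Hz interference, then band-pass to 4-40 Hz.
    eeg: array of shape (n_channels, n_samples); this layout is an assumption."""
    b_notch, a_notch = iirnotch(w0=60.0, Q=30.0, fs=fs)   # power-line notch
    eeg = filtfilt(b_notch, a_notch, eeg, axis=-1)
    b_bp, a_bp = butter(N=4, Wn=[4.0, 40.0], btype="bandpass", fs=fs)
    return filtfilt(b_bp, a_bp, eeg, axis=-1)             # 4-40 Hz analysis band

# Example: 20 motor-cortex channels, one 4 s trial of synthetic data
eeg_clean = preprocess_eeg(np.random.randn(20, int(4 * FS)))
```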

EMG signals were recorded using 7 Ag/AgCl electrodes with the same digital amplifier used for the EEG recordings, so the EEG and EMG signals were acquired simultaneously. The EMG data were recorded from six muscles related to right arm movement, as listed in Table I. The last electrode was placed near the right elbow, an area with little muscle movement, to provide a reference signal [25].

II-D Data Analysis

We followed the conventional process for filtering the EEG and EMG signals [3, 6, 5]. We used the 4–40 Hz frequency band and separated it into eleven sub-bands for further analysis; each sub-band is 4 Hz wide with a step size of 2 Hz [28, 17]. We extracted spatial patterns from the filtered EEG using common spatial patterns (CSP) [19, 15]. Sliding windows with sizes of 500–2,000 ms were applied to segment the raw EMG signals, and the same sliding windows were applied to the raw EEG signals. After creating the data segments, we calculated the root mean square (RMS) value of each segmented EMG window [14, 25]. Before this step, we computed a threshold from the RMS value averaged over a single trial (0–4 s). When the RMS value of an EMG segment exceeds the pre-defined threshold at a specific time point, the segment is labeled 1, and otherwise 0, i.e., a binary classification. Using the sliding windows, we produced 30 segments from a single 4,000 ms trial. The resulting 1×30 binary image represents the decoding of one EMG channel, as shown in Fig. 3. This process was performed for each EMG channel, yielding six images from the six channels, and the combined 6×30 image represents a muscle activity pattern. We built a group of patterns for each grasp action class by repeating this process 50 times, because each subject performed 50 trials per class.
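The EMG side of this pipeline can be sketched as follows (a hypothetical illustration, not the authors' implementation): windowed RMS values per channel are thresholded against the trial-averaged RMS to produce a 6×30 binary muscle activity pattern. The 500 ms window length and the evenly spaced window starts are assumptions of this sketch.

```python
import numpy as np

FS = 2500          # sampling rate (Hz), as reported in Section II-C
TRIAL_SEC = 4.0    # single-trial length (0-4 s)
N_SEGMENTS = 30    # 30 sliding-window segments per trial

def window_rms(emg_trial, n_segments=N_SEGMENTS, win_ms=500):
    """RMS per sliding window; emg_trial has shape (n_channels, n_samples).
    The 500 ms window is one of the 500-2,000 ms sizes explored in the paper;
    the exact choice here is an assumption."""
    n_ch, n_samples = emg_trial.shape
    win = int(win_ms / 1000 * FS)
    starts = np.linspace(0, n_samples - win, n_segments).astype(int)
    rms = np.empty((n_ch, n_segments))
    for j, s in enumerate(starts):
        seg = emg_trial[:, s:s + win]
        rms[:, j] = np.sqrt(np.mean(seg ** 2, axis=1))
    return rms

def muscle_activity_pattern(emg_trial):
    """Binary 6 x 30 pattern: 1 where the windowed RMS exceeds the per-channel
    threshold taken from the trial-averaged RMS, else 0."""
    rms = window_rms(emg_trial)
    threshold = rms.mean(axis=1, keepdims=True)   # per-channel trial average
    return (rms > threshold).astype(int)

# Example: 6 EMG channels (Table I), one 4 s trial of synthetic data
emg_trial = np.abs(np.random.randn(6, int(TRIAL_SEC * FS)))
pattern = muscle_activity_pattern(emg_trial)      # shape (6, 30)
```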

For EEG signal decoding, we applied the same moving windows used in the EMG decoding process. We used these data to train a binary classifier, with the labels generated from the EMG decoding serving as training labels and as the ground truth for scoring the classifier. CSP was applied to extract spatial features, which were then used as input to train a linear discriminant analysis (LDA) classifier. Through this process, the pattern estimated from the EEG has the same form as the pattern obtained from the EMG signals. We then compared the estimated pattern against the ground-truth pattern groups by computing the mean squared error (MSE) over all 150 pattern images. The class whose patterns showed the lowest averaged error (i.e., the highest similarity) to the estimated pattern was defined as the grasp action intended by the subject.
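The pattern-matching step can be illustrated with the sketch below (an illustration under stated assumptions, not the authors' code): per-window binary predictions from classifiers trained on CSP features with EMG-derived labels are assembled into an estimated 6×30 pattern, which is then assigned to the class whose stored patterns give the lowest average MSE. Using one classifier per EMG channel and scikit-learn-style `predict` calls are assumptions of this sketch.

```python
import numpy as np

def estimate_pattern(eeg_windows, channel_classifiers):
    """eeg_windows: list of 30 CSP feature vectors (one per sliding window);
    channel_classifiers: trained binary classifiers (e.g., sklearn LDA),
    assumed here to be one per EMG channel. Returns a 6 x 30 binary pattern."""
    pattern = np.zeros((len(channel_classifiers), len(eeg_windows)), dtype=int)
    for ch, clf in enumerate(channel_classifiers):
        for j, feat in enumerate(eeg_windows):
            pattern[ch, j] = clf.predict(feat.reshape(1, -1))[0]
    return pattern

def match_grasp_class(estimated, pattern_groups):
    """pattern_groups: dict mapping class name -> array of stored 6 x 30
    patterns (50 trials per class). Returns the class whose patterns have
    the lowest averaged MSE, i.e., the highest similarity."""
    scores = {
        cls: np.mean([(estimated - p) ** 2 for p in patterns])
        for cls, patterns in pattern_groups.items()
    }
    return min(scores, key=scores.get)
```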

III Results and Discussion

The proposed method improves the overall classification performance of the BCI system. We compared our method with two competitive models, Model I and Model II, as shown in Tables II and III. Model I consists of CSP and LDA [19, 26]. Model II uses filter bank regularized CSP (FBRCSP), which usually shows the highest performance in other BCI studies [17, 23]. The proposed method, which uses muscle activity patterns to classify natural grasp actions, achieved 24.01% higher classification accuracy than Model I and a 21.59% improvement over Model II.
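For reference, a baseline in the spirit of Model I (CSP followed by LDA) could look like the sketch below, built from MNE's CSP transformer and scikit-learn's LDA. The number of CSP components, the cross-validation scheme, and the reduction to a two-class problem (for brevity) are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Hypothetical band-pass-filtered epochs: (n_trials, n_channels, n_samples),
# here two grasp classes with 50 trials each for brevity
epochs = np.random.randn(100, 20, 1000)
labels = np.repeat([0, 1], 50)

model_1 = Pipeline([
    ("csp", CSP(n_components=4, log=True)),   # spatial filtering + log-power features
    ("lda", LinearDiscriminantAnalysis()),    # linear classifier
])
acc = cross_val_score(model_1, epochs, labels, cv=5).mean()
print(f"CSP + LDA (Model I style) accuracy: {acc:.3f}")
```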

Table III describes the results in the motor imagery paradigm, where the proposed method showed improvements of 8.60% and 5.66% over Models I and II, respectively. These gains are much smaller than in actual movement, but we note that the proposed method increases the classification performance dramatically for specific subjects, such as S3, S4, and S8.

In motor imagery, the proposed method showed unstable improvements in classification accuracy. We assume the problem lies in the binary classifier used to create the estimated pattern from EEG signals. Unlike in the actual movement paradigm, we could not obtain corresponding EMG signals while the subjects performed motor imagery. Therefore, we reused the classifier trained during the actual movement decoding process and applied it to the EEG data of the motor imagery trials [13, 6].

As a result, the similarity of the final estimated pattern to the muscle activity patterns is much lower than in the actual movement paradigm, owing to the limitation of estimating without corresponding EMG signals. Nevertheless, the proposed method still improved performance despite this limitation of the motor imagery paradigm.

Subject   | Proposed (%) | Model I (%) | Model II (%)
S1        | 69.32        | 39.54       | 41.32
S2        | 71.98        | 33.04       | 40.12
S3        | 70.22        | 36.23       | 38.88
S4        | 67.13        | 42.32       | 41.32
S5        | 59.10        | 39.31       | 44.08
S6        | 68.23        | 35.24       | 39.20
S7        | 47.43        | 48.23       | 49.12
S8        | 67.02        | 41.48       | 47.33
S9        | 58.43        | 45.01       | 39.42
S10       | 60.01        | 37.99       | 42.15
Mean±Std. | 63.89±7.54   | 39.84±4.60  | 42.29±3.52
TABLE II: Classification result comparison in the actual movement paradigm
Subject   | Proposed (%) | Model I (%) | Model II (%)
S1        | 33.32        | 34.13       | 41.32
S2        | 38.42        | 33.50       | 36.44
S3        | 69.23        | 38.12       | 39.43
S4        | 62.13        | 40.31       | 32.54
S5        | 38.03        | 40.01       | 35.23
S6        | 43.23        | 39.31       | 59.32
S7        | 35.23        | 43.23       | 43.24
S8        | 73.48        | 32.48       | 44.24
S9        | 34.11        | 38.49       | 39.32
S10       | 42.42        | 44.01       | 41.90
Mean±Std. | 46.96±15.29  | 38.36±3.93  | 41.30±7.32
TABLE III: Classification result comparison in the motor imagery paradigm

IV Conclusion and Future Work

In this study, we proposed a novel approach to decoding EEG signals for classifying natural grasp actions. In Fig. 4, each graph shows the binary classification accuracy for each muscle activity by EMG channel. Using this binary classifier, we could obtain reliable estimated muscle activity patterns from decoded EEG data. Our method has the potential for further improvement, either by increasing the accuracy of the binary classifier to obtain more precise estimated muscle activity patterns or by enhancing the similarity comparison step with an advanced model such as deep learning [24, 29].

Acknowledgment

The authors thank K.-H. Shim, B.-H. Kwon, B.-H. Lee, D.-Y. Lee, and D.-H. Lee for their help with the database construction and for useful discussions about the experiment.

References

  • [1] H. A. Agashe, A. Y. Paek, Y. Zhang, and J. L. Contreras-Vidal (2015) Global cortical activity predicts shape of hand during grasping. Front Neurosci. 9, pp. 121. Cited by: §I.
  • [2] K. K. Ang, Z. Y. Chin, C. Wang, C. Guan, and H. Zhang (2012) Filter bank common spatial pattern algorithm on bci competition iv datasets 2a and 2b. Front Neurosci. 6, pp. 39. Cited by: §II-C.
  • [3] J. Cho, J. Jeong, K. Shim, D. Kim, and S. Lee (2018) Classification of hand motions within eeg signals for non-invasive bci-based robot hand control. In Conf. Proc. IEEE Int. Syst. Man Cybern. (SMC), pp. 515–518. Cited by: §II-D.
  • [4] V. Gilja, C. Pandarinath, C. H. Blabe, P. Nuyujukian, J. D. Simeral, A. A. Sarma, B. L. Sorice, J. A. Perge, B. Jarosiewicz, L. R. Hochberg, et al. (2015) Clinical translation of a high-performance neural prosthesis. Nat. Med. 21, pp. 1142. Cited by: §I.
  • [5] J. Jeong, K. Kim, D. Kim, and S. Lee (Oct. 7-10 2018) Decoding of multi-directional reaching movements for EEG-based robot arm control. In Conf. Proc. IEEE Int. Syst. Man Cybern. (SMC), pp. 511–514. Cited by: §II-C, §II-D.
  • [6] J. Jeong, M. Lee, N. Kwak, and S. Lee (Jan. 9-11 2017) Single-trial analysis of readiness potentials for lower limb exoskeleton control. In Int. Winter Conf. on Brain-Computer Interface (BCI), pp. 50–52. Cited by: §II-D, §III.
  • [7] J. Jeong, K. Shim, D. Kim, and S. Lee (July 23 2019) Trajectory decoding of arm reaching movement imageries for brain-controlled robot arm system. In Int. Conf. Proc. IEEE Eng. Med. Biol. Soc. (EMBC), pp. 5544–5547. Cited by: §I.
  • [8] T. Kam, H. Suk, and S. Lee (2013) Non-homogeneous spatial filter optimization for electroencephalogram (EEG)-based motor imagery classification. Neurocomputing 108, pp. 58–68. Cited by: §I.
  • [9] I. Kim, J. Kim, S. Haufe, and S. Lee (2014) Detection of braking intention in diverse situations during simulated driving based on eeg feature combination. J. Neural Eng. 12, pp. 016001. Cited by: §I.
  • [10] J. Kim, F. Bießmann, and S. Lee (2014) Decoding three-dimensional trajectory of executed and imagined arm movements from electroencephalogram signals. IEEE Trans. Neural Syst. Rehabil. Eng. 23, pp. 867–876. Cited by: §I.
  • [11] K. Kim, H. Suk, and S. Lee (2016) Commanding a brain-controlled wheelchair using steady-state somatosensory evoked potentials. IEEE Trans. Neural Syst. Rehabil. Eng. 26, pp. 654–665. Cited by: §I.
  • [12] N. Kwak, K. Müller, and S. Lee (2017) A convolutional neural network for steady state visual evoked potential classification under ambulatory environment. PLoS One 12, pp. e0172578. Cited by: §I.
  • [13] M. Lee, S. Fazli, J. Mehnert, and S. Lee (2015) Subject-dependent classification for robust idle state detection using multi-modal neuroimaging and data-fusion techniques in bci. Pattern Recognit. 48, pp. 2725–2737. Cited by: §III.
  • [14] X. Li, O. W. Samuel, X. Zhang, H. Wang, P. Fang, and G. Li (2017) A motion-classification strategy based on semg-eeg signal combination for upper-limb amputees. J. Neuroeng. Rehabil. 14, pp. 2. Cited by: §II-D.
  • [15] J. Müller-Gerking, G. Pfurtscheller, and H. Flyvbjerg (1999) Designing optimal spatial filters for single-trial eeg classification in a movement task. Clin. Neurophysiol. 110, pp. 787–798. Cited by: §II-D.
  • [16] P. Ofner, A. Schwarz, J. Pereira, and G. R. Müller-Putz (2017) Upper limb movements can be decoded from the time-domain of low-frequency eeg. PLoS One 12, pp. e0182578. Cited by: §I.
  • [17] S. Park, D. Lee, and S. Lee (2017) Filter bank regularized common spatial pattern ensemble for small sample motor imagery classification. IEEE Trans. Neural Syst. Rehabil. Eng. 26, pp. 498–505. Cited by: §II-D, §III.
  • [18] G. Pfurtscheller (1997) EEG event-related desynchronization (erd) and synchronization (ers). Electroencephalogr. Clin. Neurophysiol. 1, pp. 26. Cited by: §I.
  • [19] H. Ramoser, J. Muller-Gerking, and G. Pfurtscheller (2000) Optimal spatial filtering of single trial eeg during imagined hand movement. IEEE Trans. Rehabil. Eng. 8, pp. 441–446. Cited by: §II-D, §III.
  • [20] N. Robinson, A. P. Vinod, K. K. Ang, K. P. Tee, and C. T. Guan (2013) EEG-based classification of fast and slow hand movements using wavelet-csp algorithm. IEEE Trans. Biomed. Eng. 60, pp. 2123–2132. Cited by: §II-C.
  • [21] A. Schwarz, P. Ofner, J. Pereira, A. I. Sburlea, and G. R. Mueller-Putz (2017) Decoding natural reach-and-grasp actions from human eeg. J. Neural Eng. 15, pp. 016005. Cited by: §I.
  • [22] H. Suk, S. Lee, D. Shen, A. D. N. Initiative, et al. (2016) Deep sparse multi-task learning for feature selection in Alzheimer's disease diagnosis. Brain Struct. Funct. 221, pp. 2569–2587. Cited by: §I.
  • [23] H. Suk and S. Lee (2012) A novel Bayesian framework for discriminative feature extraction in brain-computer interfaces. IEEE Trans. Pattern Anal. Mach. Intell. 35, pp. 286–299. Cited by: §III.
  • [24] Y. R. Tabar and U. Halici (2016) A novel deep learning approach for classification of eeg motor imagery signals. J. Neural Eng. 14, pp. 016003. Cited by: §IV.
  • [25] E. Trigili, L. Grazi, S. Crea, A. Accogli, J. Carpaneto, S. Micera, N. Vitiello, and A. Panarese (2019) Detection of movement onset using emg signals for upper-limb exoskeletons in reaching tasks. J. Neuroeng. Rehabil. 16, pp. 45. Cited by: §II-C, §II-D.
  • [26] L. Yao, N. Mrachacz-Kersting, X. Sheng, X. Zhu, D. Farina, and N. Jiang (2018) A multi-class bci based on somatosensory imagery. IEEE Trans. Neural Syst. Rehabil. Eng. 26, pp. 1508–1515. Cited by: §III.
  • [27] S. Yeom, S. Fazli, K. Müller, and S. Lee (2014) An efficient erp-based brain-computer interface using random set presentation and face familiarity. PLoS One 9, pp. e111157. Cited by: §I.
  • [28] Y. Zhang, C. S. Nam, G. Zhou, J. Jin, X. Wang, and A. Cichocki (2018) Temporally constrained sparse group spatial patterns for motor imagery bci. IEEE Trans. Cybern. 49, pp. 3322–3332. Cited by: §II-D.
  • [29] Z. Zhang, F. Duan, J. Solé-Casals, J. Dinarès-Ferran, A. Cichocki, Z. Yang, and Z. Sun (2019) A novel deep learning approach with data augmentation to classify motor imagery signals. IEEE Access 7, pp. 15945–15954. Cited by: §IV.