Motor Imagery Classification of Single-Arm Tasks Using Convolutional Neural Network based on Feature Refining

02/04/2020 ∙ by Byeong-Hoo Lee, et al. ∙ Korea University

Brain-computer interface (BCI) decodes brain signals to infer user intention and status. Because of its simple and safe data acquisition process, electroencephalography (EEG) is commonly used in non-invasive BCI. Among EEG paradigms, motor imagery (MI) is widely used for the recovery or rehabilitation of motor functions because of its signal origin. However, EEG is an oscillatory and non-stationary signal, which makes MI difficult to collect and classify accurately. In this study, we propose a band-power feature refining convolutional neural network (BFR-CNN), composed of two convolution blocks, to achieve high classification accuracy. We collected EEG signals to create an MI dataset containing imagined movements of a single arm. The proposed model outperforms conventional approaches in 4-class MI task classification. Hence, we demonstrate that decoding user intention from EEG signals alone is possible with robust performance using BFR-CNN.




I Introduction

Brain-computer interface (BCI) decodes brain signals to infer user intention and status, which can be used for external device control. Since brain signals contain diverse information about user status, many studies have attempted to understand brain signals through BCI [11, 1, 7, 10, 29]. Invasive BCI places electrodes directly on the brain to acquire high-quality brain signals such as the electrocorticogram (ECoG) [21]. However, invasive BCI raises many safety issues because it requires surgery to implant the electrodes. On the other hand, non-invasive BCI uses electroencephalography (EEG) because it can be acquired easily without brain surgery. EEG-based BCI has several paradigms for signal acquisition, such as motor imagery (MI) [15, 14, 27], movement-related cortical potential (MRCP) [10], and event-related potential (ERP) [26, 31, 30]. As applications of EEG-based BCI, spellers [6], wheelchairs [18], and drones [17] have commonly been used for communication between users and devices. Among these paradigms, MI is related to specific potentials from the supplementary motor area and pre-motor cortex [8]. When the user imagines specific movements, event-related desynchronization/synchronization (ERD/ERS) patterns are generated in the supplementary motor area and pre-motor cortex [2]. The MI paradigm captures these patterns to detect user intention. Due to this origin, MI is commonly used for the recovery or rehabilitation of the user’s motor functions using external devices [13]. Additionally, MI-based BCI can provide extra motor functions through a robotic arm [1].

Fig. 1: (a) Experimental environment for EEG data acquisition. (b) Experimental paradigm of the single-arm tasks. From 0 to 3 seconds, a resting state was given for relaxation. After the resting state, a 3-second visual cue, as shown above, was given for readiness. Finally, a 4-second imagery period followed.

EEG is an oscillatory and non-stationary signal, which makes decoding EEG signals a challenging task [25, 22]. Similar to denoising techniques in computer vision [9, 20], EEG signals should be denoised with filters before further processing. A number of MI classification methods have been developed to achieve satisfactory classification performance. Filter bank common spatial patterns (FBCSP) is a conventional feature extraction method that decodes EEG signals using spectral power modulations [16]. Linear discriminant analysis (LDA) is jointly used with FBCSP as a classifier. Cho et al. [12] used FBCSP with regularized linear discriminant analysis (RLDA) to decode MI tasks, focusing on a single category of MI tasks such as hand grasping and arm reaching. Convolutional neural network (CNN) approaches have also been applied in BCI [23]. Schirrmeister et al. [24] proposed three types of CNN-based models, inspired by FBCSP, that differ in the number of layers. Among the three, ShallowConvNet extracts log band-power features; its MI classification performance is better than that of DeepConvNet, which is designed for general-purpose decoding of signal amplitude. Using depthwise and separable convolutions, EEGNet performs classification well regardless of the type of EEG signal, including MI [28]. However, these studies mainly focused on simple tasks using competition datasets (left hand, right hand, foot, and tongue), whose classes are unrelated to each other and cannot form sequential work such as drinking water or opening a door. Since such commands are not intuitive, artificial command matching is required to control external devices.
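As a reference point for the FBCSP-style baseline discussed above, the core common spatial pattern (CSP) step can be sketched in a few lines. This is an illustrative implementation only, not the authors' or the FBCSP paper's code: the filter-bank stage, feature selection, and the (R)LDA classifier are omitted, and the number of filter pairs is an assumption.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X_a, X_b, n_pairs=2):
    """Compute CSP spatial filters from two classes of trials.

    X_a, X_b: arrays of shape (n_trials, n_channels, n_samples).
    Returns W of shape (2*n_pairs, n_channels): filters that maximize
    variance for one class while minimizing it for the other.
    """
    def avg_cov(X):
        return np.mean([np.cov(trial) for trial in X], axis=0)

    C_a, C_b = avg_cov(X_a), avg_cov(X_b)
    # Generalized eigenvalue problem: C_a w = lambda (C_a + C_b) w
    vals, vecs = eigh(C_a, C_a + C_b)
    order = np.argsort(vals)
    # Keep filters from both ends of the eigenvalue spectrum
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs[:, picks].T

def log_var_features(X, W):
    """Project trials through W and take normalized log-variance features."""
    Z = np.einsum('fc,tcs->tfs', W, X)   # (trials, filters, samples)
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))
```

In the full FBCSP pipeline this step would be repeated per frequency band, with the resulting log-variance features fed to an LDA or RLDA classifier.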

This study makes three contributions. First, we collected three different types of single-arm MI tasks (elbow extension, wrist twisting, and hand grasping) suited to sequential upper-limb work. Second, we propose a band-power feature refining convolutional neural network (BFR-CNN), which has only two convolution blocks and classifies MI by extracting band-power features; it is designed to classify single-arm MI tasks without artificial command matching. Finally, the proposed BFR-CNN achieved robust classification performance on the 4-class single-arm MI task.

This paper is structured as follows. Section II describes the data acquisition, the dataset for evaluation, and the proposed BFR-CNN model. Section III presents the classification accuracies, compares performance with other models, and discusses the advantages and limitations. In Section IV, conclusions and future work are described.

Fig. 2: Representative topoplots of each MI task. The 8-12 Hz frequency band was selected from one subject. High amplitudes were observed on the left side of the supplementary motor area and the pre-motor cortex because the subject is right-handed.

II Methods

II-A Data description

The data acquisition process was conducted with eight healthy subjects aged 22-30 (6 right-handed males and 2 right-handed females). We used an EEG signal amplifier (BrainAmp, BrainProduct GmbH, Germany) to record EEG signals. The sampling rate was 1,000 Hz and a band-pass filter (1-60 Hz) was applied to all channels. We applied a 60 Hz notch filter to remove line noise from the wires. Brain Products VisionRecorder (BrainProduct GmbH, Germany) recorded and filtered raw EEG data from the subjects. 64 Ag/AgCl electrodes in the international 10-20 system were used. The FPz and FCz channels were selected as ground and reference, respectively. The impedance of each electrode was kept below 10 kΩ using conductive gel. All 64 EEG channels were used for data acquisition, and we selected 24 channels (F3, F1, Fz, F2, F4, FC3, FC1, FC2, FC4, C3, C1, Cz, C2, C4, CP3, CP1, CPz, CP2, CP4, P3, P1, Pz, P2, and P4) for evaluation [12]. These channels are placed over the somatosensory area and pre-motor cortex. During the data acquisition experiment, every subject performed 150 trials of MI tasks (i.e., 50 trials each of the elbow extension, twisting, and grasping tasks). Relaxation was given before the imagery period and extracted as the resting state (Fig. 1). Subjects were asked to imagine the specific muscle movements. The collected MI dataset was resampled at 250 Hz for classification and contained three classes of single-arm tasks plus the resting state. Data validation was conducted using the FBCSP algorithm and RLDA for each MI task. The protocols and environments were reviewed and approved by the Institutional Review Board at Korea University [1040548-KU-IRB-17-172-A-2].
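The filtering chain above (1-60 Hz band-pass, 60 Hz notch, 1,000 Hz → 250 Hz resampling) can be sketched with scipy. This is a minimal reconstruction under stated assumptions: the paper does not specify filter orders or the notch quality factor, so the Butterworth order of 4 and Q of 30 here are illustrative choices, and the actual recording used the VisionRecorder software rather than this code.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch, resample_poly

FS_RAW = 1000     # original sampling rate (Hz)
FS_TARGET = 250   # rate used for classification

def preprocess(eeg, fs=FS_RAW):
    """Apply the paper's filtering chain to a (channels, samples) array:
    1-60 Hz band-pass, 60 Hz notch for line noise, then downsampling
    to 250 Hz. Filter order and notch Q are assumptions."""
    b, a = butter(4, [1, 60], btype='bandpass', fs=fs)
    eeg = filtfilt(b, a, eeg, axis=-1)          # zero-phase band-pass
    b, a = iirnotch(60, Q=30, fs=fs)
    eeg = filtfilt(b, a, eeg, axis=-1)          # zero-phase notch
    return resample_poly(eeg, FS_TARGET, fs, axis=-1)
```

Applied to a 4 s imagery epoch (24 channels × 4,000 samples at 1,000 Hz), this yields a 24 × 1,000 array at 250 Hz.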

Fig. 3: Overall flowchart of the proposed BFR-CNN. It consists of two convolution blocks: the first convolution block creates a receptive field, and the second block refines features.
        BFR-CNN   DeepConvNet   ShallowConvNet   EEGNet   FBCSP+RLDA
sub1     0.82        0.74           0.83          0.68       0.68
sub2     0.83        0.61           0.74          0.63       0.70
sub3     0.84        0.78           0.83          0.84       0.69
sub4     0.80        0.50           0.72          0.58       0.64
sub5     0.90        0.71           0.85          0.71       0.75
sub6     0.80        0.59           0.71          0.63       0.53
sub7     0.84        0.60           0.76          0.66       0.65
sub8     0.88        0.55           0.68          0.60       0.72
Avg.     0.84        0.64           0.77          0.67       0.67
Std.     0.04        0.10           0.06          0.10       0.11
TABLE I: Comparison of MI task classification results

Fig. 4: Representative confusion matrices of each model. Through these confusion matrices, we can analyze the classification tendencies of each model.

II-B BFR-CNN

BFR-CNN is a compact CNN architecture designed for single-arm MI task classification. Raw EEG signals contain a large amount of information (a channel-by-time matrix) that is not relevant to MI tasks. Therefore, if the classification model can extract features and refine them into more relevant ones, classification performance can be improved. Because the classes of our dataset are composed of single-arm tasks, we assumed that spatial features from the restricted cortical region would not be sufficient for a CNN. In addition, since EEG signals have high temporal resolution, higher performance can be achieved by extracting frequency features rather than spatial features [19]. We were inspired by the concept of ShallowConvNet, which extracts log band-power features. Considering the class complexity of our data, we assumed that using more, refined features would be appropriate. We conducted a frequency-domain analysis and found high amplitudes in similar brain regions (the left side of the somatosensory cortex and motor cortex) in the topoplot of each MI task (Fig. 2). Thus, we developed a shallow CNN architecture that extracts frequency features highly relevant to single-arm MI tasks through its convolutional layers. The first convolution block consists of a temporal convolution layer, a spatial filter layer, and an average pooling layer [3]. The spatial filter is applied along the input channels to reduce their dimensionality to a single channel. We set the temporal filter size to a quarter of the sampling rate, creating a receptive field above 4 Hz that suppresses ocular artifacts. The second block is designed to refine band-power features: it comprises a convolution layer and an average pooling layer that reduce the number of features less relevant to classification. The last layer applies a softmax function after a flatten layer, normalizing the output into a probability distribution for classification. The exponential linear unit (ELU) is applied as the activation function in every convolution block [5]. We used the Adam optimizer [4] and the cross-entropy loss function for training [32]. The overall flowchart of BFR-CNN is shown in Fig. 3.
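The two-block structure described above can be sketched in PyTorch. This is a reconstruction from the prose, not the authors' implementation: the paper fixes the overall structure (temporal convolution with kernel length fs/4, a spatial filter collapsing the channel dimension, average pooling, a second refining convolution block, then flatten and softmax), but the filter counts, pooling sizes, and refining-kernel length used here are assumptions.

```python
import torch
import torch.nn as nn

class BFRCNN(nn.Module):
    """Sketch of the two-block BFR-CNN. Filter counts (16, 8), pooling
    widths (4), and the refining kernel (8) are assumptions; the paper
    specifies only the block structure and the fs/4 temporal kernel."""

    def __init__(self, n_channels=24, n_samples=1000, n_classes=4, fs=250):
        super().__init__()
        k = fs // 4  # temporal kernel: a quarter of the sampling rate
        self.block1 = nn.Sequential(
            nn.Conv2d(1, 16, (1, k)),            # temporal convolution
            nn.Conv2d(16, 16, (n_channels, 1)),  # spatial filter over channels
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
        )
        self.block2 = nn.Sequential(
            nn.Conv2d(16, 8, (1, 8)),            # band-power feature refining
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
        )
        # Infer the flattened feature count with a dummy forward pass
        with torch.no_grad():
            n_feat = self.block2(
                self.block1(torch.zeros(1, 1, n_channels, n_samples))).numel()
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(n_feat, n_classes))

    def forward(self, x):  # x: (batch, 1, channels, samples)
        # Returns logits; softmax is applied by the cross-entropy loss
        # during training, or explicitly at inference time.
        return self.classifier(self.block2(self.block1(x)))
```

With the paper's settings (24 channels, a 4 s imagery epoch at 250 Hz, 4 classes), the input is a (batch, 1, 24, 1000) tensor and the output a (batch, 4) logit vector.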

III Experimental results and discussion

For the evaluation, we set the mini-batch size to 32 and trained for 200 epochs. The evaluation environment was a Windows 10 desktop with an Intel Core i7-7700 CPU at 3.60 GHz, 32 GB RAM, and a GeForce Titan XP GPU. All comparisons were conducted under the same conditions.
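The training setup above (Adam, cross-entropy loss, mini-batch size 32, 200 epochs) corresponds to a standard loop like the following sketch. The learning rate is an assumption, since the paper does not report it.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train(model, X, y, epochs=200, batch_size=32, lr=1e-3):
    """Minimal training loop matching the paper's stated setup.
    X: (trials, 1, channels, samples) float tensor,
    y: (trials,) integer class labels. lr is an assumption."""
    loader = DataLoader(TensorDataset(X, y), batch_size=batch_size, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()  # applies softmax internally
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            opt.step()
    return model
```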

Table I shows the comparison of classification results. The average accuracy of BFR-CNN is 0.84, the highest among the comparison group. ShallowConvNet ranks second with 0.77, because it extracts log band-power features, similar to BFR-CNN, which refines band-power features. The remaining methods show similar classification performance. DeepConvNet records the lowest performance because it is designed for general-purpose decoding, concerned mainly with signal amplitude. EEGNet is also designed to decode EEG signals regardless of their dominant features, even in MI classification, and thus classifies slightly better than DeepConvNet. Interestingly, FBCSP with RLDA performs MI classification as well as EEGNet even though it is not a deep learning method. Through this comparison, we confirm that using band-power features is advantageous for MI task classification, and that refining them can yield higher classification performance.

Fig. 4 shows the confusion matrices of the classification results. DeepConvNet tends to confuse all MI tasks (elbow extension, grasping, and twisting), especially elbow extension, but classifies the resting state relatively well. ShallowConvNet clearly classifies twisting but confuses elbow extension, grasping, and the resting state; unlike the other methods, it is weak at classifying the resting state, although it classifies the MI tasks themselves with high accuracy. EEGNet strongly confuses the elbow extension class with grasping and twisting; however, none of the MI tasks were misclassified as the resting state. FBCSP with RLDA classifies MI tasks as well as EEGNet but shows higher accuracy on elbow extension. BFR-CNN clearly classifies twisting and the resting state; like ShallowConvNet, it tends to slightly confuse elbow extension and grasping. Overall, all methods used in this study tend to confuse the MI tasks with each other rather than with the resting state.
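The per-class tendencies read off Fig. 4 come from row-normalized confusion matrices, which can be computed as in this small sketch (class indexing and normalization convention are assumptions; any library equivalent would do):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=4):
    """Row = true class, column = predicted class, normalized per row
    so each row sums to 1 (assuming every class occurs in y_true)."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    row_sums = cm.sum(axis=1, keepdims=True)
    return cm / np.maximum(row_sums, 1)  # guard against empty rows
```

The diagonal then gives per-class recall, and off-diagonal mass shows which tasks a model confuses, e.g. elbow extension versus grasping.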

IV Conclusion and future works

In this paper, we proposed BFR-CNN, which refines band-power features to classify single-arm MI tasks. Decoding an MI dataset is time-consuming and costly because the signals are oscillatory and non-stationary. To improve MI classification performance, BFR-CNN extracts and refines frequency features that are highly relevant to MI. Through the evaluation, we demonstrated that BFR-CNN achieves the highest classification accuracy compared to existing approaches. Thus, the proposed model can be applied to control external devices, such as a robotic arm, with high performance.

V Acknowledgement

The authors thank J.-H. Cho for their help with the dataset construction and discussion of the data analysis.


  • [1] C. I. Penaloza and S. Nishio (2018) BMI control of a third arm for multitasking. Sci. Robot. 3, pp. eaat1228. Cited by: §I.
  • [2] C. Neuper, M. Wörtz, and G. Pfurtscheller (2006) ERD/ERS patterns reflecting sensorimotor activation and deactivation. Prog. Brain Res. 159, pp. 211–222. Cited by: §I.
  • [3] C.-S. Wei, T. Koike-Akino, and Y. Wang (2019) Spatial component-wise convolutional network (SCCNet) for motor-imagery EEG classification. In 9th Int. IEEE EMBS Conf. Neural Eng., pp. 328–331. Cited by: §II-B.
  • [4] D. P. Kingma and J. Ba (2014) Adam: A Method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §II-B.
  • [5] D.-A. Clevert, T. Unterthiner, and S. Hochreiter (2015) Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289. Cited by: §II-B.
  • [6] D.-O. Won, H.-J. Hwang, S. Dähne, K. R. Müller, and S.-W. Lee (2015) Effect of higher frequency on the classification of steady-state visual evoked potentials. J. Neural Eng. 13, pp. 016014. Cited by: §I.
  • [7] G. Buzsáki, C. A. Anastassiou, and C. Koch (2012) The origin of extracellular fields and currents—EEG, ECoG, LFP and spikes. Nat. Rev. Neurosci. 13, pp. 407. Cited by: §I.
  • [8] G. Pfurtscheller and C. Neuper (2001) Motor imagery and direct brain-computer communication. Proc. IEEE 89, pp. 1123–1134. Cited by: §I.
  • [9] H. H. Bülthoff, S.-W. Lee, T. A. Poggio, and C. Wallraven (2003) Biologically motivated computer vision. Springer-Verlag. Cited by: §I.
  • [10] I. K. Niazi, N. Jiang, O. Tiberghien, J. F. Nielsen, K. Dremstrup, and D. Farina (2011) Detection of movement intention from single-trial movement-related cortical potentials. J. Neural Eng. 8, pp. 066009. Cited by: §I.
  • [11] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan (2002) Brain-computer interfaces for communication and control. Clin. Neurophysiol. 113, pp. 767–791. Cited by: §I.
  • [12] J.-H. Cho, J.-H. Jeong, K.-H. Shim, D.-J. Kim, and S.-W. Lee (2018) Classification of hand motions within EEG signals for non-invasive BCI-based robot hand control. In Conf. Proc. IEEE Int. Conf. Syst. Man. Cybern. (SMC), pp. 515–518. Cited by: §I, §II-A.
  • [13] J.-H. Jeong, K.-H. Shim, J.-H. Cho, and S.-W. Lee (2019) Trajectory decoding of arm reaching movement imageries for brain–controlled robot arm system. In Int. Conf. Proc. IEEE Eng. Med. Biol. Soc. (EMBC), pp. 23–27. Cited by: §I.
  • [14] J.-H. Jeong, K.-T. Kim, D.-J. Kim, and S.-W. Lee (2019) Decoding of multi-directional reaching movements for EEG-based robot arm control. In Conf. Proc. IEEE Int. Conf. Syst. Man. Cybern. (SMC), pp. 511–514. Cited by: §I.
  • [15] J.-H. Kim, F. Bießmann, and S.-W. Lee (2014) Decoding three-dimensional trajectory of executed and imagined arm movements from electroencephalogram signals. IEEE. Trans. Neural. Syst. Rehabil. Eng. 23, pp. 867–876. Cited by: §I.
  • [16] K. K. Ang, Z. Y. Chin, H. Zhang, and C. Guan (2008) Filter bank common spatial pattern (FBCSP) in brain-computer interface. In Proc. IEEE Int. Jt. Conf. Neural Netw., pp. 2390–2397. Cited by: §I.
  • [17] K. LaFleur, K. Cassady, A. Doud, K. Shades, E. Rogin, and B. He (2013) Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain-computer interface. J. Neural. Eng. 10, pp. 1–15. Cited by: §I.
  • [18] K.-T. Kim, H.-I. Suk, and S.-W. Lee (2016) Commanding a brain-controlled wheelchair using steady-state somatosensory evoked potentials. IEEE. Trans. Neural. Syst. Rehabil. Eng. 26, pp. 654–665. Cited by: §I.
  • [19] L. F. Nicolas-Alonso and J. Gomez-Gil (2012) Brain computer interfaces, a review. Sensors 12, pp. 1211–1279. Cited by: §II-B.
  • [20] M. Kim, G. Wu, Q. Wang, S.-W. Lee, and D. Shen (2015) Improved image registration by sparse patch-based deformation estimation. Neuroimage 105, pp. 257–268. Cited by: §I.
  • [21] M.-H. Lee, J. Williamson, D.-O. Won, S. Fazli, and S.-W. Lee (2018) A high performance spelling system based on EEG-EOG signals with visual feedback. IEEE Trans. Neural Syst. Rehabil. Eng. 26, pp. 1443–1459. Cited by: §I.
  • [22] M.-H. Lee, S. Fazli, J. Mehnert, and S.-W. Lee (2015) Subject-dependent classification for robust idle state detection using multi-modal neuroimaging and data-fusion techniques in BCI. Pattern Recognit. 48, pp. 2725–2737. Cited by: §I.
  • [23] N.-S. Kwak, K.-R. Müller, and S.-W. Lee (2017) A convolutional neural network for steady state visual evoked potential classification under ambulatory environment. PLoS one 12, pp. e0172578. Cited by: §I.
  • [24] R. T. Schirrmeister, J. T. Springenberg, L. D. J. Fiederer, M. Glasstetter, K. Eggensperger, M. Tangermann, F. Hutter, W. Burgard, and T. Ball (2017) Deep learning with convolutional neural networks for EEG decoding and visualization. Hum. Brain Mapp. 38, pp. 5391–5420. Cited by: §I.
  • [25] S. R. Liyanage, C. Guan, H. Zhang, K. K. Ang, J. Xu, and T. H. Lee (2013) Dynamically weighted ensemble classification for non-stationary EEG processing. J. Neural Eng. 10, pp. 036007. Cited by: §I.
  • [26] S.-K. Yeom, S. Fazli, K.-R. Müller, and S.-W. Lee (2014) An efficient ERP-based brain-computer interface using random set presentation and face familiarity. PLoS one 9, pp. e111157. Cited by: §I.
  • [27] T.-E. Kam, H.-I. Suk, and S.-W. Lee (2013) Non-homogeneous spatial filter optimization for electroencephalogram (EEG)-based motor imagery classification. Neurocomputing 108, pp. 58–68. Cited by: §I.
  • [28] V. J. Lawhern, A. J. Solon, N. R. Waytowich, S. M. Gordon, C. P. Hung, and B. J. Lance (2018) EEGNet: A compact convolutional neural network for EEG-based brain-computer interfaces. J. Neural Eng. 15, pp. 056013. Cited by: §I.
  • [29] X. Ding and S.-W. Lee (2013) Changes of functional and effective connectivity in smoking replenishment on deprived heavy smokers: a resting-state FMRI study. PLoS One 8, pp. e59331. Cited by: §I.
  • [30] X. Zhu, H.-I. Suk, S.-W. Lee, and D. Shen (2016) Canonical feature selection for joint regression and multi-class identification in Alzheimer’s disease diagnosis. Brain Imaging Behav. 10, pp. 818–828. Cited by: §I.
  • [31] Y. Chen, A. D. Atnafu, I. Schlattner, W. T. Weldtsadik, M.-C. Roh, H. J. Kim, S.-W. Lee, B. Blankertz, and S. Fazli (2016) A high-security EEG-based login system with RSVP stimuli and dry electrodes. IEEE Trans. Inf. Forensics Secur. 11, pp. 2635–2647. Cited by: §I.
  • [32] Z. Zhang and M. Sabuncu (2018) Generalized cross entropy loss for training deep neural networks with noisy labels. In Adv. Neural Inf. Process. Syst. (NIPS), pp. 8778–8788. Cited by: §II-B.