Brain–computer interface (BCI) is an emerging technology that establishes communication pathways between a brain and an external device, e.g., a robotic arm, by measuring and identifying intention-reflecting brain activities. Generally, non-invasive BCI systems, commonly based on electroencephalogram (EEG), are categorized into two types, evoked and spontaneous BCIs. While evoked BCIs exploit evoked potentials like P300, mostly induced by an external stimulus, spontaneous BCIs focus on internal cognitive processes such as event-related (de)synchronization (ERD/ERS). In this work, we focus on motor imagery (MI) induced brain signals.
Since MI-EEGs are voluntarily inducible, MI-based BCIs are of great value from clinical and application standpoints. However, because of this self-inducing property and the difficulty of consistently inducing spontaneous EEG signals over a period of time, MI-EEG trials are highly likely to contain not only MI-relevant information but also irrelevant information, which we regard as unreliable EEG segments in the following. Generally, in an MI-EEG acquisition protocol, self-induced MI-EEG data are obtained by presenting a cue signal (e.g., a left-arrow sign to imagine a left-hand movement, a right-arrow sign to imagine a right-hand movement, etc.). Therefore, the acquired EEG data can contain unreliable segments when the subject does not fully concentrate during acquisition, for instance due to a lack of familiarity with BCIs or uncomfortable conditions, e.g., a long calibration time. Further, MI-EEG can also contain various physiological artifacts, e.g., heartbeat, eyeball movement, etc. Thus, it is not reasonable to place complete reliance on the acquired EEG trials.
As an example, Fig. 1 compares the power spectrograms of the C4 channel in left-hand MI trials from two subjects. Many neurophysiological studies on physical or imagined movements [20, 13] have consistently witnessed that MI-induced signal patterns are observed in the μ (8-12Hz) and/or β (12-30Hz) bands, even though there are no generic frequency ranges applicable to all subjects, and signal patterns vary highly among subjects and even among sessions of the same subject. In the spectrogram of Subject #28, a high-power pattern near the μ-band is observed and lasts for a period of time. However, such evident patterns are not observable in the spectrogram of Subject #11. Thus, typical machine-learning algorithms, including recent deep learning methods [25, 13, 10, 12, 4, 2], that exploit the whole signals of trials for model training and intention identification may not be equally applicable to those subjects.
There have been recent studies that considered the unreliability of information in features or raw data when training predictive models [16, 29, 18]. Among them, Li et al.  suggested that training predictive models with the full EEG signals of BCI trials is not necessarily helpful for enhancing classification performance in MI-BCIs. Inspired by their work, we performed a preliminary study comparing the performance of two models trained and tested with (1) full signals (FM) and (2) signals randomly masked out in time, thus discarding the respective features (RM), for individual subjects. The resulting plot is shown in Fig. 2. Interestingly, we observed that for many subjects, the performance with RM was comparable to or higher than that with FM. Based on this result, we hypothesize that rather than extracting features from the full signals in a trial, it would be effective to select intention-related signal segments, i.e., to discard intention-unrelated or noisy signals, and to use only the selected segments for feature representation and the ensuing classifier learning.
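The RM condition of this preliminary study can be sketched as a simple random temporal mask; the function name and the drop ratio below are illustrative assumptions, not values from our experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mask(trial, drop_ratio=0.3, rng=rng):
    """Randomly discard a fraction of timepoints from a (channels, time) trial.

    Returns the trial with only the surviving timepoints, mimicking the
    'RM' condition of the preliminary study (illustrative sketch; the
    drop ratio is an assumption, not a value from the paper).
    """
    c, t = trial.shape
    keep = rng.random(t) >= drop_ratio          # Boolean mask over timepoints
    return trial[:, keep]

trial = rng.standard_normal((20, 250))          # 20 channels, 250 timepoints
masked = random_mask(trial)
assert masked.shape[0] == 20 and masked.shape[1] <= 250
```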
Meanwhile, although MI-EEGs are obtained via this general protocol, there is no way to know whether a given temporal signal is MI-relevant or not. In other words, we cannot explicitly obtain any information about the MI-relevance of acquired EEG signals. Thus, we formulate the problem of selecting MI-relevant signal segments without any supervision as a Markov decision process and tackle it systematically via reinforcement learning (RL). To the best of our knowledge, this is the first work to propose RL-based selection of intention-related signal segments while jointly learning feature representation and a classifier in a unified framework.
The main contributions of our work are as follows:
First, we tackle the problem of estimating and selecting reliable signals in MI-EEG, an important issue for the practical use of BCIs, by formulating it in an RL framework.
Second, we devise an actor-critic model for MI-based BCI and define a novel reward function.
Third, as our proposed RL-based selection of feature vectors over time is modular, it is easy to plug into existing deep-learning architectures with minor modification, thereby helping to enhance classification performance.
Finally, in our experiments on a big MI dataset, we achieved statistically significant performance improvements by injecting our proposed method into various deep networks, further outperforming other comparative methods in the literature.
This paper is organized as follows: Section II reviews the previous studies on EEG decoding methods including deep learning approaches and MI-relevant EEG trials selection. In Section III, we propose an MI-relevant EEG signal segments selection method in an actor-critic framework  and describe our objective optimization strategy with a novel reward function. Section IV describes the EEG dataset, experimental settings, and quantitative results by comparing with the existing methods in the literature. We then analyze the results to further validate the effectiveness of our method in Section V, and finally summarize our work in Section VI.
II Related Work
Over the past decades, a common spatial pattern (CSP) algorithm  and its variants [2, 26] have been studied most actively for MI-EEG decoding by focusing on spatial filters learning such that the signals are transformed and dimension-reduced to be better discriminative. In particular, Ang et al.  band-pass filtered MI signals before applying CSP, thereby representing spatio-spectral features of EEG signals. Suk and Lee 
proposed a Bayesian framework to jointly optimize the spectral filters and spatial filters in a unified framework by defining frequency bands as random variables.
Schirrmeister et al. proposed various convolutional neural networks (CNNs) for MI classification, e.g., Shallow ConvNet and Deep ConvNet. Ko et al.  proposed an interesting recurrent spatio-temporal CNN architecture. Lawhern et al.  proposed EEGNet, which exploits depth-wise and separable convolutional layers  to reduce the number of tunable parameters, making it learnable with a limited number of EEG samples. Zhang et al.
proposed Parallel CRN and Cascade CRN, which combine recurrent neural networks (RNNs) and CNNs to extract spatio-spectral features of MI-EEG. Further, Kwon et al.  proposed a multi spectral-spatial feature representation (SSFR) using spectral filtering and CNNs for MI decoding in both subject-dependent and subject-independent manners. More recently, Ko et al.  devised a multi-scale neural network (MSNN), which learns multi-scale (in frequency) feature representations of EEG signals, and presented its applicability to various EEG-based applications.
Unlike most existing methods, which focused on spatial or spatio-spectral feature extraction with no attempt to find task-relevant EEG trials or signals within trials, Fruitet et al.  focused on task-related trial selection by formulating it as a multi-armed bandit problem . In particular, given an EEG trial, their method estimates the confidence that it contains task-relevant information compared to idle-state EEG signals. Recently, Li et al.
proposed spectral component CSP (SCCSP) to select MI-relevant EEG trials. Specifically, they conducted independent component analysis on band-pass-filtered signals to extract MI-relevant and MI-irrelevant components for each class independently. The extracted components were then used to select MI-relevant EEG trials from the training dataset, based on which they ran CSP for feature extraction and trained a classifier.
Our method is comparable to theirs in the sense that it concerns MI-relevant signal selection within a framework, but differs in the following respects. First, we consider signal segment selection within each trial, rather than trial selection in a dataset. That is, we can still use all trials in a training set, maximally utilizing all available samples. Second, compared with Fruitet et al.'s work , our method does not require idle-state EEG trials, which would otherwise be a great limitation as they require additional time for data acquisition, thus lengthening calibration accordingly. Further, unlike Li et al.'s work , which learns the baseline components used to determine the MI-relevance of EEG signals, feature extraction, and classifier learning separately, we devise a systematically integrated framework for feature representation learning, estimation and selection of MI-relevant feature vectors of signal segments, and classifier learning in a unified manner. It is also noteworthy that these modules are jointly optimized in an end-to-end manner. Throughout the paper, we use the terms signal segments and temporal feature vectors of EEG signals interchangeably.
In this section, we define the MI-relevant EEG signal segment selection problem and formulate it in a novel framework where a reinforcement-learning-induced module plays a vital role in performance enhancement. The proposed framework has three main modules, as schematized in Fig. 3(a). Given a sequence of signals in a trial X ∈ R^{C×T}, where C and T denote, respectively, the number of channels and timepoints, X first passes through an embedding network for feature representation. The represented feature vectors are then fed into our novel agent module, which estimates their task-relevance and selects the informative signal segments for the target task. Finally, a classifier makes a decision for the task, i.e., MI classification, using the selected feature vectors over time.
III-A Embedding Network
Notably, this module is flexible with respect to the network architecture, ranging from existing ones in the literature to newly customized networks. In our experiments, we exploit the existing CNN architectures ShallowNet , DeepNet , EEGNet , and MSNN . These architectures were proposed by different research groups, which demonstrated their superiority or validity in their respective experiments over various datasets. In the following, we denote the embedding network for feature representation as f(·; θ) with tunable parameters θ.
III-B Agent Network
We introduce a learnable agent that adaptively and automatically selects task-relevant feature vectors of EEG signals over time in a trial without supervision, as there is no explicit way of observing such information in a trial. For the feature vectors {f_t}_{t=1}^{T'} of the input signals, where f_t ∈ R^D and D is the dimension of the feature vectors, we devise a method for automatic selection of signal segments over time such that the selected feature vectors carry the most information related to the user's intention, induced by means of MI. However, as MI involves an internal cognitive process in the brain, there are no clear labels, i.e., informative or non-informative, indicating at which timepoints the signals actually include intention-related information.
Here, we formulate the problem of selecting informative feature vectors of signals as a Markov decision process  and devise an RL-assisted module to enhance MI-EEG classification performance. Specifically, an agent interacts with the environment, defined by a given MI-EEG trial, via a sequence of states (defined by the set of feature vectors represented by the embedding network), actions (selection or rejection), and rewards (effects of making specific actions, i.e., decisions) over time, as illustrated in Fig. 3(b).
To describe our method concretely, we define states, actions, and rewards as follows:
A state in our work is represented as a continuous vector constructed by concatenating two aggregated feature vectors: the aggregate of the feature vectors selected up to the previous timepoint, and the same aggregate but further including the feature vector of the current time t:

s_t = agg({f_i}_{i∈I_{t-1}}) ⊕ agg({f_i}_{i∈I_{t-1}∪{t}}),

where I_{t-1} is the index set of the feature vectors selected up to time t−1, and ⊕ and agg(·) denote, respectively, a vector concatenation operator and an aggregation operator. In our work, we use a mean aggregator defined as

agg(S) = (1/|S|) Σ_{f∈S} f,

where |S| is the cardinality of the set S.
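The state construction with the mean aggregator can be sketched as follows (a minimal NumPy sketch; the function names, feature dimensionality, and index-set representation are illustrative assumptions):

```python
import numpy as np

def aggregate(features, index_set):
    """Mean aggregator over the feature vectors indexed by `index_set`."""
    if not index_set:
        return np.zeros(features.shape[1])
    return features[sorted(index_set)].mean(axis=0)

def make_state(features, index_set, t):
    """State s_t: concatenation of the aggregate of the features selected so
    far and the same aggregate with the current feature vector included."""
    without_t = aggregate(features, index_set)
    with_t = aggregate(features, index_set | {t})
    return np.concatenate([without_t, with_t])

feats = np.random.default_rng(1).standard_normal((10, 8))  # 10 timepoints, D=8
s = make_state(feats, {0, 2}, 3)
assert s.shape == (16,)   # 2 * D
```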
The action space is defined so that the agent can select (1) or reject (0) each feature vector in the sequence over time, and we are interested in finding an optimal action sequence that maximizes the expected rewards. Concretely, referring to the current state, which carries the comparative information of both aggregating and not aggregating the current feature vector with the earlier-selected features, the agent estimates the effect of the current feature vector on the resulting expected rewards. Based on the agent's action a_t, the index set is updated as I_t = I_{t-1} ∪ {t} if a_t = 1, and I_t = I_{t-1} otherwise.
To define the rewards with respect to the actions made by the agent, we first define the base feature vector by taking a global average pooling (GAP)  over the whole sequence of feature vectors in a trial:

x̄ = (1/T') Σ_{t=1}^{T'} f_t,

and calculate its classification loss as a criterion. Then, the reward with respect to the current action and the corresponding feature vector is defined to measure the relative improvement over the base feature vector of Eq. (4) in terms of the loss:

r_t = ℓ(x̄) − ℓ(agg({f_i}_{i∈I_t})),

where ℓ(x̄) is the classification loss of x̄. With the reward given in Eq. (5), we then define the total return as

R_t = Σ_{k≥0} γ^k r_{t+k},

where γ denotes a discount factor to deal with delayed rewards .
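The discounted total return can be computed with a standard backward recursion; a brief sketch (the helper name and the example reward values are illustrative):

```python
def discounted_return(rewards, gamma=0.95):
    """Total return R_t = sum_k gamma^k * r_{t+k} for every timepoint t,
    computed in one backward pass over the reward sequence."""
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

assert discounted_return([1.0, 1.0], gamma=0.5) == [1.5, 1.0]
```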
III-B4 Actor-Critic Network
Technically speaking, among various RL approaches, we exploit an actor-critic model , thanks to its popularity and fitness to our problem. That is, our agent maintains a policy network π as an actor and a value estimation function V as a critic. At the t-th timepoint, the agent receives a state s_t and decides its action a_t from the set of possible actions based on the policy π(a_t|s_t). Then, the reward r_t and the next state s_{t+1} are obtained from the environment as in Eq. (3).
In our work, we utilize a synchronized parallel actor-critic network. Specifically, two distinct deep neural networks are used for policy estimation and for expected return (value) estimation, respectively. The output neurons of our policy network correspond to the probabilities of taking the selection or rejection action with respect to the current feature vector under the current state, i.e., π(a_t|s_t). Meanwhile, the value estimation network has a single output neuron, which produces the expected return V(s_t) under the current state s_t.
After our agent selects informative feature vectors over time in a trial, their aggregated vector representation is fed into a densely-connected layer for decision-making. For the aggregation, we again use the mean of the feature vectors as in Eq. (4), also called GAP . From a BCI viewpoint, the GAP layer can be understood as a means of emphasizing an important spectral range and its neighboring region for each feature dimension. Using the aggregated feature vector, the classifier outputs a class label for the input EEG trial.
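The GAP aggregation followed by a densely-connected softmax output can be sketched as below (illustrative NumPy sketch; the paper's classifier is a trained densely-connected layer, while the weights here are random placeholders):

```python
import numpy as np

def gap(features):
    """Global average pooling over the time axis: (T, D) -> (D,)."""
    return features.mean(axis=0)

def classify(features, W, b):
    """Linear classifier on the GAP-aggregated feature vector, with a
    softmax over the two MI classes (weights are placeholders)."""
    logits = gap(features) @ W + b
    e = np.exp(logits - logits.max())        # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(2)
selected = rng.standard_normal((6, 8))       # 6 selected feature vectors, D=8
W, b = rng.standard_normal((8, 2)), np.zeros(2)
probs = classify(selected, W, b)
assert probs.shape == (2,) and abs(probs.sum() - 1.0) < 1e-9
```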
III-D Optimization and Training Strategy
To jointly optimize the embedding network, the policy and value networks of the agent module, and the classifier, the proposed framework involves two types of learning schemes, i.e., supervised learning and reinforcement learning. We combine these two learning strategies in our network optimization.
First, the embedding network and the classifier are pre-trained in a supervised manner, without the agent module, by minimizing a cross-entropy loss. After pre-training, the actor and critic networks of the agent module are trained to select task-informative features by interacting with the environment. Initially, the agent takes the feature vectors represented by the pre-trained embedding network. Thus, the agent starts from a well-learned position in the parameter space, rather than a random initial point, and its parameters are thereby trained faster and more robustly.
Model parameter updating alternates between (i) the agent module and (ii) the other two modules of feature representation and classification. As the agent is iteratively updated toward finding more informative features, the embedding network and the classifier can also focus on task-oriented feature learning, and thus generalize better and more reliably.
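The training schedule can be summarized schematically as follows; the function arguments are placeholders standing in for the actual module updates, not our implementation's API:

```python
def train(pretrain_steps, alt_rounds, update_embedding_and_classifier, update_agent):
    """Schematic of the training schedule: supervised pre-training of the
    embedding network and classifier, then alternating updates between the
    agent module and the other two modules."""
    for _ in range(pretrain_steps):
        update_embedding_and_classifier()     # supervised pre-training
    for _ in range(alt_rounds):
        update_agent()                        # (i) actor-critic update
        update_embedding_and_classifier()     # (ii) embedding + classifier update

calls = []
train(2, 3, lambda: calls.append("sup"), lambda: calls.append("rl"))
assert calls == ["sup", "sup", "rl", "sup", "rl", "sup", "rl", "sup"]
```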
To optimize the sequential actions, we update the trainable parameters of the actor network and the critic network by performing gradient ascent toward maximizing the expected total return E[R_t]. Basically, the actor parameters are learned in the direction of ∇ log π(a_t|s_t) R_t. However, although this updating direction is an unbiased estimate of the policy gradient, we need to reduce the variance of this estimate by introducing another value, called the advantage, A_t. The advantage is calculated as

A_t = r_t + γV(s_{t+1}) − V(s_t).

By applying the advantage function to the gradient estimation, we define the loss for the actor network as

L_actor = −log π(a_t|s_t) A_t.
Meanwhile, the value estimation function approximates the expected return for a given state s_t, i.e., V(s_t) ≈ E[R_t|s_t]. Since we cannot directly observe the value of a specific state, the value estimation function is optimized by a bootstrapping method . By definition, the current state-value estimate V(s_t) should equal the sum of the current reward r_t and the discounted next state-value estimate γV(s_{t+1}); thus, its training loss is defined as

L_critic = (r_t + γV(s_{t+1}) − V(s_t))².
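The advantage and the two losses can be expressed compactly; a minimal sketch assuming the one-step bootstrapped advantage described above (function names are illustrative):

```python
def advantage(r_t, v_t, v_next, gamma=0.95):
    """One-step advantage A_t = r_t + gamma * V(s_{t+1}) - V(s_t)."""
    return r_t + gamma * v_next - v_t

def actor_loss(log_prob_action, adv):
    """Policy-gradient loss: -log pi(a_t|s_t) * A_t, with the advantage
    treated as a constant with respect to the actor parameters."""
    return -log_prob_action * adv

def critic_loss(r_t, v_t, v_next, gamma=0.95):
    """Bootstrapped TD loss: (r_t + gamma * V(s_{t+1}) - V(s_t))^2."""
    return advantage(r_t, v_t, v_next, gamma) ** 2

a = advantage(1.0, 0.5, 1.0, gamma=0.5)
assert a == 1.0                               # 1 + 0.5*1 - 0.5
assert critic_loss(1.0, 0.5, 1.0, gamma=0.5) == 1.0
```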
The complete pseudo-algorithm to train all the networks in our framework is presented in Algorithm 1.
In this section, we describe the dataset used for performance evaluation, our experimental scenarios, experimental settings, and a performance comparison among the comparative methods. For the performance comparison, we considered the mean, median, and min-max accuracy over all subjects.
IV-A Dataset and Preprocessing
We used a publicly available big KU-MI dataset (available at http://gigadb.org/dataset/100542), which consists of left-hand and right-hand MI tasks. MI samples were acquired across two sessions from 54 healthy subjects, recorded from 62 Ag/AgCl electrodes according to the standard 10-20 system, and sampled at 1000Hz. Each MI class of the dataset contains 50 trials of 4-second length. For preprocessing, following [12, 14], we downsampled the EEG trials to 100Hz, applied band-pass filtering between 8 and 30Hz, covering both the μ and β bands, and segmented each trial from 1 sec to 3.5 sec (250 timepoints). Finally, we selected 20 electrodes (FC-1/2/3/4/5/6, C-1/2/3/4/5/6/z, and CP-1/2/3/4/5/6/z) over the sensorimotor cortex areas.
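The preprocessing chain can be sketched as follows; for brevity this uses naive decimation and an ideal FFT band-pass rather than proper anti-aliasing and FIR/IIR filtering (e.g., via scipy.signal), so it illustrates the steps rather than reproducing our exact pipeline:

```python
import numpy as np

def preprocess(trial, fs=1000, fs_new=100, band=(8.0, 30.0), window=(1.0, 3.5)):
    """Sketch of the preprocessing chain: downsample to 100 Hz, band-pass
    8-30 Hz, and crop the 1.0-3.5 s window (250 timepoints)."""
    step = fs // fs_new
    x = trial[:, ::step]                                  # 1000 Hz -> 100 Hz
    freqs = np.fft.rfftfreq(x.shape[1], d=1.0 / fs_new)
    spec = np.fft.rfft(x, axis=1)
    spec[:, (freqs < band[0]) | (freqs > band[1])] = 0.0  # ideal band-pass
    x = np.fft.irfft(spec, n=x.shape[1], axis=1)
    lo, hi = int(window[0] * fs_new), int(window[1] * fs_new)
    return x[:, lo:hi]

trial = np.random.default_rng(3).standard_normal((20, 4000))  # 4 s at 1 kHz
out = preprocess(trial)
assert out.shape == (20, 250)
```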
IV-B Experimental Scenarios
To empirically prove the validity of our proposed method, we compare its performance with existing subject-dependent and subject-independent methods. Following the recent work of , we set up the subject-dependent and subject-independent scenarios as follows:
For the subject-dependent case, the offline data (training samples) from the second session was used to train the MI classification models. Then, the online data (testing samples) also from the second session was used for the performance validation using the trained models.
For the subject-independent scenario, we conducted a leave-one-subject-out cross-validation procedure. Concretely, we trained subject-independent MI classification models using all training subjects' offline and online data from both sessions. After training, we evaluated the trained models on the target subject's offline data from the second session.
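The leave-one-subject-out procedure can be sketched as a simple split generator (illustrative helper, not our implementation):

```python
def loso_splits(n_subjects):
    """Leave-one-subject-out splits: for each target subject, train on all
    remaining subjects' data and evaluate on the held-out target."""
    for target in range(n_subjects):
        train = [s for s in range(n_subjects) if s != target]
        yield train, target

splits = list(loso_splits(54))       # 54 subjects, as in the KU-MI dataset
assert len(splits) == 54
train, target = splits[0]
assert target not in train and len(train) == 53
```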
IV-C Experimental Settings
While training our proposed framework in Fig. 3, we used a gradient-descent-based optimizer  and a Xavier initializer . For the embedding and classification modules in our framework, we used the existing network architectures of [25, 13, 10]. Briefly, Shallow ConvNet
is composed of two convolutional layers, a temporal convolutional layer and a spatial convolutional layer, with a square activation function for embedding into a feature space. Deep ConvNet has a temporal convolutional layer, a spatial convolutional layer, and three following temporal convolutional layers with an exponential linear unit (eLU) activation function for feature representation. EEGNet  consists of a spectral convolutional layer, a spatial depthwise convolutional layer , and a temporal separable convolutional layer  with an eLU activation function for spatio-temporal feature representation. Finally, for the MSNN
, a spectral convolutional layer and three residually connected temporal separable convolutional layers and spatial convolutional layers with a leaky ReLU activation function were used as the embedding part. However, for better integration with our proposed agent module for signal segment selection, we made a slight modification to the architectures of Shallow ConvNet, Deep ConvNet, and EEGNet by replacing the last feature output layer (i.e., average pooling in Shallow ConvNet and EEGNet, max pooling in Deep ConvNet) with a GAP layer. For this reason, in the following, we differentiate these networks by naming them 'original' and 'modified' networks. As for the classification module, we utilized the above-mentioned networks' densely-connected layers, respectively. Regarding SSFR, because it was designed for energy-map-based feature representation rather than spatio-temporal features, we did not apply it in our framework.
In the pre-training phase for the embedding and classification networks, we set the number of epochs to 10. For the total return estimation, a discount factor of 0.95 was used. For the actor and critic networks, we designed densely-connected layers with softmax and sigmoid activation functions for their output layers, respectively. During training, we also applied an elastic net regularizer with coefficients of 0.01 and 0.001, respectively.
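The elastic net penalty can be written as a weighted sum of L1 and L2 terms; in the sketch below we assume, for illustration, that 0.01 weights the L1 term and 0.001 the L2 term:

```python
import numpy as np

def elastic_net(weights, l1=0.01, l2=0.001):
    """Elastic net penalty: l1 * ||w||_1 + l2 * ||w||_2^2 over all trainable
    weight tensors (the coefficient-to-term assignment is an assumption)."""
    w = np.concatenate([p.ravel() for p in weights])
    return l1 * np.abs(w).sum() + l2 * (w ** 2).sum()

w = [np.array([1.0, -2.0])]
# ||w||_1 = 3, ||w||_2^2 = 5  ->  0.01*3 + 0.001*5 = 0.035
assert abs(elastic_net(w) - 0.035) < 1e-12
```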
We implemented all the models considered in our experiments, except for the linear models and SSFR, whose performances were taken from , in TensorFlow 2, and trained them on a single Titan RTX GPU running Ubuntu 18.04.
TABLE I: Classification accuracy (%) in the subject-dependent scenario.

| Method | Mean (Std) | Median | Max-Min |
| CSP | 68.57 (17.57) | 64.50 | 100.00-42.00 |
| CSSP | 69.68 (18.53) | 63.00 | 100.00-42.00 |
| FBCSP | 70.59 (18.56) | 64.00 | 100.00-45.00 |
| SCCSP | 69.13 (16.90) | 64.50 | 100.00-48.00 |
| BSSFO | 71.02 (18.83) | 63.50 | 100.00-48.00 |
| Shallow ConvNet | 72.39 (16.38) | 68.00 | 100.00-46.00 |
| Deep ConvNet | 62.63 (13.23) | 58.50 | 100.00-50.00 |
| EEGNet | 64.93 (18.04) | 56.50 | 100.00-47.00 |
| SSFR | 71.32 (15.88) | 66.45 | 99.00-45.90 |
| MSNN | 74.39 (15.59) | 70.50 | 100.00-52.00 |
| Shallow ConvNet + AM | 74.26 (15.76) | 69.00 | 100.00-53.00 |
| Deep ConvNet + AM | 65.02 (15.48) | 58.00 | 100.00-51.00 |
| EEGNet + AM | 67.06 (18.05) | 57.00 | 100.00-50.00 |
| MSNN + AM | 77.26 (13.92) | 74.50 | 100.00-56.00 |
TABLE II: Classification accuracy (%) in the subject-independent scenario.

| Method | Mean (Std) | Median | Max-Min |
| Pooled CSP | 65.65 (16.11) | 58.00 | 100.00-45.00 |
| Fused model | 67.37 (16.01) | 62.50 | 98.00-41.00 |
| MR FBCSP | 68.59 (15.28) | 63.00 | 97.00-48.00 |
| SSFR | 74.15 (15.83) | 75.00 | 100.00-40.00 |
| MSNN | 73.96 (17.95) | 73.00 | 100.00-45.00 |
| MSNN + AM | 75.24 (17.40) | 75.00 | 100.00-45.00 |
IV-D Experimental Results
The classification accuracies for the subject-dependent scenario are summarized in TABLE I. First, our proposed method with the modified embedding and classification modules from MSNN  achieved the highest performance, by a large margin, compared to most of the other methods. Second, it is remarkable that the deep learning models integrated into our proposed framework as the embedding and classification modules achieved consistently higher performance than the corresponding original methods. It is also noteworthy that the deep learning models combined with our proposed agent module enhanced the median and minimum accuracy compared to their counterparts. This implicitly assures that our proposed framework, especially the agent module, helped boost performance across all subjects.
TABLE II summarizes the classification accuracy of the comparative methods applicable to the subject-independent scenario. For our proposed method, we defined the embedding and classification modules with MSNN due to its superiority over the other deep models in TABLE I. Again, our proposed method achieved the highest mean accuracy, with a small margin over the second-best performance by SSFR . It is also noticeable that our proposed agent module helped enhance the performance by 1.28% compared to the original MSNN.
In this section, we validate our proposed framework by conducting statistical tests between deep models with and without our agent module. We also conduct a qualitative evaluation of the effect of our proposed agent module by comparing (1) the spectrograms of randomly selected EEG signals and our agent-selected EEG signal segments and (2) the topographic maps estimated from full EEG signals and from agent-selected signal segments.
V-A Statistical Analysis
To quantitatively validate the effectiveness of our proposed framework, we conducted a two-tailed Wilcoxon signed-rank test among the original deep models, their modified versions, and the counterpart agent-involved models. The results are plotted in Fig. 4, which shows the statistical significance of our proposed agent module along with its superiority in classification accuracy. In detail, for Shallow ConvNet , EEGNet , and MSNN , the proposed framework showed statistically significant improvements. From this statistical comparison, it is reasonable to say that our proposed framework, specifically the agent module, played an important role in enhancing classification accuracy across all subjects. Additionally, we also compared the performance of MSNN  and MSNN combined with our agent module (MSNN+AM) in the subject-independent scenario, and found that our method was statistically significantly better than the original model in classification accuracy.
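A two-tailed Wilcoxon signed-rank test on paired per-subject accuracies can be run with scipy.stats.wilcoxon; the accuracies below are synthetic placeholders, not our experimental results:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(4)
# Hypothetical per-subject accuracies for a baseline model and the same
# model with the agent module (synthetic data for illustration only).
base = rng.uniform(0.5, 0.9, size=54)
with_am = base + rng.uniform(0.0, 0.05, size=54)

# Paired, two-tailed test over the 54 subjects.
stat, p = wilcoxon(base, with_am, alternative="two-sided")
assert 0.0 <= p <= 1.0
```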
V-B Qualitative Analysis
In Fig. 5, we visualize the spectrograms (via short-time Fourier transform, STFT) of the C3/C4 channel signals in randomly selected trials from two subjects, along with the respective action sequences made by our agent module plugged into MSNN. Considering the power spectrum and the agent's selection actions jointly, we observe a positive relation in the sense that the selected signal segments show high spectral power in the neighborhood of the μ and β bands. Note that the timepoints in the agent's view differ from the original input timepoints due to the series of convolution operations in the embedding module. For an intuitive interpretation of the agent's actions, we estimated and aligned the agent's timepoints with the input timepoints by computing the corresponding points in the input space in reverse.
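Mapping an agent-side timepoint back to its span in the input space follows the standard receptive-field arithmetic for stacked 1-D convolutions; a minimal sketch with illustrative (kernel, stride) settings, not the exact embedding architectures:

```python
def receptive_field(layers):
    """For a stack of 1-D conv/pool layers given as (kernel, stride) pairs,
    return the input-sample span covered by one feature-map timepoint and
    the step (in input samples) between adjacent feature-map timepoints."""
    size, jump = 1, 1
    for kernel, stride in layers:
        size += (kernel - 1) * jump
        jump *= stride
    return size, jump

# Two layers: kernel 25 / stride 1, then kernel 75 / stride 15 (illustrative).
span, step = receptive_field([(25, 1), (75, 15)])
assert span == 99 and step == 15
```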
Meanwhile, for further neurophysiological inspection, Fig. 6 visualizes topographic maps of the full signal segments in a trial and of the signal segments selected by our proposed framework for the same trials as in Fig. 5. Remarkably, the topographic maps based on only the selected signal segments showed clearer and more localized ERD/ERS patterns than those from the full signals. In particular, for Subject #2, the selected signal segments show more prominent ERD patterns around the C4 channel in the μ-rhythm than the full signal segments. Referring to the spectrogram of that subject in Fig. 5(a), there seemed to be no evident spectral power in the μ-range over the full signal segments in a trial. However, after selecting the task-informative signal segments, we could observe a meaningful and distinguishable local pattern at the C4 channel in the μ-range. Similarly, in the spectrogram of the full signals in a trial for Subject #39 in Fig. 5(b), there seemed to be less prominent local activations in the μ-range, and thus no localized ERD/ERS pattern in Fig. 6(b). However, after selecting the task-relevant signal segments and plotting the corresponding topographic map, a localized ERD/ERS around the C3 channel became observable. Based on these results, we empirically conclude that our agent module combined with MSNN is capable of finding MI-relevant EEG signal segments, thus better learning MI-related feature representations and a classifier, enhancing MI classification accuracy. Note that there was no explicit guide or information for our agent to learn such neurophysiological knowledge.
In spontaneous BCIs, it is not easy for a user to consistently induce EEG signals over a period of time, especially for BCI-illiterate users who are less capable of inducing task-related brain signals. Furthermore, as spontaneous brain signal inducement involves unobservable internal cognitive processes in the brain, it is hard to measure the information level of the observed signals with respect to the target tasks, e.g., MI. Hence, not all signals in a trial necessarily reflect a user's intention.
In this work, we focused on the problem of signal reliability in an MI-EEG trial and proposed a novel framework for task-relevant signal segment selection with an RL-assisted module for better generalization of the trained predictive models. As the components of our proposed framework are modular, it was easy and straightforward to combine them with existing deep models. From our experimental results and analyses on a publicly available big MI dataset, we validated our proposed method through both quantitative and qualitative comparisons.
Although we achieved state-of-the-art performance in both subject-dependent and subject-independent scenarios in our experiments, there is still room to further improve our method. In particular, the agent module works on a sequence of feature vectors obtained from a preceding embedding module using the full signals in a trial. This mechanism may not be practical for online BCIs. Thus, the current agent module needs to be improved to be better suited for real-time BCIs, which will be our forthcoming research issue.
This work was supported by Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korea government (No. 2017-0-00451, Development of BCI based Brain and Cognitive Computing Technology for Recognizing User’s Intentions using Deep Learning).
This work was also supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00079, Department of Artificial Intelligence (Korea University)).
-  (2016) Tensorflow: Large-scale Machine Learning on Heterogeneous Distributed Systems. arXiv preprint arXiv:1603.04467. Cited by: §IV-C.
-  (2008) Filter Bank Common Spatial Pattern (FBCSP) in Brain-Computer Interface. In Proc. Int. Joint Conf. Neural Netw. (IJCNN), pp. 2390–2397. Cited by: §I, §II, TABLE I.
Soft Computing-based EEG Classification by Optimal Feature Selection and Neural Networks. IEEE Trans. Ind. Informat. 15 (10), pp. 5747–5754. Cited by: §I.
-  (2008) Optimizing Spatial Filters for Robust EEG Single-trial Analysis. IEEE Signal Process. Mag. 25 (1), pp. 41–56. Cited by: §I, §II, TABLE I.
-  (2017) Xception: Deep Learning with Depthwise Separable Convolutions. In Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1251–1258. Cited by: §II, §IV-C.
-  (2012) Bandit Algorithms Boost Brain Computer Interfaces for Motor-Task Selection of A Brain-Controlled Button. In Proc. Adv. Neural Inf. Process. Syst., pp. 449–457. Cited by: §II, §II.
-  (2010) Understanding the Difficulty of Training Deep Feedforward Neural Networks. In Proc. 13th Int. Conf. Artif. Intell. Statist. (AISTATS), pp. 249–256. Cited by: §IV-C.
-  (2020) EEG-based Brain-Computer Interfaces (BCIs): A Survey of Recent Studies on Signal Sensing Technologies and Computational Intelligence Approaches and their Applications. arXiv preprint arXiv:2001.11337. Cited by: §II.
-  (2000) Independent Component Analysis: Algorithms and Applications. Neural Netw. 13 (4-5), pp. 411–430. Cited by: §II.
-  (2020) Multi-Scale Neural Network for EEG Representation Learning in BCI. arXiv preprint arXiv:2003.02657. Cited by: §I, §II, §IV-C, §IV-D1, TABLE I, TABLE II, §V-A.
-  (2018) Deep Recurrent Spatio-Temporal Neural Network for Motor Imagery based BCI. In Proc. 6th Int. Winter Conf. Brain-Comput. Interface (BCI), pp. 1–3. Cited by: §II, §III-A.
-  (2019) Subject-Independent Brain-Computer Interfaces Based on Deep Convolutional Neural Networks. IEEE Trans. Neural Netw. Learn. Syst.. Cited by: §I, §II, §IV-A, §IV-B, §IV-C, §IV-C, §IV-D2, TABLE I, TABLE II.
-  (2018) EEGNet: A Compact Convolutional Neural Network for EEG-based Brain–Computer Interfaces. J. Neural Eng. 15 (5), pp. 056013. Cited by: §I, §II, §III-A, §IV-C, TABLE I, §V-A.
-  (2019) EEG Dataset and OpenBMI Toolbox for Three BCI Paradigms: An Investigation into BCI Illiteracy. GigaScience 8 (5), pp. giz002. Cited by: §I, §IV-A.
-  (2005) Spatio-Spectral Filters for Improving the Classification of Single Trial EEG. IEEE Trans. Biomed. Eng. 52 (9), pp. 1541–1548. Cited by: TABLE I.
-  (2017) Relevant Feature Integration and Extraction for Single-Trial Motor Imagery Classification. Front. Neurosci. 11, pp. 371. Cited by: §I, §I, §II, §II, TABLE I.
-  (2013) Network in Network. arXiv preprint arXiv:1312.4400. Cited by: §III-B3, §III-C.
-  (2017) Quality Aware Network for Set to Set Recognition. In Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 5790–5799. Cited by: §I.
-  (2009) Comparison of Designs Towards a Subject-Independent Brain-Computer Interface based on Motor Imagery. In Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBC), pp. 4543–4546. Cited by: TABLE II.
-  (2000) Mu and Beta Rhythm Topographies during Motor Imagery and Actual Movements. Brain Topogr. 12 (3), pp. 177–186. Cited by: §I.
-  (2016) Asynchronous Methods for Deep Reinforcement Learning. In Proc. 33rd Int. Conf. Mach. Learn. (ICML), pp. 1928–1937. Cited by: §I, §I, §III-B4, §III-D.
-  (2001) Motor Imagery and Direct Brain–Computer Communication. Proc. IEEE 89 (7), pp. 1123–1134. Cited by: §I.
-  (2015) A Subject-Independent Pattern-based Brain-Computer Interface. Front. Behav. Neurosci. 9, pp. 269. Cited by: TABLE II.
-  (2016) An Overview of Gradient Descent Optimization Algorithms. arXiv preprint arXiv:1609.04747. Cited by: §IV-C.
-  (2017) Deep Learning with Convolutional Neural Networks for EEG Decoding and Visualization. Hum. Brain Mapp. 38 (11), pp. 5391–5420. Cited by: §I, §II, §III-A, §IV-C, TABLE I, §V-A.
-  (2012) A Novel Bayesian Framework for Discriminative Feature Extraction in Brain-Computer Interfaces. IEEE Trans. Pattern Anal. Mach. Intell. 35 (2), pp. 286–299. Cited by: §II, TABLE I.
-  (2018) Reinforcement Learning: An Introduction. MIT press. Cited by: §II, §III-B3, §III-B, §III-D, §III-D.
-  (2018) Cascade and Parallel Convolutional Recurrent Neural Networks on EEG-based Intention Recognition for Brain Computer Interface. In Proc. 32nd AAAI Conf. Artif. Intell. (AAAI), Cited by: §II.
-  (2019) Feature Aggregation with Reinforcement Learning for Video-Based Person Re-Identification. IEEE Trans. Neural Netw. Learn. Syst. 30 (12), pp. 3847–3852. Cited by: §I.
-  (2019) A Survey on Deep Learning based Brain Computer Interface: Recent Advances and New Frontiers. arXiv preprint arXiv:1905.04149. Cited by: §II.