EEG-based Brain-Computer Interfaces (BCIs): A Survey of Recent Studies on Signal Sensing Technologies and Computational Intelligence Approaches and their Applications

by   Xiaotong Gu, et al.
University of Tasmania

Brain-Computer Interface (BCI) is a powerful communication tool between users and systems, which enhances the capability of the human brain in communicating and interacting with the environment directly. Advances in neuroscience and computer science in the past decades have led to exciting developments in BCI, thereby making BCI a top interdisciplinary research area in computational neuroscience and intelligence. Recent technological advances such as wearable sensing devices, real-time data streaming, machine learning, and deep learning approaches have increased interest in electroencephalographic (EEG) based BCI for translational and healthcare applications. Many people benefit from EEG-based BCIs, which facilitate continuous monitoring of fluctuations in cognitive states under monotonous tasks in the workplace or at home. In this study, we survey the recent literature on EEG signal sensing technologies and computational intelligence approaches in BCI applications, compensating for the gaps in systematic summaries of the past five years (2015-2019). Specifically, we first review the current status of BCI and its significant obstacles. Then, we present advanced signal sensing and enhancement technologies to collect and clean EEG signals, respectively. Furthermore, we demonstrate state-of-the-art computational intelligence techniques, including interpretable fuzzy models, transfer learning, deep learning, and their combinations, to monitor, maintain, or track human cognitive states and operating performance in prevalent applications. Finally, we present several innovative BCI-inspired healthcare applications and discuss future research directions in EEG-based BCIs.




1 Introduction

1.1 An overview of brain-computer interface (BCI)

1.1.1 What is BCI

Research on brain-computer interfaces (BCIs) first appeared in the 1970s, addressing an alternative transmission channel that does not depend on the brain's normal output paths of peripheral nerves and muscles [181]. An early concept of BCI proposed measuring and decoding brainwave signals to control a prosthetic arm and carry out a desired action [67]. The term 'BCI' was later formally defined as a direct communication pathway between the human brain and an external device [187]. In the past decade, human BCIs have attracted considerable attention.

The corresponding human BCI systems aim to translate human cognition patterns from brain activities. A BCI uses recorded brain activity to communicate with a computer and to control external devices or environments in a manner compatible with the user's intentions [117], such as controlling a wheelchair or robot as shown in Fig. 1. There are two primary types of BCIs. The first type comprises active and reactive BCIs. An active BCI derives its patterns from brain activity that is directly and consciously controlled by the user, independently of external events, for controlling a device [58]. A reactive BCI extracts its outputs from brain activity arising in reaction to external stimulation, which the user modulates indirectly to control an application. The second type is the passive BCI, which explores the user's perception, awareness, and cognition without the purpose of voluntary control, for enriching human-computer interaction (HCI) with implicit information [200].

Fig. 1: The framework of brain-computer interface (BCI)

1.1.2 Application areas

The promising future of BCIs has encouraged the research community to interpret brain activities and establish various research directions. Here, we address the best-known application areas in which BCIs have been widely explored and applied: (1) BCI is recognised as having the potential to be an approach that uses intuitive and natural human mechanisms of processing thought to facilitate interactions [156]. Since the common methods of traditional HCI are mostly restricted to manual interfaces, and the majority of other designs have not been extensively adopted [92], BCIs change how HCI can be used in complex and demanding operational environments and could become a revolution in, and the mainstream of, HCIs for different areas such as computer-aided design (CAD) [142]. Using BCIs to monitor user states for intelligent assistive systems has also been substantially pursued in the entertainment and health areas [79]. (2) Another area where BCI applications are broadly used is game control for entertainment. Some BCI devices are inexpensive, easily portable, and easy to equip, which makes them feasible for broad use in entertainment communities. The compact and wireless BCI headsets developed for the gaming market are flexible and mobile, and require little effort to set up. Though their accuracy is not as precise as that of BCI devices used in medical areas, they are still practical for game developers and have been successfully commercialised for the entertainment market. Some specific models [122] are combined with sensors to detect additional signals, such as facial expressions, which could improve usability for entertainment applications. (3) BCIs have also been playing a significant role in neurocomputing for pattern recognition and machine learning on brain signals, and in the analysis of computational expert knowledge. Recent studies [13] [60] [153] have shown that network neuroscience approaches can quantify brain network reorganisation across different varieties of human learning. The results of these studies point towards the optimisation of adaptive BCI architectures and the prospect of revealing the neural basis and future performance of BCI learning. (4) In the healthcare field, brainwave headsets, which can collect expressive information with the software development kit provided by the manufacturer, have been utilised to help severely disabled people effectively control a robot through subtle movements such as moving the neck and blinking [162]. BCIs have also been used to assist people who have lost muscular capacity to restore communication and control over devices. A broadly investigated clinical area is BCI spelling devices, one well-known application of which is the P300-based speller. Building upon the P300-based speller, [55] used the BCI2000 platform [84] to develop a BCI speller with which non-experienced users achieved positive results. Overall, BCIs have contributed to various fields of research. As summarised in Fig. 2, they are involved in game interaction for entertainment, robot control, emotion recognition, fatigue detection, sleep quality assessment, and clinical fields such as the detection and prediction of abnormal brain states, including seizures, Parkinson's disease, Alzheimer's disease, and schizophrenia.

Fig. 2: BCI contributes to various fields of research

1.1.3 Brain imaging techniques

Brain-sensing devices for BCI can be categorised into three groups: invasive, partially invasive, and non-invasive [64]. With invasive and partially invasive devices, brain signals are collected from intracortical and electrocorticography (ECoG) electrodes whose sensors tap directly into the brain's cortex. Because invasive devices insert electrodes into the brain cortex, each microelectrode of the intracortical collection technique records spiking that contributes to the population's temporally evolving output pattern; however, since microelectrodes only detect spiking in the proximity of a neuron, only a small sample of the complete set of neurons in the connected regions is represented. In contrast, ECoG, a partially invasive electrophysiological monitoring method, uses electrodes attached under the skull. With lower surgical risk, a rather high Signal-to-Noise Ratio (SNR), and a higher spatial resolution compared with the intracortical signals of invasive devices, ECoG has better prospects in the medical area. Specifically, ECoG has a wider bandwidth for gathering significant information from functional brain areas to train a high-frequency BCI system, and its high-SNR signals are less prone to artefacts arising from, for instance, muscle movement and eye blinks.

Even though reliable information on cortical and neuronal dynamics can be provided by invasive or partially invasive BCIs, for everyday applications the potential benefit of increased signal quality is neutralised by the surgical risks and long-term implantation of invasive devices [180]. Recent studies have therefore investigated non-invasive technologies that use external neuroimaging devices to record brain activity, including Functional Near-Infrared Spectroscopy (fNIRS), Functional Magnetic Resonance Imaging (fMRI), and Electroencephalography (EEG). Specifically, fNIRS uses near-infrared (NIR) light to assess the aggregation levels of oxygenated hemoglobin (Hb) and deoxygenated hemoglobin (deoxy-Hb). fNIRS depends on the hemodynamic, or blood-oxygen-level-dependent (BOLD), response to formulate functional neuroimages [8]. Because of the power limits of the light and its spatial resolution, fNIRS cannot measure cortical activity deeper than 4 cm in the brain. Also, because blood flow changes more slowly than electrical or magnetic signals, Hb and deoxy-Hb vary slowly and steadily, so the temporal resolution of fNIRS is comparatively lower than that of electrical or magnetic measurements. fMRI monitors brain activity by assessing blood-flow-related changes in brain areas, relying on the magnetic BOLD response; this enables fMRI to achieve a higher spatial resolution and to collect brain information from deeper areas than fNIRS, since magnetic fields penetrate better than NIR light. However, similar to fNIRS, fMRI suffers from low temporal resolution because of the blood-flow speed constraint. Relying on the magnetic response also brings another flaw: magnetic fields are more strongly distorted by deoxy-Hb than by Hb molecules.
The most significant disadvantage of fMRI across usage scenarios is that it requires an expensive and heavy scanner to generate the magnetic fields; the scanner is not portable and requires considerable effort to move. Considering the relative gains in signal quality, reliability, and mobility compared with other imaging approaches, non-invasive EEG-based devices have become the most popular modality for real-world BCIs and clinical use [151].

EEG signals, featuring direct measures of cortical electrical activity and high temporal resolution, have been pursued extensively by many recent BCI studies [143] [199] [9]. As the most widely used non-invasive technique, EEG electrodes can be installed in a headset, which is accessible and portable for diverse occasions. EEG headsets collect signals in several non-overlapping frequency bands (e.g. Delta, Theta, Alpha, Beta, and Gamma), based on the strong intra-band correlation with distinct behavioural states [204]; the different frequency bands present diverse corresponding characteristics and patterns. Furthermore, the temporal resolution is exceedingly high, at the millisecond level, and the risk to subjects is very low compared with invasive techniques and with non-invasive techniques that require high-intensity magnetic field exposure. In this survey, we discuss several highly portable and comparatively low-priced EEG devices. A drawback of the EEG technique is its low spatial resolution, owing to the limited number of electrodes. When using EEG signals for BCI systems, the inferior SNR must also be considered, because objective factors such as environmental noise and subjective factors such as fatigue can contaminate the EEG signals. Recent research coping with this disadvantage of EEG technology is discussed in our survey as well.
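To make the band decomposition concrete, the sketch below estimates per-band power for a single channel with a standard Welch periodogram. The band boundaries and the `band_powers` helper are illustrative choices for this survey, not a fixed standard; the exact cut-offs vary across studies.

```python
import numpy as np
from scipy.signal import welch

# Illustrative band boundaries in Hz; exact values vary across studies.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg, fs):
    """Spectral power per canonical EEG band for one channel.

    eeg : 1-D array of samples; fs : sampling rate in Hz.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * fs))
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].sum())
            for name, (lo, hi) in BANDS.items()}

# Synthetic check: a 10 Hz oscillation should dominate the alpha band.
fs = 250
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(len(t))
p = band_powers(signal, fs)
assert p["alpha"] == max(p.values())
```

In a real pipeline the same computation would run per channel and per epoch, with the resulting band powers serving as features for the classifiers discussed later.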

By recording the small potential changes in EEG signals immediately after visual or auditory stimuli appear, it is possible to observe the brain's specific responses to specific stimulus events. This phenomenon is formally called an Event-Related Potential (ERP), defined as a slight voltage originating in the brain as a response to a specific stimulus or event [169]; ERPs are separated into Visual Evoked Potentials (VEPs) and Auditory Evoked Potentials (AEPs). For EEG-based BCI studies, the P300 wave is a representative ERP response elicited in the brain cortex of a monitored subject, presenting as a positive deflection in voltage with a latency of roughly 250 to 500 ms [154]. Specific to VEP tasks, Rapid Serial Visual Presentation (RSVP), the process of continuously presenting multiple images per second at high display rates, is considered to have potential in enhancing human-machine symbiosis [104], and steady-state visual evoked potentials (SSVEPs) are a resonance phenomenon originating mainly in the visual cortex when a person focuses visual attention on a light source flickering at a frequency above 4 Hz [131]. In addition, the psychomotor vigilance task (PVT) is a sustained-attention, reaction-timed task that measures the speed with which subjects respond to a visual stimulus; it correlates with assessments of alertness, fatigue, and psychomotor skills [107].
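A minimal SSVEP decoder illustrates the resonance idea: because attending a flickering source produces a spectral peak at the flicker frequency, comparing FFT magnitudes at the candidate frequencies can already identify the target. The `detect_ssvep` helper below is a hypothetical sketch under that assumption; practical SSVEP systems usually apply CCA against sine/cosine reference templates and also examine harmonics.

```python
import numpy as np

def detect_ssvep(eeg, fs, candidate_freqs):
    """Pick the candidate flicker frequency with the largest spectral peak.

    eeg : 1-D array of samples from an occipital channel; fs : sampling
    rate in Hz; candidate_freqs : flicker frequencies of the stimuli.
    """
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(len(eeg), d=1 / fs)
    # Score each candidate by the magnitude of its nearest FFT bin.
    scores = [spectrum[int(np.argmin(np.abs(freqs - f)))]
              for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(scores))]

# Simulated trial: the subject attends a 12 Hz flicker.
fs = 250
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.randn(len(t))
assert detect_ssvep(eeg, fs, [8.0, 10.0, 12.0, 15.0]) == 12.0
```

The single-bin comparison works here because the simulated oscillation is strong; with real EEG, longer windows and template-based scoring are needed to reach usable accuracy.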

1.2 Our Contributions

Recent (2015-2019) EEG survey articles have focused more on separately summarising statistical features or patterns, collecting classification algorithms, or introducing deep learning models. For example, a recent survey [116] provided a comprehensive outline of the latest classification algorithms used in EEG-based BCIs, comprising adaptive classifiers, transfer and deep learning, matrix and tensor classifiers, and several other miscellaneous classifiers. Although [116] argues that deep learning methods have not demonstrated convincing improvement over state-of-the-art BCI methods, the results recently reviewed in [46] illustrate that some deep learning methods, for instance convolutional neural networks (CNNs), generative adversarial networks (GANs), and deep belief networks (DBNs), achieve outstanding classification accuracy. The latter review [46] synthesised performance results across several general task groups in using deep learning for EEG classification, including sleep staging, motor imagery, emotion recognition, seizure detection, mental workload, and event-related potential detection. In general, that review demonstrated that several deep learning methods outperformed other neural networks. However, it offers no comparison between deep neural networks and traditional machine learning methods to prove the improvement of modern neural network algorithms in EEG-based BCIs. Another recent survey [146] systematically reviewed articles that applied deep learning to EEG in diverse domains, extracting and analysing various datasets to identify research trends. It provides a comprehensive statistical evaluation of articles published between 2010 and 2018, but it does not include information about the EEG sensors or hardware devices that collect EEG signals. Additionally, an up-to-date survey article released in early 2019 [205] comprehensively reviewed brain signal categories for BCI and deep learning techniques for BCI applications, with a discussion of the application areas of deep learning-based BCI. While that survey provides a systematic summary of relevant publications between 2015 and 2019, it does not thoroughly investigate combined machine learning, from which deep learning originated and evolved, deep transfer learning, or the interpretable fuzzy models used for non-stationary and non-linear signal processing.

In short, the recent review articles lack a comprehensive survey of recent EEG sensing technologies, signal enhancement, relevant machine learning algorithms with interpretable fuzzy models, and deep learning methods for specific BCI applications, in addition to healthcare systems. In our survey, we aim to address the above limitations and include the BCI studies released in 2019. The main contributions of this study can be summarised as follows:

• Advances in sensors and sensing technologies.

• Characteristics of signal enhancement and online processing.

• Recent machine learning algorithms and the interpretable fuzzy models for BCI applications.

• Recent deep learning algorithms and combined approaches for BCI applications.

• Evolution of healthcare systems and applications in BCIs.

2 Advances in Sensing Technologies

2.1 An overview of EEG sensors/devices

Advanced sensor technology has enabled the development of smaller and smarter wearable EEG devices for lifestyle and related medical applications. In particular, recent advances in EEG monitoring technologies pave the way for wearable, wireless EEG monitoring devices with dry sensors. In this section, we summarise the advances of EEG devices with wet or dry sensors. We also compare commercially available EEG devices in terms of the number of channels, sampling rate, and whether the device is stationary or portable, covering many companies able to cater to the specific needs of EEG users.

2.1.1 Wet sensor technology

For non-invasive EEG measurements, wet electrode caps are normally attached to the user's scalp with gel as the interface between sensors and scalp. Wet sensors rely on electrolytic gels to provide a clean conductive path and to decrease the skin-electrode contact impedance. Applying the gel, however, can be uncomfortable and inconvenient for users and too time-consuming and laborious for everyday use [57]. Yet without the conductive gel, the electrode-skin impedance cannot be kept low, and the quality of the measured EEG signals may be compromised.

2.1.2 Dry sensor technology

Because collecting EEG data with wet electrodes requires attaching sensors to the subject's skin, which is undesirable in real-world applications, the development of dry sensors for EEG devices has advanced dramatically over the past several years [129]. A major advantage of dry sensors over their wet counterparts is that they substantially enhance system usability [108]: the headset is very easy to wear and remove, allowing even unassisted users to put it on by themselves in a short time. For example, Siddharth et al. [160] designed dry-electrode bio-sensors to measure physiological activity while avoiding the limitations of wet-electrode EEG equipment; the signal quality is comparable to that of wet-electrode systems, but without the need for skin abrasion, preparation, or gels. In a follow-up study [160], novel dry EEG sensors that actively filter the EEG signal from ambient electrostatic noise were designed and evaluated with an ultra-low-noise, high-sampling-rate analog-to-digital converter module. The study compared the proposed sensors with commercially available EEG sensors (Cognionics Inc.) on a steady-state visually evoked potential (SSVEP) BCI task; SSVEP-detection accuracy was comparable between the two sensor types, with 74.23% average accuracy across all subjects.

Further, following the trend of wearable biosensing devices, Chi et al. [39] [38] reviewed and designed wireless devices with dry and noncontact EEG electrode sensors. Chi et al. [38] developed a new integrated sensor that controls the sensitive input node to attain high input impedance, with a complete shield of the input node from the active transistor and bond-pads through to the specially built chip package. Their experimental results, using data collected from a noncontact electrode on top of the hair, demonstrate a maximum information transfer rate at 100% accuracy, showing a promising future for dry and noncontact electrodes as viable tools for EEG applications and mobile BCIs.

The concept of augmented BCIs (ABCIs) is proposed in [108] for everyday environments, in which signals are recorded via biosensors and processed in real time to monitor human behaviour. An ABCI comprises non-intrusive, quick-setup EEG solutions requiring minimal or no training, to accurately collect long-term data with the benefits of comfort, stability, robustness, and longevity. In their study of a broad range of approaches to ABCIs, developing portable EEG devices with dry electrode sensors is a significant target for mobile human brain imaging, and future ABCI applications build on biosensing technology and devices.

2.2 Commercialised EEG devices

Table I lists 21 products of 17 brands with eight attributes providing a basic overview of EEG headsets. The attribute 'Wearable' shows whether the monitored human subjects can wear the devices and move around without movement constraints, which partially depends on the transmission type: whether the headset connects to software via Wi-Fi, Bluetooth, or other wireless techniques, or via a tethered connection. The number of channels of each EEG device can be categorised into three groups: a low-resolution group with 1 to 32 channels; a medium-resolution group with 33 to 128 channels; and a high-resolution group with more than 128 channels. Most brands offer more than one device, so the channel counts in Table I span a wide range. The low-resolution devices mainly cover the frontal and temporal locations, some of which also deploy sensors to collect EEG signals from five locations, while the medium- and high-resolution groups cover scalp locations more comprehensively. The number of channels also affects the EEG sampling rate of each device, with the low- and medium-resolution groups generally sampling at 500 Hz and the high-resolution group reaching sampling rates above 1,000 Hz. Additionally, Fig. 3 presents all commercialised EEG devices listed in Table I.

Brand Product Wearable Sensors type Channels No. Locations Sampling rate Transmission Weight
NeuroSky MindWave Yes Dry 1 F 500 Hz Bluetooth 90g
Emotiv EPOC(+) Yes Dry 5-14 F, C, T, P, O 500 Hz Bluetooth 125g
Muse Muse 2 Yes Dry 4-7 F, T Bluetooth
OpenBCI EEG Electrode Cap Kit Yes Wet 8-21 F, C, T, P, O Cable
Wearable Sensing DSI 24; NeuSenW Yes Wet; Dry 7-21 F, C, T, P, O 300/600 Hz Bluetooth 600g
ANT Neuro eego mylab / eego sports Yes Dry 32 - 256 F, C, T, P, O Up to 16 kHz Wi-Fi 500g
Neuroelectrics STARSTIM; ENOBIO Yes Dry 8-32 F, C, T, P, O 125-500 Hz Wi-Fi; USB
G.tec g.NAUTILUS series Yes Dry 8-64 F, C, T, P, O 500 Hz Wireless 140g
Advanced Brain Monitoring B-Alert Yes Dry 10-24 F, C, T, P, O 256Hz Bluetooth 110g
Cognionics Quick Yes Dry 8-30; (64-128) F, C, T, P, O 250/500/1k/2k Hz Bluetooth 610g
mBrainTrain Smarting Yes Wet 24 F, C, T, P, O 250-500 Hz Bluetooth 60g
Brain Products LiveAmp Yes Dry 8-64 F, C, T, P, O 250/500/1k Hz Wireless 30g
Brain Products antiCHamp Yes Dry 32-160 F, C, T, P, O 10k Hz Wireless 1.1kg
BioSemi ActiveTwo No Wet (Gel) 280 F, C, T, P, O 2k/4k/8k/16k Hz Cable 1.1kg
EGI GES 400 No Dry 32-256 F, C, T, P, O 8k Hz Cable
Compumedics Neuroscan Quick-Cap + Grael 4k No Wet 32-256 F, C, T, P, O Cable
Mitsar Smart BCI EEG Headset Yes Wet 24-32 F, C, T, P, O 2k Hz Bluetooth 50g
Mindo Mindo series Yes Dry 4-64 F, C, T, P, O Wireless
Abbreviations: Frontal (F), Central (C), Temporal (T), Parietal (P), and Occipital (O)
TABLE I: An Overview of EEG Devices
Fig. 3: Commercialized EEG devices for BCI applications

3 Signal Enhancement and Online Processing

3.1 Artefact handling

Blind Source Separation (BSS), a broad category of unsupervised learning algorithms for signal enhancement, estimates the original sources and the parameters of a mixing system, removing the artefact signals, such as eye blinks and movement, represented in the sources. Several BSS algorithms are prevalent in BCI research, including Principal Component Analysis (PCA), Canonical Correlation Analysis (CCA), and Independent Component Analysis (ICA). PCA is one of the simplest BSS techniques: it converts correlated variables into uncorrelated variables, named principal components (PCs), by orthogonal transformation. However, artefact components are usually correlated with the EEG data, and drift potentials are similar to EEG data, both of which cause PCA to fail to separate the artefacts [87]. CCA separates components from uncorrelated sources by detecting linear correlations between two multi-dimensional variables [50], and has been applied to removing muscle artefacts from EEG signals. ICA decomposes the observed signals into independent components (ICs) and reconstructs clean signals by removing the ICs containing artefacts. ICA is the most common approach for artefact removal in EEG signals, so we review methods that utilise ICA for signal enhancement in the following sections.
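The ICA decompose-reject-reconstruct cycle can be sketched in a few lines. The example below is a toy illustration, not a validated pipeline: it flags the highest-kurtosis component as the artefact, a common heuristic for blink-like transients, whereas real systems use dedicated IC classifiers or visual inspection.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

def ica_clean(eeg, n_reject=1):
    """Decompose multi-channel EEG into independent components, zero the
    n_reject components with the highest kurtosis (blink-like transients
    are strongly super-Gaussian), and reconstruct the channels."""
    ica = FastICA(n_components=eeg.shape[1], random_state=0, max_iter=1000)
    sources = ica.fit_transform(eeg)       # shape (n_samples, n_components)
    reject = np.argsort(kurtosis(sources, axis=0))[-n_reject:]
    sources[:, reject] = 0.0               # drop the artefactual ICs
    return ica.inverse_transform(sources)  # back to channel space

# Toy mixture: a sinusoidal "neural" source, background noise, and a
# high-amplitude blink-like spike, linearly mixed into three channels.
rng = np.random.default_rng(0)
n = 2000
blink = np.zeros(n)
blink[500:520] = 20.0
srcs = np.c_[np.sin(np.linspace(0, 40 * np.pi, n)), rng.normal(size=n), blink]
mixing = np.array([[1.0, 0.5, 0.8], [0.6, 1.0, 0.7], [0.4, 0.7, 1.0]])
eeg = srcs @ mixing.T
cleaned = ica_clean(eeg)
# The blink transient is strongly attenuated after removing its component.
assert np.abs(cleaned[500:520]).max() < np.abs(eeg[500:520]).max() / 2
```

Because ICA returns components in arbitrary order and scale, automatically deciding which ICs are artefactual is the hard part; the kurtosis heuristic here stands in for the classifier-based approaches reviewed below.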

3.1.1 Eye blinks and movements

Eye blinks are more prevalent during the eyes-open condition, while rolling of the eyes may influence the eyes-closed condition. In addition, eye-movement signals located in the frontal area can affect further EEG analysis. To minimise the influence of eye contamination on EEG signals, visual inspection of artefacts and data augmentation approaches are often used.

Artefact subspace reconstruction (ASR) is an automatic component-based pre-processing mechanism that can effectually remove large-amplitude or transient artefacts contaminating EEG data. ASR has several limitations. First, while it effectually removes artefacts from EEG signals collected with a standard 20-channel EEG device, it cannot be applied to single-channel EEG recordings. Furthermore, without suitable cut-off parameters, its effectiveness in removing regularly occurring artefacts, such as eye blinks and eye movements, is limited. A possible enhancement has been proposed by using an ICA-based artefact removal mechanism as a complement to ASR cleaning [139]. A 2019 study [31] likewise considered ICA and used an automatic IC classifier as a quantitative measure to separate brain signals from artefacts for signal enhancement. These studies extended Infomax ICA [103], and the results showed that with an optimal ASR parameter between 20 and 30, ASR removes more eye artefacts than brain components.

For online processing of EEG data in near real time, a combination of online ASR, online recursive ICA, and an IC classifier was proposed in [139] to remove large-amplitude transients as well as to compute, categorise, and remove artefactual ICs. For eye movement-related artefacts, their proposed methods produced fluctuating saccade-related IC EyeCatch scores, and the altered version of EyeCatch in their study is still not an ideal way of eliminating eye-related artefacts.

3.1.2 Muscle artefacts

Contamination of EEG data by muscle activity is a well-recognised and difficult problem. These artefacts can be generated by any muscle contraction and stretch in proximity to the recording sites, such as when the subject talks, sniffs, or swallows. The degree of muscle contraction and stretch affects the amplitude and waveform of the artefacts in the EEG signals. In general, the common techniques for removing muscle artefacts include regression methods, Canonical Correlation Analysis (CCA), Empirical Mode Decomposition (EMD), Blind Source Separation (BSS), and EMD-BSS [87]. A combination of ensemble EMD (EEMD) and CCA, named EEMD-CCA, was proposed by [35] to remove muscle artefacts. Tested on real-life, semi-simulated, and simulated datasets under single-channel, few-channel, and multichannel settings, the EEMD-CCA method effectively and accurately removes muscle artefacts, making it an efficient signal processing and enhancement tool for healthcare EEG sensor networks. Other approaches combine typical methods, such as BSS-CCA followed by spectral-slope rejection to reduce high-frequency muscle contamination [82], and independent vector analysis (IVA), which takes advantage of both ICA and CCA by exploiting higher-order statistics (HOS) and second-order statistics (SOS) simultaneously to achieve high performance in removing muscle artefacts [36]. A more extensive survey of muscle artefact removal from EEG can be found in [87].

3.1.3 Introducing toolbox of signal enhancement

EEGLAB, one of the most widely used Matlab toolboxes for processing EEG and other electrophysiological data, is developed by the Swartz Center for Computational Neuroscience and provides an interactive graphical user interface for applying ICA, time/frequency analysis (TFA), and standard averaging methods to recorded brain signals. EEGLAB extensions, previously called EEGLAB plugins, are toolboxes that provide data processing and visualisation functions for EEGLAB users. At the time of writing, 106 extensions are available on the EEGLAB website, with a broad functional range including data import, artefact removal, feature detection algorithms, etc. Many extensions target artefact removal. The Automatic Artefact Removal (AAR) toolbox performs automatic removal of ocular and muscular EEG artefacts; Cochlear Implant Artefact Correction (CIAC), as its name suggests, is an ICA-based tool specifically for correcting electrical artefacts arising from cochlear implants; the Multiple Artefact Rejection Algorithm (MARA) toolbox uses EEG features in the temporal, spectral, and spatial domains to optimise a linear classifier for the component reject-vs-accept decision, including components reflecting loose electrodes; the 'icablinkmetrics' toolbox selects and removes ICA components associated with eye-blink artefacts using time-domain methods. Some toolboxes have more than one major function, such as artefact rejection plus pre-processing: 'clean_rawdata' is a suite of pre-processing methods, including ASR, for correcting and cleaning continuous data; ARfitStudio, built on 'ARfit', can quickly and intuitively correct event-related spiky artefacts as a first step of data pre-processing; ADJUST identifies and removes artefactual independent components automatically without affecting neural sources or data. A more comprehensive list of toolbox functions can be found on the EEGLAB website.

3.2 EEG Online Processing

For neuronal information processing and BCI, the ability to monitor and analyze cortico-cortical interactions in real time is one of the trends in BCI research, along with the development of wearable BCI devices and effective approaches to artefact removal. Building a reliable real-time system that can collect, extract, and pre-process dynamic data with artefact rejection and rapid computation is challenging. In the model proposed by [128], EEG data collected from a wearable, high-density (64-channel), dry EEG device are first reconstructed with a 3751-vertex mesh, anatomically constrained low-resolution electrical tomographic analysis (LORETA), a singular value decomposition (SVD) based reformulation, and Automated Anatomical Labelling (AAL), before being forwarded to the Source Information Flow Toolbox (SIFT) and a vector autoregressive (VAR) model. By applying regularized logistic regression and testing on both simulated and real data, the authors showed that the proposed system is capable of real-time EEG data analysis. Later, [129] expanded this work by incorporating ASR for artefact removal, implementing anatomically constrained LORETA for source localization, and adding an Alternating Direction Method of Multipliers solver and cognitive-state classification. Evaluation of the framework on simulated and real EEG data demonstrated the feasibility of real-time cognitive-state classification and identification. A subsequent study [77] aimed to present data with instantaneous incremental convergence: online recursive least squares (RLS) whitening and an optimized online recursive ICA algorithm (ORICA) were validated for blind source separation of high-density EEG data. The experimental results demonstrate the algorithm's ability to detect nonstationarity in high-density EEG data and to extract artefact and principal brain sources quickly. The open-source Real-time EEG Source-mapping Toolbox (REST), which supports online artefact rejection and feature extraction, is available to inspire more real-time BCI research across domains.
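As a minimal illustration of the preprocessing these pipelines rely on, the sketch below implements batch whitening, the decorrelation step that precedes ICA source separation (ORICA performs an online, recursive version of this step); the channel count, sampling rate, and synthetic signal mix are invented for the example.

```python
import numpy as np

def whiten(eeg, eps=1e-10):
    """Zero-mean and decorrelate multichannel EEG (channels x samples).

    Whitening is the standard first step before ICA: after it,
    the channel covariance matrix is (numerically) the identity.
    """
    X = eeg - eeg.mean(axis=1, keepdims=True)
    cov = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(cov)                       # eigendecomposition of covariance
    W = E @ np.diag(1.0 / np.sqrt(d + eps)) @ E.T    # inverse matrix square root
    return W @ X, W

# Synthetic 4-channel recording: correlated mixtures of two oscillatory sources.
rng = np.random.default_rng(0)
t = np.arange(2000) / 250.0                          # 8 s at 250 Hz
sources = np.vstack([np.sin(2 * np.pi * 10 * t),
                     np.sign(np.sin(2 * np.pi * 3 * t))])
mixing = rng.normal(size=(4, 2))
eeg = mixing @ sources + 0.05 * rng.normal(size=(4, t.size))

white, W = whiten(eeg)
cov_after = white @ white.T / white.shape[1]         # should be close to identity
```

After this step, an ICA algorithm only has to find a rotation of the whitened channels, which is what makes online variants such as ORICA tractable.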

4 Machine Learning and Fuzzy Models in BCI Applications

4.1 An overview of machine learning

Machine learning, a subset of computational intelligence, relies on patterns and reasoning performed by computer systems to address a specific task without explicit instructions. Machine learning tasks are generally grouped into several paradigms, such as supervised learning and unsupervised learning. In supervised learning, the data are usually divided into two subsets during the learning process: a training set (to fit a model) and a test set (to evaluate the trained model). Supervised learning can be used for classification and regression tasks: what has been learned from labelled examples in the training stage is applied to new (test) data to classify types of events or predict future events. In contrast, unsupervised machine learning is used when the training data are neither classified nor labelled. It relies on input data alone and infers a probability function to describe hidden structure, such as groupings or clusters of data points.
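As a toy illustration of the supervised train/test workflow just described, the sketch below fits a nearest-centroid classifier on a held-out split; the two-class feature data and the 75/25 split are invented for the example, and any classifier could stand in.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy labelled "EEG feature" data: two classes with different mean feature values.
X = np.vstack([rng.normal(0.0, 1.0, size=(100, 3)),
               rng.normal(3.0, 1.0, size=(100, 3))])
y = np.array([0] * 100 + [1] * 100)

# Shuffle, then hold out 25% of the samples as the test set.
idx = rng.permutation(len(y))
split = int(0.75 * len(y))
train, test = idx[:split], idx[split:]

# Training stage: learn one centroid per class from the training set only.
centroids = np.array([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])

# Test stage: assign each held-out sample to the nearest class centroid.
dists = np.linalg.norm(X[test][:, None, :] - centroids[None], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y[test]).mean()
```

The key discipline the example encodes is that the centroids are estimated from the training subset only, so `accuracy` is an unbiased estimate of performance on unseen data.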

Performing machine learning requires creating a model for training. In EEG-based BCI applications, various types of models have been used and developed. Over the last ten years, the leading families of models used in BCIs have included linear classifiers, neural networks, nonlinear Bayesian classifiers, nearest-neighbour classifiers, and classifier combinations [98]. Linear classifiers, such as Linear Discriminant Analysis (LDA), regularized LDA, and the Support Vector Machine (SVM), separate discriminant EEG patterns using linear decision boundaries between the feature vectors of each class. Neural networks assemble layers of artificial neurons to approximate arbitrary nonlinear decision boundaries; the most common type in BCI applications is the Multilayer Perceptron (MLP), which typically uses only one or two hidden layers. In nonlinear Bayesian classifiers, such as the Bayesian quadratic classifier and the Hidden Markov Model (HMM), the probability distribution of each class is modelled, and Bayes' rule is used to select the class assigned to an EEG pattern. Exploiting the distances between EEG patterns in feature space, nearest-neighbour classifiers, such as the k-nearest-neighbour (kNN) algorithm, assign a class to an EEG pattern based on its nearest neighbours. Finally, classifier combinations combine the outputs of multiple such classifiers, or train them in a way that maximizes their complementarity; the combinations used for BCI applications include boosted, voting, and stacked approaches.
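A minimal sketch of one such linear classifier, a two-class Fisher LDA, fitted on synthetic two-dimensional features; the class means and spreads are hypothetical, standing in for, e.g., left- vs. right-hand motor imagery features.

```python
import numpy as np

def fit_lda(X0, X1):
    """Two-class Fisher LDA: linear decision boundary w.x + b = 0."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class covariance.
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw, mu1 - mu0)        # discriminant direction
    b = -w @ (mu0 + mu1) / 2.0                # threshold midway between class means
    return w, b

rng = np.random.default_rng(2)
X0 = rng.normal([-1.0, 0.0], 0.5, size=(200, 2))   # hypothetical class-0 features
X1 = rng.normal([1.0, 1.0], 0.5, size=(200, 2))    # hypothetical class-1 features
w, b = fit_lda(X0, X1)

scores = np.concatenate([X0, X1]) @ w + b
pred = (scores > 0).astype(int)
truth = np.array([0] * 200 + [1] * 200)
accuracy = (pred == truth).mean()
```

The linear decision boundary is exactly the property the survey attributes to this classifier family: class assignment depends only on which side of the hyperplane `w.x + b = 0` a feature vector falls.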

Additionally, to apply machine learning algorithms to EEG data, we need to pre-process the EEG signals and extract features from the raw data, such as frequency band-power features and connectivity features between pairs of channels [48]. Figure 4 illustrates an EEG-based data pre-processing, pattern recognition, and machine learning pipeline, representing EEG data processing in a compact and relevant manner.
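To make the feature-extraction step concrete, the following sketch computes frequency band-power features from a synthetic signal with a plain FFT periodogram; the 250 Hz sampling rate, the 10 Hz test rhythm, and the band edges are illustrative choices, not values prescribed by the surveyed work.

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean periodogram power of `signal` within frequency `band` (Hz)."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * signal.size)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

fs = 250.0
t = np.arange(int(4 * fs)) / fs
rng = np.random.default_rng(3)
# Synthetic "alpha rhythm": a 10 Hz oscillation buried in noise.
x = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.normal(size=t.size)

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
features = {name: band_power(x, fs, b) for name, b in bands.items()}
```

The resulting dictionary is the kind of per-band feature vector that, concatenated across channels, is fed to the classifiers described above.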

Fig. 4: Data pre-processing, pattern recognition and machine learning pipeline in BCIs

4.2 Transfer learning

4.2.1 Why do we need transfer learning?

One of the major assumptions in traditional machine learning, such as the supervised learning described above, is that the training data used to train a classifier and the test data used to evaluate it belong to the same feature space and follow the same probability distribution. However, this assumption is often violated in practice because of human variability [12]. For example, a change in EEG data distribution typically occurs when data are acquired from different subjects, or across sessions and over time within a subject. Also, because EEG signals are variable rather than static, extended BCI sessions exhibit distinctive consistency problems for classification [1].

Thus, transfer learning aims to cope with data that violate this assumption by exploiting knowledge acquired while learning one task to solve a different but related task. In other words, transfer learning is a set of methodologies for enhancing the performance of a classifier trained on one task (or, by extension, one session or subject) using information gained while learning another. These advances can relax the limitations of BCIs: calibration need not start from scratch, the transferred information introduces less noise, and previously collected data can be reused to increase the size of the training set.

4.2.2 What is transfer learning?

Transferring knowledge from the source domain to the target domain acts as a bias or regularizer for solving the target task. Here, we describe transfer learning following the survey of Pan and Yang [134]. The source domain is known, and the target domain may be inductive (known) or transductive (unknown). Transfer learning is classified into three sub-settings according to the source and target tasks and domains: inductive transfer learning, transductive transfer learning, and unsupervised transfer learning. All such algorithms transfer knowledge across tasks or domains; the key questions are what knowledge should be transferred to enhance performance and how to avoid negative transfer.

In inductive transfer learning, labelled data in the target domain are required, while the source and target tasks may differ regardless of their domains. Inductive transfer learning can be sub-categorized into two cases based on the availability of labelled source data. If labelled source data are available, a multi-task learning method should be applied so that the target and source tasks are learned simultaneously; otherwise, a self-taught learning technique should be deployed. In transductive transfer learning, labelled data are available in the source domain but not in the target domain, while the target and source tasks are identical regardless of their domains. Transductive transfer learning can be sub-categorized into two cases based on whether the feature spaces of the source and target domains are the same. If they are, a sample selection bias / covariate shift method should be applied; otherwise, a domain adaptation technique should be deployed. Unsupervised transfer learning applies when labelled data are available in neither the source nor the target domain, while the target and source tasks are related but different; its goal is to solve clustering, density estimation, and dimensionality reduction tasks in the target domain [184].

4.2.3 Where to transfer in BCIs?

In BCIs, discriminative and stationary information can be transferred across different domains. The choice of which type of information to transfer is based on the similarity between the target and source domains. If the domains are very similar and the data sample is small, discriminative information should be transferred; if the domains differ but share common information, stationary information should be transferred to establish more invariant systems [150] [183].

Domain adaptation, a representative of transductive transfer learning, attempts to find a transformation of the data space in which the same decision rules will classify all datasets. Covariate shift, the most frequent situation encountered in BCIs, is closely related to domain adaptation: the input distributions of the training and test samples differ, while the conditional distributions of the output values are the same [158]. The essential assumption is that the marginal distribution of the data changes from subject to subject (or session to session), while the decision rule conditioned on the input remains unchanged. This assumption allows us to re-weight training data from other subjects (or previous sessions) to correct for the difference in marginal distributions across subjects (or sessions).
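The re-weighting idea can be illustrated with a deliberately simple one-dimensional example in which the source and target marginals are known Gaussians, so the importance weights p_target(x)/p_source(x) are available in closed form; real BCI pipelines must estimate this density ratio from data.

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Density of a univariate Gaussian N(mu, sigma^2)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(4)

# Source subject: inputs centred at 0; target subject: same decision rule,
# but the marginal input distribution is shifted to be centred at 1.
mu_src, mu_tgt, sigma = 0.0, 1.0, 1.0
x_src = rng.normal(mu_src, sigma, size=20000)

# Importance weights = p_target(x) / p_source(x): only the marginal changes,
# the conditional output distribution is assumed unchanged.
w = gauss_pdf(x_src, mu_tgt, sigma) / gauss_pdf(x_src, mu_src, sigma)

# Weighted source statistics now estimate target-domain statistics.
weighted_mean = np.sum(w * x_src) / np.sum(w)
```

The weighted mean of the source samples recovers the target-domain mean, which is exactly what re-weighting the training data from other subjects or sessions achieves for a classifier's loss function.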

Naturally, the effectiveness of transfer learning strongly depends on how well the two circumstances are related. Transfer learning in BCI applications can be used to transfer information (a) from tasks to tasks, (b) from subjects to subjects, and (c) from sessions to sessions. As shown in Fig. 5, given a training dataset (e.g., a source task, subject, or session), we attempt to find a transformation space in which a trained model helps to classify or predict the samples in a new dataset (e.g., a target task, subject, or session).

Fig. 5: Transfer learning in BCI
Transfer from tasks to tasks

In the BCI domain, where EEG signals are collected for subject analysis, the mental task and the operational task can in some situations be different but dependent. For instance, in a laboratory environment, a mental task such as mental subtraction or motor imagery is used to assess a device action, while the operational task is the device action itself, with performance measured from event-related potentials. Transferring decision rules between different tasks introduces novel signal variations and affects the error-related potential, which arises as a response when users recognize an error [33]. The study of [81] showed that signal variations originating from task-to-task transfer substantially influenced the distribution of classification features and the classifiers' performance. They also found that accuracy relative to the baseline dropped when operational tasks and subtasks were generalized, while the feature differences were larger compared with non-error responses.

Transfer from subjects to subjects

For EEG-based BCIs, before features learned by conventional approaches can be applied to different subjects, a training period with pilot data is required for each new subject because of inter-subject variability [34]. In the driving drowsiness detection study of Wei et al. [185], inter- and intra-subject variability were evaluated, and the feasibility of transferring models was validated by applying hierarchical clustering to a large-scale EEG dataset collected from many subjects. The proposed subject-to-subject transfer framework comprises a large-scale model pool, which ensures that sufficient data are available for positive model transfer and prominent decoding performance, and a small set of baseline calibration data from the target subject, which serves to select decoding models from the pool. Their results in driving drowsiness detection demonstrated a 90% reduction in calibration time without jeopardizing performance.

In BCIs, cross-subject transfer learning can reduce the time spent collecting training data, as in the least-squares transformation (LST) method proposed by Chiang et al. [40]. Experiments validating the LST method on cross-subject SSVEP data showed that it can reduce the number of training templates required for an SSVEP BCI. Inter- and intra-subject transfer learning has also been applied in unsupervised settings where no labelled data are available. He and Wu [73] presented a method that aligns EEG trials directly in the Euclidean space across different subjects to increase their similarity; their empirical results showed the potential of subject-to-subject transfer in an unsupervised EEG-based BCI. In [72], He and Wu proposed a novel different-label-set domain adaptation approach for task-to-task as well as subject-to-subject transfer, which considers the very challenging case in which the source subject and the target subject have partially or completely different tasks. For example, the source subject may perform left-hand and right-hand motor imageries, whereas the target subject performs feet and tongue motor imageries. They introduced a practical setting of different label sets for BCIs and proposed a novel label alignment (LA) approach to align the source label space with the target label space. LA needs as little as one labelled sample from each class of the target subject, can be used as a pre-processing step before different feature extraction and classification algorithms, and can be integrated with other domain adaptation approaches for even better performance. In applications of transfer learning to BCIs, especially EEG-based BCIs, subject-to-subject transfer within the same task is investigated most frequently.
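A minimal sketch of the Euclidean alignment idea of He and Wu [73]: each subject's trials are transformed so that their mean spatial covariance becomes the identity, which makes data from different subjects more directly comparable. The trial counts, channel numbers, and synthetic data below are invented for the example.

```python
import numpy as np

def euclidean_align(trials):
    """Align EEG trials (n_trials x channels x samples) so that their
    mean spatial covariance becomes the identity matrix."""
    covs = np.array([X @ X.T / X.shape[1] for X in trials])
    R = covs.mean(axis=0)                        # subject-specific reference covariance
    d, E = np.linalg.eigh(R)
    R_inv_sqrt = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
    return np.array([R_inv_sqrt @ X for X in trials])

rng = np.random.default_rng(5)
# Two synthetic "subjects" whose raw channel amplitudes (hence covariances) differ.
subj_a = rng.normal(size=(30, 8, 500)) * 2.0
subj_b = rng.normal(size=(30, 8, 500)) * 0.5

aligned_a, aligned_b = euclidean_align(subj_a), euclidean_align(subj_b)
mean_cov_a = np.mean([X @ X.T / X.shape[1] for X in aligned_a], axis=0)
mean_cov_b = np.mean([X @ X.T / X.shape[1] for X in aligned_b], axis=0)
```

After alignment, both subjects' mean covariances equal the identity by construction, so a classifier trained on one subject's aligned trials starts from a matched reference frame on the other's.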

For subject-to-subject transfer in single-trial event-related potential (ERP) classification, Wu [192] proposed both online and offline weighted adaptation regularization (wAR) algorithms to reduce the calibration effort. Experiments on a visually evoked potential oddball task with three different EEG headsets demonstrated that both the online and offline wAR algorithms are effective. Wu also proposed a source-domain selection approach, which selects the most beneficial source subjects for transfer; it reduces the computational cost of wAR by about 50% without sacrificing classification performance, making wAR more suitable for real-time applications.

Very recently, Cui et al. [47] proposed a novel approach, feature weighted episodic training (FWET), to completely eliminate the calibration requirement in subject-to-subject transfer in EEG-based driver drowsiness estimation. It integrates feature weighting to learn the importance of different features, and episodic training for domain generalization. FWET does not need any labelled or unlabelled calibration data from the new subject, and hence could be very useful in plug-and-play BCIs.

Transfer from sessions to sessions

The assumption of session-to-session transfer learning in BCI is that features extracted by the training module and algorithms can be applied to a different session of the same subject performing the same task. It is important to evaluate what the training sessions have in common in order to optimize the decision distribution across sessions.

Alamgir et al. [85] reviewed transfer learning methodologies in BCIs that explore and exploit the common structure of training data from several sessions to reduce training time and enhance performance. Building on a comparison and analysis of other methods in the literature, they proposed a general framework for transfer learning in BCIs, in contrast to general transfer learning studies that focus on domain adaptation, where the feature attributes of individual sessions are transferred. Their framework regards decision boundaries as random variables, so that a distribution over decision boundaries can be estimated from previous sessions. With a modified regression method and consideration of feature decomposition, their experiments on amyotrophic lateral sclerosis patients using an MI BCI demonstrated its effectiveness in learning structure. The proposed method also has some problematic aspects, including the difficulty of balancing the initialization of spatial weights and the need for an extra loop in the algorithm to determine the spectral and spatial combination.

In one of the latest paradigms, imagined speech, a human subject imagines uttering a word without producing sound or movement. García-Salinas et al. [63] proposed a method to extract codewords related to the EEG signals: a new imagined word is represented by its characteristic EEG codewords, then merged with the histograms of the prior classes and a classifier for transfer learning. This study reflects a general trend of applying session-based transfer learning to the imagined-speech domain in EEG-based BCIs.

Transfer from headset to headset

Apart from the above cases of transfer learning in BCIs, a BCI system should ideally be completely independent of any specific EEG headset, so that users can replace or upgrade their headsets freely, without re-calibration. This would greatly facilitate real-world applications of BCIs, but the goal is very challenging to achieve. One step towards it is to use historical data from the same user to reduce the calibration effort on a new EEG headset.

Wu et al. [191] proposed active weighted adaptation regularization (AwAR) for headset-to-headset transfer. It integrates wAR, which uses labelled data from the previous headset and handles class imbalance, with active learning, which selects the most informative samples from the new headset to label. Experiments on single-trial ERP classification showed that AwAR can significantly reduce the calibration data requirement for a new headset.

4.3 Interpretable Fuzzy Models

Many machine learning methods currently behave like black boxes, offering little insight into how their decisions are reached. Exploring interpretable models may be useful for understanding and improving what a BCI learns automatically from EEG signals, and possibly for gaining new insights into BCI. Here, we collect interpretable models from fuzzy sets and systems and examine their use in interpretable BCI applications.

4.3.1 Fuzzy models for interpretability

Zadeh observed that classes in the real world often cannot be given crisp definitions, making it very difficult to characterize them with strict true/false values or real numbers [198]. He introduced the concept of fuzzy sets, defining a fuzzy set as a set without sharp boundaries, in which the transition across the boundary is gradual and characterized by a membership function [83]. Fuzzy sets offer the advantage of flexible boundary conditions, and these advances have been applied in BCI applications, as shown in Fig. 6.

Furthermore, the Fuzzy Inference System (FIS) has also been used in BCI applications to automatically extract fuzzy ”If-Then” rules from data that describe which input feature values correspond to which output category [52]. Such fuzzy rules enable us to classify EEG patterns and to interpret what the FIS has learned, as shown in Fig. 6. Moreover, fuzzy measure theory, such as the fuzzy integral shown in Fig. 6, is suitable where data fusion must account for possible interactions among the data [167], such as in the fuzzy fusion of multiple information sources.
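As a small illustration of fuzzy ”If-Then” rules, the sketch below evaluates a two-rule, zero-order Sugeno-style system that maps a (hypothetical, normalized) alpha-power feature to a drowsiness score; the triangular membership functions and the rule outputs are invented for the example.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b, zero outside [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def drowsiness(alpha_power):
    """Tiny hypothetical rule base:
       IF alpha power is LOW  THEN drowsiness is 0.1
       IF alpha power is HIGH THEN drowsiness is 0.9
    Defuzzified as the firing-strength-weighted average of rule outputs."""
    low = tri(alpha_power, -0.5, 0.0, 0.5)    # membership in LOW
    high = tri(alpha_power, 0.5, 1.0, 1.5)    # membership in HIGH
    fire = np.array([low, high])
    return float(fire @ np.array([0.1, 0.9]) / fire.sum())

alert = drowsiness(0.1)   # mostly fires the LOW rule
drowsy = drowsiness(0.9)  # mostly fires the HIGH rule
```

Because every rule is human-readable and each firing strength can be inspected, the prediction is interpretable in exactly the sense the section describes.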

Another route to interpretability is a hybrid model that integrates fuzzy models with machine learning. For example, fuzzy neural networks (FNNs) combine the advantages of neural networks and FISs: the architecture is similar to a neural network's, while the inputs (or weights) are fuzzified [18]. The FNN learns the fuzzy rules and adjusts the membership functions by tuning the connection weights. In particular, the Self-Organizing Neural Fuzzy Inference Network (SONFIN) proposed by [89] uses a dynamic FNN architecture to create a self-adaptive structure for identifying the fuzzy model. Such a hybrid design is more explanatory because it exploits the learning capability of the neural network while retaining the interpretability of the fuzzy rules.

Fig. 6: Fuzzy sets, fuzzy rules, and fuzzy integrals for interpretability

4.3.2 EEG-based fuzzy models

Here, we summarize up-to-date interpretable solutions based on fuzzy models for BCI systems and applications. Using fuzzy sets, Wu et al. [188] extended multi-class EEG common spatial pattern (CSP) filters from classification to regression in a large-scale sustained-attention psychomotor vigilance task (PVT), and later integrated them with Riemannian tangent space features to improve PVT reaction-time estimation [189]. Exploiting fuzzy membership degrees, [27] used a fuzzy membership function instead of a step function, which decreased the sensitivity of entropy values to noisy EEG signals; this improved EEG complexity evaluation in resting-state and SSVEP sessions [22], with associated healthcare applications [26].

By integrating fuzzy sets with domain adaptation, [190] proposed an online weighted adaptation regularization for regression (OwARR) algorithm to reduce the amount of subject-specific calibration EEG data. Furthermore, by integrating fuzzy rules with domain adaptation, [32] built a fuzzy rule-based brain-state-drift detector using Riemann-metric-based clustering, making the drift in the data distribution observable. By adopting fuzzy integrals [193], a motor-imagery-based BCI achieved robust performance for offline single-trial classification and real-time control of a robotic arm. A follow-up work [95] explored a multi-model fuzzy-fusion-based motor-imagery BCI, which also considered possible links between EEG patterns beyond the classification performed by traditional BCIs. Additionally, fuzzy integrals have also inspired the fusion of multiple information sources, such as fusing eye movements and EEG signals to enhance emotion recognition [118].

Moving to FNNs: given the nonlinear and nonstationary characteristics of EEG signals, combining neural networks with fuzzy logic enables safe, accurate, and reliable detection and pattern identification. For example, fuzzy neural detectors using backpropagation with a fuzzy C-means algorithm [114] and Takagi-Sugeno fuzzy measurement [178] have been proposed to identify sleep stages. Furthermore, [114] proposed a recurrent self-evolving fuzzy neural network (RSEFNN) that employs an online gradient-descent learning rule to predict EEG-based driving fatigue.

5 Deep Learning Algorithms with BCI Applications

Deep learning is a specific family of machine learning algorithms in which the features and the classifier are learned jointly, directly from data. The term ‘deep learning’ refers to the architecture of the model, which is based on a cascade of trainable feature-extractor modules and nonlinearities [101]. Owing to this cascade, the learned features usually correspond to increasing levels of abstraction. Representative deep learning architectures include Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), Recurrent Neural Networks (RNNs), and Deep Neural Networks (DNNs) more broadly. Deep learning has been applied widely in BCI applications, in part because much conventional machine learning research concentrates on static data, which is not optimal for accurately classifying rapidly changing brain signals. In this section, we introduce spontaneous EEG applications with CNN architectures, the use of GANs in recent research, and the procedure and applications of RNNs, especially Long Short-Term Memory (LSTM). We also discuss deep transfer learning, which extends deep learning with transfer learning approaches, followed by examples of adversarial attacks on deep learning models for system testing.

5.1 Convolutional Neural Networks (CNN)

A Convolutional Neural Network (ConvNet or CNN) is a feedforward neural network in which information flows uni-directionally from the input, through the convolution operators, to the output [29]. As shown in Fig. 7, a CNN comprises at least three kinds of stacked layers: the convolutional layer, the pooling layer, and the fully connected layer. The convolutional layer convolves the input tensor with a bank of learnable filters, the pooling layer reduces the dimensions of the data to streamline the underlying computation, and the fully connected layer connects every neuron of the previous layer to each of its own neurons, resembling a traditional multi-layer perceptron.
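The three layer types can be sketched as a single numpy forward pass; the layer sizes, channel/sample counts, and random weights below are illustrative only, since a real BCI model would learn its weights by backpropagation.

```python
import numpy as np

rng = np.random.default_rng(6)

def conv2d(x, kernels):
    """Valid 2-D convolution of one input map with a bank of kernels."""
    kh, kw = kernels.shape[1:]
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((kernels.shape[0], oh, ow))
    for k, K in enumerate(kernels):
        for i in range(oh):
            for j in range(ow):
                out[k, i, j] = np.sum(x[i:i + kh, j:j + kw] * K)
    return out

def max_pool(maps, p=2):
    """Non-overlapping p x p max pooling over each feature map."""
    c, h, w = maps.shape
    return maps[:, :h - h % p, :w - w % p].reshape(c, h // p, p, w // p, p).max(axis=(2, 4))

# One EEG "image": electrodes x time samples (8 channels, 64 samples here).
x = rng.normal(size=(8, 64))
feat = np.maximum(conv2d(x, rng.normal(size=(4, 3, 5))), 0.0)  # convolutional layer + ReLU
pooled = max_pool(feat)                                        # pooling layer
dense_in = pooled.reshape(-1)                                  # flatten for the dense layer
logits = rng.normal(size=(2, dense_in.size)) @ dense_in        # fully connected, 2 classes
```

Each stage shrinks or reshapes the representation exactly as the text describes: convolution extracts local spatial-temporal patterns, pooling reduces the dimensions, and the fully connected layer maps the remaining features to class scores.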

Fig. 7: CNN and GAN for BCI applications

The purpose of the stacked layers in a CNN is to reduce the input data to easily identifiable formations with minimal loss, so that distinctive spatial dependencies of EEG patterns can be captured. For instance, CNNs have been used to automatically extract signal features from epileptic intracortical data [22] and to perform automatic diagnosis, superseding the time-consuming visual examination conducted by experts [23]. Below, we review recent BCI applications employing CNNs in five areas: fatigue, stress, sleep, motor imagery (MI), and emotion.

5.1.1 Fatigue-related EEG

Fatigue and drowsiness are complex mental states of reduced vigilance that can lead to catastrophic incidents when subjects are conducting activities requiring high, sustained attention, such as driving. Driving fatigue detection has attracted considerable attention in the BCI community [42] [21], especially in recent years with the significant advances of CNNs in classification [201] [37]. An EEG-based spatial-temporal convolutional neural network (ESTCNN) was proposed by Gao et al. [62] for driver fatigue detection and applied to multichannel EEG signals collected from eight human subjects. The framework comprises a core block that extracts temporal dependencies, combined with dense layers for spatial-temporal EEG information processing and classification. The method consistently reduces the data dimension in the inference process and speeds up the inference response with computational efficiency; in the experiments, ESTCNN reached an accuracy of 97.37% in fatigue EEG classification. CNNs have also been used in other EEG-based fatigue recognition and evaluation applications. Yue and Wang [197] applied EEG signals at various fatigue levels to a multi-scale CNN architecture named “MorletInceptionNet” for visual fatigue evaluation. This framework uses a joint space-time-frequency feature-extraction strategy to extract raw features, from which multi-scale temporal features are extracted by an inception architecture; the features are then passed to CNN layers for visual fatigue classification. Their structure outperformed five other state-of-the-art methodologies in classification accuracy, further evidence of the effectiveness of CNNs in fatigue-related EEG signal processing and classification.

5.1.2 Stress-related EEG

Since stress is one of the leading causes of hazardous human behaviour and of human errors that can cause dreadful industrial accidents, stress detection and recognition from EEG signals has become an important research area [159]. A recent study [86] proposed a new BCI framework with a CNN model, collecting EEG signals from 10 construction workers whose cortisol levels, a stress-related hormone, were measured to label the stress level of each task. The proposed configuration obtained a maximum accuracy of 86.62%, suggesting that a BCI framework with a CNN algorithm can be a highly effective classifier for EEG-based stress recognition.

5.1.3 Sleep-related EEG

Sleep quality is crucial for human health, and sleep stage classification, also called sleep scoring, has been investigated to understand, diagnose, and treat sleep disorders [165]. Because EEG devices are light and portable, EEG is particularly suitable for sleep scoring. CNNs have been applied to sleep stage classification in numerous studies, with single-channel EEG approaches forming the mainstream of investigation [127] [141], mainly because of their simplicity [138]. A single-channel EEG-based method using a CNN for 5-class sleep stage prediction in [165] shows competitive performance in sensible pattern detection and visualization. The significance of this work for single-channel sleep EEG processing is that it requires neither feature extraction based on expert knowledge nor signal pre-processing, learning the features most suitable for the classification task end-to-end. Mousavi et al. [127] use a data-augmentation pre-processing step and feed raw EEG signals directly into nine convolutional layers and two fully connected layers, without separate feature extraction or feature selection; their simulation results indicate an accuracy of over 93% for classification into 2 to 6 sleep-stage classes. Furthermore, a CNN-based combined classification and prediction framework, called multitask neural networks, has been proposed for automatic sleep staging in a recent study [138]. This framework can generate multiple decisions, aggregate them reliably into a final decision, and avoid the disadvantages of the conventional many-to-one approach.

5.1.4 MI-related EEG

MI means imagining the movement of a body part, rather than executing the actual movement, in a BCI system [172]. MI rests on the fact that brain activation changes along correlated brain pathways when a body part is actually moved. The common spatial pattern (CSP) algorithm [144] is an effective spatial filter that searches for a discriminative subspace maximizing the variance of one class while simultaneously minimizing that of the other, in order to classify movement actions. CNNs have also been employed for MI EEG data processing to boost classification performance, and a stream of recent research combines CNN with CSP to improve the methodology and enhance MI classification performance [97]. The MI classification framework proposed by Sakhavi, Guan and Yan [148] introduces a new temporal representation of the data, generated by the filter-bank CSP algorithm, together with a CNN classification architecture; their accuracy on the 4-class MI BCI dataset confirms the usability and effectiveness of the proposed application. Olivas-Padilla and Chacon-Murguia [133] presented two methodologies for multiple-MI classification that use a variation of the Discriminative Filter Bank Common Spatial Pattern (DFBCSP) to extract features, after which the resulting samples are arranged into a matrix and processed by one or more pre-optimized CNNs. The authors state that this method is a viable alternative for multiple-MI classification in practical BCI applications, both online and offline.
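A compact sketch of the CSP computation via whitening plus eigendecomposition; the synthetic class covariances below are invented, whereas in practice they would be averaged over band-pass-filtered training trials of each motor-imagery class.

```python
import numpy as np

def csp_filters(C1, C2, n_pairs=1):
    """Common spatial patterns from two class covariance matrices.

    Returns spatial filters whose outputs have maximal variance for one
    class and minimal variance for the other (and vice versa)."""
    d, E = np.linalg.eigh(C1 + C2)
    P = np.diag(1.0 / np.sqrt(d)) @ E.T          # whitens the composite covariance
    lam, B = np.linalg.eigh(P @ C1 @ P.T)        # eigenvalues sorted ascending
    W = B.T @ P                                  # full filter bank
    keep = list(range(n_pairs)) + list(range(len(lam) - n_pairs, len(lam)))
    return W[keep]                               # most discriminative filter pair(s)

rng = np.random.default_rng(7)
# Synthetic class covariances: class 1 has extra power on channel 0,
# class 2 has extra power on channel 3.
A1, A2 = rng.normal(size=(6, 6)), rng.normal(size=(6, 6))
C1 = A1 @ A1.T + np.diag([9.0, 0, 0, 0, 0, 0])
C2 = A2 @ A2.T + np.diag([0, 0, 0, 9.0, 0, 0])

W = csp_filters(C1, C2)
var1 = np.diag(W @ C1 @ W.T)   # variance of filtered signals under class 1
var2 = np.diag(W @ C2 @ W.T)   # variance of filtered signals under class 2
```

Because the filters jointly diagonalize both covariances, the class-1 and class-2 variances of each filtered signal sum to one: a filter that maximizes one class's variance automatically minimizes the other's, which is the discriminative property the text describes.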

5.1.5 Emotion-related EEG

EEG is believed to contain comparatively comprehensive emotional information and to offer good accessibility for affective research, while CNNs can take spatial information into account with two-dimensional filters; consequently, CNN-based deep learning algorithms have been applied to EEG signals for emotion recognition and classification in numerous recent studies [94] [182] [196] [102]. Six basic emotional states can be recognised and classified using EEG signals [17] [111]: joy, sadness, surprise, anger, fear, and disgust; emotions can also be categorised simply in a binary classification as positive or negative [125]. To apply EEG signals to CNN-based models, the signals can be fed to the models directly, or diverse entropy and power spectral density (PSD) features can be extracted as the model input. Three connectivity features extracted from EEG signals, the phase-locking value (PLV), the Pearson correlation coefficient (PCC), and the phase lag index (PLI), were examined in [125] with three proposed CNN structures. The popular EEG-based emotion classification database DEAP [96] was applied to the framework, and the connectivity features improved on the performance of the PSD features, with PLV matrices obtaining 99.72% accuracy using CNN-5. Building on this, a dynamical graph CNN (DGCNN) has also been proposed for multichannel EEG emotion recognition. In the study of Song et al. [164], the DGCNN method uses a graph to model EEG features by learning intrinsic correlations between EEG channels to produce an adjacency matrix, which is then applied to learn more discriminative features. Experiments on the SJTU emotion EEG dataset (SEED) [207] and the DREAMER dataset [91] achieved recognition accuracies of 90.4% and 86.23%, respectively.
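As a concrete example of one of these connectivity features, the PLV between two channels can be computed from the instantaneous phases of their analytic signals. The sketch below is a generic NumPy implementation (the FFT construction mirrors scipy.signal.hilbert), not the exact feature pipeline of [125]; the test signals are synthetic.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (equivalent to scipy.signal.hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def plv(x, y):
    """Phase-locking value between two equal-length signals, in [0, 1]."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

t = np.linspace(0, 1, 500, endpoint=False)
a = np.sin(2 * np.pi * 10 * t)            # 10 Hz oscillation
b = np.sin(2 * np.pi * 10 * t + 0.7)      # same frequency, fixed phase lag
rng = np.random.default_rng(1)
c = rng.normal(size=500)                  # unrelated noise channel
```

Two channels with a constant phase relationship (a, b) give a PLV near 1, while an unrelated channel (c) gives a much smaller value; a channels-by-channels PLV matrix of this kind is what is fed to the CNNs in [125].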

5.2 Generative Adversarial Networks (GAN)

5.2.1 GAN for data augmentation

In classification tasks, a substantial amount of real-world data is required to train machine learning and deep learning models, and in some cases there are limitations on acquiring a sufficient amount of real data, or the investment of time and human resources is simply too great. Proposed in 2014 and increasingly active in recent years, GAN is mainly used for data augmentation: it addresses the question of how to generate artificial, natural-looking samples that mimic real-world data by employing generative models, so that the number of training samples can be increased without further data collection [70].

A GAN comprises two synchronously trained neural networks, a “generator network” and a “discriminator network”, as shown in Fig. 7. The generator captures the input data distribution and aims to generate fake sample data, while the discriminator distinguishes whether a sample comes from the true training data. The overall aim is to obtain a collection of samples from the trained generator and to employ those samples for additional functions such as classification.
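The two competing objectives can be written down directly. The sketch below shows the standard GAN discriminator loss and the non-saturating generator loss as plain NumPy functions; it is a didactic fragment (the networks themselves are omitted), not code from any surveyed study.

```python
import numpy as np

def d_loss(d_real, d_fake):
    """Discriminator loss: push outputs on real samples toward 1
    and outputs on generated (fake) samples toward 0."""
    eps = 1e-12  # guard against log(0)
    return -np.mean(np.log(d_real + eps)) - np.mean(np.log(1.0 - d_fake + eps))

def g_loss(d_fake):
    """Non-saturating generator loss: push the discriminator's
    outputs on generated samples toward 1 (i.e., fool it)."""
    eps = 1e-12
    return -np.mean(np.log(d_fake + eps))

# a discriminator that separates real from fake well has a low loss;
# a generator whose samples fool the discriminator has a low loss
confident = d_loss(np.array([0.99]), np.array([0.01]))
uncertain = d_loss(np.array([0.6]), np.array([0.4]))
```

Training alternates gradient steps on these two losses: the discriminator descends `d_loss` while the generator descends `g_loss`, which is the adversarial game the EEG-augmentation methods below build on.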

5.2.2 EEG data augmentation

The significance of applying GAN to EEG is that it can address the major practical issue of insufficient training data. Abdelfattah, Abdelrahman and Wang [2] proposed a novel GAN model that learns statistical characteristics of the EEG and augments datasets to improve classification performance. Their study showed that the method dramatically outperforms other generative models. The Wasserstein GAN with gradient penalty (WGAN-GP) proposed by Panwar et al. [135] incorporates a BCI classifier into the framework to synthesise time-series EEG data; it was applied to event-related classification, with task classification performed by a class-conditioned WGAN-GP. GAN has also been used for EEG data augmentation to improve recognition performance, for example in emotion recognition. The framework presented in [119], named Conditional Wasserstein GAN (CWGAN), was built upon a conventional GAN to enhance EEG-based emotion recognition. The high-dimensional EEG data generated by the proposed framework were evaluated with three indicators to ensure that only high-quality synthetic data were appended as a manifold supplement. Positive experimental results on the SEED and DEAP emotion recognition datasets proved the effectiveness of the CWGAN model. A conditional Boundary Equilibrium GAN based EEG data augmentation method [120] for generating artificial differential entropy features was also proven effective in improving multimodal emotion recognition performance.

As a branch of deep learning, GAN has been employed to generate super-resolution image copies from low-resolution images. A GAN-based deep EEG super-resolution method proposed by Corley and Huang is a novel approach to generating high spatial resolution EEG data from low-resolution EEG samples by producing data for unsampled channels. This framework could address the limitation of insufficient data collected from low-density EEG devices by effectively interpolating multiple missing channels.

To the best of our knowledge, GAN has been comparatively less studied in BCIs than CNN. One major reason is that the feasibility of using GAN to generate time-sequence data is yet to be fully evaluated [53]. Investigating GAN performance in producing synthetic EEG signals, Fahimi et al. used real EEG signals as input to train a CNN-based GAN to produce synthetic EEG data and then evaluated the similarities in the frequency and time domains. Their results indicate that the EEG data generated by the GAN resemble the spatial, spectral, and temporal characteristics of real EEG signals, which opens novel perspectives for future research on GAN in EEG-based BCIs.

5.3 Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM)

Traditional neural networks are usually not capable of reasoning from previous information, but RNN, inspired by human memory, addresses this issue by adding a loop that allows information to be passed from one step of the network to the next. As shown in Fig. 8, the recurrent procedure of an RNN describes a specific node over time. The node at time t receives two input variables: x_t, the input at time t, and the “backflow loop” h_{t-1}, the hidden state from time t-1; the node at time t then exports the variable o_t. However, in practice such an RNN only looks at recent information to perform the present task, so it cannot retain long-term dependencies. In this case, long short-term memory (LSTM) networks, a special kind of RNN capable of learning long-term dependencies, are proposed. As shown in Fig. 8, an LSTM cell receives three inputs: the input x_t at the current time, the output h_{t-1} of the previous time, and the “input arrows” representing the hidden (cell) state c_{t-1} of the previous time. The LSTM cell then exports two outputs: the output h_t and the hidden state c_t (represented as the “out arrows”) of the current time. The LSTM cell contains four gates, the input gate, output gate, forget gate, and input modulation gate, which control the data flow through elementwise operations and the sigmoid and tanh functions.
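The gate equations described above can be made concrete with a single-step NumPy sketch. This follows the standard LSTM cell formulation (input, forget, input-modulation and output gates; the cell state carries the long-term memory); the weight shapes and toy inputs are our own illustration, not any surveyed architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step. W, U, b stack the four gates (input i, forget f,
    input modulation g, output o) along the first axis, each of size H."""
    H = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b
    i = sigmoid(z[0:H])            # input gate: how much new content to admit
    f = sigmoid(z[H:2*H])          # forget gate: how much old state to keep
    g = np.tanh(z[2*H:3*H])        # input modulation gate: the new content
    o = sigmoid(z[3*H:4*H])        # output gate: how much state to expose
    c_t = f * c_prev + i * g       # updated cell (hidden) state
    h_t = o * np.tanh(c_t)         # updated output
    return h_t, c_t

rng = np.random.default_rng(0)
D, H = 3, 4                        # input and hidden sizes
W = rng.normal(scale=0.1, size=(4 * H, D))
U = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for _ in range(5):                 # unroll the cell over a short sequence
    h, c = lstm_cell(rng.normal(size=D), h, c, W, U, b)
```

Because the forget gate multiplies c_{t-1} rather than repeatedly squashing it through a nonlinearity, gradients along the cell state decay far more slowly, which is what lets LSTM retain the long-term dependencies a plain RNN loses.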

Compared with traditional classification algorithms, deep learning methods based on RNN with LSTM lead to superior accuracy [177]. Attia et al. [11] presented a hybrid CNN-RNN architecture to categorise SSVEP signals in the time domain. Using RNN with LSTM architectures takes the temporal dependence of EEG time-series signals into account and achieved an average classification accuracy of 93.0% [74]. In research applying RNN to auditory stimulus classification, [124] used a regulated RNN reservoir to classify the three English vowels a, u and i. Their results showed an average accuracy of 83.2%, with the RNN approach outperforming a deep neural network method. A framework for visual object classification was proposed by [166], applying RNN to EEG data evoked by visual stimulation. After discriminative brain activities are learned by the RNN, a CNN-based regressor is trained to project images onto the learned manifold. This automated object categorisation approach achieved an accuracy of approximately 83%, comparable to approaches empowered solely by CNN models.

Over the past several years, research on RNN frameworks in EEG-based BCIs has increased substantially, with many studies showing that RNN-based methods outperform a benchmark or other traditional algorithms [137], or combining RNN with other deep neural networks such as CNN to optimise performance [176]. The RNN framework has also been applied to other EEG-based tasks, such as identifying individuals [204], hand motion identification [6], sleep staging [15], and emotion recognition [203]. It is worth noting that in [15], the best-performing model among basic machine learning, CNN, RNN, and a combined RNN and CNN (RCNN) was an RNN model with expert-defined features for sleep staging, which could inspire further research on combining expert systems with DL algorithms. Other novel RNN-based frameworks, such as the spatial-temporal RNN (STRNN) [203] for integrating feature learning from both the temporal and spatial information of the signals, have also been explored in recent years.

As a special kind of RNN, LSTM has also been combined with CNN algorithms for a diverse range of EEG-based tasks. For automatic sleep stage scoring, Supratak et al. [168] employed a CNN to extract time-invariant features and a bidirectional LSTM for learning transition rules. To predict human decisions from continuous EEG signals, [71] proposed a hierarchical LSTM model with two layers that encode local temporal correlations and global temporal correlations, respectively, to address the non-stationarity of EEG.

Being able to learn sequential data and improve classification performance, LSTM can also be added to other neural networks to detect temporal sequential patterns and optimise the overall prediction accuracy of the framework. For temporal sleep stage classification, [49] proposed a Mixed Neural Network with an LSTM for its capacity to learn a sleep-scoring strategy automatically, in contrast to decision trees in which the rules are defined manually.

Fig. 8: Illustration of RNN and LSTM

5.4 Deep Transfer Learning (DTL)

A recent survey [175] classified deep transfer learning into four categories: instance-based, mapping-based, network-based, and adversarial-based deep transfer learning. Specifically, instance-based and mapping-based deep transfer learning consider instances by adjusting weights from the source domain and by mapping similarities from the source to the target domain, respectively. The main benefits of applying deep learning are that it saves time-consuming pre-processing and feature-engineering steps while capturing high-level symbolic features and imperceptible dependencies simultaneously [121]. These advantages arise because deep learning operates directly on raw brain signals to learn identifiable information through back-propagation and deep structures, while transfer learning is commonly applied to improve the generalisation capability of machine learning models.

Network-based deep transfer learning reuses parts of a network pre-trained in the source domain, such as the front layers of a trained CNN. Adversarial-based deep transfer learning uses adversarial technology, such as GAN, to find transferable features that are suitable for both domains.

The combination of transfer learning and CNN has been widely used in medical applications [140] [186] [93] and for general purposes such as image classification [69] and object recognition [4]. In this section, we focus on transfer learning using deep neural networks and its EEG-based BCI applications.

MI EEG signal classification is one of the major areas in which deep transfer learning is applied. Sakhavi and Guan [149] used a CNN model to transfer knowledge from subject to subject to decrease the calibration time for recording data and training the model. Their EEG data pipeline for a deep CNN, transferring model parameters, fine-tuning on new data, and using labels to regularise the fine-tuning/training process, is a novel method for subject-to-subject and session-to-session deep transfer learning. Xu et al. [194] proposed a deep transfer CNN framework comprising a pre-trained VGG-16 CNN model and a target CNN model; the parameters are transferred directly from the former, then frozen or fine-tuned in the target model on the MI dataset. The performance of their framework in terms of efficiency and accuracy exceeds many traditional methods such as a standard CNN and SVM. Dose et al. [51] applied a deep learning approach to an EEG-based MI BCI system in healthcare, aiming to enhance the current stroke rehabilitation process. The unified model they built includes CNN layers that learn generalised features and reduce dimensionality, and a conventional fully connected layer for classification. By using transfer learning to adapt global classifiers to single individuals and applying raw EEG signals to this model, their study reached mean accuracies of 86.49%, 79.25%, and 68.51% for datasets with two, three, and four classes, respectively. For alleviating the training burden with transfer learning, a recent study [209] encoded EEG features extracted by traditional CSP with a separated-channel convolutional neural network; the encoded features were then used to train a recognition network for MI classification. The accuracy of the proposed method outperformed multiple traditional machine learning algorithms.
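The network-based pattern these studies share, reusing frozen source-trained layers and fine-tuning only a classification head on target-domain data, can be sketched in NumPy. Everything here is a hypothetical toy: the "pre-trained" extractor is a fixed random projection standing in for CNN front layers trained on a source subject, and only a logistic head is trained on the target subject's data.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# frozen "pre-trained" feature extractor (stands in for CNN front layers
# trained on a source subject; its weights are NOT updated below)
W_frozen = rng.normal(size=(8, 20))

def features(X):
    return np.tanh(X @ W_frozen.T)

# small target-domain set whose labels are expressible in the transferred
# feature space (the favourable case that transfer learning relies on)
X_tgt = rng.normal(size=(100, 20))
w_true = rng.normal(size=8)
y_tgt = (features(X_tgt) @ w_true > 0).astype(float)

# fine-tune only the classification head by logistic-regression gradient descent
F = features(X_tgt)
w, b = np.zeros(8), 0.0
for _ in range(500):
    p = sigmoid(F @ w + b)
    w -= 0.5 * F.T @ (p - y_tgt) / len(y_tgt)
    b -= 0.5 * np.mean(p - y_tgt)

acc = np.mean((sigmoid(F @ w + b) > 0.5) == (y_tgt > 0.5))
```

Only the 8-dimensional head is optimised, which is why this style of transfer cuts per-subject calibration time so sharply compared with retraining the whole network.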

Generally, the purpose of proposing a DTL framework as a classification strategy is to avoid time-consuming re-training and to improve accuracy compared with a solitary CNN or transfer learning alone. A deep CNN with an inter-subject transfer learning method was applied to detect attention information from EEG time series [54]. DTL has also been applied to classify EEG data from imagined vowel pronunciation [44].

As introduced in the previous section, GAN combined with transfer learning can be rewarding in restraining domain divergence to improve domain adaptation [113] [208]. Hu et al. [78] proposed DupGAN, a GAN framework with one encoder, one generator, and two adversarial discriminators, to attain domain transformation for classification. In other streams of deep transfer learning, RNN is also applied in many EEG-based BCI studies. For EEG classification with attention-based transfer learning, [174] proposed a framework consisting of a cross-domain TL encoder and an attention-based TL decoder with RNN, improving EEG classification and the detection of brain functional areas under different tasks.

5.5 Adversarial Attacks to Deep Learning Models in BCI

Despite their outstanding performance, deep learning models are vulnerable to adversarial attacks, in which deliberately designed small perturbations, which may be hard to detect by human eyes or computer programs, are added to benign examples to mislead the deep learning model and cause dramatic performance degradation. This phenomenon was first discovered in computer vision in 2014 [171] and soon received great attention [65] [99] [10].

Adversarial attacks on EEG-based BCIs could also cause great damage. For example, EEG-based BCIs can be used to control wheelchairs or exoskeletons for the disabled [105], where adversarial attacks could cause malfunction; in the worst case, an adversarial attack could hurt the user by deliberately driving him/her into danger. In clinical applications of BCIs to awareness/consciousness evaluation [105], adversarial attacks could lead to serious misdiagnosis.

Zhang and Wu [206] were the first to study adversarial attacks in EEG-based BCIs. They considered three different attack scenarios: 1) white-box attacks, where the attacker has access to all information about the target model, including its architecture and parameters; 2) black-box attacks, where the attacker can only observe the target model’s responses to inputs; and 3) gray-box attacks, where the attacker knows some but not all information about the target model, e.g., the training data on which the target model was tuned, but not its architecture and parameters. They showed that three popular CNN models in EEG-based BCIs, i.e., EEGNet [100], DeepCNN and ShallowCNN [152], can all be effectively attacked in all three scenarios.

Recently, Jiang et al. [88] showed that query-synthesis-based active learning can help reduce the number of training EEG trials required in black-box adversarial attacks on the above three CNN classifiers, and Meng et al. [123] studied white-box target attacks for EEG-based BCI regression problems, i.e., by adding a tiny perturbation they could change the estimated driver drowsiness level or user reaction time by at least a fixed amount. Liu et al. [115] proposed a novel total loss minimisation approach to generate universal adversarial perturbations for EEG classifiers. Their approach resolved two limitations of Zhang and Wu’s approaches (the attacker needs to know the complete EEG trial in advance to compute the adversarial perturbation, and the perturbation needs to be computed specifically for each input EEG trial), and hence made adversarial attacks more practical.
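The basic white-box gradient attack underlying much of this line of work is the fast gradient sign method (FGSM): perturb the input by epsilon times the sign of the loss gradient with respect to the input. The NumPy sketch below applies it to a toy logistic "classifier" rather than EEGNet or the other CNNs studied above; the weights and trial are synthetic stand-ins.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method for a logistic model p = sigmoid(w.x + b).
    Moves x by eps in the sign direction of the cross-entropy gradient."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # d(cross-entropy)/dx for the logistic model
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=64)            # toy "trained" weights over 64 features
b = 0.0
x = 0.05 * w / np.linalg.norm(w)   # a trial the model assigns to class 1
y = 1.0                            # true label the attacker wants to flip away from
x_adv = fgsm(x, y, w, b, eps=0.05)

clean = sigmoid(w @ x + b)         # confidence on the benign trial
attacked = sigmoid(w @ x_adv + b)  # confidence after the tiny perturbation
```

Each coordinate moves by only 0.05, yet the aligned sign pattern flips the prediction, which is exactly the amplification effect that makes such perturbations dangerous for EEG classifiers.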

6 BCI-based Healthcare Systems

With improvements in the affordability and quality of EEG headsets, EEG-based BCI research on classifying and predicting cognitive states has increased dramatically, for instance tracking operators’ task-inappropriate states and monitoring mental health and productivity [14]. EEG and other brain signals such as MEG contain substantial information related to the health and disease conditions of the human brain; for instance, extracting the “slowing down” features of EEG signals can be used to categorise neurodegenerative diseases [16].
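As one concrete example of such a feature, EEG "slowing" is often quantified as a shift of spectral power from faster bands (e.g., alpha, 8-13 Hz) toward slower ones (e.g., theta, 4-8 Hz). The NumPy sketch below computes a theta/alpha band-power ratio for a synthetic trace; the band edges and the ratio itself are illustrative choices on our part, not the specific features of [16].

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Total power of signal x in the [lo, hi) Hz band, via the FFT."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo) & (freqs < hi)
    return psd[band].sum()

fs = 250                               # sampling rate in Hz
t = np.arange(0, 4, 1.0 / fs)          # 4 seconds of signal
# a "slowed" trace: strong 6 Hz theta rhythm, weak 10 Hz alpha rhythm
x = 2.0 * np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)
theta = band_power(x, fs, 4, 8)
alpha = band_power(x, fs, 8, 13)
slowing_ratio = theta / alpha          # > 1 indicates slowing on this toy trace
```

Scalar summaries of this kind are the sort of hand-crafted spectral features that the deep models in the following paragraphs either consume as input or learn to extract on their own.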

Epilepsy is a brain disorder in which patients suffer recurring unprovoked seizures, caused by abrupt surges of excessive and abnormal brain activity. Clinically, EEG signals are one of the leading indicators that can be monitored and studied for seizure-related brain electrical activity, and EEG-based BCI research contributes to the prediction of epilepsy. In the medical area, EEG recordings are used to screen for seizures in epilepsy patients with automated seizure detection systems. The Gated Recurrent Unit RNN framework developed by [173] showed approximately 98% accuracy in epileptic seizure detection. Tsiouris et al. [179] introduced a two-layer LSTM network to assess seizure prediction performance by exploiting a broad scope of features across EEG channels before classification. The empirical results showed that the LSTM-based methodology outperforms traditional machine learning and CNN in seizure prediction performance. A novel method for EEG-based automatic seizure detection was proposed by [68], using a multi-rate filter bank structure and a statistical model to optimise signal attributes for better seizure classification accuracy. For seizure detection, one of the main confounding elements is artefacts, which appear on several EEG channels and can be misinterpreted as the wave and spike discharges that accompany seizures. To optimise channel selection and accuracy for seizure detection with minimal false alarms, [155] proposed a CNN-LSTM algorithm to reject artefacts and optimise the framework’s performance on seizure detection. It is believed that the implementation of BCI and real-time EEG signal processing is suitable for standard clinical application and the care of epilepsy patients [5].

Parkinson’s disease (PD) is a progressive neurodegenerative illness characterised by impaired motor function and, as an abnormal brain disease, is usually diagnosed with EEG signals. Oh et al. [132] proposed an EEG-based deep learning approach with a CNN architecture as a computer-aided diagnosis system for PD detection. The positive performance of the proposed model demonstrates its potential for clinical usage. A specific class of RNN framework, called Echo State Networks (ESNs), was proposed by [147] to classify EEG signals collected from rapid eye movement (REM) sleep Behavioural Disorder (RBD) patients and healthy control subjects; RBD is a major risk factor for neurodegenerative diseases such as PD. ESNs possess the RNN’s competence in temporal pattern classification while expediting training, and the test-set accuracy of the proposed ESN in [147] reached 85%, demonstrating its effectiveness.

As one of the most mysterious pathologies, the cause of Alzheimer’s disease (AD) is still poorly understood, and intelligent assistive technologies are believed to have the potential to support dementia care [79]. BCI with machine learning and deep learning models is also utilised in novel research on classifying and detecting AD, while monitoring disease effects is increasingly significant for clinical intervention. To support clinical investigation, EEG screening of people who are vulnerable to AD could be utilised to spot the onset of AD development. Exploiting the classification potential of CNN, a deep learning model with multiple convolutional-subsampling layers was proposed in [126] and attained an average 80% accuracy in categorising EEG sets from two classes of subjects: mild cognitive impairment subjects, a prodromal stage of AD, and a healthy control group of the same age. Simpraga et al. [161] used machine learning with multiple EEG biomarkers to enhance AD classification performance, demonstrating the effectiveness of their approach in improving disease identification accuracy and supporting clinical trials.

Comparable to using deep learning models with multiple EEG biomarkers for AD classification, machine learning techniques have also been applied to EEG biomarkers for diagnosing schizophrenia. Shim et al. [157] used sensor-level and source-level EEG features to classify schizophrenia patients and healthy controls. The results of their research indicate that the proposed tool could be promising in supporting schizophrenia diagnosis. In [41], a modified deep learning architecture with a voting layer was proposed for individual schizophrenia classification of EEG streams. The high classification accuracy indicates the framework’s feasibility in categorising first-episode schizophrenia patients and healthy controls.

As a non-conventional neurorehabilitation methodology, BCI has been investigated for assisting motor impairment rehabilitation, such as for patients who have survived a stroke, a frequent disease that generally reduces patients’ mobility afterwards [163]. Non-invasive BCI, for instance EEG-based technology, supports the volitional transmission of brain signals to aid hand movement. BCI has great potential in facilitating motor impairment rehabilitation through assistive sensation, by rewarding cortical activity related to sensory-motor features [145]. Frolov et al. [61] investigated the effectiveness of rehabilitation for stroke survivors with BCI training sessions, and the results for the participating patients indicate that adding BCI to physical therapy could enhance the outcomes of post-stroke motor impairment rehabilitation. Other researchers have also found that using BCI in motor impairment rehabilitation for post-stroke patients could help them regain body function and improve quality of life [43] [30] [80].

BCIs have also been employed in other healthcare areas such as the investigation of migraine, pain, and depressive disorders [20] [24] [23] [109] [25]. Patil et al. [136] proposed an artificial neural network with supervised classifiers for EEG classification to detect migraine subjects. They believe that the positive results confirm that an EEG-based neural network classification framework could be used for migraine detection and as an aid to migraine diagnosis. Cao et al. [28] [26] presented a multi-scale relative inherent fuzzy entropy application for SSVEP EEG signals of two migraine phases: the pre-ictal phase before migraine attacks, and the inter-ictal phase, which serves as the baseline. The study found that, for migraine patients compared with healthy controls, there are changes in EEG complexity in a repetitive SSVEP environment. Their study proved that inherent fuzzy entropy can be used in visual stimulus environments for migraine studies and has potential for pre-ictal migraine prediction. EEG signals have also been monitored and analysed to establish the correlation between cerebral cortex spectral patterns and chronic pain intensity [19]. BCI-based signal processing approaches can also be used in training for phantom limb pain control, by helping patients reorganise the sensorimotor cortex through the practice of hand control [195]. The potential of BCI in healthcare for the general public should attract more novel research in the near future.

Machine learning and deep learning neural networks have been productively applied to EEG signals for screening, recognising, and diagnosing various neurological disorders, and recent studies have revealed some important findings for depression detection with BCI [3] [106]. The EEG-based CAD system with a CNN architecture and transfer learning proposed by [106] indicates that the spectral information of EEG signals is critical for depression recognition, while the temporal information of EEG can significantly improve the framework’s accuracy. Liao et al. proved in their research that the 8 EEG electrodes over the temporal areas provide higher accuracies in major depression detection than other scalp areas, an efficient implication for future EEG-based BCI systems for depression screening. The CNN approach proposed by [3] for EEG-based depression screening was experimented on EEG signals of depressive and normal subjects, obtaining accuracies of 93.5% and 96.0% on left- and right-hemisphere EEG signals, respectively. Their study also confirmed the theory that depression is linked to a hyperactive right hemisphere, which could inspire more novel research on depression detection and diagnosis.

7 Discussion and Conclusion

In this review, we highlighted recent studies in the field of EEG-based BCIs by analysing over 150 publications from 2015 to 2019 that develop signal sensing technologies and apply computational intelligence approaches to EEG data. Although advances in dry sensors, wearable devices, toolboxes for signal enhancement, transfer learning, deep learning, and interpretable fuzzy models have lifted the performance of EEG-based BCI systems, real-world usability challenges remain, such as prediction and classification capability and stability in complex BCI scenarios.

By mapping out the trends of BCI study over the past five years, we would also like to share likely future directions of BCI research. The cost-effectiveness and availability of EEG devices are attributable to the evolution of dry sensors, which in turn stimulates more research into enhanced sensors. Current sensor techniques focus on augmenting signal quality through improved sensor materials and on emphasising user experience when collecting signals via BCI devices with comfortable sensor attachments. Fiedler et al. presented the basis for improved EEG cap designs with dry multipin electrodes in their research on a polymer-based multichannel EEG electrode system [59]; their study focused on the correlation of EEG recording quality with applied force and the resulting contact pressure. Considering the comfort of wearing an EEG device, Lin et al. [110] developed a soft, pliable pad with augmented wire-embedded silicon-based dry-contact sensors (WSBDSs); they introduced copper wires in the acicular WSBDSs to ensure scalp contact on hair-covered sites, showing good performance and feasibility for applications. Chen et al. [35] proposed flexible material-based wearable sensors for EEG and other bio-signal monitoring, in line with the trend toward smart personal devices and e-health. A closed-loop (CL) BCI method that responds to biosignals with instant resolution could be beneficial for healthcare therapy [76]. As an example, reinforcement learning (RL) could also help improve training model accuracy in BCI applications [112]. Based on the discussion of TL and DTL, the benefits of transferring extracted features and trained models among subjects or tasks are apparent, such as improving training efficiency and enhancing classification accuracy; it would therefore be encouraging to pursue experiments with adaptive EEG-based BCI training. One of the significant challenges for EEG-based technology is artefact removal; despite the multiple novel approaches discussed in previous sections, integrating BCI with other technical or physiological signals, i.e., hybrid BCI systems, will be a future research focus for improving classification accuracy and general outcomes [130] [75]. The scientific community is also investigating an enhanced conjunction of technology and interface for HCI: the combination of Augmented Reality (AR) and EEG-based BCI [66]. Previous studies have induced SSVEPs, one popular protocol used in exogenous BCIs, with visual stimuli from AR glasses, such as the smart glasses used in [7], and captured the SSVEP response by measuring EEG signals to perform tasks [202] [56]. With the accessibility of AR and commercialised non-invasive BCI devices, such augmentation becomes feasible and effective. Finally, recent research has shown that deep learning (and even traditional machine learning) models in EEG-based BCIs are vulnerable to adversarial attacks, and there is an urgent need to develop strategies to defend against such attacks.

In this paper, we systematically surveyed recent advances in dry sensors, wearable devices, signal enhancement, transfer learning, deep learning, and interpretable fuzzy models for EEG-based BCIs. These computational intelligence approaches enable us to learn reliable brain cortex features and understand human knowledge from EEG signals. In summary, we reviewed recent EEG signal sensing and interpretable fuzzy models, then discussed the dominant transfer learning and deep learning approaches for BCI applications, and finally overviewed healthcare applications and pointed out open challenges and future directions.




  • [1] H. A. Abbass, J. Tang, R. Amin, M. Ellejmi, and S. Kirby (2014) Augmented cognition using real-time EEG-based adaptive strategies for air traffic control. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 58, pp. 230–234. Cited by: §4.2.1.
  • [2] S. M. Abdelfattah, G. M. Abdelrahman, and M. Wang (2018) Augmenting the size of EEG datasets using generative adversarial networks. In 2018 International Joint Conference on Neural Networks (IJCNN), pp. 1–6. Cited by: §5.2.2.
  • [3] U. R. Acharya, S. L. Oh, Y. Hagiwara, J. H. Tan, H. Adeli, and D. P. Subha (2018) Automated EEG-based screening of depression using deep convolutional neural network. Computer Methods and Programs in Biomedicine 161, pp. 103–113. Cited by: §6.
  • [4] L. A. Alexandre (2016) 3D object recognition using convolutional neural networks with transfer learning between input channels. In Intelligent Autonomous Systems 13, pp. 889–898. Cited by: §5.4.
  • [5] R. Alkawadri (2019) Brain computer interface (BCI) applications in mapping of epileptic brain networks based on intracranial EEG. Frontiers in Neuroscience 13, pp. 191. Cited by: §6.
  • [6] J. An and S. Cho (2016) Hand motion identification of grasp-and-lift task from electroencephalography recordings using recurrent neural networks. In 2016 International Conference on Big Data and Smart Computing (BigComp), pp. 427–429. Cited by: §5.3.
  • [7] L. Angrisani, P. Arpaia, N. Moccaldi, and A. Esposito (2018) Wearable augmented reality and brain computer interface to improve human-robot interactions in smart industry: a feasibility study for SSVEP signals. In 2018 IEEE 4th International Forum on Research and Technology for Society and Industry (RTSI), pp. 1–5. Cited by: §7.
  • [8] T. Arichi, G. Fagiolo, M. Varela, A. Melendez-Calderon, A. Allievi, N. Merchant, N. Tusor, S. J. Counsell, E. Burdet, C. F. Beckmann, et al. (2012) Development of BOLD signal hemodynamic responses in the human brain. Neuroimage 63 (2), pp. 663–673. Cited by: §1.1.3.
  • [9] P. Aricò, G. Borghini, G. Di Flumeri, N. Sciaraffa, and F. Babiloni (2018) Passive BCI beyond the lab: current trends and future directions. Physiological Measurement 39 (8), pp. 08TR02. Cited by: §1.1.3.
  • [10] A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok (2017) Synthesizing robust adversarial examples. arXiv preprint arXiv:1707.07397. Cited by: §5.5.
  • [11] M. Attia, I. Hettiarachchi, M. Hossny, and S. Nahavandi (2018) A time domain classification of steady-state visual evoked potentials using deep recurrent-convolutional neural networks. In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pp. 766–769. Cited by: §5.3.
  • [12] A. M. Azab, J. Toth, L. S. Mihaylova, and M. Arvaneh (2018) A review on transfer learning approaches in brain–computer interface. In Signal Processing and Machine Learning for Brain-Machine Interfaces, pp. 81–101. Cited by: §4.2.1.
  • [13] N. A. Badcock, K. A. Preece, B. de Wit, K. Glenn, N. Fieder, J. Thie, and G. McArthur (2015) Validation of the Emotiv EPOC EEG system for research quality auditory event-related potentials in children. PeerJ 3, pp. e907. Cited by: §1.1.2.
  • [14] P. Bashivan, I. Rish, and S. Heisig (2016) Mental state recognition via wearable eeg. arXiv preprint arXiv:1602.00985. Cited by: §6.
  • [15] S. Biswal, J. Kulas, H. Sun, B. Goparaju, M. B. Westover, M. T. Bianchi, and J. Sun (2017) SLEEPNET: automated sleep staging system via deep learning. arXiv preprint arXiv:1707.08262. Cited by: §5.3.
  • [16] J. R. Brazète, J. Gagnon, R. B. Postuma, J. Bertrand, D. Petit, and J. Montplaisir (2016) Electroencephalogram slowing predicts neurodegeneration in rapid eye movement sleep behavior disorder. Neurobiology of aging 37, pp. 74–81. Cited by: §6.
  • [17] E. L. Broek (2013) Ubiquitous emotion-aware computing. Personal and Ubiquitous Computing 17 (1), pp. 53–67. Cited by: §5.1.5.
  • [18] J. J. Buckley and Y. Hayashi (1994) Fuzzy neural networks: a survey. Fuzzy sets and systems 66 (1), pp. 1–13. Cited by: §4.3.1.
  • [19] D. Camfferman, G. L. Moseley, K. Gertz, M. W. Pettet, and M. P. Jensen (2017) Waking eeg cortical markers of chronic pain and sleepiness. Pain Medicine 18 (10), pp. 1921–1931. Cited by: §6.
  • [20] Z. Cao, L. Ko, K. Lai, S. Huang, S. Wang, and C. Lin (2015) Classification of migraine stages based on resting-state eeg power. In 2015 International Joint Conference on Neural Networks (IJCNN), pp. 1–5. Cited by: §6.
  • [21] Z. Cao, C. Chuang, J. King, and C. Lin (2019) Multi-channel eeg recordings during a sustained-attention driving task. Scientific data 6. Cited by: §5.1.1.
  • [22] Z. Cao, W. Ding, Y. Wang, F. K. Hussain, A. Al-Jumaily, and C. Lin (2019) Effects of repetitive ssveps on eeg complexity using multiscale inherent fuzzy entropy. Neurocomputing. Cited by: §4.3.2.
  • [23] Z. Cao, K. Lai, C. Lin, C. Chuang, C. Chou, and S. Wang (2018) Exploring resting-state eeg complexity before migraine attacks. Cephalalgia 38 (7), pp. 1296–1306. Cited by: §6.
  • [24] Z. Cao, C. Lin, C. Chuang, K. Lai, A. C. Yang, J. Fuh, and S. Wang (2016) Resting-state eeg power and coherence vary between migraine phases. The journal of headache and pain 17 (1), pp. 102. Cited by: §6.
  • [25] Z. Cao, C. Lin, W. Ding, M. Chen, C. Li, and T. Su (2018) Identifying ketamine responses in treatment-resistant depression using a wearable forehead eeg. IEEE Transactions on Biomedical Engineering 66 (6), pp. 1668–1679. Cited by: §6.
  • [26] Z. Cao, C. Lin, K. Lai, L. Ko, J. King, K. Liao, J. Fuh, and S. Wang (2019) Extraction of ssveps-based inherent fuzzy entropy using a wearable headband eeg in migraine patients. IEEE Transactions on Fuzzy Systems. Cited by: §4.3.2, §6.
  • [27] Z. Cao and C. Lin (2017) Inherent fuzzy entropy for the improvement of eeg complexity evaluation. IEEE Transactions on Fuzzy Systems 26 (2), pp. 1032–1035. Cited by: §4.3.2.
  • [28] Z. Cao, M. Prasad, and C. Lin (2017) Estimation of ssvep-based eeg complexity using inherent fuzzy entropy. In 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), pp. 1–5. Cited by: §6.
  • [29] H. Cecotti and A. Graser (2010) Convolutional neural networks for p300 detection with application to brain-computer interfaces. IEEE transactions on pattern analysis and machine intelligence 33 (3), pp. 433–445. Cited by: §5.1.
  • [30] M. A. Cervera, S. R. Soekadar, J. Ushiba, J. d. R. Millán, M. Liu, N. Birbaumer, and G. Garipelli (2018) Brain-computer interfaces for post-stroke motor rehabilitation: a meta-analysis. Annals of clinical and translational neurology 5 (5), pp. 651–663. Cited by: §6.
  • [31] C. Chang, S. Hsu, L. Pion-Tonachini, and T. Jung (2019) Evaluation of artifact subspace reconstruction for automatic artifact components removal in multi-channel eeg recordings. IEEE Transactions on Biomedical Engineering. Cited by: §3.1.1.
  • [32] Y. Chang, Y. Wang, D. Wu, and C. Lin (2017) Generating a fuzzy rule-based brain-state-drift detector by riemann-metric-based clustering. In 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 1220–1225. Cited by: §4.3.2.
  • [33] R. Chavarriaga, A. Sobolewski, and J. d. R. Millán (2014) Errare machinale est: the use of error-related potentials in brain-machine interfaces. Frontiers in neuroscience 8, pp. 208. Cited by: §4.2.3.
  • [34] X. Chen, Y. Wang, M. Nakanishi, X. Gao, T. Jung, and S. Gao (2015) High-speed spelling with a noninvasive brain–computer interface. Proceedings of the national academy of sciences 112 (44), pp. E6058–E6067. Cited by: §4.2.3.
  • [35] X. Chen, Q. Chen, Y. Zhang, and Z. J. Wang (2018) A novel eemd-cca approach to removing muscle artifacts for pervasive eeg. IEEE Sensors Journal. Cited by: §3.1.2, §7.
  • [36] X. Chen, H. Peng, F. Yu, and K. Wang (2017) Independent vector analysis applied to remove muscle artifacts in eeg data. IEEE Transactions on Instrumentation and Measurement 66 (7), pp. 1770–1779. Cited by: §3.1.2.
  • [37] E. J. Cheng, K. Young, and C. Lin (2018) Image-based eeg signal processing for driving fatigue prediction. In 2018 International Automatic Control Conference (CACS), pp. 1–5. Cited by: §5.1.1.
  • [38] Y. M. Chi and G. Cauwenberghs (2010) Wireless non-contact eeg/ecg electrodes for body sensor networks. In 2010 International Conference on Body Sensor Networks, pp. 297–301. Cited by: §2.1.2.
  • [39] Y. M. Chi, T. Jung, and G. Cauwenberghs (2010) Dry-contact and noncontact biopotential electrodes: methodological review. IEEE reviews in biomedical engineering 3, pp. 106–119. Cited by: §2.1.2.
  • [40] K. Chiang, C. Wei, M. Nakanishi, and T. Jung (2019) Cross-subject transfer learning improves the practicality of real-world applications of brain-computer interfaces. In 2019 9th International IEEE/EMBS Conference on Neural Engineering (NER), pp. 424–427. Cited by: §4.2.3.
  • [41] L. Chu, R. Qiu, H. Liu, Z. Ling, T. Zhang, and J. Wang (2017) Individual recognition in schizophrenia using deep learning methods with random forest and voting classifiers: insights from resting state eeg streams. arXiv preprint arXiv:1707.03467. Cited by: §6.
  • [42] C. Chuang, Z. Cao, J. King, B. Wu, Y. Wang, and C. Lin (2018) Brain electrodynamic and hemodynamic signatures against fatigue during driving. Frontiers in neuroscience 12, pp. 181. Cited by: §5.1.1.
  • [43] E. Clark, A. Czaplewski, S. Dourney, A. Gadelha, K. Nguyen, P. Pasciucco, M. Rios, R. Stuart, E. Castillo, and M. Korostenskaja (2019) Brain-computer interface for motor rehabilitation. In International Conference on Human-Computer Interaction, pp. 243–254. Cited by: §6.
  • [44] C. Cooney, F. Raffaella, and D. Coyle (2019) Optimizing input layers improves cnn generalization and transfer learning for imagined speech decoding from eeg. In IEEE International Conference on Systems, Man, and Cybernetics, 2019: Industry 4.0, Cited by: §5.4.
  • [45] I. A. Corley and Y. Huang (2018) Deep eeg super-resolution: upsampling eeg spatial resolution with generative adversarial networks. In 2018 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), pp. 100–103. Cited by: §5.2.2.
  • [46] A. Craik, Y. He, and J. L. Contreras-Vidal (2019) Deep learning for electroencephalogram (eeg) classification tasks: a review. Journal of neural engineering 16 (3), pp. 031001. Cited by: §1.2.
  • [47] Y. Cui, Y. Xu, and D. Wu (2019) EEG-based driver drowsiness estimation using feature weighted episodic training. IEEE transactions on neural systems and rehabilitation engineering 27 (11), pp. 2263–2273. Cited by: §4.2.3.
  • [48] I. Daly, S. J. Nasuto, and K. Warwick (2012) Brain computer interface control via functional connectivity dynamics. Pattern recognition 45 (6), pp. 2123–2136. Cited by: §4.1.
  • [49] H. Dong, A. Supratak, W. Pan, C. Wu, P. M. Matthews, and Y. Guo (2017) Mixed neural network approach for temporal sleep stage classification. IEEE Transactions on Neural Systems and Rehabilitation Engineering 26 (2), pp. 324–333. Cited by: §5.3.
  • [50] L. Dong, Y. Zhang, R. Zhang, X. Zhang, D. Gong, P. A. Valdes-Sosa, P. Xu, C. Luo, and D. Yao (2015) Characterizing nonlinear relationships in functional imaging data using eigenspace maximal information canonical correlation analysis (emicca). NeuroImage 109, pp. 388–401. Cited by: §3.1.
  • [51] H. Dose, J. S. Møller, H. K. Iversen, and S. Puthusserypady (2018) An end-to-end deep learning approach to mi-eeg signal classification for bcis. Expert Systems with Applications 114, pp. 532–542. Cited by: §5.4.
  • [52] L. Fabien, L. Anatole, L. Fabrice, and A. Bruno (2007) Studying the use of fuzzy inference systems for motor imagery classification. IEEE transactions on neural systems and rehabilitation engineering 15 (2), pp. 322–324. Cited by: §4.3.1.
  • [53] F. Fahimi, Z. Zhang, W. B. Goh, K. K. Ang, and C. Guan (2019) Towards eeg generation using gans for bci applications. In 2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), pp. 1–4. Cited by: §5.2.2.
  • [54] F. Fahimi, Z. Zhang, W. B. Goh, T. Lee, K. K. Ang, and C. Guan (2018) Inter-subject transfer learning with end-to-end deep convolutional neural network for eeg-based bci. Journal of neural engineering. Cited by: §5.4.
  • [55] F. D. V. Fallani and D. S. Bassett (2019) Network neuroscience for optimizing brain-computer interfaces. Physics of life reviews. Cited by: §1.1.2.
  • [56] J. Faller, B. Z. Allison, C. Brunner, R. Scherer, D. Schmalstieg, G. Pfurtscheller, and C. Neuper (2017) A feasibility study on ssvep-based interaction with motivating and immersive virtual and augmented reality. arXiv preprint arXiv:1701.03981. Cited by: §7.
  • [57] T. C. Ferree, P. Luu, G. S. Russell, and D. M. Tucker (2001) Scalp electrode impedance, infection risk, and eeg data quality. Clinical Neurophysiology 112 (3), pp. 536–544. Cited by: §2.1.1.
  • [58] E. E. Fetz (1999) Real-time control of a robotic arm by neuronal ensembles. Nature neuroscience 2 (7), pp. 583. Cited by: §1.1.1.
  • [59] P. Fiedler, R. Mühle, S. Griebel, P. Pedrosa, C. Fonseca, F. Vaz, F. Zanow, and J. Haueisen (2018) Contact pressure and flexibility of multipin dry eeg electrodes. IEEE Transactions on Neural Systems and Rehabilitation Engineering 26 (4), pp. 750–757. Cited by: §7.
  • [60] K. Finc, K. Bonna, M. Lewandowska, T. Wolak, J. Nikadon, J. Dreszer, W. Duch, and S. Kühn (2017) Transition of the functional brain network related to increasing cognitive demands. Human brain mapping 38 (7), pp. 3659–3674. Cited by: §1.1.2.
  • [61] A. A. Frolov, O. Mokienko, R. Lyukmanov, E. Biryukova, S. Kotov, L. Turbina, G. Nadareyshvily, and Y. Bushkova (2017) Post-stroke rehabilitation training with a motor-imagery-based brain-computer interface (bci)-controlled hand exoskeleton: a randomized controlled multicenter trial. Frontiers in neuroscience 11, pp. 400. Cited by: §6.
  • [62] Z. Gao, X. Wang, Y. Yang, C. Mu, Q. Cai, W. Dang, and S. Zuo (2019) EEG-based spatio-temporal convolutional neural network for driver fatigue evaluation. IEEE transactions on neural networks and learning systems. Cited by: §5.1.1.
  • [63] J. S. García-Salinas, L. Villaseñor-Pineda, C. A. Reyes-García, and A. A. Torres-García (2019) Transfer learning in imagined speech eeg-based bcis. Biomedical Signal Processing and Control 50, pp. 151–157. Cited by: §4.2.3.
  • [64] A. Girouard, E. T. Solovey, L. M. Hirshfield, K. Chauncey, A. Sassaroli, S. Fantini, and R. J. Jacob (2009) Distinguishing difficulty levels with non-invasive brain activity measurements. In IFIP Conference on Human-Computer Interaction, pp. 440–452. Cited by: §1.1.3.
  • [65] I. J. Goodfellow, J. Shlens, and C. Szegedy (2015) EXPLAINING and harnessing adversarial examples. stat 1050, pp. 20. Cited by: §5.5.
  • [66] U. H. Govindarajan, A. J. Trappey, and C. V. Trappey (2018) Immersive technology for human-centric cyberphysical systems in complex manufacturing processes: a comprehensive overview of the global patent profile using collective intelligence. Complexity 2018. Cited by: §7.
  • [67] C. Guger, W. Harkam, C. Hertnaes, and G. Pfurtscheller (1999) Prosthetic control by an eeg-based brain-computer interface (bci). In Proc. aaate 5th european conference for the advancement of assistive technology, pp. 3–6. Cited by: §1.1.1.
  • [68] A. Gupta, P. Singh, and M. Karlekar (2018) A novel signal modeling approach for classification of seizure and seizure-free eeg signals. IEEE Transactions on Neural Systems and Rehabilitation Engineering 26 (5), pp. 925–935. Cited by: §6.
  • [69] D. Han, Q. Liu, and W. Fan (2018) A new image classification method using cnn transfer learning and web data augmentation. Expert Systems with Applications 95, pp. 43–56. Cited by: §5.4.
  • [70] K. G. Hartmann, R. T. Schirrmeister, and T. Ball (2018) EEG-gan: generative adversarial networks for electroencephalograhic (eeg) brain signals. arXiv preprint arXiv:1806.01875. Cited by: §5.2.1.
  • [71] M. M. Hasib, T. Nayak, and Y. Huang (2018) A hierarchical lstm model with attention for modeling eeg non-stationarity for human decision prediction. In 2018 IEEE EMBS international conference on biomedical & health informatics (BHI), pp. 104–107. Cited by: §5.3.
  • [72] H. He and D. Wu (2019) Different set domain adaptation for brain-computer interfaces: a label alignment approach. arXiv preprint arXiv:1912.01166. Cited by: §4.2.3.
  • [73] H. He and D. Wu (2019) Transfer learning for brain-computer interfaces: a euclidean space data alignment approach. IEEE Transactions on Biomedical Engineering. Cited by: §4.2.3.
  • [74] R. G. Hefron, B. J. Borghetti, J. C. Christensen, and C. M. S. Kabban (2017) Deep long short-term memory structures model temporal dependencies improving cognitive workload estimation. Pattern Recognition Letters 94, pp. 96–104. Cited by: §5.3.
  • [75] K. Hong and M. J. Khan (2017) Hybrid brain–computer interface techniques for improved classification accuracy and increased number of commands: a review. Frontiers in neurorobotics 11, pp. 35. Cited by: §7.
  • [76] B. Houston, M. Thompson, A. Ko, and H. Chizeck (2018) A machine-learning approach to volitional control of a closed-loop deep brain stimulation system. Journal of neural engineering 16 (1), pp. 016004. Cited by: §7.
  • [77] S. Hsu, T. R. Mullen, T. Jung, and G. Cauwenberghs (2015) Real-time adaptive eeg source separation using online recursive independent component analysis. IEEE transactions on neural systems and rehabilitation engineering 24 (3), pp. 309–319. Cited by: §3.2.
  • [78] L. Hu, M. Kan, S. Shan, and X. Chen (2018) Duplex generative adversarial network for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1498–1507. Cited by: §5.4.
  • [79] M. Ienca, J. Fabrice, B. Elger, M. Caon, A. S. Pappagallo, R. W. Kressig, and T. Wangmo (2017) Intelligent assistive technology for alzheimer’s disease and other dementias: a systematic review. Journal of Alzheimer’s Disease 56 (4), pp. 1301–1340. Cited by: §1.1.2, §6.
  • [80] D. C. Irimia, W. Cho, R. Ortner, B. Z. Allison, B. E. Ignat, G. Edlinger, and C. Guger (2017) Brain-computer interfaces with multi-sensory feedback for stroke rehabilitation: a case study. Artificial organs 41 (11), pp. E178–E184. Cited by: §6.
  • [81] I. Iturrate, L. Montesano, and J. Minguez (2013) Task-dependent signal variations in eeg error-related potentials for brain–computer interfaces. Journal of neural engineering 10 (2), pp. 026024. Cited by: §4.2.3.
  • [82] A. S. Janani, T. S. Grummett, T. W. Lewis, S. P. Fitzgibbon, E. M. Whitham, D. DelosAngeles, H. Bakhshayesh, J. O. Willoughby, and K. J. Pope (2018) Improved artefact removal from eeg using canonical correlation analysis and spectral slope. Journal of neuroscience methods 298, pp. 1–15. Cited by: §3.1.2.
  • [83] J. R. Jang, C. Sun, and E. Mizutani (1997) Neuro-fuzzy and soft computing-a computational approach to learning and machine intelligence [book review]. IEEE Transactions on automatic control 42 (10), pp. 1482–1484. Cited by: §4.3.1.
  • [84] W. A. Jang, S. M. Lee, and D. H. Lee (2014) Development bci for individuals with severely disability using emotiv eeg headset and robot. In 2014 International Winter Workshop on Brain-Computer Interface (BCI), pp. 1–3. Cited by: §1.1.2.
  • [85] V. Jayaram, M. Alamgir, Y. Altun, B. Scholkopf, and M. Grosse-Wentrup (2016) Transfer learning in brain-computer interfaces. IEEE Computational Intelligence Magazine 11 (1), pp. 20–31. Cited by: §4.2.3.
  • [86] H. Jebelli, M. M. Khalili, and S. Lee (2019) Mobile eeg-based workers’ stress recognition by applying deep neural network. In Advances in Informatics and Computing in Civil and Construction Engineering, pp. 173–180. Cited by: §5.1.2.
  • [87] X. Jiang, G. Bian, and Z. Tian (2019) Removal of artifacts from eeg signals: a review. Sensors 19 (5), pp. 987. Cited by: §3.1.2, §3.1.
  • [88] X. Jiang, X. Zhang, and D. Wu (2019) Active learning for black-box adversarial attacks in eeg-based brain-computer interfaces. arXiv preprint arXiv:1911.04338. Cited by: §5.5.
  • [89] C. Juang and C. Lin (1999) A recurrent self-organizing neural fuzzy inference network. IEEE Transactions on Neural Networks 10 (4), pp. 828–845. Cited by: §4.3.1.
  • [90] N. Kasabov (2001) Evolving fuzzy neural networks for supervised/unsupervised online knowledge-based learning. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 31 (6), pp. 902–918. Cited by: §4.1.
  • [91] S. Katsigiannis and N. Ramzan (2017) DREAMER: a database for emotion recognition through eeg and ecg signals from wireless low-cost off-the-shelf devices. IEEE journal of biomedical and health informatics 22 (1), pp. 98–107. Cited by: §5.1.5.
  • [92] B. Kerous, F. Skola, and F. Liarokapis (2018) EEG-based bci and video games: a progress report. Virtual Reality 22 (2), pp. 119–135. Cited by: §1.1.2.
  • [93] A. Khatami, M. Babaie, H. R. Tizhoosh, A. Khosravi, T. Nguyen, and S. Nahavandi (2018) A sequential search-space shrinking using cnn transfer learning and a radon projection pool for medical image retrieval. Expert Systems with Applications 100, pp. 224–233. Cited by: §5.4.
  • [94] B. Ko (2018) A brief review of facial emotion recognition based on visual information. sensors 18 (2), pp. 401. Cited by: §5.1.5.
  • [95] L. Ko, Y. Lu, H. Bustince, Y. Chang, Y. Chang, J. Ferandez, Y. Wang, J. A. Sanz, G. P. Dimuro, and C. Lin (2019) Multimodal fuzzy fusion for enhancing the motor-imagery-based brain computer interface. IEEE Computational Intelligence Magazine 14 (1), pp. 96–106. Cited by: §4.3.2.
  • [96] S. Koelstra, C. Muhl, M. Soleymani, J. Lee, A. Yazdani, T. Ebrahimi, T. Pun, A. Nijholt, and I. Patras (2011) Deap: a database for emotion analysis; using physiological signals. IEEE transactions on affective computing 3 (1), pp. 18–31. Cited by: §5.1.5.
  • [97] N. Korhan, Z. Dokur, and T. Olmez (2019) Motor imagery based eeg classification by using common spatial patterns and convolutional neural networks. In 2019 Scientific Meeting on Electrical-Electronics & Biomedical Engineering and Computer Science (EBBT), pp. 1–4. Cited by: §5.1.4.
  • [98] S. B. Kotsiantis, I. D. Zaharakis, and P. E. Pintelas (2006) Machine learning: a review of classification and combining techniques. Artificial Intelligence Review 26 (3), pp. 159–190. Cited by: §4.1.
  • [99] A. Kurakin, I. Goodfellow, and S. Bengio (2016) Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533. Cited by: §5.5.
  • [100] V. J. Lawhern, A. J. Solon, N. R. Waytowich, S. M. Gordon, C. P. Hung, and B. J. Lance (2018) EEGNet: a compact convolutional neural network for eeg-based brain–computer interfaces. Journal of neural engineering 15 (5), pp. 056013. Cited by: §5.5.
  • [101] Y. LeCun and M. Ranzato (2013) Deep learning tutorial. In Tutorials in International Conference on Machine Learning (ICML’13), pp. 1–29. Cited by: §5.
  • [102] S. Lee, S. Han, and S. C. Jun (2018) EEG hyperscanning for eight or more persons-feasibility study for emotion recognition using deep learning technique. In 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), pp. 488–492. Cited by: §5.1.5.
  • [103] T. Lee, M. Girolami, and T. J. Sejnowski (1999) Independent component analysis using an extended infomax algorithm for mixed subgaussian and supergaussian sources. Neural computation 11 (2), pp. 417–441. Cited by: §3.1.1.
  • [104] S. Lees, N. Dayan, H. Cecotti, P. Mccullagh, L. Maguire, F. Lotte, and D. Coyle (2018) A review of rapid serial visual presentation-based brain–computer interfaces. Journal of neural engineering 15 (2), pp. 021001. Cited by: §1.1.3.
  • [105] J. Li, K. Cheng, S. Wang, F. Morstatter, R. P. Trevino, J. Tang, and H. Liu (2018) Feature selection: a data perspective. ACM Computing Surveys (CSUR) 50 (6), pp. 94. Cited by: §5.5.
  • [106] X. Li, R. La, Y. Wang, J. Niu, S. Zeng, S. Sun, and J. Zhu (2019) EEG-based mild depression recognition using convolutional neural network. Medical & biological engineering & computing 57 (6), pp. 1341–1352. Cited by: §6.
  • [107] Y. Li, A. Vgontzas, I. Kritikou, J. Fernandez-Mendoza, M. Basta, S. Pejovic, J. Gaines, and E. O. Bixler (2017) Psychomotor vigilance test and its association with daytime sleepiness and inflammation in sleep apnea: clinical implications. Journal of clinical sleep medicine 13 (09), pp. 1049–1056. Cited by: §1.1.3.
  • [108] L. Liao, C. Lin, K. McDowell, A. E. Wickenden, K. Gramann, T. Jung, L. Ko, and J. Chang (2012) Biosensor technologies for augmented brain–computer interfaces in the next decades. Proceedings of the IEEE 100 (Special Centennial Issue), pp. 1553–1566. Cited by: §2.1.2, §2.1.2.
  • [109] C. Lin, C. Chuang, Z. Cao, A. K. Singh, C. Hung, Y. Yu, M. Nascimben, Y. Liu, J. King, T. Su, et al. (2017) Forehead eeg in support of future feasible personal healthcare solutions: sleep management, headache prevention, and depression treatment. IEEE Access 5, pp. 10612–10621. Cited by: §6.
  • [110] C. Lin, Y. Yu, J. King, C. Liu, and L. Liao (2019) Augmented wire-embedded silicon-based dry-contact sensors for electroencephalography signal measurements. IEEE Sensors Journal. Cited by: §7.
  • [111] Y. Lin, C. Wang, T. Jung, T. Wu, S. Jeng, J. Duann, and J. Chen (2010) EEG-based emotion recognition in music listening. IEEE Transactions on Biomedical Engineering 57 (7), pp. 1798–1806. Cited by: §5.1.5.
  • [112] J. Liu, S. Qu, W. Chen, J. Chu, and Y. Sun (2019) Online adaptive decoding of motor imagery based on reinforcement learning. In 2019 14th IEEE Conference on Industrial Electronics and Applications (ICIEA), pp. 522–527. Cited by: §7.
  • [113] M. Liu and O. Tuzel (2016) Coupled generative adversarial networks. In Advances in neural information processing systems, pp. 469–477. Cited by: §5.4.
  • [114] Y. Liu, Y. Lin, S. Wu, C. Chuang, and C. Lin (2015) Brain dynamics in predicting driving fatigue using a recurrent self-evolving fuzzy neural network. IEEE transactions on neural networks and learning systems 27 (2), pp. 347–360. Cited by: §4.3.2.
  • [115] Z. Liu, X. Zhang, and D. Wu (2019) Universal adversarial perturbations for cnn classifiers in eeg-based bcis. arXiv preprint arXiv:1912.01171. Cited by: §5.5.
  • [116] F. Lotte, L. Bougrain, A. Cichocki, M. Clerc, M. Congedo, A. Rakotomamonjy, and F. Yger (2018) A review of classification algorithms for eeg-based brain–computer interfaces: a 10 year update. Journal of neural engineering 15 (3), pp. 031005. Cited by: §1.2.
  • [117] F. Lotte, L. Bougrain, and M. Clerc (1999) Electroencephalography (eeg)-based brain–computer interfaces. Wiley Encyclopedia of Electrical and Electronics Engineering, pp. 1–20. Cited by: §1.1.1.
  • [118] Y. Lu, W. Zheng, B. Li, and B. Lu (2015) Combining eye movements and eeg to enhance emotion recognition. In Twenty-Fourth International Joint Conference on Artificial Intelligence, Cited by: §4.3.2.
  • [119] Y. Luo and B. Lu (2018) EEG data augmentation for emotion recognition using a conditional wasserstein gan. In 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 2535–2538. Cited by: §5.2.2.
  • [120] Y. Luo, L. Zhu, and B. Lu (2019) A gan-based data augmentation method for multimodal emotion recognition. In International Symposium on Neural Networks, pp. 141–150. Cited by: §5.2.2.
  • [121] M. Mahmud, M. S. Kaiser, A. Hussain, and S. Vassanelli (2018) Applications of deep learning and reinforcement learning to biological data. IEEE transactions on neural networks and learning systems 29 (6), pp. 2063–2079. Cited by: §5.4.
  • [122] T. McMahan, I. Parberry, and T. D. Parsons (2015) Modality specific assessment of video game player’s experience using the emotiv. Entertainment Computing 7, pp. 1–6. Cited by: §1.1.2.
  • [123] L. Meng, C. Lin, T. Jung, and D. Wu (2019) White-box target attack for eeg-based bci regression problems. In International Conference on Neural Information Processing, pp. 476–488. Cited by: §5.5.
  • [124] M. Moinnereau, T. Brienne, S. Brodeur, J. Rouat, K. Whittingstall, and E. Plourde (2018) Classification of auditory stimuli from eeg signals with a regulated recurrent neural network reservoir. arXiv preprint arXiv:1804.10322. Cited by: §5.3.
  • [125] S. Moon, S. Jang, and J. Lee (2018) Convolutional neural network approach for eeg-based emotion recognition using brain connectivity and its spatial information. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2556–2560. Cited by: §5.1.5.
  • [126] F. C. Morabito, M. Campolo, C. Ieracitano, J. M. Ebadi, L. Bonanno, A. Bramanti, S. Desalvo, N. Mammone, and P. Bramanti (2016) Deep convolutional neural networks for classification of mild cognitive impaired and alzheimer’s disease patients from scalp eeg recordings. In 2016 IEEE 2nd International Forum on Research and Technologies for Society and Industry Leveraging a better tomorrow (RTSI), pp. 1–6. Cited by: §6.
  • [127] Z. Mousavi, T. Y. Rezaii, S. Sheykhivand, A. Farzamnia, and S. Razavi (2019) Deep convolutional neural network for classification of sleep stages from single-channel eeg signals. Journal of neuroscience methods, pp. 108312. Cited by: §5.1.3.
  • [128] T. Mullen, C. Kothe, Y. M. Chi, A. Ojeda, T. Kerth, S. Makeig, G. Cauwenberghs, and T. Jung (2013) Real-time modeling and 3d visualization of source dynamics and connectivity using wearable eeg. In 2013 35th annual international conference of the IEEE engineering in medicine and biology society (EMBC), pp. 2184–2187. Cited by: §3.2.
  • [129] T. R. Mullen, C. A. Kothe, Y. M. Chi, A. Ojeda, T. Kerth, S. Makeig, T. Jung, and G. Cauwenberghs (2015) Real-time neuroimaging and cognitive monitoring using wearable dry eeg. IEEE Transactions on Biomedical Engineering 62 (11), pp. 2553–2567. Cited by: §2.1.2, §3.2.
  • [130] G. Müller-Putz, R. Leeb, M. Tangermann, J. Höhne, A. Kübler, F. Cincotti, D. Mattia, R. Rupp, K. Müller, and J. d. R. Millán (2015) Towards noninvasive hybrid brain–computer interfaces: framework, practice, clinical application, and beyond. Proceedings of the IEEE 103 (6), pp. 926–943. Cited by: §7.
  • [131] M. Nakanishi, Y. Wang, X. Chen, Y. Wang, X. Gao, and T. Jung (2017) Enhancing detection of ssveps for a high-speed brain speller using task-related component analysis. IEEE Transactions on Biomedical Engineering 65 (1), pp. 104–112. Cited by: §1.1.3.
  • [132] S. L. Oh, Y. Hagiwara, U. Raghavendra, R. Yuvaraj, N. Arunkumar, M. Murugappan, and U. R. Acharya (2018) A deep learning approach for parkinson’s disease diagnosis from eeg signals. Neural Computing and Applications, pp. 1–7. Cited by: §6.
  • [133] B. E. Olivas-Padilla and M. I. Chacon-Murguia (2019) Classification of multiple motor imagery using deep convolutional neural networks and spatial filters. Applied Soft Computing 75, pp. 461–472. Cited by: §5.1.4.
  • [134] S. J. Pan and Q. Yang (2009) A survey on transfer learning. IEEE Transactions on knowledge and data engineering 22 (10), pp. 1345–1359. Cited by: §4.2.2.
  • [135] S. Panwar, P. Rad, T. Jung, and Y. Huang (2019) Modeling EEG data distribution with a Wasserstein generative adversarial network to predict RSVP events. arXiv preprint arXiv:1911.04379. Cited by: §5.2.2.
  • [136] A. U. Patil, A. Dube, R. K. Jain, G. D. Jindal, and D. Madathil (2019) Classification and comparative analysis of control and migraine subjects using EEG signals. In Information Systems Design and Intelligent Applications, pp. 31–39. Cited by: §6.
  • [137] S. Patnaik, L. Moharkar, and A. Chaudhari (2017) Deep RNN learning for EEG based functional brain state inference. In 2017 International Conference on Advances in Computing, Communication and Control (ICAC3), pp. 1–6. Cited by: §5.3.
  • [138] H. Phan, F. Andreotti, N. Cooray, O. Y. Chén, and M. De Vos (2018) Joint classification and prediction CNN framework for automatic sleep stage classification. IEEE Transactions on Biomedical Engineering 66 (5), pp. 1285–1296. Cited by: §5.1.3.
  • [139] L. Pion-Tonachini, S. Hsu, C. Chang, T. Jung, and S. Makeig (2018) Online automatic artifact rejection using the real-time EEG source-mapping toolbox (REST). In 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 106–109. Cited by: §3.1.1.
  • [140] S. A. Prajapati, R. Nagaraj, and S. Mitra (2017) Classification of dental diseases using CNN and transfer learning. In 2017 5th International Symposium on Computational and Business Intelligence (ISCBI), pp. 70–74. Cited by: §5.4.
  • [141] M. M. Rahman, M. I. H. Bhuiyan, and A. R. Hassan (2018) Sleep stage classification using single-channel EOG. Computers in Biology and Medicine 102, pp. 211–220. Cited by: §5.1.3.
  • [142] R. Rai and A. V. Deshpande (2016) Fragmentary shape recognition: a BCI study. Computer-Aided Design 71, pp. 51–64. Cited by: §1.1.2.
  • [143] R. A. Ramadan and A. V. Vasilakos (2017) Brain computer interface: control signals review. Neurocomputing 223, pp. 26–44. Cited by: §1.1.3.
  • [144] H. Ramoser, J. Muller-Gerking, and G. Pfurtscheller (2000) Optimal spatial filtering of single trial EEG during imagined hand movement. IEEE Transactions on Rehabilitation Engineering 8 (4), pp. 441–446. Cited by: §5.1.4.
  • [145] A. Remsik, B. Young, R. Vermilyea, L. Kiekhoefer, J. Abrams, S. Evander Elmore, P. Schultz, V. Nair, D. Edwards, J. Williams, et al. (2016) A review of the progression and future implications of brain-computer interface therapies for restoration of distal upper extremity motor function after stroke. Expert Review of Medical Devices 13 (5), pp. 445–454. Cited by: §6.
  • [146] Y. Roy, H. Banville, I. Albuquerque, A. Gramfort, T. H. Falk, and J. Faubert (2019) Deep learning-based electroencephalography analysis: a systematic review. Journal of Neural Engineering. Cited by: §1.2.
  • [147] G. Ruffini, D. Ibañez, M. Castellano, S. Dunne, and A. Soria-Frisch (2016) EEG-driven RNN classification for prognosis of neurodegeneration in at-risk patients. In International Conference on Artificial Neural Networks, pp. 306–313. Cited by: §6.
  • [148] S. Sakhavi, C. Guan, and S. Yan (2018) Learning temporal information for brain-computer interface using convolutional neural networks. IEEE Transactions on Neural Networks and Learning Systems 29 (11), pp. 5619–5629. Cited by: §5.1.4.
  • [149] S. Sakhavi and C. Guan (2017) Convolutional neural network-based transfer learning and knowledge distillation using multi-subject data in motor imagery BCI. In 2017 8th International IEEE/EMBS Conference on Neural Engineering (NER), pp. 588–591. Cited by: §5.4.
  • [150] W. Samek, F. C. Meinecke, and K. Müller (2013) Transferring subspaces between subjects in brain–computer interfacing. IEEE Transactions on Biomedical Engineering 60 (8), pp. 2289–2298. Cited by: §4.2.3.
  • [151] G. Schalk, D. J. McFarland, T. Hinterberger, N. Birbaumer, and J. R. Wolpaw (2004) BCI2000: a general-purpose brain-computer interface (BCI) system. IEEE Transactions on Biomedical Engineering 51 (6), pp. 1034–1043. Cited by: §1.1.3.
  • [152] R. T. Schirrmeister, J. T. Springenberg, L. D. J. Fiederer, M. Glasstetter, K. Eggensperger, M. Tangermann, F. Hutter, W. Burgard, and T. Ball (2017) Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping 38 (11), pp. 5391–5420. Cited by: §5.5.
  • [153] C. Schmidt, D. Piper, B. Pester, A. Mierau, and H. Witte (2018) Tracking the reorganization of module structure in time-varying weighted brain functional connectivity networks. International Journal of Neural Systems 28 (04), pp. 1750051. Cited by: §1.1.2.
  • [154] H. Serby, E. Yom-Tov, and G. F. Inbar (2005) An improved P300-based brain-computer interface. IEEE Transactions on Neural Systems and Rehabilitation Engineering 13 (1), pp. 89–98. Cited by: §1.1.3.
  • [155] V. Shah, M. Golmohammadi, S. Ziyabari, E. Von Weltin, I. Obeid, and J. Picone (2017) Optimizing channel selection for seizure detection. In 2017 IEEE Signal Processing in Medicine and Biology Symposium (SPMB), pp. 1–5. Cited by: §6.
  • [156] S. S. Shankar and R. Rai (2014) Human factors study on the usage of BCI headset for 3D CAD modeling. Computer-Aided Design 54, pp. 51–55. Cited by: §1.1.2.
  • [157] M. Shim, H. Hwang, D. Kim, S. Lee, and C. Im (2016) Machine-learning-based diagnosis of schizophrenia using combined sensor-level and source-level EEG features. Schizophrenia Research 176 (2-3), pp. 314–319. Cited by: §6.
  • [158] H. Shimodaira (2000) Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference 90 (2), pp. 227–244. Cited by: §4.2.3.
  • [159] D. Shon, K. Im, J. Park, D. Lim, B. Jang, and J. Kim (2018) Emotional stress state detection using genetic algorithm-based feature selection on EEG signals. International Journal of Environmental Research and Public Health 15 (11), pp. 2461. Cited by: §5.1.2.
  • [160] Siddharth, A. N. Patel, T. Jung, and T. J. Sejnowski (2019) A wearable multi-modal bio-sensing system towards real-world applications. IEEE Transactions on Biomedical Engineering 66 (4), pp. 1137–1147. Cited by: §2.1.2.
  • [161] S. Simpraga, R. Alvarez-Jimenez, H. D. Mansvelder, J. M. Van Gerven, G. J. Groeneveld, S. Poil, and K. Linkenkaer-Hansen (2017) EEG machine learning for accurate detection of cholinergic intervention and Alzheimer’s disease. Scientific Reports 7 (1), pp. 5775. Cited by: §6.
  • [162] A. Siswoyo, Z. Arief, and I. A. Sulistijono (2017) Application of artificial neural networks in modeling direction wheelchairs using Neurosky Mindset Mobile (EEG) device. EMITTER International Journal of Engineering Technology 5 (1), pp. 170–191. Cited by: §1.1.2.
  • [163] S. R. Soekadar, N. Birbaumer, M. W. Slutzky, and L. G. Cohen (2015) Brain–machine interfaces in neurorehabilitation of stroke. Neurobiology of Disease 83, pp. 172–179. Cited by: §6.
  • [164] T. Song, W. Zheng, P. Song, and Z. Cui (2018) EEG emotion recognition using dynamical graph convolutional neural networks. IEEE Transactions on Affective Computing. Cited by: §5.1.5.
  • [165] A. Sors, S. Bonnet, S. Mirek, L. Vercueil, and J. Payen (2018) A convolutional neural network for sleep stage scoring from raw single-channel EEG. Biomedical Signal Processing and Control 42, pp. 107–114. Cited by: §5.1.3.
  • [166] C. Spampinato, S. Palazzo, I. Kavasidis, D. Giordano, N. Souly, and M. Shah (2017) Deep learning human mind for automated visual classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6809–6817. Cited by: §5.3.
  • [167] M. Sugeno (1993) Fuzzy measures and fuzzy integrals—a survey. In Readings in Fuzzy Sets for Intelligent Systems, pp. 251–257. Cited by: §4.3.1.
  • [168] A. Supratak, H. Dong, C. Wu, and Y. Guo (2017) DeepSleepNet: a model for automatic sleep stage scoring based on raw single-channel EEG. IEEE Transactions on Neural Systems and Rehabilitation Engineering 25 (11), pp. 1998–2008. Cited by: §5.3.
  • [169] S. Sur and V. Sinha (2009) Event-related potential: an overview. Industrial Psychiatry Journal 18 (1), pp. 70. Cited by: §1.1.3.
  • [170] K. T. Sweeney, T. E. Ward, and S. F. McLoone (2012) Artifact removal in physiological signals—practices and possibilities. IEEE Transactions on Information Technology in Biomedicine 16 (3), pp. 488–500. Cited by: §3.1.
  • [171] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus (2013) Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. Cited by: §5.5.
  • [172] Y. R. Tabar and U. Halici (2016) A novel deep learning approach for classification of EEG motor imagery signals. Journal of Neural Engineering 14 (1), pp. 016003. Cited by: §5.1.4.
  • [173] S. S. Talathi (2017) Deep recurrent neural networks for seizure detection and early seizure detection systems. arXiv preprint arXiv:1706.03283. Cited by: §6.
  • [174] C. Tan, F. Sun, T. Kong, B. Fang, and W. Zhang (2019) Attention-based transfer learning for brain-computer interface. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1154–1158. Cited by: §5.4.
  • [175] C. Tan, F. Sun, T. Kong, W. Zhang, C. Yang, and C. Liu (2018) A survey on deep transfer learning. In International Conference on Artificial Neural Networks, pp. 270–279. Cited by: §5.4.
  • [176] C. Tan, F. Sun, W. Zhang, J. Chen, and C. Liu (2017) Multimodal classification with deep convolutional-recurrent neural networks for electroencephalography. In International Conference on Neural Information Processing, pp. 767–776. Cited by: §5.3.
  • [177] J. Thomas, T. Maszczyk, N. Sinha, T. Kluge, and J. Dauwels (2017) Deep learning-based classification for brain-computer interfaces. In 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 234–239. Cited by: §5.3.
  • [178] T. Tsai, L. Kau, and K. Chao (2016) A Takagi-Sugeno fuzzy neural network-based algorithm with single-channel EEG signal for the discrimination between light and deep sleep stages. In 2016 IEEE Biomedical Circuits and Systems Conference (BioCAS), pp. 532–535. Cited by: §4.3.2.
  • [179] K. M. Tsiouris, V. C. Pezoulas, M. Zervakis, S. Konitsiotis, D. D. Koutsouris, and D. I. Fotiadis (2018) A long short-term memory deep learning network for the prediction of epileptic seizures using EEG signals. Computers in Biology and Medicine 99, pp. 24–37. Cited by: §6.
  • [180] F. Velasco-Álvarez, S. Sancha-Ros, E. García-Garaluz, Á. Fernández-Rodríguez, M. T. Medina-Juliá, and R. Ron-Angevin (2019) UMA-BCI speller: an easily configurable P300 speller tool for end users. Computer Methods and Programs in Biomedicine 172, pp. 127–138. Cited by: §1.1.3.
  • [181] J. J. Vidal (1973) Toward direct brain-computer communication. Annual Review of Biophysics and Bioengineering 2 (1), pp. 157–180. Cited by: §1.1.1.
  • [182] K. Wang, Y. Ho, Y. Huang, and W. Fang (2019) Design of intelligent EEG system for human emotion recognition with convolutional neural network. In 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), pp. 142–145. Cited by: §5.1.5.
  • [183] P. Wang, J. Lu, B. Zhang, and Z. Tang (2015) A review on transfer learning for brain-computer interface classification. In 2015 5th International Conference on Information Science and Technology (ICIST), pp. 315–322. Cited by: §4.2.3.
  • [184] Z. Wang, Y. Song, and C. Zhang (2008) Transferred dimensionality reduction. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 550–565. Cited by: §4.2.2.
  • [185] C. Wei, Y. Lin, Y. Wang, C. Lin, and T. Jung (2018) A subject-transfer framework for obviating inter- and intra-subject variability in EEG-based drowsiness detection. NeuroImage 174, pp. 407–419. Cited by: §4.2.3.
  • [186] G. Wimmer, A. Vécsei, and A. Uhl (2016) CNN transfer learning for the automated diagnosis of celiac disease. In 2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA), pp. 1–6. Cited by: §5.4.
  • [187] J. Wolpaw and E. W. Wolpaw (2012) Brain-computer interfaces: principles and practice. OUP USA. Cited by: §1.1.1.
  • [188] D. Wu, J. King, C. Chuang, C. Lin, and T. Jung (2017) Spatial filtering for EEG-based regression problems in brain–computer interface (BCI). IEEE Transactions on Fuzzy Systems 26 (2), pp. 771–781. Cited by: §4.3.2.
  • [189] D. Wu, B. J. Lance, V. J. Lawhern, S. Gordon, T. Jung, and C. Lin (2017) EEG-based user reaction time estimation using Riemannian geometry features. IEEE Transactions on Neural Systems and Rehabilitation Engineering 25 (11), pp. 2157–2168. Cited by: §4.3.2.
  • [190] D. Wu, V. J. Lawhern, S. Gordon, B. J. Lance, and C. Lin (2016) Driver drowsiness estimation from EEG signals using online weighted adaptation regularization for regression (OWARR). IEEE Transactions on Fuzzy Systems 25 (6), pp. 1522–1535. Cited by: §4.3.2.
  • [191] D. Wu, V. J. Lawhern, W. D. Hairston, and B. J. Lance (2016) Switching EEG headsets made easy: reducing offline calibration effort using active weighted adaptation regularization. IEEE Transactions on Neural Systems and Rehabilitation Engineering 24 (11), pp. 1125–1137. Cited by: §4.2.3.
  • [192] D. Wu (2016) Online and offline domain adaptation for reducing BCI calibration effort. IEEE Transactions on Human-Machine Systems 47 (4), pp. 550–563. Cited by: §4.2.3.
  • [193] S. Wu, Y. Liu, T. Hsieh, Y. Lin, C. Chen, C. Chuang, and C. Lin (2016) Fuzzy integral with particle swarm optimization for a motor-imagery-based brain–computer interface. IEEE Transactions on Fuzzy Systems 25 (1), pp. 21–28. Cited by: §4.3.2.
  • [194] G. Xu, X. Shen, S. Chen, Y. Zong, C. Zhang, H. Yue, M. Liu, F. Chen, and W. Che (2019) A deep transfer convolutional neural network framework for EEG signal classification. IEEE Access 7, pp. 112767–112776. Cited by: §5.4.
  • [195] T. Yanagisawa, R. Fukuma, B. Seymour, K. Hosomi, H. Kishima, T. Shimizu, H. Yokoi, M. Hirata, T. Yoshimine, Y. Kamitani, et al. (2019) Using a BCI prosthetic hand to control phantom limb pain. In Brain-Computer Interface Research, pp. 43–52. Cited by: §6.
  • [196] Y. Yang, Q. Wu, M. Qiu, Y. Wang, and X. Chen (2018) Emotion recognition from multi-channel EEG through parallel convolutional recurrent neural network. In 2018 International Joint Conference on Neural Networks (IJCNN), pp. 1–7. Cited by: §5.1.5.
  • [197] K. Yue and D. Wang (2019) EEG-based 3D visual fatigue evaluation using CNN. Electronics 8 (11), pp. 1208. Cited by: §5.1.1.
  • [198] L. A. Zadeh (1996) On fuzzy algorithms. In Fuzzy Sets, Fuzzy Logic, and Fuzzy Systems: Selected Papers by Lotfi A. Zadeh, pp. 127–147. Cited by: §4.3.1.
  • [199] T. O. Zander, J. Brönstrup, R. Lorenz, and L. R. Krol (2014) Towards BCI-based implicit control in human–computer interaction. In Advances in Physiological Computing, pp. 67–90. Cited by: §1.1.3.
  • [200] T. O. Zander and C. Kothe (2011) Towards passive brain–computer interfaces: applying brain–computer interface technology to human–machine systems in general. Journal of Neural Engineering 8 (2), pp. 025005. Cited by: §1.1.1.
  • [201] H. Zeng, C. Yang, G. Dai, F. Qin, J. Zhang, and W. Kong (2018) EEG classification of driver mental states by deep learning. Cognitive Neurodynamics 12 (6), pp. 597–606. Cited by: §5.1.1.
  • [202] R. Zerafa, T. Camilleri, O. Falzon, and K. P. Camilleri (2016) A real-time SSVEP-based brain-computer interface music player application. In XIV Mediterranean Conference on Medical and Biological Engineering and Computing 2016, pp. 173–178. Cited by: §7.
  • [203] T. Zhang, W. Zheng, Z. Cui, Y. Zong, and Y. Li (2018) Spatial–temporal recurrent neural network for emotion recognition. IEEE Transactions on Cybernetics 49 (3), pp. 839–847. Cited by: §5.3.
  • [204] X. Zhang, L. Yao, C. Huang, T. Gu, Z. Yang, and Y. Liu (2017) DeepKey: an EEG and gait based dual-authentication system. arXiv preprint arXiv:1706.01606. Cited by: §1.1.3, §5.3.
  • [205] X. Zhang, L. Yao, X. Wang, J. Monaghan, and D. Mcalpine (2019) A survey on deep learning based brain computer interface: recent advances and new frontiers. arXiv preprint arXiv:1905.04149. Cited by: §1.2, §5.
  • [206] X. Zhang and D. Wu (2019) On the vulnerability of CNN classifiers in EEG-based BCIs. IEEE Transactions on Neural Systems and Rehabilitation Engineering 27 (5), pp. 814–825. Cited by: §5.5.
  • [207] W. Zheng and B. Lu (2015) Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks. IEEE Transactions on Autonomous Mental Development 7 (3), pp. 162–175. Cited by: §5.1.5.
  • [208] J. Zhu, T. Park, P. Isola, and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2223–2232. Cited by: §5.4.
  • [209] X. Zhu, P. Li, C. Li, D. Yao, R. Zhang, and P. Xu (2019) Separated channel convolutional neural network to realize the training free motor imagery bci systems. Biomedical Signal Processing and Control 49, pp. 396–403. Cited by: §5.4.