Epileptic seizure detection using deep learning techniques: A Review

07/02/2020 ∙ by Afshin Shoeibi, et al. ∙ Deakin University

A variety of screening approaches have been proposed to diagnose epileptic seizures, using Electroencephalography (EEG) and Magnetic Resonance Imaging (MRI) modalities. Deep learning is one branch of artificial intelligence. Before its rise, conventional machine learning algorithms relied on handcrafted feature extraction, which limited their performance to the skill of those designing the features. In deep learning, by contrast, feature extraction and classification are entirely automated. The advent of these techniques has enabled significant advances in many areas of medicine, such as the diagnosis of epileptic seizures. This study provides a comprehensive overview of the types of deep learning methods used to diagnose epileptic seizures from various modalities. Additionally, hardware implementations and cloud-based works are discussed, as they are most suited for applied medicine.


I Introduction

Epilepsy is a non-communicable disease and one of the most common neurological disorders in humans, usually associated with sudden attacks [one]. Seizures are sudden abnormalities in the electrical activity of the brain that disrupt part or all of the body [two]. Epileptic seizures of various kinds affect around 60 million people worldwide [three]. These attacks occasionally provoke cognitive disorders and can cause severe physical injury to the patient. In addition, people with epileptic seizures sometimes suffer emotional distress due to embarrassment and lack of appropriate social status. Hence, early detection of epileptic seizures can help patients and improve their quality of life.

Various screening techniques have been developed to diagnose epileptic seizures, including magnetic resonance imaging (MRI) [four], electroencephalography (EEG) [five], magnetoencephalography (MEG) [six], and positron emission tomography (PET) [seven]. EEG signals are widely preferred as they are economical, portable, and show clear rhythms in the frequency domain [eight, ACHARYA2018103]. The EEG records the voltage variations produced by the ionic currents of neurons in the brain, which reflect the brain's bioelectric activity [nine]. Diagnosing epilepsy from EEG signals is time-consuming and strenuous, as the epileptologist or neurologist needs to screen the EEG signals minutely. There is also a possibility of human error, so developing a computer-based diagnosis system may alleviate these problems.

Many machine learning algorithms have been developed using statistical, frequency-domain, and nonlinear parameters to detect epileptic seizures [ten, eleven, twelve, thirteen, fourteen, fifteen]. In conventional machine learning techniques, features and classifiers are selected by trial and error, and sound knowledge of signal processing and data mining is needed to develop an accurate model. Such models perform well on limited data, but with the current increase in data availability they may not scale well. Hence, deep learning techniques, which are the state-of-the-art methods, have been employed.
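As a concrete point of reference for this conventional pipeline, the sketch below extracts simple band-power features with Welch's method and feeds them to an SVM. The segment length, band edges, SVM settings, and the synthetic data are illustrative assumptions, not a reproduction of any reviewed work.

import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 173.61  # sampling rate (Hz) of the Bonn dataset segments (assumed here)

def band_power_features(segment, fs=FS):
    # mean power in the classic EEG bands (delta..gamma) for one 1-D segment
    freqs, psd = welch(segment, fs=fs, nperseg=min(256, len(segment)))
    bands = [(0.5, 4), (4, 8), (8, 13), (13, 30), (30, 45)]
    return np.array([psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands])

# X_raw: (n_segments, n_samples) EEG segments, y: labels (hypothetical synthetic data)
X_raw = np.random.randn(200, 4096)
y = np.random.randint(0, 2, 200)

X = np.vstack([band_power_features(seg) for seg in X_raw])
scores = cross_val_score(SVC(kernel="rbf", C=1.0), X, y, cv=5)
print("5-fold accuracy: %.2f" % scores.mean())

The handcrafted step (band_power_features) is exactly what deep learning replaces with learned feature extraction.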

In traditional machine learning studies, most simulations were executed in the Matlab environment, whereas deep learning models are usually developed in the Python programming language with numerous open-source toolboxes. Python, together with freely available deep learning libraries, has helped researchers to develop novel automated systems, and cloud computing has made computational resources accessible to everyone. Figure 1 shows that TensorFlow and one of its high-level APIs, Keras, are the most widely used tools for epileptic seizure detection using deep learning in the reviewed works, owing to their versatility and applicability.

Fig. 1: Number of times each deep learning tool was used for automated detection of epileptic seizures by various studies.

Since 2016, substantial research has been done to detect epilepsy using deep learning models such as Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Deep Belief Networks (DBN), Autoencoders (AE), CNN-RNN, and CNN-AE [seventeen, eighteen, ninteen, twenty]. The number of studies in this area is growing as new, more efficient models are proposed. Figure 2 provides an overview of the number of studies conducted using various deep learning models from 2014 to 2020 for detecting epileptic seizures.

Fig. 2: Number of studies conducted using various deep learning models from 2014 to date (2020) by various authors.

The main aims of this study are as follows:

  • Providing information on available EEG datasets.

  • Reviewing works done using various deep learning models for automated detection of epileptic seizures from various signal modalities.

  • Outlining future challenges in the detection of epileptic seizures.

  • Analyzing the best performing model for various modalities of data.

Epileptic seizure detection using deep learning is discussed in Section II. Section III describes non-EEG-based epileptic seizure detection. The hardware used for epileptic seizure detection is presented in Section IV. A discussion of the paper is outlined in Section V. The challenges of employing deep learning methods for epileptic seizure detection are summarized in Section VI. Finally, the conclusion and future work are delineated in Section VII.

II Epileptic Seizure Detection Based on Deep Learning Techniques

Figure 3 illustrates the working of a computer-aided diagnosis system (CADS) for epileptic seizures using deep learning methods. The input to the deep learning model can be EEG, MEG, electrocorticography (ECoG), functional near-infrared spectroscopy (fNIRS), PET, single-photon emission computed tomography (SPECT), or MRI. The signal is then preprocessed to remove noise, and the denoised signals are used to develop the deep learning models. The performance of the models is evaluated using accuracy, sensitivity, and specificity. Additionally, a table summarizing all the works on epileptic seizure detection using deep learning is presented in Appendix A.

Fig. 3: Block diagram of a deep learning based CAD system for epileptic seizures.
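For clarity, the short sketch below shows how the three evaluation metrics named above (accuracy, sensitivity, specificity) can be computed from a binary confusion matrix; the labels are toy values for illustration only.

import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])   # 1 = seizure, 0 = non-seizure (toy labels)
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # recall on the seizure class
specificity = tn / (tn + fp)   # recall on the non-seizure class
print(accuracy, sensitivity, specificity)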

II-A Epileptic Datasets

Datasets play an important role in developing accurate and robust CADS. Multiple EEG datasets, namely Freiburg [twentyOne], CHB-MIT [twentyTwo], Kaggle [twentyThree], Bonn [twentyFour], Flint Hills [thirteen], Bern-Barcelona [twentyFive], Hauz Khas [thirteen], and Zenodo [twentySix], are available to develop automated epileptic seizure detection systems. The signals in these datasets were recorded either intracranially or from the scalp of humans or animals. Supplementary information on each dataset is listed in Table I.

Dataset | Number of Patients | Number of Seizures | Recording | Total Duration | Sampling Frequency (Hz)
Flint Hills [thirteen] | 10 | 59 | Continuous long-term intracranial ECoG | 1419 h | 249
Hauz Khas [thirteen] | 10 | NA | Scalp EEG (sEEG) | NA | 200
Freiburg [twentyOne] | 21 | 87 | Intracranial EEG (iEEG) | 708 h | 256
CHB-MIT [twentyTwo] | 22 | 163 | sEEG | 844 h | 256
Kaggle [twentyThree] | 5 dogs, 2 patients | 48 | iEEG | 627 h | 400 (dogs), 5000 (patients)
Bonn [twentyFour] | 10 | NA | Surface and iEEG | 39 min | 173.61
Bern-Barcelona [twentyFive] | 5 | 3750 | iEEG | 83 h | 512
Zenodo [twentySix] | 79 neonates | 460 | sEEG | 74 min | 256
TABLE I: List of popular epileptic seizure datasets.

Figure 4 shows the number of times each dataset was employed to detect epileptic seizures using deep learning techniques. It can be observed that the Bonn dataset is the most widely used for automated seizure detection using deep learning.

Fig. 4: Usage of various datasets for automated detection of seizure using deep learning techniques by various studies.

II-B Preprocessing

In developing a CAD system using deep learning with EEG signals, preprocessing involves three steps: noise removal, normalization, and signal preparation for the deep learning network [ACHARYA2018270, Craik_2019]. In the noise removal step, finite impulse response (FIR) or infinite impulse response (IIR) filters are usually used to eliminate signal noise. Normalization is then performed using schemes such as the z-score technique. Finally, various time-domain, frequency-domain, and time-frequency methods are employed to prepare the signals for the deep networks.
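A minimal sketch of these three steps, assuming an IIR Butterworth band-pass filter, z-score normalization, and an STFT-based time-frequency representation; the band edges, sampling rate, and window length are illustrative choices rather than values prescribed by the reviewed works.

import numpy as np
from scipy.signal import butter, filtfilt, stft

def preprocess(eeg, fs=256.0, band=(0.5, 40.0)):
    # 1) noise removal: 4th-order Butterworth band-pass, applied zero-phase
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg)
    # 2) z-score normalization
    normalized = (filtered - filtered.mean()) / (filtered.std() + 1e-8)
    # 3) time-frequency representation for a 2-D deep network input
    f, t, Z = stft(normalized, fs=fs, nperseg=128)
    return np.abs(Z)  # spectrogram magnitude, shape (freq_bins, time_frames)

spec = preprocess(np.random.randn(10 * 256))  # 10 s of synthetic single-channel EEG
print(spec.shape)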

II-C Review of Deep Learning Techniques

In contrast to conventional neural networks, or so-called shallow networks, deep neural networks are structures with more than two hidden layers; some recent deep networks have hundreds of layers [seventeen]. This increase in network size results in a massive rise in the number of parameters, requiring appropriate learning methods as well as measures to avoid overfitting. Convolutional networks convolve filters with the input patterns instead of multiplying by a weight matrix, which reduces the number of trainable parameters dramatically.

Furthermore, other methods have been suggested to help the network learn [goodfellow]. Pooling layers reduce the size of the input pattern passed to the next convolutional layer. Batch normalization, dropout, early stopping, unsupervised or semi-supervised learning, and regularization techniques prevent the trained network from overfitting and increase learning ability and speed. AEs and DBNs are trained in an unsupervised manner and then fine-tuned to avoid overfitting when labeled data are limited. Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks are recurrent neural networks capable of revealing long-term temporal dependencies in data samples.

II-C1 Convolutional Neural Networks (CNNs)

CNNs are among the most popular deep learning networks, and much machine learning research has been devoted to them [seventeen]. They were initially presented for image processing applications but have recently been adapted to one- and two-dimensional architectures for the diagnosis and prediction of diseases using biological signals [addedOne]. This class of deep learning networks is widely used for the detection of epileptic seizures from EEG signals. In two-dimensional convolutional neural networks (2D-CNNs), the one-dimensional (1D) EEG signals are first transformed into two-dimensional representations using visualization methods such as the spectrogram [YILDIRIM2019103387], higher-order bispectrum [Martis13500147, ijerph17030971], or wavelet transforms, and these representations are then applied to the input of the convolutional network. In 1D architectures, the EEG signals are applied in one-dimensional form to the input of the convolutional network, with changes made to the core 2D-CNN architecture so that it can process 1D EEG signals. Since both two-dimensional and one-dimensional convolutional neural networks (1D-CNNs) are used in the field of epileptic seizure detection, they are investigated separately.

2D - Convolutional Neural Networks

Nowadays, deep 2D networks are applied to resolve a wide range of computer vision problems such as image segmentation [twentySeven], medical image classification [twentyEight], and face recognition [face]. In 2012, Krizhevsky et al. [alexnet] first suggested this type of network to solve image classification problems; similar networks were then quickly used for different tasks such as medical image classification, in an effort to overcome the difficulties of previous networks and solve more intricate problems with better performance. Figure 5 shows the general form of a 2D-CNN used for epileptic seizure detection. The 2D-CNN is arguably the most important architecture among deep neural networks. More information about the visualization and preprocessing methods can be found in Appendix A.

Fig. 5: A typical 2D-CNN for epileptic seizure detection.
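To make the generic pipeline of Fig. 5 concrete, the following is a minimal Keras sketch of a small 2D-CNN over spectrogram-like inputs; the layer counts and sizes are illustrative assumptions and do not reproduce any specific architecture reviewed below.

from tensorflow.keras import layers, models

def build_2d_cnn(input_shape=(64, 64, 1), n_classes=2):
    model = models.Sequential([
        layers.Input(shape=input_shape),                     # e.g. an EEG spectrogram
        layers.Conv2D(16, (3, 3), activation="relu", padding="same"),
        layers.BatchNormalization(),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.BatchNormalization(),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),       # seizure vs. non-seizure
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

build_2d_cnn().summary()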

In one study [thirty], the 16-layer SeizNet convolutional network is introduced, with dropout layers and batch normalization (BN) after each convolutional layer, giving a structure similar to VGG-Net. The researchers in [thirtyTwo] presented a new 2D-CNN model that can extract the spectral and temporal characteristics of EEG signals and used them to learn the general structure of seizures. Zuo et al. [thirtyThree] detected high-frequency oscillations (HFOs) in epilepsy from EEG signals using a 16-layer 2D-CNN. A deep learning framework called SeizureNet is proposed in [thirtyFour], using convolutional layers with dense connections. A novel deep learning model called the temporal graph convolutional network (TGCN) was introduced by Covert et al. [thirtySeven], comprising five architectures with 14, 18, 22, 23, and 26 layers. Bouaziz et al. [fourty] split the 23-channel EEG signals of CHB-MIT into 2-second time windows, converted them into density images (spatial representations), and fed these as inputs to the CNN.

AlexNet

Fei-Fei Li, a professor at Stanford University, created a dataset of labeled images of real-world objects and termed the project ImageNet [imagenet]. ImageNet organized an annual computer vision competition, ILSVRC, to solve image classification problems. Alex Krizhevsky revolutionized image classification with his network, AlexNet, which won the 2012 ImageNet challenge and started the deep learning era [alexnet]. AlexNet won the competition with a top-5 test accuracy of 84.6%. Taqi et al. [fourtyTwo] used the AlexNet network to diagnose focal epileptic seizures; the network was used for feature extraction, followed by a softmax layer for classification, and achieved 100% accuracy. In another work [fourtyThree], the AlexNet network was employed after transforming the 1D signal into a 2D image through a Signal2Image (S2I) module; the methods used included signal-as-image, spectrogram, one-layer 1D-CNN, and two-layer 1D-CNN representations.

VGG

A research team at Oxford proposed the Visual Geometry Group (VGG) CNN model in 2014 [vgg]. They configured various models, one of which, VGG-16, was submitted to the ILSVRC 2014 competition. The model was known as VGG-16 because it comprises 16 layers, and it delivered excellent performance in image detection and classification problems. Ahmedt-Aristizabal et al. [fourtyFour] used the VGG-16 architecture to diagnose epilepsy from facial images; their approach attempted to extract and classify semiological patterns of facial states automatically. After recording the images, the proposed VGG architecture was first trained on well-known datasets, followed by networks such as 1D-CNN and LSTM in the last few layers. In [fourtyThree], the VGG network was used with one-dimensional and two-dimensional signals; the Adam optimizer and a cross-entropy loss function were used to train the models, with a batch size of 20 and 100 epochs. The idea of detecting epileptic seizures from sEEG signal plots was examined by Emami et al. [fourtyFive]. In the preprocessing step, the signals were segmented into different time windows, and VGG-16 with small (3×3) convolution filters was used for classification to efficiently detect small EEG signal changes. The architecture was pre-trained on the ImageNet dataset to differentiate 1000 classes, and its last two layers had 4096 and 1000 dimensions; these layers were modified to 32 and 2 dimensions, respectively, to detect seizure and non-seizure classes.

GoogLeNet

GoogLeNet won the 2014 ImageNet competition with a 93.3% top-5 test accuracy [googlenet]. This 22-layer network was named GoogLeNet in honor of Yann LeCun, who designed LeNet. Before GoogLeNet, it was believed that better accuracy could only be achieved by going deeper. Instead, the Google team proposed an architecture called Inception, which achieved better performance not by going deeper but through better design, using filters of different sizes on the same input. In EEG signal processing for the diagnosis of epileptic seizures, this architecture has recently received the attention of researchers. Taqi et al. [fourtyTwo] used this network in their preliminary research to diagnose epileptic seizures; their model extracted features from the Bern-Barcelona dataset and achieved excellent results.

ResNet

Microsoft's ResNet won the ImageNet challenge with 96.4% accuracy using a 152-layer network built from residual modules [resnet]. In this network, residual blocks capable of training deep architectures were introduced using skip connections that copy the inputs of each layer to the next, so that each layer can focus on learning something new. So far, not much research has been done on applying ResNet to diagnose epilepsy, but this may grow significantly in the coming years. Bizopoulos et al. [fourtyThree] applied ResNet and DenseNet architectures to diagnose epileptic seizures and attained good results; they showed that the S2I-DenseNet base model, trained for an average of 70 epochs, was sufficient to obtain the best accuracy of 85.3%. A summary of related works done using 2D-CNNs is shown in Table II, and a sketch of the accuracies (%) obtained by various authors is shown in Figure 6.

Work | Network | Number of Layers | Classifier | Accuracy (%)
[twentyNine] | 2D-CNN | 3, 4 | Logistic Regression (LR) | 87.51
[sixtyOne] | 2D-CNN | 9 | softmax | NA
[sixtyTwo] | Combination of 1D-CNN and 2D-CNN | 11 | sigmoid | 90.58
[sixtyFour] | 2D-CNN | 18 | softmax | NA
[sixtyFive] | 2D-CNN/MLP hybrid | 11 | sigmoid | NA
[fourtySix] | 2D-CNN | 9 | softmax | 86.31
[thirty] | SeizNet | 16 | NA | NA
[thirtyOne] | 2D-CNN with 1D-CNN | 12 | softmax | NA
[thirtyTwo] | 2D-CNN | 9 | softmax | 98.05
[thirtyThree] | 2D-CNN | 16 | softmax | NA
[thirtyFour] | SeizureNet | 133 | softmax | NA
[fourtyFour] | 2D-CNN | VGG-16, 8 | SVM | 95.19
[thirtyFive] | 2D-CNN | 6 | softmax | 74
[sixtyEight] | 2D-CNN | 12 | softmax and sigmoid | 99.50
[thirtySix] | 2D-CNN | 16 | softmax | 91.80
[thirtySeven] | TGCN | 14, 18, 22, 22, 26 | sigmoid | NA
[thirtyEight] | 2D-CNN | 23 | softmax | 100
[sixtyNine] | 2D-CNN | 5 | softmax | 100
[thirtyNine] | 2D-CNN | 14 | softmax | 98.30 (2 classes), 90.10 (3 classes)
[seventy] | 2D-CNN, 3D-CNN | 7, 5 (2D-CNN), 8 (3D-CNN) | MV-TSK-FS | 98.33
[fourty] | 2D-CNN | 8 | softmax | 99.48
[fourtyOne] | 2D-CNN | 23, 18 | sigmoid, RF | NA
[seventyTwo] | 2D-CNN | 7 | KELM | 99.33
[fourtyTwo] | GoogLeNet, AlexNet, LeNet | standard networks | softmax | 100
[fourtyFive] | 2D-CNN | VGG-16 | softmax | NA
[fourtyThree] | standard networks | NA | softmax | 85.30
TABLE II: Summary of related works done using 2D-CNNs.
Fig. 6: Sketch of accuracy (%) obtained by various authors using 2D-CNN models for seizure detection.

1D - Convolutional Neural Networks

1D-CNNs are intrinsically suitable for processing biological signals such as the EEG for the detection of epileptic seizures. These architectures have a more straightforward structure, and a single pass through them is faster than through a 2D-CNN because they have fewer parameters. An important advantage of 1D over 2D architectures is the possibility of employing larger pooling and convolutional kernels. In addition, the signals are 1D in nature, and preprocessing methods that transform them into 2D representations may lead to information loss. Figure 7 shows the general form of a 1D-CNN used for epileptic seizure detection.

Fig. 7: Typical sketch of the 1D-CNN model that can be used for epileptic seizure detection.
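As a concrete counterpart to Fig. 7, the following minimal Keras sketch applies Conv1D/pooling blocks directly to raw single-channel EEG windows; the window length and layer sizes are illustrative assumptions, not those of any specific reviewed work.

from tensorflow.keras import layers, models

def build_1d_cnn(window_len=1024, n_classes=2):
    model = models.Sequential([
        layers.Input(shape=(window_len, 1)),            # raw EEG window, single channel
        layers.Conv1D(16, kernel_size=7, activation="relu"),
        layers.MaxPooling1D(pool_size=4),
        layers.Conv1D(32, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(pool_size=4),
        layers.GlobalAveragePooling1D(),
        layers.Dense(32, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

build_1d_cnn().summary()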

The authors in [fourtyThree] conducted the first study in this area, applying well-known 2D architectures in 1D form using one-dimensional LeNet, AlexNet, VGGNet, ResNet, and DenseNet architectures. In [fourtyNine], a 1D-CNN was used for feature extraction. The researchers in [fifty] used a 1D-CNN on the CHB-MIT dataset; the signals from each channel were segmented into 4-second intervals, and overlapping segments were also accepted to increase the amount of data and the accuracy. Combining CNNs with traditional feature extraction methods was explored in [fiftyThree]; the authors used empirical mode decomposition (EMD) for feature extraction, while a CNN was used to achieve high accuracy in the multi-class classification tasks. In [fiftyFive], an integrated framework for the diagnosis of epileptic seizures is presented that combines the interpretability of probabilistic graphical models (PGMs) with advances in deep learning. The authors in [fiftyEight] proposed a 1D-CNN architecture called CNN-BP (standing for CNN bipolar), using data from patients monitored with combined foramen ovale (FO) electrodes and surface EEG electrodes. A new scheme to classify EEG signals based on temporal convolutional neural networks (TCNN) was introduced by Zhang et al. [fiftyTwo]. Table III summarizes the related works done using 1D-CNNs, and Figure 8 shows the accuracies (%) obtained by various authors using 1D-CNN models for seizure detection.

Work | Network | Number of Layers | Classifier | Accuracy (%)
[fourtySix] | 1D-CNN | 7 | softmax | 82.04
[fourtyThree] | 1D-CNN: VGGNet-13, VGGNet-19, DenseNet-161 | NA | NA | 83.30
[seventyThree] | P-1D-CNN | 14 | softmax | 99.10
[fourtySeven] | 1D-CNN | 13 | softmax | 88.67
[seventyFour] | MPCNN | 11 | softmax | NA
[fourtyEight] | 1D-FCNN | 11 | softmax | NA
[seventyFive] | 1D-CNN | 5 | binary LR | NA
[seventySix] | 1D-CNN | 23 | softmax | 79.34
[fourtyNine] | 1D-CNN | 5 | softmax, SVM | 83.86
[fifty] | 1D-CNN | 33 | NA | 99.07
[fiftyOne] | 1D-CNN | 4 | sigmoid | 97.27
[fiftyTwo] | 1D-TCNN | NA | NA | 100
[fiftyThree] | 1D-CNN | 12 | softmax | 98.60
[fiftyFour] | 1D-CNN | 13 | NA | 82.90
[seventyEight] | 1D-CNN with residual connections | 17 | softmax | 99.00, 91.80
[fiftyFive] | PGM-CNN | 10 | softmax | NA
[seventyNine] | 1D-CNN | 15 | softmax | 84
[fiftySix] | 1D-CNN | 10 | sigmoid | 86.29
[fiftySeven] | 1D-CNN | 13 | softmax | NA
[fiftyEight] | 1D-CNN-BP | 14 | sigmoid | NA
[fiftyNine] | 1D-CNN | 9 | sigmoid | NA
TABLE III: Summary of related works done using 1D-CNNs.
Fig. 8: Sketch of accuracy (%) versus authors obtained using 1D-CNN models for seizure detection.

II-C2 Recurrent Neural Networks (RNNs)

Sequential data such as text, signals, and videos have characteristics such as variable and potentially great length, which make them unsuitable for simple deep learning methods [goodfellow]. However, such data form a significant part of the information in the world, creating the need for deep learning schemes that can process them. RNNs are the solution suggested to overcome these challenges and are widely used for physiological signals. Figure 9 shows the general form of an RNN used for epileptic seizure detection. In the following, an overview of popular RNN models is presented together with the reviewed papers.

Fig. 9: Sample RNN model that can be used for seizure detection.

The main problem of a simple recurrent neural network is short-term memory: it may leave out key information because it has difficulty carrying information from earlier time steps to later steps in long sequences. Another drawback of the RNN is the vanishing gradient problem [seventeen, eighteen, ninteen, twenty], which arises because the gradients shrink as they are back-propagated through time. To solve the short-term memory problem, LSTM gates were created [seventeen]. The flow of information is regulated through gates, which can preserve the necessary parts of long sequences and discard the undesired ones. The building blocks of the LSTM are the cell state and its gates.
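The following minimal Keras sketch illustrates such a gated recurrent classifier over multi-channel EEG windows; replacing layers.LSTM with layers.GRU yields the GRU variant discussed below. Shapes and layer sizes are illustrative assumptions.

from tensorflow.keras import layers, models

def build_lstm(timesteps=256, n_channels=23, n_units=64):
    model = models.Sequential([
        layers.Input(shape=(timesteps, n_channels)),   # multi-channel EEG sequence
        layers.LSTM(n_units, return_sequences=True),   # keep the sequence for the next layer
        layers.LSTM(n_units),
        layers.Dense(1, activation="sigmoid"),         # binary seizure / non-seizure output
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

build_lstm().summary()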

Golmohammadi et al. [sixtyFive] evaluated two LSTM architectures with 3 and 4 layers together with a softmax classifier and obtained satisfactory results. In [fiftyOne], a 3-layer LSTM network is used for feature extraction and classification; the last layer of this network is a sigmoid classifier, and it achieved 96.82% accuracy. In the experiments of [fiftyNine], two LSTM and GRU architectures were employed; each comprised a reshape layer, four LSTM/GRU layers with activations, and one fully connected (FC) layer with a sigmoid activation. In another study, Yao et al. [eighty] evaluated ten different improved Independently Recurrent Neural Network (IndRNN) architectures and achieved the best accuracy using a 31-layer dense IndRNN with attention (DIndRNN).

Gated Recurrent Unit (GRU)

The GRU is a variation of the LSTM that combines the input and forget gates into a single update gate [seventeen, eighteen, ninteen, twenty], along with some other modifications. The gating signals are reduced to two: a reset gate and an update gate, which decide which information should be passed to the output. In one experiment, Chen et al. [fiftyOne] used a 3-layer GRU network with a sigmoid classifier and obtained 96.67% accuracy. A GRU-based epileptic seizure detection system was developed by Talathi et al. [eightyOne]; during preprocessing, the input signals were split into time windows and spectrograms were obtained from them, which were then fed to a 4-layer GRU network with a softmax FC layer in the classification stage, achieving 98% accuracy. In another study, Roy et al. [eightyTwo] employed a 5-layer GRU network with a softmax classifier and achieved remarkable results. Table IV summarizes the related works done using RNNs, and Figure 10 shows the accuracies (%) obtained by various authors using RNN models for seizure detection.

Work | Network | Number of Layers | Classifier | Accuracy (%)
[sixtyFive] | LSTM | 3, 4 | sigmoid | NA
[fiftyOne] | GRU | 3 | sigmoid | 96.67
[fiftyOne] | LSTM | 3 | sigmoid | 96.82
[fiftyFour] | 15-IndRNN | 48 | NA | 87.00
[fiftyFour] | LSTM | 4 | NA | 84.35
[fiftyNine] | LSTM, GRU | 6 | sigmoid | NA
[eightyThree] | RNN | NA | MLP with 2 layers (logistic sigmoid classifier) | NA
[eightyFour] | LSTM | 4 | softmax | 100
[eightyFive] | LSTM | 2, 5 | sigmoid | 95.54 (validation), 91.25 (test)
[eightySix] | LSTM | 4 | softmax | 100
[eightyEight] | LSTM | 3 | softmax | 97.75
[eighty] | ADIndRNN-(3,3) | 31 | NA | 88.70
[eightyOne] | GRU | 4 | LR | 98.00
[eightyTwo] | GRU | 5 | softmax | NA
[hundredTwentyNine] | LSTM | 4 | softmax | 100
TABLE IV: Summary of related works done using RNNs.
Fig. 10: Sketch of accuracy (%) obtained by authors using RNN models for seizure detection.

II-C3 Autoencoders

Standard Autoencoders

The AE is an unsupervised neural network model in which the output is the same as the input [seventeen, eighteen, ninteen, twenty]. The input is compressed to a latent-space representation, from which the output is then reconstructed; the compression and decompression functions are thus implemented by the neural network. An AE consists of three parts: the encoder, the code, and the decoder. Autoencoder networks are most commonly used for feature extraction or dimensionality reduction in brain signal processing. Figure 11 shows the general form of an AE used for epileptic seizure detection. As the first investigation in this section, Rajaguru et al. [eightyNine] separately compared multilayer autoencoders (MAE) and expectation-maximization with principal component analysis (EM-PCA) for dimensionality reduction, followed by a genetic algorithm (GA) for classification; they obtained an average classification accuracy of 93.78% when MAEs were used for dimensionality reduction combined with GA classification. In another work, an automated AE-based system for the diagnosis of epilepsy from EEG signals was proposed [ninty]: first, the harmonic wavelet packet transform (HWPT) was used to decompose the signal into frequency sub-bands, and then fractal features, including box-counting (BC), multi-resolution BC (MRBC), and the Katz fractal dimension (KFD), were extracted from each sub-band.

Fig. 11: Sample AE network which may be used for seizure detection.
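A minimal Keras sketch of the AE idea in Fig. 11, assuming a flattened EEG window (or feature vector) as input; the dimensions are illustrative, and the encoder can be reused for feature extraction as described above.

from tensorflow.keras import layers, models

input_dim, code_dim = 1024, 32
inputs = layers.Input(shape=(input_dim,))
encoded = layers.Dense(256, activation="relu")(inputs)
code = layers.Dense(code_dim, activation="relu")(encoded)      # latent representation
decoded = layers.Dense(256, activation="relu")(code)
outputs = layers.Dense(input_dim, activation="linear")(decoded)

autoencoder = models.Model(inputs, outputs)
encoder = models.Model(inputs, code)   # reusable for downstream feature extraction
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(X, X, epochs=50)     # unsupervised: the target is the input itself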

Other Types of Autoencoders

To create more robust representations, a number of schemes have been applied, such as the denoising AE (DAE), which tries to recreate the input from a corrupted version of it [goodfellow]; the stacked AE (SAE), which stacks several autoencoders on top of each other to go deeper [goodfellow]; and the sparse autoencoder (SpAE), which exploits sparse representations [goodfellow]. These methods can also pursue other objectives; for example, the DAE can be used to recover corrupted input.

Works in this section begin with Golmohammadi et al. [sixtyFive], who presented various deep networks, one of which is a stacked denoising AE (SDAE); their architecture consists of three layers, and the final results demonstrated the good performance of their approach. Qiu et al. [nintyOne] applied windowing and z-score normalization to the EEG signals during preprocessing and fed the preprocessed data into a denoising sparse AE (DSpAE) network, achieving an outstanding performance of 100% accuracy. In [nintyTwo], a high-performance automated EEG analysis system based on principles of machine learning and big data is presented, consisting of several parts: the signal features are first extracted using linear predictive cepstral coefficients (LPCC), and three passes are then applied for precise detection. The first pass is sequential decoding using hidden Markov models (HMM), the second pass performs temporal and spatial context analysis based on deep learning, and in the third pass a probabilistic grammar is employed.

In other research, Yan et al. [nintyThree] proposed a feature extraction and classification method based on the SpAE and the support vector machine (SVM): feature extraction of the input EEG signals is first performed by the SpAE, and classification is then carried out by the SVM. Another SAE architecture, named Wave2Vec, was proposed by Yuan et al. [nintyFour]; in the preprocessing stage the signals were framed, and in the deep network stage an SAE with softmax was applied, achieving 93.92% accuracy. Following the experiments of Yuan et al., different stacked sparse denoising AE (SSpDAE) architectures were tested and compared in [nintyFive]; feature extraction is performed by the SSpDAE network followed by softmax classification, yielding an accuracy of 93.64%. Table V summarizes the related works done using AEs, and Figure 12 compares the accuracies obtained by different researchers.

Work | Network | Number of Layers | Classifier | Accuracy (%)
[sixtyFive] | SDAE | 3 | NA | NA
[eightyNine] | MAE | NA | GA | 93.92
[ninty] | AE | 3 | softmax | 98.67
[nintySix] | AE | 1 | sigmoid | NA
[nintySeven] | SSpDAE | 2 hidden layers (intra-channel) + 3 hidden layers (cross-channel) + 2 hidden layers (FC) + classifier | softmax | 93.82
[nintyOne] | DSpAE | 3 | LR | 100
[nintyTwo] | SPSW-SDA, 6W-SDA, EYEM-SDA | 3 hidden layers each | LR | NA
[nintyThree] | SpAE | single-layer SpAE | SVM | 100
[nintyEight] | SSpAE | 3 hidden layers | softmax | 100
[nintyFour] | Wave2Vec | NA | softmax | 93.92
[nintyFour] | SSpDAE | 2 | softmax | 93.64
[nintyNine] | SAE | 3 | softmax | 86.50
[hundred] | SSpAE | 3 | softmax | 100
[hundredOne] | Deep SpAE | 3 | softmax | 100
[hundredTwo] | SAE | 3 (2 AE + classifier) | softmax | 96.00
[nintyFive] | SAE | 3 | softmax | 96.61
[hundredThree] | SSpAE | 3 (two sparse encoders as hidden layers + classifier) | softmax | 94.00
[hundredFour] | SAE | 3 | softmax | 88.80
TABLE V: Summary of related works done using autoencoders.
Fig. 12: Sketch of accuracy (%) versus authors obtained using AE models for seizure detection.

II-C4 Deep Belief and Boltzmann Networks

The restricted Boltzmann machine (RBM) is a variant of the Boltzmann machine and an undirected graphical model [seventeen]; unrestricted Boltzmann machines may also have connections between the hidden units. Stacking RBMs forms a DBN, so the RBM is the building block of the DBN. DBNs are unsupervised probabilistic hybrid generative deep learning models comprising latent and stochastic variables in multiple layers [seventeen, eighteen]. A variation of the DBN, the convolutional DBN (CDBN), scales successfully to high-dimensional data and exploits the spatial information of nearby pixels [seventeen, eighteen]. Deep Boltzmann machines (DBMs) are probabilistic, generative, unsupervised deep learning models that contain visible units and multiple layers of hidden units [seventeen, eighteen].
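As an illustration of the RBM building block described above, the sketch below stacks two scikit-learn BernoulliRBM layers in front of a logistic-regression classifier, which roughly approximates greedy layer-wise DBN pretraining followed by a supervised output layer; component counts, learning rates, and the synthetic data are illustrative assumptions.

import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

X = np.random.rand(200, 64)            # hypothetical EEG features scaled to [0, 1]
y = np.random.randint(0, 2, 200)

dbn_like = Pipeline([
    ("rbm1", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20)),
    ("rbm2", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20)),
    ("clf", LogisticRegression(max_iter=500)),
])
dbn_like.fit(X, y)                      # RBMs are fit unsupervised, the classifier supervised
print("training accuracy:", dbn_like.score(X, y))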

Xuyen et al. [hundredFive] used a DBN to identify epileptic spikes in EEG data; the proposed architecture consisted of three hidden layers and achieved an accuracy of 96.87%. In another study, Turner et al. [hundredsix] applied a DBN to diagnose epilepsy and found promising results. More information about DBN architectures for epileptic seizure detection is given in Table VI.

Work | Network | Number of Layers | Classifier | Accuracy (%)
[hundredFive] | DBN | 3 hidden layers | NA | 96.87
[hundredsix] | DBN | 3 | LR | NA
TABLE VI: Summary of related works done using DBNs.

II-C5 CNN-RNN

The CNN-RNN architecture is a highly efficient combination of deep learning networks for predicting and diagnosing epileptic seizures from EEG signals. Adding convolutional layers to an RNN helps to capture spatially nearby patterns effectively, while the recurrent component is well suited to time-series data. In [sixtyFive], numerous preprocessing schemes were applied, and a modified 13-layer CNN-LSTM architecture with a sigmoid output layer was proposed; the approach demonstrated better performance.
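A minimal Keras sketch of such a hybrid, assuming Conv1D feature extraction followed by a GRU layer; shapes and layer sizes are illustrative and do not correspond to any specific architecture reviewed below.

from tensorflow.keras import layers, models

def build_cnn_gru(timesteps=1024, n_channels=23):
    model = models.Sequential([
        layers.Input(shape=(timesteps, n_channels)),   # multi-channel EEG window
        layers.Conv1D(32, kernel_size=7, activation="relu"),
        layers.MaxPooling1D(4),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(4),
        layers.GRU(64),                                # temporal modelling of the CNN features
        layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

build_cnn_gru().summary()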

Roy et al. [fourtySix] used different CNN-RNN hybrid architectures to improve the experimental results. Their first network was a one-dimensional 7-layer CNN-GRU architecture, and the second was a three-dimensional (3D) CNN-GRU network. In another work, Roy et al. [eightyTwo] focused on normal and abnormal brain activity and suggested four different deep learning architectures; the proposed ChronoNet model, developed from the previous models, achieved 90.60% training and 86.57% test accuracy.

Fang et al. [hundredSeven] used the Inception-V3 network, which was first pre-trained; an RNN-based network called the spatial-temporal GRU (ST-GRU) CNN was then applied to fine-tune the architecture, achieving 77.30% accuracy. Choi et al. [hundredEight] proposed a multi-scale 3D-CNN with an RNN model for the detection of epileptic seizures. The CNN module output is used as the input to the RNN module, which consists of a GRU layer that extracts the temporal features of epileptic seizures, followed by an FC layer for the final classification. Generalized information from the CNN-RNN research is presented in Table VII and Figure 13.

Work | Network | Number of Layers | Classifier | Accuracy (%)
[sixtyFive] | 2D-CNN-biLSTM | 13 | sigmoid | NA
[fourtySix] | 1D-CNN-GRU | 7 | softmax | 99.16
[fourtySix] | TCNN-RNN | 10 | softmax | 95.22
[fourtyFour] | 2D-CNN-LSTM | VGG-16 | sigmoid | 95.19
[eightyTwo] | C-RNN | 8 | softmax | 83.58
[eightyTwo] | IC-RNN | 14 | softmax | 86.93
[eightyTwo] | C-DRNN | 8 | softmax | 87.20
[eightyTwo] | ChronoNet | 14 | softmax | 90.60
[hundredNine] | 2D-CNN-LSTM | 8 | NA | NA
[hundredSeven] | ST-GRU ConvNets | pre-trained Inception-V3 + GRU + FC | NA | 77.30
[hundredEight] | 3D-CNN-biGRU | NA | NA | 99.40
[hundredTwelve] | 2D-CNN-LSTM | 18 | softmax | 99.00
[hundredThirteen] | 1D-CNN-LSTM | 7, 8 | sigmoid | 89.73
TABLE VII: Summary of related works done using CNN-RNNs.
Fig. 13: Sketch of accuracy (%) versus different researchers obtained using CNN-RNN models for seizure detection.

II-C6 CNN-AEs

In addition to capturing nearby patterns, convolutional layers can reduce the number of parameters in structures such as autoencoders; these two properties make the combination suitable for tasks such as unsupervised feature extraction for epileptic seizure detection. A novel approach based on the CNN-AE was presented by Yuan et al. [hundredFourteen]. At the feature extraction stage, a deep AE and a 2D-CNN were used to extract unsupervised and supervised features, respectively: the unsupervised features were obtained directly from the input signals, and the supervised features were acquired from the spectrograms of the signals. Finally, a softmax classifier was used for classification, achieving 94.37% accuracy. In another investigation, Yuan et al. [hundredSixteen] proposed an approach called the deep fusional attention network (DFAN), which can extract channel-aware representations from multi-channel EEG signals. They developed a fusional attention layer that uses a fusional gate to integrate multi-view information and dynamically quantify the contribution of each biomedical channel. A multi-view convolutional encoding layer, in combination with a CNN, was also used to train the integrated deep learning model. Table VIII summarizes the related works done using CNN-AEs, and Figure 14 shows the accuracies (%) obtained by different researchers.
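A minimal Keras sketch of a convolutional autoencoder of this kind, assuming a spectrogram-like 2-D input; the dimensions are illustrative, and the encoder output could be passed to a separate classifier as in the works above.

from tensorflow.keras import layers, models

inputs = layers.Input(shape=(64, 64, 1))                 # e.g. an EEG spectrogram
x = layers.Conv2D(16, (3, 3), activation="relu", padding="same")(inputs)
x = layers.MaxPooling2D((2, 2))(x)
x = layers.Conv2D(8, (3, 3), activation="relu", padding="same")(x)
encoded = layers.MaxPooling2D((2, 2))(x)                 # 16x16x8 latent feature map

x = layers.Conv2DTranspose(8, (3, 3), strides=2, activation="relu", padding="same")(encoded)
x = layers.Conv2DTranspose(16, (3, 3), strides=2, activation="relu", padding="same")(x)
decoded = layers.Conv2D(1, (3, 3), activation="linear", padding="same")(x)

cnn_ae = models.Model(inputs, decoded)
cnn_ae.compile(optimizer="adam", loss="mse")
# cnn_ae.fit(X, X, ...)  # unsupervised reconstruction; reuse the encoder for classification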

Work | Network | Number of Layers | Classifier | Accuracy (%)
[hundredFourteen] | CNN-AE | 10 | softmax | 94.37
[hundredFifteen] | CNN-AE | 15 | different classifiers | 92.00
[hundredseventeen] | 1D-CNN-AE (feature extraction) + MLP/LSTM/BiLSTM (classification) | 16 + 3/1/1 | sigmoid, softmax | 100 (2 classes), 99.33 (3 classes)
[hundredEighteen] | CNN-ASAE, CNN-AAE | 8, 7 | LR | 66.00, 68.00
[hundredSixteen] | CNN-AE | NA | softmax | 96.22
TABLE VIII: Summary of related works done using CNN-AEs.
Fig. 14: Sketch of accuracy (%) versus authors obtained using CNN-AE models for seizure detection.

III Non-EEG-Based Epileptic Seizure Detection

III-A Medical Imaging Methods

Various deep learning models have been developed to detect epileptic seizures using MRI, structural MRI (sMRI), functional MRI (fMRI), resting-state fMRI (rs-fMRI), and PET scans, with or without EEG signals [hundredNinteen, hundredTwenty, hundredTwentyOne, hundredTwentyTwo, hundredTwentyThree, hundredTwentyFour, hundredTwentyFive, hundredTwentySix]. These models outperform conventional models in the automatic detection and monitoring of the disease. However, owing to the nature and difficulties of imaging methods, these models are mostly used for seizure localization and detection.

The authors of [hundredNinteen] proposed automatic localization and detection of focal cortical dysplasia (FCD) from MRI scans using a CNN; despite progress in the analysis of MRI scans, the FCD detection rate is only around 50%. Gill et al. [hundredTwenty] proposed a CNN-based algorithm with feature learning capability to detect FCD automatically. The researchers in [hundredTwentyOne] designed DeepIED, based on deep learning and EEG-fMRI scans of epilepsy patients, combining the general linear model with EEG-fMRI techniques to estimate the epileptogenic zone. Hosseini et al. [hundredTwentyTwo] proposed an edge-computing autonomic framework for evaluation, regulation, and monitoring of the epileptic brain, in which the epileptogenic network was estimated using rs-fMRI and EEG. Shiri et al. [hundredTwentySix] presented a technique for direct attenuation correction of PET images from emission data using a CNN-AE; nineteen radiomic features from 83 brain regions were evaluated for image quantification via the Hammersmith atlas. The summary of related works using medical imaging methods and deep learning is shown in Table IX.

Work | Network | Number of Layers | Classifier | Accuracy (%)
[hundredNinteen] | 2D-CNN | 30 | sigmoid | 82.50
[hundredTwenty] | 2D-CNN | 11 | softmax | NA
[hundredTwentyOne] | ResNet, Triplet | 31 | softmax | NA
[hundredTwentyTwo] | 2D-CNN | NA | SVM | NA
[hundredTwentyThree] | 2D-CNN, 3D-CNN | 11 | softmax | 89.80, 82.50
[hundredTwentyFour] | 2D-CNN | NA | NA | NA
[hundredTwentyFive] | ResNet, VGGNet, Inception-V3, SVGG-C3D | 14 | sigmoid | 98.22
[hundredTwentySix] | Deep Direct Attenuation Correction (Deep-DAC) | 44 | tanh | NA
TABLE IX: Summary of related works done using medical imaging methods and deep learning.

III-B Other Detection Methods

Ravi Prakash et al. [hundredThirteen] introduced a deep learning based algorithm for ECoG-based functional mapping (ECoG-FM) to identify the eloquent language cortex; however, the success rate of ECoG-FM is low compared with electro-cortical stimulation mapping (ESM). In another work, Rosas-Romero et al. [hundredTwentySeven] used fNIRS to detect epileptic seizures and obtained better performance than with conventional EEG signals.

IV Hardware and Software Used for Epileptic Seizure Detection

Their high performance and robustness to noise have made deep learning algorithms suitable for commercial products, and various commercial deep learning products have been developed, including applications and hardware for diagnosing epileptic seizures. In the first study investigated, a brain-computer interface (BCI) system using an AE for epileptic seizure detection was developed by Hosseini et al. [hundredThree]. In another study, Singh et al. [hundredFour] presented a practical product for the diagnosis of epileptic seizures comprising a user segment and a cloud segment. The block diagram of the system proposed by Singh et al. is shown in Figure 15.

Fig. 15: Block diagram of proposed epileptic seizure detection system using deep learning methods with EEG signals.

Kiral-Kornek et al. [hundredTwentyEight] demonstrated that deep learning in combination with neuromorphic hardware could help in developing a wearable, real-time, always-on, patient-specific seizure warning system with low power consumption and reliable long-term performance.

V Discussion

Anticipating and timely recognition of epileptic seizures is essential, as it directly influences the quality of life of patients with this disease and can enhance their confidence at all stages of life. Much research has been carried out on the automated diagnosis of epileptic seizures, but often without using graphical processing units (GPUs), and hence the resulting systems may not be usable in real-time applications. So far, no efficient software or functional hardware has been widely implemented to recognize the disease, and until recently most proposed machine learning methods for automatic seizure detection could not be used in real time. Recent years of research into the diagnosis of epileptic seizures have led to the emergence of deep learning algorithms, and experts in artificial intelligence and signal processing are confident that these methods can lead to the implementation of concrete and functional tools. Table X in the Appendix gives an overview of the works done in this area, including the dataset used, the implementation tool, the preprocessing, the deep learning network, and the evaluation methods.

As shown in this study, various deep learning structures are applied for epileptic seizure detection, yet none of them is clearly superior to the others. The best structure should be chosen carefully based on the dataset and problem characteristics, such as the need for real-time detection, the minimum acceptable accuracy, or the availability of pre-trained models. Many datasets and models are available, which makes direct comparison difficult, as the reported systems have been developed on different datasets with different models. Overall, one of the most important advantages of deep learning algorithms is their high performance, which is why such models have been widely used for many applications. Another advantage is their robustness to noise, so noise removal can be omitted in many applications. However, they need more data and longer training, so developing a robust model is time-consuming and requires a huge amount of data.

VI Challenges

The challenges in the automated detection of epileptic seizures using deep learning are as follows. Firstly, many datasets contain only selected segments of EEG signals, which is not suitable for real-world applications where detection must be performed on continuous, real-time signals, and clinical datasets are usually not publicly available. Secondly, the few datasets available in this area have different sampling frequencies, and the amount of data available to train the models is not sufficient to obtain robust and accurate models; hence, large public datasets are needed. Lastly, deep learning models require massive computational resources, which are expensive and not accessible to everyone. Researchers also need to focus on the early detection of epilepsy (interictal periods) and on seizure prediction, which would significantly improve the quality of life of patients and their family members.

VII Conclusion and Future Works

In this paper, a comprehensive review of the works done in the field of epileptic seizure detection using various deep learning techniques such as CNNs, RNNs, and AEs is presented. Various screening methods using EEG and MRI have been developed. We have also investigated the deep learning based practical and applied hardware used for diagnosing epileptic seizures. It is encouraging that much future research will concentrate on hardware and practical applications to aid in the accurate detection of such diseases, and functional hardware has already been used to boost the performance of detection strategies. Furthermore, models can be hosted in the cloud by hospitals, so that handheld applications, mobile devices, or wearables can be equipped with such models while the computations are performed by cloud servers. Patients may also benefit from predictive models for epileptic seizures and take timely measures to avert them: alert messages can be sent to the family, relatives, the concerned hospital, and the doctor when a seizure is detected through handheld or wearable devices, so the patient can be provided with proper treatment in time. Moreover, a cap with EEG electrodes can acquire the EEG signals and send them to a model hosted in the cloud to achieve real-time detection. Additionally, if the early stage of a seizure can be detected from interictal EEG, the patient can take medication immediately and prevent the seizure. This field requires further research combining different screening methods for more precise and faster detection of epileptic seizures, as well as applying semi-supervised and unsupervised methods to overcome dataset size limits. Finally, publicly available comprehensive datasets can help develop accurate and robust models that detect seizures at an early stage.

Appendix A

Table X shows the detailed summary of deep learning methods employed for automated detection of epileptic seizures.

Work | Dataset | Tools | Preprocessing | Network | K-fold | Classifier | Accuracy (%)
[twentyNine] | Clinical | NA | Spectrogram | 2D-CNN | NA | LR | 87.51
[sixtyOne] | Clinical | MATLAB | Normalization | 2D-CNN | NA | softmax | NA
[sixtyTwo] | Clinical, CHB-MIT | NA | Filtering | 1D-CNN with 2D-CNN | NA | sigmoid | 90.50 (Clinical), 85.60 (CHB-MIT)
[sixtyFour] | Clinical | Octave, Keras, Theano | Filtering, re-referencing, down-sampling | 2D-CNN | NA | softmax | NA
[sixtyFive] | TUH EEG, Clinical | NA | Filtering | CNN-RNN | NA | different activation functions | NA
[fourtySix] | TUH EEG | NA | Different methods | 1D-CNN-GRU | NA | softmax | 99.16
[thirty] | Clinical | Keras | Down-sampling, z-normalization, augmentation | SeizNet | NA | NA | NA
[thirtyOne] | Clinical | Python 3.6, PyTorch | Z-score normalization, STFT | 1D-CNN, 2D-CNN | NA | softmax | NA
[thirtyTwo] | CHB-MIT | PyTorch | Visualization | 2D-CNN | NA | softmax | 98.05
[thirtyThree] | Clinical | NA | Filtering, visualization, normalization | 2D-CNN | 10 | softmax | NA
[thirtyFour] | TUH EEG | PyTorch | DivSpec | SeizureNet | 5 | softmax | NA
[fourtyFour] | Clinical | Caffe, OpenCV, Keras, Theano | Different methods | FRCNN with 2D-CNN; FRCNN with 2D-CNN-LSTM | 5 | SVM; sigmoid | 95.19
[thirtyFive] | TUH EEG | TensorFlow | Feature extraction | 2D-CNN | 10 | softmax | 74.00
[sixtyEight] | Bern-Barcelona, Clinical | Octave, Keras | Filtering, EMD, DWT, Fourier | 2D-CNN | 5 | sigmoid, softmax | 99.50
[thirtySix] | Bern-Barcelona | TensorFlow | STFT, z-score normalization | 2D-CNN | 10 | softmax | 91.80
[thirtySeven] | Clinical | NA | STFT | TGCN | NA | sigmoid | NA
[thirtyEight] | Bonn | NA | DWT | 2D-CNN | 10 | softmax | 100
[sixtyNine] | Bonn | Keras | CWT | 2D-CNN | 10 | softmax | 100
[thirtyNine] | Bonn | MATLAB | Filtering | 2D-CNN | NA | softmax | 99.60, 90.10
[seventy] | CHB-MIT | MATLAB, TensorFlow | Over-sampling, FFT, WPD | 2D-CNN, 3D-CNN | 5 | MV-TSK-FS | 98.35
[fourty] | CHB-MIT | NA | Spatial representation | 2D-CNN | NA | softmax | 99.48
[fourtyOne] | Clinical | MATLAB | Different methods | 2D-CNN | 10 | sigmoid, RF | NA
[seventyTwo] | CHB-MIT, Clinical | NA | MAS | 2D-CNN | 5 | KELM | 99.33
[fourtyNine] | Clinical | TensorFlow | Filtering, down-sampling | 1D-CNN | 4 | softmax, SVM | 83.86
[fourtyTwo] | Bern-Barcelona | Caffe | NA | AlexNet, GoogLeNet, LeNet | NA | softmax | 100
[fourtyThree] | UCI | PyTorch | Signal2Image | 2D one-layer CNN | NA | DenseNet | 85.30
[fourtyFive] | Clinical | Chainer | Filtering, visualization | 2D-CNN | NA | softmax | NA
[seventyThree] | Bonn | TensorFlow | Data augmentation | P-1D-CNN | 10 | majority voting | 99.10
[fourtySeven] | Bonn | MATLAB | Z-score normalization | 1D-CNN | 10 | softmax | 86.67
[seventyFour] | CHB-MIT | NA | Filtering, augmentation | MPCNN | NA | softmax | NA
[fourtyEight] | Clinical | Keras | Down-sampling, filtering | 1D-FCNN | 5 | softmax | NA
[seventySix] | TUH EEG | Keras | Normalization and standardization | 1D-CNN | NA | softmax | 79.34
[seventyFive] | Clinical | Theano, Lasagne library | Filtering | 1D-CNN | NA | binary LR | NA
[fifty] | CHB-MIT | NA | DWT, feature extraction, normalization | 1D-CNN | 10 | NA | 99.07
[fiftyOne] | Bonn | NA | DWT, normalization | 1D-CNN | 5 | sigmoid | 97.27
[fiftyTwo] | Bonn | NA | Normalization | 1D-TCNN | NA | NA | 100
[fiftyThree] | Bonn | NA | EMD, MPF | 1D-CNN | 10 | softmax | 98.60
[fiftyFour] | CHB-MIT | NA | Windowing | IndRNN | 10 | NA | 87.00
[seventyEight] | Bonn, Bern-Barcelona | TensorFlow | Filtering, z-score normalization | 1D-CNN | NA | softmax | 99.00 (Bonn), 91.80 (Bern-Barcelona)
[fiftyFive] | CHB-MIT, Clinical | PyTorch | Filtering | 1D-PCM-CNN | 5 | softmax | NA
[seventyNine] | CHB-MIT | NA | MIDS, WGANs | 1D-CNN | NA | softmax | 84.00
[fiftySix] | Clinical | NA | Down-sampling, PSD, FFT | 1D-CNN | 4 | sigmoid | 86.29
[fiftySeven] | CHB-MIT | TensorFlow | Filtering | 1D-CNN | 4 | softmax | NA
[fiftyEight] | NA | Keras, TensorFlow, MATLAB | Down-sampling, filtering, data augmentation | CNN-BP | 5 | sigmoid | NA
[fiftyNine] | Clinical | NA | Filtering, DWT | 1D-CNN, LSTM, GRU | NA | sigmoid, RF, SVM | NA
[eightyThree] | CHB-MIT | MATLAB | Filtering, montage mapping | DRNN | NA | MLP | NA
[hundredTwentyNine] | Bonn | NA | Filtering | LSTM | NA | softmax | 100
[eightyFour] | Bonn | MATLAB, Keras, TensorFlow | Filtering | LSTM | 3, 5, 10 | softmax | 100
[eightyFive] | Bonn | Keras | Windowing | LSTM | 10 | sigmoid | 91.25
[eightySix] | Bonn | MATLAB, Keras, TensorFlow | Filtering | LSTM | 3, 5, 10 | softmax | 100
[eightyEight] | Freiburg | Anaconda Navigator | Normalization, filtering | LSTM | 5 | softmax | 97.75
[eighty] | CHB-MIT, Bonn | NA | Windowing | ADIndRNN | 10 | NA | 88.70
[eightyOne] | Bonn | Keras | Auto-correlation | GRU | NA | LR | 98
[eightyTwo] | TUH EEG | NA | TCP | ChronoNet | NA | softmax | 90.60
[eightyNine] | Clinical | NA | Windowing | AE with EM-PCA | NA | GA | 93.92
[ninty] | Bonn | MATLAB | Filtering, HWPT, FD | AE | NA | softmax | 98.67
[nintySix] | Clinical | TensorFlow | Down-sampling, filtering, normalization | AE | NA | sigmoid | NA
[nintySeven] | CHB-MIT | NA | STFT | SSDA | NA | softmax | 93.82
[nintyOne] | Bonn | MATLAB | Z-score normalization, standardization | DSAE | NA | LR | 100
[nintyTwo] | TUH EEG | Open-source toolkits, Theano | Different methods | SDA | NA | LR | NA
[nintyThree] | Bonn | NA | Filtering | SAE | NA | SVM | 100
[nintyEight] | Bonn | NA | Normalization | SSAE | NA | softmax | 100
[nintyFour] | CHB-MIT | Theano | Scalogram | Wave2Vec | NA | softmax | 93.92
[hundredFourteen] | CHB-MIT | PyTorch | Data augmentation, STFT | CNN-AE | 5 | softmax | 94.37
[nintyNine] | Clinical | NA | Filtering, CWT, feature extraction | SAE | NA | softmax | 86.50
[hundred] | Bonn | NA | Taguchi method | SSAE | NA | softmax | 100
[hundredOne] | Clinical | NA | Dimension reduction, ESD | DeSAE | NA | softmax | 100
[hundredTwo] | Bonn | NA | DWT | SAE | NA | softmax | 96.00
[nintyFive] | CHB-MIT | NA | Different methods | mSSDA | NA | softmax | 96.61
[hundredThree] | Clinical | MATLAB | PCA, I-ICA | SSAE | NA | softmax | 94
[hundredFour] | Bonn | MATLAB | Windowing | SAE | NA | softmax | 88.80
[hundredFive] | Clinical | MATLAB | DWT | DBN | NA | NA | 96.87
[hundredsix] | Clinical | Theano | Normalization, feature extraction, standardization | DBN | NA | LR, SVM, KNN | NA
[hundredNine] | CHB-MIT | NA | Image-based representation | 2D-CNN-LSTM | NA | NA | NA
[hundredSeven] | Clinical | TensorFlow | Filtering | ST-GRU ConvNets | NA | NA | 77.30
[hundredEight] | CHB-MIT, Clinical | NA | STFT, 2D mapping | 3D-CNN with BiGRU | NA | NA | 99.40
[hundredTwelve] | CHB-MIT | NA | Visualization | 2D-CNN-LSTM | NA | softmax | 99.00
[hundredThirteen] | Clinical ECoG | NA | Filtering | 1D-CNN-LSTM | 5 | sigmoid | 89.73
[hundredFifteen] | CHB-MIT, Bonn | Scikit-learn | Channel selection | CNN-AE | 5, 10 | different methods | 92.00
[hundredseventeen] | Bonn | NA | Windowing | 1D-CNN with BiLSTM | NA | softmax, sigmoid | 99.33, 100
[hundredEighteen] | Clinical | Theano | Mapping | ASAE-CNN, AAE-CNN | NA | LR | 66.00, 68.00
[hundredSixteen] | CHB-MIT | PyTorch | STFT | CNN-AE | 5 | softmax | 96.22
[hundredNinteen] | SCTIMST | FSL, Keras, TensorFlow | Noise reduction with BM3D, skull stripping, segmentation, postprocessing | 2D-CNN | 5 | sigmoid | NA
[hundredTwenty] | Clinical MRI | NA | Different methods | 2D-CNN | 5 | softmax | NA
[hundredTwentyOne] | Clinical MRI | Brain Vision Analyzer | Filtering, ICA, BCG, GLM, MCS | ResNet, Triplet | NA | softmax | NA
[hundredTwentyTwo] | ECoG and rs-fMRI datasets | GIFT, FSL, FreeSurfer | Different methods | 2D-CNN | NA | SVM | NA
[hundredTwentyThree] | Clinical MRI | NA | Scaling down | 3D-CNN | 5 | softmax | 89.80
[hundredTwentyFour] | Clinical MRI | FSL | Connectivity feature extraction | 2D-CNN | NA | NA | NA
[hundredTwentyFive] | ImageNet, Kaggle pulmonary nodules, Clinical PET | DPABI, Python, Keras, TensorFlow | ROI, normalization, AAL, CNNI, down-sampling, NNI (3D images) | 2D-ResNet, 2D-VGGNet, 2D-Inception-V3, 3D-SVGG-C3D | NA | sigmoid | 98.22
[hundredTwentySix] | Clinical PET | TensorFlow | OSEM, data augmentation, radiomics features | DAC | NA | tanh | NA
TABLE X: Summary of deep learning methods employed for automated detection of epileptic seizures.

References