An End-to-End Deep Learning Approach for Epileptic Seizure Prediction

08/17/2021 ∙ by Yankun Xu, et al.

An accurate seizure prediction system enables early warnings before seizure onset for epileptic patients, which is extremely important for drug-refractory patients. Conventional seizure prediction works usually rely on features extracted from Electroencephalography (EEG) recordings and classification algorithms such as regression or support vector machine (SVM) to locate the short time window before seizure onset. However, such methods cannot achieve high-accuracy prediction due to the information loss of hand-crafted features and the limited classification ability of regression and SVM algorithms. In this paper, we propose an end-to-end deep learning solution using a convolutional neural network (CNN). One- and two-dimensional kernels are adopted in the early- and late-stage convolution and max-pooling layers, respectively. The proposed CNN model is evaluated on the Kaggle intracranial and CHB-MIT scalp EEG datasets. Overall sensitivity, false prediction rate, and area under the receiver operating characteristic curve reach 93.5%, 0.063/h, 0.981 and 98.8%, 0.074/h, 0.988 on the two datasets, respectively. Comparison with state-of-the-art works indicates that the proposed model achieves superior prediction performance.


I Introduction

Epilepsy is one of the most common neurological diseases worldwide. Most patients with epilepsy are treated with long-term drug therapy, but approximately a third of patients are drug-refractory[25, 17, 7]. Patients with drug-resistant epilepsy can benefit from interventions in advance of seizure onset, which reduce the possibility of severe comorbidities and injuries[10, 35]. Therefore, a closed-loop system with accurate seizure forecasting capacity plays an important role in improving patients' quality of life and lowering the cost of healthcare resources[28, 6]. However, seizure prediction faces many difficulties and challenges, and several previous studies even stated that epileptic seizures are hard to predict[16, 22, 11].

Electroencephalography (EEG) is an electrophysiological technique used to record the electrical activity of the brain. Scalp and intracranial EEG are usually utilized for epileptic seizure monitoring. EEG recordings from epileptic patients can be divided into several intervals: ictal (seizure onset), preictal (a short time before the seizure), postictal (a short time after the seizure), and interictal (between seizures but excluding preictal and postictal). The prediction task is to identify the preictal state so that it can be differentiated from the interictal, ictal, and postictal states, and the major challenge in seizure prediction is to differentiate between the preictal and interictal states[19, 38, 1].

Over the past decade, several research activities have used EEG recordings to predict seizures[12, 23, 36]. Most researchers make use of manually selected features within the frequency domain to make predictions[5, 8, 14]. However, hand-crafted feature engineering has several limitations. First, it cannot be generalized to EEG recordings acquired from various devices. Second, some important information might be lost during the feature extraction process. Third, the feature extraction step increases the computational complexity for real-time applications. Hence, recent work relies on deep learning to automatically learn features from EEG recordings[12, 3, 37]. However, these works treat EEG as conventional 2-dimensional (2D) signals such as images and fail to consider its unique spatial-temporal characteristics[13, 27].

In this study, an end-to-end patient-specific approach using a convolutional neural network (CNN) is proposed to address the seizure prediction challenge[26]. The proposed network adopts 1-dimensional (1D) kernels in the early-stage convolution and max-pooling layers to make use of the EEG signal redundancy in the time domain and preserve information in the spatial domain[31, 21]. 2D kernels are used in the late-stage layers to combine the information from multiple channels and make highly accurate predictions.

The remaining parts of this paper are organized as follows. Section II describes the two benchmark datasets used to develop the proposed seizure prediction algorithm. Section III introduces the proposed model architecture and its implementation. Performance evaluation and comparison with the state-of-the-art are carried out in Section IV. The last section concludes the paper and discusses our contribution.

II Benchmark EEG Datasets

In this work, our model is implemented and evaluated on two widely used benchmark EEG datasets.

  • Intracranial EEG dataset: The Kaggle dataset consists of 5 dogs and 2 human patients[9]. All five dogs are sampled at 400 Hz; four of them are recorded with 16 electrodes and one with 15 electrodes. The two human patients are sampled at 5000 Hz; one is recorded with 15 electrodes and the other with 24 electrodes.

  • Scalp EEG dataset: The CHB-MIT dataset, originally collected for seizure detection, contains 23 subjects from 22 patients[15, 33]. There are 637 recordings in total in the dataset, and each recording may contain no seizure or several seizures. All subjects are recorded at the same 256 Hz sample rate, but the electrode settings differ among patients. 15 subjects are recorded with a fixed 23-electrode configuration, while one or several changes of the electrode setting occur for the remaining subjects.

In this study, only lead seizures, which are defined as seizures occurring at least 4 h after the previous seizure, are considered[19]. Furthermore, for the Kaggle dataset, the two human patients are excluded since they are recorded with a significantly different signal acquisition setting. For the CHB-MIT dataset, we only consider subjects with a fixed electrode configuration and no fewer than 3 lead seizures, leaving 7 subjects suitable for experiments.
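The minimal sketch below illustrates the 4-hour lead-seizure rule stated above. The onset times and the helper name are hypothetical, and treating the first seizure of a recording as a lead seizure is an assumption rather than a detail taken from the paper.

```python
# Hedged sketch of lead-seizure selection: keep only seizures whose onset is
# at least 4 h after the previous seizure. Onset times are in seconds and are
# purely illustrative.
LEAD_GAP_S = 4 * 3600  # minimum gap between consecutive seizures (4 h)

def select_lead_seizures(onsets_s):
    """Return onsets occurring at least 4 h after the previous seizure."""
    lead, prev = [], None
    for onset in sorted(onsets_s):
        if prev is None or onset - prev >= LEAD_GAP_S:
            lead.append(onset)  # assumption: the first seizure counts as a lead seizure
        prev = onset
    return lead

# Example: the second seizure follows the first by only 1 h and is discarded.
print(select_lead_seizures([1_000, 4_600, 20_000]))  # -> [1000, 20000]
```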

Fig. 1: An example of an EEG recording from an epileptic patient. Different timing definitions are shown. The seizure prediction horizon (SPH) is an interval between the preictal interval and seizure onset. Except for the preictal, postictal, ictal, and SPH intervals, the remaining recording belongs to the interictal interval.
Fig. 2: Architecture of the proposed model. Conv and MP stand for the convolution and max-pooling operations, respectively. When we feed an input of shape Height × Width (H × W) to this model, the output with the softmax function generates the probability of the input belonging to the preictal state. The model consists of 5 convolution layers with ReLU activation functions and 2 fully-connected (FC) layers with sigmoid activation functions. There is no batch normalization or regularization in this model. The description on top of the connection between convolution blocks shows the size of the convolution and max-pooling kernels, and the description at the bottom of each convolution block shows the number of kernels and the shape of each layer. The two FC layers contain 256 and 64 nodes, respectively.

Figure 1 shows the timing definitions of the different EEG signal contents. In the seizure prediction task, a short interval between the end of the preictal interval and seizure onset is defined as the seizure prediction horizon (SPH). The preictal interval length (PIL) and SPH are two empirical parameters determining which interval of the recording is chosen for experiments. For the Kaggle dataset, a 1 h PIL and a 5 min SPH were already set during dataset preparation, while for CHB-MIT, a 30 min PIL and a 5 min SPH are chosen for fair comparison with other state-of-the-art works.

In this study, two categories of samples are extracted from the interictal and preictal intervals, respectively, with a fixed 20 s time window, as shown by the dashed box in Fig. 1. The width of an input sample is the number of data points in the window, and its height is the number of recording channels. Hence the input shapes for the two datasets are 16 (or 15) × 8000 and 23 × 5120, respectively: 16 (or 15) and 23 are the numbers of channels, and 8000 and 5120 are the numbers of selected data points, which equal the time window (20 s) multiplied by the respective sample rates (400 Hz and 256 Hz). However, the raw EEG recording of an epileptic patient contains a much longer interictal interval than preictal interval, and this imbalanced training sample problem may cause the trained model to perform poorly[4, 20]. To overcome it, we extract interictal samples from the EEG recordings without any overlap, while preictal samples are extracted with a 5 s overlap. Thus, depending on the raw EEG recordings of the various cases, different numbers of interictal and preictal samples are extracted.
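As a rough illustration of this windowing scheme, the sketch below slices a (channels × samples) array into 20 s windows. The array contents, the helper name, and the interpretation of the "5 s overlap" as the amount shared by consecutive windows are assumptions, not details taken from the authors' code.

```python
# Hedged sketch of 20 s window extraction with optional overlap.
import numpy as np

def extract_windows(eeg, fs, window_s=20, overlap_s=0):
    """Slice a (channels, samples) EEG array into fixed-length windows."""
    win = int(window_s * fs)
    step = win - int(overlap_s * fs)  # no overlap -> step equals the window length
    n_ch, n_samp = eeg.shape
    return np.stack([eeg[:, s:s + win]
                     for s in range(0, n_samp - win + 1, step)])

fs = 256                                    # CHB-MIT sample rate
interictal = np.random.randn(23, fs * 120)  # 2 min of synthetic 23-channel EEG
preictal   = np.random.randn(23, fs * 120)

x_inter = extract_windows(interictal, fs)             # non-overlapping windows
x_pre   = extract_windows(preictal, fs, overlap_s=5)  # 5 s overlap (assumed stride)
print(x_inter.shape, x_pre.shape)  # (6, 23, 5120) (7, 23, 5120)
```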

III Proposed CNN Model

Unlike image data, whose vertical and horizontal resolutions are of the same order of magnitude, the formulated EEG signal samples have thousands of elements along the time axis but only a few along the channel axis. This unique feature distinguishes EEG from images. The large redundancy in both the vertical and horizontal directions of a 2D image makes 2D convolution perfectly suitable for extracting hidden features from images[30]. However, the EEG signal is not redundant along the channel axis, because each channel records a different area of the brain. Based on these considerations, the proposed model adopts 1D and 2D kernels for the convolution and max-pooling operations in the early- and late-stage layers, respectively. The initial 1D convolution kernels are able to learn features within the time domain, and the 1D max-pooling kernels help keep only significant information. The remaining convolution and max-pooling layers utilize 2D kernels to learn features within both the spatial and time domains. If only 2D kernels were adopted in the network, the channel dimension would soon shrink, resulting in a shallow network structure.

Our CNN model is shown in Fig. 2. Except for the input and output layers, the proposed model consists of 5 convolution layers and 2 fully-connected (FC) layers. Each convolution layer is followed by a rectified linear unit (ReLU) activation function and one max-pooling layer[29]. The sizes of the convolution and max-pooling kernels are shown in Fig. 2: the first two convolution blocks, the third convolution block, and the remaining two convolution layers each use a different combination of convolution and max-pooling kernel sizes. The number of kernels for the five convolution layers is 16, 32, 64, 128, and 256, respectively. The two FC layers, with 256 and 64 nodes respectively, are each followed by a sigmoid activation function. The output layer uses a softmax activation function for binary classification, and binary cross entropy is used as the loss function[32]. A single dropout layer with a 0.5 dropout rate is placed between the last convolution layer and the first FC layer[34].
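A minimal tf.keras sketch of this architecture is given below. The filter counts, FC sizes, dropout rate, activations, loss, and optimizer settings follow the description above, but the (1, 5) and (3, 3) kernel shapes and the pooling sizes are assumptions chosen only to illustrate the 1D-early / 2D-late design, since the exact sizes appear in Fig. 2 rather than in the text; the code is also written against the current tf.keras API rather than the Keras 2.2 / TensorFlow 1.13 stack used by the authors.

```python
# Hedged sketch of the proposed CNN; kernel and pooling sizes are assumed.
from tensorflow.keras import layers, models, optimizers

def build_model(n_channels=23, n_points=5120):
    inp = layers.Input(shape=(n_channels, n_points, 1))
    x = inp
    # Early stage: 1D kernels acting along the time axis only (sizes assumed).
    for filters in (16, 32):
        x = layers.Conv2D(filters, (1, 5), padding="same", activation="relu")(x)
        x = layers.MaxPooling2D((1, 4))(x)
    # Late stage: 2D kernels combining channel and time information (sizes assumed).
    for filters in (64, 128, 256):
        x = layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(x)
        x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Flatten()(x)
    x = layers.Dropout(0.5)(x)                      # single dropout layer, rate 0.5
    x = layers.Dense(256, activation="sigmoid")(x)  # FC layers with sigmoid
    x = layers.Dense(64, activation="sigmoid")(x)
    out = layers.Dense(2, activation="softmax")(x)  # interictal vs. preictal
    model = models.Model(inp, out)
    model.compile(
        optimizer=optimizers.Adam(learning_rate=1e-5, beta_1=0.9, beta_2=0.999),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model

model = build_model()  # CHB-MIT-shaped input: 23 channels x 5120 points
model.summary()
```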

We adopt the Adam optimizer for loss minimization, with a learning rate, β1, and β2 of 1e-5, 0.9, and 0.999, respectively[24]. Even though we extract preictal samples with overlap, the number of training samples is still imbalanced. To overcome this issue, we randomly feed an equal number of interictal and preictal samples from the training set to the model in each training epoch. Training of each case is stopped after 100 epochs. Our model is implemented in Python 3.6 using Keras 2.2 with a TensorFlow 1.13 backend on a single NVIDIA 2080 Ti GPU[2]. In each epoch, 6400 samples are trained, and the training process takes 270 s and 30 s on the Kaggle and CHB-MIT datasets, respectively.
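The class-balanced training scheme described above can be sketched as follows. It reuses `model`, `x_inter`, and `x_pre` from the earlier sketches; the per-epoch sample count of 6400 (3200 per class) follows the paper, while the batch size and sampling with replacement are assumptions.

```python
# Hedged sketch: each epoch draws an equal number of interictal and preictal
# windows at random, so every epoch sees a balanced training set.
import numpy as np

def balanced_epoch(x_inter, x_pre, n_per_class=3200, rng=np.random.default_rng(0)):
    """Return a shuffled, class-balanced batch of windows and one-hot labels."""
    idx_i = rng.choice(len(x_inter), n_per_class, replace=True)  # replacement is an assumption
    idx_p = rng.choice(len(x_pre), n_per_class, replace=True)
    x = np.concatenate([x_inter[idx_i], x_pre[idx_p]])[..., None]  # add trailing image channel
    y = np.zeros((2 * n_per_class, 2), dtype="float32")
    y[:n_per_class, 0] = 1.0   # interictal
    y[n_per_class:, 1] = 1.0   # preictal
    order = rng.permutation(len(x))
    return x[order], y[order]

for epoch in range(100):  # training stops after 100 epochs
    x_batch, y_batch = balanced_epoch(x_inter, x_pre)
    model.fit(x_batch, y_batch, batch_size=32, epochs=1, verbose=0)  # batch size assumed
```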

TABLE I: Seizure prediction performance of the proposed model on the Kaggle dataset.

Subject   No. of LS/seizures   Sensitivity (%)   FPR (/h)   AUC
Dog 1     4/4
Dog 2     7/7
Dog 3     12/12
Dog 4     16/16
Dog 5     5/5
Overall   44/44                93.5              0.063      0.981

TABLE II: Seizure prediction performance of the proposed model on the CHB-MIT dataset.

Subject   No. of LS/seizures   Sensitivity (%)   FPR (/h)   AUC
chb01     3/7
chb05     3/5
chb06     6/10
chb08     3/5
chb10     5/7
chb14     4/8
chb22     3/3
Overall   27/45                98.8              0.074      0.988

LS: Lead Seizures; FPR: False Prediction Rate; AUC: Area Under Curve.

IV Results

We first evaluate our model with standard metrics and then compare it with two other works that achieve state-of-the-art performance. Sensitivity, false prediction rate (FPR), the receiver operating characteristic (ROC) curve, and the area under the curve (AUC) are evaluated in this study[39, 18]. For each subject, we split off 20% of the interictal and preictal samples for validation purposes, and the remainder is used for training the model. 10 independent runs with different initializers are carried out to generate the mean and standard deviation of each metric in order to evaluate the robustness of the model, and the validation set is chosen randomly for each run.
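For reference, the sketch below computes these window-level metrics with scikit-learn. The 0.5 decision threshold, the conversion of false positives into an hourly rate via the 20 s window length, and the toy labels are assumptions, since the paper does not spell out its alarm-generation rule.

```python
# Hedged sketch of sensitivity, FPR per hour, and AUC at the window level.
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(y_true, p_preictal, window_s=20, threshold=0.5):
    """y_true: 0 = interictal, 1 = preictal; p_preictal: predicted probabilities."""
    y_pred = (p_preictal >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    interictal_hours = np.sum(y_true == 0) * window_s / 3600.0
    sensitivity = tp / (tp + fn)
    fpr_per_hour = fp / interictal_hours  # false alarms per interictal hour
    auc = roc_auc_score(y_true, p_preictal)
    return sensitivity, fpr_per_hour, auc

# Toy example with 6 windows (3 interictal, 3 preictal).
y_true = np.array([0, 0, 0, 1, 1, 1])
p_pre  = np.array([0.1, 0.4, 0.6, 0.7, 0.8, 0.9])
print(evaluate(y_true, p_pre))
```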

Tables I and II show results including subject information and measured metrics for the two datasets respectively. We take an average of each metric as an overall measurement.

The performance of the proposed model on the Kaggle dataset reaches an average 93.5% sensitivity, 0.063/h FPR, and 0.981 AUC. The overall standard deviations of the three metrics are 3.3%, 0.02/h, and 0.002, which indicates that the model performance is quite robust on this dataset. For the CHB-MIT dataset, the overall sensitivity, FPR, and AUC reach 98.8%, 0.074/h, and 0.988, respectively. Among all patients, four achieve very good results in both sensitivity and AUC, and subject chb01 reaches the highest performance with 100% sensitivity, 0 FPR, and 1 AUC. Low standard deviations are also achieved on this dataset. As shown in Figs. 3 and 4, the ROC curves and their AUC for each case show that our model has a good capacity to separate the preictal and interictal samples of the two datasets.

We compare our model with two other state-of-the-art works; the comparison results are summarized in Tables III and IV. Table III shows the comparison of performance on the Kaggle dataset. Truong et al. made use of features extracted with the short-time Fourier transform and then trained a CNN model for classification; except for subject Dog 2, they achieved clearly worse performance on the remaining subjects[36]. Eberlein et al. also utilized a CNN to process raw EEG signals, but two of the four dogs only reach around 0.8 AUC and the overall AUC is less than 0.9, which is much lower than the AUC achieved by our model[12].

Table IV shows the comparison with Truong's and Khan's works on the CHB-MIT dataset. Khan et al. combined the wavelet transform and a CNN to identify the preictal interval[23]. Over the four patients listed by the two works, the sensitivity is less than 90%. All five patients from their work reach over 0.85 AUC, with chb05 achieving the highest AUC of 0.988, which is still worse than this work.

Fig. 3: ROC curves, including AUC, of all dog subjects from the Kaggle dataset. The dashed line represents the performance of a random predictor. For each subject, only the best performance of the 10 runs is shown.
Fig. 4: ROC curves, including AUC, of all patients from the CHB-MIT dataset. The dashed line represents the performance of a random predictor. For each subject, only the best performance of the 10 runs is shown.
TABLE III: Comparison of seizure prediction performance on the Kaggle dataset.

           Truong et al. [36]    Eberlein et al. [12]   This work
           (STFT + CNN)          (Raw + CNN)            (Raw + CNN)
Subject    SEN (%)   AUC         SEN (%)   AUC          SEN (%)   AUC
Dog 1      50                              0.798        90.6      0.983
Dog 2      100                             0.812        96.8      0.996
Dog 3      58.3                            0.844        93.1      0.990
Dog 4      78.6                            0.919        88.7      0.941
Dog 5      80                                           98.6      0.996

TABLE IV: Comparison of seizure prediction performance on the CHB-MIT dataset.

           Truong et al. [36]    Khan et al. [23]       This work
           (STFT + CNN)          (Wavelet + CNN)        (Raw + CNN)
Subject    SEN (%)   AUC         SEN (%)   AUC          SEN (%)   AUC
chb01      85.7                            0.943        100       1.000
chb05      80.0                            0.988        99.7      0.993
chb08                                      0.921        99.9      0.998
chb10      33.3                            0.855        97.9      0.985
chb14      80.0                                         98.9      0.983
chb22                                      0.877        99.5      0.992

STFT: Short-Time Fourier Transform; CNN: Convolutional Neural Network;
SEN: Sensitivity; AUC: Area Under Curve.

V Conclusion

We have described an end-to-end CNN architecture for seizure prediction in this paper. Instead of using the more common frequency-domain features, raw EEG signals are used as the input of the CNN model. Taking into account the unique characteristics of the EEG signal, the proposed architecture utilizes 1D convolution and max-pooling kernels in the early-stage operations to make use of the EEG signal redundancy along the time axis and preserve information along the channel axis. Experimental results show that the proposed architecture achieves high sensitivity, a high AUC score, and a low FPR on widely adopted benchmark datasets. The comparison results indicate that the model outperforms state-of-the-art works.

In addition, the use of raw signals reduces the complexity of data processing, which is expected to reduce the execution time, lower the power consumption, and shrink the silicon area in the projected hardware-oriented implementation.

Acknowledgements

The authors would like to acknowledge start-up funds from Westlake University to the Cutting-Edge Net of Biomedical Research and INnovation (CenBRAIN) to support this project.

References

  • [1] A. Aarabi, R. Fazel-Rezai, and Y. Aghakhani (2009) EEG seizure prediction: measures and challenges. In 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 1864–1867. Cited by: §I.
  • [2] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al. (2016) Tensorflow: a system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265–283. Cited by: §III.
  • [3] M. T. Avcu, Z. Zhang, and D. W. S. Chan (2019) Seizure detection using least eeg channels by deep convolutional neural network. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1120–1124. Cited by: §I.
  • [4] R. Barandela, R. M. Valdovinos, J. S. Sánchez, and F. J. Ferri (2004) The imbalanced training sample problem: under or over sampling?. In Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR), pp. 806–814. Cited by: §II.
  • [5] E. Bou Assi, D. K. Nguyen, S. Rihana, and M. Sawan (2017) A functional-genetic scheme for seizure forecasting in canine epilepsy. IEEE Transactions on Biomedical Engineering 65 (6), pp. 1339–1348. Cited by: §I.
  • [6] E. Bou Assi, D. K. Nguyen, S. Rihana, and M. Sawan (2017) Towards accurate prediction of epileptic seizures: a review. Biomedical Signal Processing and Control 34, pp. 144–157. Cited by: §I.
  • [7] E. Bou Assi, D. K. Nguyen, S. Rihana, and M. Sawan (2017) Refractory epilepsy: localization, detection, and prediction. In 2017 IEEE 12th International Conference on ASIC (ASICON), pp. 512–515. Cited by: §I.
  • [8] E. Bou Assi, M. Sawan, D. Nguyen, and S. Rihana (2015) A hybrid mrmr-genetic based selection method for the prediction of epileptic seizures. In 2015 IEEE Biomedical Circuits and Systems Conference (BioCAS), pp. 1–4. Cited by: §I.
  • [9] B. H. Brinkmann, J. Wagenaar, D. Abbot, P. Adkins, S. C. Bosshard, M. Chen, Q. M. Tieng, J. He, F. Muñoz-Almaraz, P. Botella-Rocamora, et al. (2016) Crowdsourcing reproducible seizure forecasting in human and canine epilepsy. Brain 139 (6), pp. 1713–1722. Cited by: 1st item.
  • [10] M. P. Canevini, G. De Sarro, C. A. Galimberti, G. Gatti, L. Licchetta, A. Malerba, G. Muscas, A. La Neve, P. Striano, E. Perucca, et al. (2010) Relationship between adverse effects of antiepileptic drugs, number of coprescribed drugs, and drug load in a large cohort of consecutive patients with drug-refractory epilepsy. Epilepsia 51 (5), pp. 797–804. Cited by: §I.
  • [11] C. Di Bonaventura, J. Fattouch, G. Fabbrini, M. Manfredi, M. Prencipe, and T. A. Giallonardo (2007) Switching from branded to generic antiepileptic drugs as a confounding factor and unpredictable diagnostic pitfall in epilepsy management. Epileptic Disorders 9 (4), pp. 465–466. Cited by: §I.
  • [12] M. Eberlein, R. Hildebrand, R. Tetzlaff, N. Hoffmann, L. Kuhlmann, B. Brinkmann, and J. Müller (2018) Convolutional neural networks for epileptic seizure prediction. In 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pp. 2577–2582. Cited by: §I, §IV.
  • [13] W. Freeman and R. Q. Quiroga (2012) Imaging brain function with eeg: advanced temporal and spatial analysis of electroencephalographic signals. Springer Science & Business Media. Cited by: §I.
  • [14] L. Gagliano, E. Bou Assi, D. K. Nguyen, S. Rihana, and M. Sawan (2018) Bilateral preictal signature of phase-amplitude coupling in canine epilepsy. Epilepsy research 139, pp. 123–128. Cited by: §I.
  • [15] A. L. Goldberger, L. A. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C. Peng, and H. E. Stanley (2000) PhysioBank, physiotoolkit, and physionet: components of a new research resource for complex physiologic signals. Circulation 101 (23), pp. e215–e220. Cited by: 2nd item.
  • [16] A. Golestani and R. Gras (2014) Can we predict the unpredictable?. Scientific reports 4, pp. 6834. Cited by: §I.
  • [17] T. Granata, N. Marchi, E. Carlton, C. Ghosh, J. Gonzalez-Martinez, A. V. Alexopoulos, and D. Janigro (2009) Management of the patient with medically refractory epilepsy. Expert review of neurotherapeutics 9 (12), pp. 1791–1802. Cited by: §I.
  • [18] J. A. Hanley and B. J. McNeil (1982) The meaning and use of the area under a receiver operating characteristic (roc) curve.. Radiology 143 (1), pp. 29–36. Cited by: §IV.
  • [19] R. Hussein, M. O. Ahmed, R. Ward, Z. J. Wang, L. Kuhlmann, and Y. Guo (2019) Human intracranial eeg quantitative analysis and automatic feature learning for epileptic seizure prediction. arXiv preprint arXiv:1904.03603. Cited by: §I, §II.
  • [20] N. Japkowicz and S. Stephen (2002) The class imbalance problem: a systematic study. Intelligent data analysis 6 (5), pp. 429–449. Cited by: §II.
  • [21] K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun (2009) What is the best multi-stage architecture for object recognition?. In 2009 IEEE 12th International Conference on Computer Vision, pp. 2146–2153. Cited by: §I.
  • [22] S. Kalitzin and d. S. F. Lopes (2014) Predicting the unpredictable: the challenge or mirage of seizure prediction?. Clinical neurophysiology: official journal of the International Federation of Clinical Neurophysiology 125 (10), pp. 1930. Cited by: §I.
  • [23] H. Khan, L. Marcuse, M. Fields, K. Swann, and B. Yener (2017) Focal onset seizure prediction using convolutional networks. IEEE Transactions on Biomedical Engineering 65 (9), pp. 2109–2118. Cited by: §I, §IV.
  • [24] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §III.
  • [25] P. Kwan, A. Arzimanoglou, A. T. Berg, M. J. Brodie, W. Allen Hauser, G. Mathern, S. L. Moshé, E. Perucca, S. Wiebe, and J. French (2010) Definition of drug resistant epilepsy: consensus proposal by the ad hoc task force of the ilae commission on therapeutic strategies. Epilepsia 51 (6), pp. 1069–1077. Cited by: §I.
  • [26] Y. LeCun, Y. Bengio, et al. (1995) Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks 3361 (10), pp. 1995. Cited by: §I.
  • [27] A. Mognon, J. Jovicich, L. Bruzzone, and M. Buiatti (2011) ADJUST: an automatic eeg artifact detector based on the joint use of spatial and temporal features. Psychophysiology 48 (2), pp. 229–240. Cited by: §I.
  • [28] V. Nagaraj, S. Lee, E. Krook-Magnuson, I. Soltesz, P. Benquet, P. Irazoqui, and T. Netoff (2015) The future of seizure prediction and intervention: closing the loop. Journal of clinical neurophysiology: official publication of the American Electroencephalographic Society 32 (3), pp. 194. Cited by: §I.
  • [29] V. Nair and G. E. Hinton (2010) Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807–814. Cited by: §III.
  • [30] Z. Pan, A. G. Rust, and H. Bolouri (2000) Image redundancy reduction for neural network classification using discrete cosine transforms. In Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium, Vol. 3, pp. 149–154. Cited by: §III.
  • [31] M. Ranzato, Y. Boureau, and Y. L. Cun (2008) Sparse feature learning for deep belief networks. In Advances in Neural Information Processing Systems, pp. 1185–1192. Cited by: §I.
  • [32] R. Y. Rubinstein and D. P. Kroese (2013) The cross-entropy method: a unified approach to combinatorial optimization, monte-carlo simulation and machine learning. Springer Science & Business Media. Cited by: §III.
  • [33] A. H. Shoeb (2009) Application of machine learning to epileptic seizure onset detection and treatment. Ph.D. Thesis, Massachusetts Institute of Technology. Cited by: 2nd item.
  • [34] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov (2014) Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research 15 (1), pp. 1929–1958. Cited by: §III.
  • [35] J. F. Tellez-Zenteno, S. B. Patten, N. Jetté, J. Williams, and S. Wiebe (2007) Psychiatric comorbidity in epilepsy: a population-based analysis. Epilepsia 48 (12), pp. 2336–2344. Cited by: §I.
  • [36] N. D. Truong, A. D. Nguyen, L. Kuhlmann, M. R. Bonyadi, J. Yang, S. Ippolito, and O. Kavehei (2018) Convolutional neural networks for seizure prediction using intracranial and scalp electroencephalogram. Neural Networks 105, pp. 104–111. Cited by: §I, §IV.
  • [37] K. M. Tsiouris, V. C. Pezoulas, M. Zervakis, S. Konitsiotis, D. D. Koutsouris, and D. I. Fotiadis (2018) A long short-term memory deep learning network for the prediction of epileptic seizures using eeg signals. Computers in Biology and Medicine 99, pp. 24–37. Cited by: §I.
  • [38] Z. Zhang and K. K. Parhi (2015) Seizure prediction using polynomial svm classification. In 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 5748–5751. Cited by: §I.
  • [39] M. H. Zweig and G. Campbell (1993) Receiver-operating characteristic (roc) plots: a fundamental evaluation tool in clinical medicine.. Clinical chemistry 39 (4), pp. 561–577. Cited by: §IV.