Epilepsy has been studied for decades, yet epilepsy surgery outcomes have not improved in over 20 years. One third of the 60 million people with epilepsy have seizures that cannot be controlled with medication. Existing neurological data analysis relies largely on manual inspection, and most automatic approaches still depend on cleverly constructed features such as spectral power, wavelet energy, and spike rate [3, 18, 6, 8, 20, 4, 23, 26, 1]. These methods focus on electroencephalogram (EEG) or electrocorticographic (ECoG) data with coarse spatial and temporal resolution, and predict seizure onset from recordings spanning several seconds to minutes. The recent development of high-resolution micro-electrocorticographic (µECoG) arrays unveils rich spatial and temporal patterns. It is tempting to predict neural activity in the near future (milliseconds ahead) to provide guidance for responsive stimulation. Since neural activities are highly non-linear, such prediction is quite challenging. To the best of our knowledge, we are the first to tackle this problem.
Recent advances in deep learning provide useful insights into time series prediction. Models for time series prediction and sequence generation can be divided into two major categories: 1) models that rely on recurrent neural networks (RNNs) and their variants [7, 15, 29, 24], and 2) models that rely on adversarial training [9, 19, 28].
In a deep learning setup, a model is trained end-to-end with an appropriate loss function. Supervised learning has been extremely successful in learning good representations [16, 34] that can be transferred to other datasets. However, videos have much longer duration, and detailed annotation down to short time horizons is difficult if not impossible. Researchers have therefore explored characterizing the spatio-temporal information in videos in an unsupervised manner. Like the pioneering unsupervised LSTM encoder-decoder framework proposed in , most RNN-based approaches have an encoder that learns a compact feature representation from an input sequence, a decoder that reconstructs the input sequence, and a predictor that predicts the future from the feature. Other RNN-based variants modify the computation units. In , the authors propose a convolutional LSTM module to better model spatial relationships. In , the constructed multiplicative units eliminate the distinction between memory and hidden states in the LSTM. Models extended from adversarial training  have a generative model and a discriminative model, trained in a combative manner, with the discriminative model predicting whether an instance (frame) is generated by the generative model or comes from the dataset. In , to deal with the blurry predictions that result from minimizing mean square error, the authors propose a different loss function and demonstrate that adversarial training can be successfully employed for next-frame prediction.
The neural activities harvested with ECoG (Fig. 1) share some common traits with natural videos, yet unlike natural videos, the patterns of neural activity in a local brain region are restricted by neuron connectivity. Such restrictions lead to a finite number of typical patterns, such as plane waves and spiral waves, as observed in . To exploit the multi-cluster nature of such neural activities, we use multiple choice learning (MCL) [17, 11] to predict neural activities and let each model specialize in certain patterns without explicitly clustering the data. Unlike most ensemble models, which enhance performance by averaging independently trained models with random initializations, our ensemble is trained using an ensemble-awareness loss function , which jointly solves the assignment problem and the minimization problem. During training, for each given sample sequence, we calculate the reconstruction error and prediction error under each model, and update only the model with the lowest combined reconstruction and prediction error. This updating rule encourages diversity between the trained models and intrinsically performs clustering while minimizing the ensemble loss. We demonstrate that an ensemble of LSTMs can be trained simultaneously with such a loss function in an end-to-end manner and achieves significantly higher accuracy in neural activity prediction compared to a single LSTM with a similar total number of parameters. In
, the authors showed an image classification accuracy gain from MCL training of a set of CNN models; however, at test time it is difficult to select exactly which CNN classification model to use. In the video prediction setup, with the decoder reconstructing the input sequence, we can determine the reconstruction error under each model and choose the model that yields the least reconstruction error to perform prediction. We show that this model selection criterion achieves prediction accuracy comparable to "oracle" selection. We also develop a separate classifier that decides which model to use for prediction based on the encoded features from all models, and we find that this classifier-based selection further improves the prediction accuracy.
In this section, we describe the baseline LSTM model we use and the formulation of multiple choice learning.
For general-purpose sequence modeling, the LSTM, as a special RNN structure, has proven capable of modeling long-range dependencies in many applications [10, 30, 25]. The crucial feature of the LSTM compared to the classical RNN is the memory cell, denoted $c_t$, which serves as a conveyor belt connecting the time series and acts as an accumulator. The input gate controls the extent to which the current input and past hidden states affect the current cell state. Simultaneously, a sigmoid layer called the forget gate decides what information is thrown away or dampened from the current cell state. Finally, the output of the LSTM is a filtered version of the current cell state, controlled by the output gate and pushed through a tanh function so that the output has values between -1 and 1. The basic LSTM cell structure is summarized as follows, where $\odot$ denotes the Hadamard product:
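A standard formulation of the LSTM updates described above is (notation follows common usage; $\sigma$ denotes the logistic sigmoid):

```latex
\begin{aligned}
i_t &= \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i) && \text{(input gate)} \\
f_t &= \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f) && \text{(forget gate)} \\
o_t &= \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o) && \text{(output gate)} \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c) && \text{(cell state)} \\
h_t &= o_t \odot \tanh(c_t) && \text{(hidden state)}
\end{aligned}
```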
2.2 LSTM model for video prediction
LSTM-based recurrent neural networks have been widely applied in neural machine translation [2, 25, 5], video analysis [30, 14, 32, 31], etc. In these tasks, formulated as supervised learning problems, the goal is to match a set of observation sequences to the correct target sequences or labels. However, in many applications correspondences between videos or detailed labels are not available, so exploring the spatio-temporal structure of the raw video sequences is more appealing.
For ECoG prediction, we use  as the baseline model. The baseline model has an LSTM encoder, an LSTM decoder, and an LSTM predictor. The encoder learns a compact representation of a certain number of observed frames, the decoder reconstructs these observed frames from the encoded feature, and the predictor predicts future frames of the given sequence from the encoded feature. The entire system can be learned end-to-end from the training sequences. To enhance performance, instead of using one LSTM layer for each submodule (encoder/decoder/predictor), multiple LSTM layers are stacked to form more complex structures by adding nonlinearity.
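As a rough illustration of this composite encoder/decoder/predictor structure, the following sketch uses a plain tanh recurrent cell as a stand-in for the stacked LSTMs; all dimensions and weights are hypothetical placeholders, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_step(x, h, Wx, Wh, b):
    # One recurrent step; a tanh RNN cell stands in for the LSTM cell.
    return np.tanh(x @ Wx + h @ Wh + b)

D, H, T_in, T_out = 16, 32, 10, 10   # frame dim, hidden dim, input/output horizons

def init(input_dim, hidden):
    return (rng.normal(0, 0.1, (input_dim, hidden)),
            rng.normal(0, 0.1, (hidden, hidden)),
            np.zeros(hidden))

enc, dec, pred = init(D, H), init(D, H), init(D, H)
W_out = rng.normal(0, 0.1, (H, D))   # shared readout from hidden state to frame

def encode(frames):
    # Compress the observed frames into a single feature vector.
    h = np.zeros(H)
    for x in frames:
        h = rnn_step(x, h, *enc)
    return h

def rollout(h, params, steps):
    # Generate frames one at a time, feeding each output back in.
    outputs, x = [], np.zeros(D)
    for _ in range(steps):
        h = rnn_step(x, h, *params)
        x = h @ W_out
        outputs.append(x)
    return np.stack(outputs)

frames = rng.normal(size=(T_in, D))
feature = encode(frames)
reconstruction = rollout(feature, dec, T_in)   # decoder branch
prediction = rollout(feature, pred, T_out)     # predictor branch
```

Both branches share the encoded feature, so training the reconstruction loss shapes the same representation the predictor consumes.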
Unlike the moving MNIST dataset  used by [7, 15, 29, 24] for video sequence prediction, the ECoG dataset has been observed to form multiple clusters, each with a distinct neural activity pattern, as shown in Fig. 2. One way to exploit this multi-cluster nature of the ECoG videos is to fit a model to each cluster of sequences. This approach would require one to segment a long video into short sequences and, furthermore, classify each sequence into one of several predefined clusters. Such an approach is highly limited by the quality of the sequence segmentation and clustering. Moreover, this pipelined framework runs against the common practice in deep learning of training end-to-end. Another alternative is to add more LSTM cells. However, since the number of parameters grows as $O(n^2)$, with $n$ being the number of LSTM cells in each layer, adding more cells is not an efficient way to exploit the clustered nature of the underlying signals. In the following subsection we propose a new approach that solves the assignment and optimization problems in an end-to-end manner.
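The quadratic growth can be checked with a quick parameter count; the input dimension of 360 below is just an assumed flattened 18x20 frame, not a value stated in the paper:

```python
def lstm_params(input_dim, hidden):
    # An LSTM layer has 4 gates, each with an input-to-hidden matrix,
    # a hidden-to-hidden matrix, and a bias vector.
    return 4 * (input_dim * hidden + hidden * hidden + hidden)

small = lstm_params(360, 1000)   # hidden size used by the baseline model
big = lstm_params(360, 3000)     # hidden size of the large benchmark
```

Tripling the hidden size multiplies the parameter count by roughly 7.4 here, approaching 9x as the hidden-to-hidden term dominates.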
2.3 Multiple choice learning (MCL)
Typically, ensemble models are trained independently under different random initializations, and their predictions are averaged at test time [12, 17]. These models are commonly viewed as experts or specialists in the literature, although they are rarely trained to encourage diversity and specialization. In , the authors distilled information from ensembles of convolutional neural networks (CNNs) by pre-clustering the data: the images in the dataset are pre-clustered based on image categories, and each CNN specialist is trained only on one subset/cluster of images. At inference time, a generalist model first determines the potential subcategory the input image might belong to, and the ensemble models trained on this sub-category then determine the label of the image. Although this approach is sound for image classification, where one typically has many labelled images, such label-based pre-clustering is not feasible for an unlabelled video dataset. Instead, we adopt the framework of Multiple Choice Learning (MCL) [11, 17], where the assignment of each training sample to a model is solved jointly with finding the optimal parameters of all models.
In the video prediction setup, we have a set of models $\{f_1, \dots, f_M\}$. Let $x_1, \dots, x_T$ be the input frames at times $1$ to $T$, $\hat{x}_1, \dots, \hat{x}_T$ the frames reconstructed from the input, and $\hat{x}_{T+1}, \dots, \hat{x}_{T+K}$ the predicted frames at times $T+1$ to $T+K$. The loss for a sequence is defined in Eq. 2, where the per-frame loss is the mean square error between $x_t$ and $\hat{x}_t$. The goal of our MCL setup is to find the assignment variables and the model parameters by solving the optimization problem defined in Eq. 2.
Note that in the training stage, at each iteration, we know the reconstruction and prediction accuracy of each current ensemble model on a given training instance. Therefore, we can assign the training instance to the model with the minimal combined reconstruction and prediction error. The optimization problem in Eq. 2 can be solved with a coordinate descent algorithm  combined with stochastic gradient descent (SGD), shown below. The solution alternates between finding the assignment and optimizing the corresponding model's parameters.
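A minimal sketch of this winner-take-all coordinate descent, using simple linear predictors on synthetic two-cluster data in place of LSTMs (all sizes, learning rates, and data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data from two clusters with opposite linear dynamics;
# linear predictors y = x @ W stand in for the LSTM models.
M, D, lr = 2, 4, 0.05
true_W = [np.eye(D), -np.eye(D)]
X = rng.normal(size=(200, D))
labels = rng.integers(0, M, size=200)
Y = np.stack([x @ true_W[k] for x, k in zip(X, labels)])

models = [rng.normal(0, 0.1, (D, D)) for _ in range(M)]

for epoch in range(100):
    for x, y in zip(X, Y):
        errs = [np.mean((x @ W - y) ** 2) for W in models]        # loss under each model
        m = int(np.argmin(errs))                                  # assignment step
        models[m] -= lr * 2 * np.outer(x, x @ models[m] - y) / D  # SGD step, winner only

# Oracle ensemble loss: each sample scored by its best-fitting model.
oracle_mse = np.mean([min(np.mean((x @ W - y) ** 2) for W in models)
                      for x, y in zip(X, Y)])
```

With winner-take-all updates each model drifts toward one of the two clusters; a single model (or an averaged ensemble) would instead collapse both clusters toward zero.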
We design experiments for ECoG data prediction using multiple choice learning of an ensemble of LSTMs. We first perform graph filtering  on the ECoG dataset to fill in channels missing due to manufacturing defects or loss of contact of the membrane. The graph transform basis for the ECoG dataset is consistent across time, so the training and testing sets can share the same basis. This creates a spatially smoothed dataset and makes unsupervised LSTM prediction more accurate. We compare the results obtained with the baseline single LSTM model, a randomly initialized LSTM ensemble, and the MCL-trained LSTM ensemble. We further improve the prediction accuracy by training a separate classifier to choose which model to use as the predictor.
We analyze ECoG data from an acute in vivo feline model of epilepsy. The 18-by-20 array of high-density active electrodes has 500 µm spacing between neighboring channels. The in vivo recording has a temporal sampling rate of 277.78 Hz and lasts 53 minutes. We obtain a total of 894698 sequences, each 20 frames long (10 for reconstruction and 10 for prediction; for visual display in the paper we use 20 for reconstruction and 20 for prediction), by applying a sliding window over the original video recording of 7 induced seizures. To get disjoint subsets for training and testing, we choose one seizure period and form the testing set from all sequences of this seizure and the non-seizure period leading up to it. We form the validation set by choosing another seizure period and including all sequences from that seizure period and its preceding non-seizure period. All remaining sequences form the training set. In total we have 788627 training sequences, 64167 validation sequences, and 41904 testing sequences.
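The sliding-window sequence construction can be sketched as follows; the 100-frame recording below is a hypothetical miniature, not the actual 53-minute dataset:

```python
import numpy as np

def make_sequences(video, window=20, stride=1):
    """Slice a (time, rows, cols) recording into overlapping sequences."""
    n = (len(video) - window) // stride + 1
    return np.stack([video[i * stride : i * stride + window] for i in range(n)])

video = np.zeros((100, 18, 20))               # toy stand-in on the 18-by-20 grid
seqs = make_sequences(video)                  # overlapping 20-frame sequences
inputs, targets = seqs[:, :10], seqs[:, 10:]  # 10 frames observed, 10 to predict
```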
3.2 Training the LSTM ensemble using MCL
We train an LSTM ensemble with 8 models. For parameter initialization, we first tried random initialization for all 8 models, but we found that after the gradient descent step for the first mini-batch, one model was updated much more effectively than the rest, and this model then had the lowest error on the majority of the remaining mini-batches, so only one model was updated during most of training. To overcome this problem, we randomly divide the training set into 8 non-overlapping subsets and initialize each model on one subset: we train each model on its subset by minimizing the mean square error loss using back-propagation through time and SGD with a learning rate of  and a momentum of 0.9. Dropout is applied only on non-recurrent connections, as suggested in . We train for only one epoch on each subset to ensure sufficient diversity between the models. We then train all 8 models jointly using the MCL method described in Section 2.3 and perform early stopping based on the error on the validation set.
Each LSTM model has the same structure as , with two LSTM layers of 1000 nodes each. For MCL training, we use 4 Nvidia K80 GPUs in a cluster. Since the loss function couples all the models, the ensemble cannot be trained sequentially; to let our experiments scale, we use the Message Passing Interface (MPI) standard to enable high-speed GPU communication, with each GPU loading two models. As a comparison to MCL training, we also train three benchmark models. The first benchmark model consists of two LSTM layers, each with 1000 nodes. The second benchmark model has 3000 nodes per layer, giving it roughly the same number of parameters as the ensemble of 8 LSTM models. The third benchmark is an ensemble of 8 randomly initialized 1000-node LSTMs, whose predictions are averaged to produce the final predicted signal.
Sample prediction sequences from the testing dataset are shown in Fig. 3. Models 8, 7, and 4 are the models with the lowest reconstruction errors on those sequences, respectively, and the best model in terms of prediction accuracy also has the lowest reconstruction error in the cases shown here. This illustrates the diversity of the models trained with MCL. The prediction accuracy over time is compared in Fig. 4. The PSNR is defined as $\mathrm{PSNR} = 10 \log_{10}\left(\mathrm{MAX}^2 / \mathrm{MSE}\right)$,
where MSE is the mean square error of the predicted frames against the ground truth frames and MAX is the maximum intensity of the dataset. The oracle selection shown in Fig. 4 uses the model with the lowest prediction error. Since ground-truth future frames are not available during inference, such a selection mechanism is not practical in reality. The reconstruction-error based selection chooses the model with the lowest reconstruction error. The short-term prediction accuracies of oracle selection and reconstruction-error based selection are roughly the same, but the accuracy of the latter drops faster as the prediction horizon increases. Even so, reconstruction-error based selection still beats the closest benchmark, the average prediction of the randomly initialized ensemble, by a large margin.
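The PSNR computation is straightforward; a minimal sketch:

```python
import numpy as np

def psnr(pred, target, max_intensity):
    # PSNR = 10 * log10(MAX^2 / MSE); higher values mean better predictions.
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(max_intensity ** 2 / mse)
```

For example, a constant error of 0.5 on a unit-intensity signal gives a PSNR of about 6.02 dB.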
From Table 1, it is clear that the 3000-node LSTM model is worse than the other benchmarks. Because this model has no structure to exploit the multi-cluster nature of the neural activities, simply adding more nodes makes the number of parameters grow quadratically. The model is then less likely to converge to a good local minimum, as it is prone to overfitting the training set.
3.3 Model selection as classification
To further improve on reconstruction-based selection, we train a multilayer perceptron (MLP) classifier to select which model to use for prediction. The classifier takes as input the concatenated LSTM hidden features at the last input frame from all models and outputs the probability of each LSTM model being the best predictor. The input to the MLP classifier is 8000-dimensional (1000 feature dimensions per model). We use batch normalization as regularization and three fully connected layers. By using the model assigned the highest probability by the classifier, we obtain a slight improvement over reconstruction-error based selection, as shown in Table 1.
|Model|PSNR (dB)|
|MCL with oracle selection|32.2626|
|MCL with classifier selection|31.2767|
|MCL with reconstruction-error based selection|31.0636|
|Average of predictions by separately initialized ensemble|29.0722|
|Single model with 1000 nodes|28.3495|
|Single model with 3000 nodes|25.8128|
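The selection rules compared above can be sketched as follows; the error matrices and the MLP weights below are random stand-ins, so the code only illustrates the mechanics of each rule:

```python
import numpy as np

rng = np.random.default_rng(2)

n_seq, n_models, feat_dim = 6, 8, 16   # toy sizes; the paper uses 8 models x 1000 features

# Oracle selection needs ground-truth future frames; reconstruction-error
# based selection uses only quantities available at inference time.
pred_err = rng.random((n_seq, n_models))
recon_err = rng.random((n_seq, n_models))
oracle_choice = pred_err.argmin(axis=1)
recon_choice = recon_err.argmin(axis=1)

# Classifier-based selection: a small MLP maps the concatenated hidden
# features from all models to a score per model.
features = rng.normal(size=(n_seq, feat_dim))
W1, b1 = rng.normal(size=(feat_dim, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, n_models)), np.zeros(n_models)

hidden = np.maximum(features @ W1 + b1, 0.0)   # ReLU layer
logits = hidden @ W2 + b2
mlp_choice = logits.argmax(axis=1)
```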
3.4 Relationship between trained models with neural activity patterns
In this section, we analyze the relationship between the different models in the learned ensemble and the different neural activity patterns during seizure and non-seizure periods. For each testing sequence, we assign a model based on oracle selection (i.e., the model with the least prediction error). Fig. 5 shows the selection probability of the different models. The difference between the seizure and non-seizure stages shows that essentially different neural activities occur in these stages. Most neural activity patterns during non-seizure periods can be captured by model 3, whereas there are several different clusters of activity patterns during seizure periods, mainly captured by models 3, 4, 6, and 8. We further investigate the types of neural activity captured by these models. Model 3 is good at predicting silent neural activity, namely when most neurons are at resting potential (hence this model is used in both non-seizure and seizure periods). Model 4 is good at predicting neural activity restricted to a small region. Model 6 is good at predicting when most neurons are entering the refractory period after an action potential. Model 8 is good at predicting moving neural activity patterns; such patterns are more common in the seizure stage, which explains why model 8 is selected more often during the seizure stage. These patterns are shown in Fig. 6.
Table 2 shows the transition probability of models between consecutive time windows for the non-seizure and seizure stages of the test set. The transition probability is defined as:
where the two sequence indices correspond to adjacent sliding windows. The diagonal elements of the transition matrix show the likelihood that the same model is selected for predicting the next sequence. The high self-transition probabilities show that each model in the ensemble has quite stable prediction power within a short period. As neural activities get more complex from non-seizure to seizure, transitions between models become more frequent, as demonstrated by the reduction of the self-transition probabilities from the non-seizure to the seizure stage. The high transition probability between models 4 and 8 in both stages indicates that global wave propagation is highly likely to be followed by local active potentials (see the sequences of model 4 in Fig. 6), and vice versa. The high transition probability from model 6 to model 3 reflects the transition of neurons from the refractory period into the resting state.
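The transition matrix can be estimated empirically from the per-window model choices; a sketch (the choice sequence below is hypothetical):

```python
import numpy as np

def transition_matrix(choices, n_models):
    """Empirical P(model b at window t+1 | model a at window t)."""
    counts = np.zeros((n_models, n_models))
    for a, b in zip(choices[:-1], choices[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # Normalize each row; rows with no outgoing transitions stay zero.
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

P = transition_matrix([0, 0, 1, 1, 0, 0], 2)
# High diagonal entries indicate stable model choices over consecutive windows.
```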
In this work, we successfully apply a deep learning approach to the challenging problem of predicting neural activities observed with high-resolution ECoG. We formulate the problem as a video prediction problem. Observing that there are multiple clusters of neural activities, we propose an extension of MCL from CNN to LSTM models. MCL solves the assignment problem jointly with the loss minimization problem and enables a significant improvement in video prediction accuracy compared to averaging the predictions of separately trained LSTM models. Some of the models are indeed found to capture different motion patterns in the neural dataset. We find that using the reconstruction error to select the model for prediction yields predictions close to those of an oracle-selected model, and that a trained classifier for model selection further improves prediction accuracy slightly. Finally, we analyze the association between the selected models and the neural activities of the underlying video sequences; the analysis reveals differences in the distribution of selected models and in the model transition probability matrix between seizure and non-seizure stages.
-  U. R. Acharya, S. V. Sree, and J. S. Suri. Automatic detection of epileptic eeg signals using higher order cumulant features. International journal of neural systems, 21(05):403–414, 2011.
-  D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
-  M. Bandarabadi, C. A. Teixeira, J. Rasekhi, and A. Dourado. Epileptic seizure prediction using relative spectral power features. Clinical Neurophysiology, 126(2):237–248, 2015.
-  K. Chua, V. Chandran, U. Acharya, and C. Lim. Automatic identification of epileptic electroencephalography signals using higher-order spectra. Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine, 223(4):485–495, 2009.
-  J. Chung, K. Cho, and Y. Bengio. A character-level decoder without explicit segmentation for neural machine translation. arXiv preprint arXiv:1603.06147, 2016.
-  A. Eftekhar, W. Juffali, J. El-Imad, T. G. Constandinou, and C. Toumazou. Ngram-derived pattern recognition for the detection and prediction of epileptic seizures. PLoS ONE, 9(6):e96235, 2014.
-  C. Finn, I. Goodfellow, and S. Levine. Unsupervised learning for physical interaction through video prediction. arXiv preprint arXiv:1605.07157, 2016.
-  K. Gadhoumi, J.-M. Lina, and J. Gotman. Discriminating preictal and interictal states in patients with temporal lobe epilepsy using wavelet analysis of intracerebral eeg. Clinical neurophysiology, 123(10):1906–1916, 2012.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
-  A. Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
-  A. Guzman-Rivera, D. Batra, and P. Kohli. Multiple choice learning: Learning to produce multiple structured outputs. In Advances in Neural Information Processing Systems, pages 1799–1807, 2012.
-  G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
-  S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
-  J. Johnson, A. Karpathy, and L. Fei-Fei. Densecap: Fully convolutional localization networks for dense captioning. arXiv preprint arXiv:1511.07571, 2015.
-  N. Kalchbrenner, A. v. d. Oord, K. Simonyan, I. Danihelka, O. Vinyals, A. Graves, and K. Kavukcuoglu. Video pixel networks. arXiv preprint arXiv:1610.00527, 2016.
-  A. Karpathy, J. Johnson, and L. Fei-Fei. Visualizing and understanding recurrent networks. arXiv preprint arXiv:1506.02078, 2015.
-  S. Lee, S. Purushwalkam, M. Cogswell, D. Crandall, and D. Batra. Why m heads are better than one: Training a diverse ensemble of deep networks. arXiv preprint arXiv:1511.06314, 2015.
-  S. Li, W. Zhou, Q. Yuan, and Y. Liu. Seizure prediction using spike rate of intracranial eeg. IEEE transactions on neural systems and rehabilitation engineering, 21(6):880–886, 2013.
-  M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error. arXiv preprint arXiv:1511.05440, 2015.
-  T. Netoff, Y. Park, and K. Parhi. Seizure prediction using cost-sensitive support vector machine. In 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pages 3322–3325. IEEE, 2009.
-  D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Processing Magazine, 30(3):83–98, 2013.
-  Y. Song, J. Viventi, and Y. Wang. Seizure detection and prediction through clustering and temporal analysis of micro electrocorticographic data. In 2015 IEEE Signal Processing in Medicine and Biology Symposium (SPMB), pages 1–7. IEEE, 2015.
-  T. L. Sorensen, U. L. Olsen, I. Conradsen, J. Duun-Henriksen, T. W. Kjaer, C. E. Thomsen, and H. B. D. Sørensen. Automatic epileptic seizure onset detection using matching pursuit. 2010.
-  N. Srivastava, E. Mansimov, and R. Salakhutdinov. Unsupervised learning of video representations using lstms. CoRR, abs/1502.04681, 2, 2015.
-  I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112, 2014.
-  A. Temko, E. Thomas, W. Marnane, G. Lightbody, and G. Boylan. Eeg-based neonatal seizure detection with support vector machines. Clinical Neurophysiology, 122(3):464–473, 2011.
-  J. Viventi, D.-H. Kim, L. Vigeland, E. S. Frechette, J. A. Blanco, Y.-S. Kim, A. E. Avrin, V. R. Tiruvadi, S.-W. Hwang, A. C. Vanleer, et al. Flexible, foldable, actively multiplexed, high-density electrode array for mapping brain activity in vivo. Nature neuroscience, 14(12):1599–1605, 2011.
-  C. Vondrick, H. Pirsiavash, and A. Torralba. Generating videos with scene dynamics. arXiv preprint arXiv:1609.02612, 2016.
-  S. Xingjian, Z. Chen, H. Wang, D.-Y. Yeung, W.-k. Wong, and W.-c. Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Advances in Neural Information Processing Systems, pages 802–810, 2015.
-  K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. S. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044, 2(3):5, 2015.
-  L. Yao, A. Torabi, K. Cho, N. Ballas, C. Pal, H. Larochelle, and A. Courville. Describing videos by exploiting temporal structure. In Proceedings of the IEEE International Conference on Computer Vision, pages 4507–4515, 2015.
-  S. Yeung, O. Russakovsky, G. Mori, and L. Fei-Fei. End-to-end learning of action detection from frame glimpses in videos. arXiv preprint arXiv:1511.06984, 2015.
-  W. Zaremba, I. Sutskever, and O. Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
-  M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pages 818–833. Springer, 2014.