Our daily lives constantly produce time series data, such as stock prices, weather readings, biological observations, and health monitoring data. In the era of big data, there is an increasing need to extract knowledge from time series data, and a main task is time series classification (TSC), the problem of predicting class labels for time series. It has been a long-standing problem with a large scope of real-world applications. For example, there has been active research on clinical prediction, the task of predicting whether a patient might be in danger of certain deterioration based on the patient's clinical time series such as ECG signals. A real-time deterioration warning system powered by TSC has achieved unprecedented performance compared with traditional clinical approaches and has been deployed in major hospitals.
Most existing TSC approaches fall into two categories: distance-based methods and feature-based methods.
For distance-based methods, the key part is to measure the similarity between any two given time series. Based on the similarity metrics, the classification can be done using algorithms such as k-nearest neighbors (kNN) or support vector machines (SVM) with similarity-based kernels. The most notable similarity measure is dynamic time warping (DTW), which aligns two time series through dynamic warping to get the best fit; the alignment can be computed efficiently through dynamic programming.
For feature-based methods, each time series is characterized by a feature vector, and any feature-based classifier (e.g. SVM or logistic regression) can be applied to generate the classification results. There have been many hand-crafted feature extraction schemes across different applications. For example, in a clinical prediction application, each time series is divided into several consecutive windows and features are extracted from each window; the final feature vector is a concatenation of the feature vectors from all windows. The features include simple statistics such as mean and variance, as well as complex features from detrended fluctuation analysis and spectral analysis. Another approach extracts features based on shapelets, which can be regarded as signature subsequences of the time series. Typically, potential candidate shapelets are generated in advance and can be used in different ways. For example, they can be considered as a dictionary in which each shapelet is regarded as a word, and the time series is then described by a bag-of-words model. A more recent study constructs the feature vector such that the value of each feature is the minimum distance between anywhere in the time series and the corresponding shapelet. A drawback of the shapelet method is that it requires an extensive search for the discriminative shapelets in a large space. To bypass the need to try out a large number of shapelet candidates, Grabocka et al. propose to jointly learn a number of shapelets of the same size along with the classifier. However, their method only offers linear separation ability.
A key reason for the success of CNNs is their ability to automatically learn complex feature representations through their convolutional layers. With the great recent success of deep learning and the abundance of handcrafted features in TSC, it is natural to ask: is it possible to automatically learn the feature representation from time series? However, there have not been many research efforts in the area of time series to embrace deep learning approaches. In this paper, we advocate a novel neural network architecture, the Multi-scale Convolutional Neural Network (MCNN), a convolutional neural network specifically designed for classifying time series.
A distinctive feature of MCNN is that its first layer contains multiple branches that perform various transformations of the time series, including those in the frequency and time domains, extracting features of different types and time scales. Subsequently, convolutional layers apply dot products between the transformed waves and 1-D learnable filters, which is a general way to automatically recognize various types of features from the input. Since a single convolutional layer can detect local patterns similar to shapelets, stacking multiple convolutional layers can construct more complex patterns. As a result, MCNN is a powerful general-purpose framework for TSC. Unlike traditional TSC methods, MCNN is an end-to-end model that does not require any handcrafted features. We conduct comprehensive experiments and compare with many existing TSC models. Strong empirical results show that MCNN elevates the state of the art of TSC: it gives superior overall performance, surpassing most existing models by a large margin, especially when enough training data is present.
2 Multi-Scale Convolutional Neural Network (MCNN) for TSC
In this section, we formally define the aforementioned time series classification (TSC) problem. Then we describe our MCNN framework for solving TSC problems.
2.1 Notations and Problem Definition
A time series is a sequence of real-valued data points with timestamps. In this paper, we focus on time series with identical interval lengths. We denote a time series as $T = \{t_1, t_2, \ldots, t_n\}$, where $t_i$ is the value at time stamp $i$ and there are $n$ timestamps for each time series.
We denote a labelled time series dataset as $D = \{(T_i, y_i)\}_{i=1}^{N}$, which contains $N$ time series and their associated labels. For each $i$, $T_i$ represents the time series and $y_i$ is its label. For ease of presentation, in this paper we consider classification problems where $y_i$ is a categorical value in $\{1, 2, \ldots, C\}$, where $C$ is the number of labels. However, our framework can be easily extended to real-valued regression tasks. The TSC problem is to build a predictive model to predict a class label $y$ given an input time series $T$. Unlike some previous works, we do not require all training and testing time series to have the same number of timestamps in our framework.
2.2 MCNN framework
Time series classification is a long-standing problem that has been studied for decades. However, it remains a very challenging problem despite great advancement in data mining and machine learning. There are some key factors contributing to its difficulty. First, different time series may require feature representations at different time scales. For example, it has been found that certain long-range patterns (over a few hours, involving hundreds of time stamps) in body temperature time series have predictive value in forecasting sepsis. Existing TSC features can rarely adapt to the right scales. Second, in real-world time series data, discriminative patterns are often distorted by high-frequency perturbations and random noise. Automatic smoothing and de-noising procedures are needed to make the overall trend of the time series clearer.
To address these problems for TSC, we propose a multi-scale convolutional neural network (MCNN) framework in which the input is the time series to be predicted and the output is its label. The overall architecture of MCNN is depicted in Figure 1.
The MCNN framework has three sequential stages: transformation, local convolution, and full convolution.
1) The transformation stage applies various transformations on the input time series. We currently include identity mapping, down-sampling transformations in the time domain, and spectral transformations in the frequency domain. Each part is called a branch, as it is a branch input to the convolutional neural network.
2) In the local convolution stage, we use several convolutional layers to extract the features for each branch. In this stage, the convolutions for different branches are independent of each other. All the outputs pass through a max pooling procedure with multiple sizes.
3) In the full convolution stage, we concatenate all extracted features and apply several more convolutional layers (each followed by max pooling), fully connected layers, and a softmax layer to generate the final output. This is an entirely end-to-end system, and all parameters are trained jointly through back propagation.
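To make the three stages concrete, here is a minimal, untrained forward-pass sketch in plain NumPy. The branch choices (identity, down-sampling by 2, a moving average of window 3), filter sizes, and pooling factor are illustrative assumptions, not the tuned settings used in the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """Sliding dot product ('valid' cross-correlation), as used in CNN layers."""
    return np.correlate(x, w, mode='valid')

def max_pool_to(x, p):
    """Max pooling that reduces x to p outputs (pool size = stride = ceil(len/p))."""
    size = int(np.ceil(len(x) / p))
    return np.array([x[i:i + size].max() for i in range(0, len(x), size)])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mcnn_forward(ts, n_classes=2):
    """Untrained forward pass sketching MCNN's three sequential stages."""
    # 1) transformation stage: identity, multi-scale, and multi-frequency branches
    branches = [ts,
                ts[::2],                                   # down-sample, rate k=2
                np.convolve(ts, np.ones(3) / 3, mode='valid')]  # moving average
    # 2) local convolution stage: independent conv + max pooling per branch
    local = [max_pool_to(conv1d(b, rng.standard_normal(3)), 4) for b in branches]
    # 3) full convolution stage: concatenate, one more conv + pooling,
    #    then a fully connected layer and a softmax output
    merged = np.concatenate(local)
    feat = max_pool_to(conv1d(merged, rng.standard_normal(3)), 4)
    logits = rng.standard_normal((n_classes, len(feat))) @ feat
    return softmax(logits)

probs = mcnn_forward(np.sin(np.linspace(0, 6, 64)))
```

In the real model the random filters and the final weight matrix are the parameters learned jointly by back propagation.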
2.3 Transformation stage
Multi-scale branch. A robust TSC model should be able to capture temporal patterns at different time scales. Long-term features reflect overall trends and short-term features indicate subtle changes in local regions, both of which can be potentially crucial to the prediction quality for certain tasks.
In the multi-scale branch of MCNN, we use down-sampling to generate sketches of a time series at different time scales. Suppose we have a time series $T = \{t_1, t_2, \ldots, t_n\}$ and the down-sampling rate is $k$; then we keep only every $k$-th data point in the new time series:
$$T^{k} = \{t_{1+k\cdot i}\}, \quad i = 0, 1, \ldots, \lfloor (n-1)/k \rfloor.$$
Using this method, we generate multiple new input time series with different down-sampling rates $k$.
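As a sketch, down-sampling is a simple stride operation; the rates 1, 2, and 3 below are illustrative, not the values tuned in the experiments.

```python
import numpy as np

def downsample(ts, k):
    """Keep every k-th data point of the series (down-sampling rate k)."""
    return ts[::k]

ts = np.arange(12, dtype=float)                       # toy series of length 12
branch_inputs = [downsample(ts, k) for k in (1, 2, 3)]
# resulting lengths: 12, 6, 4
```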
Multi-frequency branch. In real-world applications, high-frequency perturbations and random noises widely exist in the time series data due to many reasons, which poses another challenge to achieving high prediction accuracy. It is often hard to extract useful information on raw time series data with the presence of these noises. In MCNN, we adopt low frequency filters with multiple degrees of smoothness to address this problem.
A low frequency filter can reduce the variance of a time series. In particular, we employ the moving average to achieve this goal. Given an input time series, we generate multiple new time series with varying degrees of smoothness using moving averages with different window sizes. The newly generated time series represent general low-frequency information, which makes the trend of the time series clearer. Suppose the original time series is $T = \{t_1, t_2, \ldots, t_n\}$; the moving average converts it into a new time series
$$T^{\ell} = \{x_1, x_2, \ldots, x_{n-\ell+1}\}, \quad x_i = \frac{1}{\ell}\sum_{j=i}^{i+\ell-1} t_j,$$
where $\ell$ is the window size and $1 \le i \le n-\ell+1$. With different $\ell$, MCNN generates multiple time series of different frequencies, all of which are fed into the local convolutional layer for this branch. Different from the multi-scale branch, each time series in the multi-frequency branch has the same length, which allows us to assemble them into multiple channels for the following convolutional layer.
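The moving average can be sketched as a 'valid' convolution with a uniform window. Note that outputs for different window sizes then differ in length by $\ell - 1$, so to give them a common length for channel assembly one would pad or truncate; this minimal sketch shows only the basic operation.

```python
import numpy as np

def moving_average(ts, ell):
    """Smooth ts with a uniform window of size ell; output length n - ell + 1."""
    window = np.ones(ell) / ell
    return np.convolve(ts, window, mode='valid')

ts = np.array([1.0, 2.0, 9.0, 2.0, 1.0])
smooth = moving_average(ts, 3)   # means of consecutive windows of size 3
```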
2.4 Local convolution stage
Local convolution. After down-sampling, we obtain multiple time series of different lengths from a single input time series. We apply independent 1-D local convolutions on each of these newly generated time series. In particular, the filter size of the local convolution is the same across all these time series. Note that, with the same filter size, a shorter time series gives each filter a larger local receptive field in the original time series. This way, each output of the local convolution stage captures a different scale of the original time series. An advantage of this method is that, by down-sampling the time series instead of increasing the filter size, we greatly reduce the number of parameters in the local convolutional layer.
Max pooling with multiple sizes. Max pooling, a form of non-linear down-sampling, is also performed between successive convolutional layers in MCNN. It reduces the size of the feature maps as well as the number of parameters in the following layers, which helps avoid overfitting and improves computational efficiency. More importantly, the max pooling operation introduces invariance to spatial shifting, making MCNN more robust.
Instead of using a small fixed pooling size, in MCNN we introduce a variable called the pooling factor, $p$, which is the length of the output after max pooling. Suppose the output time series after convolution has a length of $n$; then both the pooling size and the stride in max pooling are $\lceil n/p \rceil$. The pooling size is fairly large since $p$ is typically chosen to be a small constant. By doing this, we can have more filters and enforce each filter to learn only a local feature, since in the backpropagation phase, filters are updated based on those few activated convolution parts.
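A minimal sketch of pooling an output down to a fixed length $p$; the input values here are arbitrary.

```python
import numpy as np

def max_pool_to_length(x, p):
    """Max-pool x down to about p outputs: pool size = stride = ceil(len(x)/p)."""
    size = int(np.ceil(len(x) / p))
    return np.array([x[i:i + size].max() for i in range(0, len(x), size)])

x = np.array([0.1, 0.9, 0.3, 0.2, 0.8, 0.4])
pooled = max_pool_to_length(x, 3)   # pool size 2: max over each pair
```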
2.5 Full convolution stage
After extracting feature maps from multiple branches, we concatenate all these features and feed them into further convolutional layers as well as a fully connected layer followed by a softmax transformation. Following prior work, we adopt the technique of deep concatenation to concatenate all the feature maps vertically.
The output of MCNN is the predicted distribution over the possible labels for the input time series. To train the neural network, MCNN uses the cross-entropy loss defined as:
$$L = -\sum_{i=1}^{N} \log p_{y_i}(T_i),$$
where $p_{y_i}(T_i)$ is the output of instance $T_i$ through the neural network, i.e., the predicted probability of its true label $y_i$. The parameters (weights $W$ and biases $b$) in MCNN are those in the local and full convolutional layers, as well as those in the fully connected layers, all of which are learned jointly through back propagation.
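The loss can be checked on a toy batch; the probabilities below are made up for illustration.

```python
import numpy as np

def cross_entropy(probs, labels):
    """probs: (N, C) softmax outputs; labels: (N,) true class indices."""
    n = len(labels)
    # pick out the predicted probability of each instance's true label
    return -np.log(probs[np.arange(n), labels]).sum()

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
loss = cross_entropy(probs, labels)   # -(log 0.7 + log 0.8)
```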
2.6 Data augmentation
One advantage of our framework is the ability to deal with large-scale datasets. When dealing with smaller datasets, convolutional nets tend to overfit, and currently most publicly available TSC datasets have limited sizes. To overcome this problem, we propose a data augmentation technique on the original datasets in order to avoid overfitting and improve the generalization ability. For massive datasets with abundant training data, data augmentation may not be needed.
We propose window slicing for data augmentation. For a time series $T = \{t_1, t_2, \ldots, t_n\}$, a slice is a snippet of the original time series, defined as $S_{i:j} = \{t_i, t_{i+1}, \ldots, t_j\}$, $1 \le i \le j \le n$. Suppose a given time series $T$ is of length $n$ and the length of the slice is $s$; our slicing operation generates a set of $n - s + 1$ sliced time series:
$$\{S_{1:s}, S_{2:s+1}, \ldots, S_{n-s+1:n}\},$$
where all the sliced time series have the same label as their original time series.
We apply window slicing to all time series in a given training dataset. During training, all the training slices are considered independent training instances. We also apply window slicing when predicting the label of a testing time series: we first use the trained MCNN to predict the label of each of its slices, and then take a majority vote among all these slices to make the final prediction. Another advantage of slicing is that the time series are not required to have equal lengths, since we can always cut all the time series to the same length using window slicing.
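A sketch of slicing and test-time majority voting; `toy_model` below is a hypothetical stand-in for a trained MCNN that maps one slice to a label.

```python
import numpy as np
from collections import Counter

def slices(ts, s):
    """All n - s + 1 contiguous slices of length s."""
    return [ts[i:i + s] for i in range(len(ts) - s + 1)]

def predict_by_vote(model, ts, s):
    """Predict a label for each slice, then take a majority vote."""
    votes = [model(sl) for sl in slices(ts, s)]
    return Counter(votes).most_common(1)[0][0]

def toy_model(sl):
    # hypothetical classifier: thresholds the slice mean
    return int(sl.mean() > 2)

ts = np.arange(5, dtype=float)
label = predict_by_vote(toy_model, ts, 3)
```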
3 Discussions

In this section, we discuss several properties of the MCNN framework and its relations to other important works.
3.1 Effectiveness of convolution filters
Convolution has been a well-established method for handling sequential signals. We advocate that it is also a good fit for capturing characteristics in time series. Suppose $f = \{f_1, \ldots, f_m\}$ is a filter of length $m$ and $T = \{t_1, \ldots, t_n\}$ is a time series. Let $C = f * T$ be the result of 1-dimensional discrete convolution. The $i$-th element of the result is given by
$$C_i = \sum_{j=1}^{m} f_j \cdot t_{i+j-1}, \quad 1 \le i \le n - m + 1.$$
Depending on the filter, the convolution is capable of extracting a great deal of insightful information from the original time series. For example, if $f = (-1, 1)$, the result of the convolution is the gradient between any two neighboring points, $C_i = t_{i+1} - t_i$. However, is MCNN able to learn such filters? The answer is yes. To show this, we train MCNN on the real-world dataset Gun_Point with a filter of size 15 ($m = 15$). For illustration, we pick one of the filters learned by MCNN as well as 3 time series from the dataset.
We show the shape of this selected filter and these 3 time series on the left of Figure 2, and their shapes after convolution with the filter. Here, the two blue curves belong to one class and the red curve belongs to a different class. The learned filter (shown in the left figure) may look random at first glance; however, a closer examination shows that it makes sense.
First, we can observe from the left figure that each time series has an upward part and a downward part no matter which label it has. After convolution with the filter, all three new signals form a valley at the location of the upward part and a peak at the location of the downward part. Second, since in MCNN we use max pooling right after convolution, the learned filter correctly finds that the downward part is more important, as max pooling only picks the maximum value from each convolved signal. As a result, the convolution and max pooling correctly differentiate the blue curves from the red curve, since the maximum values of the two blue curves after convolution are greater than that of the red curve. Through this visualization, MCNN also offers a certain degree of interpretability, as it tells us which characteristic was found by MCNN. Third, these three time series have similar overall shapes but different time scales, as the "plateaus" on the top have different lengths. It is very challenging for other methods such as DTW or shapelets to classify them; however, a single filter learned by MCNN, coupled with max pooling, can classify them well.
To further demonstrate the power of convolution filters for TSC, we compute the max pooling result of all time series in the training set convolved with the filter shown in Figure 2, and show all of them in Figure 3. Here, each point corresponds to a time series in the dataset; the blue and red points correspond to the two different classes. The x-axis is the max-pooling value of each point, and the y-axis is the class label. We can see from Figure 3 that, with an appropriately chosen classification threshold, one single convolution filter can already achieve very high accuracy on this dataset.
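The gradient-filter example above can be verified numerically. Note that the sliding dot product used in CNN "convolution" is what NumPy calls correlation, so `np.correlate` matches the formula $C_i = \sum_j f_j \cdot t_{i+j-1}$ directly; the series values are arbitrary.

```python
import numpy as np

t = np.array([1.0, 3.0, 6.0, 5.0, 2.0])
f = np.array([-1.0, 1.0])

# sliding dot product: C_i = -t_i + t_{i+1}, i.e. differences of neighbors
grad = np.correlate(t, f, mode='valid')   # [ 2.  3. -1. -3.]
```

Here `grad` equals `np.diff(t)`, confirming that the filter $(-1, 1)$ computes the gradient between neighboring points.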
3.2 Relation to learning shapelets
A major class of TSC methods is based on shapelet analysis, which assumes that time series are characterized by some common subsequences. Shapelets can either be extracted from existing time series or learned from the data. A recent study proposes a learning time series shapelets (LTS) method which achieves unprecedented performance improvements over simple extraction. In the LTS method, each time series is represented by a feature vector in which each feature is the similarity between the time series and a shapelet. A logistic regression is applied on this new representation of the time series to get the final prediction. Both the shapelets and the parameters of the logistic regression model are jointly learned.
There is a strong connection between the LTS method and MCNN, as both learn the parameters of the shapelets or filters jointly with a classifier. In fact, LTS can be viewed as a special case of MCNN. To make this clearer, let us first consider a simpler architecture, a special case of MCNN, where there is only one identity branch, and the input time series is processed by a 1-D convolutional layer followed by a softmax layer. The 1-D convolutional filter in the model can be regarded as a shapelet, and the second layer (after convolution) is the new representation of the input time series. In this case, each neuron in the second layer is an inner product between the filter (or shapelet) and the corresponding window of the input time series. From this, we can see that the MCNN model adopts the inner product as the similarity measure while LTS employs the Euclidean distance.
To further show the relationship between the inner product in convolution and the Euclidean distance, we can express the Euclidean distance in the form of convolution. Let $D$ be the vector of squared Euclidean distances between a filter $f = \{f_1, \ldots, f_m\}$ and each window of a time series $T = \{t_1, \ldots, t_n\}$; its $i$-th element is:
$$D_i = \sum_{j=1}^{m}(t_{i+j-1} - f_j)^2 = \sum_{j=1}^{m} f_j^2 - 2\sum_{j=1}^{m} f_j\, t_{i+j-1} + \sum_{j=1}^{m} t_{i+j-1}^2. \quad (5)$$
From Eq. (5), the Euclidean distance is nothing but the combination of convolution (after flipping the sign of $f$) and the norms of $f$ and of a part of $T$. The first term in Eq. (5) is a constant for each time series, and can therefore be regarded as a bias, which MCNN has incorporated in the model. We can thus see that learning shapelets is a special case of learning convolution filters when the filters are restricted to have the same norm. Moreover, if we consider the full MCNN framework, its multi-scale and multi-frequency branches make it even more general, able to handle different time scales and noise.
Eq. (5) also gives us a hint on how to use convolutional neural networks to implement Euclidean distances. By doing this, the Euclidean distances between the time series and the shapelets can be efficiently computed by leveraging deep learning packages and the speedups from their GPU implementations.
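The Eq. (5) identity can be verified numerically; both computations below give the same sliding squared Euclidean distance (the series and filter values are arbitrary).

```python
import numpy as np

t = np.array([1.0, 3.0, 6.0, 5.0, 2.0])
f = np.array([2.0, 4.0])
m = len(f)

# direct sliding squared Euclidean distance between f and each window of t
direct = np.array([((t[i:i + m] - f) ** 2).sum() for i in range(len(t) - m + 1)])

# the same quantity via the Eq. (5) decomposition:
# ||f||^2 - 2 * (sliding dot product) + (sliding squared norm of the window)
cross = np.correlate(t, f, mode='valid')
win_sq = np.correlate(t ** 2, np.ones(m), mode='valid')
decomposed = (f ** 2).sum() - 2.0 * cross + win_sq
```

The first term, $\|f\|^2$, is the same for every window, which is why it can be folded into a bias term as noted above.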
4 Related Work
TSC has been studied for a long time, and a plethora of time series classification algorithms have been proposed. Most traditional TSC methods fall into two categories: distance-based methods that use kNN classifiers on top of distance measures between time series, and feature-based classifiers that extract or search for deterministic features in the time or frequency domain and then apply traditional classification algorithms. In recent years, ensemble methods that combine many TSC classifiers have also been studied. A full review of these methods is out of scope here, but we conduct a comprehensive empirical comparison with leading TSC methods in the next section. Below, we review the works most related to MCNN.
Convolutional neural networks (CNNs) are deep learning models that can combine hierarchical feature extraction and classification together. Extensive comparisons have shown that the convolution operations in CNNs have a better capability of extracting meaningful features than ad-hoc feature selection. However, applications of CNNs to TSC have not been studied until recently.
A multi-channel CNN has been proposed to deal with multivariate time series. Features are extracted by feeding each time series into a separate CNN; the resulting features are then concatenated and fed into a new CNN framework. Large multivariate datasets are needed to train this deep architecture. In contrast, our method focuses on univariate time series and introduces two additional branches that extract multi-scale and multi-frequency information, further increasing the prediction accuracy. Another approach feeds a CNN with variables post-processed using an input variable selection (IVS) algorithm. The key difference from MCNN is that they aim at reducing the input size with different IVS algorithms; in contrast, we expose more raw information for the CNN to discover.
In addition to classification, CNNs have also been used for time series metric learning. Zheng et al. proposed a model called convolutional nonlinear neighbourhood components analysis that performs CNN-based metric learning and uses 1-NN as the classifier in the embedding space.
Shapelets attract a lot of attention because they detect shapes that are crucial to TSC, providing insights and interpretability. However, searching for shapelets among all the time series segments is time-consuming, and various speed-up methods have been proposed to accelerate this procedure. Grabocka et al. proposed a model that learns global shapelets automatically instead of searching for them. As discussed in Section 3.2, MCNN is general enough to be able to learn shapelets.
CNNs can achieve scale invariance to some extent via the pooling operation. Thus, it is beneficial to introduce a multi-scale branch to extract short-term as well as long-term features. In image recognition, some CNNs keep the feature maps from each stage and feed all of them to the final fully connected layer. By doing this, both low-level and higher-level features are preserved. In our model, we down-sample the raw data into different time scales, which provides low-level features at different scales together with higher-level features.
5 Experimental Results
In this section, we conduct extensive experiments on various benchmark datasets to evaluate MCNN and compare it against many leading TSC methods. We have made an effort to include the most recent works.
5.1 Experimental setup
We first describe the setup for our experiments.
Baseline methods. For comprehensive evaluation, we evaluate two classical baseline methods: 1-NN with Euclidean distance (ED) and 1-NN DTW. We also select 11 existing methods with state-of-the-art results published within the last three years, including: DTW with a warping window constraint set through cross validation (DTW CV), Fast Shapelet (FS), SAX with vector space model (SV), Bag-of-SFA-Symbols (BOSS), Shotgun Classifier (SC), time series based on a bag-of-features (TSBF), Elastic Ensemble (PROP), 1-NN Bag-Of-SFA-Symbols in Vector Space (BOSSVS), the Learn Shapelets Model (LTS), and the Shapelet Ensemble (SE) model.
We also test a standard convolutional neural network with the same number of parameters as MCNN to show the benefit of the proposed multi-scale transformations and local convolution. For reference, we also list the results of flat-COTE (COTE), an ensemble model proposed by Bagnall et al., which uses weighted votes over 35 different classifiers. MCNN is orthogonal to flat-COTE and can be incorporated as a constituent classifier.
Datasets. We evaluate all methods thoroughly on the UCR time series classification archive, which consists of 46 datasets selected from various real-world domains. We omit Car and Plane because a large portion of the baseline methods do not provide related results. All the datasets in the archive are publicly available at http://www.cs.ucr.edu/~eamonn/time_series_data/. Following previous suggestions, we z-normalize the following datasets during preprocessing: Beef, Coffee, Fish, OSULeaf and OliveOil.
All the experiments use the default training and testing set splits provided by UCR, and the results are rounded to three decimal places. For authoritative comparison, we adopt the experimental results collected by Bagnall et al.  and Schafer  for the baseline methods.
Configuring MCNN. For MCNN, we conduct the experiments on all the datasets with the same network architecture as in Figure 1. Since most of the datasets in the UCR archive are not large, we first use window slicing to increase the size of the training set. For window slicing, we set the length of the slices to be a fixed fraction of the original time series length $n$. We set the number of filters to 256 for the convolutional layers and include 256 neurons in the fully connected layer.
We use mini-batch stochastic gradient descent with momentum to update the parameters of MCNN. We adopt grid search for hyper-parameter tuning based on cross validation. The hyper-parameters of MCNN include the filter size, the pooling factor, and the batch size.
In particular, the filter size is searched as a ratio of the filter length to the original time series length, and the pooling factor, which denotes the number of outputs of max-pooling, is searched over a small set of candidate values. Early stopping is applied to prevent overfitting: we use the error on the validation set to determine the best model, and when the validation error does not decrease for a number of epochs, training terminates.
MCNN is implemented based on Theano and run on an NVIDIA GTX TITAN graphics card with 2688 cores and 6 GB of global memory. For full replicability of the experiments, we will release our code and make it publicly available (source codes of the programs developed by our lab are published at http://www.cse.wustl.edu/~ychen/psd.htm).
CNN vs. MCNN. Before comparing against other TSC classifiers, we first compare MCNN with standard CNN. We test a CNN that has the same architecture and number of parameters as our MCNN but does not have the multi-scale transformation and local convolutions. Figure 4 shows the scatter plot of the test accuracies of CNN and MCNN on the 44 datasets. We can see that MCNN achieves better results on 41 out of 44 datasets. A binomial test confirms that MCNN is significantly better than CNN at the 1% level.
5.2 Comprehensive evaluation
Table 1 shows a comprehensive evaluation of all methods on the UCR datasets. For each dataset, we rank all 15 classifiers. The last row of Table 1 shows the mean rank for each method (lower is better). We see that MCNN is very competitive, achieving the highest accuracy on 10 datasets. MCNN has a mean rank of 3.95, lower than all the state-of-the-art methods except COTE, which is an ensemble of 35 classifiers.
To further analyze the performance, we make a pairwise comparison of each algorithm against MCNN. The binomial test (BT) and the Wilcoxon signed rank test (WSR) are used to measure the significance of the differences. The corresponding p-values are listed in Table 2, indicating that MCNN is significantly better than all the other methods except BOSS and COTE at the 1% level. Moreover, the tests show that the differences between COTE, BOSS, and MCNN are not significant.
Figure 5 shows the critical difference diagram. The values shown in the figure are the average ranks of the classifiers; bold lines indicate groups of classifiers that are not significantly different, and the critical difference (CD) length is shown on the graph. Figure 5 covers MCNN, all baseline methods, and COTE. MCNN is among the most accurate classifiers and its performance is very close to COTE. It is quite remarkable that MCNN, a single algorithm, obtains the same state-of-the-art performance as an ensemble model consisting of 35 different classifiers. Note that MCNN is orthogonal to flat-COTE: MCNN can also be included as a predictor in flat-COTE to further improve performance. There are obvious margins between MCNN and the other baseline classifiers.
We now group these classifiers into three categories and provide more detailed analysis.
Distance-based classifiers. These classifiers use nearest-neighbor algorithms based on distance measures between time series. The simplest distance measure is the Euclidean distance (ED). Dynamic time warping (DTW) was proposed to capture global similarity while addressing the phase-shift problem. DTW with a 1-NN classifier has been hard to beat for a long time and has become a benchmark method, as has DTW with a warping window set through cross-validation (DTWCV). k-NN classifiers can also use transformed features. Fast shapelet (FS) searches for shapelets in a lower-dimensional transformed space. Bag-of-SFA-Symbols (BOSS) proposes a distance based on histograms of symbolic Fourier approximation words. The BOSSVS model combines the BOSS model with the vector space model to reduce the time complexity. All of these are combined with 1-NN for the final prediction.
By grouping the distance-based classifiers together, we can compare their average performance with MCNN. To illustrate the overall performance of the different algorithms, we plot the accumulated rank on all the tested datasets in Figure 6. We order all the datasets alphabetically by name and show the accumulated rank; for example, if a method is always ranked #1, its accumulated rank at the $i$-th dataset is $i$. From Figure 6, we see that MCNN has the lowest accumulated rank, outperforming all the distance-based classifiers.
Feature-based classifiers. For feature-based classifiers, we selected SAX-VSM, TSF, TSBF, and LTS. Symbolic aggregate approximation (SAX) has become a classical method to discretize time series based on piecewise mean values. SAX-VSM achieved state-of-the-art classification accuracy on the UCR datasets by combining the vector space model (VSM) with SAX. Time series forest (TSF) divides a time series into different intervals and calculates the mean, standard deviation, and slope as interval features. Instead of using the traditional entropy gain, TSF proposed a new splitting criterion by adding an additional term, which measures the nearest distance between the interval features and the split threshold, to the entropy, and achieved better results than traditional random forests. The bag-of-features framework (TSBF) also extracts interval features at different scales. The features from each interval form an instance, and each time series forms a bag; a random forest is used to build a supervised codebook and classify the time series. Finally, learning time series shapelets (LTS) provides not only competitive results but also the ability to learn shapelets directly; classification is made based on logistic regression.
The middle plot of Figure 6 compares the performance of MCNN against the feature-based classifiers, including SV, TSBF, TSF, and LTS. It is clear that MCNN is substantially better than these feature-based classifiers, as its accumulated rank is consistently the lowest by a large margin.
Ensemble-based classifiers. There is a growing trend of ensembling different classifiers to achieve higher accuracy. The Elastic Ensemble (PROP) combined 11 distinct classifiers based on elastic distance measures through a weighted ensemble scheme; it was the first classifier to significantly outperform DTW at that time. The shapelet ensemble (SE) combines shapelet transformations with a heterogeneous ensemble method; the weight of each classifier is assigned based on its cross-validation accuracy. The flat collective of transform-based ensembles (flat-COTE) is an ensemble of 35 different classifiers based on features from the time and frequency domains and has achieved state-of-the-art accuracy. Despite their high testing accuracy, ensemble methods suffer from high complexity during both training and testing. From the third plot in Figure 6, we can observe that MCNN is very close to COTE and much better than SE and PROP. The critical difference analysis in Figure 5 also confirms that there is no significant difference between COTE and MCNN. It is in fact quite remarkable that MCNN, a single algorithm, can match the performance of the COTE ensemble. The performance of MCNN is likely to improve further when trained with larger datasets, since convolutional neural networks are known to be able to absorb huge amounts of training data and keep improving.
We have presented the Multi-scale Convolutional Neural Network (MCNN), a convolutional neural network tailored for time series classification. MCNN unifies feature extraction and classification, and jointly learns the parameters through back-propagation. It leverages the strength of CNNs to automatically learn good feature representations in both the time and frequency domains. In particular, MCNN contains multiple branches that perform various transformations of the time series, extracting features at different frequencies and time scales and addressing the limitation of many previous works that extract features at only a single time scale. We have also discussed the insight that learning convolutional filters in MCNN generalizes shapelet learning, which in part explains the excellent performance of MCNN.
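To make this connection concrete: a shapelet feature is the minimum distance between a pattern and all equal-length windows of the series, while a convolutional filter followed by global max pooling computes the maximum inner product with all windows; expanding the squared distance shows the two differ only by window-norm terms. A minimal pure-Python sketch of the two operations (the function names are ours):

```python
def shapelet_feature(series, shapelet):
    """Minimum squared distance between the shapelet and any
    equal-length window of the series (shapelet-transform feature)."""
    m = len(shapelet)
    return min(sum((series[i + j] - shapelet[j]) ** 2 for j in range(m))
               for i in range(len(series) - m + 1))

def conv_maxpool(series, filt):
    """1-D convolution with a filter followed by global max pooling,
    the analogous operation inside a CNN branch."""
    m = len(filt)
    return max(sum(series[i + j] * filt[j] for j in range(m))
               for i in range(len(series) - m + 1))

x = [0, 1, 0, -1, 0, 1, 0]
print(shapelet_feature(x, [0, 1, 0]))  # prints 0: an exact match exists
print(conv_maxpool(x, [0, 1, 0]))      # prints 1: best-aligned response
```

Learning the filter weights by back-propagation, rather than searching over candidate subsequences, is what gives MCNN its extra flexibility.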
We have conducted comprehensive experiments and compared MCNN with leading time series classification models. We have demonstrated that MCNN achieves state-of-the-art performance and outperforms many existing models by a large margin, especially when enough training data is present.
More importantly, an advantage of CNNs is that they can absorb massive amounts of data to learn good feature representations. Currently, all the TSC datasets we have access to are not very large, with training sizes ranging from around 50 to a few thousand. We envision that MCNN will show even greater advantages in the future when trained with much larger datasets. We hope MCNN will inspire more research on integrating deep learning with time series data analysis. For future work, we will investigate how to augment MCNN for time series classification by incorporating side information from multiple other sources, such as text, images, and speech.
The authors are supported in part by the IIS-1343896, DBI-1356669, and III-1526012 grants from the National Science Foundation of the United States, a Microsoft Research New Faculty Fellowship, and a Barnes-Jewish Hospital Foundation grant.
-  I. Arel, D. C. Rose, and T. P. Karnowski. Deep machine learning - a new frontier in artificial intelligence research [research frontier]. Computational Intelligence Magazine, IEEE, 5(4):13–18, 2010.
-  A. Bagnall, J. Lines, J. Hills, and A. Bostrom. Time-series classification with cote: The collective of transformation-based ensembles. Knowledge and Data Engineering, IEEE Transactions on, 27(9):2522–2535, 2015.
-  M. G. Baydogan, G. Runger, and E. Tuv. A bag-of-features framework to classify time series. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(11):2796–2802, 2013.
-  Y. Bengio. Learning deep architectures for ai. Foundations and trends® in Machine Learning, 2(1):1–127, 2009.
-  J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio. Theano: a cpu and gpu math expression compiler. In Proceedings of the Python for scientific computing conference (SciPy), volume 4, page 3. Austin, TX, 2010.
-  D. J. Berndt and J. Clifford. Using dynamic time warping to find patterns in time series. In KDD workshop, volume 10, pages 359–370. Seattle, WA, 1994.
-  Y. Chen, E. Keogh, B. Hu, N. Begum, A. Bagnall, A. Mueen, and G. Batista. The ucr time series classification archive, July 2015. www.cs.ucr.edu/~eamonn/time_series_data/.
-  M. Dalto. Deep neural networks for time series prediction with applications in ultra-short-term wind forecasting. Rn (1), 1:2.
-  J. Demšar. Statistical comparisons of classifiers over multiple data sets. The Journal of Machine Learning Research, 7:1–30, 2006.
-  A. Drewry, B. Fuller, T. Bailey, and R. Hotchkiss. Body temperature patterns as a predictor of hospital-acquired sepsis in afebrile adult intensive care unit patients: a case-control study. Critical Care, 17(5):doi: 10.1186/cc12894, 2013.
-  C. Faloutsos, M. Ranganathan, and Y. Manolopoulos. Fast subsequence matching in time-series databases, volume 23. ACM, 1994.
-  J. Grabocka, N. Schilling, M. Wistuba, and L. Schmidt-Thieme. Learning time-series shapelets. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 392–401. ACM, 2014.
-  G. E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural computation, 18(7):1527–1554, 2006.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
-  H. Lee, P. Pham, Y. Largman, and A. Y. Ng. Unsupervised feature learning for audio classification using convolutional deep belief networks. In NIPS, 2009.
-  J. Li, C. Niu, and M. Fan. Multi-scale convolutional neural networks for natural scene license plate detection. In Advances in Neural Networks–ISNN 2012, pages 110–119. Springer, 2012.
-  J. Lines and A. Bagnall. Time series classification with ensembles of elastic distance measures. Data Mining and Knowledge Discovery, 29(3):565–592, 2015.
-  J. Lines, L. M. Davis, J. Hills, and A. Bagnall. A shapelet transform for time series classification. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 289–297. ACM, 2012.
-  S. Mallat. A wavelet tour of signal processing. Academic press, 1999.
-  Y. Mao, W. Chen, Y. Chen, C. Lu, M. Kollef, and T. Bailey. An integrated data mining approach to real-time clinical monitoring and deterioration warning. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1140–1148. ACM, 2012.
-  H. P. Martinez, Y. Bengio, and G. N. Yannakakis. Learning deep physiological models of affect. Computational Intelligence Magazine, IEEE, 8(2):20–33, 2013.
-  J. Paparrizos and L. Gravano. k-shape: Efficient and accurate clustering of time series. In Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, pages 1855–1870. ACM, 2015.
-  T. Rakthanmanon, B. Campana, A. Mueen, G. Batista, B. Westover, Q. Zhu, J. Zakaria, and E. Keogh. Searching and mining trillions of time series subsequences under dynamic time warping. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 262–270. ACM, 2012.
-  T. Rakthanmanon and E. Keogh. Fast shapelets: A scalable algorithm for discovering time series shapelets. In Proceedings of the thirteenth SIAM conference on data mining (SDM), pages 668–676. SIAM, 2013.
-  Towards time series classification without human preprocessing. In Machine Learning and Data Mining in Pattern Recognition, pages 228–242. Springer, 2014.
-  P. Schäfer. The boss is concerned with time series classification in the presence of noise. Data Mining and Knowledge Discovery, 29(6):1505–1530, 2015.
-  P. Schäfer. Scalable time series classification. Data Mining and Knowledge Discovery, pages 1–26, 2015.
-  F. Schroff, D. Kalenichenko, and J. Philbin. Facenet: A unified embedding for face recognition and clustering. arXiv preprint arXiv:1503.03832, 2015.
-  P. Senin and S. Malinchik. Sax-vsm: Interpretable time series classification using sax and vector space model. In Data Mining (ICDM), 2013 IEEE 13th International Conference on, pages 1175–1180. IEEE, 2013.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015.
-  Z. Xing, J. Pei, and E. Keogh. A brief survey on sequence classification. ACM SIGKDD Explorations Newsletter, 12(1):40–48, 2010.
-  Y. Zheng, Q. Liu, E. Chen, Y. Ge, and J. L. Zhao. Time series classification using multi-channels deep convolutional neural networks. In Web-Age Information Management, pages 298–310. Springer, 2014.
-  Y. Zheng, Q. Liu, E. Chen, J. L. Zhao, L. He, and G. Lv. Convolutional nonlinear neighbourhood components analysis for time series classification. In Advances in Knowledge Discovery and Data Mining, pages 534–546. Springer, 2015.