I Introduction
Nowadays, a growing number of data centres have been built to support the complicated computation and massive storage required by various blooming applications [1]. Each data centre is typically equipped with hundreds of thousands of servers and requires many megawatts of electricity to power its hosted servers and the auxiliary facilities [2]. An essential problem is to monitor such a large number of servers for energy saving and for maintaining business continuity.
Monitoring technologies [3] can be divided into two categories: intrusive and non-intrusive. Intrusive technologies require the installation of certain monitoring software, which requires an administrative role on the system. Compared to intrusive methods, non-intrusive methods are more flexible, requiring only limited data for monitoring and analysis.
In this paper, for the purpose of energy consumption monitoring, we propose to detect the running program in a server by analysing the observed power consumption series. The power data can be measured without administrative rights on the server, which is useful for collecting the power-related information of the servers for energy consumption analysis. The proposed classification analysis reveals only the type of the running program, avoiding any possibility of accessing privacy-related contents in the server.
The proposed program detection problem falls into the field of time series classification. As such, the power data classification problem can be challenging, as the power series collected in detection may be only a small piece of the whole power series of a program, with incomplete and limited information. For this problem, the key is to design an accurate and fast classification algorithm.
Currently there are a few similar works on classifying signals (like the power consumption signals studied here), such as [4] [5] [6]. However, the techniques applied in this literature are based on common spectral or statistical features with classifiers such as nearest neighbour or neural networks. More generally, the time series classification problem has been studied extensively [7], and the most popular method is 1-nearest neighbour with dynamic time warping (DTW). The major research line in time series classification has been the development of various DTW-based distance measurements (variants such as [8] and enhancers [9]); yet we find that even though these measurements can outperform the original DTW to a certain degree in certain cases, these variants are all designed to incorporate the dynamic programming idea of DTW (except some lower bound methods like LB_Keogh [10]) and are all designed to be commutative. Another line of research has also become notable recently, i.e. the long short-term memory (LSTM) neural network, which shows great modelling ability for sequential data. In this work, building on these efforts in the current literature, we propose a novel classifier with much higher accuracy.
In this research, firstly, we propose the Local Time Warping (LTW) time series distance measurement, a lightweight DTW variant that does not need the dynamic programming procedure and is designed to be non-commutative. LTW can be set as a linear-runtime algorithm which performs almost as well as DTW on our data set. Secondly, instead of further enhancing the distance measurement, which can be much more complicated and time consuming, we look into a less expensive solution: a hybrid algorithm of the 1-nearest neighbour with LTW (1NN-LTW) and a recent deep learning model for sequential modelling. To do so, we first utilize the state-of-the-art sequential data modelling neural network, LSTM [11] [12], to classify the power series. Then we propose a new hybrid algorithm of the proposed 1NN-LTW and the LSTM. Our study shows that both 1NN-LTW and LSTM can outperform 1NN-DTW with similar accuracy; however, the two algorithms have distinctly different natures, and the sets of samples they classify accurately differ significantly. The hybrid algorithm of the two classifiers, termed LSTM/LTW, further improves the classification accuracy with little extra effort.
The main contributions of this paper are summarized as follows:

We propose a new distance measurement, LTW. LTW has two unique features which differ from the existing DTW variants: 1) LTW is based on simple "local warping"; no dynamic programming procedure is needed; 2) LTW is non-commutative and is flexible for the nearest neighbour classifier in the time series classification problem. Our experimental results show that for our problem, the proposed LTW can perform better than DTW and several of its variants. Our experiments also show that the non-commutative feature of LTW is beneficial. Furthermore, the linear version of LTW can perform almost as well as DTW on our data set. These results show that in certain cases a lightweight local warping distance measurement (such as LTW) may be good enough for the classification task; however, this does not mean that the proposed LTW can work for all kinds of time series data sets.

We develop, for the first time, a hybrid algorithm of 1NN-LTW and LSTM, termed LSTM/LTW. The hybrid algorithm is based on a well-trained LSTM neural network. Although the training procedure of the LSTM can be time consuming, the classification process in testing can be fast with the LTW distance.

Numerical experiments show that for the power data classification problem, with the LTW distance measurement, the accuracy of the 1NN-LTW classifier improves from about 84% to about 90% compared to 1NN-DTW. With the hybrid algorithm LSTM/LTW, we achieve a power consumption series classification accuracy of up to about 93%, which shows that using the power consumption series to detect the type of the running programs in a server can be very accurate.
The remainder of this paper is organized as follows. In Section II, we briefly introduce the state-of-the-art time series classification algorithms. In Section III, we introduce the experimental data collection design and some preliminary analysis on the data. In Section IV, we introduce the newly proposed algorithm, and in Section V we show the numerical evaluation results and the analysis. In Section VI we conclude the paper and discuss future work.
II Related Works
The power data classification problem studied in this paper can be treated as a time series classification problem, which has been studied extensively for the past decades. For this problem, common classifiers like the support vector machine (SVM) and the k-nearest neighbour (KNN) with Euclidean distance have been proved non-competitive against DTW-distance-based methods like 1NN-DTW [13]. Recently there have been many new methods proved to be as competitive as 1NN-DTW. On one hand, there are many non-neural-network methods such as shapelet-based, dictionary-based and interval-based methods, and ensembles of these methods; we briefly review them below. On the other hand, with the fast development of deep learning [14], the LSTM neural network has also been proved to have strong modelling ability for sequential data. We briefly introduce LSTM afterwards.

II-A Non-Neural Network Approaches
The most popular non-neural-network time series classifiers are nearest neighbour methods with various distance measurements. The most popular method is 1NN-DTW, which is a nearest neighbour classifier with the special DTW distance measurement. For the 1NN classifier, the standard procedure to label a test sample given a set of training samples is as follows. First the distances of the test sample to all the training samples are computed; then the training sample that has the smallest distance to the test sample is chosen, and its label is assigned to the test sample as the classification result. In the above procedure, the key is to utilize a proper distance measurement. For 1NN-DTW, the DTW distance is used, which has superior performance for time series data.
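The standard 1NN procedure above can be sketched in a few lines. This is a minimal illustration, not the paper's exact implementation; the distance function is pluggable, so DTW or any of the measurements discussed below can be used:

```python
def classify_1nn(test_sample, train_samples, train_labels, distance):
    """Label `test_sample` with the label of its nearest training sample.

    `distance` is any callable d(a, b) -> float, e.g. a DTW or LTW
    distance; the classifier itself is distance-agnostic.
    """
    # Compute the distance from the test sample to every training sample.
    dists = [distance(test_sample, s) for s in train_samples]
    # Return the label of the training sample with the smallest distance.
    return train_labels[min(range(len(dists)), key=dists.__getitem__)]
```

Because there is no training phase, all the cost of 1NN lies in the distance computations at test time, which is why a cheaper distance measurement (such as the LTW proposed later) directly speeds up classification.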
The DTW distance of two sequences is computed by finding the best match between them, as shown in Fig. 1. The idea is that sequential data often contain similar fluctuation patterns; however, the same pattern, when it appears in different sequences as a subsequence, may be stretched, shrunk or delayed along the time axis. The DTW distance measurement therefore warps the time axis nonlinearly and finds the best match between the two samples, so that when the same pattern exists in both sequences, the distance is smaller.
Mathematically, the DTW distance is computed by the following dynamic programming process. Denote $D(i, j)$ as the DTW distance between the subsequences $x_{1:i}$ and $y_{1:j}$; then the DTW distance between $x$ and $y$ can be computed by the dynamic programming process with the following iterative equation:

$D(i, j) = d(x_i, y_j) + \min\{D(i-1, j),\; D(i, j-1),\; D(i-1, j-1)\}$ (1)

where $d(\cdot, \cdot)$ is the local cost (e.g. the absolute or squared difference).
The time complexity to compute the DTW distance is O(nm), where n and m are the lengths of the two sequences respectively. The DTW distance measurement actually realigns the time step index pairs in the computation of the distance. In practice, a threshold is usually used to restrict the index offset in the alignment, which can be critical to the classification results [15]. There are also many studies [16] on accelerating the computation of DTW, resulting in fast DTW variants that can be computed in time linear in the length of the sequences. In this paper, we follow the idea of DTW but propose a new distance measurement, which can be computed with a local warping index set, without a dynamic programming process, and which has a special non-commutative nature that can be helpful.
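The recurrence (1) and the warping-window restriction described above can be sketched as follows. This is an illustrative implementation with an absolute-difference local cost; the `window` parameter plays the role of the offset threshold mentioned in the text:

```python
import numpy as np

def dtw_distance(x, y, window=None):
    """Windowed DTW distance between two 1-D sequences.

    D[i, j] holds the cost of aligning x[:i] with y[:j]; each cell adds
    the local cost |x[i-1] - y[j-1]| to the cheapest of the three
    predecessor cells, as in the classic recurrence. `window` bounds
    the index offset |i - j| (a Sakoe-Chiba-style band).
    """
    n, m = len(x), len(y)
    if window is None:
        window = max(n, m)  # no restriction
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        lo, hi = max(1, i - window), min(m, i + window)
        for j in range(lo, hi + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

The double loop makes the O(nm) cost visible; the band reduces it to roughly O(n · window), which is why the window setting matters both for speed and accuracy.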
Many DTW variants have been proposed. We name only a few here due to space constraints; one can refer to [7] for a more complete review and comparison of the existing methods. Move-Split-Merge (MSM) [8] introduces move and split operations in dynamic warping. The Complexity Invariant Distance (CID) [9] is a weight modifier which can be used to enhance any kind of distance measurement, and has been proved very useful when used with DTW.
Besides DTW-based methods, there have been many different methods which look into the patterns of the time series for classification. For example, shapelet-based methods [17] utilize the subsequences that can differentiate classes to do the classification. Dictionary-based methods [18] transform the series into discrete words in a dictionary and then classify. Interval-based classifiers [19] try to extract features from intervals in each time series for the classification. In this paper, we focus on DTW-based methods and do not compare with these methods since, as shown in [7], they perform similarly to DTW-based methods unless they are ensemble methods. Ensemble methods combine multiple simple classifiers and can be better than any single classifier. Currently, the existing ensemble methods are mainly based on the classifiers listed above, and we have not seen any work on ensembles of the above methods and neural networks. In this paper, we thus propose a simple hybrid algorithm of a nearest neighbour classifier and a neural network. The neural network classifier used in this paper is introduced below.
II-B LSTM
LSTM was first proposed by Hochreiter and Gers et al. as an upgrade of the recurrent neural network (RNN) [20]. RNN handles sequential data with a special calculation process following the time step increments, while a traditional neural network simply treats the sequence as a plain vector. With this nature, RNN is suitable for modelling sequential data. However, it suffers from the vanishing gradient problem, which is caused by the iterative process along the time axis: the gradient used in the training process becomes extremely small, causing training failure. To solve this problem, LSTM was proposed; it utilizes a memory core to avoid the vanishing gradient. The details of the LSTM neural network are introduced in Section IV.
LSTM has shown great modelling power for sequential data and has been successfully applied in various machine learning fields like natural language processing (NLP) [21], video analysis [22], etc. It is also noted that LSTM can be both discriminative and generative: as a discriminative model, LSTM can be used for classification tasks, while as a generative model, LSTM can be used to generate sequences similar to the training samples [23]. In this paper, we utilize the discriminative ability of LSTM for our power data classification task.

III Power Series Data Collection and Preliminary Analysis
In this section we present the power series data we collected followed by some preliminary analysis on the data. We will detail the simulation design rules for the data collection and the data samples collected with some pretreatment. The proposed preliminary analysis includes data visualization with different dimension reduction methods, classification results with some canonical classifiers, and feature study.
III-A Power Series Data Collection
We first introduce the design rules of the simulation for data set collection. As a data-driven study on power series classification methodology, we need to collect a set of sample power data. The data collection should be designed carefully to make sure that the classification problem is neither trivial nor impossible to accomplish. In this sense, our guideline for power data collection is to collect "different" and "similar" power series: by "different", the power series must be generated by different programs; by "similar", the different programs can have some similar features so that the classification algorithms need to be really discriminative.
Platform  Program  Number of sequences  Class label
Spark (MapReduce)  Word Count  100  0
Spark (MapReduce)  Sorting  100  1
Spark (MapReduce)  PI  100  2
Spark (MLlib)  CrossValidator  40  3
Spark (MLlib)  K-means  40  4
Spark (MLlib)  LR  40  5
Spark (MLlib)  SVM  40  6
Spark (MLlib)  Cosine similarity  40  7
Spark (MLlib)  PCA  40  8
Hadoop (MapReduce)  Word Count  100  9
Hadoop (MapReduce)  Sorting  100  10
Hadoop (MapReduce)  PI  100  11
Web server  Web server data  40  12
Following the above guideline, we collected in total 13 classes of power data, as shown in Table I (for convenience they are labelled 0, 1, …, 12 respectively). These data fall into two major categories:

Web server power data: usually fluctuate in a continuous pattern;

Spark/Hadoop MapReduce programs: usually show stage patterns, e.g. the Map stage and the Reduce stage.
For the Hadoop/Spark category, we test different programs on these platforms; some are the same for both platforms, such as the "Word Count" program, while some exist only on one platform, for example the "MLlib" programs on the Spark platform. With such a design, we can achieve the proposed "different" and "similar" design goals.
Note that the collected data series are of different lengths, as the running duration can vary among different programs. Although classification methods like 1NN-DTW can deal with power series of different lengths, to apply other canonical methods we cut the collected series into fixed-length sequences in the following. It is also reasonable to label subsequences instead of the complete power sequences of the programs, since in a blind test we have no information about the start/end point of a program. The detailed cutting method we utilize is shown below.
The goal is to cut the power sequences into length-200 samples. To do so, we first discard sequences with length smaller than 200 time slots (time unit: 3 seconds). The remaining number of sequences for each class is: [77, 31, 30, 35, 28, 7, 40, 14, 5, 100, 58, 36, 40]. Although some power data are discarded, the total duration of the remaining sequences is about 199 hours; with the time unit being 3 seconds, the amount of data is still adequate for the study. These sequences are then further cut into length-200 subsequences by sliding a length-200 window along each sequence. We obtain 3200 test sequences in this cutting procedure, which are used as the power data in our classification study. Note that these sequences overlap, as implied by the cutting method.
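The cutting procedure can be sketched as a sliding-window operation. The stride between consecutive windows is not specified here, so the `stride` parameter below is an assumption for illustration:

```python
def cut_windows(seq, length=200, stride=100):
    """Cut a sequence into fixed-length, overlapping subsequences.

    Sequences shorter than `length` are discarded, matching the
    pretreatment in the text. The stride used in the actual experiment
    is not stated; stride=100 here is purely illustrative.
    """
    if len(seq) < length:
        return []  # too short: discarded, as described above
    return [seq[i:i + length] for i in range(0, len(seq) - length + 1, stride)]
```

With stride smaller than the window length, consecutive windows overlap, which matches the note above that the resulting subsequences are overlapped.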
Furthermore, for the purpose of multi-fold tests, we divide these samples into five folds. Note that to avoid overlap between the training data and the test data, the fold partition is done before the sequence cutting. For each fold of test, we use one fold as the test data and the remaining folds as the training data.
III-B Preliminary Analysis
We perform some preliminary analysis on the pretreated data. The following analyses are meant to provide a basic understanding of the power data from the viewpoint of classification.
III-B1 Basic Characteristic Analysis Based on Visualization
We use various dimension reduction methods to visualize the data, which can help to identify whether the power series can be classified successfully to a certain degree. We utilize eight different dimension reduction methods from scikit-learn [24] and project the original fixed-length power sequences into a 2-dimensional space. These methods are PCA, LDA, LLE, modified LLE, Isomap, MDS, Spectral Embedding and t-SNE, which are widely adopted dimension reduction methods. The 2-dimensional codes of the power data generated by these methods are shown in Fig. 2. We use different colors to show samples from different classes.
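As an illustration of the embedding step, a numpy-only 2-dimensional PCA projection (a minimal stand-in for the scikit-learn implementations used here, not the exact pipeline) might look like:

```python
import numpy as np

def pca_2d(X):
    """Project the rows of X onto their first two principal components.

    Centers the data, takes the SVD, and keeps the two directions of
    largest variance, i.e. what PCA with n_components=2 would return
    up to sign.
    """
    Xc = X - X.mean(axis=0)           # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T              # coordinates in the top-2 PC basis
```

Each length-200 power sequence becomes one row of X, and the two returned coordinates are the point plotted in the scatter figure.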
From Fig. 2 we can observe that the power series data are not easy to distinguish after the dimension reduction. This may be due to the short length (2 here) of the embedding code; nevertheless, it indicates that the power series classification task cannot be done easily.
III-B2 Tackling the Classification Problem with Canonical Classifiers
We test some canonical classifiers on the power series classification problem. The canonical classifiers tested here are: Nearest Neighbours, Linear SVM, RBF SVM, Decision Tree, Random Forest, AdaBoost, Naive Bayes, LDA and QDA [24]. Parameter settings for these classifiers are tuned manually. The classification results of these methods are shown in Table II.

Test case  Nearest Neighbours  Linear SVM  RBF SVM  Decision Tree  Random Forest  AdaBoost  Naive Bayes  LDA  QDA
Fold 0  0.4346  0.5187  0.3175  0.5747  0.5891  0.4593  0.4406  0.4380  0.4092 
Fold 1  0.4427  0.5105  0.3063  0.5607  0.6050  0.4192  0.4360  0.4377  0.4276 
Fold 2  0.4420  0.5071  0.3283  0.5901  0.6133  0.4121  0.4016  0.4256  0.4248 
Fold 3  0.4475  0.5004  0.3094  0.5835  0.6378  0.3813  0.4123  0.3707  0.4574 
Fold 4  0.4673  0.5383  0.3301  0.5610  0.6174  0.3834  0.4278  0.4407  0.4786 
From the results we can observe that for a 13-class classification problem, the highest accuracy achieved by these methods is about 60% (by Random Forest). The classification accuracy is not promising (when compared to the 1NN-DTW shown below), which confirms that our power series labelling problem is a typical time series classification problem: as stated in [13], for such problems, canonical Euclidean-distance-based classifiers usually cannot achieve good results.
III-B3 Feature-Based Classification Study
In general, as a signal classification problem, the power series labelling problem can be solved by first extracting certain features from the raw power series and then carrying out the classification with these features. In this subsection, we study this possibility and test power series classification with the DFT [25] feature of the original power sequences. With the DFT, each power sequence is transformed into the spectrum space, resulting in a new representation. The spectrum representation can be aligned as a vector and used as the input to the classifiers. We compare the classification results of 1NN-DTW on the raw data and on the DFT feature. The classification results are shown in Table III. Note that for the 1NN-DTW, the maximum offset is set to 15% of the sample length, which is manually tuned in the experiment.
Test case  1NN-DTW (raw data)  1NN-DTW (DFT feature)
Fold 0  0.8548  0.7122  
Fold 1  0.8393  0.7029  
Fold 2  0.8369  0.6761  
Fold 3  0.8329  0.6998  
Fold 4  0.8514  0.6885 
From Table III we can observe that the DFT features do not help. The reason is that classification on the original data can maximize the information used in classification, while the DFT feature is less informative.
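The DFT feature extraction described above can be sketched with numpy's real FFT. This is a minimal illustration; the exact spectrum representation used in the experiment may differ:

```python
import numpy as np

def dft_feature(seq):
    """Magnitude spectrum of a power sequence via the real FFT.

    The resulting vector (length n//2 + 1 for an input of length n)
    is the spectrum representation aligned as a plain vector for a
    canonical classifier.
    """
    return np.abs(np.fft.rfft(np.asarray(seq, dtype=float)))
```

Discarding the phase and keeping only magnitudes is one reason such a feature can lose information relative to the raw series, consistent with the observation above.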
To summarize, we find that the power series classification problem is not easy to tackle, especially with canonical classifiers and commonly used features. In the following, we propose a new distance measurement inspired by DTW and combine it with the state-of-the-art sequence modelling neural network, LSTM.
IV The Proposed Power Series Classification Algorithm
In this section we present the proposed power series classification algorithm, which hybridizes a nearest neighbour classifier using a novel distance measurement with an LSTM classifier. In the following we first introduce the two components respectively and then present the hybrid algorithm.
IV-A Nearest Neighbour with the Local Time Warping (LTW)
We propose a new classifier that utilizes a novel distance measurement, termed Local Time Warping (LTW), to compute the distance between two sequences; the name reflects that its warping computation for each time step is done in a local window, without a dynamic programming procedure like DTW's. LTW is developed to replace the DTW distance measurement in the 1NN-DTW classifier.
The idea behind LTW is as follows. Comparing the algorithms of DTW and the Euclidean distance, the major difference between them is the many "min" operators in DTW. The "min" operator is actually the key to the "warping" map between the two time series. Besides the warping operation, DTW utilizes a dynamic programming procedure to optimize the mapping. Note that dynamic programming is slow and does not directly optimize the classification accuracy. In this case, what if we do not use the dynamic programming procedure and instead try some low-cost warping operations; is it possible for such a distance measurement to be as good as DTW? We propose LTW to answer this question. Also, note that DTW is computed by an elegantly symmetric formula which makes it commutative in the two time series. What if we do not need the distance measurement to be commutative? Can it be better with a non-commutative design? Our proposed LTW will also answer this question. The detailed design is shown below.
The LTW distance measurement is computed in the following way. Suppose we have two sequences $x$ and $y$, both of length $n$. For a single warping offset $w$, define

$d_w(x \to y) = \sum_{i=1}^{n} \min\{|x_i - y_{i-w}|,\; |x_i - y_i|,\; |x_i - y_{i+w}|\}$ (2)

where out-of-range indices of $y$ are skipped. Then, with a warping index set $W$, we define the LTW distance from $x$ to $y$ as

$\mathrm{LTW}_W(x \to y) = \sum_{w \in W} d_w(x \to y)$ (3)

As shown in (3), LTW works in the following manner. In computing the distance from $x$ to $y$ (when we want to find a nearest neighbour of $x$), we set $x$ as the base sequence and test the similarity of $y$ to $x$ as follows: for each $w \in W$ and each time step $i$ in $x$, we compute the minimum absolute distance between $x_i$ and one of $y_{i-w}$, $y_i$, $y_{i+w}$; then we sum these distances over $i = 1, \dots, n$ and over $w \in W$, which gives the LTW distance from $x$ to $y$ with warping index set $W$. Note that (2) is a linear-time algorithm (only three items to compare no matter how large $w$ is).
Note that the LTW distance is non-commutative, which means that $\mathrm{LTW}_W(x \to y) \neq \mathrm{LTW}_W(y \to x)$ can be true. We use $\mathrm{LTW}_W(x \to y)$ to compute the nearest neighbour of sequence $x$, in the sense of finding the best match of $x$ among the other samples $y$. For comparison, the DTW distance is obviously commutative. The non-commutative feature of LTW can be beneficial, as our target is to find the nearest neighbour for each $x$: a non-commutative distance measurement is enough to serve this purpose and provides more flexibility by enforcing fewer constraints on the distance measurement.
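A direct implementation of the LTW computation described above might look like the following sketch. The handling of out-of-range indices (they are simply skipped) is an assumption for illustration:

```python
def ltw_distance(x, y, W=range(1, 11)):
    """Local Time Warping distance from x to y (non-commutative).

    For each warping offset w in W and each time step i, take the
    smallest absolute difference between x[i] and y[i-w], y[i], y[i+w],
    then sum over i and over w. Only three candidates are compared per
    step regardless of w, so each offset costs linear time.
    """
    n = len(x)
    total = 0.0
    for w in W:
        for i in range(n):
            cands = [abs(x[i] - y[i])]          # unwarped match
            if i - w >= 0:
                cands.append(abs(x[i] - y[i - w]))  # look backward by w
            if i + w < n:
                cands.append(abs(x[i] - y[i + w]))  # look forward by w
            total += min(cands)
    return total
```

Note that swapping the arguments can change the result, which is exactly the non-commutative behaviour discussed above; a single-offset W gives the fast linear version mentioned in the text.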
IV-B Long Short-Term Memory Neural Network
We utilize the LSTM classifier following [26] for our power series classification problem. The LSTM neural network consists of an input layer, an LSTM layer and a logistic regression layer, as depicted in Fig. 3. The three layers function as follows:
Input layer: The input data sample, which is a length-$n$ vector $x$, is first discretized into a bounded integer range. This operation smooths the original power series, which can affect the performance of the LSTM. Then the value at each time step $t$, $x_t$, is enriched into a $d$-dimensional vector $v_t = x_t \mathbf{1}$, where $\mathbf{1}$ is a $d$-dimensional vector with all entries equal to 1, which eases the following computation. After the above process, the new sequence $v_1, \dots, v_n$ is used as the input to the LSTM layer.

LSTM layer: The LSTM layer contains multiple LSTM nodes, where each LSTM node outputs a code $h_t$ at each time step. The operation inside an LSTM node is shown below. First, for each time step $t$, the LSTM node needs to compute a new state denoted by $c_t$. To compute $c_t$, a candidate state $\tilde{c}_t$ is first computed as:

$\tilde{c}_t = \tanh(W_c v_t + U_c h_{t-1} + b_c)$ (4)

Then two gates, an input gate $i_t$ and a forget gate $f_t$, are computed to update the new state:

$i_t = \sigma(W_i v_t + U_i h_{t-1} + b_i)$ (5)
$f_t = \sigma(W_f v_t + U_f h_{t-1} + b_f)$ (6)

Then the new state of the LSTM node is computed as:

$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$ (7)

With the node state, to further compute the output, an output gate $o_t$ is first computed as:

$o_t = \sigma(W_o v_t + U_o h_{t-1} + b_o)$ (8)

Finally the output of an LSTM node is computed as:

$h_t = o_t \odot \tanh(c_t)$ (9)

The outputs of all LSTM nodes are then added together as the output $h$ of the LSTM layer:

$h = \sum_{k} h^{(k)}$ (10)
Logistic regression layer: In this layer the output $h$ of the LSTM layer is used to compute the label of the test sample in the following way. First we use the softmax [27] function to compute the probability vector $p$, with each entry representing the probability of the test sample belonging to a class:

$p = \mathrm{softmax}(W_s h + b_s)$ (11)

Then the prediction is the class which achieves the largest probability:

$\hat{y} = \arg\max_k \, p_k$ (12)
To train the LSTM classifier, the loss function is defined as the negative log-likelihood over the labels of the training data:

$\ell(\theta) = -\sum_{(x, y) \in B} \log p_y(x; \theta)$ (13)

where $\theta$ is the set of all the weight and bias parameters in the LSTM neural network (which are adjusted in the training process) and $B$ is a batch of training samples. The size of $B$ can be important for the performance of the classifier.
IV-C Hybridization of LSTM and 1NN-LTW
In this subsection we propose to combine the 1NN-LTW classifier and the LSTM classifier. The underlying rationale is that both classifiers achieve high classification accuracy for our problem but in very different manners: 1NN-LTW is a nearest neighbour classifier, i.e. a data-based classifier without a training process, while LSTM is a training-based classifier in which the training data are first used to build a model and the model is then used to classify the test data. In our experiments, both classifiers perform promisingly individually; however, our numerical simulation shows that the sets of samples they classify accurately differ significantly. In this sense, we propose to combine the two algorithms to construct an even stronger classifier.
The hybrid algorithm is designed in the following way. Considering that in practice the training of LSTM and the computation of the distance matrix for 1NN-LTW can both be time consuming, the hybrid algorithm is designed to be as simple as possible. We first obtain the two individual classifiers $C_1$ (the nearest neighbour classifier with the CID-enhanced LTW) and $C_2$ (the trained LSTM classifier). For each classifier, we obtain a probability vector when classifying a test time series $x$: $p^{(1)}(x)$ for $C_1$ and $p^{(2)}(x)$ for $C_2$, where $p^{(1)}(x)$ is defined as:

$p^{(1)}(x) = \frac{1}{Z}\, e(x)$ (14)

where $e(x)$ is an all-zero vector except that its entry at the index of the class of the nearest neighbour obtained by $C_1$ is 1, and $Z$ is a normalization factor which makes the entries of the obtained probability vector sum to 1. $p^{(2)}(x)$ is obtained by equation (11).
With these two probability vectors, we simply add them up, and the test series is classified to the class index with the maximum probability, i.e.:

$\hat{y}(x) = \arg\max_k \left( p^{(1)}_k(x) + p^{(2)}_k(x) \right)$ (15)
The detailed algorithm is shown in Algorithm 1.
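The fusion rule (summing the two probability vectors and taking the argmax) can be sketched as:

```python
import numpy as np

def hybrid_predict(p_nn, p_lstm):
    """Fuse the 1NN-LTW and LSTM probability vectors by elementwise
    addition and return the index of the largest fused score."""
    return int(np.argmax(np.asarray(p_nn) + np.asarray(p_lstm)))
```

Because the 1NN side contributes a near-one-hot vector, this rule effectively lets the LSTM's soft probabilities break ties or override the nearest neighbour only when the LSTM is confident, which is the simple combination intended above.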
V Numerical Evaluation and Analysis
In this section we present the experimental results of the proposed algorithms together with the analysis. We first conduct experiments to investigate the proposed LTW and compare it with various variants of DTW. Then we compare the classification accuracy of the proposed 1NN-LTW, LSTM and their hybrid algorithm LSTM/LTW with the baseline algorithm 1NN-DTW. For the baseline algorithm 1NN-DTW, the maximum warping offset is manually tuned and set to 30. For the 1NN-LTW, the warping index set of the LTW distance is set to W = {1, …, 10}, and we will analyse the effects of this set. For the LSTM neural network, we set the maximum number of epochs to 50. For some key parameters which can affect the performance of LSTM, we give a detailed discussion in the following parameter settings study. Test data and code for the LTW tests are available at https://www.dropbox.com/s/lylsece6xysayw8/DataAndCode.zip?dl=0. In presenting the classification results, for convenience, we simply use "Fold i" to denote the test whose test samples come from the i-th fold.

V-A Experimental Study on LTW
In this subsection, we conduct experiments to show that the proposed LTW is indeed a distance measurement different from the existing DTW variants, and that it works better than, or nearly as well as (for the linear version), DTW and its various state-of-the-art variants. We will also show that the non-commutative feature is indeed beneficial. Note that we will not try to use massive experimental data to prove that the proposed LTW is superior to other DTW variants, which is indeed not true, not only because of the No Free Lunch theorem, but also because these distance measurements are mostly designed without a training objective that directly optimizes the classification accuracy: for example, in DTW, the dynamic programming process optimizes the match, but this optimization goal differs from the classification accuracy. In this sense, all these distance measurements can suffer from model bias, and when applied to different data sets their performance will certainly vary. As shown in [7], only ensemble-based methods can significantly outperform 1NN-DTW by more than 3%. In this section, we only compare LTW with DTW with Manhattan distance (DTWm), DTW with Euclidean distance (which we use as the default DTW in this paper, as it has better performance), the DTW variants MSM and LB_Keogh, and the enhancer CID, as these methods perform well in general, as shown in [7]. Details are shown below.
First, we present the comparison of 1NN-LTW (with W = {1, …, 10}) with 1NN-DTWm, 1NN-DTW and MSM. The results on our data set are shown in Table IV. Parameter settings for 1NN-DTW and MSM are tuned manually. Clearly the performance of 1NN-LTW is better.
Second, we compare the fast linear LTW (with a single warping offset) with the linear-runtime lower bound method LB_Keogh, tested with two different window sizes. The results are shown in Table V; the linear LTW outperforms LB_Keogh significantly on our data set. This shows that the fast linear version of LTW can still perform quite well.
Third, we present the classification results with the CID-enhanced distance measurements. We show the results of CID(DTW) and CID(LTW) with W = {1, …, 10} and W = {10} in Table VI. Clearly, CID can improve the performance of both DTW and LTW on our data set. With the CID modifier, LTW is still slightly better than DTW; however, the advantage of CID(LTW) over CID(DTW) becomes smaller. We believe this is reasonable, as the accuracy is upper bounded and further improvement becomes much harder when the algorithm is already very accurate.
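For reference, the CID modifier of Batista et al. multiplies a base distance by a complexity-correction factor built from the "stretched length" of each series. A sketch follows; the handling of the zero-complexity edge case (constant series) is an assumption:

```python
import math

def complexity(q):
    """Complexity estimate CE(q): the length of the line obtained by
    'stretching' the series, i.e. the root sum of squared one-step
    differences."""
    return math.sqrt(sum((q[i + 1] - q[i]) ** 2 for i in range(len(q) - 1)))

def cid(dist, x, y):
    """Apply the CID correction factor max(CE)/min(CE) to any base
    distance `dist` (e.g. DTW or LTW)."""
    ce_x, ce_y = complexity(x), complexity(y)
    if min(ce_x, ce_y) == 0:
        # Edge case not specified in the text: if both series are flat
        # the factor is taken as 1, otherwise the pair is maximally
        # dissimilar in complexity.
        return dist(x, y) if max(ce_x, ce_y) == 0 else float('inf')
    return dist(x, y) * max(ce_x, ce_y) / min(ce_x, ce_y)
```

The factor is at least 1 and penalizes pairs whose complexities differ, which is why it can sharpen either DTW or LTW without changing their cost.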
At last, we present experimental results showing that the non-commutative feature of LTW is indeed beneficial. To do so, we implement a simple commutative (symmetrized) version of LTW, defined as:

$\mathrm{LTW}^{sym}_W(x, y) = \mathrm{LTW}_W(x \to y) + \mathrm{LTW}_W(y \to x)$ (16)

The experimental results comparing LTW with this commutative version are shown in Table VII. Clearly, LTW outperforms its commutative version significantly, which shows that the non-commutative feature of LTW is indeed beneficial.
Test case  1NN-LTW (W = {1,…,10})  1NN-DTWm  1NN-DTW  MSM
Fold 0  0.8862  0.7988  0.8480  0.8294 
Fold 1  0.8854  0.7958  0.8351  0.8310 
Fold 2  0.8661  0.8220  0.8347  0.8699 
Fold 3  0.8809  0.8118  0.8379  0.8682 
Fold 4  0.8894  0.8168  0.8442  0.8515 
Table V
Test case  1NN-LTW (linear)  LB_Keogh (first window size)  LB_Keogh (second window size)
Fold 0  0.8829  0.5866  0.4542 
Fold 1  0.8762  0.5791  0.4753 
Fold 2  0.8444  0.6081  0.4592 
Fold 3  0.8696  0.6018  0.4616 
Fold 4  0.8838  0.6053  0.4326 
Table VI
Test case  1NN-CID(DTW)  1NN-CID(LTW {1,…,10})  1NN-CID(LTW {10})
Fold 0  0.8829  0.9041  0.8837 
Fold 1  0.8854  0.8904  0.8912 
Fold 2  0.8833  0.8684  0.8579 
Fold 3  0.8950  0.9013  0.8887 
Fold 4  0.8999  0.9031  0.8846 
Table VII
Test case  1NN-LTW  1NN with commutative LTW (Eq. 16)
Fold 0  0.8862  0.7385 
Fold 1  0.8853  0.7029 
Fold 2  0.8661  0.7218 
Fold 3  0.8809  0.7294 
Fold 4  0.8894  0.7692 
V-B The Classification Accuracy Rate Comparison
The five-fold classification accuracy results of the hybrid algorithm LSTM/LTW are shown in Table VIII, compared with the non-hybrid classifiers. The results of LSTM and LSTM/LTW are averaged over 10 independent runs (mean ± standard deviation).
Table VIII
Test case  1NN-DTW  1NN-CID(LTW)  LSTM  LSTM/LTW
Fold 0  0.8548  0.9040  0.8772±0.0153  0.9267±0.0030
Fold 1  0.8393  0.8903  0.8780±0.0130  0.9115±0.0046
Fold 2  0.8369  0.8683  0.8547±0.0171  0.8923±0.0038
Fold 3  0.8329  0.9013  0.8772±0.0077  0.9167±0.0021
Fold 4  0.8514  0.9031  0.8778±0.0061  0.9256±0.0051
From Table VIII we can observe that:
- The proposed LSTM classifier achieves accuracy similar to 1NN-LTW, and it also outperforms 1NN-DTW on our data set.
- The hybrid algorithm LSTM/LTW achieves higher accuracy than both 1NN-LTW and LSTM, by an increment of about 3%, which shows that the hybrid algorithm can indeed improve the classification accuracy.
For the first observation, we can see that LSTM, as a neural network, can significantly outperform other canonical classifiers such as SVM, which demonstrates its strong modelling ability for sequential data. Note that a common neural network such as a multilayer perceptron (MLP) cannot perform as well as LSTM here. The performance of LSTM can be seriously affected by the training settings, which we discuss below.
For the second observation, the improvement is small, which is reasonable: the baseline algorithms individually already achieve high accuracy, making a large further improvement difficult for the hybrid algorithm. The benefit of the hybrid algorithm is shown clearly in the following detailed analysis.
V-C Analysis on the Accurately Classified Samples
In this subsection we analyse the accurately classified samples of the power series and study the differences between the classifiers. In doing so, we can identify why and how the hybrid algorithm works.
Fig. 4 shows the labels predicted for the test samples in Fold 0 by 1NN-DTW, 1NN-LTW, LSTM, and the hybrid algorithm LSTM/LTW. Fig. 5 shows the accurately classified samples for each class and each algorithm. From Figs. 4 and 5 we can observe that:
- The proposed 1NN-LTW method performs similarly to 1NN-DTW, although 1NN-LTW accurately predicts more test samples. This is reasonable, as both are nearest-neighbour classifiers with similarly defined distance measurements.
- The proposed LSTM classifier shows a certain degree of difference from the two nearest-neighbour classifiers. For example, LSTM cannot predict any test samples from the SparkMLlibLR (class label 5) and SparkMLlibPCA (class label 8) classes, while both 1NN-DTW and 1NN-LTW can; however, LSTM performs better than the other algorithms on the SparkMLlibSVM (class label 6) and HadoopWordCount (class label 9) classes.
- The proposed LSTM/LTW classifier successfully combines the advantages of LSTM and 1NN-LTW. For example, on the SparkMLlibLR and HadoopWordCount classes, the hybrid algorithm achieves performance similar to the better of LSTM and 1NN-LTW.
- All the classifiers successfully classify the test samples of the web server class, which is reasonable, as the web server program is of a completely different kind from the other MapReduce programs.
The above results show the differences between 1NN-LTW and the LSTM classifier that make the hybrid algorithm work. Although 1NN-LTW and LSTM achieve similar accuracy, the sets of samples they classify accurately differ significantly. To make this clearer, we compute the union-accuracy of the two classifiers as follows:
$\mathrm{UnionAcc} = \frac{|S_{\mathrm{LSTM}} \cup S_{\mathrm{LTW}}|}{N}$ (17)
where $S_{\mathrm{LSTM}}$ and $S_{\mathrm{LTW}}$ are the sets of samples accurately classified by LSTM and 1NN-LTW respectively, and $N$ is the total number of test samples in the fold. The union-accuracies of the five test folds are shown in Table IX. It can be seen that the union-accuracy lies between 94% and 96%, which shows the potential of a hybrid algorithm combining the two classifiers. Note that the hybrid algorithm can only achieve accuracy smaller than the union-accuracy, as the union-accuracy is computed in an idealized manner.
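The union-accuracy can be computed directly from the two classifiers' predicted labels; a minimal sketch:

```python
def union_accuracy(pred_a, pred_b, truth):
    """Fraction of test samples classified correctly by at least one
    of the two classifiers: |S_A union S_B| / N (Eq. 17)."""
    correct_a = {i for i, (p, t) in enumerate(zip(pred_a, truth)) if p == t}
    correct_b = {i for i, (p, t) in enumerate(zip(pred_b, truth)) if p == t}
    return len(correct_a | correct_b) / len(truth)
```

This is an oracle quantity: it assumes we always know which classifier to trust per sample, so any realizable hybrid is upper-bounded by it.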
Table IX
Test case  Union-accuracy
Fold 0  0.9482 
Fold 1  0.9489 
Fold 2  0.9468 
Fold 3  0.9541 
Fold 4  0.9612 
V-D Discussion on the Parameter Settings
In this subsection we discuss the parameter settings of the above algorithms. First, we study the parameter of the LTW measurement, the warping index set G. The test results with different settings of G are shown in Table X. It can be seen that a proper setting is needed, as a set G that is too small deteriorates the performance. In our experiments we find that the performance of LTW is more stable with a larger set G; note, however, that increasing the size of G increases the computing cost.
Table X
Fold 0  0.8166  0.8591  0.8727  0.8846
Fold 1  0.8075  0.8552  0.8728  0.8904 
Fold 2  0.8138  0.8601  0.8684  0.8661 
Fold 3  0.8132  0.8668  0.8760  0.8802 
Fold 4  0.8103  0.8571  0.8878  0.8902 
Second, we discuss the parameter settings of the LSTM classifier. Tuning the hyperparameters of the LSTM network is critical: in our experiments, an improper setting can result in poor performance, with accuracy below 50%. Three hyperparameters are specially tuned in our experiments (detailed experimental results are omitted here): the batch size (set to 60), the dimension of the LSTM node (set to 90), and the discretized range parameter (set to 100). We also tested two implementation variations of LSTM: 1) adding a dropout layer, which is not helpful in our case; 2) stacking more than one LSTM layer, which is also not helpful.
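The "discretized range parameter" above suggests the power values are quantized into a fixed number of levels before being fed to the LSTM. A minimal sketch of such a preprocessing step, under our assumption of min-max quantization (the exact scheme used in the experiments is not specified here):

```python
def discretize(series, levels=100):
    """Map a real-valued power series onto `levels` integer bins via
    min-max scaling. Illustrative preprocessing only; the paper's
    actual discretization may differ."""
    lo, hi = min(series), max(series)
    if hi == lo:                       # constant series: single bin
        return [0] * len(series)
    scale = (levels - 1) / (hi - lo)
    return [int(round((v - lo) * scale)) for v in series]
```

With levels = 100 (the paper's setting), each power reading becomes an integer in [0, 99], a convenient input alphabet for the network.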
VI Conclusion and Future Work
In this research, we study the server power consumption series classification problem as a nonintrusive method for data centre energy monitoring. First, we propose a new time series distance measurement termed local time warping (LTW) and build a hybrid algorithm combining the 1-nearest-neighbour classifier with LTW and the LSTM neural network. The proposed LTW is designed as a lightweight time series measurement with local warping operations within a predefined warping index set, and it is designed to be non-commutative. LTW can be taken as a simplified version of DTW with only the warping operation (a series of "min" operations). LTW is shown to be better than DTW on our data set, and its non-commutative feature is shown to be beneficial. Moreover, a linear version of LTW performs almost as well as DTW on our data set. The proposed LTW shows that, for a particular time series classification problem, it is possible to achieve quite good classification accuracy with a lightweight time series distance measurement.
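Based on the description above (per-point "min" warping within a fixed warping index set, non-commutative, with a linear runtime), a linear-time LTW-like measurement might look as follows. This is our illustrative reading of the description, with hypothetical names, not the paper's exact formula:

```python
def ltw_like(x, y, G=(0, 1, 2)):
    """Illustrative linear-time local-warping distance: each x[i] is
    matched to the best y[i + g] over a fixed warping index set G.
    Our reading of the textual description of LTW, not the exact
    definition; assumes len(y) >= len(x) so each i has a candidate.
    Non-commutative by construction (x warps onto y, not vice versa)."""
    n = len(y)
    total = 0.0
    for i in range(len(x)):
        total += min(abs(x[i] - y[i + g]) for g in G if 0 <= i + g < n)
    return total
```

Unlike DTW's O(nm) dynamic program, this costs O(n|G|), which matches the "fast linear version" discussed in the experiments.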
Second, we apply the state-of-the-art sequential data modelling neural network, LSTM, to classify the power series. Our study shows that LSTM performs well, with accuracy similar to 1NN-LTW; however, the two algorithms have different natures, and the sets of samples they classify accurately differ significantly. On this basis, we propose a hybrid algorithm of the two classifiers, termed LSTM/LTW, which further improves the accuracy. The proposed hybrid algorithm achieves classification accuracy as high as 93% in our experiments.
For future work, one interesting problem is the case where the power series is generated by multiple programs and thus carries multiple labels. The problem is especially interesting when the test data are combinations of programs (such as a pair (A, B)) not seen in the training data; for example, the training data may only contain samples generated by pairs like (B, C) and (A, C), and the classifier should still be able to recognize the new pair (A, B). One can also try more sophisticated ensemble algorithms combining LSTM with other existing time series classification algorithms.
References
 [1] “America’s data centers consuming and wasting growing amounts of energy,” http://www.nrdc.org/energy/datacenterefficiencyassessment.asp, accessed: 2015-07-01.
 [2] L. A. Barroso and U. Holzle, “The case for energy-proportional computing,” Computer, vol. 40, no. 12, pp. 33–37, 2007.
 [3] J. Moore, J. Chase, K. Farkas, and P. Ranganathan, “Data center workload monitoring, analysis, and emulation,” in Eighth Workshop on Computer Architecture Evaluation using Commercial Workloads, 2005.
 [4] A. Fehske, J. Gaeddert, and J. H. Reed, “A new approach to signal classification using spectral correlation and neural networks,” in New Frontiers in Dynamic Spectrum Access Networks. IEEE, 2005, pp. 144–150.

 [5] S. S. Soliman and S.-Z. Hsue, “Signal classification using statistical moments,” IEEE Transactions on Communications, vol. 40, no. 5, pp. 908–916, 1992.
 [6] M. Reaz, M. Hussain, and F. Mohd-Yasin, “Techniques of EMG signal analysis: detection, processing, classification and applications,” Biological Procedures Online, vol. 8, no. 1, pp. 11–35, 2006.
 [7] A. Bagnall, A. Bostrom, J. Large, and J. Lines, “The great time series classification bake off: An experimental evaluation of recently proposed algorithms. extended version,” arXiv preprint arXiv:1602.01711, 2016.
 [8] A. Stefan, V. Athitsos, and G. Das, “The move-split-merge metric for time series,” IEEE Transactions on Knowledge and Data Engineering, vol. 25, no. 6, pp. 1425–1438, 2013.
 [9] G. E. Batista, X. Wang, and E. J. Keogh, “A complexity-invariant distance measure for time series,” in SDM, vol. 11. SIAM, 2011, pp. 699–710.
 [10] T. Rakthanmanon, B. Campana, A. Mueen, G. Batista, B. Westover, Q. Zhu, J. Zakaria, and E. Keogh, “Searching and mining trillions of time series subsequences under dynamic time warping,” in Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2012, pp. 262–270.
 [11] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
 [12] F. A. Gers, J. Schmidhuber, and F. Cummins, “Learning to forget: Continual prediction with LSTM,” Neural computation, vol. 12, no. 10, pp. 2451–2471, 2000.
 [13] X. Xi, E. Keogh, C. Shelton, L. Wei, and C. A. Ratanamahatana, “Fast time series classification using numerosity reduction,” in Proceedings of the 23rd international conference on Machine learning. ACM, 2006, pp. 1033–1040.
 [14] G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, vol. 313, no. 5786, pp. 504–507, 2006.
 [15] C. A. Ratanamahatana and E. Keogh, “Everything you know about dynamic time warping is wrong.” Citeseer, 2004.
 [16] S. Salvador and P. Chan, “Toward accurate dynamic time warping in linear time and space,” Intelligent Data Analysis, vol. 11, no. 5, pp. 561–580, 2007.
 [17] T. Rakthanmanon and E. Keogh, “Fast shapelets: A scalable algorithm for discovering time series shapelets,” in Proceedings of the 13th SIAM international conference on data mining. SIAM, 2013, pp. 668–676.
 [18] J. Lin, E. Keogh, L. Wei, and S. Lonardi, “Experiencing sax: a novel symbolic representation of time series,” Data Mining and knowledge discovery, vol. 15, no. 2, pp. 107–144, 2007.

 [19] H. Deng, G. Runger, E. Tuv, and M. Vladimir, “A time series forest for classification and feature extraction,” Information Sciences, vol. 239, pp. 142–153, 2013.
 [20] T. Mikolov, M. Karafiát, L. Burget, J. Cernockỳ, and S. Khudanpur, “Recurrent neural network based language model,” in INTERSPEECH, vol. 2, 2010, p. 3.

 [21] J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell, “Long-term recurrent convolutional networks for visual recognition and description,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 2625–2634.
 [22] J. Yue-Hei Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici, “Beyond short snippets: Deep networks for video classification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 4694–4702.
 [23] T.H. Wen, M. Gasic, N. Mrksic, P.H. Su, D. Vandyke, and S. Young, “Semantically conditioned LSTMbased natural language generation for spoken dialogue systems,” arXiv:1508.01745, 2015.
 [24] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg et al., “Scikitlearn: Machine learning in Python,” The Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
 [25] C. Burrus and T. W. Parks, DFT/FFT and Convolution Algorithms: theory and Implementation. John Wiley & Sons, Inc., 1991.

 [26] “LSTM networks for sentiment analysis,” http://deeplearning.net/tutorial/lstm.html, accessed: 2016-05-01.
 [27] D. W. Hosmer Jr and S. Lemeshow, Applied Logistic Regression. John Wiley & Sons, 2004.