Combining Static and Dynamic Features for Multivariate Sequence Classification

12/20/2017 ∙ by Anna Leontjeva, et al. ∙ University of Tartu

Model precision in a classification task is highly dependent on the feature space that is used to train the model. Moreover, whether the features are sequential or static dictates which classification methods can be applied, as most machine learning algorithms are designed to deal with one type of data or the other. In real-life scenarios, however, it is often the case that both static and dynamic features are present, or can be extracted from the data. In this work, we demonstrate how generative models such as Hidden Markov Models (HMM) and Long Short-Term Memory (LSTM) artificial neural networks can be used to extract temporal information from dynamic data. We explore how the extracted information can be combined with the static features in order to improve the classification performance. We evaluate the existing techniques and suggest a hybrid approach, which outperforms other methods on several public datasets.


I Introduction

When it comes to a classification task, it is common to think of two different feature categories. Sequential (time-series or dynamic) features represent each data sample by one or many features whose values change over time. Static features describe each sample by a set of features whose values are fixed for that sample and do not change in time. We will refer to these two categories as dynamic and static features, respectively.

For almost any sequential dataset it is possible to extract static features out of the temporal data. One example of such an approach is Fourier analysis of EEG signals [1], which transforms signals of arbitrary length into a fixed-size frequency domain. Moreover, in many real-life applications a dataset can already consist of instances that have features of both categories. For example, consider hospital data, where the age and gender of a patient are static features, while heartbeats recorded from electrodes over some period of time are dynamic features.

Despite the fact that both static and dynamic features may contribute to the classification [2], they are rarely used together. One of the reasons is that most machine learning methods are not suitable for processing static and dynamic data simultaneously. Discriminative algorithms such as Random Forest [3], Support Vector Machines (SVM) [4] and feed-forward neural networks [5] take static features as an input. For sequence classification, common methods are variations of Hidden Markov Models (HMM) [6], Dynamic Time Warping (DTW) [7] and Recurrent Neural Networks [8]. It is also possible to tackle sequential data by transforming sequences into feature vectors that can be fed into a discriminative method. Ensemble methods [9] provide another way to address the issue: predictions made by a temporal model on dynamic data are combined with the predictions of a discriminative classifier on static data.

In this paper, we investigate whether there is a better way for extracting useful information from both data modalities to improve the overall classification performance. We devise a data augmentation technique where static features are concatenated with the data representation provided by a dynamic model. We refer to such an approach as hybrid and show that the hybrid way of stacking models [10] is in general more beneficial than ensemble methods. We summarize our main contributions as follows:

  • We postulate that combining temporal and static information can boost classification performance.

  • We compare different ways to combine static and dynamic features and propose a hybrid approach that employs an unconventional way of concatenating features.

  • We empirically demonstrate that a hybrid method outperforms ensemble and other baseline methods on several public datasets.

  • We perform a controlled experiment on a synthetic dataset to investigate how dataset characteristics affect the baseline, ensemble and hybrid methods’ performance.

II Preliminaries

In this section, we introduce background information on the algorithms and techniques, which will serve as the building blocks for the more complex architectures.  

Random Forest (RF)

There is a vast number of discriminative algorithms available. For our experiments we chose Random Forest, as it provides close to state-of-the-art performance on many practical problems [11, 12]. We employ Random Forest in two different ways: as a stand-alone discriminative classifier, and as a final predictor that combines the lower-layer features in the ensemble and hybrid architectures. The Scikit-learn [13] implementation of Random Forest was used in our experiments.

Hidden Markov Models (HMM)

HMMs belong to the class of probabilistic graphical generative approaches, which means they can be used to generate samples from a joint distribution of observed and unobserved features. They have been the most frequently used technique for modeling sequence data since the 1980s [14]. Despite many existing variations of HMM, the most commonly used is the first-order Markov process with discrete hidden states: the probability of a given state depends only on the previous state, ignoring the rest. If the observed values are discrete, an HMM is described via three components: an initial distribution of hidden states, a matrix of transition probabilities between the states, and a matrix with an observation probability distribution for each of the hidden states, which is usually referred to as an emission matrix. If the observations are not discrete, Gaussian HMMs [15] are used for the parametrization, so that the emission probabilities are described using the means and covariance matrices of Gaussian distributions. We use the hmmlearn [16] implementation of Gaussian HMM.
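To make this concrete, here is a minimal sketch of fitting a Gaussian HMM with hmmlearn; the random data, the number of hidden states and the variable names are illustrative, not the settings used in the paper.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Illustrative data: 100 sequences, each with T=50 time steps and d=3 features.
rng = np.random.default_rng(0)
sequences = [rng.normal(size=(50, 3)) for _ in range(100)]

# hmmlearn expects all sequences stacked into one array plus a list of lengths.
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]

# Gaussian HMM with discrete hidden states and Gaussian emissions.
model = GaussianHMM(n_components=6, covariance_type="diag", n_iter=50)
model.fit(X, lengths)

# Log-likelihood of a single sequence under the trained model.
print(model.score(sequences[0]))
```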

Long Short-Term Memory (LSTM)

The LSTM network [17] is a member of the Recurrent Neural Network (RNN) family. It has gained popularity by showing state-of-the-art performance in many fields [18, 19, 20, 21]. The core improvement of LSTM over the vanilla RNN lies in replacing the usual artificial neuron with a more complicated structure, called an LSTM unit. This improvement allows the LSTM network to learn long-term dependencies in the data, making LSTM a natural fit for sequential data.

Fig. 1: LSTM network architecture for sequence generation. We predict the value at time step $t$ from time steps $1, \dots, t-1$; the output sequence is therefore the lagged version of the input sequence. The activations of the LSTM units at the last iteration (shown in bold) are used as features in the activation-based hybrid model (see Table I).

In order to capture the temporal dynamics in the data we train an LSTM network to predict each value in the sequence given all the previous ones. In the case of multivariate sequences, the input size of the network is equal to the number of dynamic features $d$; the number of LSTM units $n$ in the network is estimated separately for each dataset. We train a single layer of LSTM followed by a fully connected layer of size $d$ (see Figure 1 for the visual explanation).

There are many other possible LSTM architectures, but comparing them all is out of the scope of this paper. We chose the architecture described above for two main reasons: 1) in this formulation it is directly comparable with the HMM-based generative models, and 2) among the other architectures we tried, it provided the best or comparable results.

In all LSTM models we use the mean squared error (MSE) between the true test sequence and the generated sequence as the error function; RMSProp [22] acts as the optimization method. The Keras deep learning library [23] provides the implementation of LSTM.
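As an illustration, below is a minimal Keras sketch of such a generator: a single LSTM layer followed by a fully connected output layer, trained with MSE and RMSProp to predict the input sequence shifted by one step. The sizes and the random training data are placeholders, not the paper's settings.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense, TimeDistributed

d = 3          # number of dynamic features (dataset-dependent)
n_units = 128  # number of LSTM units, estimated per dataset

# A single LSTM layer followed by a fully connected layer of size d.
model = Sequential([
    LSTM(n_units, return_sequences=True, input_shape=(None, d)),
    TimeDistributed(Dense(d)),
])
model.compile(loss="mse", optimizer="rmsprop")

# The target sequence is the input sequence shifted by one time step,
# so the network learns to predict each value from the preceding ones.
X = np.random.randn(100, 50, d)
model.fit(X[:, :-1, :], X[:, 1:, :], epochs=10, batch_size=32)
```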

Time Spatialization

To be able to apply Random Forest to dynamic data we need to transform the raw data into an appropriate feature space. Let $d$ be the number of sequential features in a data sample, each of them of length $T$. In this case, every sample can be represented by a matrix of size $d \times T$:

$$X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1T} \\ x_{21} & x_{22} & \cdots & x_{2T} \\ \vdots & \vdots & \ddots & \vdots \\ x_{d1} & x_{d2} & \cdots & x_{dT} \end{pmatrix}$$

By flattening the matrix we obtain a feature vector of length $d \cdot T$, thus transforming a data sample from the time domain to the feature space:

$$v = (x_{11}, \dots, x_{1T}, x_{21}, \dots, x_{2T}, \dots, x_{d1}, \dots, x_{dT}).$$

Data represented in such a form can be used to train a Random Forest classifier [24].
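A minimal sketch of this transformation with NumPy and scikit-learn (shapes and names are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative dynamic data: 200 samples, d=4 sequential features of length T=30.
X_dynamic = np.random.randn(200, 4, 30)
y = np.random.randint(0, 2, size=200)

# Time spatialization: flatten each d x T matrix into a vector of length d*T.
X_flat = X_dynamic.reshape(len(X_dynamic), -1)

clf = RandomForestClassifier(n_estimators=500)
clf.fit(X_flat, y)
```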

Generative Models for Classification

Some of the approaches that we consider use a generative model (HMM or LSTM) as a discriminative classifier. We consider the binary classification task; however, it can easily be extended to multiple classes. First, we split the training set into two subsets: a subset of all samples with a positive class label and another subset of all samples with a negative class label. Next, we train two generative models: $M_{pos}$ on the positive samples and $M_{neg}$ on the negative samples. In order to classify a new sample $x$, we compute the log-likelihoods $L_{pos}(x)$ and $L_{neg}(x)$ of $x$ under each model, and afterwards use the binary decision rule

$$\hat{y}(x) = \begin{cases} \text{positive}, & \text{if } L_{pos}(x) \geq L_{neg}(x) \\ \text{negative}, & \text{otherwise} \end{cases}$$

to assign the class label to the sample $x$.

An important benefit of a generative model is the possibility to use sequences of varying length. Discriminative methods, such as Random Forest, are not applicable in such cases, making a temporal model the only suitable approach. In our experiments we use only datasets with sequences of fixed length, for compatibility reasons; it is straightforward to extend the code to sequences of varying length.
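The following sketch illustrates this classification scheme with two class-conditional HMMs; the data and the number of states are placeholders.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def fit_class_hmm(sequences, n_states=6):
    """Fit one Gaussian HMM on all sequences of a single class."""
    X = np.concatenate(sequences)
    lengths = [len(seq) for seq in sequences]
    return GaussianHMM(n_components=n_states, n_iter=50).fit(X, lengths)

# pos_train / neg_train are lists of (T, d) arrays, split by class label.
pos_train = [np.random.randn(50, 3) + 1.0 for _ in range(50)]
neg_train = [np.random.randn(50, 3) for _ in range(50)]
model_pos = fit_class_hmm(pos_train)
model_neg = fit_class_hmm(neg_train)

def classify(x):
    """Assign the class whose model gives the higher log-likelihood."""
    return int(model_pos.score(x) >= model_neg.score(x))

print(classify(np.random.randn(50, 3)))
```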

Log-likelihoods and Ratios as Features

Once we estimate the log-likelihoods $L_{pos}(x)$ and $L_{neg}(x)$ for a sample $x$, as described in the previous paragraph, we can use these estimates as features. This is the approach we take in the ensemble methods (models (8) and (9) in Table I). These two additional features indicate how likely it is that the sample was generated by the positive generative model, and how likely it is that it was generated by the negative one.

Another derivative feature is the ratio of those two measures. In the case of HMM it is the ratio of the two log-likelihoods, $L_{pos}(x) / L_{neg}(x)$, and for LSTM it is $MSE_{pos}(x) / MSE_{neg}(x)$, where $MSE_{pos}(x)$ and $MSE_{neg}(x)$ are the mean squared errors between the true output sequence and the sequences generated by the positive and negative models. Our experiments have shown that combining static features with ratios yields better performance than combining static features with raw log-probabilities or MSE scores.
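The two ratio features could be computed along the following lines, reusing the class-conditional HMMs and LSTM generators from the sketches above; the exact form of the ratios is our reading of the definitions in this section.

```python
import numpy as np

def hmm_ratio(x, model_pos, model_neg):
    """Ratio of the per-class log-likelihoods of the sample x."""
    return model_pos.score(x) / model_neg.score(x)

def lstm_ratio(x, net_pos, net_neg):
    """Ratio of the per-class mean squared errors between the true lagged
    sequence and the sequences generated by the two LSTM networks."""
    target = x[1:]  # x has shape (T, d); the target is x shifted by one step
    mse_pos = np.mean((target - net_pos.predict(x[None, :-1, :])[0]) ** 2)
    mse_neg = np.mean((target - net_neg.predict(x[None, :-1, :])[0]) ** 2)
    return mse_pos / mse_neg
```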

III Methods

In this section, we first describe the approaches that can be thought of as competitors to the hybrid models, then we explain our main contribution — the concept of a hybrid model and its versions.

Subsection III-A describes the methods that use only static or only dynamic features (unimodal) with a single discriminative algorithm. Subsection III-B goes a step further by using a feature space that includes both data modalities (bimodal), transformed to make the dataset suitable for the algorithm at hand. For example, if we apply Random Forest to sequential data we employ spatialization; if we need to apply a temporal model to static data we transform its feature values into "fake" sequences: if the value of a static feature of a sample is $c$, it will be represented by a sequence of length $T$ with the value $c$ at each time step (see the sketch below). Subsection III-C discusses the options for constructing feature spaces that combine static and dynamic features effectively, and describes the ensemble and hybrid methods applied to these feature spaces.
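The "fake" sequence transformation amounts to tiling each static value along the time axis; a short NumPy sketch with illustrative shapes:

```python
import numpy as np

def static_to_fake_sequences(X_static, T):
    """Tile each static value along the time axis: (n, s) -> (n, s, T)."""
    return np.repeat(X_static[:, :, None], T, axis=2)

X_static = np.random.randn(200, 5)  # 5 static features
fake = static_to_fake_sequences(X_static, T=30)
print(fake.shape)  # (200, 5, 30)
```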

III-A Stand-alone Models on Unimodal Data

In this section we describe the simplest baseline approaches: models that fit only one data modality, either static or dynamic.

Random Forest on Static or Dynamic Features

The most straightforward way to handle a multimodal dataset is to build a model on the static features only. In this work such an approach is referred to as model (1) (the number in brackets stands for the model ID we use throughout the text; see Table I). We denote the number of static features in a dataset as $s$.

If the dynamic data has sequences of a fixed length, then it is straightforward to apply Random Forest to this data as well: we use time spatialization to represent each time series as one long feature vector. One obvious drawback of such an approach is low performance due to the curse of dimensionality when sequences are extremely long [25]. We use this baseline to estimate whether the use of a temporal model is justified. Our intuition is that if Random Forest is able to achieve the same performance on the dynamic data as temporal models do, then that particular dynamic dataset does not have a strong temporal component. We denote this method RF on spatialized dynamic data and use it for comparison with temporal models such as HMM and LSTM.

Hidden Markov Models on Dynamic Features

This method is a direct application of HMM to sequential data: we use HMM as a classifier as described in the paragraph "Generative Models for Classification" of Section II.

Long Short-Term Memory on Dynamic Features

The LSTM counterpart acts as a classifier using the same technique. The architecture of both the positive-class and the negative-class networks is the one shown in Figure 1.

III-B Stand-alone Models on Bimodal Data

The second class of baseline approaches utilizes both static and dynamic data by concatenating them in such a way that a single classification method is applicable. This is the most naïve way of using both data modalities simultaneously, and, as it will be discussed later, has obvious limitations.  

Random Forest on Static and Dynamic Features

This method transforms the dynamic data to static via time spatialization, concatenates it with the original static features, and employs Random Forest on the resulting feature set.

Hidden Markov Model on Static and Dynamic

This method transforms the static features into "fake" sequences: for a dataset with $s$ static features and dynamic features of length $T$, it produces $s$ additional dynamic features of length $T$. All values along these sequences are constant and equal to the original value of the corresponding static feature. Using this trick we extend the dynamic feature set from $d$ features to $d + s$ features and apply HMM to it in the same way as in the dynamic-only case.

Long Short-Term Memory on Static and Dynamic Features

Using the same trick we obtain dynamic features from the static features, concatenate them with the original dynamic features, and train an LSTM classifier on the combined feature set. The learning algorithm itself is analogous to the dynamic-only LSTM classifier.

III-C Multiple Models on Bimodal Data

Nr. | Model | Feature set used
Stand-alone
1 | RF on static features | raw static features
2 | HMM on dynamic features | raw dynamic features
3 | LSTM on dynamic features | raw dynamic features
4 | RF on spatialized dynamic features | raw dynamic features (flattened)
5 | RF on static and dynamic features | raw static + raw dynamic features
6 | HMM on static ("fake" sequences) and dynamic features | raw static + raw dynamic features
7 | LSTM on static ("fake" sequences) and dynamic features | raw static + raw dynamic features
Ensemble
8 | Ensemble of HMM and RF | RF predictions + HMM log-likelihoods
9 | Ensemble of LSTM and RF | RF predictions + LSTM per-class MSEs
Hybrid
10 | Hybrid of static features and HMM ratios | raw static features + HMM log-likelihood ratio
11 | Hybrid of static features and LSTM ratios | raw static features + LSTM MSE ratio
12 | Hybrid of static features and LSTM activations | raw static features + LSTM activations

TABLE I: List of models and the feature sets they operate upon.

Dealing with bimodal data is the main focus of this work. In this section we look into different ways of combining static and dynamic features.  

III-C1 Ensemble Models

With an ensemble approach one can train different models for different data modalities (for example, Random Forest for static features and LSTM for dynamic features) and combine their predictions using a linear model or another layer of Random Forest (or any other discriminative method).

In this work we have two methods based on the ensemble approach: the HMM ensemble (model (8)), which takes the predictions made by Random Forest and the predictions made by the HMM classifier, and the LSTM ensemble (model (9)), which combines Random Forest predictions with the predictions of the LSTM classifier. In both of these models the final prediction is obtained by training an additional Random Forest model that uses the predictions as features.

Fig. 2: Architecture of an ensemble. The first half of the training set is used to create models according to the data modality: Random Forest for static data and a generative model for the dynamic data. These models are applied to the second half of the training set and to the test set to extract predictions and form a new feature space. Random Forest is trained on the enriched second half of the training set and evaluated on the enriched test set.
Ensemble of HMM and RF

The ensemble method has two stages. The first stage works with the first half of the training set: Random Forest is trained on the static features and an HMM classifier on the dynamic ones. In the second stage we take the samples from the second half of the training set and estimate class probabilities for each sample using the models trained in the first stage. In the case of a binary classification problem each sample is then represented by 4 features: the log-probabilities of the positive and negative class provided by Random Forest, and the log-likelihoods $L_{pos}(x)$ and $L_{neg}(x)$ provided by the HMMs. In a similar way we feed samples from the original test set into these models and obtain a test set with the same 4-dimensional feature space. Finally, we train Random Forest on the 4-dimensional training set and evaluate it on the corresponding test set. For the detailed explanation of the experimental pipeline see Section IV-A.
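A condensed sketch of this two-stage pipeline follows. The names X_s_* (static feature matrices), X_d_* (lists of sequences), y_* (labels) and model_pos / model_neg (class-conditional HMMs fitted on the first half, as in Section II) are our placeholders, not the paper's code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def ensemble_features(X_s, X_d, rf, model_pos, model_neg):
    """Four features per sample: RF log-probabilities of both classes,
    plus the log-likelihoods from the positive and negative HMMs."""
    log_proba = rf.predict_log_proba(X_s)                 # shape (n, 2)
    ll_pos = np.array([model_pos.score(x) for x in X_d])  # shape (n,)
    ll_neg = np.array([model_neg.score(x) for x in X_d])  # shape (n,)
    return np.column_stack([log_proba, ll_pos, ll_neg])

# Stage 1: first-tier models trained on the first half of the training set.
half = len(y_train) // 2
rf1 = RandomForestClassifier(n_estimators=500)
rf1.fit(X_s_train[:half], y_train[:half])

# Stage 2: enrich the second half and the test set, then train the final RF.
F_train = ensemble_features(X_s_train[half:], X_d_train[half:], rf1,
                            model_pos, model_neg)
F_test = ensemble_features(X_s_test, X_d_test, rf1, model_pos, model_neg)
rf2 = RandomForestClassifier(n_estimators=500).fit(F_train, y_train[half:])
print(rf2.score(F_test, y_test))
```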

Ensemble of LSTM and RF

The LSTM ensemble builds a new feature set in a way similar to the HMM ensemble. The first two features are the same as in the HMM case. The second two features are obtained with the positive and negative LSTM networks: an input sequence of a data sample $x$ is fed into each of the networks, and the corresponding output sequences are generated. The per-class mean squared errors $MSE_{pos}(x)$ and $MSE_{neg}(x)$ between the true output sequence and the generated sequences are used as the new features for the sample $x$. The number of such features is equal to the total number of classes.


III-C2 Hybrid Models

Dataset | Samples | Train set | Test set | Static features | Dynamic features | Sequence length | Source of benchmark
ECoG | 10584 | 5-fold CV | 5-fold CV | 320 | 64 | 300 | [26]
FordA | 4291 | 1320 | 3601 | 500 | 1 | 500 | [27]
FordB | 4446 | 810 | 3636 | 500 | 1 | 500 | [27]
Phalanges | 2658 | 1800 | 858 | 80 | 1 | 80 | [28]
Yoga | 3300 | 300 | 3000 | 426 | 1 | 426 | [24]

TABLE II: Descriptions of the real-life datasets.

The general idea of the hybrid approach is to employ generative models such as HMM or LSTM to act as feature extractors from dynamic data. As generative models are able to generate sequences from the training data distribution, it is reasonable to assume that these models can capture temporal dynamics in the data. Therefore, the features extracted using these models can act as an approximation for temporal information contained in the data. These features are concatenated with the static features and a discriminative classifier (Random Forest) is used to build the final predictor.

Since naïve ways of combining dynamic data with static features give poor performance (see Figure 6) we use the data representation provided by temporal models to obtain a fixed-size feature set that contains knowledge extracted from the temporal component of dynamic data.

There are different features that can be extracted from generative models, with one such example being the Fisher kernels [29]. In our experiments we use log-likelihood ratios, MSE ratios or LSTM activations as features for the enrichment of the static feature set. In the following subsections we go through the hybrid architectures we have explored.

Fig. 3: Architecture of the hybrid model. The first half of the training set is used to create a model which acts as a feature extractor. The feature extractor is applied to enrich the feature sets of the second half of the training set and of the test set with additional features. Random Forest is trained on the enriched second half of the training set to create the final classifier, which is evaluated on the enriched test set.
Hybrid of Static Features and HMM Ratios

In this method (model (10)) two generative HMM models are built on the first half of the training set: one for the samples with positive class labels and one for the samples with negative class labels. These models are used to enrich the static features of the second half of the training set, as well as the static features of the test set, with the log-likelihood ratios. Finally, Random Forest is trained on the enriched feature set and evaluated on the enriched test set.
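A sketch of the enrichment step, under the same naming assumptions as the ensemble sketch above (model_pos / model_neg are fitted on the first half of the training set only, so the enrichment does not leak information):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def enrich_with_hmm_ratio(X_s, X_d, model_pos, model_neg):
    """Concatenate static features with the HMM log-likelihood ratio."""
    ratio = np.array([model_pos.score(x) / model_neg.score(x) for x in X_d])
    return np.column_stack([X_s, ratio])

E_train = enrich_with_hmm_ratio(X_s_train[half:], X_d_train[half:],
                                model_pos, model_neg)
E_test = enrich_with_hmm_ratio(X_s_test, X_d_test, model_pos, model_neg)

rf = RandomForestClassifier(n_estimators=500).fit(E_train, y_train[half:])
print(rf.score(E_test, y_test))
```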

Hybrid of Static Features and LSTM Ratios

In a very similar fashion we can use LSTM to build the generative models. This method (model (11)) extracts MSE ratios and enriches the set of static features of the second half of the training set and of the test set. Random Forest is trained on the enriched training set and evaluated on the enriched test set.

Hybrid of Static Features and LSTM Activations

Depending on how much detail we want to give to the final classifier, we can choose which features to extract from a generative model. The log-likelihood ratios are among the most compressed forms of information about the data samples; less compressed features depend on the inner workings of a particular generative model. For an LSTM network we can take the activations of the last LSTM layer at the last iteration and use those activations to enrich the set of static features; the model that does this is model (12). The final step is the same as in the other hybrid architectures: Random Forest is trained on the enriched training set (in this case static features concatenated with LSTM activations) and evaluated on the test set. In our experiments, however, this approach is less accurate than the ones based on the log-likelihoods.
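With a Keras generator like the one sketched in Section II, the last-iteration activations can be read out through a sub-model that exposes the LSTM layer's output; a minimal sketch, assuming `model` is the trained generator from that earlier sketch:

```python
import numpy as np
from keras.models import Model

# Sub-model that returns the LSTM layer's activations at every time step.
activation_model = Model(inputs=model.input, outputs=model.layers[0].output)

def lstm_activation_features(X_d):
    """(n, T, d) sequences -> (n, n_units) activations at the last time step."""
    activations = activation_model.predict(X_d)  # shape (n, T, n_units)
    return activations[:, -1, :]

# These activations are then concatenated with the static features
# before training the final Random Forest, as in the other hybrids.
```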

In Table I we summarize all the models and mark the feature sets used by each model. Note that in the case of univariate datasets some of the models are virtually the same: model (5) is the same as model (1), model (7) is the same as model (3), and model (6) is the same as model (2). The reason is that for univariate datasets we use the spatialized dynamic data as static features.

IV Evaluation

In this section we explain the experimental setup and the model performance estimation process.

IV-A Experimental Pipeline

In the case of hybrid methods the splitting strategy and feature handling are not straightforward. In this subsection we explain the full pipeline both in the case of a training-test split and in the case of cross-validation.  

Training / test split

Splitting the data entails non-trivial steps in order to perform feature enrichment without introducing bias into the models. Methods (1)–(7), which do not make use of feature enrichment, employ the standard machine learning pipeline: a training set is divided into training and validation subsets, model hyperparameters are estimated on the validation set, and the final model is trained on the whole training data and tested on the test data.

In the case of ensembles and hybrids, however, the pipeline differs. First, the training set is divided into two equal halves. The first half is used to train the first tier of models, which we call feature extractors: given a data sample, their purpose is to output additional features that describe that sample. In the case of ensembles these features are the log-likelihoods of class labels, as discussed in Subsection III-C1; for hybrids the extracted features can take any of the forms discussed in Subsection III-C2. Hyperparameters of the feature extractor models are estimated on a validation subset of the first half.

Once the feature extractors are fully trained, we use them to enrich the second half of the training set and the test set: the dynamic data of each sample from those sets is fed into a feature extractor, and the extracted features are concatenated with the static features of that data sample. For example, in the case of the activation-based hybrid, if we had a dataset with $d$ dynamic features and $s$ static features, and we used an LSTM network with 128 LSTM units as the feature extractor, then the new feature space would be of size $s + 128$.

The final step of the pipeline is to train a second tier, a discriminative model (Random Forest), on the new feature space. Hyperparameters are estimated on the validation subset of the enriched second half; after that the final model is trained on the whole enriched second half and tested on the enriched test set.

The complete process is depicted in Figures 2 and 3.  

Cross-validation

In the case of proposed hybrid architecture, cross-validation allows for a more efficient use of data.

Instead of dividing the data set into two parts as was done for the training/test split case, we split the data set into $k$ chunks, where $k$ is the number of cross-validation folds. We train a feature extractor model using the data samples from $k - 1$ chunks and apply that model to enrich the samples from the remaining chunk. This process is repeated for every chunk and, as a result, the whole dataset is enriched without introducing overfitting bias.

The next step is to train a second-tier model (Random Forest) on the enriched data. This is once again done using cross-validation, on exactly the same chunks of data as before. The process is no different from the classical application of cross-validation: a model is trained on $k - 1$ chunks and evaluated on the remaining chunk. The reported accuracy is the average accuracy over the $k$ folds.
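Schematically, the out-of-fold enrichment could look as follows; fit_extractor and extract are hypothetical callbacks standing in for the HMM/LSTM training and ratio computation described earlier, and a single extracted feature per sample is assumed.

```python
import numpy as np
from sklearn.model_selection import KFold

def out_of_fold_enrich(X_s, X_d, y, fit_extractor, extract, k=5):
    """Enrich every sample using feature extractors trained on the other
    k-1 chunks, so no sample is enriched by a model that has seen it."""
    enriched = np.zeros((len(y), X_s.shape[1] + 1))
    for train_idx, rest_idx in KFold(n_splits=k).split(X_s):
        extractor = fit_extractor([X_d[i] for i in train_idx], y[train_idx])
        feats = np.array([extract(extractor, X_d[i]) for i in rest_idx])
        enriched[rest_idx] = np.column_stack([X_s[rest_idx], feats])
    return enriched
```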

Hyperparameter Optimization

In order to find the best hyperparameter combination for a model, we apply Spearmint [30] to search through the parameter space. The method behind the tool is Bayesian optimization and it has been shown to be able to find hyperparameters that yield performance equal or superior to that achieved using other hyperparameter optimization techniques. In our experiments every dataset has its own set of parameters, see Table III for the details.

Dataset | LSTM size | Dropout | Optimizer | Batch size | Epochs | HMM states | HMM iterations | RF trees
ECoG | 2000 | 0.5 | rmsprop | 32 | 50 | 6 | 50 | 500
FordA | 512 | 0.0 | rmsprop | 1 | 20 | 2 | 50 | 500
FordB | 512 | 0.0 | rmsprop | 1 | 20 | 2 | 50 | 500
Phalanges | 128 | 0.0 | rmsprop | 1 | 10 | 2 | 50 | 500
Yoga | 256 | 0.0 | rmsprop | 1 | 10 | 2 | 50 | 500

TABLE III: Estimated hyperparameters: LSTM size, dropout, optimization method, batch size and number of epochs; number of HMM states and iterations; number of RF trees.

IV-B Datasets

We compare the described approaches on several datasets from different domains, as well as on simulated data. In this section we describe the datasets and their properties.

Synthetic ARMA Dataset

In order to be able to compare results against a known ground truth and to form an intuition about how much information can be extracted from each type of feature, we generated a synthetic dataset with specific properties. Namely, one of the questions posed in this work is whether combining static and dynamic features can boost the overall performance on a given dataset. We model the required conditions by splitting the data into four blocks in the way explained in Table IV. Block 1 has samples with positive labels whose static features are generated from the positive-class distribution, while the dynamic features are generated from the negative-class process. In block 2 the situation is reversed. The two last blocks have correct labels for both parts. Therefore, models that do not use information from both sources should be at a disadvantage: a model relying on a single modality misclassifies one of the first two blocks. Indeed, models (1)–(4) cannot achieve an accuracy of more than 75% (with equally sized blocks), as can be seen in Figure 7.

Block | Classifiability | Label | Static | Dynamic
1 | Classifiable by the discriminative model as T, but the generative model will confuse it for F | T | T | F
2 | Classifiable by the generative model as T, but the discriminative model will confuse it for F | T | F | T
3, 4 | Classifiable by both as F | F | F | F

TABLE IV: The synthetic dataset is designed in a specific way. Each block contains static and dynamic data; however, the dynamic data in block 1 and the static data in block 2 are useless, making it impossible for a model that operates on only one data modality to classify the whole dataset correctly. The Static and Dynamic columns indicate which class distribution each modality is generated from.

The dynamic features are simulated with an ARMA process [31], where the orders of the autoregressive (AR) and moving average (MA) parts are drawn from a uniform distribution and the coefficients of the AR and MA parts differ between the two classes. All values of the static features in the synthetic dataset are drawn from Gaussian distributions whose means and standard deviations depend on the class.
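For illustration, one way to generate such series with statsmodels is sketched below; the order range and the coefficient distribution are placeholders rather than the paper's exact settings.

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

rng = np.random.default_rng(0)

def random_arma_series(T, max_order=3):
    """Draw AR/MA orders uniformly and generate one ARMA series of length T."""
    p, q = rng.integers(1, max_order + 1, size=2)
    ar = np.r_[1, -rng.uniform(-0.5, 0.5, size=p)]  # AR polynomial (lag form)
    ma = np.r_[1, rng.uniform(-0.5, 0.5, size=q)]   # MA polynomial
    return ArmaProcess(ar, ma).generate_sample(nsample=T)

series = random_arma_series(T=100)
```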

For illustration purposes, Figure 4 depicts eight randomly chosen time series from the created synthetic dataset. The difference between the classes is not visually obvious; the dataset is therefore not overly simple as a classification task.

Fig. 4: Example of generated synthetic timeseries.
Fig. 5: Performance of the 12 models on the synthetic dataset with varying parameters. The ratio between the number of dynamic features and the total number of features is plotted on the horizontal axis. The vertical axis corresponds to the size of the dataset on the left plot and to the length of the sequence on the right plot.
Real-life datasets

We use datasets with different properties: a few univariate time-series datasets widely used in the literature (FordA and FordB [28]), and a multivariate dataset from a particular domain, the classification of electrocorticography (ECoG) recordings from BCI Competition III [32]. We also show the results on the Phalanges and Yoga datasets [28], where the baseline methods perform as well as the ensemble and hybrid approaches, and we discuss why this is the case. For the characteristics of the chosen datasets the reader is referred to Table II.

Benchmarks from the literature exist for all of the datasets except the ECoG dataset. To the best of our knowledge we compare our scores with the highest reported results and follow the same data splitting strategies; namely, we report results on train and test sets of the same sizes as used in the literature. The sources of the benchmarks are provided in Table II. It is worth mentioning that the best result for the ECoG dataset during the BCI Competition [26] was achieved with elaborate hand-crafted feature extraction methods, such as a combination of bandpower, CSSD/Waveform Mean and Fisher Discriminant Analysis. Since we do not have access to the features they used, we limit ourselves to a classical Fourier analysis of the ECoG signal. Due to the differences in preprocessing, a fair comparison cannot easily be drawn. For the ECoG dataset we apply 5-fold cross-validation and report the mean accuracy over the folds.

IV-C Results

In this section we report accuracies achieved by all of the approaches on synthetic and real-life datasets and discuss our findings.  

Fig. 6: Performance on the real-life datasets.
Insights from synthetic ARMA Dataset

As can be seen from the results on the synthetic dataset (Figure 7), Random Forest is far from good when dealing with dynamic data, which confirms the intuition that models designed to work with dynamic data have merit. Stand-alone models on bimodal data (models (5)–(7), see Section III-B) fail to capture the information from both sources. This observation also generally holds for the real-life datasets (see Figure 6). Both the ensemble and hybrid methods achieve almost perfect accuracy, and thus are good at extracting information from temporal and static sources simultaneously.

Fig. 7: Performance on the synthetic dataset. The vertical dashed lines show the maximal achievable level of performance.

Next, we investigate various dataset characteristics with respect to the methods' performance by generating datasets with varying size, sequence length, and numbers of static and dynamic features. The resulting heatmap is shown in Figure 5. Note that the ratio on the horizontal axis is represented in discrete form and the intervals are not equal due to the limited variation of the parameters.

There are a few observations that may be of interest for practitioners:

  • Both the ensemble and the hybrid HMM (models (8) and (10)) show superior performance on longer sequences and when the numbers of dynamic and static features are balanced.

  • The hybrid and ensemble models based on LSTM (models (9) and (11)) are less affected by the length of the sequence and by the ratio between static and dynamic features. They perform very well across the whole range of those parameters, except for very short sequences.

  • All methods show lower performance on datasets with very short sequences and on datasets highly imbalanced towards dynamic features. The decrease becomes visible once dynamic features constitute the large majority of the feature set.

  • Noteworthy is also the overall similarity between the patterns of the ensembles and the hybrids, though the latter have slightly higher accuracy.

  • The hybrid model with LSTM activations (model (12)) seems to perform well only for larger dataset sizes, longer sequences and only a few dynamic features.

Results on real-life datasets

The results across all real-life datasets are shown in Figure 6. In general, every dataset has its own specifics and thus no single method performs best on all of them. For example, on the ECoG dataset one of the stand-alone bimodal models is almost as good as the hybrid methods, in spite of showing poor performance on all the other datasets. On the Yoga and Phalanges datasets, Random Forest on the static features performs as well as the hybrid and ensemble methods. One possible reason why on some datasets the combination of dynamic and static features does not give higher performance is the absence of temporal dynamics in the data; in such a case the use of temporal models is irrelevant. The degree of the temporal component can be estimated by comparing the stand-alone temporal models (HMM and LSTM on dynamic data) with Random Forest on the spatialized dynamic data. If Random Forest performs better than or similarly to HMM and LSTM on the dynamic data, then the temporal aspect is not present (or the temporal models fail to grasp it).

The hybrid models improved on the best results from the literature on the FordA and FordB datasets. On the ECoG dataset both the HMM- and LSTM-based hybrids outperformed the other methods. Moreover, although the hybrid methods do not beat the accuracies from the literature on the Phalanges and Yoga datasets, across all of the datasets they are either the best or very close to the best results.

It is also interesting to note that the observations obtained from the analysis of the synthetic data are in line with the performance on the real datasets. Namely, if a dataset has fewer samples or rather short sequences (as in the case of the Yoga and Phalanges datasets), then the hybrid approach does not provide a performance boost. However, if a dataset is large and has long sequences, as in the case of ECoG, FordA and FordB (see Table II for the dataset characteristics), then the hybrids outperform the other methods.

V Conclusion

We propose and explore an unorthodox way of combining dynamic and static features that makes it possible for a classification model to capture temporal dynamics and static information simultaneously. Previous approaches to this problem relied on ensemble methods, where different models operate on different data modalities and their predictions are combined. The hybrid approach we propose goes to a lower level and combines the models not at the level of predictions, but earlier: it concatenates the static features with predictions of generative models, with ratios of class probabilities, or, when possible, with the inner representation of the data provided by a generative model. We demonstrate that this approach outperforms the alternatives and report results on several public datasets. Additionally, we explore the behaviour of 12 different models on synthetic data and describe how performance depends on such properties of a dataset as the number of samples, the number of static and dynamic features, and the length of the sequences, providing guidelines for practitioners.

Acknowledgements

We would like to thank Marlon Dumas, Raul Vicente, Tambet Matiisen, Elena Sügis and Dmytro Fishman for their comments on the manuscript. This research was supported by ERDF via the STACC Competence Centre and the Estonian Research Council via the grant PUT438. All experiments were carried out in the High Performance Computing Center of University of Tartu.

References

  • [1] J. R. Knott, F. A. Gibbs, and C. E. Henry, "Fourier transforms of the electroencephalogram during sleep," Journal of Experimental Psychology, vol. 31, no. 6, p. 465, 1942.
  • [2] G. Mesnil, T. Mikolov, M. Ranzato, and Y. Bengio, "Ensemble of generative and discriminative techniques for sentiment analysis of movie reviews," arXiv preprint arXiv:1412.5335, 2014.
  • [3] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
  • [4] B. Schölkopf and A. Smola, "Support vector machines," Encyclopedia of Biostatistics, 1998.
  • [5] K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, vol. 2, no. 5, pp. 359–366, 1989.
  • [6] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, no. 2, pp. 257–286, 1989.
  • [7] H. Ding, G. Trajcevski, P. Scheuermann, X. Wang, and E. Keogh, "Querying and mining of time series data: experimental comparison of representations and distance measures," Proceedings of the VLDB Endowment, vol. 1, no. 2, pp. 1542–1552, 2008.
  • [8] S. Hochreiter, "Untersuchungen zu dynamischen neuronalen Netzen," Master's thesis, Institut für Informatik, Technische Universität München, 1991.
  • [9] T. G. Dietterich, "Ensemble methods in machine learning," in Multiple Classifier Systems. Springer, 2000, pp. 1–15.
  • [10] D. H. Wolpert, "Stacked generalization," Neural Networks, vol. 5, no. 2, pp. 241–259, 1992.
  • [11] E. Scornet, G. Biau, and J.-P. Vert, "Consistency of random forests," arXiv preprint arXiv:1405.2881, 2014.
  • [12] M. Denil, D. Matheson, and N. De Freitas, "Narrowing the gap: Random forests in theory and in practice," arXiv preprint arXiv:1310.1415, 2013.
  • [13] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, "Scikit-learn: Machine learning in Python," Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
  • [14] I. Goodfellow, Y. Bengio, and A. Courville, "Deep learning," 2016, book in preparation for MIT Press. [Online]. Available: http://goodfeli.github.io/dlbook
  • [15] D. Reynolds, "Gaussian mixture models," Encyclopedia of Biometrics, pp. 827–832, 2015.
  • [16] S. Lebedev et al., "hmmlearn," https://github.com/hmmlearn/hmmlearn, 2015.
  • [17] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
  • [18] A. Graves, A.-R. Mohamed, and G. Hinton, "Speech recognition with deep recurrent neural networks," in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2013, pp. 6645–6649.
  • [19] I. Sutskever, O. Vinyals, and Q. V. Le, "Sequence to sequence learning with neural networks," in Advances in Neural Information Processing Systems, 2014, pp. 3104–3112.
  • [20] N. Srivastava, E. Mansimov, and R. Salakhutdinov, "Unsupervised learning of video representations using LSTMs," arXiv preprint arXiv:1502.04681, 2015.
  • [21] M. Längkvist, L. Karlsson, and A. Loutfi, "A review of unsupervised feature learning and deep learning for time-series modeling," Pattern Recognition Letters, vol. 42, pp. 11–24, 2014.
  • [22] Y. N. Dauphin, H. de Vries, J. Chung, and Y. Bengio, "RMSProp and equilibrated adaptive learning rates for non-convex optimization," arXiv preprint arXiv:1502.04390, 2015.
  • [23] F. Chollet et al., "Keras," https://github.com/fchollet/keras, 2016.
  • [24] M. G. Baydogan and G. Runger, "Learning a symbolic representation for multivariate time series classification," Data Mining and Knowledge Discovery, vol. 29, no. 2, pp. 400–422, 2015.
  • [25] E. Keogh and A. Mueen, "Curse of dimensionality," in Encyclopedia of Machine Learning. Springer, 2011, pp. 257–258.
  • [26] M. Schröder, T. Hinterberger, T. Navin Lal, G. Widman, and N. Birbaumer, "BCI III Competition Results," http://www.bbci.de/competition/iii/results/index.html, 2005, [Online; accessed 11-February-2016].
  • [27] A. Bagnall, L. M. Davis, J. Hills, and J. Lines, "Transformation based ensembles for time series classification," in SDM, vol. 12. SIAM, 2012, pp. 307–318.
  • [28] Y. Chen, E. Keogh, B. Hu, N. Begum, A. Bagnall, A. Mueen, and G. Batista, "The UCR time series classification archive," July 2015, http://www.cs.ucr.edu/~eamonn/time_series_data.
  • [29] T. S. Jaakkola, D. Haussler et al., "Exploiting generative models in discriminative classifiers," Advances in Neural Information Processing Systems, pp. 487–493, 1999.
  • [30] J. Snoek, H. Larochelle, and R. P. Adams, "Practical Bayesian optimization of machine learning algorithms," in Advances in Neural Information Processing Systems, 2012, pp. 2951–2959.
  • [31] J. D. Hamilton, Time Series Analysis. Princeton University Press, 1994, vol. 2.
  • [32] T. N. Lal, T. Hinterberger, G. Widman, M. Schröder, N. J. Hill, W. Rosenstiel, C. E. Elger, N. Birbaumer, and B. Schölkopf, "Methods towards invasive human brain computer interfaces," in Advances in Neural Information Processing Systems, 2004, pp. 737–744.

Appendix A Source Code

The implementation of all the methods described in the paper and the code for the exploratory analysis is available at the public repository at https://github.com/annitrolla/Generative-Models-in-Classification.