Time Series Cluster Kernel for Learning Similarities between Multivariate Time Series with Missing Data

04/03/2017 ∙ by Karl Øyvind Mikalsen, et al. ∙ University of Tromsø, The Arctic University of Norway

Similarity-based approaches represent a promising direction for time series analysis. However, many such methods rely on parameter tuning, and some have shortcomings if the time series are multivariate (MTS), due to dependencies between attributes, or the time series contain missing data. In this paper, we address these challenges within the powerful context of kernel methods by proposing the robust time series cluster kernel (TCK). The approach taken leverages the missing data handling properties of Gaussian mixture models (GMM) augmented with informative prior distributions. An ensemble learning approach is exploited to ensure robustness to parameters by combining the clustering results of many GMM to form the final kernel. We evaluate the TCK on synthetic and real data and compare to other state-of-the-art techniques. The experimental results demonstrate that the TCK is robust to parameter choices, provides competitive results for MTS without missing data and outstanding results for missing data.


1 Introduction

Time series analysis is an important and mature research topic, especially in the context of univariate time series (UTS) prediction BoxJenkins ; Chatfield ; CryerChan ; shumway . The field tackles real world problems in many different areas such as energy consumption iglesias2013analysis , climate studies ji2013dynamic , biology 10.1371/journal.pone.0084955 , medicine Hayrinen2008 ; soguero2015data ; Soguero2016predicting and finance hsu2014clustering . However, the need for analysis of multivariate time series (MTS) tsay2013multivariate is growing in modern society as data is increasingly collected simultaneously from multiple sources over time, often plagued by severe missing data problems AnovaHazanZeevi ; BashirWei . These challenges complicate analysis considerably, and represent open directions in time series analysis research. The purpose of this paper is to answer such challenges, which will be achieved within the context of the powerful kernel methods scholkopf2001learning ; shawe2004kernel for reasons that will be discussed below.

Time series analysis approaches can be broadly categorized into two families: (i) representation methods, which provide high-level features for representing properties of the time series at hand, and (ii) similarity measures, which yield a meaningful similarity between different time series for further analysis Wang:2013 ; Aghabozorgi201516 .

Classic representation methods are for instance Fourier transforms, wavelets, singular value decomposition, symbolic aggregate approximation, and piecewise aggregate approximation Faloutsos:1994:FSM:191843.191925 ; chan1999efficient ; korn1997efficiently ; lin2007experiencing ; keogh2001dimensionality . Time series may also be represented through the parameters of model-based methods such as Gaussian mixture models (GMM) Marlin:2012:UPD:2110363.2110408 ; bashir2005automatic ; bashir2007object , Markov models and hidden Markov models (HMMs) Ramoni:2002:BCD:584647.584652 ; panuccio2002hidden ; knab2003model , time series bitmaps kumar2005time and variants of ARIMA Corduas20081860 ; xiong2002mixtures . An advantage of parametric models is that they extend naturally to the multivariate case. For detailed overviews of representation methods, we refer the interested reader to Wang:2013 ; Aghabozorgi201516 ; fu2011review .

Of particular interest to this paper are similarity-based approaches. Once defined, such similarities between pairs of time series may be utilized in a wide range of applications, such as classification, clustering, and anomaly detection han2011data . Time series similarity measures include for example dynamic time warping (DTW) Berndt:1994:UDT:3000850.3000887 , the longest common subsequence (LCSS) vlachos2003indexing , the extended Frobenius norm (Eros) yang2007efficient , and the edit distance with real sequences (EDR) Chen:2005:RFS:1066157.1066213 , and represent state-of-the-art performance in UTS prediction Wang:2013 . However, many of these measures cannot straightforwardly be extended to MTS in a way that takes relations between different attributes into account banko2012correlation . The learned pattern similarity (LPS) baydogan2016time is an exception: it identifies segment occurrences within the time series and generalizes naturally to MTS by means of regression trees, from which a bag-of-words type compressed representation is created and, in turn, used to compute the similarity.

A similarity measure that is also positive semi-definite (psd) is a kernel shawe2004kernel . Kernel methods shawe2004kernel ; Jenssen2010 ; Jenssen2013 have dominated machine learning and pattern recognition for over two decades and have been very successful in many fields scholkopf2004kernelC ; camps2009kernel ; SogueroJBHI . A main reason for this success is the well-understood theory behind such methods, wherein nonlinear data structures can be handled via an implicit or explicit mapping to a reproducing kernel Hilbert space (RKHS) scholkopf2001generalized ; berlinet2011 defined by the choice of kernel. Prominent examples of kernel methods include the support vector machine (SVM) steinwart2008support and kernel principal component analysis (kPCA) scholkopf1997kernel .

However, many similarities (or equivalently dissimilarities) are non-metric as they do not satisfy the triangle inequality; in addition, most of them are not psd and are therefore not suited for kernel methods haasdonk2004learning ; marteau2015recursive . Attempts have been made to design kernels from non-metric distances such as DTW, of which the global alignment kernel (GAK) is an example cuturi2011fast . There are also promising works on deriving kernels from parametric models, such as the probability product kernel jebara2004probability , the Fisher kernel jaakkola1999using , and reservoir-based kernels chen2013model . Common to all these methods, however, is a strong dependence on correct hyperparameter tuning, which is difficult to obtain in an unsupervised setting. Moreover, many of these methods cannot naturally be extended to deal with MTS, as they only capture the similarities between individual attributes and do not model the dependencies between multiple attributes banko2012correlation . Equally important, these methods are not designed to handle missing data, an important limitation in many existing scenarios, such as clinical data where MTS originating from Electronic Health Records (EHRs) often contain missing data Hayrinen2008 ; soguero2015data ; Soguero2016predicting ; liu2016learning .

In this work, we propose a new kernel for computing similarities between MTS that is able to handle missing data without having to resort to imputation methods donders2006review . We denote this new measure as the time series cluster kernel (TCK). Importantly, the novel kernel is robust and designed in an unsupervised manner, in the sense that no critical hyperparameter choices have to be made by the user. The approach taken is to leverage the missing data handling properties of GMM, following the idea of Marlin:2012:UPD:2110363.2110408 , where robustness to sparsely sampled data is ensured by extending the GMM with informative prior distributions. However, we do not fit a single parametric model, but rather exploit an ensemble learning approach Dietterich2000 wherein robustness to hyperparameters is ensured by joining the clustering results of many GMM to form the final kernel. This is to some degree analogous to the approaches taken in cuturi2011autoregressive and IzquierdoVerdiguier20151299 . More specifically, each GMM is initialized with a different number of mixture components and random initial conditions, and is fit to a randomly chosen subsample of the data, attributes and time segment, through an embarrassingly parallel procedure. This also increases the robustness against noise. The posterior assignments provided by each model are combined to form a kernel matrix, i.e. a psd similarity matrix. This opens the door to clustering, classification, etc., of MTS within the framework of kernel methods, benefiting from the vast body of work in that field. The procedure is summarized in Fig. 1.

Figure 1: Schematic depiction of the procedure used to compute the TCK.

In the experimental section we illustrate some of the potential of the TCK by applying it to classification, clustering, dimensionality reduction and visualization tasks. In addition to the widely used DTW, we compare to GAK and LPS. The latter inherits the decision-tree approach to handling missing data, is similar in spirit to the TCK in the sense of being based on an ensemble strategy baydogan2016time , and is considered the state of the art for MTS. As an additional contribution, we show in A that the LPS is in fact a kernel itself, a result that to the authors' best knowledge has not been proven before. The experimental results demonstrate that TCK is very robust to hyperparameter choices, provides competitive results for MTS without missing data and outstanding results for MTS with missing data. We believe this provides a useful tool across a variety of applied domains in MTS analysis where missing data may be problematic.

The remainder of the paper is organized as follows. In Section 2 we present related works, whereas in Section 3, we give the background needed for building the proposed method. In Section 4 we provide the details of the TCK, whereas in Section 5 we evaluate it on synthetic and real data and compare to LPS and DTW. Section 6 contains conclusions and future work.

2 Related work

While several (dis)similarity measures have been defined over the years to compare time series, many of those measures are not psd and hence not suitable for kernel approaches. In this section we review some of the main kernel functions that have been proposed for time series data.

The simplest possible approach is to treat the time series as vectors and apply well-known kernels such as a linear or radial basis kernel scholkopf2001learning . While this approach works well in some circumstances, time dependencies and the relationships among multiple attributes in the MTS are not explicitly modeled.

DTW Berndt:1994:UDT:3000850.3000887 is one of the most commonly used similarity measures for UTS and has become the state-of-the-art in many practical applications Cai2015181 ; ratanamahatana2004everything ; Lines . Several formulations have been proposed to extend DTW to the multidimensional setting shokoohi2015generalizing ; banko2012correlation . Since DTW does not satisfy the triangle inequality, it is not negative definite and, therefore, one cannot obtain a psd kernel by applying an exponential function to it berg1984harmonic . Such an indefinite kernel may lead to a non-convex optimization problem (e.g., in an SVM), which hinders the applicability of the model haasdonk2004learning . Several approaches have been proposed to limit this drawback at the cost of more complex and costly computations. In wu2005learning ; Chen:2009:SCC:1577069.1577096 ad hoc spectral transformations were employed to obtain a psd matrix. Cuturi et al. cuturi2011fast designed a DTW-based kernel using global alignments (GAK). Marteau and Gibet proposed an approach that combines DTW and edit distances with a recursive regularizing term marteau2015recursive .

Conversely, there exists a class of (probabilistic) kernels operating on the configurations of a given parametric model, where the idea is to leverage the way distributions capture similarity. For instance, the Fisher kernel assumes an underlying generative model to explain all observed data jaakkola1999using . The Fisher kernel maps each time series x into a feature vector φ_x, the gradient of the log-likelihood of the generative model fit on the dataset. The kernel is defined as K(x_i, x_j) = φ_{x_i}^T F^{-1} φ_{x_j}, where F is the Fisher information matrix. Another example is the probability product kernel jebara2004probability , which is evaluated by means of the Bhattacharyya distance in the probability space. A further representative is the marginalized kernel tsuda2002marginalized , designed to deal with objects generated from latent variable models. Given two visible variables, x and x', and two hidden variables, h and h', a joint kernel K_z is first defined over the combined variables z = (x, h) and z' = (x', h'). Then, a marginalized kernel for visible data is derived from the expectation with respect to the hidden variables,

K(x, x') = Σ_h Σ_{h'} p(h | x) p(h' | x') K_z(z, z').

The posterior distributions are in general unknown and are estimated by fitting a parametric model to the data.

In several cases, the assumption of a single parametric model underlying all the data may be too strong. Additionally, finding the most suitable parametric model is a crucial and often difficult task, which must be repeated every time a new dataset is processed. This issue is addressed by the autoregressive kernel cuturi2011autoregressive , which evaluates the similarity of two time series on the corresponding likelihood profiles of a vector autoregressive model of a given order, across all possible parameter settings, controlled by a prior. The kernel is then evaluated as the dot product in the parameter space of such profiles, used as sequence representations. The reservoir-based kernels chen2013model map the time series into a high-dimensional, dynamical feature space, where a linear readout is trained to discriminate each signal. These kernels fit reservoir models sharing the same fixed reservoir topology to all time series. Since the reservoir provides a rich pool of dynamical features, it is considered to be "generic" and, contrary to kernels based on a single parametric model, it is able to represent a wide variety of dynamics across different datasets.

The methodology we propose is related to this last class of kernels. In order to create the TCK, we fuse the framework of representing time series via parametric models with similarity and kernel based methods. More specifically, the TCK leverages an ensemble of multiple models that, while they share the same parametric form, are trained on different subsets of the data, each time with different, randomly chosen initial conditions.

3 Background

In this section we provide a brief background on kernels, introduce the notation adopted in the remainder of the paper and provide the frameworks that our method builds on. More specifically, we introduce the diagonal covariance GMM for MTS with missing data, the extended GMM framework with empirical priors and the related procedure to estimate the parameters of this model.

3.1 Background on kernels

Thorough overviews on kernels can be found in steinwart2008support ; scholkopf2001learning ; berg1984harmonic ; shawe2004kernel . Here we briefly review some basic definitions and properties, following steinwart2008support .

Definition 1.

Let 𝒳 be a non-empty set. A function k : 𝒳 × 𝒳 → ℝ is a kernel if there exists an ℝ-Hilbert space H and a map Φ : 𝒳 → H such that, for all x, y ∈ 𝒳, k(x, y) = ⟨Φ(x), Φ(y)⟩_H.

From this definition it can be shown that a kernel is symmetric and psd, meaning that Σ_{i=1}^n Σ_{j=1}^n α_i α_j k(x_i, x_j) ≥ 0 for all n ∈ ℕ, α_1, …, α_n ∈ ℝ and x_1, …, x_n ∈ 𝒳. Of major importance in kernel methods are also the concepts of reproducing kernels and reproducing kernel Hilbert spaces (RKHS), described by the following definition.

Definition 2.

Let 𝒳 be a non-empty set, H a Hilbert space of functions f : 𝒳 → ℝ, and k : 𝒳 × 𝒳 → ℝ a function. k is a reproducing kernel, and H a RKHS, if for all x ∈ 𝒳, k(·, x) ∈ H, and for all x ∈ 𝒳 and f ∈ H, f(x) = ⟨f, k(·, x)⟩_H (reproducing property).

These concepts are highly connected to kernels. In fact reproducing kernels are kernels, and every kernel is associated with a unique RKHS (Moore-Aronszajn theorem), and vice-versa. Moreover, the representer theorem states that every function in an RKHS that optimizes an empirical risk function can be expressed as a linear combination of kernels centered at the training points. These properties have very useful implications, e.g. in an SVM, since an infinite dimensional empirical risk minimization problem can be simplified to a finite dimensional problem and the solution is included in the linear span of the kernel function evaluated at the training points.

3.2 MTS with missing data

We define a UTS, x, as a sequence of real numbers ordered in time, x = {x(t) ∈ ℝ | t = 1, 2, …, T}. The independent time variable, t, is without loss of generality assumed to be discrete, and the number of observations in the sequence, T, is the length of the UTS.

A MTS X is defined as a (finite) sequence of UTS, X = {x_v ∈ ℝ^T | v = 1, 2, …, V}, where each attribute, x_v, is a UTS of length T. The number of UTS, V, is the dimension of X. The length T of the UTS is also the length of the MTS X. Hence, a V-dimensional MTS, X, of length T can be represented as a matrix in ℝ^{V×T}.

Given a dataset of N MTS, we denote by X^(n) the n-th MTS. An incompletely observed MTS is described by the pair (X^(n), R^(n)), where R^(n) is a binary MTS with entry r_v^(n)(t) = 0 if the realization x_v^(n)(t) is missing and r_v^(n)(t) = 1 if it is observed.

3.3 Diagonal covariance GMM for MTS with missing data

A GMM is a mixture of G components, each component being a normal distribution. Hence, the components are described by the mixing coefficients θ_g, means μ_g and covariances Σ_g, g = 1, …, G. The mixing coefficients satisfy 0 ≤ θ_g ≤ 1 and Σ_{g=1}^G θ_g = 1.

We formulate the GMM in terms of a latent random variable Z, represented as a G-dimensional one-hot vector whose marginal distribution is given by P(Z_g = 1) = θ_g. The conditional distribution of the MTS X, given Z_g = 1, is a multivariate normal distribution, p(X | Z_g = 1) = N(X | μ_g, Σ_g). Hence, the GMM can be described by its probability density function (pdf), given by

p(X) = Σ_{g=1}^G θ_g N(X | μ_g, Σ_g).   (1)

The GMM described by Eq. (1) holds for completely observed data and a general covariance. However, in the diagonal covariance GMM considered in this work, the following assumptions are made. The MTS are characterized by time-dependent means, expressed by μ_g = {μ_{g1}, …, μ_{gV}}, where each μ_{gv} is a UTS of length T, whereas the covariances are constrained to be constant over time. Accordingly, the covariance matrix is diagonal, Σ_g = diag(σ_{g1}^2, …, σ_{gV}^2), σ_{gv}^2 being the variance of attribute v. Moreover, the data is assumed to be missing at random (MAR), i.e. the missing elements are only dependent on the observed values.

Under these assumptions, missing data can be analytically integrated away, such that imputation is not needed rubin1976inference , and the pdf for the incompletely observed MTS (X, R) is given by

p(X | R) = Σ_{g=1}^G θ_g Π_{v=1}^V Π_{t=1}^T N(x_v(t) | μ_{gv}(t), σ_{gv})^{r_v(t)}.   (2)

The conditional probability of Z given X and R can be found using Bayes' theorem,

π_g ≡ P(Z_g = 1 | X, R) = θ_g Π_{v=1}^V Π_{t=1}^T N(x_v(t) | μ_{gv}(t), σ_{gv})^{r_v(t)} / ( Σ_{g'=1}^G θ_{g'} Π_{v=1}^V Π_{t=1}^T N(x_v(t) | μ_{g'v}(t), σ_{g'v})^{r_v(t)} ).   (3)

θ_g can be thought of as the prior probability of X belonging to component g, and Eq. (3) therefore describes the corresponding posterior probability.
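
To make the role of Eq. (3) concrete, the following sketch evaluates the posterior component probabilities for a single incompletely observed MTS under a fitted diagonal covariance GMM; the array layout and the log-space evaluation are our own illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def posteriors(X, R, theta, mu, s2):
    """Posterior component probabilities of Eq. (3) for one MTS under a
    diagonal covariance GMM, with missing entries (R == 0) integrated away.

    X, R  : (V, T) arrays (data and binary mask)
    theta : (G,) mixing coefficients
    mu    : (G, V, T) time-dependent means
    s2    : (G, V) per-attribute variances (constant over time)
    """
    G = len(theta)
    logp = np.log(np.asarray(theta, dtype=float))
    for g in range(G):
        # log N(x_v(t) | mu_gv(t), sigma_gv), counted only where r_v(t) = 1
        ll = norm.logpdf(X, loc=mu[g], scale=np.sqrt(s2[g])[:, None])
        logp[g] += np.sum(R * np.nan_to_num(ll))
    logp -= logp.max()              # log-sum-exp trick for numerical stability
    p = np.exp(logp)
    return p / p.sum()              # (G,) vector, the feature map used by TCK
```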

To fit a GMM to a dataset, one needs to learn the parameters Θ = {θ_g, μ_g, σ_g}_{g=1}^G. The standard way to do this is maximum likelihood estimation via expectation maximization (EM) bilmes1998gentle . However, to be able to deal with large amounts of missing data, one can introduce informative priors for the parameters and estimate them using maximum a posteriori expectation maximization (MAP-EM) Marlin:2012:UPD:2110363.2110408 . This ensures that each cluster mean is smooth over time and that clusters containing few time series have parameters similar to the mean and covariance computed over the whole dataset. We summarize this procedure in the next subsection (see Ref. Marlin:2012:UPD:2110363.2110408 for details).

3.4 MAP-EM diagonal covariance GMM augmented with empirical prior

To enforce smoothness, a kernel-based Gaussian prior is defined for the means, μ_{gv} ~ N(m_v, S_v), where the m_v are the empirical means and the prior covariance matrices, S_v, are defined as S_v = s_v 𝒦, where the s_v are empirical standard deviations and 𝒦 is a kernel matrix whose elements are 𝒦_{tt'} = b_0 exp(-a_0 (t - t')^2), with a_0 and b_0 user-defined hyperparameters. An inverse Gamma prior is put on the standard deviations σ_{gv}, controlled by a user-defined hyperparameter n_0. We denote by Ω = {a_0, b_0, n_0} the set of hyperparameters. Estimates of the parameters are found using MAP-EM dempster1977maximum ; mclachlan2007algorithm , according to Algorithm 1.

1: Input: Dataset {(X^(n), R^(n))}_{n=1}^N, hyperparameters Ω and number of mixture components G.
2: Initialize the parameters Θ = {θ_g, μ_g, σ_g}_{g=1}^G.
3: E-step. For each MTS (X^(n), R^(n)), evaluate the posterior probabilities π_g^(n) using Eq. (3) with the current parameter estimates.
4: M-step. Update the parameter estimates Θ using the current posteriors.
5: Repeat steps 3-4 until convergence.
6: Output: Posteriors Π^(n) ≡ (π_1^(n), …, π_G^(n))^T and mixture parameters Θ.
Algorithm 1 MAP-EM diagonal covariance GMM

4 Time series cluster kernel (TCK)

Methods based on GMM, in conjunction with EM, have been successfully applied in different contexts, such as density estimation and clustering statlearning . As a major drawback, these methods often require solving a non-convex optimization problem, whose outcome depends on the initial conditions mclachlan2007algorithm ; wu1983convergence . The model described in the previous section depends on the initialization of the parameters Θ and on the chosen number of mixture components G Marlin:2012:UPD:2110363.2110408 . Moreover, three different hyperparameters, a_0, b_0 and n_0, have to be set. In particular, modeling the covariance in time is difficult: a too small value of the corresponding hyperparameter leads to a degenerate prior covariance matrix that cannot be inverted, whereas a too large value basically removes the covariance, such that the prior knowledge is not incorporated. Furthermore, a single GMM provides limited descriptive flexibility, due to its parametric nature.

Ensemble learning has been adopted both in classification, where classifiers are combined through e.g. bagging or boosting breiman1996bagging ; freund1996experiments , and in clustering Fred02evidenceaccumulation ; Monti2003 ; Strehl:2003 . Typically, in ensemble clustering one integrates the outcomes of the same algorithm as it processes different data subsets, configured with different parameters or initial conditions, in order to capture local and global structures in the underlying data Monti2003 ; vega2011survey and to provide a more stable and robust final clustering result. Hence, the idea is to combine the results of many weaker models to deliver an estimator with statistical, computational and representational advantages Dietterich2000 , namely lower variance, lower sensitivity to local optima and a broader span of representable functions, respectively.

We propose an ensemble approach that combines multiple GMM, whose diversity is ensured by training the models on subsamples of the data, attributes and time segments, using different numbers of mixture components, random initializations of the parameters Θ and randomly sampled hyperparameters Ω. Thus, we generate a model that is robust to parameters and noise, and also capable of capturing different levels of granularity in the data. To ensure robustness to missing data, we use the diagonal covariance GMM augmented with the informative priors described in the previous section as base models in the ensemble.

Potentially, we could have followed the idea of glodek2013ensemble to create a density function from an ensemble of GMM. However, even though several methods rely on density estimation statlearning , we aim at deriving a similarity measure, which provides a general-purpose data representation, fundamental in many applications in time series analysis, such as classification, clustering, outlier detection and dimensionality reduction han2011data .

Moreover, we ensure that the similarity measure is psd, i.e. a kernel. Specifically, the linear span of the posterior distributions, formed as G-dimensional vectors and equipped with the ordinary inner product, constitutes a Hilbert space. We explicitly let the feature map be these posteriors. Hence, each contribution to the TCK is an inner product between two posterior distributions and therefore forms a linear kernel in the space of posterior distributions. Given an ensemble of GMM, we create the TCK using the fact that a sum of kernels is also a kernel.

4.1 Method details

To build the TCK kernel matrix, we first fit different diagonal covariance GMM to the MTS dataset. To ensure diversity, each GMM uses a number of mixture components from the interval [2, C]. For each number of components, we apply Q different random initial conditions and hyperparameters. We let q = (q_1, q_2) ∈ 𝒬 be the index, where q_1 keeps track of the initial conditions and hyperparameters (q_1 = 1, …, Q) and q_2 of the number of components (q_2 = 2, …, C). Moreover, each model is trained on a random subset of MTS, accounting only for a random subset of the attributes, 𝒱(q), over a randomly chosen time segment 𝒯(q). The inner products of the posterior distributions from each mixture model are then summed up to build the TCK kernel matrix, according to the ensemble strategy ensemble . Algorithm 2 describes the details of the method.

1: Input: Training set of MTS {(X^(n), R^(n))}_{n=1}^N, Q initializations, maximal number of mixture components C.
2: Initialize kernel matrix K = 0_{N×N}.
3: for q = (q_1, q_2) ∈ 𝒬 do
4:     Compute posteriors Π^(n)(q), n = 1, …, N, by applying Algorithm 1 with q_2 clusters and by randomly selecting,
  • hyperparameters Ω(q),
  • a time segment 𝒯(q),
  • a subset of attributes 𝒱(q),
  • a subset of MTS,
  • an initialization of the mixture parameters Θ(q).
5:     Update kernel matrix, K_{nm} = K_{nm} + Π^(n)(q)^T Π^(m)(q).
6: end for
7: Output: TCK kernel matrix K, time segments 𝒯(q), subsets of attributes 𝒱(q), subsets of MTS, GMM parameters Θ(q) and posteriors Π^(n)(q).
Algorithm 2 TCK kernel. Training phase.
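
The following Python sketch illustrates the structure of Algorithm 2; `fit_map_em_gmm` is a hypothetical stand-in for Algorithm 1 (assumed to return an (N, G) matrix of posteriors evaluated for all MTS), and the sampling choices for segments, attributes and subsets are illustrative defaults rather than the exact ones used in the paper.

```python
import numpy as np

def tck_train(X, R, Q=30, C=40, fit_map_em_gmm=None, rng=None):
    """Sketch of the TCK training phase (Algorithm 2).

    X, R : (N, V, T) data and binary masks. `fit_map_em_gmm` is a placeholder
    for Algorithm 1: given a (sub)view of the data, a training subset and the
    number of components G, it is assumed to return an (N, G) matrix of
    posterior probabilities evaluated for *all* N MTS.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    N, V, T = X.shape
    K = np.zeros((N, N))
    for q1 in range(Q):                              # random inits / hyperparameters
        for G in range(2, C + 1):                    # number of mixture components
            seg_len = rng.integers(6, T + 1)         # illustrative segment length
            t0 = rng.integers(0, T - seg_len + 1)
            n_attr = rng.integers(2, V + 1) if V >= 2 else 1
            attrs = rng.choice(V, size=n_attr, replace=False)
            subset = rng.choice(N, size=max(1, int(0.8 * N)), replace=False)
            Pi = fit_map_em_gmm(X[:, attrs, t0:t0 + seg_len],
                                R[:, attrs, t0:t0 + seg_len], subset, G, rng)
            K += Pi @ Pi.T                           # linear kernel in posterior space
    return K
```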

In order to be able to compute similarities with MTS not available at the training phase, one needs to store the time segments 𝒯(q), subsets of attributes 𝒱(q), GMM parameters Θ(q) and posteriors Π^(n)(q). The TCK for such out-of-sample MTS is then evaluated according to Algorithm 3.

Algorithm 3 TCK kernel. Test phase.
1: Input: Test set of MTS {X^{*(m)}}_{m=1}^M, time segments 𝒯(q), subsets of attributes 𝒱(q), subsets of MTS, GMM parameters Θ(q) and posteriors Π^(n)(q).
2: Initialize kernel matrix K^* = 0_{N×M}.
3: for q = (q_1, q_2) ∈ 𝒬 do
4:     Compute posteriors Π^{*(m)}(q), m = 1, …, M, by applying Eq. (3) with mixture parameters Θ(q).
5:     Update kernel matrix, K^*_{nm} = K^*_{nm} + Π^(n)(q)^T Π^{*(m)}(q).
6: end for
7: Output: TCK test kernel matrix K^*.
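
Assuming the per-member training and test posteriors have been computed and stored, the test kernel of Algorithm 3 reduces to a sum of inner products, as in this minimal sketch.

```python
import numpy as np

def tck_test(posteriors_train, posteriors_test):
    """Sketch of the TCK test phase (Algorithm 3): for every ensemble member q,
    the stored training posteriors (N, G_q) and the test posteriors (M, G_q),
    computed with Eq. (3) and the stored parameters, contribute an inner
    product to the test kernel. Names are illustrative."""
    N = posteriors_train[0].shape[0]
    M = posteriors_test[0].shape[0]
    K_star = np.zeros((N, M))
    for Pi_tr, Pi_te in zip(posteriors_train, posteriors_test):
        K_star += Pi_tr @ Pi_te.T
    return K_star
```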

4.2 Parameters and robustness

The maximal number of mixture components in the GMM, C, should be set high enough to capture the local structure in the data. On the other hand, it should be set reasonably lower than the number of MTS in the dataset in order to be able to estimate the parameters of the GMM. Intuitively, a high number of realizations Q improves the robustness of the ensemble of clusterings. However, more realizations come at the expense of an increased computational cost. At the end of the next section we show experimentally that it is not critical to tune these two hyperparameters correctly, as they just have to be set high enough.

Through empirical evaluations, we have seen that none of the other hyperparameters are critical, and we set default values as follows. The hyperparameters Ω = {a_0, b_0, n_0} are sampled according to a uniform distribution from pre-defined intervals. The subsets of attributes are selected randomly, with the cardinality sampled uniformly between two and the number of attributes V. The lower bound is set to two, since we want to allow the algorithm to learn possible inter-dependencies between at least two attributes. The time segments are sampled randomly, with lengths allowed to vary between a minimal and a maximal segment length; the minimal length is chosen large enough that the segments can capture some trends in the data. We let the minimal size of the subset of MTS be 80 percent of the dataset.

We do acknowledge that for long MTS the proposed method becomes computationally demanding. Moreover, there is a potential numerical issue in Eq. (3), since multiplying together very small numbers in both the numerator and the denominator could yield expressions numerically close to 0/0. While there is no theoretical problem, since the normal density is never exactly zero, the posterior for some outliers could take values close to the numerical precision. In fact, since the individual factors are typically numbers lower than one, the value of their product can become very small if the number of time steps and attributes is large. We address this issue by putting upper thresholds on the length of the time segments and on the number of attributes sampled for each ensemble member, which is justified by the fact that the TCK is learned using an ensemble strategy. Moreover, to avoid problems for outliers, we put a lower bound on the value of the conditional density of each element, namely its value three standard deviations away from the mean. In fact, it is very unlikely that a data point generated from a normal distribution is more than three standard deviations away from the mean.

4.3 Algorithmic complexity

Training complexity

The computational complexity of the EM procedure is dominated by the update of the means. The computation of the TCK kernel involves both the MAP-EM estimation and the kernel matrix update for each q ∈ 𝒬; the kernel update costs O(N^2) per ensemble member. We underline that the effective computational time can be reduced substantially through parallelization, since each instance q can be evaluated independently. As we can see, the cost has a quadratic dependence on N, which becomes the dominating term in large datasets. We note that in spectral methods the eigendecomposition costs O(N^3), with a consequent complexity higher than TCK for large N.

Testing complexity

For a test MTS, one has to evaluate the posteriors for each q ∈ 𝒬 and then compute the similarities with the N training MTS; the cost of the latter grows linearly with N. Note that the test phase is also embarrassingly parallelizable.

4.4 Properties

In this section we demonstrate that TCK is a proper kernel and we discuss some of its properties. We let 𝒳 be the space of V-variate MTS of length T and denote the TCK by K : 𝒳 × 𝒳 → ℝ.

Theorem 1.

K is a kernel.

Proof.

According to the definition of TCK, we have K = Σ_{q ∈ 𝒬} K_q, where K_q(X^(n), X^(m)) = Π^(n)(q)^T Π^(m)(q). Since a sum of kernels is a kernel, it is sufficient to demonstrate that each K_q is a kernel. We define H_q as the linear span of the posterior probability distributions Π(q). Since H_q is the linear span of posterior probability distributions, it is closed under addition and scalar multiplication and is therefore a vector space. Furthermore, we define an inner product in H_q as the ordinary dot product in ℝ^{q_2}, ⟨Π, Π'⟩ = Π^T Π'.

Lemma 1.

H_q with ⟨·,·⟩ is a Hilbert space.

Proof.

H_q is equipped with the ordinary dot product and has finite dimension; it is therefore isometric to ℝ^d for some d and hence complete. ∎

Lemma 2.

K_q is a kernel.

Proof.

Let Φ_q : 𝒳 → H_q be the mapping given by Φ_q(X) = Π(q), the posterior distribution of X under model q. It follows that K_q(X^(n), X^(m)) = ⟨Φ_q(X^(n)), Φ_q(X^(m))⟩, and hence, by Definition 1, K_q is a kernel. ∎

Now, let H be the Hilbert space defined via the direct sum, H = ⊕_{q ∈ 𝒬} H_q. H consists of all ordered tuples of elements from the spaces H_q, with the induced inner product given by the sum of the componentwise inner products. If we let Φ : 𝒳 → H be the mapping given by Φ(X) = (Φ_q(X))_{q ∈ 𝒬}, it follows that K(X, Y) = ⟨Φ(X), Φ(Y)⟩_H, which completes the proof. ∎

This result and its proof unveil important properties of TCK: (i) K is symmetric and psd; (ii) the feature map Φ is provided explicitly; (iii) K is a linear kernel in the Hilbert space H of (concatenated) posterior probability distributions; (iv) the induced distance d(X, Y) = sqrt( K(X, X) - 2 K(X, Y) + K(Y, Y) ) is a pseudo-metric, as it satisfies the triangle inequality and takes non-negative values, but, in theory, it can vanish for X ≠ Y.

5 Experiments and results

The proposed kernel is very general and can be used as input in many learning algorithms. It is beyond the scope of this paper to illustrate all properties and possible applications of TCK. We therefore restrict ourselves to classification, with and without missing data, dimensionality reduction and visualization. We applied the proposed method to one synthetic and several benchmark datasets. The TCK was compared to three other similarity measures: DTW, LPS and the fast global alignment kernel (GAK) cuturi2011fast . DTW was extended to the multivariate case using both the independent (DTW i) and the dependent (DTW d) version shokoohi2015generalizing . To evaluate the robustness of the similarity measures, they were trained in an unsupervised way also in the classification experiments, i.e. without tuning hyperparameters by cross-validation. In any case, cross-validation is not trivial in multivariate DTW, as the best window size based on individual attributes is not well defined baydogan2016time .

For the classification task, in order not to introduce any additional, unnecessary parameters, we chose to use a one-nearest-neighbor (1NN) classifier. This is a standard choice in the time series classification literature Kate2016 . Even though the proposed method provides a kernel that could be plugged into, e.g., an SVM, the 1NN classifier makes it easier to compare the different similarity measures directly to each other. Performance was measured in terms of classification accuracy on a test set.
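
For reference, a 1NN classifier driven directly by a precomputed similarity matrix such as the TCK test kernel can be written in a few lines; this is a sketch of the evaluation protocol, not part of the released code.

```python
import numpy as np

def knn1_from_similarity(K_test, y_train):
    """1NN classification directly from a similarity matrix.
    K_test: (N_train, N_test) similarities between training and test MTS
    (e.g. the TCK test kernel); for distance matrices use argmin instead."""
    nearest = np.argmax(K_test, axis=0)       # most similar training MTS per test MTS
    return y_train[nearest]

# accuracy = np.mean(knn1_from_similarity(K_star, y_train) == y_test)
```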

To perform dimensionality reduction we applied kPCA, retaining the components associated with the two largest eigenvalues of the kernel matrices. The different kernels were visually assessed by plotting the resulting mappings with the class information color-coded.
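
A minimal kPCA sketch for a precomputed kernel matrix, using the standard centering and eigendecomposition; only the two leading components are kept, as in the visualizations reported later.

```python
import numpy as np

def kpca_2d(K):
    """Project samples onto the two leading kernel principal components of a
    precomputed (N, N) kernel matrix such as the TCK. Minimal sketch."""
    N = K.shape[0]
    H = np.eye(N) - np.ones((N, N)) / N                 # centering matrix
    Kc = H @ K @ H                                      # center in feature space
    w, U = np.linalg.eigh(Kc)                           # ascending eigenvalues
    idx = np.argsort(w)[::-1][:2]                       # two largest eigenvalues
    return U[:, idx] * np.sqrt(np.maximum(w[idx], 0))   # 2D embedding
```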

The TCK was implemented in R and Matlab, and the code is made publicly available at MikalsenTCK . In the experiments we used the same parameters on all datasets: a fixed number of randomizations Q and maximal number of mixture components C. For the rest of the parameters we used the default values discussed in Section 4.2. The only exception is for datasets with less than 100 MTS, for which we lowered the maximal number of mixtures so that the GMM parameters could still be estimated. The hyperparameter dependency is discussed more thoroughly at the end of this section.

For the LPS we used the Matlab implementation provided by Baydogan Baydogan . We set the number of trees to 200 and number of segments to 5. Since many of the time series we considered were short, we set the minimal segment length to 15 percent of the length of MTS in the dataset. The remaining hyperparameters were set to default. For the DTW we used the R package dtw Giorgino . The GAK was run using the Matlab Mex implementation provided by Cuturi Cuturi . In accordance with Cuturi we set the bandwidth to two times the median distance of the MTS in the training set, scaled by the square root of the median length of the MTS. The triangular parameter was set to 0.2 times the median length.

In contrast to the TCK and LPS, the DTW and GAK do not naturally deal with missing data and therefore we imputed the overall mean for each attribute and time interval.

5.1 Synthetic example: Vector autoregressive model

We first applied TCK in a controlled experiment, where we generated a synthetic MTS dataset with two classes from a first-order vector autoregressive model, VAR(1) shumway , given by

x(t) = A x(t-1) + ξ(t),   (4)

where x(t) is a two-variate observation and ξ(t) is a Gaussian noise term. To make the two attributes correlated, we chose a noise term with correlated components. For the first class, we generated 100 two-variate MTS of length 50 for the training set and 100 for the test set from the VAR(1) model with one setting of the autoregressive parameters and noise correlation; the MTS of the second class were generated analogously, using a different parameter setting. On these synthetic data, in addition to dimensionality reduction and classification with and without missing data, we also performed spectral clustering on the TCK matrix, in order to be able to compare TCK directly to a single diagonal covariance GMM optimized using MAP-EM.
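
As a rough illustration of the data-generating process of Eq. (4), the sketch below samples two-variate VAR(1) series with correlated Gaussian noise; the parameter values in the example calls are hypothetical and not the ones used in the paper.

```python
import numpy as np

def sample_var1(rho, noise_corr, T=50, n=100, rng=None):
    """Generate n two-variate VAR(1) series x(t) = diag(rho) x(t-1) + xi(t),
    a special case of Eq. (4) with diagonal A, where the Gaussian noise xi(t)
    has correlation `noise_corr` between the two attributes."""
    if rng is None:
        rng = np.random.default_rng(0)
    cov = np.array([[1.0, noise_corr], [noise_corr, 1.0]])
    rho = np.asarray(rho, dtype=float)
    X = np.zeros((n, 2, T))
    for i in range(n):
        x = np.zeros(2)
        for t in range(T):
            x = rho * x + rng.multivariate_normal(np.zeros(2), cov)
            X[i, :, t] = x
    return X

# hypothetical parameter settings for the two classes, not the paper's values:
# class_1 = sample_var1(rho=[0.8, 0.8], noise_corr=0.5)
# class_2 = sample_var1(rho=[0.4, 0.4], noise_corr=0.5)
```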

Clustering

Clustering performance was measured in terms of the adjusted Rand index (ARI) Hubert1985 and clustering accuracy (CA). CA is the accuracy under the best one-to-one matching between cluster labels and ground-truth labels, CA = max_m (1/N) Σ_{n=1}^N δ(y_n, m(ŷ_n)), where δ is the Kronecker delta, y_n and ŷ_n are the ground-truth and cluster labels of MTS n, and the optimal matching m is computed with the Hungarian algorithm Kuhn55thehungarian .
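
The CA computation amounts to a standard assignment problem; a possible implementation using the Hungarian algorithm from SciPy is sketched below.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """CA: best one-to-one matching between cluster labels and ground-truth
    labels, found with the Hungarian algorithm."""
    classes = np.unique(y_true)
    clusters = np.unique(y_pred)
    cost = np.zeros((len(clusters), len(classes)))
    for i, c in enumerate(clusters):
        for j, k in enumerate(classes):
            cost[i, j] = -np.sum((y_pred == c) & (y_true == k))  # negative overlap
    row, col = linear_sum_assignment(cost)                       # maximize matches
    return -cost[row, col].sum() / len(y_true)
```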

        TCK     GMM     TCK (concatenated)   TCK (uncorrelated)
CA      0.990   0.910   0.775                0.800
ARI     0.961   0.671   0.299                0.357
Table 1: Clustering performance, measured in terms of CA and ARI, on simulated VAR(1) datasets for TCK and a single GMM. The two right-most columns report TCK applied to the MTS concatenated into a UTS and TCK on a VAR(1) dataset with uncorrelated attributes, respectively (see text).

The single GMM was fitted to the full dataset with MAP-EM. Tab. 1 shows that spectral clustering on the TCK achieves a considerable improvement compared to GMM clustering and verifies the efficacy of the ensemble and kernel approach with respect to a single GMM. Additionally, we evaluated TCK on the MTS concatenated into a long vector, thereby treating each MTS as a UTS, and on a different VAR(1) dataset in which the attributes are uncorrelated. The superior performance of TCK with respect to these two approaches illustrates that, in addition to accounting for similarities within the same attribute, TCK also leverages interaction effects between different attributes in the MTS to improve clustering results.

Dimensionality reduction and visualization

Figure 2: Projection of the VAR(1) dataset to two dimensions using kPCA with the TCK and a linear kernel. The different colors indicate the true labels of the MTS.

To evaluate the effectiveness of TCK as a kernel, we compared kPCA with TCK and kPCA with a linear kernel (ordinary PCA). Fig. 2 shows that TCK maps the MTS on a line, where the two classes are well separated. On the other hand, PCA projects one class into a compact blob in the middle, whereas the other class is spread out. Learned representations like these can be exploited by learning algorithms such as an SVM. In this case, a linear classifier will perform well on the TCK representation, whereas for the other representation a non-linear method is required.

Classification with missing data

To investigate the capability of TCK of dealing with missing data in a classification task, we removed values from the synthetic dataset according to three missingness patterns: missing completely at random (MCAR), missing at random (MAR) and missing not at random (MNAR) rubin1976inference . To simulate MCAR, we uniformly sampled the elements to be removed; specifically, we discarded a ratio of the values in the dataset, varying from 0 to 0.5. To simulate MAR, we let one attribute have a probability p of being missing, conditioned on the observed value of the other attribute. Similarly, for MNAR we let an attribute have a probability p of being missing, conditioned on its own (unobserved) value. We varied the probability p from 0 to 0.5 to obtain different fractions of missing data.
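
The MCAR simulation is the simplest of the three patterns; a sketch of the removal step, operating on an (N, V, T) dataset, is given below. The array layout is an assumption made for illustration.

```python
import numpy as np

def remove_mcar(X, ratio, rng=None):
    """Remove a given ratio of entries completely at random (MCAR) from a
    dataset of MTS stored as an (N, V, T) array; returns the data with NaNs
    in the missing positions and the corresponding binary mask R."""
    if rng is None:
        rng = np.random.default_rng(0)
    R = (rng.random(X.shape) >= ratio).astype(int)   # 1 = observed, 0 = missing
    X_miss = X.copy()
    X_miss[R == 0] = np.nan
    return X_miss, R
```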

Figure 3: Classification accuracy on simulated VAR(1) dataset of the 1NN-classifier configured with a (dis)similarity matrix obtained using LPS, DTW (d), DTW (i), GAK and TCK. We report results for three different types of missingness, with an increasing percentage of missing values.

For each missingness pattern, we evaluated the performance of a 1NN classifier configured with TCK, LPS, DTW (d), DTW (i) and GAK. Classification accuracies are reported in Fig. 3. First of all, we see that in the absence of missing data, the performance of TCK and LPS are approximately equal, whereas the two versions of DTW and GAK yield a lower accuracy. Then, we notice that the accuracy for the TCK is quite stable as the amount of missing data increases, for all types of missingness patterns. For example, in the case of MCAR, when the amount of missing data increases from 0 to 50%, accuracy decreases from 0.995 to 0.958. Likewise, for MNAR, when p increases from 0 to 0.5, accuracy decreases from 0.995 to 0.953. This indicates that our method, in some cases, also works well for data that are MNAR. On the other hand, we notice that for MCAR and MAR data, the accuracy obtained with LPS decreases much faster than for TCK. GAK is sensitive to all three types of missing data. Performance also diminishes quite fast for the DTW variants, but we also observe a peculiar behavior: the accuracy starts to increase again when the missing ratio increases further. This can be interpreted as a side effect of the imputation procedure used for DTW. In fact, the imputation replaces some noisy data with a mean value, hence providing a regularization bias that benefits the classification procedure.

5.2 Benchmark time series datasets

Datasets            Attributes  Train  Test   Classes  T_min  T_max  T    Source
ItalyPower          1           67     1029   2        24     24     24   UCR
Gun Point           1           50     150    2        150    150    150  UCR
Synthetic control   1           300    300    6        60     60     60   UCR
PenDigits           2           300    10692  10       8      8      8    UCI
Libras              2           180    180    15       45     45     23   UCI
ECG                 2           100    100    2        39     152    22   Olszewski
uWave               3           200    4278   8        315    315    25   UCR
Char.Traj.          3           300    2558   20       109    205    23   UCI
Robot failure LP1   6           38     50     4        15     15     15   UCI
Robot failure LP2   6           17     30     5        15     15     15   UCI
Robot failure LP3   6           17     30     4        15     15     15   UCI
Robot failure LP4   6           42     75     3        15     15     15   UCI
Robot failure LP5   6           64     100    5        15     15     15   UCI
Wafer               6           298    896    2        104    198    25   Olszewski
Japanese vowels     12          270    370    9        7      29     15   UCI
ArabicDigits        13          6600   2200   10       4      93     24   UCI
CMU                 62          29     29     2        127    580    25   CMU
PEMS                963         267    173    7        144    144    25   UCI
Table 2: Description of benchmark time series datasets. Columns 2 to 5 show the number of attributes, the number of samples in the training and test sets, and the number of classes, respectively. T_min is the length of the shortest MTS in the dataset and T_max the length of the longest MTS. T is the length of the MTS after the transformation.

We applied the proposed method to multivariate benchmark datasets from the UCR and UCI databases UCRArchive ; Lichman:2013 and other published work CMU ; Olszewski , described in Tab. 2. In order to also illustrate TCK's capability of dealing with UTS, we randomly picked three univariate datasets from the UCR database: ItalyPower, Gun Point and Synthetic control. Some of the multivariate datasets contain time series of different length; however, the proposed method is designed for MTS of the same length. Therefore we followed the approach of Wang et al. Wang2016237 and transformed all the MTS in the same dataset to the same length, T, determined by T = ⌈T_max / ⌈T_max / 25⌉⌉, where T_max is the length of the longest MTS in the dataset and ⌈·⌉ is the ceiling operator. We also standardized each attribute to zero mean and unit standard deviation. Since decision trees are scale invariant, we did not apply this transformation for LPS (in accordance with baydogan2016time ).
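
A sketch of this length transformation is given below; the formula for T follows the text, while the window-averaging used for the actual resampling is our own assumption (a PAA-style down-sampling) rather than a documented detail.

```python
import math
import numpy as np

def shorten(x, T_max):
    """Down-sample a (V, T) MTS to the common length T = ceil(T_max / ceil(T_max / 25)).
    The window-averaging below is an assumed PAA-style resampling."""
    T_new = math.ceil(T_max / math.ceil(T_max / 25))
    V, T = x.shape
    edges = np.linspace(0, T, T_new + 1).astype(int)   # T_new roughly equal windows
    return np.stack([x[:, a:b].mean(axis=1)
                     for a, b in zip(edges[:-1], edges[1:])], axis=1)
```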

Classification without missing data

Initially we considered the case of no missing data and applied a 1NN-classifier in combination with the five different (dis)similarity measures. Tab. 3 shows the mean classification accuracies, evaluated over 10 runs, obtained on the benchmark time series datasets. Firstly, we notice that the dependent version of DTW, in general, gives worse results than the independent version. Secondly, TCK gives the best accuracy for 8 out of 18 datasets. LPS and GAK are better than the competitors for 8 and 3 datasets, respectively. The two versions of DTW achieve the highest accuracy for Gun Point. On CMU all methods reach a perfect score. We also see that TCK works well for univariate data and gives comparable accuracies to the other methods.

Datasets TCK LPS DTW (i) DTW (d) GAK
ItalyPower 0.922 0.933 0.918 0.918 0.950
Gun Point 0.923 0.790 1.000 1.000 0.900
Synthetic control 0.987 0.975 0.937 0.937 0.870
Pen digits 0.904 0.928 0.883 0.900 0.945
Libras 0.799 0.894 0.878 0.856 0.811
ECG 0.852 0.815 0.810 0.790 0.840
uWave 0.908 0.945 0.909 0.844 0.905
Char. Traj. 0.953 0.961 0.903 0.905 0.935
Robot failure LP1 0.890 0.836 0.720 0.640 0.720
Robot failure LP2 0.533 0.707 0.633 0.533 0.667
Robot failure LP3 0.703 0.687 0.667 0.633 0.633
Robot failure LP4 0.848 0.914 0.880 0.840 0.813
Robot failure LP5 0.596 0.688 0.480 0.430 0.600
Wafer 0.982 0.981 0.963 0.961 0.967
Japanese vowels 0.978 0.964 0.965 0.865 0.965
ArabicDigits 0.945 0.977 0.962 0.965 0.966
CMU 1.000 1.000 1.000 1.000 1.000
PEMS 0.878 0.798 0.775 0.763 0.763
Table 3: Classification accuracy on different UTS and MTS benchmark datasets obtained using TCK, LPS, DTW (i), DTW (d) and GAK in combination with a 1NN-classifier. The best results are highlighted in bold.

Classification with missing data

We used the Japanese vowels and uWave datasets to illustrate TCK's ability to classify real-world MTS with missing data. We removed different fractions of the values completely at random (MCAR) and ran a 1NN-classifier equipped with TCK, LPS, DTW (i) and GAK. We also compared to variants of TCK and LPS where the missing values were imputed with the mean (TCK imp and LPS imp). Mean classification accuracies and standard deviations, evaluated over 10 runs, are reported in Fig. 4.

Figure 4: Classification accuracies with different proportions of MCAR data for Japanese vowels and uWave. uWave long represents the uWave dataset where the MTS have their original length (315). Shaded areas represent standard deviations calculated over 10 independent runs.

On the Japanese vowels dataset the accuracy obtained with LPS decreases very fast as the fraction of missing data increases and is greatly outperformed by LPS imp. The performance of GAK also diminishes quickly. The accuracy obtained with DTW (i) decreases from 0.965 to 0.884, whereas TCK imp decreases from 0.978 to 0.932. The most stable results are obtained using TCK: as the ratio of missing data increases from 0 to 0.5, the accuracy decreases from 0.978 to 0.960. We notice that, even if TCK imp yields the second best results, it is clearly outperformed by TCK.

Also for the uWave dataset the accuracy decreases rapidly for LPS, DTW and GAK. The accuracy for TCK is 0.908 with no missing data, remains almost stable up to 30% missing data and decreases to 0.868 for 50% missing data. TCK imp is outperformed by TCK, especially beyond 20% missingness. We notice that LPS imp gives better results than LPS also for this dataset. For ratios of missing data above 0.2, TCK gives better results than LPS imp, even though in the absence of missing data the accuracy for LPS is 0.946, whereas TCK yields only 0.908.

To investigate how TCK works for longer MTS, we classified the uWave dataset with MTS of original length, 315. In this case the LPS performs better than for the shorter MTS, as the accuracy decreases from 0.949 to 0.916. We also see that the accuracy decreases faster for LPS imp. For the TCK the accuracy increased from 0.908, obtained on uWave with MTS of length 25, to 0.914 on this dataset. TCK still gives a lower accuracy than LPS when there is no missing data. However, we see that TCK is very robust to missing data, since the accuracy only decreases to 0.912 when the missing ratio increases to 0.5. TCK imp performs equally well up to 30% missing data, but performs poorly for higher missing ratios.

These results indicate that, in contrast to LPS, TCK is not sensitive to the length of the MTS. It can deal equally well with short MTS and long MTS.

Dimensionality reduction and visualization

Figure 5: Projection of three MTS datasets onto the two top principal components when different kernels are applied. The different colors indicate true class labels.

In Fig. 5 we have plotted the two principal components of uWave, Japanese vowels and Character trajectory, obtained with kPCA configured with TCK, LPS and a linear kernel. We notice a tendency in LPS and linear kernel to produce blob-structures, whereas the TCK creates more compact and separated embeddings. For example, for Japanese vowels TCK is able to isolate two classes from the rest.

5.3 Sensitivity analysis

The hyperparameters in the TCK are the maximal number of mixture components C, the number of randomizations Q, the segment lengths, the size of the MTS subsamples, the number of attributes sampled, the hyperparameters Ω = {a_0, b_0, n_0} and the initialization of the GMM parameters Θ. However, all of them except Q and C are chosen randomly for each ensemble member q. Hence, the only hyperparameters that have to be set by the user are Q and C.

Figure 6: Accuracies for fixed Q and varying C (left), and for fixed C and varying Q (right), over three datasets. Shaded areas represent standard deviations calculated over 10 replications.

We have already argued that the method is robust and not sensitive to the choice of these hyperparameters. Here, we evaluate empirically TCK's dependency on the chosen maximal number of mixture components C and number of randomizations Q, on the three datasets Japanese vowels, Wafer and Character trajectories. Fig. 6 (left) shows the classification accuracies obtained using TCK in combination with a 1NN-classifier on the three datasets by fixing Q and varying C from 5 to 50. We see that the accuracies are very stable for C larger than 15-20; even for the smallest values of C, the accuracies are not much lower. Next, we fixed C and varied Q from 5 to 50. Fig. 6 (right) shows that the accuracies increase rapidly for small Q, but also that they stabilize quite quickly; a moderate value of Q appears sufficient, even if the standard errors are a bit higher for lower Q. These results indicate that it is not critical to tune the hyperparameters C and Q correctly, which is important if the TCK is to be learned in an unsupervised way.

5.4 Computational time

All experiments were run on an Ubuntu 14.04 64-bit system with 64 GB RAM and an Intel Xeon E5-2630 v3 processor. We used the low-dimensional uWave and the high-dimensional PEMS dataset to empirically test the running time of the TCK. To investigate how the running time is affected by the number of variables and the length of the MTS, for the PEMS dataset we varied the number of attributes, while for the uWave dataset we varied the length of the MTS. Tab. 4 shows the running times (seconds) for TCK, LPS, GAK and DTW (i) on these datasets. We observe that the TCK is competitive with the other methods and, in particular, that its running time is not very sensitive to an increased length or number of attributes.

PEMS (number of attributes: 963, 100, 10, 2)
TCK      3.6 (116)   3.5 (115)   2.5 (84)    1.2 (31)
LPS      22 (269)    3.3 (33)    1.3 (4.5)   0.9 (2.9)
GAK      514         52          5.8         1.6
DTW (i)  1031        119         13          3.5
uWave (decreasing MTS length, from 315 down to 25)
TCK      42 (46)     39 (45)     41 (46)     27 (35)
LPS      26 (17)     17 (11)     11 (7)      6.6 (2.5)
GAK      28          25          21          20
DTW (i)  506         244         110         59
Table 4: Running times (seconds) for computing the similarity between the test and training set for two datasets. The times in brackets are the times used to train the models, for the methods that require training. For the PEMS dataset we used the original 963 attributes, but also ran the models on subsets consisting of 100, 10 and 2 attributes, respectively. For the uWave dataset we varied the length of the MTS from 315 down to 25.

6 Conclusions

We have proposed a novel similarity measure and kernel for multivariate time series with missing data. The robust time series cluster kernel was designed by applying an ensemble strategy to probabilistic models. TCK can be used as input in many different learning algorithms, in particular in kernel methods.

The experimental results demonstrated that the TCK (1) is robust to hyperparameter settings, (2) is competitive to established methods on prediction tasks without missing data and (3) is better than established methods on prediction tasks with missing data.

In future works we plan to investigate whether the use of more general covariance structures in the GMM, or the use of HMMs as base probabilistic models, could improve TCK.

Conflict of interest

The authors have no conflict of interest related to this work.

Acknowledgement

This work (Robert Jenssen and Filippo Bianchi) is partially supported by the Research Council of Norway over FRIPRO grant no. 234498 on developing the Next Generation Learning Machines. Cristina Soguero-Ruiz is supported by FPU grant AP2012-4225 from Spanish Government.

The authors would like to thank Sigurd Løkse and Michael Kampffmeyer for useful discussions, and Jonas Nordhaug Myhre for proof reading the article.

Appendix A

Theorem 2.

LPS is a kernel.

Proof.

The LPS similarity between two MTS X and Y is computed from the LPS representation, given by the frequency vectors h_X and h_Y, whose entries h_X^{jr} count the number of segments of X contained in leaf j of tree r, with R being the number of trees baydogan2016time . Let n_s be the total number of segments of the chosen length contained in an MTS of length T. Without loss of generality we assume that n_s and J, the total number of leaves, are constant over all trees. The LPS similarity reads

S(X, Y) = (1 / (R n_s)) Σ_{r=1}^R Σ_{j=1}^J min(h_X^{jr}, h_Y^{jr}).   (5)

We notice that, if we ignore the normalizing factor, Eq. (5) computes the intersection between the histograms h_X and h_Y. In order to complete the proof, we now introduce an equivalent binary representation of the frequency vectors in the leaves. We represent the entry for leaf j of tree r as a binary sequence of length n_s, with h_X^{jr} ones in front and zeros in the remaining positions, b(h_X^{jr}) = (1, …, 1, 0, …, 0).

The intersection between h_X and h_Y, yielded by Eq. (5), can then be expressed as a bitwise operation through the dot product,

S(X, Y) = (1 / (R n_s)) Σ_{r=1}^R Σ_{j=1}^J b(h_X^{jr})^T b(h_Y^{jr}),   (6)

which is a linear kernel in the linear span of the binary LPS representations, a space that is isometric to ℝ^{R J n_s}. ∎
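
The key step of the proof, that the histogram intersection min(a, b) equals the dot product of the corresponding unary binary encodings, is easy to check numerically, as in this small sketch.

```python
import numpy as np

def unary(h, n_s):
    """Binary (unary) encoding used in the proof: h ones followed by zeros."""
    return np.concatenate([np.ones(h), np.zeros(n_s - h)])

# min(a, b) equals the dot product of the unary encodings, the step behind Eq. (6)
a, b, n_s = 3, 5, 8
assert unary(a, n_s) @ unary(b, n_s) == min(a, b)
```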

References

  • (1) W. Vandaele, Applied time series and Box-Jenkins models, 1983.
  • (2) C. Chatfield, The analysis of time series: an introduction, CRC press, 2016.
  • (3) J. D. Cryer, N. Kellet, Time series analysis, Vol. 101, Springer, 1986.
  • (4) R. H. Shumway, D. S. Stoffer, Time series analysis and its applications: with R examples, Springer Science & Business Media, 2010.
  • (5) F. Iglesias, W. Kastner, Analysis of similarity measures in times series clustering for the discovery of building energy patterns, Energies 6 (2) (2013) 579–597.
  • (6) M. Ji, F. Xie, Y. Ping, A dynamic fuzzy cluster algorithm for time series, Abstract and Applied Analysis 2013.
  • (7) M. Pyatnitskiy, I. Mazo, M. Shkrob, E. Schwartz, E. Kotelnikova, Clustering gene expression regulators: New approach to disease subtyping, PLOS ONE 9 (1) (2014) 1–10.
  • (8) K. Häyrinen, K. Saranto, P. Nykänen, Definition, structure, content, use and impacts of electronic health records: A review of the research literature, International Journal of Medical Informatics 77 (5) (2008) 291–304.
  • (9) C. Soguero-Ruiz, W. M. Fei, R. Jenssen, K. M. Augestad, J.-L. Rojo-Álvarez, I. Mora-Jiménez, R.-O. Lindsetmo, S. O. Skrøvseth, Data-driven temporal prediction of surgical site infection, in: AMIA Annual Symposium Proceedings, Vol. 2015, American Medical Informatics Association, 2015, pp. 1164 –1173.
  • (10) C. Soguero-Ruiz, K. Hindberg, I. Mora-Jiménez, J. L. Rojo-Álvarez, S. O. Skrøvseth, F. Godtliebsen, K. Mortensen, A. Revhaug, R.-O. Lindsetmo, K. M. Augestad, et al., Predicting colorectal surgical complications using heterogeneous clinical data and kernel methods, Journal of biomedical informatics 61 (2016) 87–96.
  • (11) Y.-C. Hsu, A.-P. Chen, A clustering time series model for the optimal hedge ratio decision making, Neurocomputing 138 (2014) 358–370.
  • (12) R. S. Tsay, Multivariate Time Series Analysis: with R and financial applications, John Wiley & Sons, 2013.
  • (13) O. Anava, E. Hazan, A. Zeevi, Online time series prediction with missing data., in: ICML, 2015, pp. 2191–2199.
  • (14) F. Bashir, H. L. Wei, Handling missing data in multivariate time series using a vector autoregressive model based imputation (var-im) algorithm: Part i: Var-im algorithm versus traditional methods, in: 24th Mediterranean Conference on Control and Automation, 2016, pp. 611–616.
  • (15) B. Scholkopf, A. J. Smola, Learning with kernels: support vector machines, regularization, optimization, and beyond, MIT press, 2001.
  • (16) J. Shawe-Taylor, N. Cristianini, Kernel methods for pattern analysis, Cambridge university press, 2004.
  • (17) X. Wang, A. Mueen, H. Ding, G. Trajcevski, P. Scheuermann, E. Keogh, Experimental comparison of representation methods and distance measures for time series data, Data Mining and Knowledge Discovery 26 (2) (2013) 275–309.
  • (18) S. Aghabozorgi, A. S. Shirkhorshidi, T. Y. Wah, Time-series clustering – a decade review, Information Systems 53 (C) (2015) 16 – 38.
  • (19) C. Faloutsos, M. Ranganathan, Y. Manolopoulos, Fast subsequence matching in time-series databases, in: Proceedings of the 1994 ACM SIGMOD International Conference on Management of data, ACM, 1994, pp. 419–429.
  • (20) K.-P. Chan, A. W.-C. Fu, Efficient time series matching by wavelets, in: Proceedings 15th International Conference on Data Engineering, IEEE, 1999, pp. 126–133.
  • (21) F. Korn, H. V. Jagadish, C. Faloutsos, Efficiently supporting ad hoc queries in large datasets of time sequences, in: Proceedings of the 1997 ACM SIGMOD International Conference on Management of Data, ACM, 1997, pp. 289–300.
  • (22) J. Lin, E. Keogh, L. Wei, S. Lonardi, Experiencing SAX: a novel symbolic representation of time series, Data Mining and Knowledge Discovery 15 (2) (2007) 107–144.
  • (23) E. Keogh, K. Chakrabarti, M. Pazzani, S. Mehrotra, Dimensionality reduction for fast similarity search in large time series databases, Knowledge and Information Systems 3 (3) (2001) 263–286.
  • (24) B. M. Marlin, D. C. Kale, R. G. Khemani, R. C. Wetzel, Unsupervised pattern discovery in electronic health care data using probabilistic clustering models, in: Proceedings of the 2nd ACM SIGHIT International Health Informatics Symposium, ACM, 2012, pp. 389–398.
  • (25) F. Bashir, A. Khokhar, D. Schonfeld, Automatic object trajectory-based motion recognition using Gaussian mixture models, in: IEEE International Conference on Multimedia and Expo, IEEE, 2005, pp. 1532–1535.
  • (26) F. I. Bashir, A. A. Khokhar, D. Schonfeld, Object trajectory-based activity classification and recognition using hidden Markov models, IEEE Transactions on Image Processing 16 (7) (2007) 1912–1919.
  • (27) M. Ramoni, P. Sebastiani, P. Cohen, Bayesian clustering by dynamics, Machine Learning 47 (1) (2002) 91–121.
  • (28) A. Panuccio, M. Bicego, V. Murino, A Hidden Markov Model-based approach to sequential data clustering, in: Proceedings of the Joint IAPR International Workshop on Structural, Syntactic, and Statistical Pattern Recognition, Springer, 2002, pp. 734–743.
  • (29) B. Knab, A. Schliep, B. Steckemetz, B. Wichern, Model-based clustering with hidden Markov models and its application to financial time-series data, in: Between Data Science and Applied Data Analysis, Springer, 2003, pp. 561–569.
  • (30) N. Kumar, V. N. Lolla, E. Keogh, S. Lonardi, C. A. Ratanamahatana, L. Wei, Time-series bitmaps: a practical visualization tool for working with large time series databases, in: Proceedings of the Fifth SIAM International Conference on Data Mining, SIAM, 2005, pp. 531–535.
  • (31) M. Corduas, D. Piccolo, Time series clustering and classification by the autoregressive metric, Computational Statistics and Data Analysis 52 (4) (2008) 1860–1872.
  • (32) Y. Xiong, D.-Y. Yeung, Mixtures of ARMA models for model-based time series clustering, in: IEEE International Conference on Data Mining, IEEE, 2002, pp. 717–720.
  • (33) T.-C. Fu, A review on time series data mining, Engineering Applications of Artificial Intelligence 24 (1) (2011) 164–181.
  • (34) J. Han, J. Pei, M. Kamber, Data mining: concepts and techniques, Elsevier, 2011.
  • (35) D. J. Berndt, J. Clifford, Using dynamic time warping to find patterns in time series, in: Proceedings of the 3rd International Conference on Knowledge Discovery and Data Mining, AAAI Press, 1994, pp. 359–370.
  • (36) M. Vlachos, M. Hadjieleftheriou, D. Gunopulos, E. Keogh, Indexing multi-dimensional time-series with support for multiple distance measures, in: Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2003, pp. 216–225.
  • (37) K. Yang, C. Shahabi, An efficient k nearest neighbor search for multivariate time series, Information and Computation 205 (1) (2007) 65–98.
  • (38) L. Chen, M. T. Özsu, V. Oria, Robust and fast similarity search for moving object trajectories, in: Proceedings of the 2005 ACM SIGMOD International Conference on Management of Data, ACM, 2005, pp. 491–502.
  • (39) Z. Bankó, J. Abonyi, Correlation based dynamic time warping of multivariate time series, Expert Systems with Applications 39 (17) (2012) 12814–12823.
  • (40) M. G. Baydogan, G. Runger, Time series representation and similarity based on local autopatterns, Data Mining and Knowledge Discovery 30 (2) (2016) 476–509.
  • (41) R. Jenssen, Kernel entropy component analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence 32 (5) (2010) 847–860.
  • (42) R. Jenssen, Entropy-relevant dimensions in the kernel feature space: Cluster-capturing dimensionality reduction, IEEE Signal Processing Magazine 30 (4) (2013) 30–39.
  • (43) B. Schölkopf, K. Tsuda, J.-P. Vert, Kernel methods in computational biology, MIT press, 2004.
  • (44) G. Camps-Valls, L. Bruzzone, Kernel methods for remote sensing data analysis, John Wiley & Sons, 2009.
  • (45) C. Soguero-Ruiz, K. Hindberg, J. L. Rojo-Álvarez, S. O. Skrøvseth, F. Godtliebsen, K. Mortensen, A. Revhaug, R. O. Lindsetmo, K. M. Augestad, R. Jenssen, Support vector feature selection for early detection of anastomosis leakage from bag-of-words in electronic health records, IEEE Journal of Biomedical and Health Informatics 20 (5) (2016) 1404–1415.
  • (46) B. Schölkopf, R. Herbrich, A. J. Smola, A generalized representer theorem, in: International Conference on Computational Learning Theory, Springer, 2001, pp. 416–426.
  • (47) A. Berlinet, C. Thomas-Agnan, Reproducing kernel Hilbert spaces in probability and statistics, Springer Science & Business Media, 2011.
  • (48) I. Steinwart, A. Christmann, Support vector machines, Springer Science & Business Media, 2008.
  • (49) B. Schölkopf, A. Smola, K.-R. Müller, Kernel principal component analysis, in: International Conference on Artificial Neural Networks, Springer, 1997, pp. 583–588.
  • (50) B. Haasdonk, C. Bahlmann, Learning with distance substitution kernels, in: Joint Pattern Recognition Symposium, Springer, 2004, pp. 220–227.
  • (51) P.-F. Marteau, S. Gibet, On recursive edit distance kernels with application to time series classification, IEEE Transactions on Neural Networks and Learning Systems 26 (6) (2015) 1121–1133.
  • (52) M. Cuturi, Fast global alignment kernels, in: Proceedings of the 28th International Conference on Machine Learning, 2011, pp. 929–936.
  • (53) T. Jebara, R. Kondor, A. Howard, Probability product kernels, Journal of Machine Learning Research 5 (2004) 819–844.
  • (54) T. S. Jaakkola, M. Diekhans, D. Haussler, Using the Fisher kernel method to detect remote protein homologies, in: Proc Int Conf Intell Syst Mol Biol., Vol. 99, 1999, pp. 149–158.
  • (55) H. Chen, F. Tang, P. Tino, X. Yao, Model-based kernel for efficient time series analysis, in: Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2013, pp. 392–400.
  • (56) Z. Liu, M. Hauskrecht, Learning adaptive forecasting models from irregularly sampled multivariate clinical data, in: Thirtieth AAAI Conference on Artificial Intelligence, 2016, pp. 1273–1279.
  • (57) A. R. T. Donders, G. J. van der Heijden, T. Stijnen, K. G. Moons, Review: a gentle introduction to imputation of missing values, Journal of clinical epidemiology 59 (10) (2006) 1087–1091.
  • (58) T. G. Dietterich, Ensemble methods in machine learning, in: International workshop on multiple classifier systems, Springer Berlin Heidelberg, 2000, pp. 1–15.
  • (59) M. Cuturi, A. Doucet, Autoregressive kernels for time series, arXiv preprint arXiv:1101.0673 (2011).
  • (60) E. Izquierdo-Verdiguier, R. Jenssen, L. Gómez-Chova, G. Camps-Valls, Spectral clustering with the probabilistic cluster kernel, Neurocomputing 149, Part C (2015) 1299–1304.
  • (61) Q. Cai, L. Chen, J. Sun, Piecewise statistic approximation based similarity measure for time series, Knowledge-Based Systems 85 (2015) 181–195.
  • (62) C. A. Ratanamahatana, E. Keogh, Three myths about dynamic time warping data mining, in: Proceedings of the 2005 SIAM International Conference on Data Mining, SIAM, 2005, pp. 506–510.
  • (63) J. Lines, A. Bagnall, Time series classification with ensembles of elastic distance measures, Data Mining and Knowledge Discovery 29 (3) (2015) 565–592.
  • (64) M. Shokoohi-Yekta, B. Hu, H. Jin, J. Wang, E. Keogh, Generalizing DTW to the multi-dimensional case requires an adaptive approach, Data Mining and Knowledge Discovery 31 (1) (2017) 1–31.
  • (65) C. Berg, J. P. Christensen, P. Ressel, Harmonic Analysis on Semigroups: Theory of Positive Definite and Related Functions, 1st Edition, Vol. 100 of Graduate Texts in Mathematics, Springer, 1984.
  • (66) G. Wu, E. Y. Chang, Z. Zhang, Learning with non-metric proximity matrices, in: Proceedings of the 13th annual ACM international conference on Multimedia, ACM, 2005, pp. 411–414.
  • (67) Y. Chen, E. K. Garcia, M. R. Gupta, A. Rahimi, L. Cazzanti, Similarity-based classification: Concepts and algorithms, Journal of Machine Learning Research 10 (2009) 747–776.
  • (68) K. Tsuda, T. Kin, K. Asai, Marginalized kernels for biological sequences, Bioinformatics 18 (suppl 1) (2002) S268–S275.
  • (69) D. B. Rubin, Inference and missing data, Biometrika 63 (3) (1976) 581–592.
  • (70) J. A. Bilmes, A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models, International Computer Science Institute 4 (510) (1998) 126.
  • (71) A. P. Dempster, N. M. Laird, D. B. Rubin, Maximum likelihood from incomplete data via the EM algorithm, Journal of the Royal Statistical Society, Series B (Methodological) 39 (1) (1977) 1–38.
  • (72) G. McLachlan, T. Krishnan, The EM algorithm and extensions, Vol. 382, John Wiley & Sons, 2007.
  • (73) T. J. Hastie, R. J. Tibshirani, J. H. Friedman, The elements of statistical learning: data mining, inference, and prediction, Springer series in statistics, Springer, 2009.
  • (74) C. J. Wu, On the convergence properties of the EM algorithm, The Annals of Statistics 11 (1) (1983) 95–103.
  • (75) L. Breiman, Bagging predictors, Machine Learning 24 (2) (1996) 123–140.
  • (76) Y. Freund, R. E. Schapire, Experiments with a new boosting algorithm, in: Proceedings of the Thirteenth International Conference on Machine Learning (ICML 1996), 1996, pp. 148–156.
  • (77) A. L. N. Fred, A. K. Jain, Evidence accumulation clustering based on the k-means algorithm, in: Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition and Structural and Syntactic Pattern Recognition, Springer, 2002, pp. 442–451.
  • (78) S. Monti, P. Tamayo, J. Mesirov, T. Golub, Consensus clustering: A resampling-based method for class discovery and visualization of gene expression microarray data, Machine Learning 52 (1-2) (2003) 91–118.
  • (79) A. Strehl, J. Ghosh, Cluster ensembles – a knowledge reuse framework for combining multiple partitions, Journal of Machine Learning Research 3 (2003) 583–617.
  • (80) S. Vega-Pons, J. Ruiz-Shulcloper, A survey of clustering ensemble algorithms, International Journal of Pattern Recognition and Artificial Intelligence 25 (03) (2011) 337–372.
  • (81) M. Glodek, M. Schels, F. Schwenker, Ensemble Gaussian mixture models for probability density estimation, Computational Statistics 28 (1) (2013) 127–138.
  • (82) A. L. Fred, A. K. Jain, Combining multiple clusterings using evidence accumulation, IEEE Transactions on Pattern Analysis and Machine Intelligence 27 (6) (2005) 835–850.
  • (83) R. J. Kate, Using dynamic time warping distances as features for improved time series classification, Data Mining and Knowledge Discovery 30 (2) (2016) 283–312.
  • (84) K. Ø. Mikalsen, Time series cluster kernel (TCK) Matlab implementation, http://site.uit.no/ml (2017).
  • (85) LPS Matlab implementation, http://www.mustafabaydogan.com/files/viewdownload/18-learned-pattern-similarity-lps/60-multivariate-lps-matlab-implementation.html, accessed: 2017-03-07.
  • (86) T. Giorgino, Computing and visualizing dynamic time warping alignments in R: The dtw package, Journal of Statistical Software 31 (7) (2009) 1–24.
  • (87) Fast global alignment kernel Matlab implementation, http://www.marcocuturi.net/GA.html, accessed: 2017-06-20.
  • (88) L. Hubert, P. Arabie, Comparing partitions, Journal of Classification 2 (1) (1985) 193–218.
  • (89) H. W. Kuhn, The Hungarian method for the assignment problem, Naval Research Logistics Quarterly 2 (1–2) (1955) 83–97.
  • (90) Y. Chen, E. Keogh, B. Hu, N. Begum, A. Bagnall, A. Mueen, G. Batista, The UCR time series classification archive, www.cs.ucr.edu/~eamonn/time_series_data/, accessed: 2016-12-17 (July 2015).
  • (91) M. Lichman, UCI machine learning repository, http://archive.ics.uci.edu/ml, accessed: 2016-10-29 (2013).
  • (92) Carnegie Mellon University Motion Capture Database, http://mocap.cs.cmu.edu, accessed: 2017-01-13 (2014).
  • (93) R. T. Olszewski, Generalized feature extraction for structural pattern recognition in time-series data, Ph.D. thesis, Pittsburgh, PA, USA (2001).
  • (94) L. Wang, Z. Wang, S. Liu, An effective multivariate time series classification approach using echo state network and adaptive differential evolution algorithm, Expert Systems with Applications 43 (2016) 237 – 249.