HDC-MiniROCKET: Explicit Time Encoding in Time Series Classification with Hyperdimensional Computing

by   Kenny Schlegel, et al.
TU Chemnitz

Classification of time series data is an important task for many application domains. One of the best existing methods for this task, in terms of accuracy and computation time, is MiniROCKET. In this work, we extend this approach to provide better global temporal encodings using hyperdimensional computing (HDC) mechanisms. HDC (also known as Vector Symbolic Architectures, VSA) is a general method to explicitly represent and process information in high-dimensional vectors. It has previously been used successfully in combination with deep neural networks and other signal processing algorithms. We argue that the internal high-dimensional representation of MiniROCKET is well suited to be complemented by the algebra of HDC. This leads to a more general formulation, HDC-MiniROCKET, where the original algorithm is only a special case. We will discuss and demonstrate that HDC-MiniROCKET can systematically overcome catastrophic failures of MiniROCKET on simple synthetic datasets. These results are confirmed by experiments on the 128 datasets from the UCR time series classification benchmark. The extension with HDC can achieve considerably better results on datasets with high temporal dependence without increasing the computational effort for inference.





I Introduction

Time series classification has a wide range of applications in robotics, autonomous driving, medical diagnostics, the financial sector, and so on. As elaborated in [2], classification of time series differs from traditional classification problems because the attributes are ordered. Hence, it is crucial to create discriminative and meaningful features with respect to the specific order in time. Over the past years, various methods for the classification of univariate and multivariate time series have been proposed (for instance, [33, 31, 3, 36, 22, 12, 7, 8, 20, 32]). Often, the high accuracy of a method comes at the cost of high computational effort. A notable exception is MiniROCKET [8], which superseded the earlier ROCKET [7] and achieves state-of-the-art accuracy at very low computational complexity. Similar to a convolutional neural network (CNN) layer, MiniROCKET applies a set of parallel convolutions to the input signal. To achieve a low runtime, two important design decisions of MiniROCKET are (1) the usage of convolution filters of small size and (2) accumulation of filter responses over time based on the Proportion of Positive Values (PPV), which is a special kind of averaging. However, the combination of these design decisions can hamper the encoding of temporal variation of signals on a larger scale than the size of the convolution filters. To address this, the authors of MiniROCKET propose to use dilated convolutions. A dilated convolution virtually enlarges a filter kernel by inserting sequences of zeros between the values of the original kernel [38] (e.g., [-1 2 1] becomes [-1 0 2 0 1] or [-1 0 0 2 0 0 1] and so on).
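The zero-insertion behind dilation can be sketched in a few lines (this is not the authors' code, just an illustration of the mechanics):

```python
# Sketch: a dilation of d inserts d-1 zeros between adjacent kernel weights,
# virtually enlarging the receptive field without adding multiplications.
def dilate_kernel(kernel, d):
    """Return the kernel with d-1 zeros inserted between adjacent weights."""
    out = []
    for i, w in enumerate(kernel):
        out.append(w)
        if i < len(kernel) - 1:
            out.extend([0] * (d - 1))
    return out

print(dilate_kernel([-1, 2, 1], 2))  # [-1, 0, 2, 0, 1]
print(dilate_kernel([-1, 2, 1], 3))  # [-1, 0, 0, 2, 0, 0, 1]
```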

Fig. 1: MiniROCKET is a fast state-of-the-art approach for time series classification. However, it is easy to create simple datasets where its performance is similar to random guessing. The proposed HDC-MiniROCKET uses explicit time encoding to prevent this failure at the same computational costs.

The first contribution of this paper is to demonstrate that although the dilated convolutions of MiniROCKET perform well on a series of standard benchmark datasets like UCR [6], it is easy to create datasets where classification based on MiniROCKET is not much better than random guessing. An example is illustrated in Fig. 1. There, the task is to distinguish two different classes of time series signals. Each signal consists of Gaussian noise and a single sharp peak either in the first half of the signal (for the first class) or in the second half of the signal (for the second class). Since this is a 2-class problem, random guessing of the class of a query signal can achieve 50% accuracy. Surprisingly, the MiniROCKET implementation achieves only a slightly better accuracy of 57% after training (details on this experiment will be given in Sec. V-A). This is due to a combination of two problems: (1) The location of the sharp peaks cannot be well captured by dilated convolutions with high dilation values due to their large gaps. (2) Although responses of filters with small or no dilation can represent the peak, the averaging implemented by PPV removes the temporal information on the global scale.

Sec. IV will present a novel approach, HDC-MiniROCKET, that addresses this second problem. It is based on the observation that MiniROCKET's Proportion of Positive Values (the second design decision from above) is a special case of a broader class of accumulation operations known as bundling in the context of Hyperdimensional Computing (HDC) and Vector Symbolic Architectures (VSA) [13, 27, 11, 9]. This novel perspective encourages a straightforward and computationally efficient usage of a second HDC operator, binding, in order to explicitly and flexibly encode temporal information during the accumulation process in MiniROCKET. The original MiniROCKET then becomes a special case of the proposed, more general HDC-MiniROCKET. As illustrated in Fig. 1 and elaborated in Sec. V-A, this explicit time encoding can significantly improve the performance on the above classification problem to 94% accuracy. Sec. V-B will demonstrate that the performance also improves on a large collection of standard time series classification benchmark datasets. The proposed extension can be implemented efficiently, as will be discussed in Sec. IV-C and evaluated in Sec. V-D.

II Related Work

II-A Time Series Classification

The related work on time series classification can basically be divided into two different domains: univariate and multivariate time series. For the first case, there is a large number of algorithms in the literature, while the second case is a more general problem formulation with multiple input channels that only recently received increased attention. There are two major survey papers for these problems: [2] for univariate and [29] for multivariate time series classification. The survey papers used dataset collections for benchmarking: 1) the University of California, Riverside (UCR) time series classification and clustering repository [6] for univariate time series, and 2) the University of East Anglia (UEA) repository [1] for multivariate time series.

Bagnall et al. [2] compared a large variety of algorithms from different feature encoding categories like shapelets, dictionary-based approaches, or combinations of these. According to this survey, the most accurate algorithms for univariate time series classification are BOSS (Bag-of-SFA-Symbols) [33] (with the more accurate but more memory-consuming extension WEASEL [31]), Shapelet Transform [3], and COTE (superseded by HIVE-COTE [20]), which is an ensemble that uses multiple other classifiers. BOSS uses SFA (Symbolic Fourier Approximation), an accurate Bag-of-Patterns (BoP) method based on the Fourier transform of the signal. WEASEL extends the BOSS approach by an additional word extraction from SFA with histogram calculation.

Besides the survey paper by [2], there are other recent accurate classification methods such as ensemble classifiers like TS-CHIEF [36] and HIVE-COTE 2.0 [22], the model-based neural network InceptionTime [12] (based on the Inception architecture), and ROCKET [7] with its updated version MiniROCKET [8] as a simple convolution-based method. MiniROCKET is characterized by a very good trade-off between classification accuracy and model complexity and is currently state of the art for univariate time series.

[29] provides an overview of approaches for multivariate time series classification. CIF [21] is an improvement of HIVE-COTE [20] and shows very good accuracies on the multivariate benchmark, but both are extremely slow and have high memory consumption (according to [29], CIF is accurate but takes 6 days for all 30 UEA datasets, and HIVE-COTE would take over a year). WEASEL plus MUSE [32] is an extension of WEASEL [31] for the multivariate domain and provides good results, but also lags behind in computational efficiency. ROCKET was extended to multivariate classification in [29] and declared the clear winner of the multivariate classification benchmark: it achieves high accuracy and is by far the fastest classifier. Its successor MiniROCKET [8] has a recent implementation for multivariate time series and has almost the same accuracy as ROCKET at even lower computational cost.

Therefore, MiniROCKET with its overall good performance is a good starting point for further improvements in both univariate and multivariate time series classification.


II-B ROCKET and MiniROCKET

As said before, MiniROCKET [8] is a variant of the earlier ROCKET [7] algorithm. Inspired by the great success of convolutional neural networks, both variants build upon convolutions with multiple kernels. However, learning the kernels is difficult if the dataset is too small, so [7] uses a fixed set of predefined kernels. While ROCKET uses randomly selected kernels with high variety in length, weights, biases, and dilations, MiniROCKET is more deterministic: it uses predefined kernels based on empirical observations of the behavior of ROCKET. Furthermore, instead of the two global pooling operations in ROCKET (max-pooling and the proportion of positive values, PPV), MiniROCKET uses just one pooling value, the PPV. This leads to vectors that are only half as large (about 10,000 instead of 20,000 dimensions). MiniROCKET is up to 75 times faster than ROCKET on large datasets. To classify feature vectors, both ROCKET and MiniROCKET use a simple ridge regression. Although MiniROCKET uses fewer feature dimensions, the accuracy of both approaches is almost identical.

II-C Hyperdimensional Computing (HDC)

Hyperdimensional computing (also known as Vector Symbolic Architectures, VSA) is an established approach to solving computational problems using large numerical vectors (hypervectors) and well-defined mathematical operations. Basic literature with theoretical background and details on implementations of HDC are [13, 27, 11, 9]; further general comparisons and overviews can be found in [16, 17, 10, 35]. HDC has been applied in various fields, including addressing catastrophic forgetting in deep neural networks [4], image feature aggregation [25], semantic image retrieval [24], medical diagnosis [37], robotics [23], fault detection [14], analogy mapping [28], reinforcement learning, long-short term memory [5], text classification [18], and synthesis of finite state automata [26]. Moreover, hypervectors are also intermediate representations in most artificial neural networks, so a combination with HDC can be straightforward. Related to time series, for example, [23] used HDC in combination with deep-learned descriptors for temporal sequence encoding for image-based localization. A combination of HDC and neural networks for multivariate time series classification of driving styles was demonstrated in [34]. There, the HDC approach was used to first encode the sensory values and to then combine the temporal and spatial context in a single representation. This led to faster learning and better classification accuracy compared to standard LSTM neural networks.

III A Primer: The algorithmic steps of MiniROCKET [8]

Input is a time series signal s = (s_1, ..., s_n), where n is the length of the time series. MiniROCKET computes a 9,996-dimensional output vector y that describes this signal. For application to time series classification, given training data of time signals with known class, a classifier (e.g., a simple ridge regression or a neural network) can be trained on these descriptor vectors and then be used to classify descriptors of new time series signals.
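As a hedged sketch of this descriptor-plus-classifier pipeline: any transform that maps a time series to a fixed-length descriptor can be paired with a ridge classifier. Here, random "descriptors" stand in for MiniROCKET features and the penalty value is an illustrative choice:

```python
# Sketch of the pipeline: fixed-length descriptors + ridge regression
# classifier. Random descriptors stand in for MiniROCKET features.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 100))     # 40 stand-in descriptors, 100-dim
y = np.repeat([-1.0, 1.0], 20)     # ridge regression on -1/+1 class targets
X[y == 1.0] += 1.0                 # shift class 2 to make it separable

lam = 1.0                          # ridge penalty (illustrative value)
# closed-form ridge regression: w = (X^T X + lam * I)^-1 X^T y
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
pred = np.sign(X @ w)              # classify by the sign of the regression output
acc = (pred == y).mean()
```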

The first step in MiniROCKET is a dilated convolution of the input signal s with kernels W_k:

c^{k,d} = s *_d W_k    (1)

where d is the dilation parameter and k refers to one of 84 pre-defined kernels W_k. The length and weights of these kernels are based on insights from the first ROCKET method [7]: the MiniROCKET kernels have a length of 9, the weights are restricted to one of two values (-1 or 2), and there are exactly three weights with value 2. To extend the receptive field of the different kernels, each kernel is applied with one or more dilations d, selected from an exponential distribution to ensure that exponentially more features are computed with smaller dilations. For details on the kernels and dilations, please refer to [8].
In a second step, each convolution result c^{k,d} is element-wise compared to one or multiple bias values b:

B^{k,d,b}_t = (c^{k,d}_t > b)    (2)

This is an approximation of the cumulative distribution function of the filter responses of this particular combination of kernel k and dilation d. MiniROCKET computes the bias values based on the quantiles of the convolution responses of a randomly chosen training sample. The number of bias values is chosen such that there are exactly 119 different combinations of dilations d and bias values b for each of the 84 kernels W_k, resulting in 84 · 119 = 9,996 different binary vectors B^{k,d,b} of length n (where n is the length of the input time series s). Again, for the details, please refer to [8].
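A minimal sketch of this quantile-based binarization and the subsequent PPV, with a random stand-in for the convolution response (all names and values here are illustrative):

```python
# Sketch: biases from quantiles of a (stand-in) convolution response,
# one binary vector per bias, and PPV as the mean of each binary vector.
import numpy as np

rng = np.random.default_rng(1)
c = rng.normal(size=500)                        # stand-in convolution response
biases = np.quantile(c, [0.25, 0.5, 0.75])      # quantile-based bias values
B = (c[None, :] > biases[:, None]).astype(int)  # element-wise comparison per bias
ppv = B.mean(axis=1)                            # proportion of positive values
```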

The final step of MiniROCKET is to compute each of the 9,996 dimensions of the output descriptor y as the mean value of one of the vectors B^{k,d,b} (referred to as PPV in [8]). This averaging is where potentially important temporal information is lost, and this final step will be different in HDC-MiniROCKET.

IV Algorithmic approach: HDC-MiniROCKET

The proposed HDC-MiniROCKET is a variant of MiniROCKET. Both use the same set of dilated convolutions to encode the input signal. The important difference is that HDC-MiniROCKET uses a Hyperdimensional Computing (HDC) binding operator to bind the convolution results to timestamps before creation of the final output. For a general introduction to HDC and explanations of why these operators work in this application, please refer to one of [13, 27, 11, 9]. Here, we will only very briefly introduce some of the concepts and explain their usage for encodings in HDC-MiniROCKET. Sec. IV-A introduces HDC bundling and formulates MiniROCKET in terms of this operation. Sec. IV-B introduces the second operator, binding, and uses it in combination with time encodings based on fractional binding in the proposed HDC-MiniROCKET. Finally, Sec. IV-C will present an efficient implementation.

IV-A An HDC perspective on MiniROCKET

The general concept of HDC is to systematically process very high-dimensional vectors with carefully designed vector operations. One of these operations is bundling (denoted ⊕). Input to the bundling operation are two or more vectors from a vector space and the output is a vector of the same size that is similar to each input vector. Depending on the underlying vector space, there are different Vector Symbolic Architectures (VSAs) that implement this operation differently. For example, the multiply-add-permute (MAP) architecture [11] can operate on bipolar vectors (with values -1 and +1) and implements bundling with a simple element-wise summation; in high-dimensional vector spaces, the sum of vectors is similar to each summand [27].

The valuable connection between HDC and MiniROCKET is that the 9,996 different vectors B^{k,d,b} from Eq. 2 in MiniROCKET also constitute a 9,996-dimensional feature vector for each timestep t (think of each vector B^{k,d,b} as a row in a matrix, then these feature vectors are the columns). The averaging in MiniROCKET's PPV (i.e., the row-wise mean in the above matrix) is then equivalent to bundling of these column vectors. More specifically, if we convert the binary features to bipolar feature vectors F_t by F = 2B - 1 and use the MAP architecture's bundling operation (i.e., element-wise addition), then the result is proportional to the output of MiniROCKET:

y = F_1 ⊕ F_2 ⊕ ... ⊕ F_n    (3)
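The claim that the MAP bundling sum stays similar to each summand can be checked numerically; the following sketch uses random bipolar vectors of the MiniROCKET dimensionality:

```python
# Numerical check (illustrative): the element-wise sum (MAP bundling) of
# random bipolar vectors remains similar to each summand, but not to an
# unrelated random vector.
import numpy as np

rng = np.random.default_rng(0)
D = 9996
vecs = rng.choice([-1, 1], size=(10, D))  # ten random bipolar vectors
bundle = vecs.sum(axis=0)                 # MAP bundling: element-wise sum

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

sims = [cos(bundle, v) for v in vecs]                 # similarity to each summand
unrelated = cos(bundle, rng.choice([-1, 1], size=D))  # similarity to a fresh vector
```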


This is just another way to compute output vectors that point in the same directions as the MiniROCKET output, which preserves cosine similarities between vectors. But much more importantly, it encourages the integration of a second HDC operation, binding (denoted ⊗), as described in the following section.


Fig. 2: Different graded similarities for timestamp encodings.

IV-B Explicit time encoding by binding

HDC-MiniROCKET extends MiniROCKET by using the HDC binding operation to also encode the timestamp of each feature in the output vector (without increasing the vector size).

In HDC, the input to the binding operation ⊗ are two vectors from the same vector space and the output is a single vector, again of the same size, that is dissimilar to each of the input vectors. We will use the property that binding is similarity preserving, i.e., the similarity of a ⊗ c and b ⊗ c resembles the similarity of a and b. A typical usage of binding is to encode key-value pairs. In the MAP architecture, binding is implemented by simple element-wise multiplication [11].
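A small numerical sketch of these two properties of MAP binding (element-wise multiplication), using random bipolar vectors:

```python
# Numerical sketch: binding two vectors to a common key preserves their
# similarity exactly (for bipolar vectors, since key_i^2 = 1), while the
# bound result is dissimilar to its inputs.
import numpy as np

rng = np.random.default_rng(2)
D = 9996
a = rng.choice([-1, 1], size=D)
b = rng.choice([-1, 1], size=D)
key = rng.choice([-1, 1], size=D)

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

sim_before = cos(a, b)             # similarity of the raw vectors
sim_after = cos(a * key, b * key)  # similarity after binding both to the key
sim_bound = cos(a * key, a)        # bound vector vs. its input: near zero
```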

The output of HDC-MiniROCKET is computed by:

y = (F_1 ⊗ P_1) ⊕ (F_2 ⊗ P_2) ⊕ ... ⊕ (F_n ⊗ P_n)    (4)

where P_t is a systematic encoding of the timestamp t in a vector of the same length as F_t, i.e., 9,996-dimensional. Before we provide details on the creation of P_t, we want to emphasize that all vectors, including y, are of the same length and that all operations are simple and efficient local operations. In the MAP architecture, ⊕ is simple element-wise addition and ⊗ is element-wise multiplication.

The vectors P_t are intended to encode temporal position by making early events dissimilar to later ones. For creating such hypervectors, we use fractional binding [19]. It uses a predefined real-valued hypervector v of the same dimensionality and transfers a scalar value p to a hypervector by applying the following equation:

P(p) = IDFT(DFT(v)^p)    (5)

DFT and IDFT are the Discrete Fourier Transform and the Inverse Discrete Fourier Transform; the exponentiation is element-wise. The resulting vector P(p) represents a systematically encoded hypervector of the scalar p, which means that the procedure preserves the similarity of scalar values (Euclidean distance) within the hyperdimensional vector space (cosine similarity).

To be able to adjust the temporal similarity of feature vectors, we introduce a parameter s, which influences the graded similarity between consecutive timestamps. In HDC-MiniROCKET, the scalar value encoded for time t in a time series of length n is given by p_t = s · t / n. The scale factor s adjusts how fast the similarity of timestamp encodings decreases with increasing difference in t. This is visualized in Fig. 2. If s is equal to 0, all timestamp encodings are the same. s = 1 means the similarity decreases for increasing difference of timestamps and only reaches zero for the difference between the first and the last timestamp of the whole time series. s = 2 means that the similarity reaches zero after half of the length, and so on. s weights the importance of temporal position: the higher its value, the more dissimilar timestamps become.

This systematic encoding of timestamps is exploited in HDC-MiniROCKET in Eq. 4. Although binding and bundling can be implemented as simply as element-wise multiplication and addition, these operations have partly surprising properties in very high-dimensional vector spaces such as the 9,996-dimensional MiniROCKET features. In HDC-MiniROCKET, the underlying mechanism is the similarity-preserving property of the HDC binding operation from the beginning of this section: the similarity of two feature vectors, each bound to the encoding of its timestamp, gradually decreases with increasing difference of the timestamps. This information is maintained when bundling (⊕) feature vectors to create the output y, since the output of bundling is similar to each input. For more details on these underlying mechanisms of hyperdimensional computing, please refer to one of [35, 23, 16].

As described earlier, the proposed temporal encoding creates a more general variant of MiniROCKET: if the parameter s is set to 0, the cosine similarities based on HDC-MiniROCKET are identical to those of MiniROCKET. The higher s, the more distinctive the temporal context inside HDC-MiniROCKET's feature calculation. The selection of the value of s will be discussed and evaluated in Sec. V-C.

IV-C Efficient implementation

A low computational complexity is an important feature of MiniROCKET. HDC-MiniROCKET can be implemented with the same (in fact, even slightly lower) computational complexity during inference.

The efficient implementation of HDC-MiniROCKET builds on two simple observations: First, P_t can be pre-computed for all t, since it only depends on the fixed seed vector v and the scaling parameter s, i.e., it is independent of the input signal. Second, we can omit all additions and multiplications with the binary values when computing y in Eq. 4. The term F_t ⊗ P_t in Eq. 4 can be directly computed by replacing the binarization in MiniROCKET from Eq. 2 with a bipolar comparison:

F_{i,t} = +1 if c^{(i)}_t > b^{(i)}, and -1 otherwise    (6)

where c^{(i)} and b^{(i)} denote the convolution result and bias value associated with output dimension i. This can also directly integrate the multiplication with P_t:

(F_t ⊗ P_t)_i = +P_{i,t} if c^{(i)}_t > b^{(i)}, and -P_{i,t} otherwise    (7)

Here, P_{i,t} is the i-th dimension of the vector time encoding of time step t. Finally, the i-th dimension of the output y of HDC-MiniROCKET is then simply:

y_i = Σ_t (F_t ⊗ P_t)_i    (8)
In summary, the computational steps of HDC-MiniROCKET are:

  1. Compute the filter responses for each kernel-dilation combination (Eq. 1).

  2. Compare each of them to a set of bias values to select either a positive or negative precomputed time encoding (Eq. 7).

  3. Compute the sum over all time steps (Eq. 8).

Other than the precomputation of the time encodings P_t, there are no additional computations compared to MiniROCKET. The PPV computation of the original MiniROCKET even requires an additional division per output dimension; however, if cosine similarities are used, this division could also be omitted in the original MiniROCKET.
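The shortcut of Eqs. 7 and 8 can be sketched for a single convolution channel; the array names and sizes below are illustrative:

```python
# Sketch of the efficient formulation: the comparison with the bias directly
# selects the positive or negative precomputed time encoding, which is then
# summed over time. No intermediate binary vector is materialized.
import numpy as np

rng = np.random.default_rng(4)
n, D = 200, 9996
c = rng.normal(size=n)                # stand-in convolution response (one channel)
beta = 0.0                            # one bias value
P = rng.choice([-1, 1], size=(D, n))  # precomputed time encodings, one column per step

sign = np.where(c > beta, 1, -1)      # bipolar comparison result per time step
y = (P * sign).sum(axis=1)            # select +/- P[:, t] and accumulate over time
```

The result is identical to binarizing first, bipolarizing, and multiplying by P afterwards, but skips the intermediate steps.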

V Experimental Evaluation

We evaluate the performance of the proposed HDC-MiniROCKET in two different setups: (1) the simple synthetic dataset classification task from the introduction and Fig. 1, and (2) the 128 datasets from the UCR benchmark for univariate time series classification. Finally, we will analyze the computational efficiency of our implementation.

We use a Python implementation based on the code of MiniROCKET from the Python library sktime (https://github.com/alan-turing-institute/sktime), which contains the original code of [8] (https://github.com/angus924/minirocket). For classification, we also use the same ridge regression classifier as the original MiniROCKET [8].

V-A Synthetic Dataset

As described before, the global pooling by PPV in MiniROCKET can neglect potentially important temporal information if it is not well captured by the dilated convolutions. To demonstrate possible problems of such behavior, we constructed a simple synthetic dataset, which consists of two classes with high temporal dependency. Example signals for both classes are shown in Fig. 1. Each signal consists of a single sharp peak either in the first (class 1) or second (class 2) half of the signal and additive Gaussian noise. The peak is created using an approximate delta function based on the normal distribution:

δ_a(x) = exp(-(x/a)^2) / (|a| √π)

The parameter a influences the shape of the peak. We chose a value of a that creates a high and sharp peak resembling a Dirac impulse.

The length of the time series is 500 time steps. For each time step, we create one sample in the dataset with the peak centered at this position. Accordingly, the dataset consists of 500 samples, 250 samples of each class.
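The described construction can be sketched as follows; the peak width and the noise level are illustrative choices, not values from the paper:

```python
# Sketch of the synthetic two-class data: a sharp approximate-delta peak
# plus Gaussian noise. Parameter values a and noise_std are illustrative.
import numpy as np

def make_sample(peak_pos, n=500, a=0.5, noise_std=0.1, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(n, dtype=float)
    # approximate delta function based on the normal distribution
    peak = np.exp(-((t - peak_pos) / a) ** 2) / (abs(a) * np.sqrt(np.pi))
    return peak + rng.normal(0.0, noise_std, size=n)

x1 = make_sample(100)   # peak in the first half  -> class 1
x2 = make_sample(400)   # peak in the second half -> class 2
```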

Classification with a random split of 80% training and 20% test data leads to the accuracies in the second column of Table I ("standard case"). It can be seen that MiniROCKET correctly classifies only 65% of the test samples, while the HDC version with s = 1 achieves 97%. An even more drastic result is seen if we select a particularly challenging subset of the samples, i.e., those class-1 and class-2 sample pairs of the original MiniROCKET-transformed signals that have high similarity and can be easily confused. The result is a dataset with 250 highly similar data samples that leads to the classification accuracies in the rightmost column of Table I (again using an 80%-20% split). In this challenging scenario, MiniROCKET without global temporal encoding is not significantly better than random guessing. However, when HDC-based temporal encoding is added to MiniROCKET, it can solve the classification problem with 94.1% accuracy.

Method                  Accuracy [%]      Accuracy [%]
                        (standard case)   (challenging case)
random guess            50.0              50.0
MiniROCKET              65.0              56.8
HDC-MiniROCKET (s=1)    97.0              94.1

TABLE I: Results on the synthetic dataset for random guessing and MiniROCKET without and with temporal encoding.

Fig. 3: Similarity matrices on the synthetic dataset. (left) Original MiniROCKET. (right) HDC-MiniROCKET with explicit temporal encoding. Bright pixels indicate high and dark pixels low similarity. On top of and to the left of the matrices, examples of the synthetic signals are overlaid, colored red for class 1 and blue for class 2.

Fig. 4: Relative accuracy improvements when selecting the optimal s and when s is automatically calculated during training. Only datasets with a nonzero change are shown.

To better illustrate the effect of the time encoding, Fig. 3 shows the two similarity matrices of the original MiniROCKET descriptors (left) and the HDC-MiniROCKET descriptors (right). The i-th row as well as the i-th column correspond to the signal from the dataset with the peak at time step i. The matrix entries are the cosine similarities. The blue and red signals exemplify the shape of samples of classes 1 and 2. The first half of the rows and columns of each similarity matrix refers to class 1 and the second half to class 2. In the left similarity matrix, it can be seen that for the original MiniROCKET without explicit temporal coding, class 1 signals (red) sometimes resemble class 2 signals (blue) and vice versa. In contrast, the right similarity matrix separates the two classes through the explicit temporal coding: class 1 signals are less likely to be similar to class 2 signals.
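A similarity matrix like the ones in Fig. 3 can be computed from a set of descriptors as follows (random stand-in descriptors, not actual MiniROCKET features):

```python
# Sketch: cosine similarity matrix from a set of descriptors; entry S[i, j]
# is the cosine similarity of descriptors i and j.
import numpy as np

rng = np.random.default_rng(5)
desc = rng.normal(size=(20, 64))                           # stand-in descriptors
unit = desc / np.linalg.norm(desc, axis=1, keepdims=True)  # L2-normalize rows
S = unit @ unit.T                                          # pairwise cosine similarities
```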

V-B UCR benchmark

To evaluate the practical benefit of the proposed HDC-MiniROCKET, we show results on the 128 datasets from the UCR benchmark [6]. This standard benchmark has previously been used to compare different time series classification algorithms and contains datasets from a broad range of domains [6]. Since MiniROCKET showed excellent performance (especially computational efficiency) on this dataset collection compared to other state-of-the-art approaches like [36, 22], we only compare against MiniROCKET in our experiments.

Fig. 5: Classification accuracies on the 128 UCR datasets when using MiniROCKET or HDC-MiniROCKET. There are two points for each dataset. Blue color is with an oracle that selects the best scale parameter, orange color is with the simple cross validation for parameter selection. For points in the top-left triangle, HDC-MiniROCKET is better.

Fig. 5 shows the classification accuracies on all UCR datasets for MiniROCKET in comparison to HDC-MiniROCKET. The blue dots show the performance of HDC-MiniROCKET with an oracle that selects the optimal scale value s for each dataset from the set {0, 1, 2, 3, 4, 5, 6}. It is important to keep in mind that, dependent on the underlying task and dataset, explicit time encoding can be intended and helpful, or also unintended and harmful. This oracle and a simple data-driven approach to select the scale are discussed in the next Sec. V-C. The evaluation with the oracle is intended to demonstrate the potential of the explicit time encoding in HDC-MiniROCKET. Fig. 4 shows the relative change in accuracy for the individual UCR datasets. For 81 out of 128 datasets, there is an improvement by the explicit time encoding. Although the improvement is sometimes rather small (the average improvement across these 81 datasets is 3.1%), it also reaches more than 27% for one dataset. For those datasets without improvement, the performance is the same as for MiniROCKET (which is the special case s = 0) if we are able to select the optimal scale.

V-C Selecting the scale

Param. s     0        1        2        3        4        5        6        optimal s   automatically selected s
Mean         0.8540   0.8537   0.8500   0.8450   0.8407   0.8366   0.8336   0.8681      0.8569
Worst case   0.2906   0.3080   0.3054   0.2788   0.1827   0.0865   0.0769   0.3080      0.2991
Best case    1.0      1.0      1.0      1.0      1.0      1.0      1.0      1.0         1.0

TABLE II: Results for different similarity parameters s on the UCR benchmark.

The evaluation from the previous section used an oracle to select the scale s. Since this time scale parameter is physically grounded, we assume that in a practical application there will be expert knowledge to guide the selection of this parameter. When choosing the scale s, it is important to keep two properties in mind:

(1) There is no single value of the scale parameter that works best for all datasets. This is supported by Table II, which shows mean, worst-case, and best-case performances across all 128 UCR datasets for each scale parameter. No scale choice is considerably better than all others. These average statistics show that the average benefit of scale selection across all datasets is rather small; however, as illustrated in Fig. 4, the benefit for individual datasets can be quite high.

(2) The influence of s is quite smooth. Fig. 6 shows the influence of different values of s on the performance on individual datasets. Small changes of s create rather small changes of the performance. Typically, there is a single local maximum (which can be one of the extreme values of the evaluated range of s).

Although we think that expert knowledge about the task at hand is very valuable to decide whether explicit time encoding is helpful and for the selection of s, we also propose a purely data-driven approach to select s: Since the classifier is trained in a supervised fashion, we can assume a set of labeled training samples. For automatic scale selection, we can simply apply cross-validation by further splitting the training set into training and validation splits. We use a 10-fold cross-validation to select s from the same possible values as the oracle. For each fold, we train the classifier on the train split and evaluate on the corresponding validation split (which is, of course, also taken from the UCR train split). We select the value of s that performs best in the highest number of splits. Ties are broken in favour of smaller values.
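The selection rule can be sketched on a stand-in table of per-fold validation accuracies (the numbers below are illustrative, not measured results):

```python
# Sketch of the selection rule: per fold, find the best-performing scale;
# pick the scale that wins the most folds, breaking ties in favour of
# smaller values. The accuracy table is illustrative stand-in data.
import numpy as np

scales = [0, 1, 2, 3, 4, 5, 6]
# acc[f, i]: validation accuracy of scales[i] in fold f (stand-in values)
acc = np.array([
    [0.80, 0.85, 0.85, 0.70, 0.60, 0.55, 0.50],
    [0.82, 0.90, 0.88, 0.75, 0.65, 0.60, 0.52],
    [0.79, 0.84, 0.86, 0.72, 0.61, 0.58, 0.51],
])

wins = np.zeros(len(scales), dtype=int)
for f in range(acc.shape[0]):
    # credit every scale that reaches this fold's best accuracy
    wins[acc[f] == acc[f].max()] += 1
best_scale = scales[int(np.argmax(wins))]  # argmax picks the smallest index on ties
```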

Fig. 5 and Fig. 4 also provide results for this simple automatic scale selection. Although the performance is considerably below that of the oracle, the results are again better than those of the original MiniROCKET. In particular, since MiniROCKET is the special case s = 0, HDC-MiniROCKET with the oracle cannot be worse than MiniROCKET. Very importantly, even without any knowledge about the task, the automatic scale selection is in the worst case only 0.5% below the performance of MiniROCKET.

Fig. 6: Differences in absolute accuracies between HDC-MiniROCKET and MiniROCKET for different values of s.

Fig. 7: Pairwise comparison of inference time of MiniROCKET and HDC-MiniROCKET.

V-D Computational effort

As discussed before, apart from the precomputation of the time encodings (which only has to be done once), the computational effort for MiniROCKET and HDC-MiniROCKET is almost identical. Fig. 7 shows the pairwise comparison of the inference times of both approaches (computed on a standard desktop computer with an Intel i7-5775C CPU). It can be seen that HDC-MiniROCKET is occasionally slower on some datasets and consistently marginally faster on the fast datasets. Presumably this is due to the difference in the calculation of PPV: MiniROCKET averages values, while HDC-MiniROCKET only accumulates them. On average, both algorithms require about 1.57 s per dataset.
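The PPV difference mentioned above is only a constant factor: averaging and accumulating the thresholded convolution outputs yield the same feature up to scaling by the series length. A minimal sketch with toy values:

```python
import numpy as np

# Toy convolution outputs for one kernel/bias combination (hypothetical values).
conv_out = np.array([-0.3, 0.1, 0.7, -0.2, 0.5])

# MiniROCKET's PPV feature: proportion of positive values (a mean).
ppv_mean = (conv_out > 0).mean()   # 3 positives out of 5 -> 0.6

# HDC-MiniROCKET accumulates instead of averaging; without time encoding
# this is just the count, i.e. PPV up to the constant factor 1/n.
ppv_sum = (conv_out > 0).sum()     # 3
assert ppv_sum / len(conv_out) == ppv_mean
```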

VI Conclusion

We proposed to extend the MiniROCKET algorithm for time series classification by incorporating an explicit temporal context through hyperdimensional computing. Very importantly, the explicit time encoding with HDC does not affect computational costs during inference. HDC-MiniROCKET is a generalization of MiniROCKET in which a scale parameter s adjusts the importance of the time encoding in the descriptor of the time series. MiniROCKET becomes the special variant with s = 0.
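The role of s can be illustrated with fractional power encoding of timestamps. The following is a sketch in an FHRR-style VSA (complex phasors, elementwise binding), not the authors' exact implementation: each "positive value at time t" event is bound to a timestamp vector obtained by raising a random base vector to the fractional power s·t/n, and for s = 0 every timestamp vector collapses to all-ones, recovering the plain PPV accumulation of MiniROCKET.

```python
import numpy as np

rng = np.random.default_rng(42)

def time_encoded_feature(pos_mask, s, dim=64):
    """Accumulate 'value positive at time t' events bound to timestamps.

    Sketch only: fractional power encoding timestamp(t) = base ** (s*t/n),
    realised on the phases of a random complex phasor base vector.
    """
    n = len(pos_mask)
    phases = rng.uniform(-np.pi, np.pi, dim)  # random base vector (angles)
    feat = np.zeros(dim, dtype=complex)
    for t, active in enumerate(pos_mask):
        if active:
            feat += np.exp(1j * phases * s * t / n)  # bind event to timestamp
    return feat

mask = np.array([0, 1, 1, 0, 1], dtype=bool)
# s = 0: every timestamp vector is all-ones, so each component of the
# feature equals the plain positive-value count -- the MiniROCKET case.
f0 = time_encoded_feature(mask, s=0)
assert np.allclose(f0, mask.sum())
```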

Experiments on a classification task on a synthetic dataset with high temporal dependencies demonstrated that such explicit temporal coding can prevent MiniROCKET from failure. An evaluation on the 128 UCR datasets demonstrated the potential of the proposed approach on a standard time series classification benchmark. It is very important to keep in mind that not all tasks and datasets benefit from explicit temporal encoding (e.g., we can similarly construct datasets where temporal encoding is harmful). In HDC-MiniROCKET, this is reflected in the scale parameter s. The experiments demonstrated that the proposed approach with an oracle that selects the best parameter can achieve classification improvements on about 2/3 of the datasets, by up to 27%. In practice, selecting s should incorporate knowledge about the particular task. We demonstrated that selecting s with a simple, purely data-driven approach increased performance on 17 of the 128 datasets. However, the presented automatic selection of s is only a simple, preliminary approach; the very promising results of the oracle demonstrate the general potential of the proposed time encodings and show that automatic scale selection is a promising direction for future research.

