1 Introduction
Time series classification is a general task that is useful across many subject-matter domains and applications. The overall goal is to identify a time series as coming from one of possibly many sources or predefined groups, using labeled training data. That is, in this setting we conduct supervised learning, where the different time series sources are considered known. For example, Holan et al. (2010) use signals to classify animal behavior, and Lines et al. (2011)
classify household devices based on electricity consumption time series. A closely related problem is time series clustering, where the goal is still to group a collection of time series by their corresponding sources, but no labels for the true sources are known. This constitutes an exercise in unsupervised learning. Time series clustering is often conducted using either hierarchical clustering methods
(Kakizawa et al., 1998; Kumar et al., 2002) or a $k$-means approach (Vlachos et al., 2003; Gullo et al., 2012). Aghabozorgi et al. (2015) give a review of many modern approaches to time series clustering. Although the clustering techniques themselves may not be pertinent in a supervised learning setting, data processing or feature extraction steps are often helpful in both the supervised and unsupervised realms. For example,
Shumway (2003) demonstrates that the construction of time-varying spectra can be of use for both classification and clustering of time series. Although some statistical models, such as hidden Markov models, may be used to classify time series
(Chai and Vercoe, 2001), much of the modern research on time series classification and prediction has come from the machine learning community. For instance,
Medeiros et al. (2019) explore the use of random forests to forecast inflation levels. Many of these machine learning methods perform quite well for prediction, but lack properties that are critical to the field of statistics, such as uncertainty quantification, the ability to perform inference, and feature extraction. For example, a distance summary measure such as dynamic time warping can be used in conjunction with nearest neighbor methods to classify time series
(Jeong et al., 2011). Deep learning methods can also be used to classify time series. Fawaz et al. (2019) review and compare many modern deep learning methods that have been used for time series classification, including convolutional neural networks and echo state networks. Heaton et al. (2017) demonstrate how deep learning methods may be used for prediction and classification, with application to the construction of stock portfolios in finance. Although many of the methods discussed so far consider the series in the time domain, it is also possible to use frequency domain analysis to classify time series. For example,
Holan et al. (2012) use a time-frequency representation to classify periods of economic expansion and recession. Holan and Ravishanker (2018) review many frequency domain methods that may be used to both classify and cluster time series. For nonstationary time series, short-time Fourier transforms may be necessary, such as those used by
Holan et al. (2010) and Shumway (2003). Similarly, for nonlinear time series, higher-order spectral analysis (HOSA) may be necessary to capture higher-moment properties.
Harvill et al. (2013) provide an example of HOSA applied to the problem of time series clustering by utilizing various distance metrics in coordination with a hierarchical clustering algorithm, while Harvill et al. (2017) discuss clustering nonstationary, nonlinear time series. To the best of our knowledge, there is no literature available on statistical classification
of time series using HOSA, particularly for time series that occur in business and industry. One difficulty may be that the high dimensionality associated with HOSA techniques can be problematic for many model-based approaches. We have found that simple dimension reduction approaches, such as principal components analysis (PCA), are generally not sufficient for dealing with this problem. To fill this gap in the literature, we develop a HOSA method relying on deep learning that can be used to accurately classify time series. In addition, we use a type of Bayesian deep learning that allows for uncertainty quantification. This provides many of the uncertainty quantification benefits associated with Bayesian modeling, while accommodating the high-dimensional data structure corresponding to HOSA and avoiding the need for costly Markov chain Monte Carlo (MCMC) techniques. Furthermore, our method utilizes a variant of feature extraction that can be used to perform inference by identifying the key frequencies used to determine each class probability. As such, this method is useful as a statistical tool for classifying nonlinear time series.
The outline of this article is as follows. In Section 2, we review the methodological work necessary to construct our proposed method. This includes a specific type of HOSA based on the bispectrum, convolutional neural networks (CNNs), and the dropout procedure. We describe our proposed method in Section 3, where we also discuss a competing frequency domain method that does not rely on HOSA. A simulation study is presented in Section 4 in order to evaluate our proposed method. In Section 5, we illustrate our method on two example applications: the first involves the classification of Google Trends data, and the second involves the classification of household devices based on electricity consumption, as discussed in Lines et al. (2011). Finally, in Section 6, we discuss the advantages of our proposed methods and give some insight into possible future work.
2 Review of Model Components
In order to present our methodology, we first review the necessary components of the model. The first component is the bispectrum, which is used as a form of feature engineering for the raw time series. We then discuss CNNs, which are the main building block of our model. Finally, we consider dropout, which may be used as a form of regularization as well as a means for uncertainty quantification.
2.1 Bispectrum
Traditional spectral domain analysis, based on the Fourier transform of the autocovariance function, enables classification and clustering of stationary time series based only on their second-order properties. While this can be sufficient for distinguishing between linear time series, the use of higher-order spectral analysis (HOSA) is attractive for classifying or clustering nonlinear time series. Higher-order spectra (HOS) are Fourier transforms of third- or higher-order moments. Properties of HOS have been discussed in Rosenblatt and Van Ness (1965), Van Ness (1966), Brillinger and Rosenblatt (1967), and Hinich (1982), among others.
Let $\{e_t\}$ represent a white noise process with variance $\sigma_e^2$. For a zero-mean, third-order stationary series $\{X_t\}$, the autocovariance and third-order moment functions are defined as
$$C(h) = \mathrm{E}\left[X_t X_{t+h}\right] \quad \text{and} \quad C(h, k) = \mathrm{E}\left[X_t X_{t+h} X_{t+k}\right],$$
respectively.
If $\sum_{h}\sum_{k} |C(h, k)| < \infty$, the bispectral density function is
$$f(\omega_1, \omega_2) = \frac{1}{(2\pi)^2} \sum_{h=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} C(h, k)\, \exp\{-i(h\omega_1 + k\omega_2)\}.$$
Taking note of the symmetries in $f(\omega_1, \omega_2)$ across the unit square, the principal domain of $\omega_1$ and $\omega_2$ can be shown to be the triangle with boundaries $\omega_2 = 0$, $\omega_1 = \omega_2$, and $2\omega_1 + \omega_2 = 2\pi$.
Let $X_1, \dots, X_T$ represent a mean-corrected realization of the process $\{X_t\}$, and without loss of generality, assume $0 \le h \le k$. Then the sample autocovariance and third-moment functions are
$$\widehat{C}(h) = \frac{1}{T} \sum_{t=1}^{T-h} X_t X_{t+h} \quad \text{and} \quad \widehat{C}(h, k) = \frac{1}{T} \sum_{t=1}^{T-k} X_t X_{t+h} X_{t+k},$$
where values at the remaining lags follow from the symmetry relations. Let $\lambda(\cdot)$ represent a symmetric lag window with $\lambda(0) = 1$, and let $\lambda(\cdot, \cdot)$ represent a two-dimensional lag window satisfying
$$\lambda(0, 0) = 1, \qquad \lambda(u, v) = \lambda(v, u) = \lambda(-u, v - u),$$
corresponding to the symmetries in $C(h, k)$ over the unit square. Define the natural frequencies $\omega_j = 2\pi j / T$ for $j = 0, 1, \dots, \lfloor T/2 \rfloor$, where $\lfloor \cdot \rfloor$ represents the greatest integer value, and consider the pairs $(\omega_j, \omega_k)$ lying in the principal domain. Then the sample spectral and bispectral density functions are defined as
$$\widehat{f}(\omega_j) = \frac{1}{2\pi} \sum_{|h| < T} \lambda\!\left(\frac{h}{B_T}\right) \widehat{C}(h)\, e^{-i h \omega_j}, \quad (1)$$
$$\widehat{f}(\omega_j, \omega_k) = \frac{1}{(2\pi)^2} \sum_{|h| < T} \sum_{|m| < T} \lambda\!\left(\frac{h}{B_T}, \frac{m}{B_T}\right) \widehat{C}(h, m)\, e^{-i(h\omega_j + m\omega_k)}, \quad (2)$$
where $B_T$ is a bandwidth parameter.
Harvill et al. (2013) use a normalized sample bispectral density to cluster nonlinear time series based on various distance metrics. Their work utilizes the smoothed sample bispectral density defined by (2). Because this sample bispectral density reflects third-moment properties, it may be used to classify nonlinear time series. In contrast to their approach, we consider the unsmoothed bispectrum, noting that our approach provides an implicit smoothing. Notably, the dimensionality of the sample bispectrum can be quite high, and thus dimension reduction may be required.
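The raw (unsmoothed) sample bispectrum can be computed directly from the discrete Fourier transform of the series. The following is a minimal numpy sketch under standard conventions; the function name and the normalization by $T$ are our choices, not necessarily those used in the paper:

```python
import numpy as np

def biperiodogram(x):
    """Unsmoothed sample bispectrum (biperiodogram) of a series.

    Returns the d x d matrix I(w_j, w_k) = X(w_j) X(w_k) conj(X(w_j + w_k)) / T
    evaluated at the natural frequencies w_j = 2*pi*j/T, j = 0, ..., floor(T/2).
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                      # mean-correct the realization
    T = len(x)
    X = np.fft.fft(x)                     # DFT at all natural frequencies
    d = T // 2 + 1
    j = np.arange(d)
    # index (j + k) mod T picks out X(w_j + w_k), with aliasing past w_{T-1}
    B = X[j][:, None] * X[j][None, :] * np.conj(X[(j[:, None] + j[None, :]) % T])
    return B / T

# Example: the biperiodogram of white noise shows no pronounced structure
rng = np.random.default_rng(0)
I2 = biperiodogram(rng.standard_normal(100))
print(I2.shape)  # (51, 51)
```

Note that the result inherits the symmetry $I(\omega_j, \omega_k) = I(\omega_k, \omega_j)$, which is why attention can be restricted to the principal domain.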
2.2 Convolutional Neural Networks
Convolutional neural networks (CNNs), introduced by LeCun et al. (1989), are a powerful tool for processing image data. The basis of these networks is the two-dimensional discrete convolution operator, defined as
$$(K * I)(i, j) = \sum_{m} \sum_{n} I(m, n)\, K(i - m,\, j - n), \quad (3)$$
where $K$ is a kernel function and $I$ is the image on which the convolution is performed. The values of $i$ and $j$ denote the indices of pixels in the image. An illustration of the discrete convolution operation can be found in Figure 1.
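As a concrete illustration, a "valid"-region version of the discrete convolution in (3) can be written in a few lines of numpy; the direct double loop is for clarity only, since practical implementations use fast vectorized routines:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D discrete convolution: each output pixel is a
    kernel-weighted sum of the image patch under the (flipped) kernel."""
    kh, kw = kernel.shape
    H, W = image.shape
    k = kernel[::-1, ::-1]               # flip the kernel for true convolution
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out

img = np.arange(16.0).reshape(4, 4)
avg = np.full((2, 2), 0.25)             # 2x2 averaging kernel
print(conv2d(img, avg))                 # each pixel replaced by a local average
```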
Convolution can be thought of as creating a new image by replacing each pixel with a weighted average of nearby pixels. The weights are determined by the kernel, which is often referred to as a filter in the deep learning literature (see, for example, Zeiler and Fergus (2014)). By including convolution as one component of a deep model, it is possible to use gradient descent, or some other optimization technique, to estimate the values of the weights. Note that, in practice, deep learning models typically use stochastic gradient descent, whereby each iteration of the gradient descent algorithm is based on only a sample of the training data. It is also typical to use many filters in a single convolution layer. Let $k_l$ be the number of filters in convolution layer $l$. Then this layer will consist of $k_l$ kernels that need to be learned, and application of this convolution layer to a single image will result in $k_l$ new images. For all of our analysis, we fix the number of filters per layer in advance. Another operation that is commonly used in CNNs is called max-pooling, which works by dividing the image into rectangular subsections and taking the maximum value within each subsection, resulting in a new, lower-dimensional image. Max-pooling can be thought of as applying the maximum operator over a non-overlapping contiguous grid placed upon the image. Along with dimension reduction, max-pooling is beneficial because it helps provide translation invariance in the original image space.
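Max-pooling over a non-overlapping grid, as described above, can be sketched as:

```python
import numpy as np

def max_pool(image, p):
    """Max-pooling over a non-overlapping p x p grid: keeps the largest
    value in each block, shrinking the image by a factor of p per axis."""
    H, W = image.shape
    H, W = H - H % p, W - W % p          # trim so the blocks tile exactly
    blocks = image[:H, :W].reshape(H // p, p, W // p, p)
    return blocks.max(axis=(1, 3))

img = np.arange(16.0).reshape(4, 4)
print(max_pool(img, 2))                 # [[ 5.  7.] [13. 15.]]
```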
Goodfellow et al. (2016) review both the convolution operator and max-pooling, and provide details on the optimization techniques commonly used in deep learning.

2.3 Dropout
A common tool to add parameter regularization and prevent overfitting in deep models is dropout. Dropout is performed by randomly setting a fixed proportion (known as the dropout rate) of the outputs from a given layer to zero. The dropout rate can be tuned via cross-validation to find an optimal value. Traditionally, dropout has been used while training the model, but at prediction time dropout is not used, and the parameters are scaled to account for this.
Gal and Ghahramani (2016) take the approach of using dropout both for model training as well as for prediction. In this manner, rather than a single prediction, a distribution of predictions is obtained, since the outputs of each layer being randomly dropped may vary. The authors show that dropout in this way can be interpreted as a variational Bayes approximation to a deep Gaussian process model. This gives a theoretically justified measure of model uncertainty, with little additional computational burden.
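The prediction-time dropout scheme of Gal and Ghahramani (2016) amounts to repeated stochastic forward passes through the fitted network. The following numpy sketch illustrates the idea; the two-layer architecture, dimensions, and dropout rate are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def dropout(h, rate, rng):
    """Zero out each unit independently with probability `rate`,
    rescaling survivors so the expected output is unchanged."""
    mask = rng.random(h.shape) >= rate
    return h * mask / (1.0 - rate)

def forward(x, W1, W2, rate, rng):
    """One stochastic forward pass with dropout kept on at prediction time."""
    h = np.maximum(W1 @ x, 0.0)          # ReLU hidden layer
    h = dropout(h, rate, rng)
    z = W2 @ h
    return np.exp(z) / np.exp(z).sum()   # softmax class probabilities

W1 = rng.standard_normal((16, 4))
W2 = rng.standard_normal((2, 16))
x = rng.standard_normal(4)
samples = np.array([forward(x, W1, W2, 0.1, rng) for _ in range(100)])
# The spread of the 100 sampled class probabilities quantifies model uncertainty
print(samples.mean(axis=0), samples.std(axis=0))
```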
3 Proposed Methodology
Using the components outlined in Section 2, we now present our methodology. In addition, we compare against a similar classification technique that considers only the time series spectrum (i.e., no third-order properties).
3.1 Bispectrumbased Convolutional Neural Networks
We introduce the Bispectrum-based Convolutional Neural Network (BCNN), a deep network with $L$ layers, for classification of time series. Because the bispectrum captures third-moment properties, this method can outperform spectral-based classification methods on nonlinear time series.
The input into the BCNN is an $N \times d \times d$ array, where $N$ is the number of time series in the sample and $d$, determined by the length $T$ of the time series, is the number of natural frequencies at which the bispectrum is evaluated. This array consists of the raw sample bispectra for the $N$ time series in the sample. We do not use the smoothed bispectra as our input, as smoothing is a convolution operation that can be learned by the neural network. Note that if the time series have different lengths, they may be zero-padded to a common length. Input images are always scaled such that each pixel has mean zero and standard deviation one.
The first two hidden layers in the BCNN are a convolution layer with max pooling, followed by a dropout layer. A nonlinear transformation is applied after each max-pooling stage; in our case, we use the rectified linear unit (ReLU) function, defined as $g(x) = \max(0, x)$, which is applied elementwise. Subsequent convolution and dropout layers may be added as well. The number of convolution/dropout layers may be data dependent, but we found that two of each worked well for our analyses, with the max-pooling and kernel sizes chosen separately for the first and second convolution layers. The convolution stage of the BCNN is followed by a densely connected hidden layer. The output of time series $i$ through a dense layer indexed by $l$, with $u_l$ hidden units, is defined as
$$h_i^{(l)} = g\!\left( W^{(l)} h_i^{(l-1)} + b^{(l)} \right), \quad (4)$$
where $h_i^{(l-1)}$ is the output vector from layer $l-1$ with dimension $u_{l-1}$, $W^{(l)}$ is a $u_l \times u_{l-1}$ parameter matrix, and $b^{(l)}$ is a $u_l$-dimensional vector of intercept (also called bias) terms. The function $g(\cdot)$ is some nonlinear activation function; again, we use the ReLU function. Note that the dense layer simply takes an input, $h_i^{(l-1)}$, and performs a linear transformation followed by a nonlinear transformation. As with the convolution layers, the number of densely connected hidden layers and their corresponding numbers of hidden units may be varied. We found that, for our work, one dense layer of 8 hidden units worked well. We also follow the last hidden layer with a dropout layer. Although additional dropout layers may be added with varying dropout rates, we found that they did not add much benefit, and thus we use only one in order to limit the necessary cross-validation.
The final layer of the BCNN is the output layer. For classification of time series into $G$ categories, this layer is defined as
$$p_i = \mathrm{softmax}\!\left( W^{(L)} h_i^{(L-1)} + b^{(L)} \right), \qquad \mathrm{softmax}(z)_g = \frac{\exp(z_g)}{\sum_{g'=1}^{G} \exp(z_{g'})}.$$
The output, $p_i$, is a vector of length $G$ giving the class probabilities for time series $i$. Similar to the hidden dense layers, $W^{(L)}$ is a $G \times u_{L-1}$ parameter matrix, $b^{(L)}$ is a $G$-dimensional bias vector, and $h_i^{(L-1)}$ is the output from the previous layer. Note that the exponential function in the softmax is applied elementwise to the vector $W^{(L)} h_i^{(L-1)} + b^{(L)}$. Our BCNN model was fit using Keras
(Chollet, 2015). We used a cross-entropy loss function with the Adam optimizer (Kingma and Ba, 2014). Adam is a variant of stochastic gradient descent that uses second-moment approximations for more efficient updates. All optimization was done for 20 epochs, using a batch size of 8. These values could be tuned, but we found that our results were not very sensitive to them.
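To make the dense and output layers concrete, the following numpy sketch traces one input through equation (4) and the softmax output layer. The dimensions and random weights are illustrative; in practice the weights are learned by Keras:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    z = z - z.max()                       # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

def bcnn_head(h, W1, b1, W2, b2):
    """Dense hidden layer (eq. 4) followed by the softmax output layer."""
    h1 = relu(W1 @ h + b1)                # linear map, then ReLU
    return softmax(W2 @ h1 + b2)          # class probabilities

rng = np.random.default_rng(0)
h = rng.standard_normal(20)               # flattened convolution features
W1, b1 = rng.standard_normal((8, 20)), np.zeros(8)   # 8 hidden units
W2, b2 = rng.standard_normal((2, 8)), np.zeros(2)    # G = 2 classes
p = bcnn_head(h, W1, b1, W2, b2)
print(p, p.sum())                         # two class probabilities summing to 1
```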
There are many architectural decisions that may be varied in the BCNN: the number of convolution layers and the number of filters per layer, the kernel sizes, and the number of dense layers and the number of hidden units per layer. As a general rule, we found that starting with a small network (a single convolution layer and a single hidden dense layer with relatively few filters/hidden units) and gradually increasing its size until performance flattens yields good results. Our analysis is meant to be illustrative, so we use the architecture described in this section, but these settings could be further tuned if desired.
3.2 Comparison to Spectral-SSVS
In order to compare our proposed BCNN to methodology that considers only spectral properties, we introduce spectral stochastic search variable selection (spectral-SSVS). This method is based on dimension reduction and variable selection with a binary outcome, and is a special case of the approach of Holan et al. (2010). We begin by calculating the periodogram, the unsmoothed version of (1), for each time series. Since the periodogram can be high dimensional, we use a principal components analysis to reduce the feature space. The number of principal components (PCs) retained should depend on the data, but for our simulation study, we retained 20 PCs.
The spectral-SSVS method then uses a Bernoulli response model (for classification into two classes), with the retained PCs as covariates. Dimension reduction results in a new set of features, but each feature in this reduced space is not necessarily related to the response of interest; in this case, the response is the class to which each time series belongs. We require some way to select the features that are important for predicting the class associated with a given time series, and we recommend Bayesian SSVS as shown by George and McCulloch (1997). Because our response is binary, we can use data augmentation to form a probit model, as done by Albert and Chib (1993). The full model, for $i = 1, \dots, N$, is
$$Y_i = \mathbb{1}(Z_i > 0), \qquad Z_i = \mathbf{x}_i' \boldsymbol{\beta} + \epsilon_i, \qquad \epsilon_i \overset{iid}{\sim} N(0, 1),$$
$$\beta_j \mid \gamma_j \sim (1 - \gamma_j)\, N(0, \tau_j^2) + \gamma_j\, N(0, c_j^2 \tau_j^2), \qquad \gamma_j \sim \mathrm{Bernoulli}(\pi_j).$$
Here $Y_i = 1$ if the $i$th time series is of the first class and $Y_i = 0$ if the time series is of the second class. The continuous latent variable $Z_i$ is used to relate the response to the linear predictor. The covariates for the $i$th time series, $\mathbf{x}_i$, as well as $\boldsymbol{\beta}$, are vectors whose elements correspond, in our case, to the retained PCs. All analysis done here includes an intercept term in the construction of the regression coefficients. The prior on $\boldsymbol{\beta}$ is the SSVS prior, where the mixture-of-normals representation allows for the use of a Gibbs sampler. The constants $c_j$, $\tau_j$, and $\pi_j$ are tuning parameters that must be selected by the user.
The parameter $\gamma_j$ can be seen as an indicator of whether or not the $j$th covariate is "selected" as important. For this reason, we would like $\tau_j$ to be small and $c_j$ to be large, so that if $\gamma_j = 0$, the prior variance on $\beta_j$ will be small and $\beta_j$ will be shrunk towards zero. Cross-validation can be used to tune these parameters. In our case, we standardized all covariates, and for the probit SSVS model given here with standardized covariates, a common small $\tau_j$, large $c_j$, and shared prior inclusion probability $\pi_j$ for all $j$ seemed to work well. Further fine-tuning could be done, but we found that this yields minimal gains in prediction accuracy.
Because this method is Bayesian, we gain a full posterior distribution for the selection of each covariate. Specifically, at each iteration of the MCMC, the current value of $\gamma_j$ indicates whether the $j$th covariate is selected. One could average over all MCMC samples and then refit a model using only the covariates that are selected with probability greater than some threshold, and use this refit model to predict the class of out-of-sample data. Another option is Bayesian model averaging (BMA) (Hoeting et al., 1999): at each iteration of the MCMC, we make a prediction for the class of out-of-sample data using the current parameter values, and then average the class predictions over all iterations to obtain a class probability that accounts for the uncertainty in all parameters. We use BMA for all class predictions in our simulation study.
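A toy Gibbs sampler for this probit SSVS model, combining the Albert and Chib (1993) data augmentation with the normal-mixture prior updates, might look as follows. The tuning values, the rejection-based truncated normal sampler, and the demo data are illustrative choices of ours, not the settings used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_truncnorm(mean, positive, rng):
    """Rejection-sample N(mean, 1) truncated to z > 0 (or z < 0)."""
    while True:
        z = rng.normal(mean)
        if (z > 0) == positive:
            return z

def probit_ssvs(X, y, tau=0.1, c=10.0, pi=0.5, iters=500, rng=rng):
    """Toy Gibbs sampler for probit SSVS (Albert-Chib augmentation).

    Returns posterior draws of beta and of the inclusion indicators gamma.
    """
    n, p = X.shape
    beta = np.zeros(p)
    gamma = np.ones(p, dtype=int)
    betas, gammas = [], []
    for _ in range(iters):
        # 1) latent Z_i | beta, Y_i follows a truncated normal
        mu = X @ beta
        z = np.array([sample_truncnorm(m, bool(yi), rng)
                      for m, yi in zip(mu, y)])
        # 2) beta | z, gamma: conjugate normal update under the mixture prior
        D = np.where(gamma == 1, (c * tau) ** 2, tau ** 2)   # prior variances
        V = np.linalg.inv(X.T @ X + np.diag(1.0 / D))
        beta = rng.multivariate_normal(V @ X.T @ z, V)
        # 3) gamma_j | beta_j: Bernoulli, comparing the two prior densities
        d1 = pi * np.exp(-beta**2 / (2 * (c * tau) ** 2)) / (c * tau)
        d0 = (1 - pi) * np.exp(-beta**2 / (2 * tau**2)) / tau
        gamma = rng.random(p) < d1 / (d1 + d0)
        betas.append(beta.copy())
        gammas.append(gamma.copy())
    return np.array(betas), np.array(gammas)

# Tiny demo: only the first covariate determines the class label
X = rng.standard_normal((100, 3))
y = (X[:, 0] > 0).astype(int)
betas, gammas = probit_ssvs(X, y, iters=200)
print(gammas[100:].mean(axis=0))   # inclusion probabilities; first should be near 1
```

Averaging class predictions over the retained draws of `beta` would then give the BMA class probabilities described above.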
Note that we did attempt to apply SSVS to the PCs generated from the sample bispectra rather than the spectra. However, this methodology did not improve upon spectral-SSVS, likely because nonlinear modeling is required to exploit bispectral features. We do not present those results in this work.
4 Simulation Study
Harvill et al. (2013) use the bispectral periodogram to perform clustering on time series using various distance metrics. Although their goal was similar to our goal of classification, there are some key differences. They did not use a model to perform clustering, and thus no training data were used. They also did not have class labels, so no loss is incurred as long as two time series that are truly from separate classes are placed into separate clusters. In our case, if two time series from different classes were both classified into the wrong class, each would reduce classification accuracy, even though the two time series were not placed into the same class. We perform a simulation study similar to that of Harvill et al. (2013), although the two studies cannot be compared on equal footing due to their difference in goals. Instead of the median Rand index they used, we use prediction accuracy as a comparable singular measure of success for the goal of classification. Prediction accuracy is the ratio of the number of correct classifications to the number of classifications performed. We also consider the area under the ROC curve (AUC).
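Both summary measures are straightforward to compute from a vector of predicted class probabilities; a minimal numpy sketch (function names are ours) is:

```python
import numpy as np

def accuracy(p, y, threshold=0.5):
    """Fraction of correct classifications at the given probability threshold."""
    return np.mean((p > threshold).astype(int) == y)

def auc(p, y):
    """Area under the ROC curve: the probability that a randomly chosen
    positive case receives a higher score than a randomly chosen negative."""
    pos, neg = p[y == 1], p[y == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

y = np.array([1, 1, 1, 0, 0, 0])
p = np.array([0.9, 0.8, 0.4, 0.6, 0.3, 0.2])
print(accuracy(p, y))   # 4 of 6 classifications correct
print(auc(p, y))        # 8 of 9 positive-negative pairs correctly ordered
```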
Our simulation study considers 7 different nonlinear processes. For each possible pair of processes, we simulate 200 time series of each process (400 total time series) with 100 time points each. We randomly split the simulated data into a training and a validation set of 200 time series each. We calculate the bispectral periodograms as inputs into our BCNN model. After fitting the BCNN model, we generate 100 ensemble predictions on the validation data. We use the mean predictions over the ensembles to calculate both prediction accuracy and AUC. The prediction accuracy is obtained from these predictions by noting the number of correct classifications using a probability threshold of 0.5. For BCNN, we use two convolution layers with max pooling and ReLU activation, and one hidden dense layer with 8 hidden units before the output layer, again with ReLU activation. We also apply dropout. We compare our results to the spectral-SSVS model, which is trained and validated on the same data. For spectral-SSVS, we keep 20 PCs and use the tuning settings described in Section 3.2.
The processes considered are the same nonlinear processes considered by Harvill et al. (2013): three bilinear (BILIN) processes, two self-exciting threshold autoregressive (SETAR) processes, and two other nonlinear processes, namely an exponential autoregressive (EXPAR) model and a polynomial autoregressive (POLYAR) model. The BILIN models used are those from Rao and Gabr (2012), the SETAR models are from Tong and Lim (1980), and the EXPAR model is from Jones (1978).
The results of this simulation can be found in Table 1. In most cases, both methods work extremely well, indicating that for these processes, the second-order properties alone are enough to differentiate them. In the cases where spectral-SSVS outperforms BCNN, the difference in performance is quite small. However, there are a few cases, most notably III vs. VI, where BCNN outperforms spectral-SSVS by a considerable margin.
Process 1  Process 2  BCNN Accuracy  SSVS Accuracy  BCNN AUC  SSVS AUC 
I  II  0.80  0.76  0.88  0.85 
I  III  0.98  0.90  1.00  1.00 
I  IV  1.00  1.00  1.00  1.00 
I  V  1.00  1.00  1.00  1.00 
I  VI  0.98  0.97  0.99  0.99 
I  VII  0.84  0.87  0.92  0.93 
II  III  0.94  0.92  0.96  0.99 
II  IV  1.00  1.00  1.00  1.00 
II  V  1.00  1.00  1.00  1.00 
II  VI  0.96  0.94  1.00  0.99 
II  VII  0.98  0.96  1.00  0.99 
III  IV  1.00  0.98  1.00  1.00 
III  V  1.00  0.94  1.00  1.00 
III  VI  1.00  0.88  1.00  0.92 
III  VII  0.98  0.90  1.00  0.98 
IV  V  1.00  1.00  1.00  1.00 
IV  VI  1.00  1.00  1.00  1.00 
IV  VII  1.00  1.00  1.00  1.00 
V  VI  1.00  1.00  1.00  1.00 
V  VII  1.00  1.00  1.00  1.00 
VI  VII  0.97  0.96  1.00  1.00 
5 BCNN for Two Business Applications
Using two data sources relevant to industry interests, we illustrate the utility of the BCNN. Comparison is also made to spectrum-based classification techniques. We first consider the classification of Google Trends data, which exhibit highly nonlinear behavior. We then show results for the classification of various electronic devices based on electricity consumption.
5.1 Google Trends Data
We apply our methodology to a real nonlinear time series classification problem. Specifically, we obtain weekly Google Trends results for 200 search terms over the 52-week period ending on March 31, 2019. Each time series is scaled between 0 and 100, with 100 representing the maximum search volume for that time series during the 52-week period. The search terms consist of the names of 100 different actors and 100 different music artists, with the goal of classifying each Google Trends time series as belonging to either an actor or a music artist. We display the bispectral periodograms for two example searches in Figure 2.
Typically in a classification problem such as this, one would split the data into a training, validation, and test set. The model would be trained on the training set, and any tuning parameters would be chosen based on out-of-sample prediction using the validation set. Finally, the test set would be used to gauge out-of-sample performance (see Draper (2013) for examples of this framework). Because we have a limited sample size, we instead compare the proposed BCNN to spectral-SSVS using a leave-out cross-validation scheme. For each iteration of the cross-validation routine, we randomly sample 10 time series as a validation set and another 10 as a test set, leaving the remaining 180 time series for training. We fit the model using the training set and make predictions on both the validation set and the test set. Finally, we compare our predicted vs. true responses over all iterations of the cross-validation routine. The dropout rate was chosen based on the validation results, and the test set results were used to estimate out-of-sample accuracy and area under the ROC curve (AUC). In this example, we use 50 iterations of cross-validation, resulting in 500 total predictions each for the validation and test sets.
We use the same BCNN architecture as in our simulation study. The only parameter we tune is the dropout rate, using a grid search over a small set of candidate values. We also use the same spectral-SSVS settings as in the simulation. Spectral-SSVS attains an out-of-sample accuracy of 0.556 and an AUC of 0.626, indicating that second-moment properties are likely not sufficient for classifying these time series. In contrast, our best results for BCNN, using a dropout rate of 0.1, yield an accuracy of 0.634 and an AUC of 0.649. BCNN is more successful than spectral-SSVS at this difficult classification problem, even with the limited sample size. The third-moment properties add critical information, and the BCNN works well by providing dimension reduction of the bispectra while simultaneously modeling complex nonlinear functions.
In Figure 3, we show the posterior distributions of the probability that the time series is from an ‘actor’ search term, for two selected ‘actor’ time series, after fitting on the full dataset. Notice that for the ‘Glenn Close’ time series, there is a bimodal distribution, with one mode above 0.5 and one mode below 0.5. A quick internet search reveals that Glenn Close is well known for both acting and singing. In this case, the uncertainty quantification provided by dropout adds insight that would not be available otherwise.
Along with uncertainty quantification, another benefit of the BCNN is inference. Because the BCNN relies on a convolutional neural network structure, a technique known as class activation mapping (CAM) may be used (Selvaraju et al., 2017). The general idea behind CAM is to generate a heatmap of the regions in a given image that are most significant in determining whether or not the image comes from a specific class. This is accomplished by taking the final convolution layer and weighting each filtered image by the gradient of the class score with respect to the filter. We present an example CAM for the actor class in Figure 4. There are a few regions of importance in determining the actor class, such as the frequency pairs around 0.5 and 0.1.
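The gradient-weighting step behind these heatmaps can be sketched with numpy, assuming the final-layer feature maps and their class-score gradients have already been extracted from the fitted network:

```python
import numpy as np

def grad_cam(feature_maps, grads):
    """CAM-style heatmap: weight each final-layer feature map by its
    average class gradient, sum, and keep only positive evidence.

    feature_maps: (K, H, W) activations from the last convolution layer
    grads:        (K, H, W) gradients of the class score w.r.t. those maps
    """
    alpha = grads.mean(axis=(1, 2))                  # one weight per filter
    cam = np.tensordot(alpha, feature_maps, axes=1)  # weighted sum of maps
    cam = np.maximum(cam, 0.0)                       # keep positive influence only
    return cam / cam.max() if cam.max() > 0 else cam # normalize to [0, 1]

rng = np.random.default_rng(0)
A = rng.random((4, 8, 8))          # stand-in activations
G = rng.standard_normal((4, 8, 8)) # stand-in gradients
heat = grad_cam(A, G)              # high values mark frequency pairs driving the class
print(heat.shape)                  # (8, 8)
```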
5.2 Electric Devices Data
As another example comparing BCNN to spectral-SSVS, we use the electric devices data from Lines et al. (2011), obtained from the UEA and UCR Time Series Classification Repository (Bagnall et al., 2018). The data consist of electricity consumption time series of length 96, coming from 7 types of household electric devices. There are 8,926 time series in the training set and 7,711 in the test set. We subset the data to include only the first two device types (television and computer), resulting in a training set of size 2,958 and a test set of size 2,623. The goal is to classify each time series into the correct device type based on its electricity consumption.
We use the same network structure as in the Google Trends example. Because this dataset is much larger, we fit the model for only two epochs, using a batch size of 128. The data include a predefined test set, so rather than use a cross-validation procedure, we simply compare BCNN vs. spectral-SSVS on the test set after fitting on the training set. Spectral-SSVS (keeping all principal components) yields a classification accuracy of 0.63 and an AUC of 0.71. BCNN using a dropout rate of 0.1 improves these results considerably, with an accuracy of 0.92 and an AUC of 0.83, a 46% and 17% increase, respectively.
In this example, we also compare to variable selection of the spectral principal components using the Lasso with a binomial response (Tibshirani, 1996). We use the glmnet R package (Friedman et al., 2009), which selects the optimal shrinkage penalty via cross-validation. This results in a classification accuracy of 0.65 and an AUC of 0.73. This method is very similar in nature to spectral-SSVS, so it is not surprising that the two achieve similar results. The Bayesian nature of spectral-SSVS gives it some advantage regarding uncertainty quantification; however, we compare to the Lasso here because readily available software extends it to the multiclass (multinomial) case.
We further compare BCNN on the full electric devices dataset (using all 7 categories) to spectral-Lasso with a multinomial response in order to assess the multiclass classification ability of our methodology. For the multiclass case, we still include only one dense hidden layer, but we increase the number of units in this layer to 32. We chose a dropout rate of 0.1, although this value could be further tuned. In this scenario, BCNN classifies 58% of the test cases correctly, whereas spectral-Lasso classifies only 44% correctly. For reference, the naive approach of classifying each time series into the most frequently observed class achieves a total classification accuracy of 0.24. To further illustrate the advantage of using bispectral properties for time series classification, we present the class probability densities under each model in Figure 5. Each frame in the figure considers only the test set time series from the corresponding class label. Within each frame, we present the density of the corresponding class probability point estimates. An ideal model would have all the mass shifted to one in each frame. Here, we see that both models behave similarly when the true class is either 5 or 7, but BCNN outperforms spectral-Lasso in every other case.
6 Discussion
We present a powerful and interpretable model for time series classification that relies on the construction of the sample bispectrum. The bispectral images are used as inputs into a deep convolutional neural network. CNNs excel at image analysis because they are translation invariant, reduce dimensionality, and can find many important features within an image. We find that our BCNN may lead to more accurate predictions than methods that rely only on second-order spectral properties. For many nonlinear time series, third-order properties can be critical, and BCNN successfully handles these cases. Furthermore, the purpose of our applications was to illustrate our methodology, and thus we did not consider variations of many of the model choices. Cross-validation over architecture decisions, such as the number and size of layers, could likely yield even higher predictive accuracy.
Inference is another benefit of BCNN. By using parameter dropout as a regularization procedure both during model training and at prediction, BCNN allows for theoretically justified uncertainty estimates. In many ways, this is similar to a posterior distribution obtained from Bayesian model averaging, albeit from a complex nonlinear model. Inference can further be made through the use of CAMs, which illustrate the regions of a given image that were important in determining a given class prediction. These two advantages help bridge the gap between machine learning methods, which may not always account for uncertainty, and statistical methods, which do.
Further work may involve adding a time-varying component to the bispectrum construction. BCNN currently makes an implicit assumption that time series are stationary, but including a time-varying component would allow it to accommodate nonstationarity. The corresponding deep model might include some type of recurrent network structure to account for dependence in time. Additional work could extend the BCNN structure to clustering rather than classification.
Acknowledgement
This research was partially supported by the U.S. National Science Foundation (NSF) under NSF Grant SES-1853096 and through the Air Force Research Laboratory (AFRL) Contract No. 19C0067. This article is released to inform interested parties of research and to encourage discussion. The views expressed on statistical issues are those of the authors and not those of the NSF or the U.S. Census Bureau.
References
Time-series clustering – a decade review. Information Systems 53, pp. 16–38.
Bayesian analysis of binary and polychotomous response data. Journal of the American Statistical Association 88(422), pp. 669–679.
The UEA & UCR time series classification repository. Available at: www.timeseriesclassification.com [accessed 26 June 2018].
Asymptotic theory of estimates of k-th order spectra. In Spectral Analysis of Time Series, B. Harris (Ed.), pp. 153–158.
Folk music classification using hidden Markov models. In Proceedings of the International Conference on Artificial Intelligence, Vol. 6.
Keras. https://keras.io
Bayesian model specification: heuristics and examples. In Bayesian Theory and Applications, pp. 409–431.
Deep learning for time series classification: a review. Data Mining and Knowledge Discovery, pp. 1–47.
glmnet: lasso and elastic-net regularized generalized linear models. R package version 1(4).
Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In International Conference on Machine Learning, pp. 1050–1059.
Approaches for Bayesian variable selection. Statistica Sinica, pp. 339–373.
Deep Learning. MIT Press. http://www.deeplearningbook.org
A time series approach for clustering mass spectrometry data. Journal of Computational Science 3(5), pp. 344–355.
Clustering of nonlinear and nonstationary time series using BSLEX. Methodology and Computing in Applied Probability 19(3), pp. 935–955.
Bispectral-based methods for clustering time series. Computational Statistics & Data Analysis 64, pp. 113–131.
Deep learning for finance: deep portfolios. Applied Stochastic Models in Business and Industry 33(1), pp. 3–12.
Testing for Gaussianity and linearity of a stationary time series. Journal of Time Series Analysis 3, pp. 169–176.
Bayesian model averaging: a tutorial. Statistical Science, pp. 382–401.
Time series clustering and classification via frequency domain methods. Wiley Interdisciplinary Reviews: Computational Statistics 10(6), e1444.
Modeling complex phenotypes: generalized linear models using spectrogram predictors of animal communication signals. Biometrics 66(3), pp. 914–924.
An approach for identifying and predicting economic recessions in real time using time–frequency functional models. Applied Stochastic Models in Business and Industry 28(6), pp. 485–499.
Weighted dynamic time warping for time series classification. Pattern Recognition 44(9), pp. 2231–2240.
Nonlinear autoregressive processes. Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences, pp. 71–95.
Discrimination and clustering for multivariate time series. Journal of the American Statistical Association 93(441), pp. 328–340.
Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Clustering seasonality patterns in the presence of errors. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 557–563.
Backpropagation applied to handwritten zip code recognition. Neural Computation 1(4), pp. 541–551.
Classification of household devices by electricity usage profiles. In International Conference on Intelligent Data Engineering and Automated Learning, pp. 403–412.
Forecasting inflation in a data-rich environment: the benefits of machine learning methods. Journal of Business & Economic Statistics, pp. 1–22.
An Introduction to Bispectral Analysis and Bilinear Time Series Models. Vol. 24, Springer Science & Business Media.
Estimation of the bispectrum. Annals of Mathematical Statistics 36, pp. 1120–1136.
Grad-CAM: visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 618–626.
Time-frequency clustering and discriminant analysis. Statistics & Probability Letters 63(3), pp. 307–314.
Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological) 58(1), pp. 267–288.
Threshold autoregression, limit cycles and cyclical data. Journal of the Royal Statistical Society, Series B (Methodological), pp. 245–292.
Asymptotic normality of bispectral estimates. The Annals of Mathematical Statistics 37, pp. 1257–1272.
A wavelet-based anytime algorithm for k-means clustering of time series. In Proceedings of the Workshop on Clustering High Dimensionality Data and Its Applications.
Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pp. 818–833.