Over the past few decades, with the ever-accelerating development of machine learning and deep learning, many long-standing bottleneck problems in finance, healthcare, media, and other application fields can be cast as time series classification (TSC) problems and solved with advanced deep learning tools, such as disease diagnosis from time series of physiological parameters, classification of heart arrhythmias from ECG signals, and human activity recognition [3].
Among deep neural networks (DNNs), 1-dimensional convolutional neural networks (CNNs) [4, 5, 6] have achieved state-of-the-art results. However, when access to a large labeled training set is unavailable, which is often the case with time series data, these DNNs overfit severely [7, 8]. For example, even though CNNs can achieve impressive performance when combined with the Dynamic Time Warping (DTW) algorithm (1NN-DTW), they suffer from over-fitting when applied to TSC tasks, and this worsens when samples are scarce or the patterns in the data are time-variant. In short, due to the difficulty of collecting and annotating time series data, DNNs can hardly be applied to small-scale time series data sets.
Therefore, studies that address TSC with other techniques have appeared over the past decade. Although many distance measures have been proposed in previous work (e.g., Dynamic Time Warping (DTW), edit distance, and elastic distances), they concentrate only on single-view or univariate time series (u.t.s.) classification tasks [16, 17] rather than on multi-view time series. Moreover, since these traditional methods depend heavily on large sample sizes and abundant labels, they tend to perform poorly in both efficiency and accuracy.
Thanks to the increasing number and variety of sensors, information about the same object can be collected from multiple perspectives. Such mutually enriching, mutually supportive information benefits machine learning tasks by offering higher-quality, more diverse information, and thus improves model performance. Compared with traditional single-view methods, multi-view learning yields better results and has received increasing attention in recent years [19, 20]. Popular multi-view learning methods include co-training, multiple kernel learning, and subspace learning. Hence, when available, multi-view data are usually preferred over single-view data. However, in TSC tasks, existing time series classification methods focus only on single-view data, ignoring the benefits of mutually supportive views, while existing multi-view learning methods cannot be applied directly to multi-view time series because they ignore many of its unique properties.
On the other hand, transfer learning has received considerable research attention recently. It focuses on storing knowledge gained while solving one problem and applying it to a different but related problem. In other words, transfer learning methods can learn from a source task that has enough labeled data. The approach yields satisfying performance in computer vision and social media analytics [–26]. However, little work has examined its performance on time series classification problems.
In light of these challenges, we propose a novel approach to classification tasks on multi-view time series data sets via transfer learning. Our contributions are as follows:
We propose a dynamic inter-view importance measure that captures correlations across views more robustly and enhances interpretability when combined with knowledge transfer.
We combine cutting-edge density estimation techniques with classical univariate and multivariate time series distance measures: density estimators approximate the posterior distribution of the similarity features captured by time series distance algorithms.
We propose an adaptive transfer degree for multi-view time series data, sampled from the approximated posterior distribution. During training on the source domains and views, this transfer degree controls the degree of knowledge transfer.
We validate our framework on several widely used time series classification models and report experimental results on several open data sets, showing that our method generalizes and significantly improves classification accuracy.
2 Related Work
Since deep learning models have exhibited impressive performance in many application fields over the past decades, they have been applied to TSC problems. A three-layer fully convolutional network with an average pooling layer designed for time series classification was introduced in . Fawaz et al. proposed new data augmentation techniques to avoid overfitting. In , the authors modified the cost function to make the FCN model more sensitive to skewed time series data sets. For forecasting time series of spatial processes, a dynamical spatio-temporal model was proposed in . For limited time series data sets, such as Electronic Health Records (EHRs) sequential data, Che et al. recently trained a generative adversarial network with a CNN to produce satisfactory risk predictions. Despite these applications of deep learning to time series data across domains, obstacles to applying DNNs to other data resources remain; the fundamental challenge is still the availability of big data across different domains .
On the other hand, transfer learning has become a heated research topic and has also been applied to time series data mining tasks. In this setting, a model is learned simultaneously on the source and target domains to minimize the effect of cross-domain discrepancy in the learned model [31, 32].
In , for time series anomaly detection, the authors proposed a transfer learning approach with a 1-NN DTW classifier to select time series to transfer from the source to the target data set.
Apart from time series classification, transfer learning has been used for time series forecasting, where a model trained on the historical wind-speed data of one farm predicts wind speed at another farm. Similar techniques appear in time series recognition: a model trained under similar conditions for acoustic phoneme recognition was later applied to post-traumatic stress disorder diagnosis .
Meanwhile, the notion of time series itself has broadened: sensor development enables performance improvements by considering multi-view time series data sets, which have received wide attention across many application domains in recent years. Because the multiple views support one another and enrich the information about the object, multi-view time series have been applied to many machine learning problems, such as clustering [36, 37], classification [38, 39], and deep multi-view representation learning.
3 Problem Definition
In this section, we present formal definitions of the time series classification problem and the multi-view time series classification problem. We start with the simplest definition, the univariate time series.
A univariate time series $x = [x_1, x_2, \ldots, x_T]$ is an ordered set of real values. The length of $x$ is equal to the number of real values, $T$.
We then define the multivariate time series.
A multivariate time series $X = [\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_T]$ is an ordered sequence of $d$-dimensional vectors, in which $\mathbf{x}_t \in \mathbb{R}^d$ is the observation at the $t$-th timestamp, and $T$ is the length of the time series.
Owing to the increasing number and variety of sensors, information about the same object can be collected from multiple perspectives. Here, we give a formal definition of a multi-view multivariate time series.
Multi-View Multivariate Time Series. A multi-view m.t.s. $\mathcal{X} = \{X^{(1)}, X^{(2)}, \ldots, X^{(V)}\}$ is a set of time series data collected from multiple views, where $X^{(v)} \in \mathbb{R}^{d \times T}$ denotes the time series observed in the $v$-th view, $d$ is the number of measurements of the time series, $T$ is the length of the time series, and $V$ is the total number of views.
Next we introduce the formal definition of Transfer Learning Based Multi-View Multivariate Time Series Classification.
Transfer Learning Based Multi-View Multivariate Time Series Classification. Let $\mathcal{D}$ denote a set of multi-view m.t.s., where $N$ is the number of samples in each view, and let $\mathcal{Y}$ denote a set of class labels shared by all views. Specifically, we define the last view, $V$, as the target view; the remaining views can be considered source views. The task of transfer learning based multi-view m.t.s. classification is first to learn a classifier from the source views and then transfer the learned knowledge and features (the network's weights) to the target view and task.
For any set $S$, we denote the set without the $v$-th element as $S^{(-v)}$.
Given the definitions above, it is natural to expect that a framework for multi-view multivariate TSC may consist of solutions to both multivariate TSC and univariate TSC tasks. Here we introduce another technique that we will apply in the following section.
The technique of importance sampling (IS) can be used to improve sampling efficiency. The basic idea is straightforward: instead of sampling from the nominal distribution, we draw samples from an alternative distribution and assign an appropriate weight to each sample. Most recent works focus on its application to stochastic gradient descent. Applying this perspective to different views, we construct a dynamic inter-view importance metric that measures each source view's contribution to the target view.
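As a minimal illustration of the idea (not the paper's estimator), the sketch below estimates $\mathbb{E}_p[x^2]$ under a standard normal nominal distribution $p$ by drawing from a wider proposal $q$ and reweighting; both densities and the proposal width are assumptions chosen for the example.

```python
import numpy as np

def normal_pdf(x, sigma):
    """Density of a zero-mean Gaussian with standard deviation sigma."""
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)

# Nominal distribution p = N(0, 1); we want E_p[x^2] = 1.
# Alternative (proposal) distribution q = N(0, 2): wider tails keep the weights bounded.
n = 200_000
x = rng.normal(0.0, 2.0, size=n)             # draws from q
w = normal_pdf(x, 1.0) / normal_pdf(x, 2.0)  # importance weights p(x)/q(x)

estimate = np.mean(w * x**2)                 # importance-sampling estimate of E_p[x^2]
```

With enough samples the weighted average converges to the nominal expectation even though no sample was drawn from $p$ itself.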
A Dynamic Inter-View Importance Score is a metric that indicates one view's importance to another, obtained by sampling from the alternative distribution (for this task, the approximated posterior distribution of the density in latent space).
The notion of dynamic inter-view importance is similar to view-level similarity, to some degree, but with more uncertainty.
4 Adaptive Transfer Learning of Multi-View Time Series Classification
In this section we present our novel approach. We first give a brief overview of the framework and then elaborate on its details.
Since inter-view importance is hard to define on multi-view time series data sets, we propose a dynamic measurement constructed in the following steps. First, we compute similarities between corresponding multivariate time series in different views. After computing all the pairwise similarities, we map them into a latent space and apply density estimation methods, such as kernel density estimation or normalizing flows, to approximate the posterior similarity distributions. Then, from an importance sampling perspective, a value sampled from the approximated posterior distribution is treated as the importance value for the target view, which controls the degree of knowledge transfer from each source view during pre-training.
4.1 Computation of Inter-View Importance Value
To implement transfer learning on multi-view time series data, we need to capture inter-view importance. In this part, we list several candidate measures. Among them, we chose Dynamic Time Warping (DTW) and the Bag of SFA Symbols (BOSS) to calibrate inter-view importance, as these two measures show strong performance on univariate time series, and we expect that performance to carry over.
The complete procedure for computing the inter-view importance value is as follows.
Let $V_s$ be the source view and $V_t$ be the target view. Samples in the source and target views are aligned in order, so for the $i$-th sample we have series $X_i^{(s)}$ and $X_i^{(t)}$ in the two views. Note that $X_i^{(s)}$ and $X_i^{(t)}$ are multivariate time series; therefore, when calculating similarities, we decompose them into univariate time series $x_{i,k}^{(s)}$ and $x_{i,k}^{(t)}$, where $k = 1, \ldots, d$ indexes the dimensions of the multivariate time series in both views. We then compute the pairwise distances between the corresponding decomposed univariate time series of the two sets.
Here, we list several widely used time series distance measures; recall that we chose Dynamic Time Warping (DTW) and the Bag of SFA Symbols (BOSS) in this paper.
Dynamic Time Warping (DTW) can yield better performance when the series lengths differ. The dynamic time warping distance between two univariate series $x$ and $y$ is given as
$$\mathrm{DTW}(x, y) = \min_{\pi \in \mathcal{A}(x, y)} \sqrt{\sum_{(i, j) \in \pi} (x_i - y_j)^2},$$
where $\mathcal{A}(x, y)$ represents the set of all admissible alignment paths, i.e., sequences of index pairs $(i, j)$. Under most circumstances, shape-based approaches give better results on small-scale time series data sets with less noise and fewer outliers.
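A straightforward dynamic-programming implementation of the DTW distance above might look as follows (a quadratic-time sketch, without the warping-window constraints that production implementations usually add):

```python
import numpy as np

def dtw_distance(x, y):
    """Dynamic Time Warping distance between two 1-D series via O(len(x)*len(y)) DP."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)  # D[i, j] = best cumulative cost to align x[:i], y[:j]
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            # extend the cheapest of the three admissible warping-path moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[n, m])
```

For example, `dtw_distance([1, 2, 3], [1, 2, 2, 3])` is 0, since the warping path absorbs the repeated value.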
Bag of SFA Symbols (BOSS):
BOSS slides windows over each series to form words: it applies a truncated Discrete Fourier Transform (DFT), rather than PAA, to each window and discretizes the truncated coefficients with a technique called Multiple Coefficient Binning (MCB). The words produced over all windows form the series' word distribution.
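The windowing, truncated-DFT, and discretization steps can be sketched as below. This is a simplified stand-in, with per-coefficient equi-depth binning in place of a fully trained MCB, and with window size, coefficient count, and alphabet size chosen arbitrarily for illustration:

```python
import numpy as np
from collections import Counter

def boss_words(series, window=16, n_coeffs=3, alphabet=4):
    """Simplified BOSS-style transform: slide a window over the series, keep the
    first low-frequency Fourier coefficients of each window, and discretize each
    coefficient into `alphabet` equi-depth bins (a stand-in for MCB)."""
    series = np.asarray(series, dtype=float)
    wins = np.lib.stride_tricks.sliding_window_view(series, window)
    # truncated DFT: real/imag parts of the first few non-DC coefficients
    spec = np.fft.rfft(wins, axis=1)[:, 1:1 + n_coeffs]
    feats = np.column_stack([spec.real, spec.imag])
    cols = []
    for col in range(feats.shape[1]):
        # equi-depth binning per coefficient, learned from this series' windows
        edges = np.quantile(feats[:, col], np.linspace(0, 1, alphabet + 1)[1:-1])
        cols.append(np.digitize(feats[:, col], edges))
    symbols = np.stack(cols, axis=1)
    # each window becomes a word; the Counter is the word distribution
    return Counter("".join(map(str, row)) for row in symbols)
```

The resulting word histograms of two series can then be compared to obtain a BOSS-style distance.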
Every decomposed univariate time series distance between the source and target views serves as a measure of importance value, and all of these measurements are later mapped into a common set of dimensions, adjusted by the length of the multivariate time series and the number of views.
We compute all the pairwise distances above and collect the values into an observation set in a latent space. After mapping all elements into the latent space, we are ready to construct an approximated posterior distribution of the importance values.
4.2 Density Estimation
In this part, we approximate a posterior distribution that describes the importance relationships between the source views and the target view. Below, we separately elaborate on the high-dimensional and low-dimensional scenarios.
In the high-dimensional scenario, a normalizing flow model is constructed as an invertible transformation $f$ that maps observed data points in the latent space to a standard Gaussian latent variable, as in non-linear Independent Component Analysis. Stacking simple invertible transformations is the key idea in designing a flow model: explicitly, $f$ is composed of a series of invertible flows $f = f_1 \circ f_2 \circ \cdots \circ f_K$, with each $f_k$ having a tractable Jacobian determinant. This makes sampling efficient, since it can be performed by drawing $z$ from the base Gaussian and computing $x = f^{-1}(z)$. Training by maximum likelihood is likewise efficient, because the model density is easy to compute and differentiate with respect to the flows.
When computing the Jacobian determinant, we set a threshold for adding a stochastic perturbation, balancing computational complexity against precision in case the matrices are singular. The trained flow model can be viewed as a maximum a posteriori estimate of the inter-view importance value in the latent space.
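To make the change-of-variables machinery concrete, here is the one-layer affine case in plain NumPy. This is an illustrative toy, not the multi-layer flow used in the paper, and `mu` and `sigma` stand in for hypothetical fitted parameters:

```python
import numpy as np

# A one-layer affine flow f(x) = (x - mu) / sigma maps data to a standard
# Gaussian latent z. By change of variables, log p(x) = log N(f(x); 0, 1) + log|df/dx|,
# and sampling is the inverse pass x = f^{-1}(z) = mu + sigma * z.
# (Real flows stack many such invertible layers with tractable Jacobians.)
mu, sigma = 3.0, 2.0

def log_density(x):
    z = (x - mu) / sigma                      # forward pass f(x)
    log_jac = -np.log(sigma)                  # log |df/dx| of the affine map
    return -0.5 * (z**2 + np.log(2 * np.pi)) + log_jac

def sample(n, rng):
    z = rng.standard_normal(n)                # draw from the base Gaussian
    return mu + sigma * z                     # inverse pass f^{-1}(z)
```

Here `log_density` recovers exactly the log-density of $\mathcal{N}(\mu, \sigma^2)$, which is what makes maximum-likelihood training of stacked flows tractable.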
In low-dimensional scenarios, kernel density estimation is a good fit. The univariate kernel density estimator for a continuous variable, based on a sample $\{x_1, \ldots, x_n\}$, at the evaluation point $x$ can be expressed as
$$\hat{f}_h(x) = \frac{1}{nh} \sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right),$$
where $K$ is the kernel function, a symmetric weight function, and $h > 0$ is the smoothing parameter or bandwidth.
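The estimator above translates directly into code; the Gaussian kernel and the bandwidth value below are illustrative choices:

```python
import numpy as np

def kde(x_eval, sample, h):
    """Univariate KDE: f_hat(x) = (1/(n*h)) * sum_i K((x - x_i)/h), Gaussian K."""
    u = (x_eval[:, None] - sample[None, :]) / h     # pairwise scaled differences
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)    # Gaussian kernel values
    return k.mean(axis=1) / h
```

With 50,000 draws from $\mathcal{N}(0, 1)$ and $h = 0.2$, the estimate at 0 approaches the true density $1/\sqrt{2\pi} \approx 0.399$.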
For multivariate kernel density estimation, let $x_1, \ldots, x_n$ be a sample of $d$-variate random vectors drawn from a common distribution with density $f$. The kernel density estimate is
$$\hat{f}_H(x) = \frac{1}{n} \sum_{i=1}^{n} K_H(x - x_i),$$
where $H$ is the bandwidth (or smoothing) matrix, which is symmetric and positive definite, and the kernel $K_H$ can be written in terms of $K$ as
$$K_H(x) = |H|^{-1/2} K\!\left(H^{-1/2} x\right),$$
which is plainly a symmetric multivariate density. For simplicity, we directly choose the standard multivariate normal kernel,
$$K(x) = (2\pi)^{-d/2} \exp\!\left(-\tfrac{1}{2} x^{\top} x\right).$$
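A direct implementation of the multivariate estimator with a full bandwidth matrix $H$ (using a Cholesky factor to apply $H^{-1/2}$) might read:

```python
import numpy as np

def kde_multivariate(x, sample, H):
    """Multivariate KDE: f_hat(x) = (1/n) * sum_i |H|^(-1/2) K(H^(-1/2)(x - x_i))
    with the standard multivariate normal kernel K."""
    n, d = sample.shape
    L = np.linalg.cholesky(H)                     # H = L L^T, so |H|^(1/2) = det(L)
    diffs = np.linalg.solve(L, (x - sample).T).T  # rows are L^{-1}(x - x_i)
    quad = np.sum(diffs**2, axis=1)               # (x - x_i)^T H^{-1} (x - x_i)
    k = np.exp(-0.5 * quad) / (2 * np.pi) ** (d / 2)
    return k.sum() / (n * np.linalg.det(L))
```

For a standard bivariate normal sample and $H = 0.04\,I$, the estimate at the origin approaches the true density $1/(2\pi) \approx 0.159$.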
4.3 Importance Value Sampling and Matrix Norm Computation
After constructing the posterior distribution, we can conduct importance value sampling. We sample importance values in mini-batches: let $B = \{z_1, \ldots, z_b\}$ denote a batch of size $b$ drawn from the standard Gaussian distribution. We then pass $B$ through the trained normalizing flow; in the low-dimensional scenario, we instead draw samples from the approximated kernel density. In this way, we acquire batches of dynamic inter-view importance values from the approximated distribution. Stacking all the sampled vectors into a matrix $S$, and unfolding each sampled vector into its components, we obtain the per-dimension importance values.
Here we compute a norm for each matrix: the elements of these sample-composed matrices carry the importance value in every dimension, and the matrix norm $\|S\|$ accumulates these contributions into a single score.
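The sampling and reduction steps can be sketched together as below, with a fixed affine map standing in for the trained flow's inverse and the Frobenius norm as one concrete (assumed) choice of matrix norm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw a mini-batch of latent vectors from the base Gaussian, push them through
# an (illustrative) inverse flow, stack them into a matrix, and reduce the
# matrix to a single importance score via the Frobenius norm.
batch, dim = 32, 4
Z = rng.standard_normal((batch, dim))

def inverse_flow(z):
    # stand-in for the trained flow's inverse: a fixed affine map
    return 1.5 * z + 0.2

S = inverse_flow(Z)                        # sampled importance vectors, one per row
score = np.linalg.norm(S, ord="fro")       # accumulates every dimension's contribution
```

The scalar `score` plays the role of the inter-view importance value for one source view.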
Finally, we arrive at the output, the desired probabilistic representation of the inter-view importance values between a source view and the target view, which describes the degree of knowledge transfer in the pre-training process. We elaborate on how these dynamic importance values control the degree of knowledge transfer in the experiment section.
4.4 Model Architecture
We selected a one-dimensional Fully Convolutional Network (FCN) and a Long Short-Term Memory (LSTM) network to construct our adaptive transfer learning framework. We chose these networks for their robustness: they have achieved state-of-the-art results on several data sets from the UCR archive and the UEA repository. Note, however, that our adaptive transfer learning framework is entirely independent of the chosen neural networks.
Network architecture (table): LSTM (256); Conv1D (length = or ) 128; Conv1D (length = or ) 256; Conv1D (length = or ) 128.
The structure of the Multilayer Perceptron (MLP).
Results table columns: Daily and Sports Activity, Movement, Self-Regulation of SCPs.
5 Experiment Result
5.1 Data Set
The UCI Daily and Sports Activity Data Set contains motion sensor data of 19 daily and sports activities, each performed by 8 subjects for 5 minutes. In particular, the subjects were asked to perform the activities in their own styles, without restrictions. As a result, the time series samples for each activity show considerable inter-subject variation in speed and amplitude, which makes accurate classification extremely difficult. During data collection, nine sensors were placed on each of five units: the torso, right arm, left arm, right leg, and left leg. The 5-minute recording from each subject is divided into 5-second segments. For each activity, the total number of segments is 480, and each segment is treated as a multivariate time series sample of size 45 × 125.
The Indoor User Movement Prediction from RSS Data Set represents a real-life benchmark in Ambient Assisted Living applications. The binary classification task consists of predicting the pattern of user movements in real-world office environments from time series generated by a Wireless Sensor Network (WSN). The input data contain temporal streams of radio signal strength (RSS) measured between the nodes of a WSN comprising 5 sensors: 4 anchors deployed in the environment and 1 mote worn by the user. In the provided data set, the RSS signals have been re-scaled to the interval [-1, 1], separately for the set of traces collected from each anchor. The target is a class label indicating whether the user's trajectory will lead to a change in spatial context (i.e., a room change) or not.
The Self-Regulation of SCPs Data Set was recorded from a healthy subject who was asked to move a cursor up and down on a computer screen while receiving visual feedback of his slow cortical potentials (SCPs). Cortical positivity led to a downward movement of the cursor; cortical negativity led to an upward movement. Each trial lasted 6 s. During every trial, the task was presented visually by highlighting a goal at either the top or bottom of the screen (indicating negativity or positivity) from second 0.5 until the end of the trial, and visual feedback was shown from second 2 to second 5.5. Only this 3.5-second interval of each trial, comprising 896 samples per channel, is provided for training and testing.
5.2 Experiment Setup
For the UCI Daily and Sports Activity data set, we regard the information from sensors on different parts of the body as different views, which gives 5 views. We randomly pick 4 of the 5 as source views and the remaining one as the target view. Next, we use 6 of the 8 subjects as the training set and the other 2 subjects as the testing set. Since the multivariate time series in every view are 9-dimensional, we select the high-dimensional solution (normalizing flow) for latent space density estimation after computing inter-view importance. After acquiring the importance scores for the 4 source views, we use them to control the proportion of pre-training assigned to each source view. We set 200 pre-training epochs in total, and each source view receives a share proportional to its importance score, $E_v = E \cdot s_v / \sum_u s_u$, where $E$ denotes the total number of pre-training epochs and $E_v$ denotes the number of epochs assigned to source view $v$. For the loss function, we choose categorical cross-entropy.
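The proportional allocation of the pre-training budget can be expressed in a few lines; the score values below are made-up placeholders for the four source views' importance values:

```python
# Split a fixed pre-training budget across source views in proportion to their
# importance scores (hypothetical score values; the framework derives them from
# the sampled matrix norms).
total_epochs = 200
scores = {"view_1": 0.9, "view_2": 0.5, "view_3": 0.4, "view_4": 0.2}

total_score = sum(scores.values())
epochs = {v: round(total_epochs * s / total_score) for v, s in scores.items()}
```

Views with higher importance scores thus contribute more pre-training epochs before fine-tuning on the target view.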
The Indoor User Movement Prediction from RSS Data Set contains 4 sensors. Considering each as a view, we get 4 views, but with different series lengths (the average is 42 timestamps). We consider four pre-processing options:
Pad the shorter sequences with zeros so that all series have equal length.
Find the maximum series length and pad the shorter sequences by repeating their last-row values.
Identify the minimum series length of each data set and truncate all other series to that length; however, this leads to huge information loss.
Calculate the average series length, truncate all longer-than-average series, and pad all shorter-than-average series.
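The four options above can be sketched as a single helper; the strategy names and the $(T, d)$ array layout are assumptions made for the example:

```python
import numpy as np

def equalize_lengths(series_list, strategy="pad_zero"):
    """Make variable-length (T_i, d) series equal-length via one of four strategies."""
    lengths = [len(s) for s in series_list]
    if strategy in ("pad_zero", "pad_last"):
        T = max(lengths)  # pad everything up to the longest series
        out = []
        for s in series_list:
            if strategy == "pad_zero":
                filler = np.zeros((T - len(s),) + s.shape[1:])
            else:  # repeat the last observed row
                filler = np.repeat(s[-1:], T - len(s), axis=0)
            out.append(np.concatenate([s, filler]) if len(s) < T else s)
        return out
    if strategy == "truncate_min":
        T = min(lengths)  # cut everything down to the shortest series
        return [s[:T] for s in series_list]
    if strategy == "average":
        T = int(round(np.mean(lengths)))  # truncate longer, zero-pad shorter
        return [s[:T] if len(s) >= T else
                np.concatenate([s, np.zeros((T - len(s),) + s.shape[1:])])
                for s in series_list]
    raise ValueError(strategy)
```

For instance, two series of lengths 3 and 5 come out as lengths 5 and 5 under `"pad_zero"`, and 4 and 4 under `"average"`.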
After these pre-processing procedures, we randomly select 3 of the 4 views as source views and the remaining one as the target view. Applying the model described above, we calculate the desired probabilistic representation of the corresponding importance scores. We set 120 epochs for pre-training. The detailed algorithm is the same as for the UCI Daily and Sports Activity data set above.
For the Self-Regulation of SCPs data set, we take the time series from the 6 channels as a 6-view time series data set. We randomly pick 5 of the 6 views as source views and the remaining one as the target view, then apply the proposed model to obtain the probabilistic representation, from which we derive each view's importance score. We set 100 epochs for pre-training. The detailed algorithm for assigning weights during pre-training is the same as for the UCI Daily and Sports Activity data set above.
For all network training, we used a fixed batch size and the Adam optimizer. To account for randomness, we repeated every experiment 5 times and report the average classification accuracy.
5.3 Result Analysis
We run 3 baseline models (Long Short-Term Memory Recurrent Neural Network (LSTM-RNN), Fully Convolutional Network (FCN), and Multi-Layer Perceptron (MLP)) and 6 adaptive transfer learning frameworks (Dynamic Time Warping (DTW)-LSTM, Bag of SFA Symbols (BOSS)-LSTM, DTW-FCN, BOSS-FCN, DTW-MLP, and BOSS-MLP) on the 3 data sets. The fine-tuned classification accuracy results are shown in the table and related figures.
As shown in Figures 2, 3, and 4, in most scenarios our proposed approaches outperform the baselines. Thanks to the pre-training process, the proposed approaches start from a high classification accuracy and maintain their lead throughout network training. As listed in Table 4, the proposed approaches reach better accuracy after training. We also provide the density estimation results of the latent space for the different data sets.
In this paper, we presented an adaptive transfer learning framework for multi-view multivariate time series data. We viewed multi-view time series data from an importance sampling perspective, measuring the importance of a specific source view's time series for the target view's time series. The inter-view importance was computed as follows. First, we calculated the decomposed, corresponding pairwise univariate time series distances. Second, we mapped the importance values into a latent space and estimated the density of the observations. Finally, we arrived at an approximated posterior distribution, treating the high- and low-dimensional input scenarios separately. We then sampled several importance values and computed the norm of the composed matrix as the output importance score, which also indicates the degree of knowledge transfer in the pre-training process. On average, our proposed adaptive transfer learning framework demonstrates generally improved classification performance over some state-of-the-art baseline models.
-  P. D. Grunwald, The Minimum Description Length Principle (Adaptive Computation and Machine Learning), The MIT Press, 2007.
-  F. Petitjean, G. Forestier, G. I. Webb, A. E. Nicholson, Y. Chen, and E. Keogh,Dynamic Time Warping Averaging of Time Series Allows Faster and More Accurate Classification,IEEE International Conference on Data Mining, 2014, pp. 470–479.
-  H. I. Fawaz, G. Forestier, J. Weber, L. Idoumghar, and P. Muller,Data augmentation using synthetic data for time series classification with deep residual networks, CoRR, vol. abs/1808.02455, 2018.
-  S. Das Bhattacharjee, B. V. Balantrapu, W. Tolone, and A. Talukder,Identifying extremism in social media with multi-view context-aware subset optimization , 2017 IEEE International Conference on Big Data (Big Data), 2017, pp. 3638–3647.
-  M. Langkvist, L. Karlsson, and A. Loutfi, A review of unsupervised feature learning and deep learning for time-series modeling, Pattern Recognition Letters, pp. 11–24, 2014.
-  S. Li, Y. Li, and Y. Fu,Multi-view time series classification: A discriminative bilinear projection approach, Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, ser. CIKM ’16, 2016, pp. 989–998.
-  Z. Cui, W. Chen, and Y. Chen,Multi-Scale Convolutional Neural Networks for Time Series Classification, ArXiv, 2016.
-  I. Sutskever, O. Vinyals, and Q. V. Le, Sequence to Sequence Learning with Neural Networks, Neural Information Processing Systems, 2014, pp. 3104–3112.
-  Y. Chen, E. Keogh, B. Hu, N. Begum, A. Bagnall, A. Mueen, and G. Batista,The UCR Time Series Classification Archive, July 2015.
-  Z. Wang, W. Yan, and T. Oates, Time series classification from scratch with deep neural networks: A strong baseline, CoRR, vol. abs/1611.06455, 2016.
-  J. Cristian Borges Gamboa,Deep Learning for Time-Series Analysis, ArXiv, 2017.
-  S. Seto, W. Zhang, and Y. Zhou,Multivariate time series classification using dynamic time warping template selection for human activity recognition, 2015 IEEE Symposium Series on Computational Intelligence, pp. 1399–1406, 2015.
-  P.-F. Marteau and S. Gibet,On recursive edit distance kernels with application to time series classification,IEEE transactions on neural networks and learning systems, vol. 26, no. 6, June 2015.
-  J. Lines and A. Bagnall,Time series classification with ensembles of elastic distance measures, Data Min. Knowl. Discov., vol. 29, no. 3, pp. 565–592, May 2015.
-  E. Keogh and S. Kasetty,On the need for time series data mining benchmarks: a survey and empirical demonstration,Data Mining and Knowledge Discovery, 7(4):349–371, 2003.
-  Z. Xing, J. Pei, and S. Y. Philip, Early classification on time series, Knowledge and information systems, 31(1):105–127, 2012.
-  A. Blum and T. Mitchell, Combining labeled and unlabeled data with co-training, In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 92–100. ACM, 1998.
-  C. Xu, D. Tao, and C. Xu, A survey on multi-view learning, arXiv preprint arXiv:1304.5634, 2013.
-  Z. Fang and Z. Zhang, Simultaneously combining multi-view multi-label learning with maximum margin classification, In Proceedings of IEEE International Conference on Data Mining, pages 864–869. IEEE, 2012.
-  J. Yosinski, J. Clune, Y. Bengio, and H. Lipson,How transferable are features in deep neural networks?, in Advances in Neural Information Processing Systems 27, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, Eds., 2014.
-  G. Csurka,Domain adaptation for visual applications: A comprehensive survey, CoRR, vol. abs/1702.05374, 2017.
-  S. Das Bhattacharjee and A. Talukder, Graph clustering for weapon discharge event detection and tracking in infrared imagery using deep features, 2017. [Online]. Available: https://doi.org/10.1117/12.2277737
-  S. Das Bhattacharjee, B. V. Balantrapu, W. Tolone, and A. Talukder,Identifying extremism in social media with multi-view context-aware subset optimization, 2017 IEEE International Conference on Big Data (Big Data), 2017, pp. 3638–3647.
-  S. Das Bhattacharjee, A. Talukder, and B. V. Balantrapu,Active learning based news veracity detection with feature weighting and deepshallow fusion, 2017 IEEE International Conference on Big Data (Big Data), 2017, pp. 556–565.
-  S. Das Bhattacharjee, V. S. Paranjpe, and W. Tolone, Identifying malicious social media contents using multi-view context-aware active learning, Future Generation Computer Systems, Elsevier, 2017.
-  S. Das Bhattacharjee, J. Yuan, Z. Jiaqi, and Y. Tan,Context-aware graph-based analysis for detecting anomalous activities, 2017 IEEE International Conference on Multimedia and Expo (ICME), 2017, pp. 1021–1026.
-  Y. Geng and X. Luo, Cost-Sensitive Convolution based Neural Networks for Imbalanced Time-Series Classification, ArXiv e-prints, 2018.
-  A. Ziat, E. Delasalles, L. Denoyer, and P. Gallinari, Spatio-Temporal Neural Networks for Space-Time Series Forecasting and Relations Discovery, IEEE International Conference on Data Mining, 2017, pp. 705–714.
-  Z. Che, Y. Cheng, S. Zhai, Z. Sun, and Y. Liu,Boosting Deep Learning Risk Prediction with Generative Adversarial Networks for Electronic Health Records, IEEE International Conference on Data Mining, 2017, pp. 787–792.
-  H. Ismail Fawaz, G. Forestier, J. Weber, L. Idoumghar, and P.-A. Muller,Deep learning for time series classification: a review, ArXiv, 2018.
-  M. Baktashmotlagh, M. Faraki, T. Drummond, and M. Salzmann,Learning factorized representations for open-set domain adaptation, CoRR, vol. abs/1805.12277, 2018.
-  M. Long and J. Wang, Learning transferable features with deep adaptation networks, CoRR, vol. abs/1502.02791, 2015.
-  V. Vercruyssen, W. Meert, and J. Davis,Transfer Learning for Time Series Anomaly Detection, Workshop and Tutorial on Interactive Adaptive Learning co-located with European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2017, pp. 27–36.
-  D. Zhan, S. Yi and D. Jiang, Small-Scale Demographic Sequences Projection Based on Time Series Clustering and LSTM-RNN. ICDM Workshops 2018.
-  J. Serra, S. Pascual, and A. Karatzoglou, Towards a universal neural network encoder for time series,CoRR, vol. abs/1805.03908, 2018.
-  Y. Guo, Convex subspace representation learning from multi-view data, Proceedings of the 27th AAAI Conference on Artificial Intelligence, volume 1, page 2, 2013.
-  Y. Li, F. Nie, H. Huang, and J. Huang, Large-scale multi-view spectral clustering via bipartite graph, Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, pages 2750–2756, 2015.
-  W. Wang, R. Arora, K. Livescu, and J. Bilmes, On deep multi-view representation learning, Proceedings of the 32nd International Conference on Machine Learning, pages 1083–1092, 2015.
-  M. Kan, S. Shan, H. Zhang, S. Lao, and X. Chen, Multi-view discriminant analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(1):188–194, 2016.
-  P.-Y. Zhou and K. C. Chan, A feature extraction method for multivariate time series classification using temporal patterns, Advances in Knowledge Discovery and Data Mining, pages 409–421. Springer, 2015.
-  H. Hayashi, T. Shibanoki, K. Shima, Y. Kurita, and T. Tsuji,A recurrent probabilistic neural network with dimensionality reduction based on time-series discriminant component analysis, IEEE Transactions on Neural Networks and Learning Systems, 26(12):3021–3033, 2015.
-  Y. Zheng, Q. Liu, E. Chen, J. L. Zhao, L. He, and G. Lv, Convolutional nonlinear neighbourhood components analysis for time series classification, Advances in Knowledge Discovery and Data Mining, pages 534–546. Springer, 2015.
-  S. Yi, D. Zhan, Z. Geng, W. Zhang and C. Xu, FIS-GAN: GAN with Flow-based Importance Sampling, arXiv, preprint, arXiv:1910.02519.
-  M. Lichman, UCI machine learning repository, 2013.