I. Introduction
Temporal event sequences record timestamped events, which are ubiquitous in the real world and are addressed in various data mining problems. We address the scenario where the attached timestamps are ambiguous, as shown in Fig. 1. Such sequences arise when events are recorded passively, i.e., without observers' instantaneous control. The timestamps do not represent when events occurred but when they were recorded. In such cases, the time intervals are variable and the timestamps are not reliable; small time shifts in the observed sequences have no significant effect, and there are uncertainties in the recorded timestamps. In medical informatics, for example, attention is being paid to analyzing such passively recorded data, as found in electronic health records (EHRs), for predicting risks to patients [1, 2, 3]. EHRs are recorded when a patient is treated at a hospital.
Modeling event sequences is one of the fundamental problems in data mining, underlying analyses of sensor time series, economic data, and EHRs. While we can assume independent observations for non-sequential data, we must take into account the dependencies between successive events. Standard modeling approaches make one of two major assumptions: the observations occur at regular intervals (so the timestamps can be ignored) or at irregular intervals (so the timestamps must be considered). The first approach is typically used for time-series and language modeling. Once the order of the sequence is incorporated into the model, exact timestamps and durations are unimportant. There are many established models, including vector autoregressive (VAR) models [4, 5], recurrent neural networks (RNNs) [6], long short-term memory (LSTM) models [7], and Boltzmann machines for time series [8, 9, 10, 11, 12, 13, 14, 15, 16]. If the time intervals are not constant, performance is degraded in return for simpler modeling of temporal dependencies. The second approach is typically used for asynchronously observed event sequences, such as log records and process-series data. Along with other features representing events, the timestamps or intervals between events are explicitly input to the model to encode the dependencies between successive events. RNN models have been used for this purpose [17, 18, 19]. However, if the timestamps are not reliable, directly inputting them into the model might not be effective.
For modeling event sequences with ambiguous timestamps, we identified three major modeling requirements.
The model should be invariant against time shifts, since almost the same patterns would be recorded with a time shift if there is no instantaneous, on-demand control.
The model should be robust against uncertainty in timestamps, since the timestamps represent when events were recorded, not when they occurred.
The ability to forget meaningless past information should be inherited from time-series models, enabling the handling of infinite sequences and long-term dependency.
We propose a time-discounting convolution method that uses a specific convolutional structure and a dynamic pooling mechanism. Our convolutional structure has a unidirectional convolution mechanism across time with two kinds of parameter sharing for efficiently representing the dependency between events in a time-shift-invariant manner. It also has a mechanism for naturally forgetting by discounting the effects of past observations. The structure is based on the eligibility trace in dynamic Boltzmann machines (DyBMs) [12, 13, 14, 15, 16], whose learning rules have a biologically important characteristic, spike-timing-dependent synaptic plasticity (STDP). This is our first contribution. The dynamic pooling mechanism provides robustness against the uncertainty in timestamps and enhances the time-discounting capability by dynamically changing the window size in its pooling operations. This is our second contribution.
Several time-convolutional models have been proposed to capture shift-invariant patterns while representing temporal dependencies. The time-delay neural network [20] is a pioneering model. Convolutional neural networks (CNNs) have recently been applied to sequential data, such as stock price movements [21], sensor outputs [22], radio communication signals [23], videos [24], and EHRs [1, 3, 25]. However, these models do not have a temporal nature, i.e., the natural forgetting capability, which is one of our modeling requirements. To represent both the convolutional and temporal natures, stacked combinations of convolutional models and time-series models have also been studied recently [26, 27]. Our model inherently has both time-series and convolutional aspects. This reduces the number of model parameters, which is quite useful when the amount of training data is limited.
We empirically evaluated the effectiveness of the proposed method in numerical experiments, examining its utility for real event sequences with ambiguous timestamps and its general applicability to real time-series data. We found that the proposed method improves predictive accuracy.
II. Prediction from Temporal Event Sequences
Our goal is to construct a model for predicting objective variables $\mathbf{y}_t$ at time $t$ from an event sequence before time $t$, $\mathbf{X}_t$, where the vector $\mathbf{y}_t$ can be a future event itself (autoregression) or any other unobservable variable (regression or classification), and $\mathbf{X}_t$ represents all the observed sequences before time $t$.
An event sequence is a set of records of events with timestamps. A record has the values of attributes, and each of the attributes may or may not be observed with the other attributes at the same time, as shown in Fig. 1. For ease of analysis, the sequence is usually represented as a matrix [28, 3, 25]. The horizontal dimension represents the timestamp at regular intervals with the highest temporal resolution in the sequence, and the vertical dimension represents the attribute values, as shown in Fig. 1; that is, $\mathbf{X}_t = [\ldots, \mathbf{x}_{t-2}, \mathbf{x}_{t-1}]$, where $\mathbf{x}_s$ is the vector of attribute values at time $s$. If the event records are originally observed at regular intervals and all the attributes are always observed, the temporal event sequence reduces to ordinary time-series data. If the original observation intervals vary over time or not all attributes are observed simultaneously, several elements of the matrix will be missing. We replace missing values with large negative values (sufficiently lower than the minimum value of each attribute), which works well with our dynamic pooling described in the following section.
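As an illustration of this matrix representation, the following sketch (with hypothetical record layout and sentinel choice; the helper names are ours, not from the paper) grids sparse timestamped records at the highest temporal resolution and fills unobserved cells with a large negative value:

```python
import numpy as np

# A sufficiently large negative sentinel, well below any standardized value.
MISSING = -1e9

def records_to_matrix(records, n_attributes, n_steps):
    """Arrange (timestamp, attribute, value) records into an
    (attributes x time) matrix; unobserved cells keep the sentinel."""
    X = np.full((n_attributes, n_steps), MISSING)
    for t, i, value in records:
        X[i, t] = value
    return X

# Three sparse observations of two attributes over five time steps.
records = [(0, 0, 1.2), (0, 1, -0.3), (3, 0, 0.9)]
X = records_to_matrix(records, n_attributes=2, n_steps=5)
```

Because the sentinel is far below any real value, a subsequent max-pooling operation naturally ignores the missing cells.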
We learn the parameters $\theta$ of our prediction model $f(\,\cdot\,;\theta)$ by minimizing the objective function:

$$\theta^{*} = \operatorname*{argmin}_{\theta} \sum_{n=1}^{N} \ell\bigl(\mathbf{y}^{(n)}, f(\mathbf{X}^{(n)}; \theta)\bigr), \qquad (1)$$

where $N$ is the number of training samples $(\mathbf{y}^{(n)}, \mathbf{X}^{(n)})$ and $\ell$ is a loss function, which is selected for each task and the corresponding objective variables $\mathbf{y}^{(n)}$ from mean squared error, cross entropy, log likelihood, and other functions. By using the learned parameters $\theta^{*}$, we can predict $\mathbf{y}_t$:

$$\hat{\mathbf{y}}_t = f(\mathbf{X}_t; \theta^{*}). \qquad (2)$$

We define our prediction model $f$ in the following section.
III. Time-Discounting Convolution
III-A. Prediction Model with Time-Discounting Convolution
We propose a time-discounting convolution method for the prediction model. It has a convolutional structure across time, where the weight of a convolutional patch decays exponentially at each time point, as shown in Fig. 2. The proposed model has parameters $\theta$ and predicts the $j$-th element of $\mathbf{y}_t$ by the use of a nonlinear function $f_j$ that maps the convolutional feature maps to a one-dimensional output:

$$\hat{y}_{t,j} = f_j\Bigl(\Bigl\{\, b_k + \sum_{i} \sum_{\delta=0}^{D_k - 1} W_{k,i,\delta}\; x_{i,\,s-\delta} \Bigr\}_{k=1,\dots,K;\; s \le t}\Bigr), \qquad (3)$$

where $s$ indexes the temporal positions of the convolution, $W_{k,i,\delta}$ is a convolutional parameter across time with lag $\delta$ for the $i$-th attribute value, and $D_k$ is the time length of the $k$-th convolutional patch. We use a bias parameter $b_k$ individually for each of the $K$ feature maps. The nonlinearity in $f$ makes this apparently redundant formulation meaningful, analogous to CNNs. We define specific functional forms of $f$ for each task in Section IV along with the details of the implementation.
We use two different parametric forms for $W_{k,i,\delta}$:

$$W_{k,i,\delta} = \lambda^{\delta}\, u_{k,i}, \qquad (4)$$

$$W_{k,i,\delta} = \mu^{\delta}\, v_{k,i,\delta}, \qquad (5)$$

where $\lambda$ and $\mu$ are decay rates. Note that $W$ forms a tensor and corresponds to a patch in CNNs. Eq. (4) uses $u_{k,i}$, consisting of a single parameter across time for each $k$ and $i$. In the convolutional patch based on Eq. (4), $u_{k,i}$ is replicated and used for multiple temporal positions of the time convolution with the decay rate $\lambda$. The parameterization with shared parameters in Eq. (4) works similarly to the eligibility trace of DyBM, as shown on the left side of Fig. 2; that is, the convolutional patch extracts a feature that represents the frequency of each observation and its distance from the current prediction. Eq. (5) uses $v_{k,i,\delta}$, consisting of individual parameters for each lag $\delta$ as well as the indexes $k$ and $i$. The $v_{k,i,\delta}$ form the convolutional patch by themselves, and all the parameters in this patch decay together by $\mu^{\delta}$ in accordance with the lag $\delta$. We use Eqs. (4) and (5) in the same proportion in our feature maps. The proposed method can capture discriminative temporal information in a shift-invariant manner because of its convolutional operation. Also, it can naturally forget past information, and the predictions and gradients in learning do not diverge in the limit of $D_k \to \infty$, thanks to the decay rates $\lambda$ and $\mu$ in Eqs. (4) and (5). We can use our model as a layer in a neural network, and we can incorporate a neural network into our model via $f$ or as a preprocessor of the input in Eq. (3). In our experiments, we actually used our model along with a fully-connected layer and activation functions for prediction.
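To make the shared-parameter form of Eq. (4) concrete, here is a minimal sketch (our own illustration under the notation above, not the paper's implementation) of one feature map computed with a single weight vector replicated across time with decay lam; it also checks the eligibility-trace view, in which the same feature follows the recursion z_t = lam * z_{t-1} + u . x_t:

```python
import numpy as np

def discounted_feature(X, u, lam):
    """One Eq.(4)-style feature map over an (attributes x T) matrix:
    z[t] = sum_{delta=0..t} lam**delta * (u . x[t-delta])."""
    _, T = X.shape
    z = np.zeros(T)
    for t in range(T):
        for delta in range(t + 1):  # unidirectional: past observations only
            z[t] += (lam ** delta) * (u @ X[:, t - delta])
    return z

def discounted_feature_trace(X, u, lam):
    """Same feature via the eligibility-trace recursion, in O(T) time."""
    _, T = X.shape
    z, trace = np.zeros(T), 0.0
    for t in range(T):
        trace = lam * trace + u @ X[:, t]  # fold in the newest observation
        z[t] = trace
    return z
```

The O(T) recursion is what makes the infinite-window limit practical: nothing older than the running trace needs to be stored.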
III-B. Dynamic Pooling
We introduce dynamic pooling as a powerful mechanism to avoid overfitting to vague timestamps and meaningless past information. The pooling window grows with the time from the prediction point, as shown in Fig. 3. Specifically, we let the size of the $s$-th pooling window, counted backward from the prediction point, be

$$w_s = w_0\, r^{s}, \qquad (6)$$

where $w_0$ is the initial window size and $r$ is the growth rate of the window. We can dynamically downsample the observed event sequences or latent representations by taking the maximum value over sub-temporal regions while increasing the window size exponentially.
Dynamic pooling is used in the proposed method both as a preprocessor of the input and within the function $f$ in Eq. (3). As a preprocessor, we first apply dynamic pooling to the raw sequence and then use the preprocessed sequence as the input. Within $f$, we apply dynamic pooling to the latent representations that are the inputs to $f$.
Dynamic pooling makes missing values tractable: used as the first pooling layer, its max operation simply ignores them. It also enables us to easily handle infinite sequences when we make the final window size infinite. Likewise, we can handle the varying (horizontal) dimensions of the input matrices across samples in the same manner as infinite sequences. The pooling layer after the convolution works as an ordinary pooling method, i.e., the patterns having the largest effect are extracted. Because the rate at which each time point is selected in the max operation decreases as the window width grows with the time length from the prediction point, the expected effect of each time point decays exponentially. This is also similar to the eligibility trace in DyBM. We can define other pooling mechanisms, such as dynamic mean-pooling, by replacing the max operation with another operation.
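A minimal sketch of dynamic max-pooling under these assumptions (our own illustration; window sizes grow geometrically walking backward from the prediction point, and the missing-value sentinel follows the convention of Section II):

```python
import numpy as np

MISSING = -1e9  # sentinel for missing values, as in Section II

def dynamic_max_pool(X, w0, r):
    """Max-pool each attribute over windows that grow exponentially with
    distance from the prediction point (the last column of X)."""
    _, T = X.shape
    pooled, end, s = [], T, 0
    while end > 0:
        w = int(np.ceil(w0 * r ** s))  # s-th window, counted backward
        start = max(0, end - w)
        pooled.append(X[:, start:end].max(axis=1))
        end, s = start, s + 1
    # Oldest window first; sentinel cells lose the max unless a window
    # contains nothing but sentinels.
    return np.stack(pooled[::-1], axis=1)
```

For example, with w0 = 1 and r = 2, a length-7 sequence is pooled into three columns covering 1, 2, and 4 time steps; the text's trick of making the final window infinite corresponds to capping s and letting the last window absorb all remaining history.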
III-C. Learning Model Parameters
In our experiments, we tackled autoregression and classification problems. We define the objective function for each problem setting and derive the learning rules. For the autoregression problems, we use the L2 norm in Eq. (1):

$$\ell\bigl(\mathbf{y}^{(n)}, f(\mathbf{X}^{(n)}; \theta)\bigr) = \bigl\|\, \mathbf{y}^{(n)} - f(\mathbf{X}^{(n)}; \theta) \,\bigr\|_2^2. \qquad (7)$$

For the classification problems, we use the following form in Eq. (1):

$$\ell\bigl(\mathbf{y}^{(n)}, f(\mathbf{X}^{(n)}; \theta)\bigr) = H\bigl(\mathbf{y}^{(n)}, \operatorname{softmax}(f(\mathbf{X}^{(n)}; \theta))\bigr), \qquad (8)$$

where the function $H$ is the cross entropy and $\operatorname{softmax}$ is the softmax function. For the classification problems, we use $\operatorname{softmax}(f(\mathbf{X}; \theta))$ as the prediction model.
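For the classification loss of Eq. (8), a numerically stable sketch (our own helper names, not from the paper) of the softmax and cross-entropy pair:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: shift by the max before exponentiating."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def cross_entropy(y_onehot, z):
    """H(y, softmax(z)) for a one-hot label vector y."""
    p = softmax(z)
    return -float(np.sum(y_onehot * np.log(p + 1e-12)))
```

A convenient property of this pairing is its gradient: for one-hot y, the derivative of the loss with respect to z is softmax(z) - y, which keeps the backward pass simple.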
The model parameters are learned by mini-batch gradient descent. The update rule for a parameter $\theta$ is defined as

$$\theta \leftarrow \theta - \frac{\eta}{B} \sum_{n \in \mathcal{B}} \frac{\partial}{\partial \theta}\, \ell\bigl(\mathbf{y}^{(n)}, f(\mathbf{X}^{(n)}; \theta)\bigr), \qquad (9)$$

where $\eta$ is the learning rate, $\mathcal{B}$ is a mini-batch, and $B$ is the mini-batch size. The specific gradients of the parameters are omitted due to space limitations (see the Appendix). In the training phase, dynamic pooling simply passes the gradient through to the unit selected as the maximum, analogous to ordinary max-pooling. In the mini-batch gradient descent, the learning rate $\eta$ is controlled using the Adam optimizer with the hyperparameters recommended in [29], and the mini-batch size was fixed in advance. We used the same procedure for all the models we compared in our experiments. The detailed settings of the hyperparameters are described for each experimental task in Section IV.

III-D. Relationships to Other Models
Our model can be seen as a generalization of DyBM. The corresponding prediction model of the DyBM is defined as

$$\hat{y}_{t,j} = f_j\Bigl( b_j + \sum_{i} u_{j,i} \sum_{\delta=0}^{\infty} \lambda^{\delta}\, x_{i,\,t-\delta} \Bigr). \qquad (10)$$

This is a special case of our model (Eq. (3)) when only the parameterization of Eq. (4) is used with an infinite patch length $D_k$ for every $k$. In Eq. (10), the conduction delay of DyBM is assumed to be zero. In other words, we extend the summation and the definition of the eligibility trace in DyBM to the convolutional operation in our model. We also extended DyBMs to be applicable to classification problems and neural-network layers.
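The infinite-window, shared-weight special case can be run in a streaming fashion. This sketch (our own, with tanh standing in for the task-specific nonlinearity and U, b as hypothetical weight and bias parameters) predicts from the eligibility trace and then folds in the new observation:

```python
import numpy as np

def dybm_like_predictions(xs, U, b, lam, f=np.tanh):
    """Eq.(10)-style streaming prediction: the trace e_t = lam*e_{t-1} + x_t
    summarizes the entire past in O(dim) memory."""
    e = np.zeros(xs[0].shape)
    preds = []
    for x in xs:
        preds.append(f(b + U @ e))  # predict from the past only
        e = lam * e + x             # update the trace with the new input
    return preds
```

Because the trace is updated recursively, neither prediction nor the trace itself ever requires storing the full history, which is the property the text attributes to the eligibility trace.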
Our model reduces to a VAR model with lag $L$ if we use only Eq. (5), let $f$ be the summation over $k$, and set $\mu = 1$ and $D_k = L$. Eq. (3) then reduces to

$$\hat{y}_{t,j} = b_j + \sum_{i} \sum_{\delta=0}^{L-1} v_{j,i,\delta}\, x_{i,\,t-\delta}, \qquad (11)$$

where we omit $\lambda$ and $u$ because of the above assumptions.

From the above relationships, our model can be seen as an ensemble of convolutional terms (a generalization of the eligibility trace of DyBM) and VAR-like terms. The convolution with exponential time discounting particularly differentiates our model from them.
DyBM can be considered a temporal expansion of the restricted Boltzmann machine (RBM). By replacing the temporal sequences with hidden variables, the RBM's prediction model for the $j$-th hidden variable is

$$\hat{h}_j = \sigma\Bigl( b_j + \sum_{i} W_{j,i}\, x_i \Bigr), \qquad (12)$$

where $\sigma$ is the sigmoid function. Its convolutional extension is the convolutional restricted Boltzmann machine (CRBM). For two-dimensional data, the prediction model for the $j$-th hidden variable of the $k$-th feature map is

$$\hat{h}_{k,j} = \sigma\Bigl( b_k + \sum_{i} \sum_{\delta} W_{k,i,\delta}\, x_{i,\,j+\delta} \Bigr). \qquad (13)$$

From Eq. (13), our model can be seen as a temporal expansion of a CRBM with convolution across time and the special parameterizations in Eqs. (4) and (5) having exponential decay inspired by DyBM. We summarize these relationships in Fig. 4.
IV. Experimental Results
We assessed the effectiveness of our method in numerical experiments. First, we applied it to a real-world event sequence with ambiguous timestamps extracted from an EHR. Since our method was designed for general event sequences, including ordinary time-series data, we then evaluated its effectiveness on real-world time-series data.
IV-A. Prediction from Real-World Event Sequences with Ambiguous Timestamps
We evaluated the proposed method using two real-world event sequence datasets, EHRs for patients at a Japanese hospital [30]. The first dataset included data on patients treated for diabetic nephropathy (DN). We constructed a model for predicting the progression of DN from one stage to the next a fixed number of days after the latest record in the input EHR (binary classification task). The progression label of the $n$-th input EHR indicated whether the patient remained in the earlier stage or had progressed to the later stage. The $n$-th input EHR was represented as a day-by-day sequence of real-valued lab test results, which we represented as a matrix whose horizontal dimension corresponds to the timestamp and whose vertical dimension corresponds to the lab tests: Albumin, Albuminuria, ALT (GPT), Amylase, AST (GOT), Blood Glucose, Blood Platelet Count, BMI, BUN, CPK, CRP, eGFR, HbA1c, Ht, Hgb, K, Na, RBC, Total Bilirubin, Total Cholesterol, Total Protein, Troponin, Uric Acid, WBC count, and GTP. The second dataset included data on patients treated for cardiovascular disease (CVD). We constructed a model for predicting the occurrence of major cardiovascular events a fixed number of days after the latest record in the input EHR (binary classification task). The label of the $n$-th input EHR indicated whether the patient had experienced any of the events. The definition of the input was the same as for the DN case. For both tasks, following [3], the first portion of each dataset was used for training, and the remainder was used for testing. We standardized the attribute values by subtracting the mean and dividing by the standard deviation computed on the training data.
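The standardization step can be sketched as follows (our own assumption: statistics are computed per attribute over observed training cells only, so that the missing-value sentinel of Section II does not distort them; helper names are ours):

```python
import numpy as np

MISSING = -1e9  # missing-value sentinel from Section II

def fit_standardizer(X_train):
    """Per-attribute mean/std over observed (non-sentinel) training cells."""
    masked = np.where(X_train == MISSING, np.nan, X_train)
    mean = np.nanmean(masked, axis=1, keepdims=True)
    std = np.nanstd(masked, axis=1, keepdims=True)
    return mean, std

def standardize(X, mean, std):
    """Standardize observed cells; leave sentinel cells untouched."""
    Z = (X - mean) / std
    return np.where(X == MISSING, MISSING, Z)
```

Fitting on the training split only, then applying the same mean and std to the test split, avoids leaking test statistics into training.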
IV-A1. Implementation
Since we were solving classification tasks, we used Eq. (8) for the objective function. We show the overall structure of the proposed method for the experiments in Fig. 5. We first applied dynamic pooling to the raw matrix. The outputs of the first dynamic pooling were then input to the time-discounting convolution. After that, we applied dynamic pooling again. Finally, we used a fully-connected neural network, with its own weight and bias parameters, as $f$ in Eq. (3), whose activation function is a rectified linear unit (ReLU) [31]. We also used L1 regularization for the hidden units, which are the outputs of the second dynamic pooling, in the optimization of Eq. (1). We tuned the regularization parameter of the L1 regularization and the hyperparameters of the proposed method using the last portion of the training data as validation data, selecting each from a fixed candidate set. We then trained the model using all of the training data and the tuned parameters. We used four different patch lengths $D_k$, including the sequence length itself, in the same proportion in our feature maps.

IV-A2. Results
Table I. AUC for the DN and CVD tasks.
Method                          DN   CVD
DyBM
CNN
CNN w/ dynamic pooling
Proposed w/o dynamic pooling
Proposed w/ dynamic pooling
Table II. Test RMSE (average / best) for the Sunspot and Price tasks.
Method     Sunspot (Average / Best)   Price (Average / Best)
VAR
DyBM
CNN
Proposed
We used the area under the curve (AUC) as the evaluation metric since the tasks were binary classification. We compared the results against those of two baseline methods: DyBM, a state-of-the-art model for time-series data, and CNN, a state-of-the-art model for EHR analysis. For a fair comparison, we tuned their hyperparameters in the same manner as for the proposed method for each task. For prediction with DyBM, we used the prediction model of DyBM defined in Eq. (10) (DyBM). For prediction with CNN, we used the prediction model obtained by replacing the time-discounting convolution in Fig. 5 with the ordinary convolutional layer of Eq. (13) (CNN), and that with the two dynamic poolings in Fig. 5 (CNN w/ dynamic pooling). We also present the results of the proposed method without dynamic pooling. As shown in Table I, the AUC for the proposed method was better than those for the baselines. This shows that the convolutional structure and our temporal parameterization work well for event sequences with ambiguous timestamps. Moreover, the values for the DN dataset were higher than the value reported for a stacked convolutional autoencoder model using the same dataset [3]. The hyperparameters selected for the proposed method differed between the DN and CVD tasks.

IV-B. Prediction from Real-World Time-Series Data
We also evaluated the proposed method using two real-world time-series datasets. The first was a publicly available dataset containing the monthly sunspot number (Sunspot). We constructed a model for predicting the sunspot number for the next month from the observed sunspot time series (autoregression task). The second was a publicly available dataset containing weekly retail gasoline and diesel prices (Price). We constructed a model for predicting the prices for the following week from the observed price time series (autoregression task); its dimensions correspond to eight locations in the U.S. For both tasks, following [14], the first portion of each time series was used for training, and the remainder was used for testing. We normalized each dataset so that the training values for each dimension fell in a fixed range, as in [16].
IV-B1. Implementation
Since we were solving autoregression tasks, we used Eq. (7) for the objective function. In these tasks, the overall structure of the proposed method and the hyperparameter candidates were the same as in the event-sequence experiment in Section IV-A.
IV-B2. Results
We evaluated the methods by the average test root mean squared error (RMSE) over the training iterations and that of the best case. We compared the results against those of three baseline methods: VAR, DyBM, and CNN. For DyBM and CNN, we used the same implementations as in the event-sequence experiment. For VAR, we simply used Eq. (11) as the prediction function. As shown in Table II, the RMSE for the proposed method was comparable to or better than those of the baselines. Moreover, the RMSE values for the proposed method were lower than those reported in [14] and [16] for the Sunspot and Price data, which came from experiments that included other DyBM variants and LSTM models. These results indicate that the convolutional structure and our temporal parameterization work well even for ordinary time-series data. The hyperparameters selected for the proposed method differed between the Sunspot and Price tasks.
V. Conclusion
We proposed a time-discounting convolution method that handles time-shift invariance in event sequences and is robust against uncertainty in timestamps while maintaining the important capabilities of time-series models. Experimental evaluation demonstrated that the proposed method was comparable to or even better than state-of-the-art methods in several prediction tasks using event sequences with ambiguous timestamps and ordinary time-series data. The next step in our work is to develop an online learning algorithm for the proposed method. By leveraging dynamic pooling, we can approximately update the model parameters in an online manner without backpropagation through infinite sequences or storing infinite sequences. Increasing the interpretability of our method is another interesting next step.
Acknowledgments
Takayuki Katsuki and Takayuki Osogami were supported in part by JST CREST Grant Number JPMJCR1304, Japan.
Appendix A: Specific Gradients for Model Parameters
Here, we show the gradients used in learning the model parameters by mini-batch gradient descent in Section III-C. By the chain rule, the gradient of the loss $\ell$ with respect to the parameter $u_{k,i}$ is

$$\frac{\partial \ell}{\partial u_{k,i}} = \sum_{j} \frac{\partial \ell}{\partial \hat{y}_{j}} \sum_{s} \frac{\partial f_j}{\partial z_{k,s}} \frac{\partial z_{k,s}}{\partial u_{k,i}}, \qquad (14)$$

where $z_{k,s}$ is shorthand for the input of the function $f$ on the right-hand side of Eq. (3) for the $k$-th feature map at temporal position $s$, and

$$\frac{\partial z_{k,s}}{\partial u_{k,i}} = \sum_{\delta=0}^{D_k-1} \lambda^{\delta}\, x_{i,\,s-\delta}. \qquad (15)$$

The gradients of $\ell$ with respect to the parameters $v_{k,i,\delta}$ and $b_k$ are almost the same as Eq. (14); they differ only in the gradient of $z_{k,s}$ with respect to $v_{k,i,\delta}$ and $b_k$. We show them simply as

$$\frac{\partial z_{k,s}}{\partial v_{k,i,\delta}} = \mu^{\delta}\, x_{i,\,s-\delta}, \qquad (16)$$

$$\frac{\partial z_{k,s}}{\partial b_k} = 1. \qquad (17)$$

Here, $\partial \ell / \partial \hat{y}_j$ for the autoregression problems with Eq. (7) is

$$\frac{\partial \ell}{\partial \hat{y}_j} = -2\,(y_j - \hat{y}_j), \qquad (18)$$

and that for the classification problems with Eq. (8) is

$$\frac{\partial \ell}{\partial \hat{y}_j} = \operatorname{softmax}(\hat{\mathbf{y}})_j - y_j. \qquad (19)$$

Through the function $f$ and the gradient $\partial f_j / \partial z_{k,s}$, other functions, such as the activation function, and other layers can be applied to our model and the learning algorithm.
References

[1] Y. Cheng, F. Wang, P. Zhang, and J. Hu, "Risk prediction with electronic health records: A deep learning approach," in Proceedings of the 2016 SIAM International Conference on Data Mining. SIAM, 2016, pp. 432–440.
[2] Z. Che, Y. Cheng, S. Zhai, Z. Sun, and Y. Liu, "Boosting deep learning risk prediction with generative adversarial networks for electronic health records," in Data Mining (ICDM), 2017 IEEE International Conference on. IEEE, 2017, pp. 787–792.
[3] T. Katsuki, M. Ono, A. Koseki, M. Kudo, K. Haida, J. Kuroda, M. Makino, R. Yanagiya, and A. Suzuki, "Risk prediction of diabetic nephropathy via interpretable feature extraction from EHR using convolutional autoencoder," Studies in Health Technology and Informatics, vol. 247, pp. 106–110, 2018.
[4] H. Lütkepohl, New Introduction to Multiple Time Series Analysis. Springer Berlin Heidelberg, 2005, vol. Part I.
[5] L. E. Baum and T. Petrie, "Statistical inference for probabilistic functions of finite state Markov chains," The Annals of Mathematical Statistics, vol. 37, no. 6, pp. 1554–1563, 1966.
[6] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning internal representations by error propagation," California Univ San Diego La Jolla Inst for Cognitive Science, Tech. Rep., 1985.
[7] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
[8] G. W. Taylor, G. E. Hinton, and S. T. Roweis, "Modeling human motion using binary latent variables," in Advances in Neural Information Processing Systems, 2007, pp. 1345–1352.
[9] G. E. Hinton and A. D. Brown, "Spiking Boltzmann machines," in Advances in Neural Information Processing Systems, 2000, pp. 122–128.
[10] I. Sutskever and G. Hinton, "Learning multilevel distributed representations for high-dimensional sequences," in Artificial Intelligence and Statistics, 2007, pp. 548–555.
[11] I. Sutskever, G. E. Hinton, and G. W. Taylor, "The recurrent temporal restricted Boltzmann machine," in Advances in Neural Information Processing Systems, 2009, pp. 1601–1608.
[12] T. Osogami and M. Otsuka, "Seven neurons memorizing sequences of alphabetical images via spike-timing dependent plasticity," Scientific Reports, vol. 5, p. 14149, 2015.
[13] ——, "Learning dynamic Boltzmann machines with spike-timing dependent plasticity," arXiv preprint arXiv:1509.08634, 2015.
[14] S. Dasgupta and T. Osogami, "Nonlinear dynamic Boltzmann machines for time-series prediction," in The 31st AAAI Conference on Artificial Intelligence (AAAI-17), January 2017.
[15] H. Kajino, "A functional dynamic Boltzmann machine," in Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-17), 2017, pp. 1987–1993.
[16] T. Osogami, H. Kajino, and T. Sekiyama, "Bidirectional learning for time-series models with hidden units," in Proceedings of the 34th International Conference on Machine Learning (ICML 2017), August 2017, pp. 2711–2720.
[17] E. Choi, M. T. Bahadori, A. Schuetz, W. F. Stewart, and J. Sun, "Doctor AI: Predicting clinical events via recurrent neural networks," in Machine Learning for Healthcare Conference, 2016, pp. 301–318.
[18] S. Xiao, J. Yan, X. Yang, H. Zha, and S. M. Chu, "Modeling the intensity function of point process via recurrent neural networks," in AAAI, 2017, pp. 1597–1603.
[19] Z. Che, S. Purushotham, K. Cho, D. Sontag, and Y. Liu, "Recurrent neural networks for multivariate time series with missing values," Scientific Reports, vol. 8, no. 1, p. 6085, 2018.
[20] A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. J. Lang, "Phoneme recognition using time-delay neural networks," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, no. 3, pp. 328–339, 1989.
[21] X. Ding, Y. Zhang, T. Liu, and J. Duan, "Deep learning for event-driven stock prediction," in IJCAI, 2015, pp. 2327–2333.
[22] D. Singh, E. Merdivan, S. Hanke, J. Kropf, M. Geist, and A. Holzinger, "Convolutional and recurrent neural networks for activity recognition in smart environment," in Towards Integrative Machine Learning and Knowledge Extraction. Springer, 2017, pp. 194–205.
[23] T. J. O'Shea, J. Corgan, and T. C. Clancy, "Unsupervised representation learning of structured radio communication signals," in Sensing, Processing and Learning for Intelligent Machines (SPLINE), 2016 First International Workshop on. IEEE, 2016, pp. 1–5.
[24] K. Bascol, R. Emonet, E. Fromont, and J.-M. Odobez, "Unsupervised interpretable pattern discovery in time series using autoencoders," in Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR). Springer, 2016, pp. 427–438.
[25] T. Katsuki, M. Ono, A. Koseki, M. Kudo, K. Haida, J. Kuroda, M. Makino, R. Yanagiya, and A. Suzuki, "Feature extraction from electronic health records of diabetic nephropathy patients with convolutional autoencoder," in AAAI 2018 Joint Workshop on Health Intelligence, 2018.
[26] R. Sennrich, B. Haddow, and A. Birch, "Neural machine translation of rare words with subword units," in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), vol. 1, 2016, pp. 1715–1725.
[27] S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. H. Torr, "Conditional random fields as recurrent neural networks," in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1529–1537.
[28] F. Wang, N. Lee, J. Hu, J. Sun, and S. Ebadollahi, "Towards heterogeneous temporal clinical event pattern discovery: a convolutional approach," in Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2012, pp. 453–461.
[29] D. Kingma and J. Ba, "Adam: A method for stochastic optimization," in Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), 2015.
[30] M. Makino, M. Ono, T. Itoko, T. Katsuki, A. Koseki, M. Kudo, K. Haida, J. Kuroda, R. Yanagiya, and A. Suzuki, "Artificial intelligence predicts progress of diabetic kidney disease: novel prediction model construction with big data machine learning," 2018.
[31] V. Nair and G. E. Hinton, "Rectified linear units improve restricted Boltzmann machines," in Proceedings of the 27th International Conference on Machine Learning (ICML-10), 2010, pp. 807–814.