Time-series data is one of the most important types of data and is increasingly collected in many different domains. High-resolution data would be extremely beneficial for many analytical applications such as data visualization and short-term forecasting of energy generation, load, or energy prices. The main reason for not working with high-resolution data is the enormous amount of storage space required. For example, in the electrical grid, utility companies serving millions of customers would accumulate petabytes of data when storing high-resolution smart-meter measurements. Even if storage prices are expected to decrease, the benefits of analytical applications will not always justify huge investments in storage. Similar concerns hold for the transmission of such data, which might cause network congestion.
To deal with such huge amounts of fine-grained time-series data, one possibility is to employ compression techniques. In contrast to lossless compression techniques such as Huffman coding and Lempel-Ziv, lossy compression promises much higher compression ratios. When compressing data lossily, one has to ensure that the original data can be retrieved or reconstructed with sufficient quality for the respective applications. That is, tasks like precise short-term load forecasting can only be done if very-close-to-original measurement values can be reconstructed from the compressed data. This requires guarantees regarding the maximum deviation of the values. Although time-series data and lossless compression have been investigated for quite some time, there is little research on lossy compression and retrieval of such data with guarantees. Existing studies either do not compress all types of data well or have disadvantages in terms of runtime. Moreover, none of them propose methods for estimating the performance of their compression techniques.
RNNs are particularly suitable for modeling dynamical systems, as they operate on the current input as well as a trace of previously acquired information (due to recurrent connections), allowing temporal dependencies to be processed directly. RNNs can be employed for a wide range of tasks, as they inherit their flexibility from plain neural networks. Among all RNN architectures, the most successful one at characterizing long-term memory is the long short-term memory (LSTM) network, which learns both short-term and long-term dependencies by enforcing constant error flow through its cell state.
In this work, we propose a novel scheme for lossy compression of time series. We model the encoder and decoder with two LSTM cells working in parallel. Between encoder and decoder, an autoencoder compresses the hidden state and the input observations together. Since the LSTM is strongly capable of learning long-term dependencies, both the encoder and decoder can capture long-term dependencies in the time series, which reduces the amount of information that needs to be transmitted. Because the input sizes of the LSTM and the autoencoder are fixed, we add an extra interpolation step to adapt to changes in the local statistics of the time series, which can significantly improve compression efficiency. The proposed algorithm can also be easily extended to multi-dimensional time series. Our experimental study shows the compression capability of the proposed algorithm.
2.1 Autoencoder
The autoencoder (AE) was first introduced as a dimension-reduction model. An autoencoder takes an input vector $x$ and transforms it into a latent representation $h$. The transformation, typically referred to as the encoder, follows the equation below:
$$h = \sigma(Wx + b)$$
where $W$ and $b$ correspond to the weights and bias in the encoder, and $\sigma(z) = 1/(1+e^{-z})$ is the sigmoid function.
The resulting latent representation $h$ is then mapped back into the reconstructed feature space by the decoder as follows:
$$\hat{x} = \sigma(W'h + b')$$
where $W'$ and $b'$ correspond to the weights and bias in the decoder. The autoencoder is trained by minimizing the reconstruction error $\|x - \hat{x}\|^2$.
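As a concrete illustration, the encoder and decoder maps above can be sketched in a few lines of NumPy; the dimensions, initialization, and class layout here are illustrative choices, not the configuration used in our experiments:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Autoencoder:
    """Minimal AE: h = sigmoid(W x + b), x_hat = sigmoid(W' h + b')."""

    def __init__(self, d_in, d_latent, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(d_latent, d_in))    # encoder weights W
        self.b = np.zeros(d_latent)                              # encoder bias b
        self.Wp = rng.normal(scale=0.1, size=(d_in, d_latent))   # decoder weights W'
        self.bp = np.zeros(d_in)                                 # decoder bias b'

    def encode(self, x):
        return sigmoid(self.W @ x + self.b)       # latent representation h

    def decode(self, h):
        return sigmoid(self.Wp @ h + self.bp)     # reconstruction x_hat

    def reconstruction_error(self, x):
        x_hat = self.decode(self.encode(x))
        return float(np.sum((x - x_hat) ** 2))    # squared error ||x - x_hat||^2

ae = Autoencoder(d_in=8, d_latent=3)
x = np.linspace(0.2, 0.8, 8)        # a toy 8-dimensional input window
h = ae.encode(x)                    # 3-dimensional code
err = ae.reconstruction_error(x)
```

Training would adjust the weights by gradient descent on the reconstruction error; the untrained network above only shows the forward data flow.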
2.2 Recurrent Neural Network
RNNs are discrete-time state-space models trainable by specialized weight-adaptation algorithms. The input to an RNN is a variable-length sequence $x = (x_1, \ldots, x_T)$, which can be processed recursively. When processing each symbol, the RNN maintains an internal hidden state $h_t$. The operation of the RNN at each time step can be formulated as
$$h_t = f(h_{t-1}, x_t; \theta)$$
where $f$ is the deterministic state-transition function and $\theta$ is its parameter. The output of the RNN is computed as
$$y_t = g(h_t; W)$$
where the function $g$ can be modeled as a neural network with weights $W$. In implementation, the function $f$ can be realized by a long short-term memory network.
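The state transition $h_t = f(h_{t-1}, x_t; \theta)$ can be sketched with a vanilla tanh RNN cell; all names, dimensions, and the random sequence below are illustrative:

```python
import numpy as np

def rnn_step(h_prev, x_t, Wh, Wx, bh):
    """One state transition h_t = f(h_{t-1}, x_t) of a vanilla tanh RNN."""
    return np.tanh(Wh @ h_prev + Wx @ x_t + bh)

rng = np.random.default_rng(1)
d_h, d_x = 4, 2
Wh = rng.normal(scale=0.1, size=(d_h, d_h))   # recurrent weights
Wx = rng.normal(scale=0.1, size=(d_h, d_x))   # input weights
bh = np.zeros(d_h)

h = np.zeros(d_h)                       # initial hidden state
for x_t in rng.normal(size=(5, d_x)):   # a length-5 input sequence, processed recursively
    h = rnn_step(h, x_t, Wh, Wx, bh)
```

The same loop works for sequences of any length, which is what makes the recursive formulation suitable for variable-length inputs.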
Although traditional RNNs exhibit a superior capability of modeling nonlinear time-series problems in an effective fashion, several issues remain to be addressed:
Traditional RNNs cannot be trained on time series with very long time lags, which are commonly seen in real-world datasets.
Traditional RNNs rely on predetermined time lags to learn temporal sequence processing, but it is difficult to find the optimal window size automatically.
2.3 Long Short-Term Memory
To overcome the aforementioned disadvantages of traditional RNNs, the long short-term memory (LSTM) neural network is adopted in this study to model time series. LSTM was initially introduced with the objective of modeling long-term dependencies and determining the optimal time lag for time-series problems. An LSTM is composed of one input layer, one recurrent hidden layer, and one output layer. The basic unit in the hidden layer is the memory block, which contains memory cells with self-connections memorizing the temporal state and a pair of adaptive, multiplicative gating units controlling the information flow in the block: an input gate and an output gate controlling the input and output activations of the block.
The memory cell is primarily a recurrently self-connected linear unit, called the Constant Error Carousel (CEC), and the cell state is represented by the activation of the CEC. Because of the CEC, the multiplicative gates can learn when to open and close. By keeping the backpropagated error constant, LSTM overcomes the vanishing gradient problem. Moreover, a forget gate was later added to the memory cell, which prevents the gradient from exploding when learning long time series. The operation of the LSTM can be described as below:
$$i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)$$
$$f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)$$
$$o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)$$
$$c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c)$$
$$h_t = o_t \odot \tanh(c_t)$$
where $i_t$, $f_t$, and $o_t$ denote the input gate, forget gate, and output gate at time $t$, respectively, $h_t$ and $c_t$ represent the hidden state and cell state of the memory cell at time $t$, and $\odot$ is elementwise multiplication.
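A minimal sketch of one LSTM step following these gate equations; the parameter shapes and dictionary-based layout are illustrative only:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step following the standard gate equations.

    W, U, b hold the input-to-hidden weights, recurrent weights, and
    biases of the four transforms (gates i, f, o and candidate g)."""
    i = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])   # input gate
    f = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])   # forget gate
    o = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])   # output gate
    g = np.tanh(W["g"] @ x_t + U["g"] @ h_prev + b["g"])   # candidate cell input
    c = f * c_prev + i * g          # cell state: the CEC with forget gate
    h = o * np.tanh(c)              # hidden state
    return h, c

rng = np.random.default_rng(0)
d_h, d_x = 3, 2
W = {k: rng.normal(scale=0.1, size=(d_h, d_x)) for k in "ifog"}
U = {k: rng.normal(scale=0.1, size=(d_h, d_h)) for k in "ifog"}
b = {k: np.zeros(d_h) for k in "ifog"}
h, c = lstm_step(rng.normal(size=d_x), np.zeros(d_h), np.zeros(d_h), W, U, b)
```

Note how the cell state `c` is updated additively, which is what keeps the error flow constant through time.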
3 Model: Adaptive Piecewise Recurrent Autoencoder
The model presented in this work is a combination of an autoencoder and LSTM, in which the input window size adjusts to the local statistics of the time series. Denote the input observations, compressed signal, and reconstruction at time $t$ as $x_t$, $z_t$, and $\hat{x}_t$, respectively. The input window size, i.e. the dimension of the vector $x_t$, is denoted as $L$.
3.1 Recurrent Autoencoder
On the encoder side, at each time step $t$, the encoder reads the input $x_t$ and extracts its feature. Inside the LSTM cell, the state transition can be described as
$$h_t^e, c_t^e = f_{LSTM}(h_{t-1}^e, c_{t-1}^e, x_t; \theta_e)$$
where $f_{LSTM}$ is the operation described by (2), parametrized by $\theta_e$. The compressed signal is calculated by the output function as
$$z_t = g_e(x_t, h_{t-1}^e)$$
where the output function $g_e$ is implemented by a two-layer neural network. Since the LSTM cell at the decoder can also capture long-term dependencies, we do not need to encode the cell state into the compressed signal. The incorporation of the previous hidden state is not redundant, because that hidden state contains information for predicting the current input, which improves compression efficiency. In fact, popular predictive-coding compression compresses the prediction error $x_t - \tilde{x}_t$, where $\tilde{x}_t$ is the prediction of $x_t$. In the high-dimensional case, the prediction residual is not small, and prediction-based compression may not work well. However, compressing the input signal together with the previous hidden state can augment the compression, and predictive coding is a special case of our method.
On the decoder side, denote the hidden state and cell state of the LSTM as $h_t^d$ and $c_t^d$. When receiving the compressed signal, the decoder first decodes the input feature and memory state:
$$\tilde{x}_t, \tilde{h}_{t-1} = g_d(z_t)$$
where $g_d$ is also implemented by a two-layer neural network. The LSTM state transition at the decoder side is described as
$$h_t^d, c_t^d = f_{LSTM}(h_{t-1}^d, c_{t-1}^d, \tilde{x}_t; \theta_d)$$
The reconstructed signal is obtained by the output function at the decoder:
$$\hat{x}_t = g_o(h_t^d)$$
where the function $g_o$ is implemented by a two-layer neural network, $h_t^d$ is the hidden state of the decoder LSTM cell, and $g_d$ acts as the inverse of the function $g_e$ at the encoder side. Since both the encoder and decoder learn the long-term dependencies at the same time, the dimension of the compressed signal can be significantly reduced. The whole structure is shown in Figure 1.
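The data flow of one encode/decode step can be sketched as follows. This is a hedged illustration only: the learned LSTM transitions and two-layer networks are replaced by random linear stand-ins, the decoder state update is omitted, and all names and dimensions are our own choices:

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_h, d_z = 8, 4, 3    # input window, LSTM hidden state, compressed-signal sizes

# Random linear stand-ins for the learned two-layer networks.
G_enc = rng.normal(scale=0.1, size=(d_z, d_x + d_h))   # encoder output function g_e
G_dec = rng.normal(scale=0.1, size=(d_x + d_h, d_z))   # decoder of feature + memory g_d
Phi = rng.normal(scale=0.1, size=(d_x, d_h + d_x))     # decoder output function g_o

def encode_step(x_t, h_enc):
    """Compress the input window together with the previous encoder hidden state."""
    return G_enc @ np.concatenate([x_t, h_enc])

def decode_step(z_t, h_dec):
    """Recover the input feature, then reconstruct from it and the decoder state."""
    feat = G_dec @ z_t
    x_feat = feat[:d_x]     # decoded input feature
    # (feat[d_x:] would carry the memory information used to update h_dec)
    return Phi @ np.concatenate([h_dec, x_feat])

h_enc, h_dec = np.zeros(d_h), np.zeros(d_h)   # LSTM state updates omitted here
x_t = np.sin(np.linspace(0, np.pi, d_x))      # a toy input window
z_t = encode_step(x_t, h_enc)                 # 3-dimensional compressed signal
x_hat = decode_step(z_t, h_dec)               # reconstruction of the window
```

The point of the sketch is the dimensionality: an 8-point window plus a 4-dimensional hidden state are squeezed into a 3-dimensional transmitted code.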
3.2 Adaptive Piecewise Compression Algorithm
Our piecewise compression algorithm employs a greedy strategy to compress intervals of time series. This is necessary because the size of the state space for an optimal solution is extremely large.
For many kinds of time series, such as seismic signals, the local statistics change dramatically across time: the signal can be smooth and flat in some periods but fluctuate with great variance in others. According to Zhang et al., the learnability of a fully-connected neural network is constrained by the variance of the function to be learned, i.e. the $\ell_1$ norm of the coefficients of the function represented in a 1-Lipschitz basis. For simplicity, we preprocess the training data based on the local total variation, i.e.,
$$TV(x_{t:t+w}) = \sum_{i=t}^{t+w-1} |x_{i+1} - x_i|$$
where $[t, t+w]$ is the local time window. During training, for each trace, we greedily partition the time series into segments such that the total variation of each segment is close to a predefined value specific to the dataset. When the length of a segment is greater than the input dimension $L$ of the RAE, the segment is down-sampled to length $L$, since it is smooth and contains redundant information. When the length of a segment is smaller than $L$, we interpolate the segment to length $L$, because the interpolated version of the segment is easier to learn. This preprocessing facilitates training without losing important information. The following figures show two seismic traces before and after preprocessing.
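A minimal sketch of the greedy total-variation partitioning with resampling; the function names and the linear-interpolation resampler are illustrative choices, and both the `tv_target` value and the RAE input length `L` are dataset-specific:

```python
def total_variation(seg):
    """Sum of absolute first differences over a segment."""
    return sum(abs(seg[i + 1] - seg[i]) for i in range(len(seg) - 1))

def greedy_partition(trace, tv_target):
    """Greedily cut a trace into segments whose total variation
    reaches a predefined, dataset-specific target."""
    segments, start = [], 0
    for end in range(1, len(trace) + 1):
        if total_variation(trace[start:end]) >= tv_target or end == len(trace):
            segments.append(trace[start:end])
            start = end
    return segments

def resample(seg, L):
    """Linearly resample a segment to length L (down- or up-sampling)."""
    n = len(seg)
    if n == 1:
        return [seg[0]] * L
    out = []
    for j in range(L):
        pos = j * (n - 1) / (L - 1)    # fractional index into the segment
        i = min(int(pos), n - 2)
        frac = pos - i
        out.append(seg[i] * (1 - frac) + seg[i + 1] * frac)
    return out

trace = [0.0, 0.1, 0.1, 0.5, -0.4, 0.2, 0.25, 0.3]
segs = [resample(s, 4) for s in greedy_partition(trace, tv_target=0.5)]
```

After this step every segment has the same length `L`, so each one fits the fixed input dimension of the RAE regardless of how long the original interval was.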
After being trained on the preprocessed dataset, the recurrent autoencoder (RAE) is a good compressor for each segment of the time series. The main algorithm (Algorithm 1) applies the RAE incrementally, and the compression result depends on a user-defined maximum tolerable deviation. This deviation is realized as a threshold on the uniform norm between the original and the reconstructed time series, defined as the maximum absolute distance between any pair of corresponding points:
$$\|x - \hat{x}\|_\infty = \max_i |x_i - \hat{x}_i|$$
To improve compression efficiency, the number of input observations is changed according to the local statistics of the time series, so it may differ from the input dimension of the RAE. When the nearby data points are relatively smooth, and thus easier to compress, the input window size can be larger and more data points are compressed at once; in this case, the input observations are down-sampled to meet the input dimension of the RAE. When the nearby data points vary with higher diversity, and are thus relatively difficult to compress, the input window size is decreased; here, the input observations are interpolated to a longer vector to meet the input dimension of the RAE.
Therefore, the time series is encoded at various resolutions: some parts are compressed coarsely while others are compressed with more refined codes, which can significantly improve compression efficiency. In our implementation, at each step, we use binary search to find the best input window size.
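One way to sketch the per-step window search is a binary search over candidate window sizes, accepting the largest window whose round-trip reconstruction stays within the tolerance. The names and the toy `codec` (a stand-in for the trained RAE) are illustrative, and the binary search implicitly assumes the reconstruction error grows with the window size:

```python
def uniform_norm(a, b):
    """Maximum absolute deviation between original and reconstruction."""
    return max(abs(x - y) for x, y in zip(a, b))

def best_window(trace, start, w_min, w_max, eps, codec):
    """Binary-search the largest window whose round-trip error stays
    within the tolerance eps; codec(seg) returns a reconstruction."""
    lo, hi = w_min, min(w_max, len(trace) - start)
    best = w_min
    while lo <= hi:
        w = (lo + hi) // 2
        seg = trace[start:start + w]
        if uniform_norm(seg, codec(seg)) <= eps:
            best, lo = w, w + 1    # within tolerance: try a larger window
        else:
            hi = w - 1             # too coarse: shrink the window
    return best

# Toy codec keeping only the segment mean, a stand-in for the trained RAE.
mean_codec = lambda seg: [sum(seg) / len(seg)] * len(seg)

trace = [0.0, 0.02, 0.01, 0.5, 0.52, 0.51]
w = best_window(trace, 0, 1, len(trace), eps=0.05, codec=mean_codec)
```

On this toy trace the first three nearly-constant points fit in one window, while the jump to 0.5 forces a cut, mirroring how smooth regions get wide windows and volatile regions get narrow ones.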
4 Experimental Study
In this section, we conduct an experimental study to show the compression capability of the proposed algorithm.
4.1 1D Time Series
In this experiment, the time series are seismic signals. We first train the RAE model on a curated seismic dataset with anomalous traces removed. Then we apply the adaptive piecewise compression algorithm with the trained RAE model. All traces in the training and test sets are normalized to [-1, 1]. For each signal trace, tolerable deviations of 0.1 and 0.15 are tested. The dashed red lines represent the boundaries between adaptive input windows; the green line shows the reconstructed signal, and the blue line shows the original signal.
4.2 Multi-dimensional Time Series
In the second experiment, we evaluate the proposed method on human activity time series from smartphone sensors. The signal has three dimensions, corresponding to the X, Y, and Z axes. The training was performed on the first 1000 data points, and the compression was evaluated on the next 1000 data points. The results are shown below.
5 Conclusion
In this work, we propose a novel lossy compression algorithm for time series. The model is a combination of an autoencoder and LSTM, and we apply an adaptive input window to adjust to the local statistics of the time series. Our work has two advantages:
Both the encoder and decoder can learn the long-term dependencies in the time series, reducing the amount of information that needs to be transmitted.
The adaptive input window size lets the algorithm ignore redundant information and focus more on the informative parts of the time series.
In the future, we plan to apply this algorithm to financial data.
-  C. Ratanamahatana, J. Lin, D. Gunopulos, E. Keogh, M. Vlachos, G. Das. Mining Time Series Data. In: O. Maimon, L. Rokach (eds.) Data Mining and Knowledge Discovery Handbook, chap. 56, pp. 1049-1077. Springer (2010)
-  Nga, D., See, O., Do Nguyet Quang, C., Chee, L.: Visualization Techniques in Smart Grid. Smart Grid and Renewable Energy 3(3), 175-185 (2012)
-  Ramanathan, R., Engle, R., Granger, C.W., Vahid-Araghi, F., Brace, C.: Short-run Forecasts of Electricity Loads and Peaks. International Journal of Forecasting 13(2), 161-174 (1997)
-  Aggarwal, S.K., Saini, L.M., Kumar, A.: Electricity Price Forecasting in Deregulated Markets: A Review and Evaluation. International Journal of Electrical Power and Energy Systems 31(1), 13-22 (2009)
-  Eichinger, F., Pathmaperuma, D., Vogt, H., Müller, E.: Data Analysis Challenges in the Future Energy Domain. In: T. Yu, N. Chawla, S. Simoff (eds.) Computational Intelligent Data Analysis for Sustainable Development, chap. 7, pp. 181-242. Chapman and Hall/CRC (2013)
-  Ringwelski, M., Renner, C., Reinhardt, A., Weigely, A., Turau, V.: The Hitchhiker’s Guide to Choosing the Compression Algorithm for Your Smart Meter Data. In: Int. Energy Conference (ENERGYCON), pp. 935-940 (2012)
-  Huffman, D.A.: A Method for the Construction of Minimum Redundancy Codes. Proceedings of the Institute of Radio Engineers 40(9), 1098-1101 (1952)
-  Ziv, J., Lempel, A.: A Universal Algorithm for Sequential Data Compression. IEEE Transactions on Information Theory 23(3), 337-343 (1977)
-  Salomon, D.: A Concise Introduction to Data Compression. Springer (2008)
-  Elmeleegy, H., Elmagarmid, A.K., Cecchet, E., Aref, W.G., Zwaenepoel, W.: Online Piece-wise Linear Approximation of Numerical Streams with Precision Guarantees. In: Int. Conference on Very Large Data Bases (VLDB), pp. 145-156 (2009)
-  Lazaridis, I., Mehrotra, S.: Capturing Sensor-Generated Time Series with Quality Guarantees. In: Int. Conference on Data Engineering (ICDE), pp. 429-440 (2003)
-  Papaioannou, T.G., Riahi, M., Aberer, K.: Towards Online Multi-Model Approximation of Time Series. In: Int. Conference on Mobile Data Management (MDM), pp. 33-38 (2011)
-  S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
-  Bengio, Y., 2007. Learning deep architectures for AI. Technical Report 1312. Dept. IRO, Universite de Montreal.
-  F.-A. Gers, J. Schmidhuber, F. Cummins. Learning to Forget: Continual Prediction with LSTM. Technical Report IDSIA-01-99, 1999.
-  Smith, K., Pechmann, J., Meremonte, M., and Pankow, K. (2011). Preliminary analysis of the mw 6.0 wells, nevada, earthquake sequence. Nevada Bureau of Mines and Geology Special Publication, 36:127-145.
-  Shoaib, M. and Bosch, S. and Incel, O.D. and Scholten, H. and Havinga, P.J.M. (2014) Fusion of Smartphone Motion Sensors for Physical Activity Recognition. Sensors, 14: 10146-10176.
-  Zhang, Yuchen, et al.: On the Learnability of Fully-Connected Neural Networks. In: Artificial Intelligence and Statistics (2017)