HyperTime: Implicit Neural Representation for Time Series

08/11/2022
by   Elizabeth Fons, et al.

Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data. Their robustness as general approximators has been shown in a wide variety of data sources, with applications on image, sound, and 3D scene representation. However, little attention has been given to leveraging these architectures for the representation and analysis of time series data. In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed. We show how these networks can be leveraged for the imputation of time series, with applications on both univariate and multivariate data. Finally, we propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset. We introduce an FFT-based loss to guide training so that all frequencies are preserved in the time series. We show that this network can be used to encode time series as INRs, and their embeddings can be interpolated to generate new time series from existing ones. We evaluate our generative method by using it for data augmentation, and show that it is competitive against current state-of-the-art approaches for augmentation of time series.


1 Introduction

Modeling time series data has been a key topic of research for many years, constituting a crucial component of applications in a wide variety of areas such as climate modeling, medicine, biology, retail and finance Lim and Zohren (2021). Traditional methods for time series modeling have relied on parametric models informed by expert knowledge. However, the development of modern machine learning methods has provided purely data-driven techniques to learn temporal relationships. In particular, neural network-based methods have gained popularity in recent times, with applications to a wide range of tasks, such as time series classification Ismail Fawaz et al. (2020), clustering Ma et al. (2019); Alqahtani et al. (2021), segmentation Perslev et al. (2019); Zeng et al. (2022), anomaly detection Choi et al. (2021); Xu et al. (2018); Hundman et al. (2018), upsampling Oh et al. (2020); Bellos et al. (2019), imputation Liu (2018); Luo et al. (2018); Cao et al. (2018), forecasting Lim and Zohren (2021); Torres et al. (2021) and synthesis Alaa et al. (2021); Yoon et al. (2019a); Jordon et al. (2019). Notably, the generation of time series data for augmentation has remained an open problem, and is currently gaining interest due to the large number of potential applications, such as medical and financial datasets, where data cannot be shared, either for privacy reasons or due to proprietary restrictions Jordon et al. (2021, 2019); Assefa et al. (2020); Coletta et al. (2021).

In recent years, implicit neural representations (INRs) have gained popularity as an accurate and flexible method to parameterize signals, such as image, video, audio and 3D scene data Sitzmann et al. (2020b); Mildenhall et al. (2020). Conventional methods for data encoding often rely on discrete representations, such as data grids, which are limited by their spatial resolution and present inherent discretization artifacts. In contrast, implicit neural representations encode data in terms of continuous functional relationships between signals, and are thus decoupled from spatial resolution. In practical terms, INRs provide a new data representation framework that is resolution-independent, with many potential applications to time series data, where irregularly sampled and missing data are common occurrences Fang and Wang (2020). However, there are currently no works exploring the suitability of INRs for time series representation and analysis.

In this work, we propose an implicit neural representation for univariate and multivariate time series data. We compare the performance of different activation functions in terms of reconstruction accuracy and training convergence, and we formulate and compare different strategies for data imputation in time series, relying on INRs (Section 4.2). Finally, we combine these representations with a hypernetwork architecture, in order to learn a prior over the space of time series. The training of our hypernetwork takes into account the accurate reconstruction of both the time series signals and their respective power spectra. This motivates us to propose a Fourier-based loss that proves to be crucial in guiding the learning process. The advantage of employing such a Fourier-based loss is that it allows our hypernetwork to preserve all frequencies in the time series representation. In Section 4.3, we leverage the latent embeddings learned by the hypernetwork for the synthesis of new time series by interpolation, and show that our method performs competitively against recent state-of-the-art methods for time series augmentation.

2 Related Work

Implicit Neural Representations

INRs (or coordinate-based neural networks) have recently gained popularity in computer vision applications. The usual implementation of INRs consists of a fully-connected neural network (MLP) that maps coordinates (e.g. xyz-coordinates) to the corresponding values of the data, essentially encoding their functional relationship in the network. One of the main advantages of this approach for data representation is that the information is encoded in a continuous, grid-free representation that provides a built-in non-linear interpolation of the data. This avoids the usual artifacts that arise from discretization, and has been shown to combine flexible and accurate data representation with high memory efficiency Sitzmann et al. (2020b); Tancik et al. (2020). While INRs have been shown to work on data from diverse sources, such as video, images and audio Sitzmann et al. (2020b); Chen et al. (2021); Rott Shaham et al. (2021), their recent popularity has been motivated by multiple applications in the representation of 3D scene data, such as 3D geometry Park et al. (2019); Mescheder et al. (2019); Sitzmann et al. (2020a, 2019) and object appearance Mildenhall et al. (2020); Sztrajman et al. (2021).

In early architectures, INRs showed a lack of accuracy in the encoding of high-frequency details of signals. Mildenhall et al. (2020) proposed positional encodings to address this issue, and Tancik et al. (2020) further explored them, showing that by using Fourier-based features in the input layer, the network is able to learn the full spectrum of frequencies from data. Concurrently, Sitzmann et al. (2020b) tackled the encoding of high-frequency data by proposing the use of sinusoidal activation functions (SIREN: Sinusoidal Representation Networks), and Benbarka et al. (2022) showed the equivalence between Fourier features and single-layer SIRENs. Our INR architecture for time series data (Section 3.1) is based on the SIREN architecture by Sitzmann et al. In Section 4.1 we compare the performance of different activation layers, in terms of reconstruction accuracy and training convergence speed, for both univariate and multivariate time series.

Hypernetworks

A hypernetwork is a neural network architecture designed to predict the weight values of a secondary neural network, denominated a HypoNetwork Sitzmann et al. (2020a). The concept of a hypernetwork was formalized by Ha et al. (2017), drawing inspiration from methods in evolutionary computing Stanley et al. (2009). While convolutional encoders have been likened to the function of the human visual system Skorokhodov et al. (2021), the analogy cannot be extended to convolutional decoders, and some authors have argued that hypernetworks much more closely match the behavior of the prefrontal cortex Russin et al. (2020). Hypernetworks have been praised for their expressivity, for compression due to weight sharing, and for their fast inference times Skorokhodov et al. (2021). They have been leveraged for multiple applications, including few-shot learning Rusu et al. (2019); Zhao et al. (2020), continual learning von Oswald et al. (2020) and architecture search Zhang et al. (2019); Brock et al. (2018). Moreover, in the last two years some works have started to leverage hypernetworks for the training of INRs, enabling the learning of latent encodings of data while maintaining the flexible and accurate reconstruction of signals provided by INRs. This approach has been implemented with different hypernetwork architectures to learn priors over image data Sitzmann et al. (2020b); Skorokhodov et al. (2021), 3D scene geometry Littwin and Wolf (2019); Sitzmann et al. (2019, 2020a) and material appearance Sztrajman et al. (2021). Tancik et al. (2021) leverage hypernetworks to speed up the training of INRs by providing learned initializations of the network weights. Sitzmann et al. (2020b) combine a set encoder with a hypernetwork decoder to learn a prior over INRs representing image data, and apply it to image in-painting. Our hypernetwork architecture from Section 3 is similar to Sitzmann et al.'s; however, we learn a prior over the space of time series and leverage it for new data synthesis through interpolation of the learned embeddings. Furthermore, our architecture implements a Fourier-based loss, which we show is crucial for the accurate reconstruction of time series datasets (Section 4.3).

Time Series Generation

Realistic time series generation has previously been studied in the literature using generative adversarial networks (GANs). In the TimeGAN architecture Yoon et al. (2019b), realistic generation of temporal patterns was achieved by jointly optimizing supervised and adversarial objectives to learn an embedding space. QuantGAN Wiese et al. (2020) consists of generator and discriminator functions represented by temporal convolutional networks, which allows it to synthesize long-range dependencies, such as the volatility clusters that are characteristic of financial time series. TimeVAE Desai et al. (2021) was recently proposed as a variational autoencoder alternative to GAN-based time series generation. GANs and VAEs are typically used for creating statistical replicas of the training data, rather than the distributionally new scenarios needed for data augmentation. More recently, Alaa et al. (2021) presented Fourier Flows, a normalizing flow model for time series data that leverages the frequency domain representation, currently considered, together with TimeGAN, as state-of-the-art for time series augmentation.

Data augmentation is well established in computer vision tasks due to the simplicity of label-preserving geometric image transformation techniques, but it is still not widely used for time series, with some early work discussed in the literature Iwana and Uchida (2021). For example, simple augmentation techniques applied to financial price time series, such as adding noise or time warping, were shown to improve the quality of next-day price prediction models Fons et al. (2021).

3 Formulation

In this Section we describe the network architectures that we use to encode time series data (Subsection 3.1), and the hypernetwork architecture (HyperTime) leveraged for prior learning and new data generation (Subsection 3.2).

3.1 Time Series Representation

In Figure 1 we present a diagram of the INR used for univariate time series. The network is composed of fully-connected layers with sine activations (SIREN Sitzmann et al. (2020b)):

$$\phi_i(\mathbf{x}_i) = \sin\left(\omega_0 \, W_i \mathbf{x}_i + b_i\right) \qquad (1)$$

where $\phi_i$ denotes the $i$-th layer of the network, with weights $W_i$ and biases $b_i$. The general factor $\omega_0$, which multiplies the network weights, determines the order of magnitude of the frequencies used to encode the signal. The input and output of the INR are one-dimensional, and correspond to the time coordinate $t$ and the time series value $y(t)$. Training of the network is done in a supervised manner with an MSE loss on a GeForce GTX 1080 Ti GPU. After training, the network encodes a continuous representation of the functional relationship $t \mapsto y(t)$ for a single time series.
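As a concrete illustration, below is a minimal PyTorch sketch of such an INR. The hidden width, depth, $\omega_0$ value and optimizer settings are illustrative assumptions rather than the paper's exact configuration, and the SIREN-specific weight initialization is omitted for brevity.

```python
# Minimal PyTorch sketch of a SIREN-style INR for one time series.
# Layer widths, omega_0 and the training schedule are illustrative choices.
import torch
import torch.nn as nn


class SineLayer(nn.Module):
    """Fully-connected layer with sine activation: sin(omega_0 * (W x + b))."""

    def __init__(self, in_features, out_features, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))


class TimeSeriesINR(nn.Module):
    """Maps a time coordinate t in [-1, 1] to the series value(s) y(t)."""

    def __init__(self, hidden=64, layers=3, channels=1, omega_0=30.0):
        super().__init__()
        blocks = [SineLayer(1, hidden, omega_0)]
        blocks += [SineLayer(hidden, hidden, omega_0) for _ in range(layers - 1)]
        self.net = nn.Sequential(*blocks, nn.Linear(hidden, channels))

    def forward(self, t):
        return self.net(t)


# Fit one series: t is normalized to [-1, 1], y holds the observed values.
# Setting channels > 1 gives the multivariate case discussed below.
y = torch.randn(200, 1)                                # stand-in for a real series
t = torch.linspace(-1, 1, y.shape[0]).unsqueeze(-1)
model = TimeSeriesINR(channels=y.shape[1])
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(2000):
    opt.zero_grad()
    loss = ((model(t) - y) ** 2).mean()                # MSE reconstruction loss
    loss.backward()
    opt.step()
```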

Figure 1: Diagram of the implicit neural representation (INR) for univariate time series. Neurons with a black border use sine activations. The network is composed of fully-connected layers.

The architecture from Figure 1 can be modified to encode multivariate time series, by simply increasing the number of neurons of the output layer to match the number of channels of the signal. Due to weight-sharing, this adds a potential for data compression of the time series. In addition, the simultaneous encoding of multiple correlated channels can be leveraged for channel imputation, as we will show in Section 4.2.

3.2 Time Series Generation with HyperTime

In Figure 2 we display a diagram of the HyperTime architecture, which allows us to leverage INRs to learn priors over the space of time series. The Set Encoder (green network), composed of SIREN layers Sitzmann et al. (2020b), takes as input a pair of values corresponding to the time coordinate $t$ and the time series signal $y(t)$. Each pair of input values is encoded into a 40-value embedding and fed to the HyperNet decoder (red network), an MLP composed of fully-connected layers with ReLU activations. The output of the HyperNet is a one-dimensional vector containing the network weights of an INR that encodes the time series data from the input. The INR architecture used within HyperTime is the same as described in the previous section and illustrated in Figure 1. Following previous works Sitzmann et al. (2020a), in order to avoid ambiguities we refer to these predicted INRs as HypoNets.

Figure 2: Diagram of the HyperTime network architecture. Each pair of time coordinate $t$ and time series value $y(t)$ is encoded as a 40-value embedding by the Set Encoder. The HyperNet decoder learns to predict HypoNet weights from the embeddings. During training, the output of the HyperNet is used to build a HypoNet and evaluate it on the input time coordinates. The loss is computed as the difference between $y(t)$ and the output of the HypoNet, $\hat{y}(t)$.

During the training of HyperTime, we use the weights predicted by the HyperNet decoder to instantiate a HypoNet and evaluate it on the input time coordinate $t$, producing the predicted time series value $\hat{y}(t)$. The entire chain of operations is implemented within the same differentiable pipeline, and hence the training loss can be computed as the difference between the ground truth time series signal $y(t)$ and the value $\hat{y}(t)$ predicted by the HypoNet.
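To make this forward pass concrete, below is a minimal PyTorch sketch. The layer sizes, the mean-pooling of the per-sample embeddings, and the two-layer HypoNet are assumptions made for brevity (the paper's HypoNet is the multi-layer SIREN of Section 3.1, and its Set Encoder uses SIREN layers rather than a plain ReLU MLP).

```python
# Sketch of the HyperTime forward pass: a set encoder pools per-sample embeddings
# into one latent code, a HyperNet decoder maps that code to a flat weight vector,
# and the weights are reshaped to evaluate a small HypoNet functionally.
import torch
import torch.nn as nn
import torch.nn.functional as F

HIDDEN = 32                                            # HypoNet hidden width (assumed)
N_WEIGHTS = (1 * HIDDEN + HIDDEN) + (HIDDEN * 1 + 1)   # weights of a two-layer HypoNet

set_encoder = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 40))
hyper_net = nn.Sequential(nn.Linear(40, 256), nn.ReLU(), nn.Linear(256, N_WEIGHTS))


def hypo_forward(weights, t, omega_0=30.0):
    """Evaluate the HypoNet defined by a flat weight vector at time coordinates t."""
    i = 0
    w1 = weights[i:i + HIDDEN].view(HIDDEN, 1); i += HIDDEN
    b1 = weights[i:i + HIDDEN];                 i += HIDDEN
    w2 = weights[i:i + HIDDEN].view(1, HIDDEN); i += HIDDEN
    b2 = weights[i:i + 1]
    h = torch.sin(omega_0 * F.linear(t, w1, b1))       # sine activation (SIREN layer)
    return F.linear(h, w2, b2)


# One training step for a single series (t, y), both of shape (T, 1).
t = torch.linspace(-1, 1, 200).unsqueeze(-1)
y = torch.sin(8 * t) + 0.1 * torch.randn_like(t)
z = set_encoder(torch.cat([t, y], dim=-1)).mean(dim=0)  # pooled 40-d embedding
weights = hyper_net(z)                                   # predicted HypoNet weights
y_hat = hypo_forward(weights, t)
loss = F.mse_loss(y_hat, y)
loss.backward()                                          # gradients flow end to end
```

At generation time the same decoding path is reused, except that the embedding is obtained by interpolating embeddings of existing series rather than by encoding a single series (Section 4.3).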

After the training of HyperTime, the Set Encoder is able to generate latent embeddings for entire time series. In Section 4.3, we show that these embeddings can be interpolated to synthesize new time series signals from known ones, which can be leveraged for data augmentation (see additional material for a pseudo-code of the procedure).

Loss

The training of HyperTime is done by optimizing the following loss, which contains an MSE reconstruction term $\mathcal{L}_{\mathrm{MSE}}$ and two regularization terms, $\mathcal{L}_{\mathrm{weights}}$ and $\mathcal{L}_{\mathrm{latent}}$, for the network weights and the latent embeddings respectively:

$$\mathcal{L} = \mathcal{L}_{\mathrm{MSE}} + \lambda_{1}\,\mathcal{L}_{\mathrm{weights}} + \lambda_{2}\,\mathcal{L}_{\mathrm{latent}} \qquad (2)$$

In addition, we introduce a Fourier-based loss $\mathcal{L}_{\mathrm{FFT}}$ that focuses on the accurate reconstruction of the power spectrum of the ground truth signal (see Supplement for more details):

$$\mathcal{L}_{\mathrm{FFT}} = \big\| \, |\mathcal{F}(\hat{y})| - |\mathcal{F}(y)| \, \big\|_{1} \qquad (3)$$

where $\mathcal{F}$ denotes the Fourier transform, $y$ is the ground truth signal and $\hat{y}$ is the HypoNet prediction. In Section 4.3, we show that $\mathcal{L}_{\mathrm{FFT}}$ is crucial for the accurate reconstruction of the time series signals.
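A sketch of how these terms can be combined in practice is given below. The use of rfft, the L1 distance between magnitude spectra and the lambda values are assumptions; only the structure of the loss is described above.

```python
# Sketch of the HyperTime training objective (Eqs. 2-3): MSE reconstruction,
# L2 regularization of the predicted HypoNet weights and of the latent embedding,
# plus a Fourier-based term comparing magnitude spectra.
import torch


def fft_loss(y_hat, y):
    """Mean absolute difference between FFT magnitude spectra (over the time axis)."""
    spec_hat = torch.fft.rfft(y_hat, dim=0).abs()
    spec_true = torch.fft.rfft(y, dim=0).abs()
    return (spec_hat - spec_true).abs().mean()


def hypertime_loss(y_hat, y, hypo_weights, z, lam_w=1e-2, lam_z=1e-2, lam_fft=1.0):
    """Total loss: MSE + weight/latent regularization + Fourier-based term."""
    mse = ((y_hat - y) ** 2).mean()
    reg = lam_w * hypo_weights.pow(2).mean() + lam_z * z.pow(2).mean()
    return mse + reg + lam_fft * fft_loss(y_hat, y)
```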

4 Experiments

4.1 Reconstruction

We start by showing that encoding time series using SIRENs leads to a lower reconstruction error than using implicit networks with other activations. We use univariate and multivariate time series datasets from the UCR archive Bagnall et al. (2017) (the datasets can be downloaded from the project's website, www.timeseriesclassification.com, Bagnall et al.). We selected datasets with different characteristics: short or long time series and, in the case of the multivariate datasets, many features (in some cases, more features than time steps). We sample 300 time series (or the maximum number available) from each dataset, train a single SIREN for each time series and calculate the reconstruction error. For comparison, we train implicit networks using ReLU, Tanh and Sigmoid activations. As a sample case, we show in Figure 3 the losses (left) and reconstruction plots (right) for one of the univariate datasets (NonInvasiveFetalECGThorax1). Here we observe that sine activations converge much faster, and to lower error values, than the other activation functions (for the full set of loss and reconstruction plots, along with a description of each dataset, see the additional materials). A summary of results can be found in Table 1, where the MSE is at least an order of magnitude lower for sine activations than for the other activation layers.
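A minimal sketch of this activation comparison on a toy signal is shown below; the network size, iteration count and synthetic series are illustrative stand-ins for the UCR data.

```python
# Sketch: fit the same coordinate MLP with sine, ReLU, Tanh or Sigmoid activations
# and record the final reconstruction MSE. Widths, depth and iterations are arbitrary.
import torch
import torch.nn as nn


class Sine(nn.Module):
    def __init__(self, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0

    def forward(self, x):
        return torch.sin(self.omega_0 * x)


def make_inr(act):
    return nn.Sequential(nn.Linear(1, 64), act(), nn.Linear(64, 64), act(), nn.Linear(64, 1))


def fit_mse(model, t, y, iters=2000, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = ((model(t) - y) ** 2).mean()
        loss.backward()
        opt.step()
    return loss.item()


t = torch.linspace(-1, 1, 500).unsqueeze(-1)
y = torch.sin(20 * t) + 0.3 * torch.sin(55 * t)      # toy series with two frequencies
for name, act in {"sine": Sine, "relu": nn.ReLU, "tanh": nn.Tanh, "sigmoid": nn.Sigmoid}.items():
    print(name, fit_mse(make_inr(act), t, y))
```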

Figure 3: Comparison of implicit networks using different activation functions: MSE loss (log scale) vs. training iterations (left) and reconstructed time-series values vs. time steps (right).
Dataset                      | Sine    | ReLU    | Tanh    | Sigmoid
Univariate
Crop                         | 5.1e-06 | 5.4e-03 | 2.8e-02 | 5.1e-01
NonInvasiveFetalECGThorax1   | 2.3e-05 | 2.8e-02 | 5.7e-02 | 8.1e-02
PhalangesOutlinesCorrect     | 7.5e-06 | 1.9e-02 | 1.4e-01 | 3.3e-01
FordA                        | 9.2e-06 | 1.4e-01 | 1.5e-01 | 1.5e-01
Multivariate
Cricket                      | 1.6e-04 | 4.2e-03 | 5.1e-03 | 1.6e-02
DuckDuckGeese                | 9.1e-05 | 8.0e-04 | 8.7e-04 | 9.1e-04
MotorImagery                 | 1.7e-03 | 1.1e-02 | 1.1e-02 | 1.8e-02
PhonemeSpectra               | 1.1e-06 | 6.0e-03 | 1.6e-02 | 1.8e-02
Table 1: Comparison (MSE) of implicit networks using different activation functions on univariate and multivariate time series from the UCR archive.

4.2 Imputation

Real-world time series data tend to suffer from missing values, in some cases reaching high missing rates that make the data difficult to use Fang and Wang (2020). Motivated by this, we first use a SIREN trained on the available data points to infer the missing values. Besides fitting a SIREN to the time series, we include a simple regularization term based on a total variation (TV) prior Lysaker (2006). The TV prior samples random points from the available points of the time series (those that do not have missing values), and the regularization consists of the L1 norm of the signal's gradient evaluated at those points.
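A minimal sketch of such a TV regularizer is given below, using automatic differentiation to obtain the INR's time derivative; the number of sampled points and the regularization weight are illustrative assumptions.

```python
# Sketch of the total-variation (TV) regularizer for imputation: sample time
# coordinates from the observed part of the series and penalize the L1 norm of
# the INR's time derivative at those points.
import torch


def tv_regularizer(model, t_observed, n_samples=64):
    """L1 norm of d y_hat / d t at randomly sampled observed time coordinates."""
    idx = torch.randint(0, t_observed.shape[0], (n_samples,))
    t = t_observed[idx].clone().requires_grad_(True)
    y_hat = model(t)
    grad = torch.autograd.grad(y_hat.sum(), t, create_graph=True)[0]
    return grad.abs().mean()


# Usage inside the imputation training loop (model and optimizer as in Section 3.1):
# loss = ((model(t_obs) - y_obs) ** 2).mean() + 0.1 * tv_regularizer(model, t_obs)
```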

We compare this approach with common time series imputation methods: mean imputation (which fills missing values with the average), kNN, cubic spline and linear interpolation. We are not solely interested in the reconstruction error, but also in the accurate and smooth reproduction of the Fourier spectrum of the original time series. Therefore, we evaluate both the MSE of the time series reconstruction and the MAE between the Fourier spectra of the ground truth and the reconstruction, which we name the Fourier Error (FFTE).

Table 2 shows the comparison of SIREN, with and without the prior, against the classical methods for different ratios of missing values. The reconstruction error of the baseline methods tends to be low, but when we compare the spectra through the FFTE, the lowest errors are achieved by SIREN, except at the largest fractions of missing values, where for small time series such as Crop or Phalanges, which have fewer than 100 time steps, both the reconstruction error and the FFTE are poor. With regard to adding a prior over the gradient, for very short time series such as Crop and Phalanges (which are 46 and 80 time steps long, respectively), the prior improves the reconstruction error compared to SIREN without a prior, and the same holds for the FFTE. Figure 4 shows the comparison of SIREN, SIREN plus the TV prior, and linear interpolation for a randomly selected time series from the NonInvasiveFetalECGThorax1 (NonInv) dataset with missing data points (the plot is zoomed in on a segment of 250 time steps to highlight the imputation characteristics). The linear imputation is not smooth, so it is not a good representation of the series, while SIREN is smooth but tends to deviate more from the series. SIREN plus the TV prior is a good compromise, maintaining the smoothness of the series while also deviating less from the ground truth.

Dataset    Miss. | SIREN         | SIREN_TV      | Mean          | kNN           | Cubic Spline  | Linear
                 | MSE      FFTE | MSE      FFTE | MSE      FFTE | MSE      FFTE | MSE      FFTE | MSE      FFTE
Crop       0.0   | 5.5e-07  0.00 | 1.7e-06  0.01 | -        -    | -        -    | -        -    | -        -
Crop       0.1   | 6.5e-03  0.43 | 1.2e-03  0.18 | 1.7e-02  3.01 | 1.7e-02  3.03 | 6.7e-07  2.25 | 2.2e-06  2.25
Crop       0.5   | 1.2e-01  1.80 | 1.3e-02  0.63 | 1.7e-02  3.02 | 1.6e-02  2.97 | 8.3e-05  2.25 | 7.6e-05  2.25
Crop       0.7   | 2.9e-01  2.52 | 3.8e-02  0.95 | 1.7e-02  3.03 | 1.7e-02  3.03 | 9.5e-03  2.25 | 8.6e-04  2.24
Crop       0.9   | 5.4e-01  2.74 | 3.5e-01  2.02 | 1.7e-02  3.02 | 1.7e-02  3.00 | 1.1e+02  2.49 | 1.6e-02  2.24
NonInv     0.0   | 1.5e-06  0.02 | 3.5e-06  0.04 | -        -    | -        -    | -        -    | -        -
NonInv     0.1   | 2.1e-06  0.03 | 4.7e-06  0.04 | 1.7e-02  3.00 | 1.6e-02  2.97 | 3.5e-07  2.25 | 2.2e-06  2.25
NonInv     0.5   | 9.2e-05  0.15 | 7.6e-05  0.14 | 1.7e-02  3.00 | 1.7e-02  3.02 | 1.1e-04  2.25 | 1.1e-04  2.25
NonInv     0.7   | 1.2e-03  0.46 | 1.3e-03  0.45 | 1.7e-02  2.99 | 1.7e-02  2.99 | 1.7e-02  2.25 | 6.9e-04  2.24
NonInv     0.9   | 1.5e-02  1.40 | 1.6e-02  1.55 | 1.6e-02  2.98 | 1.6e-02  2.97 | 2.8e-01  2.27 | 1.8e-02  2.24
Phalanges  0.0   | 4.3e-07  0.00 | 1.7e-06  0.01 | -        -    | -        -    | -        -    | -        -
Phalanges  0.1   | 9.0e-04  0.22 | 7.6e-04  0.20 | 1.7e-02  3.00 | 1.7e-02  3.05 | 3.4e-07  2.25 | 1.7e-06  2.25
Phalanges  0.5   | 2.8e-02  1.12 | 1.1e-02  0.74 | 1.7e-02  3.00 | 1.7e-02  3.04 | 3.5e-05  2.25 | 1.1e-04  2.25
Phalanges  0.7   | 1.1e-01  1.94 | 4.3e-02  1.27 | 1.6e-02  2.99 | 1.7e-02  2.99 | 2.3e-03  2.25 | 6.2e-04  2.24
Phalanges  0.9   | 3.3e-01  2.75 | 2.2e-01  2.26 | 1.6e-02  2.99 | 1.7e-02  2.99 | 7.1e-02  2.26 | 1.4e-02  2.24
FordA      0.0   | 2.1e-06  0.02 | 4.1e-06  0.03 | -        -    | -        -    | -        -    | -        -
FordA      0.1   | 7.2e-06  0.04 | 3.3e-05  0.09 | 1.7e-02  3.00 | 1.7e-02  3.02 | 5.3e-07  2.25 | 1.7e-06  2.25
FordA      0.5   | 2.0e-03  0.54 | 5.1e-03  0.97 | 1.7e-02  3.02 | 1.7e-02  3.01 | 9.4e-05  2.25 | 8.4e-05  2.25
FordA      0.7   | 1.9e-02  1.57 | 2.9e-02  2.06 | 1.6e-02  2.96 | 1.6e-02  2.97 | 3.4e-03  2.25 | 6.4e-04  2.24
FordA      0.9   | 1.3e-01  3.52 | 1.5e-01  3.78 | 1.7e-02  3.01 | 1.7e-02  2.99 | 1.2e-01  2.26 | 1.6e-02  2.24
Table 2: Comparison of implicit neural representations (INRs) with classical imputation methods for different fractions of missing values ("Miss."). For each method we report the MSE of the reconstruction and the Fourier reconstruction error (FFTE).

Figure 4: Comparison of imputation methods (left to right: Linear, SIREN, SIREN + TV) for the NonInv dataset, zoomed in on a 250-time-step segment of a series with missing values.

4.3 Time Series Generation

To evaluate the utility of learning a prior over the space of implicit functions, we use the Set Encoder and the HyperNet to generate new time series. We do so by projecting time series into the latent space of the HyperTime network and interpolating the latent vectors (a minimal sketch of this step is given below). This is similar to training an autoencoder and interpolating its latent space, except that the output of the HyperTime decoder is the set of weights of a SIREN network. We follow the experimental setup proposed in Alaa et al. (2021) for the evaluation, where the performance of the synthetic data is measured by a predictive score (MAE) corresponding to the prediction accuracy of an off-the-shelf neural network trained on the synthetic data and tested on the real data. Additionally, to measure the quality of the synthetic data, we use the precision and recall averaged over all time steps, which are then combined into a single F-score. We use the same datasets as before, and add two datasets that were used in Fourier Flows Alaa et al. (2021) and TimeGAN Yoon et al. (2019b): Google stocks data and UCI Energy data. We compare our HyperTime model with data generated using PCA, and with Fourier Flows and TimeGAN, two state-of-the-art methods for time series generation. Table 3 shows the performance scores for all models and datasets. Additionally, we visualize the generated samples using t-SNE plots (Figure 5), where we can see that the data generated by HyperTime exhibits the same patterns as the original data. In the case of Fourier Flows, the UCR datasets NonInv and Phalanges do not show good agreement.
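The generation step itself reduces to blending Set Encoder embeddings and decoding with the HyperNet. A minimal sketch is shown below, reusing the set_encoder, hyper_net and hypo_forward names assumed in the Section 3.2 sketch; the blending weight alpha is arbitrary, and this is not the paper's actual pseudo-code (see the additional material for that).

```python
# Sketch of generation by latent interpolation: encode two known series, blend
# their embeddings, and decode the blend into a new series via the HyperNet.
import torch


def interpolate_series(set_encoder, hyper_net, hypo_forward, t, y_a, y_b, alpha=0.5):
    """Return a new series decoded from the alpha-blend of two latent embeddings."""
    z_a = set_encoder(torch.cat([t, y_a], dim=-1)).mean(dim=0)
    z_b = set_encoder(torch.cat([t, y_b], dim=-1)).mean(dim=0)
    z_new = (1.0 - alpha) * z_a + alpha * z_b       # interpolated embedding
    weights = hyper_net(z_new)                      # HypoNet weights for the new series
    return hypo_forward(weights, t)
```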

The synthesis of time series via principal component analysis (PCA) is performed in a similar fashion to our HyperTime generation pipeline. We apply PCA to decompose the time series into a basis of principal components. The coefficients of these components constitute a latent representation for each time series of the dataset, and we can interpolate between embeddings of known time series to synthesize new ones. The main limitation of this procedure, besides its linearity, is that it can only be applied to datasets of equally sampled time series.
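A minimal sketch of this PCA baseline is given below, assuming the dataset is stacked as an array of equal-length series; the number of components and the interpolation weight are illustrative choices.

```python
# Sketch of the PCA baseline: fit PCA on equal-length series, treat component
# coefficients as latent codes, interpolate two of them, and map back to a series.
import numpy as np
from sklearn.decomposition import PCA

X = np.random.randn(300, 128)                    # stand-in: 300 series of length 128
pca = PCA(n_components=16).fit(X)
Z = pca.transform(X)                             # latent coefficients per series
z_new = 0.5 * Z[0] + 0.5 * Z[1]                  # interpolate two embeddings
x_new = pca.inverse_transform(z_new[None, :])[0] # synthesized series
```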

                    Crop     NonInv   Phalanges   Energy   Stock
PCA
  MAE               0.050    0.019    0.050       0.007    0.110
  F1 Score          0.999    0.999    0.999       0.998    0.999
HyperTime (Ours)
  MAE               0.040    0.005    0.026       0.058    0.013
  F1 Score          0.999    0.996    0.998       0.999    0.995
TimeGAN
  MAE               0.048    0.028    0.108       0.056    0.173
  F1 Score          0.831    0.914    0.960       0.479    0.938
Fourier Flows
  MAE               0.040    0.018    0.056       0.029    0.008
  F1 Score          0.991    0.990    0.992       0.945    0.992
Table 3: Performance scores for data generated with HyperTime and for all baselines.
Figure 5: t-SNE visualization on univariate datasets (in rows: Stocks, Energy, Crop, NonInv and Phalanges), using different time series generation methods (in columns: HyperTime, PCA, Fourier Flows and TimeGAN). Blue corresponds to original data and orange to synthetic data.
Figure 6: Left: t-SNE visualization of ground truth and generated data on two univariate datasets (NonInv and FordA), using HyperTime with and without the Fourier-based loss (Eq. 3). Right: Standard deviation of the power spectra for the time series of the same two datasets. FordA shows considerably larger variability in the power spectra, which explains the difficulty HyperTime has in learning patterns from this data.

Finally, we analyze the importance of the Fourier-based loss from Equation 3 on the training of HyperTime. In Figure 6 (left) we display t-SNE visualizations of time series synthesized by HyperTime with and without the FFT loss during training, for two datasets (NonInv and FordA). In both cases, the addition of the loss results in an improved match between ground truth and generated data. In the case of FordA, however, the addition of this loss becomes crucial to guide the learning process. This is also reflected in the numerical evaluations in Table 4, which show a steep improvement in performance for the FordA dataset.

A likely explanation for the network's difficulty in learning meaningful patterns from this dataset is provided by the right plot in Figure 6, which shows the standard deviation of the power spectrum for both datasets as a function of the frequency. The difference between the distributions indicates that FordA is composed of spectra with much larger variability, while NonInv's spectra are considerably more clustered. The characteristics that make a dataset benefit most from the FFT loss warrant further investigation, with a particular focus on non-stationary time series.

                        NonInv   FordA
HyperTime + FFT loss
  MAE                   0.0053   0.0076
  F1 Score              0.9962   0.9987
HyperTime (no FFT)
  MAE                   0.0058   0.1647
  F1 Score              0.9960   0.0167
Table 4: Performance scores for data generated with HyperTime, with and without the Fourier-based loss $\mathcal{L}_{\mathrm{FFT}}$, for two datasets (NonInvasiveFetalECGThorax1 and FordA).

5 Conclusions

In this paper we explored the use of implicit neural representations for the encoding and analysis of both univariate and multivariate time series data. We compared multiple activation functions, and showed that periodic activation layers outperform traditional activations in terms of reconstruction accuracy and training speed. Additionally, we showed that INRs can be leveraged for data imputation, resulting in good reconstructions of both the original data and its power spectrum when compared with classical imputation methods. Finally, we presented HyperTime, a hypernetwork architecture for generating synthetic data that enforces not only an accurate reconstruction over the learned space of time series, but also the preservation of the shapes of the power spectra. For this purpose, we introduced a new Fourier-based loss into the training, and showed that for some datasets it is crucial for enabling the network to find meaningful patterns in the data. We leveraged the latent representations learned by the hypernetwork for the generation of new time-series data, and showed that our method compares favorably against current state-of-the-art methods for time-series augmentation. Besides reconstruction, imputation and synthesis, we believe that both INRs and HyperTime open the door to a large number of potential applications in time series analysis, such as upsampling, forecasting and privacy preservation.

Disclaimer

This paper was prepared for informational purposes in part by the Artificial Intelligence Research group of JPMorgan Chase & Co. and its affiliates (“JP Morgan”), and is not a product of the Research Department of JP Morgan. JP Morgan makes no representation or warranty whatsoever and disclaims all liability for the completeness, accuracy or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful.

References

  • [1] A. Alaa, A. J. Chan, and M. van der Schaar (2021) Generative time-series modeling with fourier flows. In International Conference on Learning Representations, Cited by: §1, §2, §4.3.
  • [2] A. Alqahtani, M. Ali, X. Xie, and M. W. Jones (2021) Deep time-series clustering: a review. Electronics 10 (23). External Links: Link, ISSN 2079-9292, Document Cited by: §1.
  • [3] S. A. Assefa, D. Dervovic, M. Mahfouz, R. E. Tillman, P. Reddy, and M. Veloso (2020) Generating synthetic data in finance: opportunities, challenges and pitfalls. In Proceedings of the First ACM International Conference on AI in Finance, ICAIF ’20, New York, NY, USA. External Links: ISBN 9781450375849, Link, Document Cited by: §1.
  • [4] A. Bagnall, J. Lines, A. Bostrom, J. Large, and E. Keogh (2017) The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances. Data Mining and Knowledge Discovery 31, pp. 606–660. Cited by: §4.1.
  • [5] A. Bagnall, J. Lines, W. Vickers, and E. Keogh The UEA & UCR time series classification repository. Note: www.timeseriesclassification.com, accessed 2022-05-10. Cited by: §4.1.
  • [6] D. Bellos, M. Basham, T. P. Pridmore, and A. P. French (2019) A convolutional neural network for fast upsampling of undersampled tomograms in x-ray CT time-series using a representative highly sampled tomogram. Journal of Synchrotron Radiation 26, pp. 839–853. Cited by: §1.
  • [7] N. Benbarka, T. Höfer, H. ul Moqeet Riaz, and A. Zell (2022) Seeing implicit neural representations as fourier series. 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 2283–2292. Cited by: §2.
  • [8] A. Brock, T. Lim, J.M. Ritchie, and N. Weston (2018) SMASH: one-shot model architecture search through hypernetworks. In International Conference on Learning Representations, External Links: Link Cited by: §2.
  • [9] W. Cao, D. Wang, J. Li, H. Zhou, L. Li, and Y. Li (2018) BRITS: bidirectional recurrent imputation for time series. In Advances in Neural Information Processing Systems, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Eds.), Vol. 31, pp. . External Links: Link Cited by: §1.
  • [10] Y. Chen, S. Liu, and X. Wang (2021) Learning continuous image representation with local implicit image function. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8628–8638. Cited by: §2.
  • [11] K. Choi, J. Yi, C. Park, and S. Yoon (2021) Deep learning for anomaly detection in time-series data: review, analysis, and guidelines. IEEE Access 9 (), pp. 120043–120065. External Links: Document Cited by: §1.
  • [12] A. Coletta, M. Prata, M. Conti, E. Mercanti, N. Bartolini, A. Moulin, S. Vyetrenko, and T. Balch (2021) Towards realistic market simulations: a generative adversarial networks approach. In Proceedings of the Second ACM International Conference on AI in Finance, ICAIF ’21, New York, NY, USA. External Links: ISBN 9781450391481, Link, Document Cited by: §1.
  • [13] A. Desai, C. Freeman, Z. Wang, and I. Beaver (2021) TimeVAE: a variational auto-encoder for multivariate time series generation. arXiv preprint arXiv:2111.08095. Cited by: §2.
  • [14] C. Fang and C. Wang (2020) Time series data imputation: a survey on deep learning approaches. ArXiv abs/2011.11347. Cited by: §1, §4.2.
  • [15] E. Fons, P. Dawson, X. Zeng, J. Keane, and A. Iosifidis (2021) Adaptive weighting scheme for automatic time-series data augmentation. External Links: 2102.08310 Cited by: §2.
  • [16] D. Ha, A. M. Dai, and Q. V. Le (2017) HyperNetworks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, External Links: Link Cited by: §2.
  • [17] K. Hundman, V. Constantinou, C. Laporte, I. Colwell, and T. Soderstrom (2018) Detecting spacecraft anomalies using LSTMs and nonparametric dynamic thresholding. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’18, New York, NY, USA, pp. 387–395. External Links: ISBN 9781450355520, Link, Document Cited by: §1.
  • [18] H. Ismail Fawaz, B. Lucas, G. Forestier, C. Pelletier, D. F. Schmidt, J. Weber, G. I. Webb, L. Idoumghar, P. Muller, and F. Petitjean (2020) InceptionTime: finding alexnet for time series classification. Data Mining and Knowledge Discovery. Cited by: §1.
  • [19] B. K. Iwana and S. Uchida (2021-07) An empirical survey of data augmentation for time series classification with neural networks. PLOS ONE 16 (7), pp. e0254841. External Links: ISSN 1932-6203, Link, Document Cited by: §2.
  • [20] J. Jordon, D. Jarrett, E. Saveliev, J. Yoon, P. Elbers, P. Thoral, A. Ercole, C. Zhang, D. Belgrave, and M. van der Schaar (2021-06–12 Dec) Hide-and-seek privacy challenge: synthetic data generation vs. patient re-identification. In Proceedings of the NeurIPS 2020 Competition and Demonstration Track, H. J. Escalante and K. Hofmann (Eds.), Proceedings of Machine Learning Research, Vol. 133, pp. 206–215. External Links: Link Cited by: §1.
  • [21] J. Jordon, J. Yoon, and M. van der Schaar (2019) PATE-gan: generating synthetic data with differential privacy guarantees. In ICLR, Cited by: §1.
  • [22] B. Lim and S. Zohren (2021) Time-series forecasting with deep learning: a survey. Philosophical Transactions of the Royal Society A. External Links: Document Cited by: §1.
  • [23] G. Littwin and L. Wolf (2019-10) Deep meta functionals for shape representation. In IEEE/CVF International Conference on Computer Vision (ICCV), pp. 1824–1833. External Links: Document Cited by: §2.
  • [24] Y. Liu (2018) Recurrent neural networks for multivariate time series with missing values. Scientific Reports 8 (1), pp. 6085. External Links: Document, ISBN 2045-2322, Link Cited by: §1.
  • [25] Y. Luo, X. Cai, Y. Zhang, J. Xu, and X. Yuan (2018) Multivariate time series imputation with generative adversarial networks. In Advances in Neural Information Processing Systems, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Eds.), Vol. 31. External Links: Link Cited by: §1.
  • [26] M. Lysaker (2006-01) Iterative image restoration combining total variation minimization and a second-order functional. International Journal of Computer Vision 66, pp. 5–18. External Links: Document Cited by: §4.2.
  • [27] Q. Ma, J. Zheng, S. Li, and G. W. Cottrell (2019) Learning representations for time series clustering. In Advances in Neural Information Processing Systems, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Eds.), Vol. 32. External Links: Link Cited by: §1.
  • [28] L. Mescheder, M. Oechsle, M. Niemeyer, S. Nowozin, and A. Geiger (2019) Occupancy networks: learning 3d reconstruction in function space. In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
  • [29] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng (2020) NeRF: representing scenes as neural radiance fields for view synthesis. In ECCV, Cited by: §1, §2, §2.
  • [30] C. Oh, S. Han, and J. Jeong (2020) Time-series data augmentation based on interpolation. Procedia Computer Science 175, pp. 64–71. Note: The 17th International Conference on Mobile Systems and Pervasive Computing (MobiSPC),The 15th International Conference on Future Networks and Communications (FNC),The 10th International Conference on Sustainable Energy Information Technology External Links: ISSN 1877-0509, Document, Link Cited by: §1.
  • [31] J. J. Park, P. Florence, J. Straub, R. Newcombe, and S. Lovegrove (2019-06) DeepSDF: learning continuous signed distance functions for shape representation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
  • [32] M. Perslev, M. Jensen, S. Darkner, P. J. Jennum, and C. Igel (2019) U-time: a fully convolutional network for time series segmentation applied to sleep staging. In Advances in Neural Information Processing Systems, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Eds.), Vol. 32. External Links: Link Cited by: §1.
  • [33] T. Rott Shaham, M. Gharbi, R. Zhang, E. Shechtman, and T. Michaeli (2021) Spatially-adaptive pixelwise networks for fast image translation. In Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
  • [34] J. R. Russin, R. O’Reilly, and Y. B. Bengio (2020) Deep learning needs a prefrontal cortex. In Bridging AI and Cognitive Science ICLR 2020 Workshop, Cited by: §2.
  • [35] A. A. Rusu, D. Rao, J. Sygnowski, O. Vinyals, R. Pascanu, S. Osindero, and R. Hadsell (2019) Meta-learning with latent embedding optimization. In International Conference on Learning Representations, External Links: Link Cited by: §2.
  • [36] V. Sitzmann, E. R. Chan, R. Tucker, N. Snavely, and G. Wetzstein (2020) MetaSDF: meta-learning signed distance functions. In Proc. NeurIPS, Cited by: §2, §2, §3.2.
  • [37] V. Sitzmann, J. N.P. Martel, A. W. Bergman, D. B. Lindell, and G. Wetzstein (2020) Implicit neural representations with periodic activation functions. In Proc. NeurIPS, Cited by: §1, §2, §2, §2, §3.1, §3.2.
  • [38] V. Sitzmann, M. Zollhöfer, and G. Wetzstein (2019) Scene representation networks: continuous 3d-structure-aware neural scene representations. In Advances in Neural Information Processing Systems, Cited by: §2, §2.
  • [39] I. Skorokhodov, S. Ignatyev, and M. Elhoseiny (2021-06) Adversarial generation of continuous images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10753–10764. Cited by: §2.
  • [40] K. O. Stanley, D. B. D’Ambrosio, and J. Gauci (2009-04) A Hypercube-Based Encoding for Evolving Large-Scale Neural Networks. Artificial Life 15 (2), pp. 185–212. External Links: ISSN 1064-5462, Document, Link Cited by: §2.
  • [41] A. Sztrajman, G. Rainer, T. Ritschel, and T. Weyrich (2021) Neural brdf representation and importance sampling. Computer Graphics Forum 40 (6), pp. 332–346. External Links: Document, Link, https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.14335 Cited by: §2, §2.
  • [42] M. Tancik, B. Mildenhall, T. Wang, D. Schmidt, P. P. Srinivasan, J. T. Barron, and R. Ng (2021) Learned initializations for optimizing coordinate-based neural representations. In CVPR, Cited by: §2.
  • [43] M. Tancik, P. P. Srinivasan, B. Mildenhall, S. Fridovich-Keil, N. Raghavan, U. Singhal, R. Ramamoorthi, J. T. Barron, and R. Ng (2020) Fourier features let networks learn high frequency functions in low dimensional domains. NeurIPS. Cited by: §2, §2.
  • [44] J. F. Torres, D. Hadjout, A. Sebaa, F. Martínez-Álvarez, and A. T. Lora (2021) Deep learning for time series forecasting: a survey. Big data. Cited by: §1.
  • [45] J. von Oswald, C. Henning, B. F. Grewe, and J. Sacramento (2020) Continual learning with hypernetworks. In International Conference on Learning Representations, External Links: Link Cited by: §2.
  • [46] M. Wiese, R. Knobloch, R. Korn, and P. Kretschmer (2020-04) Quant gans: deep generation of financial time series. Quantitative Finance, pp. 1–22. External Links: ISSN 1469-7696, Link, Document Cited by: §2.
  • [47] H. Xu, W. Chen, N. Zhao, Z. Li, J. Bu, Z. Li, Y. Liu, Y. Zhao, D. Pei, Y. Feng, J. Chen, Z. Wang, and H. Qiao (2018) Unsupervised anomaly detection via variational auto-encoder for seasonal kpis in web applications. In Proceedings of the 2018 World Wide Web Conference, WWW ’18, pp. 187–196. External Links: ISBN 9781450356398, Link, Document Cited by: §1.
  • [48] J. Yoon, D. Jarrett, and M. van der Schaar (2019) Time-series generative adversarial networks. In NeurIPS, Cited by: §1.
  • [49] J. Yoon, D. Jarrett, and M. van der Schaar (2019) Time-series generative adversarial networks. In NeurIPS, Cited by: §2, §4.3.
  • [50] L. Zeng, B. Zhou, M. Al-Rifai, and E. Kharlamov (2022) SegTime: precise time series segmentation without sliding window. External Links: Link Cited by: §1.
  • [51] C. Zhang, M. Ren, and R. Urtasun (2019) Graph hypernetworks for neural architecture search. In International Conference on Learning Representations, External Links: Link Cited by: §2.
  • [52] D. Zhao, S. Kobayashi, J. Sacramento, and J. von Oswald (2020) Meta-learning via hypernetworks. In 4th Workshop on Meta-Learning at NeurIPS 2020 (MetaLearn 2020), External Links: Document Cited by: §2.