1 Introduction
Modeling time series data has been a key topic of research for many years, constituting a crucial component of applications in a wide variety of areas such as climate modeling, medicine, biology, retail and finance Lim and Zohren (2021). Traditional methods for time series modeling have relied on parametric models informed by expert knowledge. However, the development of modern machine learning methods has provided purely data-driven techniques to learn temporal relationships. In particular, neural network-based methods have gained popularity in recent times, with applications on a wide range of tasks, such as time series classification
Ismail Fawaz et al. (2020), clustering Ma et al. (2019); Alqahtani et al. (2021), segmentation Perslev et al. (2019); Zeng et al. (2022), anomaly detection Choi et al. (2021); Xu et al. (2018); Hundman et al. (2018), upsampling Oh et al. (2020); Bellos et al. (2019), imputation Liu (2018); Luo et al. (2018); Cao et al. (2018), forecasting Lim and Zohren (2021); Torres et al. (2021) and synthesis Alaa et al. (2021); Yoon et al. (2019a); Jordon et al. (2019). In particular, the generation of time series data for augmentation has remained an open problem, and is currently gaining interest due to the large number of potential applications, such as in medical and financial datasets, where data cannot be shared, either for privacy reasons or due to proprietary restrictions Jordon et al. (2021, 2019); Assefa et al. (2020); Coletta et al. (2021).

In recent years, implicit neural representations (INRs) have gained popularity as an accurate and flexible method to parameterize signals, such as image, video, audio and 3D scene data Sitzmann et al. (2020b); Mildenhall et al. (2020). Conventional methods for data encoding often rely on discrete representations, such as data grids, which are limited by their spatial resolution and present inherent discretization artifacts. In contrast, implicit neural representations encode data in terms of continuous functional relationships between signals, and are thus decoupled from spatial resolution. In practical terms, INRs provide a new data representation framework that is resolution-independent, with many potential applications to time series data, where irregularly sampled and missing data are common occurrences Fang and Wang (2020). However, there are currently no works exploring the suitability of INRs for time series representation and analysis.
In this work, we propose an implicit neural representation for univariate and multivariate time series data. We compare the performance of different activation functions in terms of reconstruction accuracy and training convergence, and we formulate and compare different strategies for data imputation in time series, relying on INRs (Section 4.2). Finally, we combine these representations with a hypernetwork architecture, in order to learn a prior over the space of time series. The training of our hypernetwork takes into account the accurate reconstruction of both the time series signals and their respective power spectra. This motivates us to propose a Fourier-based loss that proves to be crucial in guiding the learning process. The advantage of employing such a Fourier-based loss is that it allows our hypernetwork to preserve all frequencies in the time series representation. In Section 4.3, we leverage the latent embeddings learned by the hypernetwork for the synthesis of new time series by interpolation, and show that our method performs competitively against recent state-of-the-art methods for time series augmentation.
2 Related Work
Implicit Neural Representations
INRs (or coordinate-based neural networks) have recently gained popularity in computer vision applications. The usual implementation of INRs consists of a fully-connected neural network (MLP) that maps coordinates (e.g., xyz-coordinates) to the corresponding values of the data, essentially encoding their functional relationship in the network. One of the main advantages of this approach for data representation is that the information is encoded in a continuous/grid-free representation that provides a built-in nonlinear interpolation of the data. This avoids the usual artifacts that arise from discretization, and has been shown to combine flexible and accurate data representation with high memory efficiency Sitzmann et al. (2020b); Tancik et al. (2020). While INRs have been shown to work on data from diverse sources, such as video, images and audio Sitzmann et al. (2020b); Chen et al. (2021); Rott Shaham et al. (2021), their recent popularity has been motivated by multiple applications in the representation of 3D scene data, such as 3D geometry Park et al. (2019); Mescheder et al. (2019); Sitzmann et al. (2020a, 2019) and object appearance Mildenhall et al. (2020); Sztrajman et al. (2021).

In early architectures, INRs showed a lack of accuracy in the encoding of high-frequency details of signals. Mildenhall et al. Mildenhall et al. (2020) proposed positional encodings to address this issue, and Tancik et al. Tancik et al. (2020) further explored them, showing that by using Fourier-based features in the input layer, the network is able to learn the full spectrum of frequencies from data. Concurrently, Sitzmann et al. Sitzmann et al. (2020b) tackled the encoding of high-frequency data by proposing the use of sinusoidal activation functions (SIREN: Sinusoidal Representation Networks), and Benbarka et al. Benbarka et al. (2022) showed the equivalence between Fourier features and single-layer SIRENs. Our INR architecture for time series data (Section 3.1) is based on the SIREN architecture by Sitzmann et al.
In Section 4.1 we compare the performance of different activation layers, in terms of reconstruction accuracy and training convergence speed, for both univariate and multivariate time series.
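As an illustration of the Fourier-feature input mapping discussed above, the following NumPy sketch maps a scalar time coordinate to a vector of sinusoidal features before it would enter an MLP. The Gaussian frequency matrix `B`, its scale, and the sizes are illustrative assumptions in the spirit of Tancik et al. (2020), not the configuration of any specific paper:

```python
import numpy as np

def fourier_features(t, B):
    """Map coordinates t (shape [N, 1]) to Fourier features
    [cos(2*pi*t B^T), sin(2*pi*t B^T)], doubling the feature count."""
    proj = 2.0 * np.pi * t @ B.T                  # [N, num_freqs]
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
B = rng.normal(scale=10.0, size=(16, 1))          # random frequency matrix (illustrative)
t = np.linspace(0.0, 1.0, 100)[:, None]           # time coordinates in [0, 1]
feats = fourier_features(t, B)                    # [100, 32] features fed to the MLP
```

The scale of `B` controls which frequencies the downstream network can fit easily, which is the same role played by the frequency factor in sinusoidal activations.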
Hypernetworks
A hypernetwork is a neural network architecture designed to predict the weight values of a secondary neural network, denominated a HypoNetwork Sitzmann et al. (2020a). The concept of a hypernetwork was formalized by Ha et al. Ha et al. (2017), drawing inspiration from methods in evolutionary computing Stanley et al. (2009). Moreover, while convolutional encoders have been likened to the function of the human visual system Skorokhodov et al. (2021), the analogy cannot be extended to convolutional decoders, and some authors have argued that hypernetworks much more closely match the behavior of the prefrontal cortex Russin et al. (2020). Hypernetworks have been praised for their expressivity, compression due to weight sharing, and fast inference times Skorokhodov et al. (2021). They have been leveraged for multiple applications, including few-shot learning Rusu et al. (2019); Zhao et al. (2020), continual learning von Oswald et al. (2020) and architecture search Zhang et al. (2019); Brock et al. (2018). Moreover, in the last two years some works have started to leverage hypernetworks for the training of INRs, enabling the learning of latent encodings of data while also maintaining the flexible and accurate reconstruction of signals provided by INRs. This approach has been implemented with different hypernetwork architectures to learn priors over image data Sitzmann et al. (2020b); Skorokhodov et al. (2021), 3D scene geometry Littwin and Wolf (2019); Sitzmann et al. (2019, 2020a) and material appearance Sztrajman et al. (2021). Tancik et al. Tancik et al. (2021) leverage hypernetworks to speed up the training of INRs by providing learned initializations of the network weights. Sitzmann et al. Sitzmann et al. (2020b) combine a set encoder with a hypernetwork decoder to learn a prior over INRs representing image data, and apply it to image inpainting. Our hypernetwork architecture from Section 3 is similar to Sitzmann et al.'s; however, we learn a prior over the space of time series and leverage it for new data synthesis through interpolation of the learned embeddings. Furthermore, our architecture implements a Fourier-based loss, which we show is crucial for the accurate reconstruction of time series datasets (Section 4.3).

Time Series Generation
Realistic time series generation has been previously studied in the literature using generative adversarial networks (GANs). In the TimeGAN architecture Yoon et al. (2019b), realistic generation of temporal patterns was achieved by jointly optimizing with both supervised and adversarial objectives to learn an embedding space. QuantGAN Wiese et al. (2020) consists of generator and discriminator functions represented by temporal convolutional networks, which allows it to synthesize long-range dependencies such as the presence of volatility clusters that are characteristic of financial time series. TimeVAE Desai et al. (2021) was recently proposed as a variational autoencoder alternative to GAN-based time series generation. GANs and VAEs are typically used for creating statistical replicas of the training data, and not the distributionally new scenarios needed for data augmentation. More recently, Alaa et al. Alaa et al. (2021) presented Fourier Flows, a normalizing flow model for time series data that leverages the frequency-domain representation, currently considered, together with TimeGAN, as the state of the art for time series augmentation.
Data augmentation is well established in computer vision tasks due to the simplicity of label-preserving geometric image transformation techniques, but it is still not widely used for time series, with some early work being discussed in the literature Iwana and Uchida (2021). For example, simple augmentation techniques applied to financial price time series, such as adding noise or time warping, were shown to improve the quality of next-day price prediction models Fons et al. (2021).
3 Formulation
In this section, we describe the network architectures that we use to encode time series data (Subsection 3.1), and the hypernetwork architecture (HyperTime) leveraged for prior learning and new data generation (Subsection 3.2).
3.1 Time Series Representation
In Figure 1 we present a diagram of the INR used for univariate time series. The network is composed of fully-connected layers with sine activations (SIREN Sitzmann et al. (2020b)):
$$\phi_i(\mathbf{x}_i) = \sin\!\left(\omega_0\, \mathbf{W}_i \mathbf{x}_i + \mathbf{b}_i\right) \qquad (1)$$
where $\phi_i$ corresponds to the $i$-th layer of the network. A general factor $\omega_0$ multiplying the network weights determines the order of magnitude of the frequencies that will be used to encode the signal. The input and output of the INR are unidimensional, and correspond to the time coordinate $t$ and the time series evaluation $y(t)$. Training of the network is done in a supervised manner with an MSE loss on a GeForce GTX 1080 Ti GPU. After training, the network encodes a continuous representation of the functional relationship $y(t)$ for a single time series.
The architecture from Figure 1 can be modified to encode multivariate time series by simply increasing the number of neurons in the output layer to match the number of channels of the signal. Due to weight-sharing, this adds a potential for data compression of the time series. In addition, the simultaneous encoding of multiple correlated channels can be leveraged for channel imputation, as we will show in Section 4.2.
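A minimal NumPy sketch of such a SIREN-style INR for a univariate time series is given below. The layer sizes, $\omega_0 = 30$ and the uniform initialization bounds follow the general conventions of Sitzmann et al. (2020b), but are assumptions here rather than the paper's exact configuration:

```python
import numpy as np

def siren_init(rng, d_in, d_out, omega_0=30.0, first=False):
    """SIREN-style uniform weight initialization (Sitzmann et al., 2020b)."""
    bound = 1.0 / d_in if first else np.sqrt(6.0 / d_in) / omega_0
    W = rng.uniform(-bound, bound, size=(d_out, d_in))
    b = rng.uniform(-bound, bound, size=(d_out,))
    return W, b

def siren_forward(params, t, omega_0=30.0):
    """Evaluate the INR at time coordinates t (shape [N, 1]).
    All layers but the last apply sin(omega_0 * (W x + b))."""
    x = t
    for W, b in params[:-1]:
        x = np.sin(omega_0 * (x @ W.T + b))
    W, b = params[-1]
    return x @ W.T + b                            # linear output layer

rng = np.random.default_rng(0)
dims = [1, 64, 64, 1]                             # illustrative layer sizes (univariate)
params = [siren_init(rng, dims[i], dims[i + 1], first=(i == 0))
          for i in range(len(dims) - 1)]
t = np.linspace(-1.0, 1.0, 200)[:, None]          # normalized time coordinates
y_hat = siren_forward(params, t)                  # [200, 1] reconstructed signal
```

For the multivariate case described above, only the final layer width changes (e.g., `dims = [1, 64, 64, C]` for `C` channels); in practice the weights are fit by gradient descent on the MSE between `y_hat` and the target series.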
3.2 Time Series Generation with HyperTime
In Figure 2 we display a diagram of the HyperTime architecture, which allows us to leverage INRs to learn priors over the space of time series. The Set Encoder (green network), composed of SIREN layers Sitzmann et al. (2020b), takes as input a pair of values, corresponding to the time coordinate $t$ and the time series signal $y(t)$. Each pair of input values is thus encoded into an embedding $\mathbf{z}$ and fed to the HyperNet decoder (red network), composed of fully-connected layers with ReLU activations (MLP). The output of the HyperNet is a one-dimensional embedding that contains the network weights of an INR which encodes the time series data from the input. The INR architecture used within HyperTime is the same as described in the previous section, and illustrated in Figure 1. Following previous works Sitzmann et al. (2020a), in order to avoid ambiguities we refer to these predicted INRs as HypoNets.

During the training of HyperTime, we use the weights predicted by the HyperNet decoder to instantiate a HypoNet and evaluate it on the input time coordinate $t$, to produce the predicted time series value $\hat{y}(t)$. The entire chain of operations is implemented within the same differentiable pipeline, and hence the training loss can be computed as the difference between the ground truth time series signal $y(t)$ and the value $\hat{y}(t)$ predicted by the HypoNet.
After the training of HyperTime, the Set Encoder is able to generate latent embeddings for entire time series. In Section 4.3, we show that these embeddings can be interpolated to synthesize new time series signals from known ones, which can be leveraged for data augmentation (see additional material for a pseudocode of the procedure).
Loss
The training of HyperTime is done by optimizing the following loss, which contains an MSE reconstruction term and two regularization terms, $\mathcal{L}_{\text{weights}}$ and $\mathcal{L}_{\text{latent}}$, for the network weights and the latent embeddings, respectively:
$$\mathcal{L} = \frac{1}{N}\sum_{t}\left(y(t) - \hat{y}(t)\right)^2 + \lambda_1\, \mathcal{L}_{\text{weights}} + \lambda_2\, \mathcal{L}_{\text{latent}} \qquad (2)$$
In addition, we introduce a Fourier-based loss $\mathcal{L}_{\text{FFT}}$ that focuses on the accurate reconstruction of the power spectrum of the ground truth signal (see Supplement for more details):
$$\mathcal{L}_{\text{FFT}} = \frac{1}{N}\sum_{k}\left|\, |\mathcal{F}(y)_k| - |\mathcal{F}(\hat{y})_k| \,\right| \qquad (3)$$

where $\mathcal{F}$ denotes the Fourier transform.
In Section 4.3, we show that $\mathcal{L}_{\text{FFT}}$ is crucial for the accurate reconstruction of the time series signals.
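A plausible NumPy sketch of such a Fourier-based loss is shown below. The exact form used here (a mean absolute difference between magnitude spectra, consistent with the FFTE metric of Section 4.2) is an assumption; the paper's precise formulation is in the Supplement:

```python
import numpy as np

def fft_loss(y_true, y_pred):
    """Mean absolute difference between the magnitude spectra of the
    ground-truth and predicted signals (one plausible form of L_FFT)."""
    spec_true = np.abs(np.fft.rfft(y_true))
    spec_pred = np.abs(np.fft.rfft(y_pred))
    return float(np.mean(np.abs(spec_true - spec_pred)))

t = np.linspace(0.0, 1.0, 256, endpoint=False)
y = np.sin(2 * np.pi * 5 * t)                     # toy ground-truth signal
y_noisy = y + 0.1 * np.random.default_rng(0).normal(size=t.shape)
loss = fft_loss(y, y_noisy)                       # positive: spectra differ
```

Note that comparing magnitude spectra makes this term insensitive to phase, which is one reason it complements, rather than replaces, the pointwise MSE term in Equation 2.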
4 Experiments
4.1 Reconstruction
We start by showing that encoding time series using SIRENs leads to a better reconstruction error than using implicit networks with other activations. We use univariate and multivariate time series datasets from the UCR archive Bagnall et al. (2017) (the datasets can be downloaded from the project's website: www.timeseriesclassification.com Bagnall et al.). We selected datasets with different characteristics: either short or long time series, or, in the case of the multivariate datasets, many features (in some cases, more features than time steps). We sample 300 time series (or the maximum number available) from each dataset, train a single SIREN for each time series and calculate the reconstruction error. For comparison, we train implicit networks using ReLU, Tanh and Sigmoid activations. As a sample case, we show in Figure 3 the losses (left) and reconstruction plots (right) for one of the univariate datasets (NonInvasiveFetalECGThorax1). Here we observe that sine activations converge much faster, and to lower error values, than other activation functions (for the full set of loss and reconstruction plots, along with a description of each dataset, see the additional materials). A summary of results can be found in Table 1, where we observe that the MSE is at least an order of magnitude lower for sine activations with respect to other activation layers.
Sine  ReLU  Tanh  Sigmoid

Univariate
Crop  5.1e-06  5.4e-03  2.8e-02  5.1e-01
NonInvasiveFetalECGThorax1  2.3e-05  2.8e-02  5.7e-02  8.1e-02
PhalangesOutlinesCorrect  7.5e-06  1.9e-02  1.4e-01  3.3e-01
FordA  9.2e-06  1.4e-01  1.5e-01  1.5e-01
Multivariate
Cricket  1.6e-04  4.2e-03  5.1e-03  1.6e-02
DuckDuckGeese  9.1e-05  8.0e-04  8.7e-04  9.1e-04
MotorImagery  1.7e-03  1.1e-02  1.1e-02  1.8e-02
PhonemeSpectra  1.1e-06  6.0e-03  1.6e-02  1.8e-02
4.2 Imputation
Real-world time series data tend to suffer from missing values, in some cases achieving missing rates of up to , making the data difficult to use Fang and Wang (2020). Motivated by this, we first use a SIREN trained with the available data points to infer the missing values. Besides fitting a SIREN to the time series, we include a simple regularization term based on a total variation (TV) prior Lysaker (2006). The TV prior samples random points from the available points of the time series (those that do not have missing values), and the regularization consists of the L1 norm of the gradient.
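The TV prior described above can be sketched in NumPy as follows. In practice the gradient of a SIREN is obtained by automatic differentiation; the finite-difference estimate, the sample count and all names here are illustrative assumptions:

```python
import numpy as np

def tv_regularizer(model, t_obs, n_samples=64, eps=1e-3, rng=None):
    """Total-variation prior: L1 norm of the model's time derivative,
    estimated by finite differences at points sampled from the observed
    (non-missing) time coordinates. `model` maps t -> y."""
    rng = rng or np.random.default_rng(0)
    t = rng.choice(t_obs, size=min(n_samples, len(t_obs)), replace=False)
    grad = (model(t + eps) - model(t)) / eps      # finite-difference dy/dt
    return float(np.mean(np.abs(grad)))

t_obs = np.linspace(0.0, 1.0, 100)                # available (non-missing) points
smooth = lambda t: np.full_like(t, 0.5)           # constant signal: zero TV penalty
wiggly = lambda t: np.sin(40 * np.pi * t)         # high-frequency signal: large TV penalty
```

During imputation this term would be added to the SIREN's reconstruction loss with a small weight, discouraging spurious oscillations in the gaps between observed points.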
We compare this approach with common time series imputation methods: mean imputation, which fills missing values with the average; kNN; cubic spline; and linear interpolation. We are not solely interested in the reconstruction error, but also in the accurate and smooth reproduction of the Fourier spectrum composition of the original time series. Therefore, we evaluate both the MSE of the time series reconstruction, and the MAE between the Fourier spectra of the ground truth and reconstructed signals, which we name the Fourier Error (FFTE).
Table 2 shows the comparison of SIREN with and without the prior against the classical methods, using different ratios of missing values. We can see that the reconstruction error from baseline methods tends to be low, although when we compare the spectrum through FFTE, the lowest errors are achieved by SIREN, except at a fraction of missing values of , where in small time series such as Crop or Phalanges, that have fewer than 100 time steps, both reconstruction error and FFTE are poor. With regard to adding a prior over the gradient, we can see that for very short time series such as Crop and Phalanges (which are 46 and 80 time steps long, respectively), the prior improves the reconstruction error when compared to SIREN without a prior, and this is also the case for FFTE. Figure 4 shows the comparison of SIREN, SIREN plus TV prior and Linear for a randomly selected time series from the NonInvasiveFetalECGThorax1 (NonInv) dataset with of the data points missing (the plot is zoomed in on a segment of 250 timesteps to highlight the imputation characteristics). We can see that the linear imputation is not smooth, so it is not a good representation of the series, while SIREN is smooth but tends to have higher deviations from the series. SIREN plus the TV prior is a good compromise between maintaining the smoothness of the series while also deviating less from the ground truth.
SIREN  SIREN_TV  Mean  kNN  Cubic Spline  Linear

MSE  FFTE  MSE  FFTE  MSE  FFTE  MSE  FFTE  MSE  FFTE  MSE  FFTE
Crop  0.0  5.5e-07  0.00  1.7e-06  0.01
0.1  6.5e-03  0.43  1.2e-03  0.18  1.7e-02  3.01  1.7e-02  3.03  6.7e-07  2.25  2.2e-06  2.25
0.5  1.2e-01  1.80  1.3e-02  0.63  1.7e-02  3.02  1.6e-02  2.97  8.3e-05  2.25  7.6e-05  2.25
0.7  2.9e-01  2.52  3.8e-02  0.95  1.7e-02  3.03  1.7e-02  3.03  9.5e-03  2.25  8.6e-04  2.24
0.9  5.4e-01  2.74  3.5e-01  2.02  1.7e-02  3.02  1.7e-02  3.00  1.1e+02  2.49  1.6e-02  2.24
NonInv  0.0  1.5e-06  0.02  3.5e-06  0.04
0.1  2.1e-06  0.03  4.7e-06  0.04  1.7e-02  3.00  1.6e-02  2.97  3.5e-07  2.25  2.2e-06  2.25
0.5  9.2e-05  0.15  7.6e-05  0.14  1.7e-02  3.00  1.7e-02  3.02  1.1e-04  2.25  1.1e-04  2.25
0.7  1.2e-03  0.46  1.3e-03  0.45  1.7e-02  2.99  1.7e-02  2.99  1.7e-02  2.25  6.9e-04  2.24
0.9  1.5e-02  1.40  1.6e-02  1.55  1.6e-02  2.98  1.6e-02  2.97  2.8e-01  2.27  1.8e-02  2.24
Phalanges  0.0  4.3e-07  0.00  1.7e-06  0.01
0.1  9.0e-04  0.22  7.6e-04  0.20  1.7e-02  3.00  1.7e-02  3.05  3.4e-07  2.25  1.7e-06  2.25
0.5  2.8e-02  1.12  1.1e-02  0.74  1.7e-02  3.00  1.7e-02  3.04  3.5e-05  2.25  1.1e-04  2.25
0.7  1.1e-01  1.94  4.3e-02  1.27  1.6e-02  2.99  1.7e-02  2.99  2.3e-03  2.25  6.2e-04  2.24
0.9  3.3e-01  2.75  2.2e-01  2.26  1.6e-02  2.99  1.7e-02  2.99  7.1e-02  2.26  1.4e-02  2.24
FordA  0.0  2.1e-06  0.02  4.1e-06  0.03
0.1  7.2e-06  0.04  3.3e-05  0.09  1.7e-02  3.00  1.7e-02  3.02  5.3e-07  2.25  1.7e-06  2.25
0.5  2.0e-03  0.54  5.1e-03  0.97  1.7e-02  3.02  1.7e-02  3.01  9.4e-05  2.25  8.4e-05  2.25
0.7  1.9e-02  1.57  2.9e-02  2.06  1.6e-02  2.96  1.6e-02  2.97  3.4e-03  2.25  6.4e-04  2.24
0.9  1.3e-01  3.52  1.5e-01  3.78  1.7e-02  3.01  1.7e-02  2.99  1.2e-01  2.26  1.6e-02  2.24
4.3 Time Series Generation
To evaluate the utility of learning a prior over the space of implicit functions, we use the Set Encoder network and the hypernetwork to generate new time series. We do so by projecting time series into the latent vector of the HyperTime network and interpolating the latent vectors. This is similar to training an autoencoder and interpolating the latent space, but the output of the decoder of HyperTime is the weights of the SIREN networks. We follow the experimental setup proposed in Alaa et al. (2021) for the evaluation, where the performance of the synthetic data is evaluated using a predictive score (MAE) that corresponds to the prediction accuracy of an off-the-shelf neural network trained on the synthetic data and tested on the real data. Additionally, to measure the quality of the synthetic data, we use the precision and recall averaged over all time steps, which are then combined into a single F-score. We use the same datasets as before, and we add two datasets that were used in Fourier Flows Alaa et al. (2021) and TimeGAN Yoon et al. (2019b): Google stocks data and UCI Energy data. We compare our HyperTime model with generating data using PCA, and with Fourier Flows and TimeGAN, two state-of-the-art methods for time series generation. Table 3 shows the performance scores for all models and datasets. Additionally, we visualize the generated samples using t-SNE plots (Figure 5), where we can see that the generated data from HyperTime exhibits the same patterns as the original data. In the case of Fourier Flows, on the UCR datasets we see that NonInv and Phalanges do not show a good agreement.

The synthesis of time series via principal component analysis is performed in a similar fashion to our HyperTime generation pipeline. We apply PCA to generate a decomposition of time series into a basis of principal components. The coefficients of these components constitute a latent representation for each time series of the dataset, and we can interpolate between embeddings of known time series to synthesize new ones. The main limitation of this procedure, besides its linearity, is that it can only be applied to datasets of equally sampled time series.

Crop  NonInv  Phalanges  Energy  Stock

PCA  
MAE  0.050  0.019  0.050  0.007  0.110 
F1 Score  0.999  0.999  0.999  0.998  0.999 
HyperTime (Ours)  
MAE  0.040  0.005  0.026  0.058  0.013 
F1 Score  0.999  0.996  0.998  0.999  0.995 
TimeGAN  
MAE  0.048  0.028  0.108  0.056  0.173 
F1 Score  0.831  0.914  0.960  0.479  0.938 
Fourier Flows  
MAE  0.040  0.018  0.056  0.029  0.008 
F1 Score  0.991  0.990  0.992  0.945  0.992 
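The PCA-based synthesis baseline described above can be sketched in NumPy as follows. The number of components, the interpolation weight and the toy dataset are illustrative assumptions:

```python
import numpy as np

def pca_fit(X, k):
    """Decompose equally sampled series X (shape [n_series, T]) into k
    principal components; returns (mean, components, coefficients)."""
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    comps = Vt[:k]                                # [k, T] principal-component basis
    coeffs = (X - mean) @ comps.T                 # [n_series, k] latent representation
    return mean, comps, coeffs

def pca_interpolate(mean, comps, c_a, c_b, alpha=0.5):
    """Synthesize a new series by interpolating two latent coefficient vectors."""
    c_new = (1.0 - alpha) * c_a + alpha * c_b
    return mean + c_new @ comps

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 120)
X = np.stack([np.sin(2 * np.pi * (3 + i % 4) * t) for i in range(30)])  # toy dataset
mean, comps, coeffs = pca_fit(X, k=8)
x_new = pca_interpolate(mean, comps, coeffs[0], coeffs[1])              # synthetic series
```

HyperTime follows the same interpolate-in-latent-space recipe, but with a nonlinear decoder (the HyperNet) and without requiring the series to be equally sampled.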
Finally, we analyze the importance of the Fourier-based loss from Equation 3 on the training of HyperTime. In Figure 6 (left) we display t-SNE visualizations of time series synthesized by HyperTime with and without the use of the FFT loss during training, for two datasets (NonInv and FordA). In both cases, the addition of the loss results in an improved matching between ground truth and generated data. However, in the case of FordA, the addition of this loss becomes crucial to guide the learning process. This is also reflected in the numerical evaluations from Table 4, which shows steep improvements in performance for the FordA dataset.
A likely explanation for the difficulty of the network in learning meaningful patterns from this dataset is provided by the right plot in Figure 6. Here we show the standard deviation of the power spectrum for both datasets, as a function of the frequency. The difference in the distributions indicates that FordA is composed of spectra that present larger variability, while NonInv's spectra are considerably more clustered. The characteristics of the datasets that benefit the most from the loss should be further investigated, especially focusing on non-stationary time series.
NonInv  FordA  

HyperTime + FFT loss  
MAE  0.0053  0.0076 
F1 Score  0.9962  0.9987 
HyperTime (no FFT)  
MAE  0.0058  0.1647 
F1 Score  0.9960  0.0167 
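The per-frequency spectral variability statistic used in the analysis above can be sketched in NumPy as follows. The two toy datasets merely illustrate the contrast between "clustered" spectra (NonInv-like) and highly variable spectra (FordA-like):

```python
import numpy as np

def spectrum_std(X):
    """Per-frequency standard deviation of the power spectrum across a
    dataset of equally sampled series X (shape [n_series, T])."""
    power = np.abs(np.fft.rfft(X, axis=1)) ** 2   # [n_series, T//2 + 1]
    return power.std(axis=0)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 128, endpoint=False)
# clustered spectra: every series shares one frequency (phase differs)
clustered = np.stack([np.sin(2 * np.pi * 5 * t + rng.uniform(0, 2 * np.pi))
                      for _ in range(50)])
# variable spectra: each series has its own random frequency
variable = np.stack([np.sin(2 * np.pi * rng.integers(2, 40) * t)
                     for _ in range(50)])
```

A dataset like `variable` concentrates its power in different bins from series to series, producing a large per-bin standard deviation, which is the regime where the FFT loss matters most.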
5 Conclusions
In this paper we explored the use of implicit neural representations for the encoding and analysis of both univariate and multivariate time series data. We compared multiple activation functions, and showed that periodic activation layers outperform traditional activations in terms of reconstruction accuracy and training speed. Additionally, we showed that INRs can be leveraged for data imputation, resulting in good reconstructions of both the original data and its power spectrum when compared with classical imputation methods. Finally, we presented HyperTime, a hypernetwork architecture to generate synthetic data, which enforces not only an accurate reconstruction over the learned space of time series, but also the preservation of the shapes of the power spectra. For this purpose, we introduced a new Fourier-based loss into the training, and showed that for some datasets it becomes crucial to enable the network to find meaningful patterns in the data. We leveraged the latent representations learned by the hypernetwork for the generation of new time series data, and compared favorably against current state-of-the-art methods for time series augmentation. Besides reconstruction, imputation and synthesis, we believe that both INRs and HyperTime open the door to a large number of potential applications in time series, such as upsampling, forecasting and privacy preservation.
Disclaimer
This paper was prepared for informational purposes in part by the Artificial Intelligence Research group of JPMorgan Chase & Co. and its affiliates ("JP Morgan"), and is not a product of the Research Department of JP Morgan. JP Morgan makes no representation and warranty whatsoever and disclaims all liability, for the completeness, accuracy or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful.
References
[1] (2021) Generative time-series modeling with Fourier flows. In International Conference on Learning Representations.
[2] (2021) Deep time-series clustering: a review. Electronics 10 (23).
[3] (2020) Generating synthetic data in finance: opportunities, challenges and pitfalls. In Proceedings of the First ACM International Conference on AI in Finance, ICAIF '20, New York, NY, USA.
[4] (2017) The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances. Data Mining and Knowledge Discovery 31, pp. 606–660.
[5] The UEA & UCR time series classification repository. www.timeseriesclassification.com. Accessed: 2022-05-10.
[6] (2019) A convolutional neural network for fast upsampling of undersampled tomograms in X-ray CT time-series using a representative highly sampled tomogram. Journal of Synchrotron Radiation 26, pp. 839–853.
[7] (2022) Seeing implicit neural representations as Fourier series. In 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 2283–2292.
[8] (2018) SMASH: one-shot model architecture search through hypernetworks. In International Conference on Learning Representations.
[9] (2018) BRITS: bidirectional recurrent imputation for time series. In Advances in Neural Information Processing Systems, Vol. 31.
[10] (2021) Learning continuous image representation with local implicit image function. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8628–8638.
[11] (2021) Deep learning for anomaly detection in time-series data: review, analysis, and guidelines. IEEE Access 9, pp. 120043–120065.
[12] (2021) Towards realistic market simulations: a generative adversarial networks approach. In Proceedings of the Second ACM International Conference on AI in Finance, ICAIF '21, New York, NY, USA.
[13] (2021) TimeVAE: a variational autoencoder for multivariate time series generation. arXiv preprint arXiv:2111.08095.
[14] (2020) Time series data imputation: a survey on deep learning approaches. arXiv preprint arXiv:2011.11347.
[15] (2021) Adaptive weighting scheme for automatic time-series data augmentation. arXiv preprint arXiv:2102.08310.
[16] (2017) HyperNetworks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France.
[17] (2018) Detecting spacecraft anomalies using LSTMs and nonparametric dynamic thresholding. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '18, pp. 387–395.
[18] (2020) InceptionTime: finding AlexNet for time series classification. Data Mining and Knowledge Discovery.
[19] (2021) An empirical survey of data augmentation for time series classification with neural networks. PLOS ONE 16 (7), pp. e0254841.
[20] (2021) Hide-and-seek privacy challenge: synthetic data generation vs. patient re-identification. In Proceedings of the NeurIPS 2020 Competition and Demonstration Track, Proceedings of Machine Learning Research, Vol. 133, pp. 206–215.
[21] (2019) PATE-GAN: generating synthetic data with differential privacy guarantees. In ICLR.
[22] (2021) Time-series forecasting with deep learning: a survey. Philosophical Transactions of the Royal Society A.
[23] (2019) Deep meta functionals for shape representation. In IEEE/CVF International Conference on Computer Vision (ICCV), pp. 1824–1833.
[24] (2018) Recurrent neural networks for multivariate time series with missing values. Scientific Reports 8 (1), pp. 6085.
[25] (2018) Multivariate time series imputation with generative adversarial networks. In Advances in Neural Information Processing Systems, Vol. 31.
[26] (2006) Iterative image restoration combining total variation minimization and a second-order functional. International Journal of Computer Vision 66, pp. 5–18.
[27] (2019) Learning representations for time series clustering. In Advances in Neural Information Processing Systems, Vol. 32.
[28] (2019) Occupancy networks: learning 3D reconstruction in function space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[29] (2020) NeRF: representing scenes as neural radiance fields for view synthesis. In ECCV.
[30] (2020) Time-series data augmentation based on interpolation. Procedia Computer Science 175, pp. 64–71.
[31] (2019) DeepSDF: learning continuous signed distance functions for shape representation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[32] (2019) U-Time: a fully convolutional network for time series segmentation applied to sleep staging. In Advances in Neural Information Processing Systems, Vol. 32.
[33] (2021) Spatially-adaptive pixelwise networks for fast image translation. In Computer Vision and Pattern Recognition (CVPR).
[34] (2020) Deep learning needs a prefrontal cortex. In Bridging AI and Cognitive Science, ICLR 2020 Workshop.
[35] (2019) Meta-learning with latent embedding optimization. In International Conference on Learning Representations.
[36] (2020) MetaSDF: meta-learning signed distance functions. In Proc. NeurIPS.
[37] (2020) Implicit neural representations with periodic activation functions. In Proc. NeurIPS.
[38] (2019) Scene representation networks: continuous 3D-structure-aware neural scene representations. In Advances in Neural Information Processing Systems.
[39] (2021) Adversarial generation of continuous images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10753–10764.
[40] (2009) A hypercube-based encoding for evolving large-scale neural networks. Artificial Life 15 (2), pp. 185–212.
[41] (2021) Neural BRDF representation and importance sampling. Computer Graphics Forum 40 (6), pp. 332–346.
[42] (2021) Learned initializations for optimizing coordinate-based neural representations. In CVPR.
[43] (2020) Fourier features let networks learn high frequency functions in low dimensional domains. In NeurIPS.
[44] (2021) Deep learning for time series forecasting: a survey. Big Data.
[45] (2020) Continual learning with hypernetworks. In International Conference on Learning Representations.
[46] (2020) Quant GANs: deep generation of financial time series. Quantitative Finance, pp. 1–22.
[47] (2018) Unsupervised anomaly detection via variational auto-encoder for seasonal KPIs in web applications. In Proceedings of the 2018 World Wide Web Conference, WWW '18, pp. 187–196.
[48] (2019) Time-series generative adversarial networks. In NeurIPS.
[49] (2019) Time-series generative adversarial networks. In NeurIPS.
[50] (2022) SegTime: precise time series segmentation without sliding window.
[51] (2019) Graph hypernetworks for neural architecture search. In International Conference on Learning Representations.
[52] (2020) Meta-learning via hypernetworks. In 4th Workshop on Meta-Learning at NeurIPS 2020 (MetaLearn 2020).