Recurrent Neural Processes

06/13/2019 ∙ by Timon Willi, et al. ∙ NNAISENSE

We extend Neural Processes (NPs) to sequential data through Recurrent NPs or RNPs, a family of conditional state space models. RNPs can learn dynamical patterns from sequential data and deal with non-stationarity. Given time series observed on fast real-world time scales but containing slow long-term variabilities, RNPs may derive appropriate slow latent time scales. They do so in an efficient manner by establishing conditional independence among subsequences of the time series. Our theoretically grounded framework for stochastic processes expands the applicability of NPs while retaining their benefits of flexibility, uncertainty estimation and favourable runtime with respect to Gaussian Processes. We demonstrate that state spaces learned by RNPs benefit predictive performance on real-world time-series data and nonlinear system identification, even in the case of limited data availability.


1 Introduction

Given a time series, how can we capture its generative process? The problem of learning a deep generative state space model (DGSSM) is a major challenge of machine learning.

The generative process is usually assumed to consist of a set of generating variables and their relations as well as their temporal dynamics. Various scientific communities have proposed frameworks to model these components. Classic approaches include Hidden Markov Models and Recurrent Neural Networks (RNNs). The latter have been combined with Variational Autoencoders (VAEs), which yield high-level latent random variables that add extra variability to the RNN's internal state (Chung et al., 2015; Bayer and Osendorfer, 2014). Another popular DGSSM approach is given by non-parametric state space models using Gaussian Processes (GPs) to model the hidden state space and its dynamics (Nickisch et al., 2018; Doerr et al., 2018).

The surge of neural latent variable models for performing inference on stochastic processes is a recent development in Deep Learning. Neural Processes (NPs) (Garnelo et al., 2018a) try to combine the best of Neural Networks (NNs) and GPs. A GP models the uncertainty of its predictions and is highly flexible at test time, as it only requires a few context points for meaningful inference. However, GPs are computationally expensive at test time. A deep NN, on the other hand, does not estimate the uncertainty of its predictions and is inflexible at test time, as it is not easily updated after training. The main advantage of NNs over GPs is their inference speed, as they only require a forward pass. To address the trade-off between flexibility and computation at test time, NPs increase flexibility by moving some computation from training time to test time, estimate uncertainty, and model distributions over functions, while still achieving $\mathcal{O}(n+m)$ runtime instead of the $\mathcal{O}((n+m)^3)$ of GPs, where $n$ is the number of context points and $m$ the number of target points (Garnelo et al., 2018b).

Our novel Recurrent NPs (RNPs) transfer benefits of NPs to deep generative state space models. RNPs are able to deal with non-stationary data by conditioning predictions on induced latent stochastic processes.

For example, consider a time series of 10 years of hourly temperature measurements. Daily cycles (warm days, cold nights) are overlaid with seasonal cycles (winter, summer). RNPs can derive slow latent time scales (reflecting winter and summer) despite fast real-world time scales (reflecting hourly observations). RNPs retain the benefits of (a) estimating uncertainty, (b) modelling distributions over functions, and (c) high flexibility at test time without increasing runtime. Our framework enables efficient inference of the latent space by establishing conditional independence among subsequences of a given time series. We apply RNPs to various real-world one-step-look-ahead prediction tasks, illustrating their performance and benefits.

2 Background

To formally define a stochastic process, we require the following components. Given a random function $F : \mathcal{X} \rightarrow \mathcal{Y}$ and any finite sequence $x_{1:n} = (x_1, \ldots, x_n)$, where $x_i \in \mathcal{X}$, we define $F(x_{1:n}) = (F(x_1), \ldots, F(x_n))$ as the function values of $x_{1:n}$. The joint probability over the function values $y_{1:n}$ is defined as $\rho_{x_{1:n}}(y_{1:n}) = \int p(f)\, p(y_{1:n} \mid f, x_{1:n})\, df$, where $f$ is some realization of the random function $F$. Since we want the joint probabilities to be proper marginal distributions of the stochastic process, they have to fulfill the following conditions: (i) Exchangeability: the joint probability is invariant to permutations of the sequence; (ii) Consistency: the distribution over a subsequence stays the same even after marginalizing out the rest of the full sequence. More importantly, the Kolmogorov existence theorem states that a collection of joint distributions satisfying these conditions will define a stochastic process. Exchangeability and consistency require us to define $p(y_{1:n} \mid f, x_{1:n}) = \prod_{i=1}^{n} \mathcal{N}(y_i \mid f(x_i), \sigma^2)$. This implies that the observations $y_i$ become i.i.d. conditioned upon $f$. This result is a consequence of our requirement of exchangeability and de Finetti's theorem (de Finetti, 1936), stating that exchangeable observations become conditionally independent given some latent variable, which in this case is the stochastic process $f$. Neural Processes try to approximate the stochastic process with a neural network $g$ and the help of a latent variable $z$, which results in (Garnelo et al., 2018a):

$p(y_{1:n} \mid x_{1:n}) = \int p(z) \prod_{i=1}^{n} \mathcal{N}\left(y_i \mid g(x_i, z), \sigma^2\right) dz$   (1)
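To make Equation (1) concrete, the sketch below shows a minimal NP-style pipeline (encode context pairs, aggregate permutation-invariantly, sample a latent $z$, decode targets). This is an illustrative sketch in PyTorch with hypothetical module names and sizes, not the authors' implementation.

```python
import torch
import torch.nn as nn

class NeuralProcessSketch(nn.Module):
    """Minimal NP sketch: encode context, aggregate, sample z, decode targets."""
    def __init__(self, x_dim=1, y_dim=1, r_dim=64, z_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim + y_dim, r_dim), nn.ReLU(),
                                     nn.Linear(r_dim, r_dim))
        self.to_mu = nn.Linear(r_dim, z_dim)
        self.to_logvar = nn.Linear(r_dim, z_dim)
        self.decoder = nn.Sequential(nn.Linear(x_dim + z_dim, r_dim), nn.ReLU(),
                                     nn.Linear(r_dim, 2 * y_dim))  # predictive mean and pre-scale

    def forward(self, x_ctx, y_ctx, x_tgt):
        r = self.encoder(torch.cat([x_ctx, y_ctx], dim=-1)).mean(dim=0)  # permutation-invariant aggregation
        mu, logvar = self.to_mu(r), self.to_logvar(r)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()             # reparameterised sample of z
        z_rep = z.expand(x_tgt.size(0), -1)
        out = self.decoder(torch.cat([x_tgt, z_rep], dim=-1))
        y_mu, y_scale = out.chunk(2, dim=-1)
        return y_mu, nn.functional.softplus(y_scale)                     # predictive mean and std

# usage on toy data: 10 context points, 20 target locations
np_model = NeuralProcessSketch()
y_mu, y_sigma = np_model(torch.rand(10, 1), torch.rand(10, 1),
                         torch.linspace(0, 1, 20).unsqueeze(-1))
```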

3 Recurrent Neural Process

We now proceed with the description of our model, first stating our assumptions and capturing them in our definitions of the joint probability. This leads us to a notion of latent time. To utilize this notion, we show how to map from latent time to observed time and how the mapping relates RNPs to NPs. The section ends with a definition of an appropriate loss to train our model.

3.1 Definitions and Assumptions

There is a class of random functions $\mathcal{F}$ that contains all possible generating parameters of the observed time series $y_{1:T}$. A time series predictor will need to estimate which of these random functions are behind the generating process by assigning a probability to each random function $F_\theta$ (Ortega et al., 2019). Here, $\theta$ is some indexing to distinguish between different random functions, for example the parameters that define the distribution over paths. Let $f_\theta(\tau)$ denote a realization of the random function $F_\theta$ at time $\tau$. A key assumption is that the time scale of the random functions is not the same as that of our observed time series. Therefore, a proper time warping needs to be found that assigns the right time steps in the generating random functions to the observed time series. We denote the time line of the random functions by $\tau$ and the time line of the observed time series by $t \in \{1, \ldots, T\}$. We introduce a probability $p(\tau_t \mid t, \tau_{1:t-1})$, where $T$ is the length of the observed time series and $\tau_t$ is the time step in a random function assigned to the observation $y_t$ when considering the generative process. The conditioning on $\tau_{1:t-1}$ was added such that the generating process is causally consistent: if $y_{t-1}$ was generated by $f_\theta(\tau_{t-1})$, we do not allow $y_t$ to be generated by an $f_\theta(\tau_t)$ with $\tau_t < \tau_{t-1}$. This probability distribution allows for dealing with changing generative parameters and thus with non-stationary time series.
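As a concrete illustration of the alignment idea (with hypothetical variable names, not the paper's notation), consider hourly observations governed by a slowly changing daily parameter. The alignment assigns each observed hour to a latent "day" step, and causal consistency simply means the assignment never moves backwards:

```python
# Hypothetical illustration: 72 hourly observations, one latent step per day.
T_obs = 72                                   # observed time steps (hours)
alignment = [t // 24 for t in range(T_obs)]  # latent step assigned to each observation

# Causal consistency: once y_{t-1} is generated by latent step tau_{t-1},
# y_t may not be generated by an earlier latent step.
assert all(alignment[t] >= alignment[t - 1] for t in range(1, T_obs))

# Alignment sets: which observed steps belong to each latent step.
alignment_sets = {}
for t, tau in enumerate(alignment):
    alignment_sets.setdefault(tau, []).append(t)
print({tau: (steps[0], steps[-1]) for tau, steps in alignment_sets.items()})
# {0: (0, 23), 1: (24, 47), 2: (48, 71)}
```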

3.2 Model Description

We can now formulate the generative process behind the observed time series as

(2)

where $f_\theta$ is an instantiation of the random function $F_\theta$. The model trying to infer the generating processes behind an observed sequence will thus have to find the right parameters as well as the correct alignment of time steps. To make the following derivation clearer, we simplify Equation (2) as follows:

(3)

The posterior predictive distribution then looks as follows:

(4)

We see that $p(\tau_t \mid t, \tau_{1:t-1})$ assumes a role similar to that of the distribution $p(z)$ in Equation (1). However, there are slight differences, as $\tau$ is our time in latent space and not the time in observed space. Conveniently, $p(\tau_t \mid t, \tau_{1:t-1})$ acts here as an indicator function, telling us which time steps in the observed time series are aligned with which time steps in latent time. In other words, it denotes an alignment set $A_\tau$. We can therefore approximate the latent random functions with an LSTM (Hochreiter and Schmidhuber, 1997) that processes the relevant observations indicated by $A_\tau$, and model the emission by any non-linear decoder $g$, where $z$ is a high-dimensional random vector parameterising $g$. The latent generative model then takes the following form:

(5)
(6)
(7)
(8)

In this case, $A_\tau$ corresponds to the set of observed time steps aligned with latent time step $\tau$. Equation (6) renders the latent function values independent conditioned on $z$, which in turn makes the subsequences of the observed time series conditionally independent, as seen in Equation (7). To learn the time mapping, we are not allowed to reset the hidden state after encoding, as otherwise the encoding LSTM would never see multiple instances of the random function.
Note that Equation (1) is a special case of Equation (6) for a generating stochastic process where the latent time and the observed time align, that is, they tick with the same clock.
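The sketch below illustrates the generative story implied by Equations (5)-(8) under our own illustrative assumptions: a global latent $z$, latent function values produced by an LSTM over latent time, and each latent step emitting a Gaussian subsequence of observations. It is not the authors' code, only a minimal reading of the factorization.

```python
import torch
import torch.nn as nn

z_dim, h_dim, y_dim, n_latent_steps, steps_per_tau = 32, 64, 1, 4, 24

latent_lstm = nn.LSTM(z_dim, h_dim)                   # latent dynamics over latent time tau
emit = nn.Linear(h_dim, 2 * y_dim * steps_per_tau)    # emits mean/log-std of one subsequence

z = torch.randn(z_dim)                                # global latent variable
inputs = z.expand(n_latent_steps, 1, z_dim)           # same z fed at every latent step
h, _ = latent_lstm(inputs)                            # h[tau]: latent function state at step tau

ys = []
for tau in range(n_latent_steps):                     # subsequences are independent given h[tau]
    mu, log_sigma = emit(h[tau]).chunk(2, dim=-1)
    ys.append(mu + torch.randn_like(mu) * log_sigma.exp())  # sample one observed subsequence
y = torch.cat(ys, dim=-1).view(-1, y_dim)             # observed series of n_latent_steps * steps_per_tau values
```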

3.3 ELBO

Let $q(z \mid y_{1:T})$ be the variational posterior with which we can learn the non-linear decoder $g$. The Evidence Lower Bound (ELBO) then looks as follows:

(9)

Analogously to NPs, we split the dataset into a context set $\mathcal{C}$ and a target set $\mathcal{T}$:

(10)

As the true posterior conditioned on the context set is intractable, we will use

(11)

Since we cannot directly observe the latent random functions, we have to express this in terms of the observed data:

(12)
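Since Equations (9)-(12) were not reproduced here, the sketch below shows the standard NP-style variational objective we assume is analogous: a reconstruction term on the target subsequences plus a KL term between the posterior conditioned on context and target and the posterior conditioned on the context alone. The distributions passed in are hypothetical stand-ins for the encoder of Section 3.4.

```python
import torch
from torch.distributions import Normal, kl_divergence

def np_style_elbo(q_ctx_tgt: Normal, q_ctx: Normal, y_pred: Normal, y_target: torch.Tensor):
    """NP-style ELBO sketch: E_q[log p(y_T | z)] - KL(q(z | C, T) || q(z | C)).

    q_ctx_tgt: variational posterior given context and target subsequences
    q_ctx:     variational posterior given the context subsequences only
    y_pred:    predictive distribution decoded from a sample of q_ctx_tgt
    """
    recon = y_pred.log_prob(y_target).sum()
    kl = kl_divergence(q_ctx_tgt, q_ctx).sum()
    return recon - kl  # maximise this (equivalently, minimise its negative)

# toy usage with made-up distributions
q_ct = Normal(torch.zeros(32), torch.ones(32))
q_c = Normal(torch.zeros(32), 2 * torch.ones(32))
pred = Normal(torch.zeros(100), torch.ones(100))
loss = -np_style_elbo(q_ct, q_c, pred, torch.randn(100))
```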

3.4 Implementation

An implementation of the proposed generative model involves several design choices. Equation (3) motivates the use of an LSTM encoder because of the need to keep track of previous time alignments. This implies that the hidden state of the encoder LSTM is not reset for each encoding, because it must remember to which latent time step the encoder mapped the previous observed time step. We can use the same hidden state to encode the posterior distribution over the parameters, for which one could reset the encoder's hidden state. We could also model the distribution from Equation (3) more explicitly using an attention mechanism (Bahdanau et al., 2014), as conditioning on the current time step suggests querying the latent space with it. This might explain why attention works so well in the Attentive Neural Process (Kim et al., 2019).
To encode the correct generative random functions in our context representations, it is theoretically sufficient to pass through the context points with an LSTM once. The last hidden state then contains the necessary information about all generating random functions to be encoded in the latent space. In practice, however, this approach has downsides, because the hidden state potentially has to capture multiple random functions that operate at different time scales. As derived in the previous section, it is sufficient to encode subsequences to capture all necessary information about the latent random functions. As we do not know which subsequences are sufficient, one possibility is to create all possible subsequences. Another option is to build in priors about the dataset in the form of hand-chosen sequence lengths. In the aforementioned weather example, one may assume that there are generators with hourly, daily, monthly and yearly seasonalities, so it makes sense to encode subsequences of these lengths. The encoder thus results in:

(13)
(14)

where $y_{t-l:t}$ denotes some subsequence of length $l$ that ends at some time point $t$, and $r$ is a representation of the (input, hidden state) pair. As the hidden state already encodes the corresponding input, it is in theory not necessary to include the input explicitly; empirically, however, it helps the network align the time steps. To guarantee permutation invariance, we need to aggregate the representations with a permutation-invariant operation such as the mean, $r_t = \frac{1}{N}\sum_{i=1}^{N} r_i$, where $r_t$ is the representation created when predicting at time step $t$. Then we sample a global latent representation for our stochastic path:

(15)
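A sketch of Equations (13)-(15) as we read them: an LSTM encodes each chosen subsequence without resetting its hidden state, the (input, hidden state) pairs are turned into representations, the representations are averaged for permutation invariance, and a global latent $z$ is sampled. Names and sizes are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SubsequenceEncoder(nn.Module):
    def __init__(self, y_dim=1, h_dim=64, r_dim=64, z_dim=32):
        super().__init__()
        self.lstm = nn.LSTM(y_dim, h_dim, batch_first=True)
        self.to_r = nn.Linear(y_dim + h_dim, r_dim)   # representation of the (input, hidden state) pair
        self.to_mu = nn.Linear(r_dim, z_dim)
        self.to_logvar = nn.Linear(r_dim, z_dim)

    def forward(self, subsequences):
        """subsequences: list of tensors of shape (length_i, y_dim), e.g. daily/weekly windows."""
        reps, state = [], None
        for seq in subsequences:
            out, state = self.lstm(seq.unsqueeze(0), state)   # hidden state carried over, not reset per subsequence
            last_h, last_y = out[0, -1], seq[-1]              # last hidden state and corresponding input
            reps.append(self.to_r(torch.cat([last_y, last_h])))
        r = torch.stack(reps).mean(dim=0)                     # mean aggregation -> permutation invariance
        mu, logvar = self.to_mu(r), self.to_logvar(r)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # global latent representation of the stochastic path
        return z, r

enc = SubsequenceEncoder()
z, r = enc([torch.randn(24, 1), torch.randn(168, 1)])         # e.g. one daily and one weekly subsequence
```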

For the decoder, there are also multiple design choices, at least in theory. Assuming we knew the correct time alignment and all of the generating parameters, an MLP could be deployed as the decoder and predict the target sequence in one shot. Usually, however, we do not have that knowledge, and typical generative processes are assumed to be highly stochastic in nature. We therefore propose auto-regressive decoding through an LSTM, which keeps track of previous decisions and establishes a meaningful uncertainty measure. For example, imagine a coin flip decides whether the time series will rise or fall for the next 20 steps. An LSTM can keep track of past decisions and give a meaningful uncertainty measure for both cases. With one-shot prediction, we cannot know the outcome of the coin flip, so the model predicts a steady state with a diverging uncertainty measure. Depending on the use case, either behaviour might be preferable. This manifests in:

(16)
(17)
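A sketch of the autoregressive decoder of Equations (16)-(17) as we interpret it: an LSTM cell receives the aggregated code and its own previous prediction at each step and outputs a Gaussian over the next value. The exact conditioning scheme is an assumption for illustration.

```python
import torch
import torch.nn as nn

class ARDecoder(nn.Module):
    def __init__(self, y_dim=1, z_dim=32, h_dim=64):
        super().__init__()
        self.cell = nn.LSTMCell(y_dim + z_dim, h_dim)
        self.head = nn.Linear(h_dim, 2 * y_dim)            # predictive mean and log-std

    def forward(self, z, y0, steps):
        h = torch.zeros(1, self.cell.hidden_size)
        c = torch.zeros(1, self.cell.hidden_size)
        y_prev, means, stds = y0, [], []
        for _ in range(steps):
            h, c = self.cell(torch.cat([y_prev, z], dim=-1), (h, c))
            mu, log_sigma = self.head(h).chunk(2, dim=-1)
            means.append(mu); stds.append(log_sigma.exp())
            y_prev = mu                                     # feed own prediction back in (teacher forcing at train time)
        return torch.cat(means), torch.cat(stds)

dec = ARDecoder()
mu, sigma = dec(z=torch.randn(1, 32), y0=torch.zeros(1, 1), steps=50)
```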

According to the posterior predictive distribution in Equation (4), the probability over the parameters should be re-estimated at each time step by including our prediction in the encoding. In practice, however, we fix the encoding at the beginning. In long-term prediction settings, it makes sense to update the encodings periodically.

4 Related Work

4.1 Conditional Latent Variable Models

In short, conditional latent variable models try to learn the conditional probability distribution $p(y_{\mathcal{T}} \mid x_{\mathcal{T}}, \mathcal{C}, z)$, where $\mathcal{C}$ denotes the context points, $(x_{\mathcal{T}}, y_{\mathcal{T}})$ the target points, and $z$ some latent variable. The latent variable can be considered local or global, depending on how many times it is used to predict future values. In the RNP case, one might imagine using multiple latent variables, generated by LSTMs over different subsequences. The longer the subsequences, the more global the latent variables become.
Previous work (Garnelo et al., 2018b) introduced Conditional Neural Processes (CNPs), a first attempt at a conditional latent model combining the benefits of neural networks and GPs. With CNPs, however, it was not possible to generate different samples of a function given some context data, because the model was missing a latent variable. Therefore, the model could not properly propagate uncertainty through the network. This shortcoming was fixed by introducing a latent variable enabling global sampling (Garnelo et al., 2018a). The NP model consists of an encoder, an aggregator and a decoder. The encoder is responsible for creating representations of the context points; the aggregator introduces order invariance and helps keep the runtime linear in the number of context and target points by combining the representations into a single representation; and the decoder transforms the target locations and the aggregated context representation into predictions of the target values. The NP still had an empirical weakness: it predicts high uncertainty on context points even though the predictive distribution is conditioned on their encoding. Therefore, attention over the representations was introduced (Kim et al., 2019) to create a more meaningful aggregate representation. On the more theoretical side, it was shown that, under some conditions, NPs and GPs are mathematically equivalent (Rudner et al., 2018). NPs are a special case of RNPs, where the observed time aligns perfectly with the latent time and each latent time step corresponds to exactly one observed time step.
Other notable conditional latent variable models include the Conditional VAE (Sohn et al., 2015), the Neural Statistician (Edwards and Storkey, 2016), the Variational Homoencoder (Hewitt et al., 2018) and Matching Networks (Vinyals et al., 2016).

4.2 State Space Models

A large variety of state space models has been proposed in recent years. Most notable is the Recurrent VAE structure (Chung et al., 2015), which has been extended extensively (Krishnan et al., 2015, 2016). Other examples are predictive state representations (Choromanski et al., 2018; Downey et al., 2017; Venkatraman et al., 2017), non-parametric state space models (Turner et al., 2010; Nickisch et al., 2018; Doerr et al., 2018), models working with linear Gaussian state spaces (Fraccaro et al., 2017; Rangapuram et al., 2018) and probabilistic graphical models (Johnson et al., 2016), as well as models inspired by a Bayesian filtering perspective (Lim et al., 2019). A similar but time-agnostic view of prediction (Jayaraman et al., 2018) led to an approach orthogonal to the present work. Another idea to capture high-level time dynamics is the Neural History Compressor based on predictive coding (Schmidhuber, 1992).

5 Experiments

We conduct experiments on three different datasets. The artificially created sine wave dataset serves a demonstrative purpose. The Electricity dataset contains one long time series, whereas the Drives dataset contains one short time series used for nonlinear system identification. We chose these datasets to compare to previous results of other models and to illustrate that the number of data points is not crucial for RNP’s performance. Experimental details will be addressed in the supplementary material.

5.1 Parameterized Sine Wave

We create a dataset based on a sine wave $y(t) = A \sin(\omega t) + d$, parameterized by the random variables amplitude $A$, frequency $\omega$ and vertical displacement $d$. Furthermore, we define a generative process for our sine wave: for each parameter of the sine wave, a separate random function is specified, for example

(18)

on a latent time range (more details in the Appendix). We also fix a subsequence length for our context sequences, so that the total sequence is composed of a set of such subsequences. We then generate a time series of evenly spaced time steps over the interval, and sine waves are randomly sampled on this subspace to produce our dataset. Sine waves are generated by first instantiating the random functions corresponding to the parameters. Afterwards, each period of the sine wave is related to a time step in the random functions, cf. Figure 1(a). Keep in mind that one time step might be related to multiple periods of the sine wave. For our evaluation, we use the first 6 subsequences for training and the remaining subsequences for validation and testing. Each set is split in half; the first half is used as context sequences and the other half as target sequences. Hence we can guarantee unique context-target pairs for training, validation and testing. We demonstrate the model's performance on one-step prediction in Figure 1.
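The sketch below shows one way such a parameterized sine dataset can be generated. The random walks over the parameters are our own illustrative choice, not the exact generative processes of Equations (21)-(23) in the Appendix.

```python
import numpy as np

def sample_sine_series(n_latent_steps=8, points_per_period=50, seed=0):
    """One latent time step parameterises one period of the observed sine wave."""
    rng = np.random.default_rng(seed)
    # slowly varying generative parameters, one value per latent step (illustrative random walks)
    amplitude = 1.0 + np.cumsum(rng.normal(0, 0.1, n_latent_steps))
    displacement = np.cumsum(rng.normal(0, 0.1, n_latent_steps))
    frequency = np.full(n_latent_steps, 1.0)

    ys = []
    for a, d, f in zip(amplitude, displacement, frequency):
        t = np.linspace(0, 2 * np.pi / f, points_per_period, endpoint=False)
        ys.append(a * np.sin(f * t) + d)      # one period generated by this latent step's parameters
    return np.concatenate(ys)                 # observed series: n_latent_steps periods back to back

series = sample_sine_series()
context, target = series[: len(series) // 2], series[len(series) // 2 :]
```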

(a) The changing amplitude and vertical displacement generate a sine wave. Here 1 latent time step corresponds to 1 period of the sine wave. The generative processes have the same latent time.
(b) An example of handcrafted subsequences used as context sequences for prediction. The multi-colored subsequences are context sequences; the green and dotted blue lines correspond to the predicted mean and the uncertainty, respectively.
Figure 1: The underlying intuition of RNPs (a) and the behaviour of RNPs on the sine waves datasets (b)

5.2 Electricity

The model is also evaluated on the UCI Individual Household Electric Power Consumption Dataset (Dua and Graff, 2017). This is a time series of 2’075’259 measurements collected between December 2006 and November 2010 at 1-minute intervals from a single household. Each measurement consists of 7 features related to power consumption. In order to compare to other works, we choose the feature “active power” as our target and the other features as inputs. We compare to earlier results (Lim et al., 2019).

5.3 Drives

The Coupled Electric Drives is a dataset for nonlinear system identification (Wigren, 2010). It consists of 500 time steps and was produced by two motors driving a pulley using a flexible belt, where the input is the sum of the voltages on the motors and the target is the speed of the belt. We compare our model to the results reported earlier (Mattos et al., 2015).

5.4 Metrics

We evaluate model accuracy using the Mean Squared Error (MSE) between the predicted mean and the target value. The MSEs are normalized by the LSTM baseline's performance, which allows for comparison with previous work. The prediction interval coverage probability (PICP) measures the quality of the predicted uncertainty; we measure it on a 90% prediction interval. It is defined as follows:

$\text{PICP} = \frac{1}{T} \sum_{t=1}^{T} c_t$   (19)
$c_t = \begin{cases} 1 & \text{if } \hat{y}_t^{(5)} \le y_t \le \hat{y}_t^{(95)} \\ 0 & \text{otherwise} \end{cases}$   (20)

where $\hat{y}_t^{(5)}$ and $\hat{y}_t^{(95)}$ are the 5th and 95th percentiles derived from predictions sampled from the model's predictive distribution (Lim et al., 2019).
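A sketch of the two metrics as described above: MSE normalized by the LSTM baseline's MSE, and PICP measured on a 90% interval between the 5th and 95th percentiles of sampled predictions.

```python
import numpy as np

def normalized_mse(pred_mean, target, lstm_mse):
    """MSE of the predicted mean, divided by the LSTM baseline's MSE on the same data."""
    return np.mean((pred_mean - target) ** 2) / lstm_mse

def picp_90(samples, target):
    """samples: (n_samples, T) predictions drawn from the model; target: (T,)."""
    lower = np.percentile(samples, 5, axis=0)    # 5th percentile per time step
    upper = np.percentile(samples, 95, axis=0)   # 95th percentile per time step
    covered = (target >= lower) & (target <= upper)
    return covered.mean()                        # fraction of targets inside the 90% interval

# toy usage: a well-calibrated model should give a value close to 0.9
samples = np.random.normal(0.0, 1.0, size=(200, 100))
target = np.random.normal(0.0, 1.0, size=100)
print(picp_90(samples, target))
```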

              Electricity   Drives
LSTM          1.000         1.000
VRNN          1.902         -
DKF           1.252         -
DSSM          1.131         -
RNF-LG        0.918         -
RNF-NP        0.856         -
MLP-NARX      -             1.017
GP-NARX       -             0.953
REVARB        -             0.462
RNP           1.111         0.238
Table 1: Normalized MSE for One-Step Predictions

              Electricity   Drives
VRNN          0.986         -
DKF           1.000         -
DSSM          0.964         -
RNF-LG        0.960         -
RNF-NP        0.927         -
RNP           0.947         0.874
Table 2: PICP for One-Step Prediction
(a) A sample of the predictive performance on the Electricity dataset. The green line depicts the mean; the blue dotted line depicts one standard deviation. The blue line is the target. The multi-colored segments are randomly sampled subsequences used as context.

(b) A second sample of the predictive performance on the Electricity dataset.
(c) The model is able to approximate the second half of the Drives dataset. However, it is relatively uncertain about the start point.
(d) The model is able to learn a sampled sine curve with changing frequency, amplitude and vertical displacement.
Figure 2: Predictive performances of the RNP on different datasets.

5.5 Results

The results in Table 1 and Table 2 suggest that the performance of the RNP model is comparable to that of other SSMs. The outcome for the Electricity dataset indicates that the conditioning of the LSTM decoder was deceptive rather than informative. This could be due to multiple factors; for instance, we neither chose an exhaustive set of subsequences nor sampled them intelligently. For Drives, RNPs outperform the LSTM baseline and provide informative uncertainty estimates. For a qualitative analysis, see Figures 2(a)-2(d). The model learns to capture the time series. However, there is unwanted behaviour: similar to the original NPs, RNPs assign too much uncertainty to context sequences encoded in the representations, which can be seen best in Figure 2(d), where the first 3 periods of the sine wave correspond to context sequences. This might be solvable through attention. Figures 2(a)-2(b) show that the RNP is able to predict the start of the target sequence and adapt its uncertainty.

6 Discussion / Future Work

We introduced Recurrent Neural Processes (RNPs), a family of models with a wide range of applications that generalizes Neural Processes (NPs) to sequences by introducing a notion of latent time. RNPs can derive appropriate slow latent time scales from long sequences of quickly changing observations that hide long-term patterns.

The theoretical framework derived for RNPs enables efficient inference of temporal context by establishing conditional independence among subsequences in a given time series. It also provides an appropriate loss function for training RNPs, and can be extended through techniques such as attention, in line with previous successful combinations of NPs and attention. The framework also provides a plausible explanation of why attention contributes so much to the performance of NPs.

RNPs lend themselves nicely to multistep prediction of very long sequences, because subsequences allow for accessing old information without saving it in the hidden state. Experimental results show that RNPs are able to create an informative latent space facilitating prediction tasks and nonlinear system identification tasks.

Acknowledgments

We would like to thank Pranav Shyam, Giorgio Giannone, Jan Eric Lenssen, David Ackermann and Heng Xin Fun for insightful discussions, and everyone at NNAISENSE for being part of such a conducive research environment.

References

Appendix

Overview

The Supplementary Material presents more details about the RNP’s implementation, the parameter search, and the experiments.

Experimental Details

For all three experiments, we performed a coarse hyperparameter search over a grid with the following possible value ranges. The minibatch size was always 256, except for the smaller Drives set, where it was 8. We trained the RNP using teacher forcing. The experiments were conducted on p3.16xlarge instances on Amazon Web Services.

                              Possible Values
LSTM Hidden State Size        3, 16, 32, (64)
Context Sequence Length       5, 50, 150 / 20, 60, 80
Latent Vector Size            4, 32, 128
Bidirectional LSTM-Encoder    Yes, No
LSTM Layers                   1, 2
Table 3: Parameter space searched during hyperparameter search.

To manage the experiments and ensure reproducibility, we used Sacred [Greff et al., 2017].
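A minimal sketch of how Sacred can record a run's configuration; the config values follow Table 3, and the body of `main` is a stub standing in for the (hypothetical) training entry point.

```python
from sacred import Experiment

ex = Experiment('rnp')

@ex.config
def config():
    hidden_state_size = 32          # searched over {3, 16, 32, (64)}
    latent_vector_size = 32         # searched over {4, 32, 128}
    context_sequence_length = 50    # searched over {5, 50, 150} / {20, 60, 80}
    bidirectional_encoder = False
    lstm_layers = 1
    batch_size = 256

@ex.automain
def main(hidden_state_size, latent_vector_size, context_sequence_length,
         bidirectional_encoder, lstm_layers, batch_size):
    # a real run would build and train the RNP here; stubbed for illustration
    print('running with hidden state size', hidden_state_size)
```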

Sine

For the easier sine dataset, we tested our models on a long sequence of 450 time steps. The model worked best for small hidden state sizes and big latent vector sizes. We trained it for 200 epochs, defining our generative processes as:

(21)
(22)
(23)

Electricity

We reproduced the LSTM baseline model of previous work [Lim et al., 2019] as accurately as possible, to compare our model to the reported normalized performances. For dataset preprocessing and architectural details of the other models we refer to the appendix of the aforementioned paper.
We tried multiple context sequence lengths, {50, 150, 300}, to train and test the network, but length did not have a significant impact on performance. Experiments on this dataset profited from an increased hidden state and a decreased latent vector size. We trained the model for 130 epochs.

Drives

For the Drives dataset, we relied on results reported by previous work [Mattos et al., 2015]. Parameter search favored a small hidden state size and a larger latent vector size. Due to the dataset size, we trained the model on shorter context and target sequences of length 5 and 15 respectively. For testing, we used longer sequences to which the model was able to adapt. We trained it for 100’000 epochs.

(a) Learning curve of RNP on training and validation data of the Electricity dataset. Keep in mind that we do not optimize for MSE, but for log likelihood.
(b) Learning curve of LSTM on training and validation data of the Electricity dataset, optimized for MSE.
(c) Learning curve of RNP on training and validation data of the Drives dataset.
Figure 3: Training and Validation Metrics for RNP and LSTM on the Electricity ((a), (b)) and Drives dataset ((c))
                              LSTM                               RNP
                              Sine    Electricity  Drives        Sine    Electricity  Drives
Hidden State Size             32      50           up to 2048    3       64           32
Context Sequence Length       -       -            -             150     20           80
Test Sequence Length          450     50           250           450     60           250
Latent Vector Size            -       -            -             32      4            32
Bidirectional LSTM-Encoder    -       -            -             No      Yes          No
LSTM Layers                   1       1            up to 3       1       2            2
Learning Rate                 0.1     0.1          -             0.001   0.01         0.0001
Table 4: We used these hyperparameters for our final performance comparison.
(a) We can see that the network is able to capture the time series in its bounds of one standard deviation.
(b) The model captures the system dynamics; the apparent lag in prediction is possibly due to training by teacher forcing.
Figure 4: Zooming in on RNP’s predictive performance on Electricity ((a)) and Drives ((b))

Architecture Details

Figure 5: This model represents one of many possible architectures within the RNP framework.

The architecture depicted in Figure 5 exhibits a stochastic and a deterministic path. Having both paths was reported to be beneficial [Le et al.]. Each path has its own encoder. Subsequences fed into either path are encoded by LSTMs, yielding sequences of hidden states.

Bidirectional paths yield two sequences of hidden states that one could encode in various ways. Our final model uses only the last hidden state and the corresponding input.

The codes are aggregated and fed into the LSTM decoder at each time step. Inferring the first hidden state of the decoder from the representation was found to be helpful.
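The sketch below shows one way to realise the architecture of Figure 5: a deterministic path whose aggregated LSTM codes are fed directly to the decoder, a stochastic path that yields a sampled $z$, and a decoder whose first hidden state is inferred from the representation. The specific wiring and sizes are our illustrative assumptions, not the authors' exact model.

```python
import torch
import torch.nn as nn

class RNPSketch(nn.Module):
    """Illustrative RNP-style model with a deterministic and a stochastic encoding path."""
    def __init__(self, y_dim=1, h_dim=64, z_dim=32):
        super().__init__()
        self.det_encoder = nn.LSTM(y_dim, h_dim, batch_first=True)   # each path has its own encoder
        self.sto_encoder = nn.LSTM(y_dim, h_dim, batch_first=True)
        self.to_mu, self.to_logvar = nn.Linear(h_dim, z_dim), nn.Linear(h_dim, z_dim)
        self.init_h = nn.Linear(h_dim + z_dim, h_dim)                # infer decoder's first hidden state from the codes
        self.decoder = nn.LSTM(y_dim + h_dim + z_dim, h_dim, batch_first=True)
        self.head = nn.Linear(h_dim, 2 * y_dim)

    def forward(self, context_subseqs, y_targets):
        det_codes, sto_codes = [], []
        for seq in context_subseqs:
            det_codes.append(self.det_encoder(seq.unsqueeze(0))[0][0, -1])  # last hidden state per subsequence
            sto_codes.append(self.sto_encoder(seq.unsqueeze(0))[0][0, -1])
        r_det = torch.stack(det_codes).mean(0)                       # aggregate deterministic codes
        r_sto = torch.stack(sto_codes).mean(0)
        mu, logvar = self.to_mu(r_sto), self.to_logvar(r_sto)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()         # stochastic path

        T = y_targets.size(0)
        cond = torch.cat([r_det, z]).expand(T, -1)                   # codes fed to the decoder at every step
        h0 = self.init_h(cond[:1]).unsqueeze(0)                      # decoder's first hidden state from the representation
        c0 = torch.zeros_like(h0)
        out, _ = self.decoder(torch.cat([y_targets, cond], dim=-1).unsqueeze(0), (h0, c0))
        y_mu, log_sigma = self.head(out[0]).chunk(2, dim=-1)
        return y_mu, log_sigma.exp()

model = RNPSketch()
mu, sigma = model([torch.randn(24, 1), torch.randn(168, 1)], torch.randn(50, 1))
```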