1 Introduction
System identification, i.e. learning dynamics models from data (Ljung, 1998, 2010), is a key ingredient of model-predictive control (Camacho & Alba, 2013) and model-based reinforcement learning (RL) (Deisenroth & Rasmussen, 2011; Doerr et al., 2017b). State-Space Models (SSMs) are one popular class of representations (Billings, 2013), which describe a system with input $u_t$ and output $y_t$ in terms of a latent Markovian state $x_t$. Based on a transition model $f$ and an observation model $g$, as well as process and measurement noise $\epsilon_t$ and $\gamma_t$, a time-discrete SSM is given by

$$x_{t+1} = f(x_t, u_t) + \epsilon_t, \qquad y_t = g(x_t) + \gamma_t. \qquad (1)$$
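As a concrete illustration, simulating a time-discrete SSM of the form (1) amounts to iterating the transition and observation maps with added noise. The following is a minimal NumPy sketch with an arbitrary toy dynamics of our own choosing (not a system from this paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_ssm(f, g, x0, u_seq, proc_std, meas_std):
    """Roll out x_{t+1} = f(x_t, u_t) + eps_t, y_t = g(x_t) + gamma_t (eq. (1))."""
    x, ys = x0, []
    for u in u_seq:
        ys.append(g(x) + meas_std * rng.standard_normal(np.shape(g(x))))
        x = f(x, u) + proc_std * rng.standard_normal(np.shape(x))
    return np.array(ys)

# Toy example: stable linear dynamics with a scalar observation.
f = lambda x, u: np.array([0.9 * x[0] + 0.1 * x[1],
                           -0.2 * x[0] + 0.8 * x[1] + u])
g = lambda x: x[:1]
y = simulate_ssm(f, g, np.zeros(2), np.ones(50), proc_std=0.01, meas_std=0.05)
```

Note that the latent state $x_t$ is only accessible through the noisy output $y_t$, which is exactly what makes the inference problem discussed next nontrivial.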
Typically, in real systems, the latent state cannot be measured directly but has to be inferred from a series of noisy output observations. For linear SSMs, this inference problem and the model learning can be solved simultaneously by subspace identification (Van Overschee & De Moor, 2012). Efficient methods also exist for the deterministic, but nonlinear
counterpart, e.g. recurrent neural networks (RNNs)
(Hochreiter & Schmidhuber, 1997). However, for tasks such as learning control policies, probabilistic models enable safe learning and alleviate model bias (Deisenroth & Rasmussen, 2011). Robust training of probabilistic, nonlinear SSMs is a challenging and only partially solved problem, especially for higher-dimensional systems (Frigola et al., 2013, 2014; Eleftheriadis et al., 2017; Svensson & Schön, 2017). This paper proposes the Probabilistic Recurrent State-Space Model (PRSSM, pronounced "prism"; code available after publication at https://github.com/andreasdoerr/PRSSM), a framework which tackles the key challenges preventing robust training of probabilistic, nonlinear SSMs. PRSSM takes inspiration from RNN model learning. In particular, the latent state transition model is unrolled over time, thereby accounting for temporal correlations whilst simultaneously allowing learning by backpropagation through time and mitigating the problem of latent state initialization. Grounded in the theory of Gaussian Processes (GPs), the proposed method enables probabilistic model predictions, inference of complex latent state distributions, and principled model complexity regularization. Furthermore, we propose an adapted form of a recognition model for the initial state distribution. This facilitates scalability through batch learning and learning of slow or unstable system dynamics.
In summary, the key contributions of this paper are:

Combining gradient-based and sample-based inference for efficient learning of nonlinear Gaussian process state-space models.

Tractable variational approximation, maintaining the true latent state posterior and temporal correlations.

Doubly stochastic inference scheme for scalability.

Recognition model, which allows robust training and prediction by initializing the latent state distribution.
Together, these contributions allow for robust training of the PRSSM. The proposed framework is evaluated on a set of real-world system identification datasets and benchmarked against a range of state-of-the-art methods.
2 Related Work
Modeling the behavior of systems with only partially observable states has been an active field of research for many years and several schools of thought have emerged. Representations range from SSMs (Van Overschee & De Moor, 2012) and Predictive State Representations (PSRs) (Littman & Sutton, 2002; Singh et al., 2003; Rudary & Singh, 2004) to autoregressive models (Murray-Smith & Girard, 2001; Girard et al., 2003; Likar & Kocijan, 2007; Billings, 2013), as well as hybrid versions combining these approaches (Mattos et al., 2015, 2016; Doerr et al., 2017a).

Autoregressive (history-based) methods avoid the complex inference of a latent state and instead directly learn a mapping from a history of past inputs and observations to the next observation, i.e. $y_t = f(y_{t-1}, \dots, y_{t-H}, u_{t-1}, \dots, u_{t-H})$. These models face the issue of feeding back observation noise into the dynamics model. Recent work addresses this problem by either actively accounting for input noise (McHutchon & Rasmussen, 2011) or reverting to a hybrid, autoregressive formulation in a latent, but noise-free state (Mattos et al., 2016; Doerr et al., 2017a). Such models can be made deep and trained in a recurrent manner as presented in (Mattos et al., 2015). In theory, a history horizon $H$ equal to the true latent state dimensionality (dimension of $x$) is sufficient to model all relevant dependencies of the system under consideration (Ljung, 1998). In practice, however, autoregressive models typically need a much larger history horizon to cope with noisy observations and arbitrary sampling frequencies.
Thus, in this paper, we focus on SSMs based on a compact, Markovian state representation. Furthermore, SSMs allow the direct application of many existing control algorithms, which rely on an explicit representation of the latent state. Within the field of latent state models, exact solutions for state inference and model learning are known for linear SSMs and can be obtained by the well-known Kalman filter/smoother (Kalman, 1960) and subspace identification (Van Overschee & De Moor, 2012). In the case of nonlinear latent state transition dynamics, both deterministic and probabilistic variants are active fields of research.

Deterministic variants such as Long Short-Term Memory (LSTM) models have been shown to be powerful representations for tasks such as natural language processing
(Venugopalan et al., 2014) or text understanding (Sutskever et al., 2011). However, for the purpose of system identification and control, probabilistic predictions are often required to make model errors explicit (Deisenroth & Rasmussen, 2011). PRSSM learning can be interpreted as a probabilistic version of the learning procedure in these deep recurrent models. Both procedures share the explicit unrolling of transition and observation model. Errors between the predicted and the observed system output are propagated back over time. Thus, only the transition dynamics has to be learned, while the latent state (distribution) is given implicitly. This way, the challenging initialization and optimization of latent state variables is avoided. In contrast to deep recurrent models, the PRSSM loss and model regularization are automatically obtained from the model. Furthermore, PRSSM obtains predictive distributions, and the proposed initial state recognition model facilitates learning on shorter sub-trajectories and unstable systems, which is not possible in deep recurrent models.

Gaussian Process State-Space Models (GPSSMs) are a popular class of probabilistic SSMs (Wang et al., 2008; Ko & Fox, 2009; Turner et al., 2010; Frigola et al., 2013, 2014; Eleftheriadis et al., 2017). The use of GPs allows for a fully Bayesian treatment of the modeling problem, resulting in an automatic complexity trade-off which regularizes the learning problem. Filtering and smoothing in GPSSMs has already been covered extensively: deterministic (e.g. linearization) as well as stochastic (e.g. particle-based) methods are presented in (Ko & Fox, 2009; Deisenroth et al., 2012). These methods, however, assume an established system model, which is generally not available without prior knowledge. In this work, the latent state smoothing distribution is given implicitly and optimized jointly during model learning.
Approaches to probabilistic GPSSMs mainly differ in their approximations to the model's joint distribution (e.g. when solving for the smoothing distribution or for the observation likelihood). One class of approaches aims to solve for the true distribution, which requires sample-based methods, e.g. Particle Markov Chain Monte Carlo (PMCMC), as in (Frigola et al., 2013, 2014). These methods are close to exact but computationally inefficient and intractable for higher latent state dimensions or larger datasets. A second class of approaches is based on variational inference and mean-field approximations (Mattos et al., 2015; Föll et al., 2017). These methods, however, operate on latent autoregressive models which can be initialized by the observed output time series, such that the learned latent representation acts as a smoothed version of the observations. In Markovian latent spaces, no such prior information is available and initialization is therefore non-trivial. Model optimization based on mean-field approximations empirically leads to highly suboptimal local solutions. Bridging the gap between both classes, recent methods strive to recover (temporal) latent state structure. In (Eleftheriadis et al., 2017), a linear, time-varying latent state structure is enforced as a tractable compromise between the true nonlinear dependencies and no dependencies as in mean-field variational inference. However, to facilitate learning, a complex recognition model over the linear time-varying dynamics is required. In contrast, the proposed PRSSM can efficiently incorporate the true dynamics by combining sampling and gradient-based learning.

3 Gaussian Process State-Space Model
This section presents the general model background for GPSSMs. Following a short recap of GPs in Sec. 3.1 and a specific sparse GP prior in Sec. 3.2, PRSSM as one particular GPSSM is introduced in Sec. 3.3.
3.1 Gaussian Process
A GP (Williams & Rasmussen, 2005) is a distribution over functions $f(x)$ that is fully defined by a mean function $m(x)$ and covariance function $k(x, x')$. For each finite set of points $X = [x_1, \dots, x_N]$ from the function's domain, the corresponding function evaluations $f = [f(x_1), \dots, f(x_N)]$ are jointly Gaussian as given by

$$p(f \mid X) = \mathcal{N}(f \mid m_X, K_{X,X}), \qquad (2)$$

with mean vector $m_X$ having elements $m_i = m(x_i)$ and covariance matrix $K_{X,X}$ with entries $K_{ij} = k(x_i, x_j)$. Given observed function values $f$ at input locations $X$, the GP predictive distribution at a new input location $x^*$ is obtained as the conditional distribution

$$p(f(x^*) \mid x^*, X, f) = \mathcal{N}(f(x^*) \mid \mu, \sigma^2), \qquad (3)$$

with posterior mean and variance

$$\mu = m(x^*) + k_{x^*,X} K_{X,X}^{-1} (f - m_X), \qquad (4)$$
$$\sigma^2 = k_{x^*,x^*} - k_{x^*,X} K_{X,X}^{-1} k_{X,x^*}, \qquad (5)$$

where $k_{A,B}$ denotes the scalar, vector, or matrix of covariances for each pair of elements in $A$ and $B$. In this work, the squared exponential kernel with Automatic Relevance Determination (ARD) (Williams & Rasmussen, 2005) with hyperparameters $\theta$ is employed. Due to the proposed sampling-based inference scheme (cf. Sec. 4), any other differentiable kernel might be incorporated instead.
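The posterior mean and variance in (4) and (5) can be computed in a few lines. The following sketch (plain NumPy, zero prior mean assumed, function names of our own choosing) implements the ARD squared-exponential kernel and the resulting GP conditional:

```python
import numpy as np

def ard_se_kernel(A, B, lengthscales, signal_var):
    """Squared-exponential kernel with ARD lengthscales.
    A: (n, d), B: (m, d) -> (n, m) covariance matrix."""
    diff = (A[:, None, :] - B[None, :, :]) / lengthscales
    return signal_var * np.exp(-0.5 * np.sum(diff ** 2, axis=-1))

def gp_posterior(X, f, x_star, lengthscales, signal_var, jitter=1e-8):
    """Posterior mean and variance (eqs. (4)-(5)) for a zero-mean GP,
    conditioned on noise-free function values f at inputs X."""
    K = ard_se_kernel(X, X, lengthscales, signal_var) + jitter * np.eye(len(X))
    k_star = ard_se_kernel(x_star, X, lengthscales, signal_var)  # (n*, n)
    mean = k_star @ np.linalg.solve(K, f)
    v = np.linalg.solve(K, k_star.T)                             # K^{-1} k(X, x*)
    var = signal_var - np.sum(k_star * v.T, axis=1)
    return mean, var
```

As a sanity check, conditioning at a training input reproduces the observed value with (numerically) zero variance, as (4) and (5) demand.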
3.2 GP Sparsification
Commonly, the GP prediction in (3) is obtained by conditioning on all training data $X$, $f$. To alleviate the computational cost, several sparse approximations have been presented (Snelson & Ghahramani, 2006). By introducing inducing GP targets $z = [z_1, \dots, z_P]$ at pseudo input points $\zeta = [\zeta_1, \dots, \zeta_P]$, which are jointly Gaussian with the latent function, the true GP predictive distribution is approximated by conditioning only on this set of inducing points,

$$p(f(x^*) \mid x^*, \zeta, z) = \mathcal{N}(f(x^*) \mid \mu, \sigma^2), \qquad (6)$$
$$p(z) = \mathcal{N}(z \mid m_\zeta, K_{\zeta,\zeta}), \qquad (7)$$

where $\mu$ and $\sigma^2$ are given by (4) and (5) with $X$ and $f$ replaced by $\zeta$ and $z$. The predicted function values consequently become mutually independent given the inducing points.
3.3 PRSSM Model Definition
The PRSSM is built upon a GP prior on the transition function $f$ and a parametric observation model $g$. This is a common model structure, which can be assumed without loss of generality over (1), since any observation model can be absorbed into a sufficiently large latent state (Frigola-Alcade, 2015). Eliminating the nonparametric observation model, however, mitigates the problem of 'severe non-identifiability' between transition model and observation model (Frigola et al., 2014). Independent GP priors are employed for each latent state dimension $d$, given individual inducing inputs $\zeta_d$ and outputs $z_d$.
In the following derivations, the system's latent state, input, and output at time $t$ are denoted by $x_t$, $u_t$, and $y_t$, respectively. The shorthand $\hat{x}_t = (x_t, u_t)$ denotes the transition model's input at time $t$. The output of the transition model is denoted by $f_t = f(\hat{x}_{t-1})$. A time series of observations from time $t_1$ to time $t_2$ (inclusive) is abbreviated by $y_{t_1:t_2}$ (analogously for the other model variables).
The joint distribution of all PRSSM random variables is given by

$$p(y_{1:T}, x_{1:T}, f_{2:T}, z) = p(x_1)\, p(z) \prod_{t=1}^{T} p(y_t \mid x_t) \prod_{t=2}^{T} p(x_t \mid f_t)\, p(f_t \mid \hat{x}_{t-1}, z), \qquad (8)$$

where $p(z) = \prod_{d=1}^{D_x} p(z_d)$ and $p(f_t \mid \hat{x}_{t-1}, z) = \prod_{d=1}^{D_x} p(f_{t,d} \mid \hat{x}_{t-1}, z_d)$. A graphical model of the resulting PRSSM is shown in Fig. 1.
The individual contributions to (8) are given by the observation model and the transition model, which are now described in detail. The observation model is governed by

$$p(y_t \mid x_t) = \mathcal{N}(y_t \mid g(x_t), \mathrm{diag}(\sigma_{y,1}^2, \dots, \sigma_{y,D_y}^2)), \qquad (9)$$

with observation function

$$g(x_t) = C x_t. \qquad (10)$$

In particular, the matrix $C$ is chosen to select the first $D_y$ entries of $x_t$ by defining $C = [I, 0]$, with $I$ being the $D_y \times D_y$ identity matrix. This model is suitable for observation spaces that are low-dimensional compared to the latent state dimensionality, i.e. $D_y \leq D_x$, which is often the case for physical systems with a restricted number of sensors. The first $D_y$ latent state dimensions can therefore be interpreted as noise-free sensor measurements. For high-dimensional observation spaces (e.g. images), a more involved observation model (e.g. a neural network) may be seamlessly incorporated into the presented framework as long as $g$ is differentiable. Process noise is modeled as

$$p(x_t \mid f_t) = \mathcal{N}(x_t \mid f_t, \mathrm{diag}(\sigma_{x,1}^2, \dots, \sigma_{x,D_x}^2)). \qquad (11)$$
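For concreteness, the selection-matrix observation model (10) can be sketched as follows (a minimal NumPy illustration; the helper name is ours):

```python
import numpy as np

def observation_matrix(d_obs, d_state):
    """C = [I, 0]: selects the first d_obs latent state dimensions (eq. (10))."""
    return np.eye(d_obs, d_state)

# Example: 4-dimensional latent state observed through 2 sensors.
C = observation_matrix(2, 4)
x_t = np.array([0.3, -1.2, 0.7, 2.0])
y_mean = C @ x_t  # noise-free observation mean g(x_t) -> [0.3, -1.2]
```

The remaining latent dimensions (here the last two) are unobserved and must be inferred through the transition dynamics.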
The transition dynamics is described independently for each latent state dimension $d$ by $p(f_{t,d} \mid \hat{x}_{t-1}, z_d)$. This probability is given by the sparse GP prior (7) and predictive distribution (6), with $X$ and $f$ given by $\zeta_d$ and $z_d$. The initial system state distribution $p(x_1)$ is unknown and has to be estimated.
4 PRSSM Inference
Computing the log-likelihood or a posterior based on (8) is generally intractable due to the nonlinear GP dynamics model in the latent state. However, the log marginal likelihood (evidence) can be bounded from below by the Evidence Lower Bound (ELBO) (Blei et al., 2017). This ELBO is derived via Jensen's inequality by introducing a computationally simpler, variational distribution to approximate the model's true posterior distribution (cf. (8)). In contrast to previous work (Frigola et al., 2014; Mattos et al., 2015; Eleftheriadis et al., 2017), the proposed approximation explicitly incorporates the true temporal correlations in the latent state, whilst being scalable to large datasets. The inference scheme is inspired by doubly stochastic variational inference for deep GPs as presented in (Salimbeni & Deisenroth, 2017).
4.1 Variational Sparse GP
PRSSM employs a variational sparse GP (Titsias, 2009) based on a variational distribution $q(z)$ on the GP's inducing outputs, as previously used in (Frigola et al., 2014; Eleftheriadis et al., 2017). For the standard regression problem, the inducing output distribution can be optimally eliminated and turns out to be a Gaussian distribution. Eliminating the inducing outputs, however, results in dependencies between inducing outputs and data, which, in turn, leads to a complexity of $\mathcal{O}(NM^2)$, where $N$ is the number of data points and $M$ the number of inducing points (Titsias, 2009). Unfortunately, this complexity is still prohibitive for large datasets. Therefore, we resort to an explicit representation of the variational distribution over inducing outputs, as previously proposed in (Hensman et al., 2013). This explicit representation enables scalability by utilizing stochastic gradient-based optimization, since individual GP predictions become independent given the explicit inducing points. Following a mean-field variational approximation, the inducing output distribution is given as $q(z_d) = \mathcal{N}(z_d \mid \mu_d, \Sigma_d)$ for each latent state dimension $d$, with diagonal variance $\Sigma_d$. Marginalizing out the inducing outputs, the GP predictive distribution is obtained as Gaussian,

$$q(f_{t,d} \mid \hat{x}_{t-1}) = \mathcal{N}(f_{t,d} \mid \mu_d(\hat{x}_{t-1}), \sigma_d^2(\hat{x}_{t-1})), \qquad (12)$$

with mean and variance given by

$$\mu_d(\hat{x}) = m_{\hat{x}} + \alpha^T (\mu_d - m_{\zeta_d}), \qquad (13)$$
$$\sigma_d^2(\hat{x}) = k_{\hat{x},\hat{x}} - \alpha^T (K_{\zeta_d,\zeta_d} - \Sigma_d)\, \alpha, \qquad (14)$$

where $\alpha = K_{\zeta_d,\zeta_d}^{-1} k_{\zeta_d,\hat{x}}$.
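A minimal sketch of the resulting predictive computation, assuming a zero prior mean and a generic differentiable kernel (all function names are illustrative, not from the paper's code):

```python
import numpy as np

def rbf(A, B):
    """Unit-variance RBF kernel; any differentiable kernel works here."""
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2)

def sparse_gp_predict(k_fun, x_star, zeta, mu_z, Sigma_z, jitter=1e-8):
    """Predictive mean/variance after marginalizing the variational inducing
    outputs q(z) = N(mu_z, Sigma_z), cf. eqs. (12)-(14); zero prior mean."""
    Kzz = k_fun(zeta, zeta) + jitter * np.eye(len(zeta))
    Ksz = k_fun(x_star, zeta)                  # (n*, m)
    A = np.linalg.solve(Kzz, Ksz.T).T          # Ksz Kzz^{-1}
    mean = A @ mu_z
    k_ss = np.diag(k_fun(x_star, x_star))
    # Prior variance, reduced by conditioning on the inducing points,
    # plus the extra uncertainty contributed by the variational q(z).
    var = k_ss - np.sum(A * Ksz, axis=1) + np.sum((A @ Sigma_z) * A, axis=1)
    return mean, var
```

A useful sanity check: setting $q(z)$ equal to the GP prior ($\mu_z = 0$, $\Sigma_z = K_{\zeta,\zeta}$) recovers the prior marginals.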
4.2 Variational Approximation
In previous work (Mattos et al., 2015), a factorized variational distribution is considered, based on a mean-field approximation for the latent states $x_{1:T}$. Their variational distribution is given by

$$q(x_{1:T}, z) = \prod_{t=1}^{T} q(x_t) \prod_{d=1}^{D_x} q(z_d),$$

with independent Gaussian factors $q(x_t)$. This choice, however, leads to several caveats: (i) The number of model parameters grows linearly with the length of the time series, since each latent state is parametrized by its individual distribution for every time step. (ii) Initializing the latent state is non-trivial, since the observation mapping is unknown and generally not bijective. (iii) The model design does not represent correlations between time steps. Instead, these correlations are only introduced by enforcing pairwise couplings during the optimization process. The first two problems have been addressed in (Mattos et al., 2015; Eleftheriadis et al., 2017) by introducing a recognition model, e.g. a bi-directional RNN (which operates on a sequence from left to right and vice versa to obtain predictions based on past and future inputs), which acts as a smoother, can be learned through backpropagation, and allows the latent states to be obtained from the input/output sequence.
The issue of representing correlations between time steps, however, is currently an open problem which we aim to address with our proposed model structure. Properly representing these correlations is a crucial step in making the optimization problem tractable in order to learn GPSSMs for complex systems.
For PRSSM, the variational distribution is given by

$$q(x_{1:T}, f_{2:T}, z) = q(x_1)\, q(z) \prod_{t=2}^{T} p(x_t \mid f_t)\, p(f_t \mid \hat{x}_{t-1}, z), \qquad (15)$$

with Gaussian distributions $q(x_1)$ and $q(z) = \prod_{d=1}^{D_x} q(z_d)$. In contrast to previous work, the proposed variational distribution does not factorize over the latent states, but takes into account the true transition model based on the sparse GP approximation from (8). In previous work, stronger approximations have been required to achieve an analytically tractable ELBO. This work, instead, deals with the more complex distribution by combining sampling- and gradient-based methods.
In (Frigola et al., 2014), the variational distribution over inducing outputs has been optimally eliminated. This leads to a smoothing problem in a second system, requiring computationally expensive, e.g. sample-based, smoothing methods. Instead, we approximate the distribution by a Gaussian, which is the optimal solution in the case of sparse GP regression (cf. (Titsias, 2009)).
The PRSSM model parameters include the variational parameters for the initial state and inducing points, as well as deterministic parameters for the noise models and the GP kernel hyperparameters: $\theta = \{\mu_{x_1}, \Sigma_{x_1}, (\zeta_d, \mu_d, \Sigma_d)_{d=1}^{D_x}, \sigma_x^2, \sigma_y^2, \theta_{\mathrm{GP}}\}$. Note that in the PRSSM, the number of parameters grows only with the number of latent dimensions, but not with the length of the time series.
4.3 Variational Evidence Lower Bound
Following standard variational inference techniques (Blei et al., 2017), the ELBO is given by

$$\log p(y_{1:T}) \geq \mathbb{E}_{q(x_{1:T}, f_{2:T}, z)}\left[\log \frac{p(y_{1:T}, x_{1:T}, f_{2:T}, z)}{q(x_{1:T}, f_{2:T}, z)}\right] =: \mathcal{L}. \qquad (16)$$

Maximizing the ELBO is equivalent to minimizing the KL divergence between the variational distribution and the true posterior (Blei et al., 2017); it is therefore a principled way to fit the tractable approximation to the intractable, true posterior.
Based on (8) and (15) and using standard variational calculus, the ELBO (16) can be transformed into

$$\mathcal{L} = \sum_{t=1}^{T} \mathbb{E}_{q(x_t)}\left[\log p(y_t \mid x_t)\right] - \sum_{d=1}^{D_x} \mathrm{KL}\left(q(z_d) \,\|\, p(z_d; \zeta_d)\right). \qquad (17)$$

The first part is the expected log-likelihood of the observed system outputs under the observation model and the variational latent state distribution $q(x_t)$. This term captures the capability of the learned latent state model to explain the observed system behavior. The second term is a regularizer on the inducing output distribution that penalizes deviations from the GP prior. Due to this term, PRSSM automatically trades off data fit against model complexity. A detailed derivation of the ELBO can be found in the supplementary material.
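The regularizer in (17) is a KL divergence between two multivariate Gaussians, which has a well-known closed form, sketched below (plain NumPy; naive matrix inversion is used for brevity, a Cholesky-based implementation would be preferred in practice):

```python
import numpy as np

def kl_gaussians(mu0, Sigma0, mu1, Sigma1):
    """KL( N(mu0, Sigma0) || N(mu1, Sigma1) ) in closed form; this is the
    structure of the regularizer KL(q(z_d) || p(z_d)) in the ELBO (17)."""
    k = len(mu0)
    Sigma1_inv = np.linalg.inv(Sigma1)   # fine for small m; use Cholesky at scale
    diff = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(Sigma0)
    _, logdet1 = np.linalg.slogdet(Sigma1)
    return 0.5 * (np.trace(Sigma1_inv @ Sigma0)
                  + diff @ Sigma1_inv @ diff
                  - k + logdet1 - logdet0)
```

The KL is zero exactly when $q(z_d)$ matches the prior, so this term only penalizes inducing output distributions that deviate from it.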
4.4 Stochastic Gradient ELBO Optimization
Training the proposed PRSSM requires maximizing the ELBO in (17) with respect to the model parameters $\theta$. While the second term, as a KL divergence between two Gaussian distributions, can be computed in closed form, the first term requires the evaluation of an expectation with respect to the latent state distribution $q(x_t)$. Since the true nonlinear latent dynamics is maintained in the variational approximation (15), an analytic evaluation of $q(x_t)$ is still intractable. Instead, the Markovian structure of the unobserved states and the sparse GP approximation can be exploited to enable a differentiable, sampling-based estimation of the expectation term. Specifically, the marginal latent state distribution at time $t$ is conditionally independent of past time steps, given the previous state distribution and the explicit representation of GP inducing points. Samples $x_t^{(i)}$ can therefore be obtained by recursively drawing from the sparse GP posterior (12) for $t = 2, \dots, T$. Drawing samples from a Gaussian distribution can be made differentiable with respect to its parameters $(\mu, \sigma^2)$ using the reparametrization trick (Kingma & Welling, 2013), by first drawing $\epsilon^{(i)} \sim \mathcal{N}(0, 1)$ and then computing

$$x_t^{(i)} = \mu + \sigma \epsilon^{(i)}, \qquad (18)$$

with $\mu$ and $\sigma^2$ given by (13) and (14). Due to this reparametrization and the unrolling of the latent state, the gradient can be propagated back through time. Using (18), an unbiased estimator of the first term in the ELBO (17) is given by

$$\mathbb{E}_{q(x_t)}\left[\log p(y_t \mid x_t)\right] \approx \frac{1}{N} \sum_{i=1}^{N} \log p(y_t \mid x_t^{(i)}). \qquad (19)$$

Based on the stochastic ELBO evaluation, analytic gradients of (17) can be derived to facilitate stochastic gradient-descent-based model optimization.
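The recursive, reparametrized sampling of (18) and the resulting estimator (19) can be sketched as follows. This is a simplified, non-differentiable NumPy illustration (an actual implementation would use an autodiff framework so that gradients flow through the rollout); `gp_mean_var` stands in for the sparse GP posterior of Sec. 4.1 and all names are our own:

```python
import numpy as np

rng = np.random.default_rng(0)

def elbo_loglik_estimate(x1_mean, x1_var, u_seq, y_seq, gp_mean_var,
                         C, obs_noise_var, n_samples=50):
    """Monte-Carlo estimate of the expected log-likelihood term, cf. (19).

    Latent particles are propagated by recursively sampling from the
    transition posterior via the reparametrization (18):
    x = mu + sqrt(var) * eps with eps ~ N(0, I)."""
    x = x1_mean + np.sqrt(x1_var) * rng.standard_normal((n_samples, len(x1_mean)))
    loglik = 0.0
    for t in range(len(u_seq)):
        # Gaussian log-likelihood of the observed output y_t per particle.
        resid = y_seq[t] - x @ C.T
        ll = -0.5 * np.sum(resid ** 2 / obs_noise_var
                           + np.log(2.0 * np.pi * obs_noise_var), axis=1)
        loglik += ll.mean()
        # Reparametrized draw of the next latent state for every particle.
        mu, var = gp_mean_var(x, u_seq[t])
        x = mu + np.sqrt(var) * rng.standard_normal(mu.shape)
    return loglik
```

In the test below, a stable linear map stands in for the learned GP transition; in a real implementation, `gp_mean_var` would evaluate (13) and (14).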
4.5 Model Predictions
After model optimization based on the ELBO (17), model predictions for a new input sequence and initial latent state can be obtained from the approximate, variational posterior distribution in (15). In contrast to (Mattos et al., 2015), no approximations such as moment matching are required for model predictions. Instead, the complex latent state distribution is approximated by samples as in (18). The predicted observation distribution can then be computed from the latent state distribution according to the observation model in (9). Instead of a fixed, uninformative initial latent state, a learned recognition model can be utilized to find a more informative model initialization (cf. Sec. 5).

5 Extensions for Large Datasets
Optimizing the ELBO (17) based on the full gradient is prohibitive for large datasets and long trajectories. Instead, a stochastic optimization scheme based on mini-batches of sub-trajectories is introduced.
Directly optimizing the initial latent state distribution for each sub-trajectory would lead to a full parametrization of the latent state, which is undesirable as described in Sec. 4.2. Instead, we propose a parametric recognition model, which initializes the latent state $x_1$. In recent work on SSMs (Mattos et al., 2015; Eleftheriadis et al., 2017), a recognition model is introduced to parametrize the full smoothing distribution. In this work, a similar idea is employed, but only to model the initial latent state,

$$q(x_1) = \mathcal{N}(x_1 \mid \mu_{x_1}, \Sigma_{x_1}), \qquad (20)$$
$$(\mu_{x_1}, \Sigma_{x_1}) = h(y_{1:L}, u_{1:L}; \phi). \qquad (21)$$

The initial latent state distribution is approximated by a Gaussian, where mean and variance are modeled by a recognition model $h$. The recognition model acts as a smoother, operating on the first $L$ elements of the system input/output data to infer the first latent state. Instead of directly optimizing $\mu_{x_1}$ and $\Sigma_{x_1}$ during training, errors are propagated back into the recognition model $h$, which is parametrized by $\phi$.
Additionally, while the initial latent state can be inferred during training, no information about it is available during prediction. Thus, the recognition model is also required to enable predictions on test sequences, where the initial latent state is unknown.
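One possible form of the recognition model is a small feed-forward network over the flattened initial window. The paper does not prescribe this architecture, so the sketch below is purely illustrative (a bi-directional RNN would be an equally valid choice):

```python
import numpy as np

def init_recognition_params(d_in, d_hidden, d_state, seed=0):
    """Random weights for a two-layer MLP recognition model (hypothetical)."""
    rng = np.random.default_rng(seed)
    return {"W1": 0.1 * rng.standard_normal((d_in, d_hidden)),
            "b1": np.zeros(d_hidden),
            "W2": 0.1 * rng.standard_normal((d_hidden, 2 * d_state)),
            "b2": np.zeros(2 * d_state)}

def recognize_initial_state(params, y_init, u_init):
    """Map the first L outputs/inputs of a sub-trajectory to the mean and
    variance of q(x_1), cf. (20)-(21). The log-variance parametrization
    keeps the predicted variance positive."""
    feats = np.concatenate([y_init.ravel(), u_init.ravel()])
    h = np.tanh(feats @ params["W1"] + params["b1"])
    out = h @ params["W2"] + params["b2"]
    d = out.size // 2
    return out[:d], np.exp(out[d:])
```

During training, gradients of the ELBO with respect to $\mu_{x_1}$ and $\Sigma_{x_1}$ would be propagated into the network weights rather than into per-trajectory parameters.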
6 Experimental Evaluation
Table 1. Benchmark results: RMSE of model predictions on the benchmark datasets (standard deviation in parentheses). GPNARX and NIGP are one-step-ahead, autoregressive models; REVARB 1/2 and MSGP are multi-step-ahead, latent-space autoregressive models; SSGPSSM and PRSSM are Markovian state-space models.

Task  GPNARX  NIGP  REVARB 1  REVARB 2  MSGP  SSGPSSM  PRSSM
Actuator  0.627 (0.005)  0.599 (0)  0.438 (0.049)  0.613 (0.190)  0.771 (0.098)  0.696 (0.034)  0.502 (0.031) 
Ballbeam  0.284 (0.222)  0.087 (0)  0.139 (0.007)  0.209 (0.012)  0.124 (0.034)  411.6 (273.0)  0.073 (0.007) 
Drives  0.701 (0.015)  0.373 (0)  0.828 (0.025)  0.868 (0.113)  0.451 (0.021)  0.718 (0.009)  0.492 (0.038) 
Furnace  1.201 (0.000)  1.205 (0)  1.195 (0.002)  1.188 (0.001)  1.277 (0.127)  1.318 (0.027)  1.249 (0.029) 
Dryer  0.310 (0.044)  0.268 (0)  0.851 (0.011)  0.355 (0.027)  0.146 (0.004)  0.152 (0.006)  0.140 (0.018) 
Sarcos  0.169 ()  n.a.  n.a.  n.a.  n.a.  n.a.  0.049 () 
In the following, we present insights into the PRSSM optimization schemes, a comparison to state-of-the-art model learning methods on real-world datasets, and results from a large-scale experiment.
6.1 PRSSM Learning
For small datasets (i.e. short training trajectory lengths), the model can be trained based on the full gradient of the ELBO in (17). A comparison of the model predictions before and after training with the full ELBO gradient is visualized in Fig. 2.
Empirically, three major shortcomings of the full gradient-based optimization scheme are observed: (i) Computing the full gradient for long trajectories is expensive and prone to the well-known problems of exploding and vanishing gradients (Pascanu et al., 2013). (ii) An uninformative initial state is prohibitive for unstable systems or systems with slowly decaying initial state transients. (iii) Momentum-based optimizers (e.g. Adam) exhibit fragile optimization performance and are prone to overfitting.
The proposed method addresses these problems by employing the stochastic ELBO gradient based on mini-batches of sub-trajectories and the initial state recognition model (cf. Sec. 5). Fig. 3 visualizes the initial state distribution and the corresponding predictive output distribution for the fully trained model based on the full gradient (top row), as well as for the model based on the stochastic gradient and recognition model (bottom row). The transient dynamics and the associated model uncertainty are clearly visible for the first 15 time steps, until the initial transient decays and approaches the true system behavior. In contrast, the learned recognition model almost perfectly initializes the latent state, leading to much smaller deviations in the predicted observations and far less predictive uncertainty. Notice how the recognition model is most certain about the distribution of the first latent state dimension (orange), which is directly coupled to the observation through the parametric observation model (cf. (9)). The uncertainty for the remaining latent states, in contrast, is slightly higher.
Comparing the full ELBO gradient-based model learning and the stochastic version with the recognition model, the stochastic model learning is far more robust and counteracts the overfitting tendencies of the full gradient-based model learning. A comparison of the model learning progress for both methods is depicted in the supplementary material. Due to the improved optimization robustness and the applicability to larger datasets, the stochastic, recognition-model-based optimization scheme is employed for the model learning benchmark presented in the next section.
6.2 Model Learning Benchmark
The performance of PRSSM is assessed in comparison to state-of-the-art model learning methods on several real-world datasets as previously utilized by (Mattos et al., 2015). The suite of reference methods is composed of: one-step-ahead autoregressive GP models, GP-FITC (Snelson & Ghahramani, 2006) and NIGP (McHutchon & Rasmussen, 2011); multi-step-ahead autoregressive and recurrent GP models in latent space, REVARB with one and two hidden layers, respectively (Mattos et al., 2015), and MSGP (Doerr et al., 2017a); and GPSSMs based on a full Markovian state, SSGPSSM (Svensson & Schön, 2017) and the proposed PRSSM. Currently, no published and runnable code exists for the model learning frameworks presented in (Turner et al., 2010; Frigola et al., 2013, 2014; Eleftheriadis et al., 2017). Reproducing these methods is not straightforward and outside the scope of this work. To enable a fair comparison, all methods are given access to a predefined amount of input/output data for initialization. Details about the benchmark methods, their configuration, as well as the benchmark datasets can be found in the supplementary material.
The benchmark results are summarized in Tab. 1. A detailed visualization of the resulting model predictions on the Drives dataset is shown in Fig. 4. For the one-step-ahead models (GPNARX, NIGP), two variants are used to obtain long-term predictive distributions: propagating the mean (no uncertainty propagation) and approximating the true posterior by a Gaussian using exact moment matching (Girard et al., 2003). The results show that PRSSM consistently outperforms the SSGPSSM learning method. Similarly, performance is improved in comparison to the baseline methods (GPNARX and NIGP). Within the ensemble of models based on long-term optimized autoregressive structure (REVARB, MSGP), no method is clearly superior. However, the performance of PRSSM is in all cases close to that of the best-performing method. Note that PRSSM demonstrates robust model learning performance and does not exhibit severe failure on any dataset, as some of the contestants do.
6.3 Large Scale Experiment
To evaluate scalability, results are provided for the forward dynamics model of the SARCOS 7 degree-of-freedom robotic arm. The task is characterized by 60 experiments of length 337 (approximately 20,000 data points), 7 input, and 7 output dimensions. PRSSM is set up with a fixed latent state dimensionality. From the set of reference methods, only the GPNARX model could be adapted to run efficiently on this dataset without major effort. Details about the task and method configuration are given in the supplementary material. A visualization of the model predictions is shown in Fig. 5 and prediction RMSEs are listed in Tab. 1. The results show that PRSSM is able to robustly and accurately learn the forward dynamics for all system outputs from all experimental data. In contrast, the GPNARX baseline achieves worse predictions and fails to predict the remaining five joints (not shown).

7 Conclusion
In this work, we presented the Probabilistic Recurrent State-Space Model (PRSSM), a novel model structure and efficient inference scheme for learning probabilistic, Markovian state-space models. Based on GP priors and doubly stochastic variational inference, a novel model optimization criterion is derived, which is closely related to that of powerful, but deterministic, RNNs and LSTMs. By maintaining the true latent state distribution and thereby enabling long-term gradients, efficient inference in latent spaces becomes feasible. Furthermore, a novel recognition model enables learning of unstable or slow dynamics as well as scalability to large datasets. Robustness, scalability, and high performance in model learning are demonstrated on real-world datasets in comparison to state-of-the-art methods.
A limitation of PRSSM is its dependency on an a priori fixed latent state dimensionality. This shortcoming could potentially be resolved by a sparsity-enforcing latent state prior, which would suppress unnecessary latent state dimensions.
Acknowledgements
This research was supported in part by National Science Foundation grants IIS-1205249, IIS-1017134, EECS-0926052, the Office of Naval Research, the Okawa Foundation, and the Max Planck Society.
References
 Billings (2013) Billings, Stephen A. Nonlinear system identification: NARMAX methods in the time, frequency, and spatiotemporal domains. John Wiley & Sons, 2013.
 Blei et al. (2017) Blei, David M, Kucukelbir, Alp, and McAuliffe, Jon D. Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518):859–877, 2017.
 Camacho & Alba (2013) Camacho, Eduardo F and Alba, Carlos Bordons. Model predictive control. Springer Science & Business Media, 2013.

 Deisenroth & Rasmussen (2011) Deisenroth, Marc P and Rasmussen, Carl Edward. PILCO: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on Machine Learning (ICML), pp. 465–472, 2011.
 Deisenroth et al. (2012) Deisenroth, Marc Peter, Turner, Ryan Darby, Huber, Marco F, Hanebeck, Uwe D, and Rasmussen, Carl Edward. Robust filtering and smoothing with Gaussian processes. IEEE Transactions on Automatic Control, 57(7):1865–1871, 2012.
 Doerr et al. (2017a) Doerr, Andreas, Daniel, Christian, Nguyen-Tuong, Duy, Marco, Alonso, Schaal, Stefan, Toussaint, Marc, and Trimpe, Sebastian. Optimizing long-term predictions for model-based policy search. In Conference on Robot Learning (CORL), pp. 227–238, 2017a.
 Doerr et al. (2017b) Doerr, Andreas, Nguyen-Tuong, Duy, Marco, Alonso, Schaal, Stefan, and Trimpe, Sebastian. Model-based policy search for automatic tuning of multivariate PID controllers. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2017b.
 Eleftheriadis et al. (2017) Eleftheriadis, Stefanos, Nicholson, Tom, Deisenroth, Marc, and Hensman, James. Identification of Gaussian process state space models. In Advances in Neural Information Processing Systems (NIPS), pp. 5315–5325, 2017.
 Föll et al. (2017) Föll, Roman, Haasdonk, Bernard, Hanselmann, Markus, and Ulmer, Holger. Deep recurrent Gaussian process with variational sparse spectrum approximation. arXiv preprint arXiv:1711.00799, 2017.
 Frigola et al. (2013) Frigola, Roger, Lindsten, Fredrik, Schön, Thomas B, and Rasmussen, Carl E. Bayesian inference and learning in Gaussian process statespace models with particle MCMC. In Advances in Neural Information Processing Systems (NIPS), pp. 3156–3164, 2013.
 Frigola et al. (2014) Frigola, Roger, Chen, Yutian, and Rasmussen, Carl Edward. Variational Gaussian process statespace models. In Advances in Neural Information Processing Systems (NIPS), pp. 3680–3688, 2014.
 Frigola-Alcade (2015) Frigola-Alcade, Roger. Bayesian time series learning with Gaussian processes. PhD thesis, University of Cambridge, 2015.
 Girard et al. (2003) Girard, Agathe, Rasmussen, Carl Edward, Quinonero-Candela, J, Murray-Smith, R, Winther, O, and Larsen, J. Multiple-step ahead prediction for nonlinear dynamic systems—a Gaussian process treatment with propagation of the uncertainty. In Advances in Neural Information Processing Systems (NIPS), volume 15, pp. 529–536, 2003.
 Hensman et al. (2013) Hensman, James, Fusi, Nicolo, and Lawrence, Neil D. Gaussian processes for big data. arXiv preprint arXiv:1309.6835, 2013.
 Hochreiter & Schmidhuber (1997) Hochreiter, Sepp and Schmidhuber, Jürgen. LSTM can solve hard long time lag problems. In Advances in Neural Information Processing Systems (NIPS), pp. 473–479, 1997.
 Kalman (1960) Kalman, R.E. A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82(1):35–45, 1960.
 Kingma & Welling (2013) Kingma, Diederik P and Welling, Max. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
 Ko & Fox (2009) Ko, Jonathan and Fox, Dieter. GP-BayesFilters: Bayesian filtering using Gaussian process prediction and observation models. Autonomous Robots, 27(1):75–90, 2009.
 Kocijan et al. (2005) Kocijan, Juš, Girard, Agathe, Banko, Blaž, and Murray-Smith, Roderick. Dynamic systems identification with Gaussian processes. Mathematical and Computer Modelling of Dynamical Systems, 11(4):411–424, 2005.
 Likar & Kocijan (2007) Likar, Bojan and Kocijan, Juš. Predictive control of a gas–liquid separation plant based on a Gaussian process model. Computers & chemical engineering, 31(3):142–152, 2007.
 Littman & Sutton (2002) Littman, Michael L and Sutton, Richard S. Predictive representations of state. In Advances in Neural Information Processing Systems (NIPS), pp. 1555–1561, 2002.
 Ljung (1998) Ljung, Lennart. System identification. In Signal analysis and prediction, pp. 163–173. Springer, 1998.
 Ljung (2010) Ljung, Lennart. Perspectives on system identification. Annual Reviews in Control, 34(1):1–12, 2010.
 Mattos et al. (2015) Mattos, César Lincoln C, Dai, Zhenwen, Damianou, Andreas, Forth, Jeremy, Barreto, Guilherme A, and Lawrence, Neil D. Recurrent Gaussian processes. arXiv preprint arXiv:1511.06644, 2015.
 Mattos et al. (2016) Mattos, César Lincoln C, Damianou, Andreas, Barreto, Guilherme A, and Lawrence, Neil D. Latent autoregressive Gaussian processes models for robust system identification. IFAC-PapersOnLine, 49(7):1121–1126, 2016.
 McHutchon & Rasmussen (2011) McHutchon, Andrew and Rasmussen, Carl Edward. Gaussian process training with input noise. In Advances in Neural Information Processing Systems (NIPS), pp. 1341–1349, 2011.
 Moor (2017) De Moor, Bart. DaISy: Database for the identification of systems. http://homes.esat.kuleuven.be/~smc/daisy/, 2017. [Online; accessed 30-Jan-2017].
 Murray-Smith & Girard (2001) Murray-Smith, Roderick and Girard, Agathe. Gaussian process priors with ARMA noise models. In Irish Signals and Systems Conference, Maynooth, pp. 147–152, 2001.
 Narendra & Parthasarathy (1990) Narendra, Kumpati S and Parthasarathy, Kannan. Identification and control of dynamical systems using neural networks. IEEE Transactions on Neural Networks, 1(1):4–27, 1990.
 Nørgaard (2000) Nørgaard, Magnus. Hydraulic actuator dataset. http://www.iau.dtu.dk/nnbook/systems.html, 2000. [Online; accessed 30-Jan-2017].
 Pascanu et al. (2013) Pascanu, Razvan, Mikolov, Tomas, and Bengio, Yoshua. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning (ICML), pp. 1310–1318, 2013.
 Rudary & Singh (2004) Rudary, Matthew R and Singh, Satinder P. A nonlinear predictive state representation. In Advances in Neural Information Processing Systems (NIPS), pp. 855–862, 2004.
 Salimbeni & Deisenroth (2017) Salimbeni, Hugh and Deisenroth, Marc. Doubly stochastic variational inference for deep Gaussian processes. In Advances in Neural Information Processing Systems (NIPS), pp. 4591–4602, 2017.
 Singh et al. (2003) Singh, Satinder P, Littman, Michael L, Jong, Nicholas K, Pardoe, David, and Stone, Peter. Learning predictive state representations. In Proceedings of the 20th International Conference on Machine Learning (ICML), pp. 712–719, 2003.
 Snelson & Ghahramani (2006) Snelson, Edward and Ghahramani, Zoubin. Sparse Gaussian processes using pseudo-inputs. In Advances in Neural Information Processing Systems (NIPS), volume 18, pp. 1257, 2006.
 Sutskever et al. (2011) Sutskever, Ilya, Martens, James, and Hinton, Geoffrey E. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML), pp. 1017–1024, 2011.
 Svensson & Schön (2017) Svensson, Andreas and Schön, Thomas B. A flexible state–space model for learning nonlinear dynamical systems. Automatica, 80:189–199, 2017.

 Titsias (2009) Titsias, Michalis K. Variational learning of inducing variables in sparse Gaussian processes. In Proceedings of the 12th International Conference on Artificial Intelligence and Statistics (AISTATS), volume 5, pp. 567–574, 2009.
 Turner et al. (2010) Turner, Ryan, Deisenroth, Marc, and Rasmussen, Carl. State-space inference and learning with Gaussian processes. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 868–875, 2010.
 Van Overschee & De Moor (2012) Van Overschee, Peter and De Moor, Bart L. Subspace identification for linear systems: Theory—Implementation—Applications. Springer Science & Business Media, 2012.
 Venugopalan et al. (2014) Venugopalan, Subhashini, Xu, Huijuan, Donahue, Jeff, Rohrbach, Marcus, Mooney, Raymond, and Saenko, Kate. Translating videos to natural language using deep recurrent neural networks. arXiv preprint arXiv:1412.4729, 2014.
 Vijayakumar & Schaal (2000) Vijayakumar, Sethu and Schaal, Stefan. LWPR: An O(n) algorithm for incremental real time learning in high dimensional space. In Proceedings of the 17th International Conference on Machine Learning (ICML), 2000.
 Wang et al. (2008) Wang, Jack M, Fleet, David J, and Hertzmann, Aaron. Gaussian process dynamical models for human motion. IEEE transactions on pattern analysis and machine intelligence, 30(2):283–298, 2008.
 Wigren (2010) Wigren, Torbjörn. Input-output data sets for development and benchmarking in nonlinear identification. Technical report, Department of Information Technology, Uppsala University, 2010.
 Williams & Rasmussen (2005) Williams, Christopher KI and Rasmussen, Carl Edward. Gaussian processes for machine learning. MIT Press, 2005.
Appendix A Probabilistic Recurrent State-Space Model
A.1 Evidence Lower Bound (ELBO)
Summarizing the model assumptions from the main paper, and suppressing the control inputs $u_{1:T}$ in the conditioning for brevity, the model’s joint distribution is given by
\[
p(y_{1:T}, x_{1:T}, f_{2:T}, z) = \prod_{t=1}^{T} p(y_t \mid x_t) \prod_{t=2}^{T} p(x_t \mid f_t)\, p(f_t \mid x_{t-1}, z)\; p(x_1)\, p(z), \quad (22)
\]
where $z$ denotes the GP’s inducing outputs. The variational distribution over the unknown model variables is defined as
\[
q(x_{1:T}, f_{2:T}, z) = q(x_1) \prod_{t=2}^{T} p(x_t \mid f_t)\, p(f_t \mid x_{t-1}, z)\; q(z). \quad (23)
\]
Together, the derivation of the ELBO is given below in (24) to (29).
\begin{align}
\log p(y_{1:T}) &= \log \int p(y_{1:T}, x_{1:T}, f_{2:T}, z)\, \mathrm{d}x_{1:T}\, \mathrm{d}f_{2:T}\, \mathrm{d}z &(24) \\
&= \log \mathbb{E}_{q}\!\left[ \frac{p(y_{1:T}, x_{1:T}, f_{2:T}, z)}{q(x_{1:T}, f_{2:T}, z)} \right] &(25) \\
&\geq \mathbb{E}_{q}\!\left[ \log \frac{p(y_{1:T}, x_{1:T}, f_{2:T}, z)}{q(x_{1:T}, f_{2:T}, z)} \right] &(26) \\
&= \mathbb{E}_{q}\!\left[ \log \frac{\prod_{t=1}^{T} p(y_t \mid x_t)\, p(x_1)\, p(z)}{q(x_1)\, q(z)} \right] &(27) \\
&= \sum_{t=1}^{T} \mathbb{E}_{q(x_t)}\!\left[ \log p(y_t \mid x_t) \right] + \mathbb{E}_{q(z)}\!\left[ \log \frac{p(z)}{q(z)} \right] + \mathbb{E}_{q(x_1)}\!\left[ \log \frac{p(x_1)}{q(x_1)} \right] &(28) \\
&= \sum_{t=1}^{T} \mathbb{E}_{q(x_t)}\!\left[ \log p(y_t \mid x_t) \right] - \mathrm{KL}\!\left( q(z) \,\|\, p(z) \right) - \mathrm{KL}\!\left( q(x_1) \,\|\, p(x_1) \right). &(29)
\end{align}
Jensen’s inequality is applied in (26); in (27), the transition terms $p(x_t \mid f_t)\, p(f_t \mid x_{t-1}, z)$, which are shared between the joint and the variational distribution, cancel.
In the ELBO, as derived in (29), the last term is a regularization on the initial state distribution. For the full gradient-based optimization in the main paper, an uninformative initial state distribution is chosen and fixed, such that this third term can be dropped. In the stochastic optimization scheme, the term acts as a regularizer that prevents the recognition model from becoming overconfident in its predictions.
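For illustration, the structure of the final bound (29) can be sketched in code: a sampled data-fit term minus the two Gaussian KL regularizers. This is a hedged sketch, not the authors' implementation; all function and variable names are hypothetical, and the distributions are assumed to be diagonal Gaussians.

```python
import numpy as np

def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    """KL(q || p) between diagonal Gaussians, summed over dimensions."""
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

def sampled_elbo(log_liks, kl_inducing, mu_q1, var_q1, mu_p1, var_p1):
    """ELBO structure as in (29): sampled data fit minus the KL terms on
    the inducing outputs and on the initial state distribution.

    log_liks: array of shape (num_samples, T) with per-sample, per-step
    observation log-likelihoods obtained by unrolling the model."""
    data_fit = np.sum(np.mean(log_liks, axis=0))  # mean over samples, sum over time
    kl_x1 = kl_diag_gaussians(mu_q1, var_q1, mu_p1, var_p1)
    return data_fit - kl_inducing - kl_x1
```

Choosing a fixed, uninformative prior `p(x_1)` makes the last KL term a constant that can be dropped, mirroring the full-gradient setup described above.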
A.2 Model Configuration
The PRSSM exhibits a large number of model (hyper-)parameters which need to be initialized. Empirically, however, most of these parameters can be set to a default configuration as given in Tab. 2. This default configuration has been employed for all benchmark experiments presented in the main paper.
Parameter  Initialization
Inducing inputs
Inducing outputs
Process noise
Sensor noise
Kernel hyperparameters

Parameter  Initialization
Number of inducing points
Number of state samples
Subtrajectory length
Latent state dimensionality
The PRSSM’s latent state dynamics model and noise models are configured to initially exhibit random walk behavior. This behavior is clearly visible in the prediction of the untrained model in Fig. 2 of the main paper. The GP prior approximates the identity function by combining an identity mean function with almost-zero inducing outputs (up to a small Gaussian noise term to avoid singularities). The inducing inputs are spread uniformly over the function’s domain. The noise processes are initialized to achieve high correlations between latent states over time (i.e., a small process noise magnitude). At the same time, a larger observation noise is required to obtain an inflation of predictive uncertainty over time; this inflation is again clearly visible in Fig. 2 of the main paper. Both noise terms are chosen such that numerically stable gradients are obtained for both the sample-based log-likelihood and the backpropagation through time in the ELBO evaluation.
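The initialization described above can be sketched as follows. This is only an illustrative sketch: the concrete numbers (jitter magnitude, noise scales, domain bounds) and all names are placeholders, not the values used in the paper.

```python
import numpy as np

def init_dynamics_model(dim_x, dim_u, num_inducing=20, seed=0):
    """Sketch of the default initialization: an identity-mean GP whose
    correction starts near zero, i.e. an initial random walk in the
    latent state."""
    rng = np.random.default_rng(seed)
    dim_in = dim_x + dim_u
    # Inducing inputs spread uniformly over the (normalized) input domain.
    inducing_in = rng.uniform(-2.0, 2.0, size=(num_inducing, dim_in))
    # Inducing outputs almost zero; a small jitter avoids singular matrices.
    inducing_out = 1e-3 * rng.standard_normal((num_inducing, dim_x))
    # Small process noise: strong temporal correlation of the latent states.
    process_noise_std = 0.01 * np.ones(dim_x)
    # Larger observation noise: predictive uncertainty inflates over time.
    observation_noise_std = 0.1 * np.ones(dim_x)
    return inducing_in, inducing_out, process_noise_std, observation_noise_std
```

With near-zero inducing outputs, the posterior GP mean stays close to the identity mean function, so the unrolled model initially predicts a random walk, as in Fig. 2 of the main paper.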
The number of samples used in the ELBO approximation, the number of inducing points in the GP approximation, and the batch size are, in contrast, a trade-off between model accuracy and computational speed. The proposed default configuration empirically showed good performance whilst remaining computationally tractable.
Two tuning parameters remain which are problem-specific and have to be chosen for each dataset individually. Depending on the true system’s timescales/sampling frequency and system order, the length of subtrajectories for minibatching and the latent state dimensionality have to be specified manually. For the benchmark datasets we choose and .
Appendix B Model Learning Benchmark Details
In the main paper, the proposed PRSSM’s long-term predictive performance is compared to several state-of-the-art methods. The benchmark is set up similarly to the evaluation presented in (Doerr et al., 2017a). Details about the individual benchmark methods, their configuration, and the employed datasets can be found in the following sections. Minor adjustments with respect to the setup in (Doerr et al., 2017a), introduced to enable a fair comparison between all benchmark methods, are pointed out below.
Dataset  Train length  Test length  GPNARX history
Actuator (Nørgaard, 2000)  512  512  10
Ballbeam (Moor, 2017)  500  500  10
Drives (Wigren, 2010)  250  250  10
Furnace (Moor, 2017)  148  148  3
Dryer (Moor, 2017)  500  500  2
B.1 Benchmark Methods
The proposed PRSSM is evaluated in comparison to methods from three classes: one-step-ahead autoregressive models (GPNARX, NIGP), multi-step-ahead autoregressive models in latent space (REVARB, MSGP), and Markovian state-space models (SSGPSSM). To enable a fair comparison, all methods have access to a predefined amount of input/output data for initialization.
(i) GPNARX (Kocijan et al., 2005): The system dynamics is modeled as with a GP prior on . The GP has a zero mean function and a squared exponential kernel with automatic relevance determination. The kernel hyperparameters (signal variance and lengthscales) are optimized based on the standard maximum likelihood objective. A sparse approximation (Snelson & Ghahramani, 2006) based on 100 inducing inputs is employed. Moment matching (Girard et al., 2003) is used to obtain the long-term predictive distribution.
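As an illustration of the autoregressive setup (function and variable names are hypothetical, not from the paper's code), the GP training inputs and targets for a NARX model with history length `hist` can be assembled as:

```python
import numpy as np

def narx_data(y, u, hist):
    """Build NARX regressors [y_{t-1..t-hist}, u_{t-1..t-hist}] -> y_t
    from one-dimensional output series y and input series u."""
    X, Y = [], []
    for t in range(hist, len(y)):
        # Most recent history first; the ordering is a convention choice.
        X.append(np.concatenate([y[t - hist:t][::-1], u[t - hist:t][::-1]]))
        Y.append(y[t])
    return np.asarray(X), np.asarray(Y)
```

The resulting (X, Y) pairs are what the sparse GP regression with ARD kernel is fitted to; long-term predictions then feed the model's own outputs back into the history.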
(ii) NIGP (McHutchon & Rasmussen, 2011): Noisy Input GPs (NIGP) account for uncertainty in the inputs by treating input points as deterministic and inflating the corresponding output uncertainty, leading to state-dependent noise, i.e. heteroscedastic GPs. The experimental results are based on the publicly available Matlab code. Since no sparse version is available, training is performed on the full training dataset; this is not feasible for larger datasets and provides an advantage to NIGP. Experiments based on a random data subset of size 100 led to decreased performance, on the order of the GPNARX results or worse.
(iii) REVARB (Mattos et al., 2015): Recurrent Variational Bayes (REVARB) is a recent approach that optimizes a variational lower bound to the log-marginal likelihood. This framework is based on the variational sparse GP framework (Titsias, 2009), but allows for (time-)recurrent GP structures and deep GP structures (stacking multiple GP layers in each timestep). For our benchmark, we run REVARB with one (REVARB1) or two (REVARB2) hidden layers, where each layer is provided with 100 inducing inputs. We closely follow the original setup presented by Mattos et al. (2015), performing 50 initial optimization steps with fixed variances and up to 10000 steps with variable variances. Unlike for the other benchmark methods, the autoregressive history of REVARB implicitly becomes longer when additional hidden layers are introduced.
(iv) MSGP (Doerr et al., 2017a): MSGP is a GPNARX model operating on a latent, noise-free state, which is trained by optimizing its long-term predictions. The experimental results are obtained according to the configuration described in (Doerr et al., 2017a), again using 100 inducing points and moment matching.
(v) SSGPSSM (Svensson & Schön, 2017): The Sparse-Spectrum GP-SSM employs a sparse spectrum GP approximation to model the system’s transition dynamics in a Markovian latent space. The available Matlab implementation is restricted to a 2D latent space. In the experimental results, a default configuration is employed, with variables defined as in the code published for (Svensson & Schön, 2017).
B.2 Benchmark Datasets
The benchmark datasets are composed of popular system identification datasets from related work (Narendra & Parthasarathy, 1990; Kocijan et al., 2005; Mattos et al., 2016). They comprise measured input/output data from technical systems such as hydraulic actuators, furnaces, hair dryers, and electrical motors. For all of these problems, both inputs and outputs are one-dimensional. However, the system’s true state is higher-dimensional, such that an autoregressive history or an explicit latent state representation is required to capture the relevant dynamics. The number of historic inputs and outputs for the autoregressive methods is fixed a priori for each dataset, as previously used in other publications. For model training, the datasets are normalized to zero mean and unit variance based on the available training data. References to the individual datasets, the training and test trajectory lengths, and the history utilized by the GPNARX models are summarized in Tab. 4.
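The normalization step can be sketched as follows (function names are hypothetical); note that the test set is scaled with the statistics of the training data only, so no test information leaks into model training:

```python
import numpy as np

def normalize_from_train(train, test):
    """Zero mean, unit variance, with statistics taken from training data only."""
    mu = train.mean(axis=0)
    sigma = train.std(axis=0)
    return (train - mu) / sigma, (test - mu) / sigma
```

Predictions made in the normalized space are mapped back via `y * sigma + mu` before computing unnormalized error metrics.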
Appendix C Additional Results
C.1 Optimization Schemes Comparison
In Fig. 6, the RMSE and the negative log-likelihood obtained for the model’s long-term prediction are depicted over learning iterations for the training (solid lines) and test (dotted lines) sets of the Drives dataset. The full-gradient optimization (blue) attains a smaller training loss than the stochastic optimization scheme in terms of both RMSE and negative log-likelihood. The test performance, however, is similar in terms of RMSE, whilst the full-gradient model clearly overfits in terms of log-likelihood. In addition, optimizing based on the full gradient is much more delicate and less robust, as indicated by the spikes in the loss and the higher variance of the optimization progress. Fig. 6 depicts the mean (lines) and minimum-to-maximum intervals (shaded areas) of the incurred loss over five independent model trainings.
C.2 Detailed Benchmark Results
One-step-ahead autoregressive  Multi-step-ahead autoregressive in latent space  Markovian state-space models
Data unnormalized + Mean prediction  Default configuration  
Task  GPNARX  NIGP  REVARB 1  REVARB 2  MSGP  SSGPSSM  PRSSM 
Actuator  0.645 (0.018)  0.752 (0)  0.496 (0.057)  0.565 (0.047)  0.771 (0.098)  0.696 (0.034)  0.502 (0.031) 
Ballbeam  0.169 (0.005)  0.165 (0)  0.138 (0.001)  0.073 (0.000)  0.124 (0.034)  411.550 (273.043)  0.073 (0.007) 
Drives  0.579 (0.004)  0.378 (0)  0.718 (0.081)  0.282 (0.031)  0.451 (0.021)  0.718 (0.009)  0.492 (0.038) 
Furnace  1.199 (0.001)  1.195 (0)  1.210 (0.034)  1.945 (0.016)  1.277 (0.127)  1.318 (0.027)  1.249 (0.029) 
Dryer  0.278 (0.003)  0.281 (0)  0.149 (0.017)  0.128 (0.001)  0.146 (0.004)  0.152 (0.006)  0.140 (0.018) 
Data unnormalized + Moment matching  Default configuration  
Task  GPNARX  NIGP  REVARB 1  REVARB 2  MSGP  SSGPSSM  PRSSM 
Actuator  0.633 (0.018)  0.601 (0)  0.430 (0.026)  0.618 (0.047)  0.771 (0.098)  0.696 (0.034)  0.502 (0.031) 
Ballbeam  0.077 (0.000)  0.078 (0)  0.131 (0.005)  0.073 (0.000)  0.124 (0.034)  411.550 (273.043)  0.073 (0.007) 
Drives  0.688 (0.003)  0.398 (0)  0.801 (0.032)  0.733 (0.087)  0.451 (0.021)  0.718 (0.009)  0.492 (0.038) 
Furnace  1.198 (0.002)  1.195 (0)  1.192 (0.002)  1.947 (0.032)  1.277 (0.127)  1.318 (0.027)  1.249 (0.029) 
Dryer  0.284 (0.003)  0.280 (0)  0.878 (0.016)  0.123 (0.002)  0.146 (0.004)  0.152 (0.006)  0.140 (0.018) 
Data normalization + Mean prediction  Default configuration  
Task  GPNARX  NIGP  REVARB 1  REVARB 2  MSGP  SSGPSSM  PRSSM 
Actuator  0.665 (0.014)  0.791 (0)  0.506 (0.092)  0.559 (0.069)  0.771 (0.098)  0.696 (0.034)  0.502 (0.031) 
Ballbeam  0.357 (0.199)  0.154 (0)  0.141 (0.004)  0.206 (0.008)  0.124 (0.034)  411.550 (273.043)  0.073 (0.007) 
Drives  0.564 (0.029)  0.369 (0)  0.605 (0.027)  0.376 (0.026)  0.451 (0.021)  0.718 (0.009)  0.492 (0.038) 
Furnace  1.201 (0.001)  1.205 (0)  1.196 (0.002)  1.189 (0.001)  1.277 (0.127)  1.318 (0.027)  1.249 (0.029) 
Dryer  0.282 (0.001)  0.269 (0)  0.123 (0.001)  0.113 (0)  0.146 (0.004)  0.152 (0.006)  0.140 (0.018) 
Data normalization + Moment matching  Default configuration  
Task  GPNARX  NIGP  REVARB 1  REVARB 2  MSGP  SSGPSSM  PRSSM 
Actuator  0.627 (0.005)  0.599 (0)  0.438 (0.049)  0.613 (0.190)  0.771 (0.098)  0.696 (0.034)  0.502 (0.031) 
Ballbeam  0.284 (0.222)  0.087 (0)  0.139 (0.007)  0.209 (0.012)  0.124 (0.034)  411.550 (273.043)  0.073 (0.007) 
Drives  0.701 (0.015)  0.373 (0)  0.828 (0.025)  0.868 (0.113)  0.451 (0.021)  0.718 (0.009)  0.492 (0.038) 
Furnace  1.201 (0.000)  1.205 (0)  1.195 (0.002)  1.188 (0.001)  1.277 (0.127)  1.318 (0.027)  1.249 (0.029) 
Dryer  0.310 (0.044)  0.268 (0)  0.851 (0.011)  0.355 (0.027)  0.146 (0.004)  0.152 (0.006)  0.140 (0.018) 
In Tab. 5, detailed results are provided for the benchmark experiments. The reference learning methods in the presented benchmark are highly sensitive to changes in the data preprocessing and the long-term prediction method. Therefore, results are detailed for GPNARX, NIGP, REVARB 1, and REVARB 2 for all combinations of normalized/unnormalized training data and mean or moment-matching predictions. The results for MSGP, SSGPSSM, and PRSSM are always computed on the normalized datasets, using the method-specific propagation-of-uncertainty schemes.
Obtaining uncertainty estimates is a key requirement for employing the long-term predictions, e.g. in model-based control. Therefore, only the predictive results based on the approximate propagation of uncertainty through moment matching are considered in the main paper, although better RMSE results are sometimes obtained when employing only the mean predictions. A comparison of the predictive results based on mean and moment-matching predictions on the Drives dataset is shown in Fig. 5. The results for the unnormalized datasets with moment matching are in line with the results published in (Doerr et al., 2017a).
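For reference, the two reported metrics can be computed as below (a generic sketch; the paper's exact evaluation code is not available). Mean predictions only yield an RMSE, whereas moment matching additionally provides the predictive variance that enters the negative log-likelihood.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error of the long-term prediction."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def gaussian_nll(y_true, mean, var):
    """Average negative log-likelihood under a Gaussian predictive
    distribution with per-step mean and variance."""
    return 0.5 * np.mean(np.log(2.0 * np.pi * var) + (y_true - mean) ** 2 / var)
```

An overconfident model (variance too small for its errors) is penalized heavily by the NLL even when its RMSE looks competitive, which is the overfitting effect visible for the full-gradient scheme in Fig. 6.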
C.3 Large Scale Experiment Details
The Sarcos task is based on a publicly available dataset comprising joint positions, velocities, accelerations, and torques of a seven degrees-of-freedom SARCOS anthropomorphic robot arm. This dataset has previously been used in (Vijayakumar & Schaal, 2000; Williams & Rasmussen, 2005) for the task of learning the system’s inverse dynamics, i.e. mapping joint positions, velocities, and accelerations to the required joint torques. This task can be framed as a standard regression problem, which is solved in a supervised fashion. In contrast, in this paper, we consider the task of learning the forward dynamics, i.e. predicting the joint positions given a sequence of joint torques. The system output is therefore given by the seven joint positions. Joint velocities and accelerations, as latent states, are not available for learning but have to be inferred. The system input is given by the seven joint torques.
The original training dataset (44,484 data points), recorded at 100 Hz, has been downsampled to 50 Hz. It is split into 66 independent experiments, as indicated by the discontinuities in the original time-series data. Six of the 66 experiments are used for testing, whilst the remaining 60 experiments are used for training. None of the reference methods from the model learning benchmark is applicable out of the box to this large-scale dataset. To obtain a baseline, the sparse GPNARX model has been trained on a subset of the training experiments (400 inducing points, approx. 2000 training data points). PRSSM can be trained directly on the full training dataset by utilizing its stochastic, minibatched optimization scheme. PRSSM is set up similarly to the configuration described in the benchmark experiments, but with a 14-dimensional latent state. Long-term prediction results on one of the test experiments are visualized in Fig. 7. PRSSM robustly predicts the robot arm motions for all joints and clearly improves over the GPNARX baseline, which cannot predict the dynamics on 5 out of 7 joints.
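The preprocessing of the raw recording can be sketched as follows. The discontinuity indices passed in are placeholders; the actual split points are data-dependent, and the function names are hypothetical.

```python
import numpy as np

def preprocess(data, factor=2, break_indices=()):
    """Downsample by keeping every `factor`-th sample (100 Hz -> 50 Hz for
    factor=2), then split the recording into independent experiments at
    the given discontinuity indices (in downsampled coordinates)."""
    downsampled = data[::factor]
    return np.split(downsampled, list(break_indices))
```

Each resulting segment is then treated as an independent experiment, from which the minibatched optimization draws subtrajectories without crossing discontinuities.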