1 Introduction
Modern machine learning methods have demonstrated state-of-the-art performance in representing complex functions in a variety of applications. Nevertheless, the translation of complex learning methods to the natural sciences and the clinical domain is still challenged by the need for interpretable solutions. To this end, several approaches have been proposed to constrain the solution dynamics to plausible forms, such as boundedness
(Da Veiga & Marrel, 2012), monotonicity (Riihimäki & Vehtari, 2010), or mechanistic behaviors (Alvarez et al., 2013). This is a crucial requirement to provide a more precise and realistic description of natural phenomena. For example, monotonicity of the interpolating function is a common assumption when modeling disease progression in neurodegenerative diseases
(Lorenzi et al., 2017; Donohue et al., 2014), while biophysical or mechanistic models are necessary when analyzing and simulating experimental data in bioengineering (Vyshemirsky & Girolami, 2007; Konukoglu et al., 2011). However, accounting for the complex properties of biological systems in data-driven modeling approaches poses important challenges. For example, functions are often non-smooth and characterized by non-stationarities which are difficult to encode in “shallow” models. Complex cases can already arise in classical ODE systems for certain configurations of the parameters, where functions can exhibit sudden temporal changes (Goel et al., 1971; FitzHugh, 1955)
. Within this context, approaches based on stationary models, even when relaxing the smoothness assumptions, may lead to suboptimal results both for data modeling (interpolation) and for the estimation of the dynamics parameters. To provide an insightful illustration of this problem, we anticipate the results of Section 4.4.1 and Figure
5. Moreover, the application to real data requires accounting for the uncertainty of measurements and underlying model parameters, as well as for the (often large) dimensionality characterizing the experimental data. Within this context, deep probabilistic approaches may represent a promising modeling tool, as they combine the flexibility of deep models with a systematic way to reason about uncertainty in model parameters and predictions. The flexibility of these approaches stems from the fact that deep models implement compositions of functions, which considerably extend the complexity of signals that can be represented with “shallow” models (LeCun et al., 2015). Meanwhile, their probabilistic formulation introduces a principled approach to quantifying uncertainty in parameter estimation and predictions, as well as to model selection (Neal, 1996; Ghahramani, 2015).

In this work, we aim at extending deep probabilistic models to account for constraints on their dynamics. In particular, we focus on a general and flexible formulation capable of imposing a rich set of constraints on functions and derivatives of any order. We focus on: i) equality constraints on the function and its derivatives, required when the model should satisfy given physical laws implemented through a mechanistic description of a system of interest; and ii) inequality constraints, arising in problems where the class of suitable functions is characterized by specific properties, such as monotonicity or convexity/concavity (Riihimäki & Vehtari, 2010).
In the case of equality constraints, we tackle the challenge of parameter inference in Ordinary Differential Equations (ODEs). Exact parameter inference for ODE models is computationally expensive due to the need for repeatedly solving ODEs within the Bayesian setting. To this end, previous works attempted to recover tractability by introducing approximate solutions of ODEs (see, e.g., Macdonald & Husmeier (2015) for a review). Following these ideas, we introduce “soft” constraints through a probabilistic formulation that penalizes functions violating the ODE on a set of virtual inputs. Note that this is in contrast to previous approaches, such as those proposed with probabilistic ODE solvers (Wheeler et al., 2014; Schober et al., 2014), where a given dynamics is strictly enforced in the model posterior. By deriving a lower bound on the model evidence, we enable the use of stochastic variational inference to achieve end-to-end posterior inference over model and constraint parameters.
In what follows we shall focus on a class of deep probabilistic models implementing a composition of Gaussian processes (GPs) (Rasmussen & Williams, 2006) into Deep Gaussian Processes (DGPs) (Damianou & Lawrence, 2013)
. More generally, our formulation can be straightforwardly extended to probabilistic Deep Neural Networks (DNNs) (Neal, 1996). On the practical side, our formulation allows us to take advantage of automatic differentiation tools, leading to flexible and easy-to-implement methods for inference in constrained deep probabilistic models. As a result, our method scales linearly with the number of observations and constraints. Furthermore, in the case of mean-field variational inference, it also scales linearly with the number of parameters in the constraints. Finally, it can easily be parallelized/distributed and can exploit GPU computing.

Through an in-depth series of experiments, we demonstrate that our proposal achieves state-of-the-art performance in a number of constrained modeling problems while enjoying attractive scalability properties. The paper is organized as follows: Section 2 reports on related work, whereas the core of the methodology is presented in Section 3. Section 4 contains an in-depth validation of the proposed model against the state-of-the-art. We demonstrate the application of equality constraints to the challenging problem of parameter inference in ODEs, and showcase the application of inequality constraints in the monotonic regression of count data. Additional insights and conclusions are given in Section 5. Results that could not fit in the manuscript are deferred to the supplementary material.
2 Related Work
Equality constraints, where functions are enforced to model the solution of ODE systems, have been considered in a variety of problems, particularly in the challenging task of accelerating the inference of ODE parameters. Previous approaches to accelerating ODE parameter optimization through interpolation date back to Varah (1982). This idea has been developed in several directions, including splines, GPs, and Reproducing Kernel Hilbert Spaces. Works that employ GPs as interpolants have been proposed in Ramsay et al. (2007), Liang & Wu (2008), Calderhead et al. (2009), and Campbell & Steele (2012). Such approaches have been extended with novel formulations that regularize the interpolant based on the ODE system, notably Dondelinger et al. (2013) and Barber & Wang (2014); an in-depth analysis of the model in Barber & Wang (2014) is provided by Macdonald et al. (2015). Recently, Gorbach et al. (2017) extended previous works by proposing mean-field variational inference to obtain an approximate posterior over ODE parameters. Our work improves on previous approaches by considering a more general class of interpolants than “shallow” GPs, and proposes a scalable framework for inferring the family of interpolating functions jointly with the parameters of the constraint, namely the ODE parameters.
Another line of research building on gradient matching uses a Reproducing Kernel Hilbert Space formulation. For example, González et al. (2014) propose to exploit the linear part of ODEs to accelerate the interpolation, while Niu et al. (2016) exploit the quadratic dependency of the objective on the parameters of the interpolant to improve the computational efficiency of the ODE regularization. Interestingly, inspired by Calandra et al. (2016), the latter approach was extended to handle non-stationarity in the interpolation through warping (Niu et al., 2018). The underlying idea is to estimate a transformation of the input domain accounting for the non-stationarity of the signal, so as to improve the fit of stationary GP interpolants. A key limitation of this approach is the lack of a probabilistic formulation, which prevents one from approximating the posterior over ODE parameters. Moreover, the warping approach is tailored to periodic functions, limiting its generalization to more complex signals. In our work, we considerably improve on these aspects by effectively modeling the warping through GPs/DGPs that we infer jointly with the ODE parameters.
Inequality constraints on function derivatives have been considered in several works, such as Meyer (2008), Groeneboom & Jongbloed (2014), Mašić et al. (2017), Riihimäki & Vehtari (2010), Da Veiga & Marrel (2012), and Salzmann & Urtasun (2010). In particular, the GP setting provides a solid and elegant theoretical background for tackling this problem: thanks to the linearity of differentiation, both the mean and covariance functions of high-order derivatives of GPs can be expressed in closed form, leading to exact formulations for linearly constrained GPs (Da Veiga & Marrel, 2012). Inequality constraints on the derivatives, instead, introduce non-conjugacy between the likelihood imposing the derivative constraint and the GP prior, thus requiring approximations (Riihimäki & Vehtari, 2010). Although this problem can be tackled through sampling schemes or variational inference methods, such as Expectation Propagation (Minka, 2001), scalability to large dimensions and sample sizes represents a critical limitation. In this work, we extend these methods by considering a more general class of functions based on DGPs, and develop scalable inference that makes our method applicable to large datasets and dimensions.
3 Methods
3.1 Equality constraints in probabilistic modeling
In this section we provide a derivation of the posterior distribution of our model when we introduce equality constraints in the dynamics. Let $Y = \{\mathbf{y}_1, \ldots, \mathbf{y}_N\}$ be a set of observed multivariate variables associated with measuring times collected into $\mathbf{t} = \{t_1, \ldots, t_N\}$; the extension where the variables are measured at different times is notationally heavier but straightforward. Let $\mathbf{f}(t)$ be a multivariate interpolating function with associated noise parameters $\Sigma$, and define $F$, similarly to $Y$, to be the realization of $\mathbf{f}$ at $\mathbf{t}$. In this work, $\mathbf{f}$ will be modeled either using a GP, or using deep probabilistic models based on DGPs. We introduce functional constraints on the dynamics of the components $f_i$ of $\mathbf{f}$ by specifying a family of admissible functions whose derivatives of order $j$, evaluated at the inputs, satisfy a given constraint:
$$\mathcal{C}^{(j)}_i\!\left[t, f_i, \frac{\partial f_i}{\partial t}, \ldots, \frac{\partial^{j} f_i}{\partial t^{j}}; \theta\right] = 0.$$
Here the constraint is expressed as a function of the input, the function itself, and high-order derivatives up to order $j$. The constraint also includes dynamics parameters $\theta$ that should be inferred. We are going to consider the intersection of all the constraints for a set $\mathcal{I}$ of indices comprising the pairs $(i, j)$ of interest:
$$\mathcal{C} = \bigcap_{(i,j) \in \mathcal{I}} \mathcal{C}^{(j)}_i.$$
To keep the notation uncluttered, and without loss of generality, in the following we assume that all the terms are evaluated at $\mathbf{t}$; this can easily be relaxed by allowing the constraints to be evaluated at different sampling points than $\mathbf{t}$. As a concrete example, consider the constraints induced by the Lotka-Volterra ODE system (more details in the experiments section); for this system $\mathbf{f} = (f_1, f_2)$, and the family of functions is identified by the conditions
$$\frac{\partial f_1}{\partial t} = \alpha f_1 - \beta f_1 f_2, \qquad \frac{\partial f_2}{\partial t} = -\gamma f_2 + \delta f_1 f_2,$$
where the products are element-wise.
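As a concrete sketch of how such a constraint can be evaluated in practice, the Lotka-Volterra conditions reduce to residuals between the interpolant derivatives and the ODE right-hand side; the following minimal numpy example assumes the standard Lotka-Volterra parameterization, and all function and variable names are ours:

```python
import numpy as np

def lotka_volterra_residual(f, df, theta):
    """Residual of the Lotka-Volterra equality constraint.

    f     : (N, 2) values of the interpolant (prey, predator) at N inputs
    df    : (N, 2) time derivatives of the interpolant at the same inputs
    theta : (alpha, beta, gamma, delta), the ODE (dynamics) parameters

    A residual of zero means the interpolant satisfies the ODE exactly;
    a "soft" constraint penalizes nonzero residuals through a noise model.
    """
    alpha, beta, gamma, delta = theta
    x, y = f[:, 0], f[:, 1]
    rhs = np.stack([alpha * x - beta * x * y,      # prey dynamics
                    -gamma * y + delta * x * y],   # predator dynamics
                   axis=1)                         # products are element-wise
    return df - rhs
```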
Denote by $\mathcal{F}$ the set of realizations of $\mathbf{f}$ and of its derivatives at any required order, evaluated at the times $\mathbf{t}$. We define the constrained regression problem through two complementary likelihood-based terms: a data-attachment term $p(Y \mid \mathcal{F}, \Sigma)$, and a term quantifying the constraint on the dynamics, $p(\mathcal{C} \mid \mathcal{F}, \theta, \gamma)$, where $\gamma$ is the associated noise parameter. To solve the inference problem, we shall determine a lower bound for the marginal
$$p(Y, \mathcal{C}) = \int p(Y \mid \mathcal{F}, \Sigma)\, p(\mathcal{C} \mid \mathcal{F}, \theta, \gamma)\, p(\mathcal{F})\, p(\theta)\, \mathrm{d}\mathcal{F}\, \mathrm{d}\theta, \qquad (1)$$
where $p(\mathcal{F})$ and $p(\theta)$ denote the priors over the function realizations and the dynamics parameters, respectively. Note that $\mathcal{F}$ is in fact completely identified by $F$.
Equation (1) requires specifying suitable models for both the likelihood and the functional constraints; this implies the definition of noise models for both observations and model dynamics. In the case of continuous observations, the likelihood can be assumed to be Gaussian:
$$p(Y \mid \mathcal{F}, \Sigma) = \prod_{n=1}^{N} \mathcal{N}\!\left(\mathbf{y}_n \mid \mathbf{f}(t_n), \Sigma\right), \qquad (2)$$
where $\Sigma$ is a suitable multivariate covariance. Extensions to other likelihood functions are possible; in the experiments we show an application to regression on counts, where the likelihood is Poisson with rates equal to the exponential of the elements of $F$.
Concerning the noise model for the derivative observations, we assume independence across the constraints, so that
$$p(\mathcal{C} \mid \mathcal{F}, \theta, \gamma) = \prod_{(i,j) \in \mathcal{I}} p\!\left(\mathcal{C}^{(j)}_i \mid \mathcal{F}, \theta, \gamma\right). \qquad (3)$$
We can again assume a Gaussian likelihood:
$$p\!\left(\mathcal{C}^{(j)}_i \mid \mathcal{F}, \theta, \gamma\right) = \prod_{n=1}^{N} \mathcal{N}\!\left(\mathcal{C}^{(j)}_i(t_n) \mid 0, \gamma\right), \qquad (4)$$
or, in order to account for potentially heavy-tailed error terms on the derivative constraints, a Student-t distribution:
$$p\!\left(\mathcal{C}^{(j)}_i \mid \mathcal{F}, \theta, \gamma, \nu\right) = \prod_{n=1}^{N} \mathrm{St}\!\left(\mathcal{C}^{(j)}_i(t_n) \mid 0, \gamma, \nu\right), \qquad (5)$$
where $\nu$ denotes the degrees of freedom. We test both noise models in the experiments.
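The two noise models on the constraint residuals, (4) and (5), differ only in the density they place on violations; a hedged scipy sketch (function names and the default degrees of freedom are ours):

```python
import numpy as np
from scipy import stats

def constraint_loglik_gaussian(residuals, gamma):
    # Gaussian noise model (4): independent penalty on constraint residuals
    return stats.norm.logpdf(residuals, loc=0.0, scale=gamma).sum()

def constraint_loglik_student_t(residuals, gamma, nu=4.0):
    # Student-t noise model (5): heavier tails are more forgiving of
    # occasional large derivative errors, i.e., outliers
    return stats.t.logpdf(residuals, df=nu, loc=0.0, scale=gamma).sum()
```

For a large residual, the Student-t log-density is markedly higher than the Gaussian one; this is the robustness property motivating the heavy-tailed alternative.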
3.2 Inequality constraints in probabilistic modeling
In the case of inequality constraints we can proceed analogously to the previous section. In particular, we are interested in the class of functions whose derivatives satisfy conditions of the form
$$\frac{\partial^{j} f_i}{\partial t^{j}}(t) \geq 0, \qquad (i, j) \in \mathcal{I}.$$
For example, a monotonic univariate regression problem can be obtained with a constraint of the form $\partial f / \partial t \geq 0$.
In this case, the model dynamics can be enforced by a logistic function:
$$p\!\left(\mathcal{C} \mid \mathcal{F}, \lambda\right) = \prod_{n=1}^{N} \frac{1}{1 + \exp\!\left(-\lambda \, \frac{\partial f}{\partial t}(t_n)\right)}, \qquad (6)$$
where the parameter $\lambda$ controls the strength of the monotonicity constraint.
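In implementation terms, the logistic constraint of (6) amounts to a sum of log-sigmoid terms over the derivative evaluations; a minimal sketch (the parameter name `lam` is ours):

```python
import numpy as np

def monotonicity_loglik(df_values, lam=100.0):
    """Log of the logistic constraint term over derivative evaluations
    `df_values`; larger `lam` approaches a hard monotonicity constraint."""
    # log sigmoid(lam * df), computed in a numerically stable way
    return -np.logaddexp(0.0, -lam * np.asarray(df_values)).sum()
```

Positive derivatives incur an almost negligible penalty for large `lam`, while a single negative derivative makes the term strongly negative.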
3.3 Optimization and inference in constrained regression with DGPs
After recalling the necessary methodological background, in this section we derive an efficient inference scheme for the model posterior introduced in Section 3.1.
To recover tractability, our scheme leverages recent advances in modeling and inference for DGPs through approximation via random feature expansions (Rahimi & Recht, 2008; Cutajar et al., 2017). Denoting by $F^{(l)}$ the GP random variables at layer $l$, an (approximate) DGP is obtained by composing GPs approximated by Bayesian linear models, $F^{(l)} = \Phi^{(l)} W^{(l)}$. The so-called random features $\Phi^{(l)}$ are obtained by multiplying the layer input by a random matrix $\Omega^{(l)}$ and by applying a nonlinear transformation. For example, in the case of the standard RBF covariance, the elements of $\Omega^{(l)}$ are Gaussian distributed, with the covariance function parameterized through the lengthscale of the RBF covariance. The nonlinearity is obtained through trigonometric functions, $\Phi^{(l)} \propto \left[\cos\!\left(F^{(l-1)} \Omega^{(l)}\right), \sin\!\left(F^{(l-1)} \Omega^{(l)}\right)\right]$, while the prior over the elements of $W^{(l)}$ is standard normal. As a result, the interpolant becomes a Bayesian Deep Neural Network (DNN), where for each layer we have weights $\Omega^{(l)}$ and $W^{(l)}$, and trigonometric activation functions applied to the input of each layer multiplied by the weights $\Omega^{(l)}$.

3.3.1 Derivatives in DGPs with random feature expansions
To account for function derivatives consistently with the theory developed in Cutajar et al. (2017), we need to extend the random feature expansion formulation of DGPs to high-order derivatives. Fortunately, this is possible thanks to the chain rule and to the closure of the approximated GPs under linear operations. More precisely, the derivatives of a “shallow” GP model of the form $f = \Phi(X) W$ can still be expressed through a linear composition of matrix-valued operators depending on $\Omega$ and $X$ only. Computational tractability is thus preserved, and the GP function and its derivatives are identified by the same sets of weights $\Omega$ and $W$. The same principle clearly extends to DGP architectures, where the derivatives at each layer can be combined following the chain rule to obtain the derivatives of the output function with respect to the input.

3.3.2 Variational lower bound
In the constrained DGP setting, we are interested in carrying out inference of the functions and of the associated covariance parameters at all layers. Moreover, we may want to infer any dynamics parameters $\theta$ that parameterize the constraint on the derivatives. Within this setting, the inference of the latent variables in the marginal (1) is generally intractable. Nevertheless, the Bayesian DNN structure provided by the random feature approximation allows for efficient estimation of its parameters, and tractability of the inference is thus recovered.
In particular, let $W$, $\Omega$, and $\Psi$ be the collections of all $W^{(l)}$, all $\Omega^{(l)}$, and the covariance and likelihood parameters, respectively. Recalling that we can obtain random features at each layer by sampling the elements of $\Omega^{(l)}$ from a given prior distribution, we propose to tackle the inference problem through variational inference of the parameters $W$ and $\theta$. We could also attempt to infer $\Omega$, although in this work we assume it sampled from the prior with fixed randomness, which allows us to optimize covariance parameters using the reparameterization trick (option prior-fixed in Cutajar et al. (2017)). We also note that we could infer, rather than optimize, $\Psi$; we leave this for future work.
Using Jensen’s inequality, the variational approach allows us to obtain a lower bound on the log-marginal likelihood of equation (1), as follows:
$$\log p(Y, \mathcal{C}) \geq \mathbb{E}_{q(W)\, q(\theta)}\!\left[\log p(Y \mid \mathcal{F}, \Sigma) + \log p(\mathcal{C} \mid \mathcal{F}, \theta, \gamma)\right] - \mathrm{KL}\!\left[q(W) \,\|\, p(W)\right] - \mathrm{KL}\!\left[q(\theta) \,\|\, p(\theta)\right]. \qquad (7)$$
The distribution $q(W)$ acts as a variational approximation and is assumed to be Gaussian, factorizing completely across weights and layers $l$:
$$q(W) = \prod_{l} \prod_{i,j} \mathcal{N}\!\left(w^{(l)}_{ij} \,\middle|\, \mu^{(l)}_{ij}, \left(s^2\right)^{(l)}_{ij}\right). \qquad (8)$$
Extensions to approximations where we relax the factorization assumption are possible. Similarly, we are going to assume $q(\theta)$ to be Gaussian, with no factorization, so that $q(\theta) = \mathcal{N}(\theta \mid \mu_\theta, \Sigma_\theta)$.
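To make the construction of this section concrete, the following numpy sketch (one-dimensional input, RBF covariance; all names are ours) builds the trigonometric random features, computes the exact derivative features via the chain rule so that function and derivative share the same weights, and evaluates the KL term of a fully factorized Gaussian posterior against the standard-normal prior on the weights:

```python
import numpy as np

def rbf_random_features(x, omega, sigma2=1.0):
    """Trigonometric features: Phi @ Phi.T approximates the RBF kernel
    when omega is drawn from N(0, lengthscale**-2) (prior-fixed randomness)."""
    s = np.sqrt(sigma2 / omega.size)
    xo = np.outer(x, omega)
    return s * np.hstack([np.cos(xo), np.sin(xo)])

def rbf_random_feature_derivatives(x, omega, sigma2=1.0):
    """Exact input-derivatives of the features: the GP derivative is
    dPhi(x) @ w with the SAME weights w as the function itself."""
    s = np.sqrt(sigma2 / omega.size)
    xo = np.outer(x, omega)
    return s * np.hstack([-np.sin(xo) * omega, np.cos(xo) * omega])

def kl_factorized_gaussian(mu, log_var):
    """KL[q(W) || p(W)] for a fully factorized Gaussian q against the
    standard-normal prior; this is the regularizer in the lower bound."""
    var = np.exp(log_var)
    return 0.5 * np.sum(var + mu ** 2 - 1.0 - log_var)
```

A sample from the approximate posterior over the weights yields a function draw `Phi @ w` and, at no extra cost, a derivative draw `dPhi @ w`; this is what allows the constraint terms to be evaluated within the same stochastic variational scheme.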
4 Experiments
This section reports an in-depth validation of the proposed method on a variety of benchmarks. We study the proposed variational framework for constrained dynamics in DGP models for ODE parameter estimation using equality constraints, and compare it against state-of-the-art methods. We then consider the application of inequality constraints to a regression problem on counts, previously considered in the literature on monotonic GPs.
4.1 Settings for the proposed constrained DGP
We report here the configuration used across all benchmarks for the proposed method. Due to the generally low sample size across experiments, unless specified otherwise the tests were performed with a two-layer DGP with RBF kernels; the dimension of the “hidden” GP layer, the lengthscale and marginal standard deviation of the RBF covariances, the initial likelihood noise, and the initial ODE parameters were set to fixed values. The optimization was carried out through stochastic gradient descent with Adaptive moment Estimation (Adam) (Kingma & Ba, 2017), alternating the optimization of i) the approximate posterior over the weights and the likelihood/covariance parameters, and ii) the likelihood parameters of the ODE constraints and the approximate posterior over the ODE parameters. We note that the optimization of the ODE constraint parameters (the noise and scale parameters for the Gaussian and Student-t likelihoods, respectively) aims at identifying, in a fully data-driven manner, the optimal trade-off between data attachment (likelihood term) and regularity (constraints on the dynamics). In what follows, DGP-t and DGP-g respectively denote the model tested with Student-t and Gaussian noise models on the ODE constraints.

4.2 Equality constraints from ODE systems
The proposed framework was tested on a set of ODE systems extensively studied in previous works: Lotka-Volterra (Goel et al., 1971), FitzHugh-Nagumo (FitzHugh, 1955), and the protein biopathways from Vyshemirsky & Girolami (2007). For each experiment, we used the experimental setting proposed in previous studies (Niu et al., 2016; Macdonald & Husmeier, 2015). In particular, for each test we identified two experimental configurations of increasing modeling difficulty (e.g., fewer samples, lower signal-to-noise ratio). A detailed description of the models and testing parameters is provided in the supplementary material. The experimental results are reported for parameter inference and model estimation performed on different realizations of the noise.
4.2.1 Benchmark
We tested the proposed method against several reference approaches from the state-of-the-art for inferring the parameters of ODE systems.
RKG3: We tested the method presented in Niu et al. (2016) using the implementation in the R package KGode. This method implements gradient matching, where the interpolant is modeled using functions in Reproducing Kernel Hilbert Spaces. This approach, for which ODE parameters are estimated and not inferred, was shown to achieve state-of-the-art performance on a variety of ODE estimation problems. We used a range of values for the parameter that the method optimizes using cross-validation.
Warp: The R package KGode also provides an implementation of the warping approach presented in Niu et al. (2018). This method extends gradient matching techniques by attempting to construct a warping of the input domain in which smooth interpolants based on Reproducing Kernel Hilbert Spaces can effectively model non-stationary observations. The warping attempts to transform the original signal via assumptions on periodicity and regularity conditions. We used the default parameters and initialized the optimization of the warping function from a period equal to the interval where observations are available. As for RKG3, ODE parameters are estimated and not inferred.
AGM: We report results for the Approximate Gradient Matching (AGM) approach of Dondelinger et al. (2013), implemented in the recently released R package deGradInfer. AGM implements a population Markov chain Monte Carlo approach, tempering from the prior to the approximate posterior of the ODE parameters based on an interpolation with GPs. In the experiments we use parallel chains. In the implementation of AGM, the variance of the noise on the observations is assumed known and fixed; we expect this to give a slight advantage to this method.
MCMC: The R package deGradInfer also provides an implementation of a population Markov chain Monte Carlo sampler where the ODE is solved explicitly. Here too we use parallel chains. In contrast to AGM, in this implementation the variance of the noise on the observations is learned together with the ODE parameters.
4.2.2 Results
(Figure 1: results for the Lotka-Volterra, FitzHugh-Nagumo, and Biopathways systems.)
Figure 1 shows the distribution of the root mean squared error (RMSE) across folds for each experimental setting (see the supplement for details). The proposed method consistently leads to better RMSE values than the reference approaches (except for some folds in one of the FitzHugh-Nagumo experiments, according to a Mann-Whitney non-parametric test), and DGP-t provides more consistent parameter estimates than DGP-g. This latter result may indicate a lower sensitivity to outlying derivatives involved in the functional constraint term, a crucial aspect given the generally noisy derivative estimates of non-parametric regression models. The distribution of the parameters for all the datasets tested in this study, reported in the supplementary material, reveals that, unlike the non-probabilistic methods RKG3 and Warp, our approach is capable of inferring ODE parameters with meaningful uncertainty estimation.

4.3 Scalability test: large sample size
We tested the scalability of the proposed method with respect to the sample size $N$. To this end, we repeated the test on the Lotka-Volterra system with increasing numbers of observations. For each instance of the model, the execution time was recorded and compared with that of the competing methods. All experiments were performed on an Intel Core i5 MacBook. The proposed method scales linearly with $N$ (Figure 2), with an almost constant execution time for small $N$; we attribute this effect to overheads in the framework used to implement our method. For small $N$, the running time of our method is comparable with that of the competing methods, and it is considerably faster for large $N$.
4.4 Scalability test: large number of ODE states
In order to assess the ability of the framework to scale to a large number of ODEs, we tested our method on the Lorenz96 system with an increasing number of equations (Lorenz & Emanuel, 1998). To the best of our knowledge, the solution of this challenging problem via gradient matching approaches has only been previously attempted by Gorbach et al. (2017). We could not find an implementation of their method to carry out a direct comparison, so we refer to the results reported in their paper. The system consists of a set of drift state functions recursively linked by the relationship
$$\frac{\partial x_i}{\partial t} = \left(x_{i+1} - x_{i-2}\right) x_{i-1} - x_i + F,$$
where $F$ is the drift parameter and the indices are cyclic. Consistently with the setting proposed in Gorbach et al. (2017) and Vrettas et al. (2015), we set the drift parameter as in those works and generated 32 equally spaced observations over the observation interval, with additive Gaussian noise. We performed two tests by training (i) on all the states, and (ii) with one third of the states kept unobserved, applying our method to identify the model dynamics on both observed and unobserved states.
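The Lorenz96 drift above can be written compactly with cyclic indexing; a minimal sketch (the default forcing value of 8 is the one commonly used for this system, stated here as an assumption):

```python
import numpy as np

def lorenz96_drift(x, F=8.0):
    """Right-hand side of the Lorenz96 system: state i is coupled to its
    cyclic neighbours i-2, i-1 and i+1, plus a constant drift/forcing F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
```

The constant state x_i = F is an equilibrium of the system, which provides a quick sanity check of the implementation.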
Figure 3 shows the average RMSE in the different experimental settings. As expected, the modeling accuracy is sensibly higher when training on the full set of equations. Moreover, the RMSE is lower on observed states than on unobserved ones. This is confirmed by visual inspection of the modeling results for sample training and testing states (Figure 4). The observed states are generally associated with lower uncertainty in the predictions and with an accurate fitting of the solutions (Figure 4, top). The model still provides remarkable modeling results on unobserved states (Figure 4, bottom), although with decreased accuracy and higher uncertainty. We are investigating the reasons why the posterior distribution over the drift parameter does not cover the true value across different experimental conditions.
4.4.1 Deep vs shallow
We explore here the capability of a DGP to accommodate the non-stationarity of the data that is typical of ODE systems. In particular, the tests are performed in two settings, with large and small sample sizes. Using the same experimental setting as in Section 4.1, we sampled 1000 and 80 points, respectively, from the FitzHugh-Nagumo equations. The data is modeled with DGPs composed of one (“shallow” GP), two, and three layers, all with RBF covariances.
Table 1. Average RMSE across ODE parameters (top) and data-fit RMSE (bottom), for DGPs with RBF covariances of increasing depth and for shallow GPs with Matérn covariances of increasing degrees of smoothness.

Average RMSE across parameters
             RBF (layers)             Matérn
N            1      2      3
80           0.86   0.85   2.16       0.97   0.72   0.79   0.77
1000         0.66   0.52   0.53       0.87   0.63   0.70   0.74

Data-fit RMSE
80           0.23   0.19   0.42       0.23   0.20   0.23   0.25
1000         0.22   0.17   0.19       0.21   0.21   0.19   0.24
Figure 5 shows the modeling results obtained in the two configurations. We note that the shallow GP consistently underfits the complex dynamics, producing overly smooth interpolants. On the contrary, DGPs provide a better representation of the non-stationarity. As expected, the three-layer DGP leads to suboptimal results in the low sample size setting. Furthermore, in order to motivate the importance of non-stationarity, which we implement through DGPs, we further compared against shallow GPs with lower degrees of smoothness, through the use of Matérn covariances with varying degrees.
The overall performance in parameter estimation and data fit is reported in Table 1. According to the results, a two-layer DGP provides the best solution overall in terms of modeling accuracy and complexity. Interestingly, the Matérn covariance with an appropriate degree of smoothness achieves superior performance in parameter estimation in the low sample size case. However, the non-stationarity implemented by the DGP outperforms the stationary Matérn in the data fit, as well as in parameter estimation when the sample size is large. For an illustration of the data fit of the Matérn GP, we refer the reader to the supplementary material. Crucially, these results indicate that our approach provides a practical and scalable way to learn non-stationarity within the framework of variational inference for deep probabilistic models.
4.5 Inequality constraints
We conclude our experimental validation by applying monotonic regression on counts as an illustration of the proposed framework for inequality constraints on the dynamics of DGP models. We applied our approach to the mortality dataset from Broffitt (1988), with a two-layer DGP initialized analogously to the setting proposed in Section 4.1. In particular, the sample rates were modeled with a Poisson likelihood, with rates given by the exponential of the regression function. Monotonicity of the solution was strictly enforced through the strength parameter of the logistic constraint. Figure 6 shows the regression results without (top) and with (bottom) the monotonicity constraint. The effect of the constraint on the dynamics can be appreciated by looking at the distribution of the derivatives (right panel): in the monotonic case, the GP derivatives lie in the positive half-plane. This experiment leads to results compatible with those obtained with the monotonic GP proposed in Riihimäki & Vehtari (2010) and implemented in the GPstuff toolbox (Vanhatalo et al., 2013). However, our approach is characterized by appealing scalability properties and can implement monotonic constraints on DGPs, which offer a more general class of functions than GPs.
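The data-attachment term used in this experiment combines a Poisson observation model with an exponential link; a minimal scipy sketch (names are ours):

```python
import numpy as np
from scipy import stats

def poisson_count_loglik(counts, f):
    """Poisson data-attachment term with exponential link:
    rates = exp(f), where f is the (monotonically constrained) GP/DGP output."""
    return stats.poisson.logpmf(np.asarray(counts), mu=np.exp(np.asarray(f))).sum()
```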
5 Conclusions
We introduced a novel generative formulation of deep probabilistic models implementing “soft” constraints on function dynamics. The proposed approach was extensively tested in several experimental settings, leading to highly competitive results in challenging modeling applications and comparing favorably with the state-of-the-art in terms of modeling accuracy and scalability. Furthermore, the proposed variational formulation allows for a meaningful uncertainty quantification of both model parameters and predictions. This is an important aspect, intimately related to the application of our proposal in real scenarios, such as biology and epidemiology, where data is often noisy and scarce.
Although in this study we focused primarily on the problems of ODE parameter inference and monotonic regression, the generality of our approach enables several other applications that will be the subject of future investigations. We will focus on the extension to manifold-valued data, such as spatiotemporal observations represented by graphs, meshes, and 3D volumes, occurring for example in medical imaging and systems biology.
Acknowledgements
This work has been supported by the French government, through the UCAJEDI Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR-15-IDEX-01 (project MetaImaGen). MF gratefully acknowledges support from the AXA Research Fund.
This work is dedicated to Mattia Filippone.
References
 Alvarez et al. (2013) Alvarez, M. A., Luengo, D., and Lawrence, N. D. Linear latent force models using Gaussian processes. IEEE transactions on pattern analysis and machine intelligence, 35(11):2693–2705, 2013.
 Barber & Wang (2014) Barber, D. and Wang, Y. Gaussian Processes for Bayesian Estimation in Ordinary Differential Equations. In Xing, E. P. and Jebara, T. (eds.), Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pp. 1485–1493, Beijing, China, June 2014. PMLR.
 Broffitt (1988) Broffitt, J. D. Increasing and increasing convex Bayesian graduation. Transactions of the Society of Actuaries, 40(1):115–48, 1988.
 Calandra et al. (2016) Calandra, R., Peters, J., Rasmussen, C. E., and Deisenroth, M. P. Manifold Gaussian Processes for regression. In 2016 International Joint Conference on Neural Networks, IJCNN 2016, Vancouver, BC, Canada, July 24–29, 2016, pp. 3338–3345, 2016.

 Calderhead et al. (2009) Calderhead, B., Girolami, M., and Lawrence, N. D. Accelerating Bayesian Inference over Nonlinear Differential Equations with Gaussian Processes. In Koller, D., Schuurmans, D., Bengio, Y., and Bottou, L. (eds.), Advances in Neural Information Processing Systems 21, pp. 217–224. Curran Associates, Inc., 2009.
 Campbell & Steele (2012) Campbell, D. and Steele, R. J. Smooth functional tempering for nonlinear differential equation models. Statistics and Computing, 22(2):429–443, March 2012.
 Cutajar et al. (2017) Cutajar, K., Bonilla, E. V., Michiardi, P., and Filippone, M. Random feature expansions for deep Gaussian processes. In Precup, D. and Teh, Y. W. (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 884–893, International Convention Centre, Sydney, Australia, August 2017. PMLR.
 Da Veiga & Marrel (2012) Da Veiga, S. and Marrel, A. Gaussian process modeling with inequality constraints. Annales de la faculté des sciences de Toulouse Mathématiques, 21(3):529–555, April 2012.
 Damianou & Lawrence (2013) Damianou, A. C. and Lawrence, N. D. Deep Gaussian Processes. In Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2013, Scottsdale, AZ, USA, April 29 - May 1, 2013, volume 31 of JMLR Proceedings, pp. 207–215. JMLR.org, 2013.
 Dondelinger et al. (2013) Dondelinger, F., Husmeier, D., Rogers, S., and Filippone, M. ODE parameter inference using adaptive gradient matching with Gaussian processes. In Carvalho, C. M. and Ravikumar, P. (eds.), Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics, volume 31 of Proceedings of Machine Learning Research, pp. 216–228, Scottsdale, Arizona, USA, 29 Apr–01 May 2013. PMLR.
 Donohue et al. (2014) Donohue, M. C., Jacqmin-Gadda, H., Le Goff, M., Thomas, R. G., Raman, R., Gamst, A. C., Beckett, L. A., Jack, C. R., Weiner, M. W., Dartigues, J.-F., et al. Estimating long-term multivariate progression from short-term data. Alzheimer’s & Dementia: The Journal of the Alzheimer’s Association, 10(5):S400–S410, 2014.
 FitzHugh (1955) FitzHugh, R. Mathematical models of threshold phenomena in the nerve membrane. The bulletin of mathematical biophysics, 17(4):257–278, 1955.
 Ghahramani (2015) Ghahramani, Z. Probabilistic machine learning and artificial intelligence. Nature, 521(7553):452–459, May 2015.
 Goel et al. (1971) Goel, N. S., Maitra, S. C., and Montroll, E. W. On the Volterra and other nonlinear models of interacting populations. Reviews of Modern Physics, 43(2):231, 1971.
 González et al. (2014) González, J., Vujačić, I., and Wit, E. Reproducing kernel Hilbert space based estimation of systems of ordinary differential equations. Pattern Recognition Letters, 45:26–32, 2014.
 Gorbach et al. (2017) Gorbach, N. S., Bauer, S., and Buhmann, J. M. Scalable Variational Inference for Dynamical Systems. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 30, pp. 4809–4818. Curran Associates, Inc., 2017.
 Groeneboom & Jongbloed (2014) Groeneboom, P. and Jongbloed, G. Nonparametric Estimation under Shape Constraints: Estimators, Algorithms and Asymptotics. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 2014.
 Kingma & Ba (2017) Kingma, D. P. and Ba, J. Adam: A Method for Stochastic Optimization, January 2017. arXiv:1412.6980.
 Konukoglu et al. (2011) Konukoglu, E., Relan, J., Cilingir, U., Menze, B. H., Chinchapatnam, P., Jadidi, A., Cochet, H., Hocini, M., Delingette, H., Jaïs, P., et al. Efficient probabilistic model personalization integrating uncertainty on data and parameters: Application to eikonal-diffusion models in cardiac electrophysiology. Progress in Biophysics and Molecular Biology, 107(1):134–146, 2011.
 LeCun et al. (2015) LeCun, Y., Bengio, Y., and Hinton, G. Deep learning. Nature, 521(7553):436–444, 2015.
 Liang & Wu (2008) Liang, H. and Wu, H. Parameter Estimation for Differential Equation Models Using a Framework of Measurement Error in Regression Models. Journal of the American Statistical Association, 103(484):1570–1583, 2008. PMID: 19956350.
 Lorenz & Emanuel (1998) Lorenz, E. N. and Emanuel, K. A. Optimal sites for supplementary weather observations: Simulation with a small model. Journal of the Atmospheric Sciences, 55(3):399–414, 1998.
 Lorenzi et al. (2017) Lorenzi, M., Filippone, M., Frisoni, G. B., Alexander, D. C., and Ourselin, S. Probabilistic disease progression modeling to characterize diagnostic uncertainty: Application to staging and prediction in Alzheimer’s disease. NeuroImage, 2017.
 Macdonald & Husmeier (2015) Macdonald, B. and Husmeier, D. Gradient Matching Methods for Computational Inference in Mechanistic Models for Systems Biology: A Review and Comparative Analysis. Frontiers in Bioengineering and Biotechnology, 3:180, 2015.
 Macdonald et al. (2015) Macdonald, B., Higham, C., and Husmeier, D. Controversy in mechanistic modelling with Gaussian processes. In Bach, F. and Blei, D. (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp. 1539–1547, Lille, France, July 2015. PMLR.
 Mašić et al. (2017) Mašić, A., Srinivasan, S., Billeter, J., Bonvin, D., and Villez, K. Shape constrained splines as transparent black-box models for bioprocess modeling. Computers & Chemical Engineering, 99:96–105, 2017.
 Meyer (2008) Meyer, M. C. Inference using shape-restricted regression splines. The Annals of Applied Statistics, 2(3):1013–1033, 2008.
 Minka (2001) Minka, T. P. Expectation Propagation for approximate Bayesian inference. In Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence, UAI ’01, pp. 362–369, San Francisco, CA, USA, 2001. Morgan Kaufmann Publishers Inc.
 Neal (1996) Neal, R. M. Bayesian Learning for Neural Networks (Lecture Notes in Statistics). Springer, 1 edition, August 1996.
 Niu et al. (2016) Niu, M., Rogers, S., Filippone, M., and Husmeier, D. Fast Parameter Inference in Nonlinear Dynamical Systems using Iterative Gradient Matching. In Balcan, M. F. and Weinberger, K. Q. (eds.), Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pp. 1699–1707, New York, New York, USA, June 2016. PMLR.
 Niu et al. (2018) Niu, M., Macdonald, B., Rogers, S., Filippone, M., and Husmeier, D. Statistical inference in mechanistic models: time warping for improved gradient matching. Computational Statistics, 33(2):1091–1123, Jun 2018.
 Rahimi & Recht (2008) Rahimi, A. and Recht, B. Random Features for Large-Scale Kernel Machines. In Platt, J. C., Koller, D., Singer, Y., and Roweis, S. T. (eds.), Advances in Neural Information Processing Systems 20, pp. 1177–1184. Curran Associates, Inc., 2008.
 Ramsay et al. (2007) Ramsay, J. O., Hooker, G., Campbell, D., and Cao, J. Parameter estimation for differential equations: a generalized smoothing approach. Journal of the Royal Statistical Society Series B, 69(5):741–796, 2007.
 Rasmussen & Williams (2006) Rasmussen, C. E. and Williams, C. Gaussian Processes for Machine Learning. MIT Press, 2006.
 Riihimäki & Vehtari (2010) Riihimäki, J. and Vehtari, A. Gaussian processes with monotonicity information. In Teh, Y. W. and Titterington, M. (eds.), Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pp. 645–652, Chia Laguna Resort, Sardinia, Italy, May 2010. PMLR.
 Salzmann & Urtasun (2010) Salzmann, M. and Urtasun, R. Implicitly Constrained Gaussian Process Regression for Monocular Non-Rigid Pose Estimation. In Lafferty, J. D., Williams, C. K. I., Shawe-Taylor, J., Zemel, R. S., and Culotta, A. (eds.), Advances in Neural Information Processing Systems 23, pp. 2065–2073. Curran Associates, Inc., 2010.
 Schober et al. (2014) Schober, M., Duvenaud, D. K., and Hennig, P. Probabilistic ODE solvers with Runge-Kutta means. In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N. D., and Weinberger, K. Q. (eds.), Advances in Neural Information Processing Systems 27, pp. 739–747. Curran Associates, Inc., 2014.
 Vanhatalo et al. (2013) Vanhatalo, J., Riihimäki, J., Hartikainen, J., Jylänki, P., Tolvanen, V., and Vehtari, A. GPstuff: Bayesian modeling with Gaussian processes. Journal of Machine Learning Research, 14(Apr):1175–1179, 2013.
 Varah (1982) Varah, J. M. A Spline Least Squares Method for Numerical Parameter Estimation in Differential Equations. SIAM Journal on Scientific and Statistical Computing, 3(1):28–46, 1982.
 Vrettas et al. (2015) Vrettas, M. D., Opper, M., and Cornford, D. Variational mean-field algorithm for efficient inference in large systems of stochastic differential equations. Physical Review E, 91(1):012148, 2015.
 Vyshemirsky & Girolami (2007) Vyshemirsky, V. and Girolami, M. A. Bayesian ranking of biochemical system models. Bioinformatics, 24(6):833–839, 2007.
 Wheeler et al. (2014) Wheeler, M. W., Dunson, D. B., Pandalai, S. P., Baker, B. A., and Herring, A. H. Mechanistic hierarchical Gaussian processes. Journal of the American Statistical Association, 109(507):894–904, 2014.
Appendix A Description of the ODE systems considered in this work
Lotka-Volterra (Goel et al., 1971). This ODE describes a two-dimensional process with the following dynamics:

\dot{x} = \alpha x - \beta x y, \qquad \dot{y} = \delta x y - \gamma y,

and is identified by the parameters (\alpha, \beta, \gamma, \delta). Following (Niu et al., 2016), we generated a ground truth by numerical integration of the system, using the parameter values, integration interval, and initial condition of that benchmark. We generated two configurations, comprising respectively 34 and 51 observations sampled at uniformly spaced points, and corrupted by zero-mean Gaussian noise with two different standard deviations.
FitzHugh-Nagumo (FitzHugh, 1955). This system describes a two-dimensional process governed by 3 parameters, (a, b, c):

\dot{V} = c\Big(V - \frac{V^3}{3} + R\Big), \qquad \dot{R} = -\frac{1}{c}\,(V - a + b R).

We reproduced the experimental setting proposed in (Macdonald & Husmeier, 2015), generating a ground truth by numerically integrating the system with the parameter values and initial condition used there. We created two scenarios: in the first, we sampled 401 observations at equally spaced points within the integration interval, while in the second we sampled only 20 points. In both cases we corrupted the observations with zero-mean Gaussian noise.
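Using the standard FitzHugh-Nagumo equations, with fast voltage variable V and slow recovery variable R, the dense scenario can be sketched in the same way; the parameter values, interval, and noise level below are illustrative placeholders rather than the exact settings of the reproduced benchmark:

```python
import numpy as np
from scipy.integrate import solve_ivp

def fitzhugh_nagumo(t, state, a, b, c):
    V, R = state
    dV = c * (V - V**3 / 3.0 + R)   # fast voltage variable
    dR = -(V - a + b * R) / c       # slow recovery variable
    return [dV, dR]

a, b, c = 0.2, 0.2, 3.0                 # illustrative parameter values
t_obs = np.linspace(0.0, 20.0, 401)     # dense scenario: 401 equally spaced points
sol = solve_ivp(fitzhugh_nagumo, (0.0, 20.0), [-1.0, 1.0],
                t_eval=t_obs, args=(a, b, c), rtol=1e-8)

rng = np.random.default_rng(1)
observations = sol.y + rng.normal(scale=0.1, size=sol.y.shape)  # zero-mean noise
```

For larger values of c the voltage variable exhibits the sudden temporal changes discussed in the main text, which makes this system a standard test case for non-stationary interpolation.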
Biopathways (Vyshemirsky & Girolami, 2007). These equations describe a five-dimensional process, with states (S, dS, R, RS, Rpp), associated with the 6 parameters (k_1, k_2, k_3, k_4, V, K_m), as follows:

\dot{S} = -k_1 S - k_2 S R + k_3 RS, \qquad \dot{dS} = k_1 S, \qquad \dot{R} = -k_2 S R + k_3 RS + \frac{V\, Rpp}{K_m + Rpp},

\dot{RS} = k_2 S R - k_3 RS - k_4 RS, \qquad \dot{Rpp} = k_4 RS - \frac{V\, Rpp}{K_m + Rpp}.
We generated data by sampling 15 observations at the time points used in (Macdonald & Husmeier, 2015). The ODE parameters and the initial values were set as in that reference. We generated two different scenarios by adding Gaussian noise with two different standard deviations.
Appendix B Detailed results of the benchmark on ODE parameter inference
Appendix C Interpolation results using Matérn covariance in shallow GPs
We report here the results of interpolating the FitzHugh-Nagumo ODE with GPs using the Matérn covariance. Note that the resulting process is still stationary, but it allows for the modeling of non-smooth functions when the smoothness parameter ν is small.
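A minimal sketch of this kind of fit, using scikit-learn's Matérn kernel on a synthetic non-smooth signal (all settings illustrative): with ν = 1/2 the Matérn kernel reduces to the exponential (Ornstein-Uhlenbeck) covariance, which is stationary but has continuous, non-differentiable sample paths.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0.0, 20.0, 60))[:, None]
# Square-wave-like target: stationary but non-smooth.
y = np.sign(np.sin(X.ravel())) + 0.1 * rng.standard_normal(60)

# nu=0.5 gives the exponential (OU) kernel; WhiteKernel models observation noise.
kernel = Matern(length_scale=1.0, nu=0.5) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
mean, std = gp.predict(X, return_std=True)
```

Larger ν (e.g. 3/2, 5/2, or the squared-exponential limit) enforces increasing smoothness of the posterior mean, which is why a small ν is needed to track the sharp transitions of this system.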