# Frank-Wolfe Bayesian Quadrature: Probabilistic Integration with Theoretical Guarantees

There is renewed interest in formulating integration as an inference problem, motivated by obtaining a full distribution over numerical error that can be propagated through subsequent computation. Current methods, such as Bayesian Quadrature, demonstrate impressive empirical performance but lack theoretical analysis. An important challenge is to reconcile these probabilistic integrators with rigorous convergence guarantees. In this paper, we present the first probabilistic integrator that admits such theoretical treatment, called Frank-Wolfe Bayesian Quadrature (FWBQ). Under FWBQ, convergence to the true value of the integral is shown to be exponential and posterior contraction rates are proven to be superexponential. In simulations, FWBQ is competitive with state-of-the-art methods and out-performs alternatives based on Frank-Wolfe optimisation. Our approach is applied to successfully quantify numerical error in the solution to a challenging model choice problem in cellular biology.


## 1 Introduction

Computing integrals is a core challenge in machine learning and numerical methods play a central role in this area. This can be problematic when a numerical integration routine is repeatedly called, maybe millions of times, within a larger computational pipeline. In such situations, the cumulative impact of numerical errors can be unclear, especially in cases where the error has a non-trivial structural component. One solution is to model the numerical error statistically and to propagate this source of uncertainty through subsequent computations. Conversely, an understanding of how errors arise and propagate can enable the efficient focusing of computational resources upon the most challenging numerical integrals in a pipeline.

Classical numerical integration schemes do not account for prior information on the integrand and, as a consequence, can require an excessive number of function evaluations to obtain a prescribed level of accuracy [21]. Alternatives such as Quasi-Monte Carlo (QMC) can exploit knowledge of the smoothness of the integrand to obtain optimal convergence rates [7]. However, these optimal rates can only hold on sub-sequences of sample sizes, a consequence of the fact that all function evaluations are weighted equally in the estimator [24]. A modern approach that avoids this problem is to consider arbitrarily weighted combinations of function values: the so-called quadrature rules (also called cubature rules). Whilst quadrature rules with non-equal weights have received comparatively little theoretical attention, it is known that the extra flexibility given by arbitrary weights can lead to extremely accurate approximations in many settings (see applications to image de-noising [3] and mental simulation in psychology [13]).

Probabilistic numerics, introduced in the seminal paper of [6], aims at re-interpreting numerical tasks as inference tasks that are amenable to statistical analysis (a detailed discussion of probabilistic numerics and an extensive up-to-date bibliography can be found at http://www.probabilistic-numerics.org). Recent developments include probabilistic solvers for linear systems [14] and differential equations [5, 28]. For the task of computing integrals, Bayesian Quadrature (BQ) [22] and more recent work by [20] provide probabilistic numerics methods that produce a full posterior distribution on the output of numerical schemes. One advantage of this approach is that we can propagate uncertainty through all subsequent computations to explicitly model the impact of numerical error [15]. Contrast this with chaining together classical error bounds: the result in such cases will typically be a weak bound that provides no insight into the error structure. At present, a significant shortcoming of these methods is the absence of theoretical results relating to rates of posterior contraction. This is unsatisfying and has likely hindered the adoption of probabilistic approaches to integration, since it is not clear that the induced posteriors represent a sensible quantification of the numerical error (by classical, frequentist standards).

This paper establishes convergence rates for a new probabilistic approach to integration. Our results thus overcome a key perceived weakness associated with probabilistic numerics in the quadrature setting. Our starting point is recent work by [2], who cast the design of quadrature rules as a problem in convex optimisation that can be solved using the Frank-Wolfe (FW) algorithm. We propose a hybrid of the approach of [2] with BQ, taking the form of a quadrature rule, that (i) carries a full probabilistic interpretation, (ii) is amenable to rigorous theoretical analysis, and (iii) converges orders of magnitude faster, empirically, compared with the original approaches in [2]. In particular, we prove that super-exponential rates hold for posterior contraction (concentration of the posterior probability mass on the true value of the integral), showing that the posterior distribution provides a sensible and effective quantification of the uncertainty arising from numerical error. The methodology is explored in simulations and also applied to a challenging model selection problem from cellular biology, where numerical error could lead to mis-allocation of expensive resources.

## 2 Background

### 2.1 Quadrature and Cubature Methods

Let $(\mathcal{X}, \mathcal{B})$ be a measurable space such that $\mathcal{X} \subseteq \mathbb{R}^d$ and consider a probability density $p(x)$ defined with respect to the Lebesgue measure on $\mathcal{X}$. This paper focuses on computing integrals of the form $\int_{\mathcal{X}} f(x)\,p(x)\,\mathrm{d}x$ for a test function $f : \mathcal{X} \to \mathbb{R}$ where, for simplicity, we assume $f$ is square-integrable with respect to $p$. A quadrature rule approximates such integrals as a weighted sum of function values at some design points $\{x_i\}_{i=1}^{n} \subset \mathcal{X}$:

$$\int_{\mathcal{X}} f(x)\,p(x)\,\mathrm{d}x \;\approx\; \sum_{i=1}^{n} w_i\, f(x_i). \tag{1}$$

Viewing integrals as projections, we write $p[f]$ for the left-hand side of (1) and $\hat{p}[f]$ for the right-hand side, where $\hat{p} = \sum_{i=1}^{n} w_i\, \delta_{x_i}$ and $\delta_{x_i}$ is a Dirac measure at $x_i$. Note that $\hat{p}$ may not be a probability distribution; in fact, the weights $\{w_i\}_{i=1}^{n}$ do not have to sum to one or be non-negative. Quadrature rules can be extended to multivariate functions by taking each component in turn.
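As a concrete example, plain Monte Carlo is the special case of (1) with uniform weights; the short Python sketch below (our own illustration, not code from the paper) makes the correspondence explicit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Integrate f(x) = x^2 against p = N(0, 1); the true value is p[f] = 1.
f = lambda x: x ** 2

# Plain Monte Carlo is the quadrature rule (1) with w_i = 1/n and x_i ~ p.
n = 2000
x = rng.standard_normal(n)
w = np.full(n, 1.0 / n)
estimate = w @ f(x)   # sum_i w_i f(x_i)

print(abs(estimate - 1.0))  # small; shrinks like n^{-1/2} on average
```

A general quadrature rule only changes the weight vector `w`, which need not be uniform, non-negative, or sum to one.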

There are many ways of choosing the points and weights $\{x_i, w_i\}_{i=1}^{n}$ in the literature. For example, taking weights $w_i = 1/n$ with points drawn independently from the probability distribution $p$ recovers basic Monte Carlo integration. The case with weights $w_i = 1/n$, but with points chosen according to some specific (possibly deterministic) scheme, includes kernel herding [4] and Quasi-Monte Carlo (QMC) [7]. In Bayesian Quadrature, the points $\{x_i\}_{i=1}^{n}$ are chosen to minimise a posterior variance, with weights $\{w_i\}_{i=1}^{n}$ arising from a posterior probability distribution.

Classical error analysis for quadrature rules is naturally couched in terms of minimising the worst-case estimation error. Let $\mathcal{H}$ be a Hilbert space of functions $f : \mathcal{X} \to \mathbb{R}$, equipped with the inner product $\langle \cdot, \cdot \rangle_{\mathcal{H}}$ and associated norm $\|\cdot\|_{\mathcal{H}}$. We define the maximum mean discrepancy (MMD) as:

$$\mathrm{MMD}\big(\{x_i, w_i\}_{i=1}^{n}\big) \;:=\; \sup_{f \in \mathcal{H} : \|f\|_{\mathcal{H}} = 1} \big| p[f] - \hat{p}[f] \big|. \tag{2}$$

The reader can refer to [29] for conditions on $\mathcal{H}$ that are needed for the existence of the MMD. The rate at which the MMD decreases with the number of samples $n$ is referred to as the 'convergence rate' of the quadrature rule. For Monte Carlo, the MMD decreases at the slow rate of $O_P(n^{-1/2})$ (where the subscript $P$ specifies that the convergence is in probability). Let $\mathcal{H}$ be a reproducing kernel Hilbert space (RKHS) with reproducing kernel $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ and denote the corresponding canonical feature map by $\phi(x) = k(\cdot, x)$, so that the mean element is given by $\mu_p = \int_{\mathcal{X}} \phi(x)\, p(x)\, \mathrm{d}x$. Then, following [29],

$$\mathrm{MMD}\big(\{x_i, w_i\}_{i=1}^{n}\big) \;=\; \big\| \mu_p - \mu_{\hat{p}} \big\|_{\mathcal{H}}, \quad \text{where } \mu_{\hat{p}} = \sum_{i=1}^{n} w_i\, \phi(x_i). \tag{3}$$

This shows that to obtain low integration error in the RKHS $\mathcal{H}$, one only needs to obtain a good approximation of its mean element (since $p[f] - \hat{p}[f] = \langle f, \mu_p - \mu_{\hat{p}} \rangle_{\mathcal{H}}$). Establishing theoretical results for such quadrature rules is an active area of research [1].
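When the mean element is available in closed form, (3) can be evaluated exactly. The following sketch is our own illustration under assumptions not made in the text above: an exponentiated-quadratic kernel and $p = N(0,1)$, for which standard Gaussian convolution identities give the mean element analytically.

```python
import numpy as np

# EQ kernel k(x, y) = exp(-(x - y)^2 / (2 l^2)) with p = N(0, 1).
# A Gaussian convolution identity gives the mean element in closed form:
#   mu_p(x) = sqrt(l^2 / (l^2 + 1)) * exp(-x^2 / (2 (l^2 + 1)))
#   p[mu_p] = sqrt(l^2 / (l^2 + 2))
l = 1.0

def k(x, y):
    return np.exp(-(x - y) ** 2 / (2 * l ** 2))

def mu_p(x):
    return np.sqrt(l**2 / (l**2 + 1)) * np.exp(-x**2 / (2 * (l**2 + 1)))

p_mu_p = np.sqrt(l**2 / (l**2 + 2))

def mmd_squared(x, w):
    # ||mu_p - mu_phat||^2 expanded via the reproducing property:
    # p[mu_p] - 2 sum_i w_i mu_p(x_i) + sum_ij w_i w_j k(x_i, x_j)
    K = k(x[:, None], x[None, :])
    return p_mu_p - 2.0 * w @ mu_p(x) + w @ K @ w

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
w = np.full(200, 1.0 / 200)
print(mmd_squared(x, w))  # small and non-negative for a reasonable rule
```

This expansion of the squared MMD is what all of the quadrature rules below seek to drive to zero.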

### 2.2 Bayesian Quadrature

Bayesian Quadrature (BQ) was originally introduced in [22] and later revisited by [11, 12] and [23]. The main idea is to place a functional prior on the integrand $f$, then update this prior through Bayes' theorem by conditioning on both the samples $\{x_i\}_{i=1}^{n}$ and the function evaluations $\{f(x_i)\}_{i=1}^{n}$ at those sample points. This induces a full posterior distribution over functions and hence over the value of the integral $p[f]$. The most common implementation assumes a Gaussian Process (GP) prior on $f$ [26]. A useful property motivating the use of GPs is that linear projection preserves normality, so that the posterior distribution for the integral is also Gaussian, characterised by its mean and covariance. A natural estimate of the integral is given by the mean of this posterior distribution, which can be compactly written as

$$\hat{p}_{\mathrm{BQ}}[f] \;=\; z^{\mathsf{T}} K^{-1} f, \tag{4}$$

where $f = (f(x_1), \ldots, f(x_n))^{\mathsf{T}}$, $z_i = \mu_p(x_i)$ and $K_{ij} = k(x_i, x_j)$. Notice that this estimator takes the form of a quadrature rule with weights $w^{\mathrm{BQ}} = K^{-1} z$. Recently, [27] showed how specific choices of kernel and design points for BQ can recover classical quadrature rules. This begs the question of how to select the design points $\{x_i\}_{i=1}^{n}$. A particularly natural approach aims to minimise the posterior uncertainty over the integral, which was shown in [16, Prop. 1] to equal:

$$v_{\mathrm{BQ}}\big(\{x_i\}_{i=1}^{n}\big) \;=\; p[\mu_p] - z^{\mathsf{T}} K^{-1} z \;=\; \mathrm{MMD}^2\big(\{x_i, w_i^{\mathrm{BQ}}\}_{i=1}^{n}\big). \tag{5}$$

Thus, in the RKHS setting, minimising the posterior variance corresponds to minimising the worst-case error of the quadrature rule. Below we refer to Optimal BQ (OBQ) as BQ coupled with design points chosen to globally minimise (5). We also call Sequential BQ (SBQ) the algorithm that greedily selects design points to give the greatest decrease in posterior variance at each iteration. OBQ will give improved results over SBQ, but cannot be implemented in general, whereas SBQ is comparatively straightforward to implement. There are currently no theoretical results establishing the convergence of BQ, OBQ or SBQ.
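The quantities in (4) and (5) are straightforward to compute whenever the mean element is analytic. Below is our own minimal sketch, reusing the illustrative EQ-kernel/Gaussian assumptions from earlier; `jitter` is a numerical regulariser we add for stability, not part of the method.

```python
import numpy as np

# Illustrative setting: EQ kernel with p = N(0, 1), closed-form mean element.
l = 1.0

def k(x, y):
    return np.exp(-(x - y) ** 2 / (2 * l ** 2))

def mu_p(x):
    return np.sqrt(l**2 / (l**2 + 1)) * np.exp(-x**2 / (2 * (l**2 + 1)))

p_mu_p = np.sqrt(l**2 / (l**2 + 2))

def bayesian_quadrature(x, fx, jitter=1e-10):
    """Posterior mean (4) and posterior variance (5) for fixed points x."""
    K = k(x[:, None], x[None, :]) + jitter * np.eye(len(x))
    z = mu_p(x)                     # z_i = mu_p(x_i)
    w_bq = np.linalg.solve(K, z)    # BQ weights K^{-1} z
    mean = w_bq @ fx                # z^T K^{-1} f
    var = p_mu_p - z @ w_bq         # p[mu_p] - z^T K^{-1} z = MMD^2
    return mean, var

x = np.linspace(-5.0, 5.0, 25)
mean, var = bayesian_quadrature(x, x ** 2)   # true integral is 1
print(mean, var)
```

Note that the weights `w_bq` are not constrained to be uniform, non-negative, or to sum to one, which is precisely the extra flexibility discussed in Section 2.1.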

Remark: (5) is independent of the observed function values $\{f(x_i)\}_{i=1}^{n}$. As such, no active learning is possible in SBQ (i.e. surprising function values never cause a revision of a planned sampling schedule). This is not always the case: for example, [12] approximately encodes non-negativity of the integrand into BQ, which leads to a dependence on the function values in the posterior variance. In this case sequential selection becomes an active strategy that outperforms batch selection in general.

### 2.3 Deriving Quadrature Rules via the Frank-Wolfe Algorithm

Despite the elegance of BQ, its convergence rates have not yet been rigorously established. In brief, this is because the BQ estimate corresponds to an orthogonal projection of $\mu_p$ onto the affine hull of $\{\phi(x_i)\}_{i=1}^{n}$, rather than e.g. the convex hull. Standard results from the optimisation literature apply to bounded domains, but the affine hull is not bounded (i.e. the BQ weights can be arbitrarily large and possibly negative). Below we describe a solution to the problem of computing integrals recently proposed by [2], based on the FW algorithm, which restricts attention to the (bounded) convex hull of the features $\{\phi(x) : x \in \mathcal{X}\}$.

The Frank-Wolfe (FW) algorithm (Alg. 1), also called the conditional gradient algorithm, is a convex optimisation method introduced in [9]. It considers problems of the form $\min_{g \in \mathcal{G}} J(g)$, where the function $J$ is convex and continuously differentiable. A particular case of interest in this paper is when the domain $\mathcal{G}$ is a compact and convex space of functions, as recently investigated in [17]. These assumptions imply the existence of a solution to the optimisation problem.

At each iteration $i$, the FW algorithm computes a linearisation of the objective function $J$ at the previous state $g_{i-1}$ along its gradient, and selects an 'atom' $\bar{g}_i \in \mathcal{G}$ that minimises the inner product between this gradient and elements of $\mathcal{G}$. The new state $g_i$ is then a convex combination of the previous state $g_{i-1}$ and the atom $\bar{g}_i$. This convex combination depends on a step size $\rho_i$, which is pre-determined; different versions of the algorithm may employ different step-size sequences.
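To make the iteration concrete, here is a minimal finite-dimensional sketch of our own (a toy problem, not the paper's algorithm): minimising J(g) = ½‖g − μ‖² over the convex hull of five atoms with the default step size ρ_i = 1/(i+1).

```python
import numpy as np

# Atoms: the vertices e_1, ..., e_5 of the probability simplex in R^5.
atoms = np.eye(5)
mu = atoms.mean(axis=0)          # target (0.2, ..., 0.2) lies in their hull

g = atoms[0].copy()              # initial state: a single atom
for i in range(1, 1000):
    grad = g - mu                            # gradient of J(g) = 0.5||g - mu||^2
    j = int(np.argmin(atoms @ grad))         # atom minimising the linearisation
    rho = 1.0 / (i + 1)                      # default step size
    g = (1.0 - rho) * g + rho * atoms[j]     # convex combination update

print(np.linalg.norm(g - mu))    # the iterates converge to mu
```

With this step-size sequence the final state is simply the average of all atoms selected so far, which is the sparsity property exploited below.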

Our goal in quadrature is to approximate the mean element $\mu_p$. Recently, [2] proposed to frame integration as a FW optimisation problem in which the domain $\mathcal{G}$ is a space of functions and the objective function is:

$$J(g) \;=\; \frac{1}{2} \big\| g - \mu_p \big\|_{\mathcal{H}}^{2}. \tag{6}$$

Minimising (6) gives an approximation of the mean element, and the objective takes the form of half the squared MMD (and hence half the BQ posterior variance). In this functional approximation setting, minimisation of $J$ is carried out over $\mathcal{M}$, the marginal polytope of the RKHS $\mathcal{H}$. The marginal polytope is defined as the closure of the convex hull of $\Phi = \{\phi(x) : x \in \mathcal{X}\}$, so that in particular $\mu_p \in \mathcal{M}$. Assuming as in [18] that $\phi$ is uniformly bounded in feature space (i.e. $\sup_{x \in \mathcal{X}} \|\phi(x)\|_{\mathcal{H}} < \infty$), $\mathcal{M}$ is a closed and bounded set over which $J$ can be optimised.

In order to define the algorithm rigorously in this case, we introduce the Fréchet derivative of $J$, denoted $DJ : \mathcal{H} \to \mathcal{H}^*$, where $\mathcal{H}^*$ is the dual space of $\mathcal{H}$; this is the unique map such that, for each $g \in \mathcal{H}$, $(DJ)(g)$ is the linear functional mapping $h \in \mathcal{H}$ to $\langle g - \mu_p, h \rangle_{\mathcal{H}}$. We also introduce the bilinear map $\cdot : \mathcal{H}^* \times \mathcal{H} \to \mathbb{R}$ which, for a functional $F \in \mathcal{H}^*$ and $h \in \mathcal{H}$, is the rule giving $F \cdot h = F(h)$.

A particular advantage of this method is that it leads to 'sparse' solutions which are convex combinations of the atoms $\{\bar{g}_i\}_{i=1}^{n}$ [2]. In particular, this provides a weighted estimate for the mean element:

$$\hat{\mu}_{\mathrm{FW}} \;:=\; g_n \;=\; \sum_{i=1}^{n} \Big( \prod_{j=i+1}^{n} (1 - \rho_{j-1}) \Big) \rho_{i-1}\, \bar{g}_i \;=:\; \sum_{i=1}^{n} w_i^{\mathrm{FW}}\, \bar{g}_i, \tag{7}$$

where by default $\rho_{i-1} = 1/i$, which leads to $w_i^{\mathrm{FW}} = 1/n$ for all $i$. A typical sequence of approximations to the mean element is shown in Fig. 1 (left), demonstrating that the approximation quickly converges to the ground truth (in black). Since minimisation of a linear function can be restricted to the extreme points of the domain, the atoms will be of the form $\bar{g}_i = \phi(x_i)$ for some $x_i \in \mathcal{X}$. The minimisation over $g \in \mathcal{M}$ in step 2 of Algorithm 1 therefore becomes a minimisation over $x \in \mathcal{X}$, so the algorithm provides us with design points. In practice, at each iteration $i$, the FW algorithm selects a design point $x_i$, which induces an atom $\bar{g}_i = \phi(x_i)$ and updates the approximation of the mean element $\mu_p$; we denote by $\hat{\mu}_{\mathrm{FW}}$ this approximation after $n$ iterations. Using the reproducing property, we can show that the FW estimate is a quadrature rule:

$$\hat{p}_{\mathrm{FW}}[f] \;=\; \big\langle f, \hat{\mu}_{\mathrm{FW}} \big\rangle_{\mathcal{H}} \;=\; \sum_{i=1}^{n} w_i^{\mathrm{FW}}\, f(x_i). \tag{8}$$

An extension known as FW with Line Search (FWLS) uses a line search to find the optimal step size at each iteration (see Alg. 1). Once again, the approximation obtained by FWLS has a sparse expression as a convex combination of all the previously visited states, and we obtain an associated quadrature rule. FWLS has theoretical convergence rates that can be stronger than those of the standard FW algorithm, at the price of a higher computational cost. The authors in [10] provide a survey of FW-based algorithms and their convergence rates under different regularity conditions on the objective function and the domain of optimisation.

Remark: The FW design points are generally not available in closed form. We follow mainstream literature by selecting, at each iteration, the point that minimises the MMD over a finite collection of candidate points, drawn i.i.d. from $p$. The authors in [18] proved that this approximation adds a small additional term to the MMD, so that theoretical results on FW convergence continue to apply provided that the size of the candidate set grows sufficiently quickly with $n$. Appendix A provides full details. In practice, one may also make use of a numerical optimisation scheme in order to select the points.
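The candidate-pool variant of FW quadrature can be sketched as follows (our own illustration, again assuming an EQ kernel with p = N(0, 1) and its closed-form mean element; with the default step size the FW weights are uniform, so this coincides with kernel herding):

```python
import numpy as np

# Illustrative setting: EQ kernel, p = N(0, 1), closed-form mean element.
l = 1.0

def k(x, y):
    return np.exp(-(x - y) ** 2 / (2 * l ** 2))

def mu_p(x):
    return np.sqrt(l**2 / (l**2 + 1)) * np.exp(-x**2 / (2 * (l**2 + 1)))

p_mu_p = np.sqrt(l**2 / (l**2 + 2))

rng = np.random.default_rng(0)
pool = rng.standard_normal(500)                 # candidates drawn i.i.d. from p

design = [float(pool[np.argmax(mu_p(pool))])]   # first atom
for i in range(1, 30):
    x = np.array(design)
    w = np.full(len(x), 1.0 / len(x))           # FW weights are uniform here
    # linearised objective <g_{i-1} - mu_p, phi(x)> at each candidate:
    lin = w @ k(x[:, None], pool[None, :]) - mu_p(pool)
    design.append(float(pool[np.argmin(lin)]))

x = np.array(design)
w_fw = np.full(len(x), 1.0 / len(x))
K = k(x[:, None], x[None, :])
mmd2_fw = p_mu_p - 2.0 * w_fw @ mu_p(x) + w_fw @ K @ w_fw
print(mmd2_fw)   # typically far smaller than for n = 30 Monte Carlo points
```

The selected `design` points are exactly the inputs that the hybrid method of the next section reweights.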

## 3 A Hybrid Approach: Frank-Wolfe Bayesian Quadrature

To combine the advantages of a probabilistic integrator with a formal convergence theory, we propose Frank-Wolfe Bayesian Quadrature (FWBQ). In FWBQ, we first select design points $\{x_i\}_{i=1}^{n}$ using the FW algorithm. However, when computing the quadrature approximation, instead of the usual FW weights $w^{\mathrm{FW}}$ we use the weights $w^{\mathrm{BQ}}$ provided by BQ. We denote this quadrature rule by $\hat{p}_{\mathrm{FWBQ}}$ and also consider $\hat{p}_{\mathrm{FWLSBQ}}$, which uses FWLS in place of FW. As we show below, these hybrid estimators (i) carry the Bayesian interpretation of Sec. 2.2, (ii) permit a rigorous theoretical analysis, and (iii) out-perform existing FW quadrature rules by orders of magnitude in simulations. FWBQ is hence ideally suited to probabilistic numerics applications.
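A minimal FWBQ sketch of our own, under the same illustrative assumptions as before (EQ kernel, p = N(0, 1), closed-form mean element; `jitter` is a numerical regulariser we add for stability): points are selected by FW, then the uniform FW weights are swapped for the BQ weights.

```python
import numpy as np

l = 1.0

def k(x, y):
    return np.exp(-(x - y) ** 2 / (2 * l ** 2))

def mu_p(x):
    return np.sqrt(l**2 / (l**2 + 1)) * np.exp(-x**2 / (2 * (l**2 + 1)))

p_mu_p = np.sqrt(l**2 / (l**2 + 2))

def mmd2(x, w):
    return p_mu_p - 2.0 * w @ mu_p(x) + w @ k(x[:, None], x[None, :]) @ w

# Step 1: FW point selection over a candidate pool drawn i.i.d. from p.
rng = np.random.default_rng(0)
pool = rng.standard_normal(500)
design = [float(pool[np.argmax(mu_p(pool))])]
for i in range(1, 20):
    x = np.array(design)
    w = np.full(len(x), 1.0 / len(x))
    lin = w @ k(x[:, None], pool[None, :]) - mu_p(pool)
    design.append(float(pool[np.argmin(lin)]))
x = np.array(design)

# Step 2: swap in the BQ weights K^{-1} z for the same design points.
w_fw = np.full(len(x), 1.0 / len(x))
K = k(x[:, None], x[None, :]) + 1e-10 * np.eye(len(x))
w_bq = np.linalg.solve(K, mu_p(x))

# BQ weights minimise the MMD for fixed points, so FWBQ can only improve:
print(mmd2(x, w_bq), mmd2(x, w_fw))
```

The design choice is deliberate: point selection inherits the FW convergence theory, while the reweighting inherits the Bayesian posterior interpretation.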

For these theoretical results we assume that $f$ belongs to a finite-dimensional RKHS $\mathcal{H}$, in line with recent literature [2, 10, 17, 18]. We further assume that $\mathcal{X}$ is a compact subset of $\mathbb{R}^d$ and that the kernel $k$ is continuous on $\mathcal{X} \times \mathcal{X}$. Under these hypotheses, Theorem 1 establishes consistency of the posterior mean, while Theorem 2 establishes contraction for the posterior distribution.

###### Theorem 1 (Consistency).

The posterior mean converges to the true value of the integral $p[f]$ at the following rates:

$$\big| p[f] - \hat{p}_{\mathrm{FWBQ}}[f] \big| \;\le\; \mathrm{MMD}\big(\{x_i, w_i\}_{i=1}^{n}\big) \;\le\; \begin{cases} \dfrac{2D^2}{R}\, n^{-1} & \text{for FWBQ} \\[6pt] \sqrt{2}\, D \exp\!\Big( -\dfrac{R^2}{2D^2}\, n \Big) & \text{for FWLSBQ} \end{cases} \tag{9}$$

where the FWBQ uses step size $\rho_{i-1} = 1/i$, $D$ is the diameter of the marginal polytope $\mathcal{M}$ and $R$ gives the radius of the smallest ball of centre $\mu_p$ included in $\mathcal{M}$.

Note that all the proofs of this paper can be found in Appendix B. An immediate corollary of Theorem 1 is that FWLSBQ has an asymptotic error which decreases exponentially in $n$ and is therefore superior to that of any QMC estimator [7]. This is not a contradiction: recall that QMC restricts attention to uniform weights, while FWLSBQ is able to propose arbitrary weightings. In addition, we highlight a robustness property: even when the assumptions of this section do not hold, one still obtains at least a slower rate of convergence for the posterior mean using either FWBQ or FWLSBQ [8].

Remark: The choice of kernel affects the convergence of the FWBQ method [15]. Clearly, we expect faster convergence if the function we are integrating is 'close' to the space of functions induced by our kernel. Indeed, the kernel specifies the geometry of the marginal polytope $\mathcal{M}$, which in turn directly determines the constants $D$ and $R$ associated with FW convex optimisation.

Consistency is only a stepping stone towards our main contribution which establishes posterior contraction rates for FWBQ. Posterior contraction is important as these results justify, for the first time, the probabilistic numerics approach to integration; that is, we show that the full posterior distribution is a sensible quantification (at least asymptotically) of numerical error in the integration routine:

###### Theorem 2 (Contraction).

Let $S$ be an open neighbourhood of the true integral $p[f]$ and let $\gamma := \inf_{r \in S^c} |r - p[f]| > 0$. Then the posterior probability mass on $S^c$ vanishes at the rate:

$$\mathrm{prob}(S^c) \;\le\; \begin{cases} \dfrac{2\sqrt{2}\, D^2}{\sqrt{\pi}\, R\, \gamma}\, n^{-1} \exp\!\Big( -\dfrac{\gamma^2 R^2}{8 D^4}\, n^2 \Big) & \text{for FWBQ} \\[8pt] \dfrac{2D}{\sqrt{\pi}\, \gamma} \exp\!\Big( -\dfrac{R^2}{2D^2}\, n - \dfrac{\gamma^2}{2\sqrt{2}\, D} \exp\!\Big( \dfrac{R^2}{2D^2}\, n \Big) \Big) & \text{for FWLSBQ} \end{cases} \tag{10}$$

where the FWBQ uses step size $\rho_{i-1} = 1/i$, $D$ is the diameter of the marginal polytope $\mathcal{M}$ and $R$ gives the radius of the smallest ball of centre $\mu_p$ included in $\mathcal{M}$.

The contraction rates are exponential for FWBQ and super-exponential for FWLSBQ, and thus the two algorithms enjoy both a probabilistic interpretation and rigorous theoretical guarantees. A notable corollary is that OBQ enjoys the same rates as FWLSBQ, resolving a conjecture by Tony O'Hagan that OBQ converges exponentially [personal communication]:

###### Corollary.

The consistency and contraction rates obtained for FWLSBQ apply also to OBQ.

## 4 Experimental Results

### 4.1 Simulation Study

To facilitate the experiments in this paper we followed [1, 2, 11, 18] and employed an exponentiated-quadratic (EQ) kernel $k(x, x') = \lambda^2 \exp\big( -\|x - x'\|_2^2 / (2\sigma^2) \big)$. This corresponds to an infinite-dimensional RKHS, not covered by our theory; nevertheless, we note that all simulations are practically finite-dimensional due to rounding at machine precision. See Appendix E for a finite-dimensional approximation using random Fourier features [25]. EQ kernels are popular in the BQ literature as, when $p$ is a mixture of Gaussians, the mean element $\mu_p$ is analytically tractable (see Appendix C). Some other pairs $(k, p)$ that produce analytic mean elements are discussed in [1].

For this simulation study, we took $p$ to be a 20-component mixture of 2D-Gaussian distributions. Monte Carlo (MC) is often used for such distributions but has a slow convergence rate of $O_P(n^{-1/2})$. FW and FWLS are known to converge more quickly and are in this sense preferable to MC [2]. In our simulations (Fig. 2, left), both our novel methods FWBQ and FWLSBQ decreased the MMD much faster than the FW/FWLS methods of [2]. Here, the same kernel hyper-parameters were employed for all methods to ensure a fair comparison. This suggests that the best quadrature rules correspond to elements outside the convex hull of $\Phi$. Examples of such rules, including BQ, often assign negative weights to features (Fig. S1 right, Appendix D).

The principal advantage of our proposed methods is that they reconcile theoretical tractability with a fully probabilistic interpretation. For illustration, Fig. 2 (right) plots the posterior uncertainty due to numerical error for a typical integration problem based on this $p$. In-depth empirical studies of such posteriors exist already in the literature and the reader is referred to [3, 13, 22] for details.

Beyond these theoretically tractable integrators, SBQ seems to give even better performance as $n$ increases. An intuitive explanation is that SBQ picks each design point to minimise the MMD directly, whereas FWBQ and FWLSBQ only minimise a linearisation of the MMD. In addition, the SBQ weights are optimal at each iteration, which is not true for FWBQ and FWLSBQ. We conjecture that Theorems 1 and 2 provide upper bounds on the rates of SBQ. This conjecture is partly supported by Fig. 1 (right), which shows that SBQ selects similar design points to FW/FWLS (but weights them optimally). Note also that FWBQ and FWLSBQ give very similar results. This is not surprising, as FWLS has no guarantees over FW in infinite-dimensional RKHSs [17].

### 4.2 Quantifying Numerical Error in a Proteomic Model Selection Problem

A topical bioinformatics application that extends recent work by [19] is presented. The objective is to select among a set of candidate models for protein regulation. This choice is based on a dataset of protein expression levels, in order to determine a 'most plausible' biological hypothesis for further experimental investigation. Each candidate model is specified by a vector of kinetic parameters (full details in Appendix D). Bayesian model selection requires that these parameters be integrated out against a prior in order to obtain a marginal likelihood term for each model. Our focus here is on obtaining the maximum a posteriori (MAP) model, defined as the maximiser of the posterior model probability (where we have assumed a uniform prior over model space). Numerical error in the computation of each marginal likelihood, if unaccounted for, could cause us to return a model that is different from the true MAP estimate and lead to the mis-allocation of valuable experimental resources.

The problem is quickly exaggerated when the number of models increases, as there are more opportunities for one of the marginal likelihood terms to be 'too large' due to numerical error. In [19], the number of models was combinatorial in the number of protein kinases measured in a high-throughput assay. This led [19] to deploy substantial computing resources to ensure that numerical error in each marginal likelihood estimate was individually controlled. Probabilistic numerics provides a more elegant and efficient solution: at any given stage, we have a fully probabilistic quantification of our uncertainty in each of the integrals, shown to be sensible both theoretically and empirically. This induces a full posterior distribution over numerical uncertainty in the location of the MAP estimate (i.e. 'Bayes all the way down'). As such we can determine, on-line, the precise point in the computational pipeline when numerical uncertainty near the MAP estimate becomes acceptably small, and cease further computation.

The FWBQ methodology was applied to one of the model selection tasks in [19]. In Fig. 3 (left) we display posterior model probabilities for each of the candidate models, where a low number of samples was used for each integral. (For display clarity, only the first 50 models are shown.) In this low-$n$ regime, numerical error introduces a second level of uncertainty that we quantify by combining the FWBQ error models for all integrals in the computational pipeline; this is summarised by a box plot (rather than a single point) for each of the models (obtained by sampling; details in Appendix D). These box plots reveal that our estimated posterior model probabilities are completely dominated by numerical error. In contrast, when $n$ is increased through 50, 100 and 200 (Fig. 3, right and Fig. S2), the uncertainty due to numerical error becomes negligible. At $n = 200$ we can conclude that the leading model is the true MAP estimate and further computations can be halted. Correctness of this result was confirmed using the more computationally intensive methods in [19].

In Appendix D we compared the relative performance of FWBQ, FWLSBQ and SBQ on this problem. Fig. S1 shows that the BQ weights reduced the MMD by orders of magnitude relative to FW and FWLS and that SBQ converged more quickly than both FWBQ and FWLSBQ.

## 5 Conclusions

This paper provides the first theoretical results for probabilistic integration, in the form of posterior contraction rates for FWBQ and FWLSBQ. This is an important step in the probabilistic numerics research programme [15] as it establishes a theoretical justification for using the posterior distribution as a model for the numerical integration error (which was previously assumed [11, 12, 20, 23, 27, e.g.]). The practical advantages conferred by a fully probabilistic error model were demonstrated on a model selection problem from proteomics, where sensitivity of an evaluation of the MAP estimate was modelled in terms of the error arising from repeated numerical integration.

The strengths and weaknesses of BQ (notably, including scalability in the dimension of the domain $\mathcal{X}$) are well-known and are inherited by our FWBQ methodology. We do not review these here but refer the reader to [22] for an extended discussion. Convergence, in the classical sense, was proven here to occur exponentially quickly for FWLSBQ, which partially explains the excellent performance of BQ and related methods seen in applications [12, 23], as well as resolving an open conjecture. As a bonus, the hybrid quadrature rules that we developed turned out to converge much faster in simulations than those in [2], which originally motivated our work.

A key open problem for kernel methods in probabilistic numerics is to establish protocols for the practical elicitation of kernel hyper-parameters. This is important as hyper-parameters directly affect the scale of the posterior over numerical error that we ultimately aim to interpret. Note that this problem applies equally to BQ, as well as related quadrature methods [2, 11, 12, 20] and more generally in probabilistic numerics [28]. Previous work, such as [13], optimised hyper-parameters on a per-application basis. Our ongoing research seeks automatic and general methods for hyper-parameter elicitation that provide good frequentist coverage properties for posterior credible intervals, but we reserve the details for a future publication.

#### Acknowledgments

The authors are grateful for discussions with Simon Lacoste-Julien, Simo Särkkä, Arno Solin, Dino Sejdinovic, Tom Gunter and Mathias Cronjäger. FXB was supported by EPSRC [EP/L016710/1]. CJO was supported by EPSRC [EP/D002060/1]. MG was supported by EPSRC [EP/J016934/1], an EPSRC Established Career Fellowship, the EU grant [EU/259348] and a Royal Society Wolfson Research Merit Award.

## References

• [1] F. Bach. On the Equivalence between Quadrature Rules and Random Features. arXiv:1502.06800, 2015.
• [2] F. Bach, S. Lacoste-Julien, and G. Obozinski. On the Equivalence between Herding and Conditional Gradient Algorithms. In Proceedings of the 29th International Conference on Machine Learning, pages 1359–1366, 2012.
• [3] Y. Chen, L. Bornn, N. de Freitas, M. Eskelin, J. Fang, and M. Welling. Herded Gibbs Sampling. Journal of Machine Learning Research, 2015. To appear.
• [4] Y. Chen, M. Welling, and A. Smola. Super-Samples from Kernel Herding. In

Proceedings of the Conference on Uncertainty in Artificial Intelligence

, pages 109–116, 2010.
• [5] P. Conrad, M. Girolami, S. Särkkä, A. Stuart, and K. Zygalakis. Probability Measures for Numerical Solutions of Differential Equations. arXiv:1506.04592, 2015.
• [6] P. Diaconis. Bayesian Numerical Analysis. Statistical Decision Theory and Related Topics IV, pages 163–175, 1988.
• [7] J. Dick and F. Pillichshammer. Digital Nets and Sequences - Discrepancy Theory and Quasi-Monte Carlo Integration. Cambridge University Press, 2010.
• [8] J. C. Dunn. Convergence Rates for Conditional Gradient Sequences Generated by Implicit Step Length Rules. SIAM Journal on Control and Optimization, 18(5):473–487, 1980.
• [9] M. Frank and P. Wolfe. An Algorithm for Quadratic Programming. Naval Research Logistics Quarterly, 3:95–110, 1956.
• [10] D. Garber and E. Hazan. Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets. In Proceedings of the 32nd International Conference on Machine Learning, pages 541–549, 2015.
• [11] Z. Ghahramani and C. Rasmussen. Bayesian Monte Carlo. In Advances in Neural Information Processing Systems, pages 489–496, 2003.
• [12] T. Gunter, R. Garnett, M. Osborne, P. Hennig, and S. Roberts. Sampling for Inference in Probabilistic Models with Fast Bayesian Quadrature. In Advances in Neural Information Processing Systems, 2014.
• [13] J.B. Hamrick and T.L. Griffiths. Mental Rotation as Bayesian Quadrature. In NIPS 2013 Workshop on Bayesian Optimization in Theory and Practice, 2013.
• [14] P. Hennig. Probabilistic Interpretation of Linear Solvers. SIAM Journal on Optimization, 25:234–260, 2015.
• [15] P. Hennig, M. Osborne, and M. Girolami. Probabilistic Numerics and Uncertainty in Computations. Proceedings of the Royal Society A, 471(2179), 2015.
• [16] F. Huszar and D. Duvenaud. Optimally-Weighted Herding is Bayesian Quadrature. In Uncertainty in Artificial Intelligence, pages 377–385, 2012.
• [17] M. Jaggi. Revisiting Frank-Wolfe: Projection-Free Sparse Convex Optimization. In Proceedings of the 30th International Conference on Machine Learning, volume 28, pages 427–435, 2013.
• [18] S. Lacoste-Julien, F. Lindsten, and F. Bach. Sequential Kernel Herding : Frank-Wolfe Optimization for Particle Filtering. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, pages 544–552, 2015.
• [19] C.J. Oates, F. Dondelinger, N. Bayani, J. Korkola, J.W. Gray, and S. Mukherjee. Causal Network Inference using Biochemical Kinetics. Bioinformatics, 30(17):i468–i474, 2014.
• [20] C.J. Oates, M. Girolami, and N. Chopin. Control Functionals for Monte Carlo Integration. arXiv: 1410.2392, 2015.
• [21] A. O’Hagan. Monte Carlo is Fundamentally Unsound. Journal of the Royal Statistical Society, Series D, 36(2):247–249, 1984.
• [22] A. O’Hagan. Bayes-Hermite Quadrature. Journal of Statistical Planning and Inference, 29:245–260, 1991.
• [23] M. Osborne, R. Garnett, S. Roberts, C. Hart, S. Aigrain, and N. Gibson. Bayesian Quadrature for Ratios. In Proceedings of the 15th International Conference on Artificial Intelligence and Statistics, pages 832–840, 2012.
• [24] A. B. Owen. A Constraint on Extensible Quadrature Rules. Numerische Mathematik, pages 1–8, 2015.
• [25] A. Rahimi and B. Recht. Random Features for Large-Scale Kernel Machines. In Advances in Neural Information Processing Systems, 2007.
• [26] C. Rasmussen and C.K.I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
• [27] S. Särkkä, J. Hartikainen, L. Svensson, and F. Sandblom. On the Relation between Gaussian Process Quadratures and Sigma-Point Methods. arXiv:1504.05994, 2015.
• [28] M. Schober, D. Duvenaud, and P. Hennig. Probabilistic ODE solvers with Runge-Kutta means. In Advances in Neural Information Processing Systems 27, pages 739–747, 2014.
• [29] B. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G. Lanckriet. Hilbert Space Embeddings and Metrics on Probability Measures. Journal of Machine Learning Research, 11:1517–1561, 2010.
• [30] R.A. Weinberg. The Biology of Cancer, volume 1. Garland Science, 2006.

## Supplementary Material

### Appendix A: Details for the FWBQ and FWLSBQ Algorithms

A high-level pseudo-code description for the Frank-Wolfe Bayesian Quadrature (FWBQ) algorithm is provided below.

Frank-Wolfe Line-Search Bayesian Quadrature (FWLSBQ) is obtained simply by substituting the Frank-Wolfe Line-Search algorithm for the Frank-Wolfe algorithm. In this appendix, we derive all of the expressions necessary to implement both the FW and FWLS algorithms (for quadrature) in practice. All of the other steps can be derived from the relevant equations as highlighted in Algorithm 2 above.

The FW and FWLS algorithms are both initialised by the user choosing a design point $x_1 \in \mathcal{X}$. This can be done either at random or by choosing a location which is known to have high probability mass under $p$. The first approximation to the mean element $\mu_p$ is therefore given by $g_1 = \Phi(x_1)$. The algorithm then loops over the following three steps to obtain new design points $x_2, x_3, \ldots$:

Step 1) Obtaining the new Frank-Wolfe design point $x_i$.

At iteration $i$, this step consists of choosing the new point $x_i \in \mathcal{X}$. Let $w^{(i-1)}_1, \ldots, w^{(i-1)}_{i-1}$ denote the Frank-Wolfe weights assigned to each of the previous design points at this new iteration. The new design point is found by computing the derivative $(DJ)(g_{i-1})$ of the objective function and locating the element of $\mathcal{G}$ which minimises the inner product:

$$ \bar{g}_i = \operatorname*{arg\,min}_{g \in \mathcal{G}} \; \bigl\langle g, (DJ)(g_{i-1}) \bigr\rangle_{\mathcal{H}} \tag{11} $$

To do so, we need an equivalent expression for the minimisation of the linearisation of $J$ (denoted $DJ$) in terms of kernel values and evaluations of the mean element $\mu_p$. Since the minimisation of a linear function over a convex set can be restricted to the extreme points of the domain, we have that

$$ \operatorname*{arg\,min}_{g \in \mathcal{G}} \; \bigl\langle g, (DJ)(g_{i-1}) \bigr\rangle_{\mathcal{H}} \;=\; \operatorname*{arg\,min}_{x \in \mathcal{X}} \; \bigl\langle \Phi(x), (DJ)(g_{i-1}) \bigr\rangle_{\mathcal{H}}. \tag{12} $$

Then, using the definition of the derivative $(DJ)(g) = g - \mu_p$, we have:

$$ \operatorname*{arg\,min}_{x \in \mathcal{X}} \; \bigl\langle \Phi(x), (DJ)(g_{i-1}) \bigr\rangle_{\mathcal{H}} \;=\; \operatorname*{arg\,min}_{x \in \mathcal{X}} \; \bigl\langle \Phi(x), g_{i-1} - \mu_p \bigr\rangle_{\mathcal{H}}, \tag{13} $$

where

$$ \bigl\langle \Phi(x), g_{i-1} - \mu_p \bigr\rangle_{\mathcal{H}} = \Bigl\langle \Phi(x), \sum_{l=1}^{i-1} w^{(i-1)}_l \Phi(x_l) - \mu_p \Bigr\rangle_{\mathcal{H}} = \sum_{l=1}^{i-1} w^{(i-1)}_l \bigl\langle \Phi(x), \Phi(x_l) \bigr\rangle_{\mathcal{H}} - \bigl\langle \Phi(x), \mu_p \bigr\rangle_{\mathcal{H}} = \sum_{l=1}^{i-1} w^{(i-1)}_l k(x, x_l) - \mu_p(x). \tag{14} $$

Our new design point $x_i$ is therefore the point which minimises this expression. Note that this objective may not be convex and approximate methods may be required to find the minimum. To do so, we sample $M$ points independently from the distribution $p$ (where $M$ is large) and pick the sample which minimises the expression above. From [18], this introduces an additive error term whose size decreases with $M$, which does not impact our convergence analysis provided that this error vanishes sufficiently quickly. In all experiments, $M$ was taken large enough that this error is negligible.

It is important to note that sampling from $p$ is unlikely to be the best way to optimise this expression. One may, for example, do better with any optimisation method that does not require convexity (for example, Bayesian Optimisation). However, we have used sampling because the result from [18] discussed above provides a theoretical upper bound on the error introduced.
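As a concrete illustration, the sampling-based approximation to the vertex step (the minimisation of Eqn (14)) can be sketched in a few lines. This is a sketch under illustrative assumptions, not the authors' implementation: we assume a one-dimensional Gaussian kernel with $\lambda = 1$ and target $p = N(0,1)$, for which the mean element is available in closed form (a special case of Appendix C).

```python
import numpy as np

def gauss_kernel(a, b, sigma=1.0):
    # pairwise exponentiated-quadratic kernel, shape (len(a), len(b))
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * sigma ** 2))

def mu_p(x, sigma=1.0):
    # closed-form mean element for p = N(0, 1) and the kernel above
    # (a special case of the Appendix C expressions with lambda = 1, d = 1)
    return sigma / np.sqrt(1.0 + sigma ** 2) * np.exp(-x ** 2 / (2.0 * (1.0 + sigma ** 2)))

def next_design_point(rng, M, x_prev, w_prev, sigma=1.0):
    # Sample M candidates from p and pick the minimiser of
    #   sum_l w_l k(x, x_l) - mu_p(x)   (the objective of Eqn (14))
    candidates = rng.normal(size=M)
    objective = gauss_kernel(candidates, x_prev, sigma) @ w_prev - mu_p(candidates, sigma)
    return candidates[np.argmin(objective)]
```

With no previous design points the objective reduces to $-\mu_p(x)$, so the first selected point is the sampled candidate with the largest mean-element value, near the mode of $p$.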

Step 2) Computing the Step-Sizes and Weights for the Frank-Wolfe and Frank-Wolfe Line-Search Algorithms.

The weights assigned by the FW/FWLS algorithms to each of the design points are obtained using the equation:

$$ w^{(i)}_l = \rho_{l-1} \prod_{j=l+1}^{i} \bigl(1 - \rho_{j-1}\bigr) \tag{15} $$

Clearly, this expression depends on the choice of step-sizes $\rho_j$. In the case of the standard Frank-Wolfe algorithm, this step-size sequence is an input to the algorithm, and so computing the weights is straightforward. In the case of the Frank-Wolfe Line-Search algorithm, however, the step-size is optimised at each iteration so as to decrease the objective $J$ as much as possible.
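A minimal sketch of Eqn (15), under the convention $\rho_0 = 1$ so that the first design point initially carries all the mass (the function name and array layout are our own):

```python
import numpy as np

def fw_weights(rho):
    # rho[j-1] holds the step-size rho_{j-1} of Eqn (15); rho[0] = 1 by convention.
    i = len(rho)
    w = np.empty(i)
    for l in range(1, i + 1):
        # w_l^{(i)} = rho_{l-1} * prod_{j=l+1}^{i} (1 - rho_{j-1})
        w[l - 1] = rho[l - 1] * np.prod([1.0 - rho[j - 1] for j in range(l + 1, i + 1)])
    return w

# The standard FW step-size rho_i = 1/(i+1) yields uniform weights 1/n,
# recovering kernel herding.
rho = np.array([1.0] + [1.0 / (i + 1) for i in range(1, 5)])
weights = fw_weights(rho)   # all entries equal to 1/5
```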

In the case of computing integrals, this optimisation step can be carried out analytically, with the solution expressed in terms of kernel values and evaluations of the mean element.

First, from the definition of $J$:

$$ J\bigl((1-\rho)g_{i-1} + \rho\bar{g}_i\bigr) = \tfrac{1}{2}\bigl\| (1-\rho)g_{i-1} + \rho\bar{g}_i - \mu_p \bigr\|_{\mathcal{H}}^2 = \tfrac{1}{2}\bigl\| g_{i-1} - \mu_p \bigr\|_{\mathcal{H}}^2 + \rho \bigl\langle g_{i-1} - \mu_p,\, \bar{g}_i - g_{i-1} \bigr\rangle_{\mathcal{H}} + \tfrac{\rho^2}{2}\bigl\| \bar{g}_i - g_{i-1} \bigr\|_{\mathcal{H}}^2. \tag{16} $$

Taking the derivative of this expression with respect to $\rho$, we get:

$$ \frac{\partial}{\partial\rho} J\bigl((1-\rho)g_{i-1} + \rho\bar{g}_i\bigr) = \bigl\langle g_{i-1} - \mu_p,\, \bar{g}_i - g_{i-1} \bigr\rangle_{\mathcal{H}} + \rho \bigl\| \bar{g}_i - g_{i-1} \bigr\|_{\mathcal{H}}^2. \tag{17} $$

Setting this derivative to zero gives the following optimum:

$$ \rho_i = \frac{\bigl\langle \mu_p - g_{i-1},\, \bar{g}_i - g_{i-1} \bigr\rangle_{\mathcal{H}}}{\bigl\| \bar{g}_i - g_{i-1} \bigr\|_{\mathcal{H}}^2}. \tag{18} $$

Clearly, differentiating a second time with respect to $\rho$ gives $\| \bar{g}_i - g_{i-1} \|_{\mathcal{H}}^2$, which is non-negative, so $\rho_i$ is indeed a minimum. One can show, using geometrical arguments about the marginal polytope, that $\rho_i$ will lie in $[0,1]$ [17].

The numerator of this line-search expression is

$$ \bigl\langle \mu_p - g_{i-1},\, \bar{g}_i - g_{i-1} \bigr\rangle_{\mathcal{H}} = \mu_p(x_i) - \sum_{l=1}^{i-1} w^{(i-1)}_l \mu_p(x_l) - \sum_{l=1}^{i-1} w^{(i-1)}_l k(x_l, x_i) + \sum_{l=1}^{i-1}\sum_{m=1}^{i-1} w^{(i-1)}_l w^{(i-1)}_m k(x_l, x_m). \tag{19} $$

Similarly the denominator is

$$ \bigl\| \bar{g}_i - g_{i-1} \bigr\|_{\mathcal{H}}^2 = k(x_i, x_i) - 2\sum_{l=1}^{i-1} w^{(i-1)}_l k(x_l, x_i) + \sum_{l=1}^{i-1}\sum_{m=1}^{i-1} w^{(i-1)}_l w^{(i-1)}_m k(x_l, x_m). \tag{20} $$

Clearly all expressions provided here can be vectorised for efficient computational implementation.
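For instance, assuming the kernel matrix over the design points and the mean-element evaluations have been precomputed, the closed-form step-size of Eqn (18), with numerator (19) and denominator (20), can be sketched as follows (the function and argument names are our own):

```python
import numpy as np

def line_search_step(K, mu, w_prev):
    """FWLS step-size rho_i of Eqn (18).

    K      : (i, i) kernel matrix over x_1, ..., x_i (x_i is the new point)
    mu     : (i,) evaluations mu_p(x_1), ..., mu_p(x_i)
    w_prev : (i-1,) FW weights of the previous iterate g_{i-1}
    """
    Kp, k_new = K[:-1, :-1], K[:-1, -1]
    quad = w_prev @ Kp @ w_prev
    # numerator, Eqn (19): <mu_p - g_{i-1}, gbar_i - g_{i-1}>
    num = mu[-1] - w_prev @ mu[:-1] - w_prev @ k_new + quad
    # denominator, Eqn (20): ||gbar_i - g_{i-1}||^2
    den = K[-1, -1] - 2.0 * (w_prev @ k_new) + quad
    return num / den
```

As a sanity check with explicit two-dimensional features $\Phi(x_1) = (1,0)$, $\Phi(x_2) = (0,1)$ and $\mu_p = (\tfrac{1}{2}, \tfrac{1}{2})$, the formula returns $\rho = \tfrac{1}{2}$, the midpoint minimiser.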

Step 3) Computing a new approximation of the mean element.

The final step consists of updating the approximation of the mean element, which can be done directly by setting:

$$ g_i = (1-\rho_i)\, g_{i-1} + \rho_i\, \bar{g}_i \tag{21} $$
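In terms of the weight vector, the update (21) shrinks the previous weights by $(1-\rho_i)$ and assigns weight $\rho_i$ to the new design point; a one-line sketch (our own notation):

```python
import numpy as np

def update_weights(w_prev, rho_i):
    # g_i = (1 - rho_i) g_{i-1} + rho_i Phi(x_i): previous weights shrink by
    # (1 - rho_i) and the new design point receives weight rho_i.
    return np.append((1.0 - rho_i) * w_prev, rho_i)

w = np.array([1.0])
w = update_weights(w, 1.0 / 2.0)   # [1/2, 1/2]
w = update_weights(w, 1.0 / 3.0)   # [1/3, 1/3, 1/3]
```

Iterating with the step-sizes $\rho_i = 1/(i+1)$ reproduces the uniform weights of Eqn (15).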

### Appendix B: Proofs of Theorems and Corollaries

###### Theorem (Consistency).

The posterior mean converges to the true integral at the following rates:

$$ \bigl| p[f] - \hat{p}_{\mathrm{FWBQ}}[f] \bigr| \le \mathrm{MMD}\bigl(\{x_i, w_i\}_{i=1}^{n}\bigr) \le \begin{cases} 2D^2 R^{-1}\, n^{-1} & \text{for FWBQ} \\ \sqrt{2}\, D \exp\!\left(-\dfrac{R^2}{2D^2}\, n\right) & \text{for FWLSBQ} \end{cases} $$

where the FWBQ uses step-size $\rho_i = 1/(i+1)$, $D$ is the diameter of the marginal polytope $\mathcal{G}$, and $R$ is the radius of the smallest ball of centre $\mu_p$ included in $\mathcal{G}$.

The posterior mean in BQ is a Bayes estimator and so the MMD takes a minimax form [16]. In particular, the BQ weights perform no worse than the FW weights:

$$ \mathrm{MMD}\bigl(\{x^{FW}_i, w^{BQ}_i\}_{i=1}^n\bigr) = \inf_{w \in \mathbb{R}^n} \mathrm{MMD}\bigl(\{x^{FW}_i, w_i\}_{i=1}^n\bigr) \le \mathrm{MMD}\bigl(\{x^{FW}_i, w^{FW}_i\}_{i=1}^n\bigr). \tag{22} $$

Now, the values attained by the objective function along the path determined by the FW(/FWLS) algorithm can be expressed in terms of the MMD as follows:

$$ J(g_n) = \tfrac{1}{2}\bigl\| \hat{\mu}_{FW} - \mu_p \bigr\|_{\mathcal{H}}^2 = \tfrac{1}{2}\, \mathrm{MMD}^2\bigl(\{x^{FW}_i, w^{FW}_i\}_{i=1}^n\bigr). \tag{23} $$

Combining (22) and (23) gives

$$ \bigl| p[f] - \hat{p}_{\mathrm{FWBQ}}[f] \bigr| \le \mathrm{MMD}\bigl(\{x^{FW}_i, w^{BQ}_i\}_{i=1}^n\bigr)\, \|f\|_{\mathcal{H}} \le 2^{1/2} J^{1/2}(g_n), \tag{24} $$

since $\|f\|_{\mathcal{H}} \le 1$. To complete the proof we leverage recent analysis of the FW algorithm with steps $\rho_i = 1/(i+1)$ and the FWLS algorithm. Specifically, from [2, Prop. 1] we have that:

$$ J(g_n) \le \begin{cases} 2D^4 R^{-2}\, n^{-2} & \text{for FW with step-size } \rho_i = 1/(i+1) \\ D^2 \exp\!\left(-\dfrac{R^2}{D^2}\, n\right) & \text{for FWLS} \end{cases} \tag{25} $$

where $D$ is the diameter of the marginal polytope $\mathcal{G}$ and $R$ is the radius of the smallest ball centred at $\mu_p$ included in $\mathcal{G}$. ∎

###### Theorem (Contraction).

Let $S$ be an open neighbourhood of the true integral $p[f]$ and let $\gamma = \inf_{r \in S^c} |r - p[f]|$. Then the posterior probability mass on $S^c$ vanishes at a rate:

$$ \mathrm{prob}(S^c) \le \begin{cases} \dfrac{2\sqrt{2}\, D^2}{\sqrt{\pi}\, R \gamma}\, n^{-1} \exp\!\left(-\dfrac{\gamma^2 R^2}{8 D^4}\, n^2\right) & \text{for FWBQ, } \rho_i = 1/(i+1) \\[2mm] \dfrac{2D}{\sqrt{\pi}\, \gamma} \exp\!\left(-\dfrac{R^2}{2D^2}\, n - \dfrac{\gamma^2}{4D^2} \exp\!\left(\dfrac{R^2}{D^2}\, n\right)\right) & \text{for FWLSBQ} \end{cases} $$

where $D$ is the diameter of the marginal polytope $\mathcal{G}$ and $R$ is the radius of the smallest ball of centre $\mu_p$ included in $\mathcal{G}$.

We will obtain the posterior contraction rates of interest using the bounds on the MMD provided in the proof of Theorem 1. Given an open neighbourhood $S$ of $p[f]$, we have that the complement $S^c = \mathbb{R} \setminus S$ is closed in $\mathbb{R}$. We assume without loss of generality that $S^c \neq \emptyset$, since the posterior mass on $S^c$ is trivially zero when $S^c = \emptyset$. Since $S^c$ is closed, the distance $\gamma = \inf_{r \in S^c} |r - p[f]|$ is strictly positive. Denote the posterior distribution by $N(m_n, \sigma_n^2)$, where $m_n$ is the posterior mean $\hat{p}_{BQ}[f]$ and $\sigma_n = \mathrm{MMD}(\{x_i, w^{BQ}_i\}_{i=1}^n)$. Directly from the supremum definition of the MMD we have:

$$ \bigl| p[f] - m_n \bigr| \le \sigma_n\, \|f\|_{\mathcal{H}}. \tag{26} $$

Now the posterior probability mass on $S^c$ is given by

$$ M_n = \int_{S^c} \phi(r \mid m_n, \sigma_n)\, \mathrm{d}r, \tag{27} $$

where $\phi(r \mid m_n, \sigma_n)$ is the p.d.f. of the posterior normal distribution. By the definition of $\gamma$ we get the upper bound:

$$ M_n \le \int_{-\infty}^{p[f]-\gamma} \phi(r \mid m_n, \sigma_n)\, \mathrm{d}r + \int_{p[f]+\gamma}^{\infty} \phi(r \mid m_n, \sigma_n)\, \mathrm{d}r \tag{28} $$
$$ \phantom{M_n} = 1 + \Phi\!\left(\frac{p[f]-m_n}{\sigma_n} - \frac{\gamma}{\sigma_n}\right) - \Phi\!\left(\frac{p[f]-m_n}{\sigma_n} + \frac{\gamma}{\sigma_n}\right). \tag{29} $$

From (26) we have that the terms $(p[f]-m_n)/\sigma_n$ are bounded by $\|f\|_{\mathcal{H}}$ as $n \to \infty$, so that asymptotically we have:

$$ M_n \lesssim 1 + \Phi(-\gamma/\sigma_n) - \Phi(\gamma/\sigma_n) \tag{30} $$
$$ \phantom{M_n} = \mathrm{erfc}\bigl(\gamma / \sqrt{2}\sigma_n\bigr) \sim \bigl(\sqrt{2}\sigma_n / \sqrt{\pi}\gamma\bigr) \exp\bigl(-\gamma^2 / 2\sigma_n^2\bigr). \tag{31} $$

Finally we may substitute the asymptotic results derived in the proof of Theorem 1 for the MMD into (31) to complete the proof. ∎
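The asymptotic equivalence used in (31), $\mathrm{erfc}(z) \sim \exp(-z^2)/(z\sqrt{\pi})$ as $z \to \infty$, can be checked numerically; this is a self-contained sanity check, not part of the proof:

```python
import math

z = 6.0  # plays the role of gamma / (sqrt(2) * sigma_n), which diverges with n
asymptotic = math.exp(-z * z) / (z * math.sqrt(math.pi))
exact = math.erfc(z)
relative_error = abs(asymptotic / exact - 1.0)
# the leading correction to the asymptotic is of size 1/(2 z^2), here about 1.4%
assert relative_error < 0.02
```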

###### Corollary.

The consistency and contraction rates obtained for FWLSBQ apply also to OBQ.

By definition, OBQ chooses samples that globally minimise the MMD and we can hence bound this quantity from above by the MMD of FWLSBQ:

$$ \mathrm{MMD}\bigl(\{x^{OBQ}_i, w^{BQ}_i\}_{i=1}^n\bigr) = \inf_{\{x_i\}_{i=1}^n \subset \mathcal{X}} \mathrm{MMD}\bigl(\{x_i, w^{BQ}_i\}_{i=1}^n\bigr) \le \mathrm{MMD}\bigl(\{x^{FW}_i, w^{BQ}_i\}_{i=1}^n\bigr). \tag{32} $$

Consistency and contraction follow from inserting this inequality into the above proofs. ∎

### Appendix C: Computing the Mean Element for the Simulation Study

We compute an expression for the mean element $\mu_p$ in the case where $k$ is an exponentiated-quadratic kernel with length-scale hyper-parameter $\sigma$:

$$ k(x, x') := \lambda^2 \exp\!\left(-\frac{\sum_{i=1}^{d} (x_i - x'_i)^2}{2\sigma^2}\right) = \lambda^2 \bigl(\sqrt{2\pi}\sigma\bigr)^{d}\, \phi\bigl(x \mid x', \Sigma_\sigma\bigr), \tag{33} $$

where $\Sigma_\sigma$ is a $d$-dimensional diagonal matrix with entries $\sigma^2$, and where $p$ is a mixture of $d$-dimensional Gaussian distributions:

$$ p(x) = \sum_{l=1}^{L} \rho_l\, \phi\bigl(x \mid \mu_l, \Sigma_l\bigr). \tag{34} $$

(Note that, in this section only, $x_i$ denotes the $i$th component of the vector $x$.) Using properties of Gaussian distributions (see Appendix A.2 of [26]) we obtain

$$ \mu_p(x) = \int_{\mathcal{X}} k(x, x')\, p(x')\, \mathrm{d}x' = \lambda^2 \bigl(\sqrt{2\pi}\sigma\bigr)^{d} \sum_{l=1}^{L} \rho_l \int \phi\bigl(x \mid x', \Sigma_\sigma\bigr)\, \phi\bigl(x' \mid \mu_l, \Sigma_l\bigr)\, \mathrm{d}x' = \lambda^2 \bigl(\sqrt{2\pi}\sigma\bigr)^{d} \sum_{l=1}^{L} \rho_l\, a_l^{-1}, \tag{35} $$

where we have:

$$ a_l^{-1} = (2\pi)^{-\frac{d}{2}} \bigl|\Sigma_\sigma + \Sigma_l\bigr|^{-\frac{1}{2}} \exp\!\left(-\tfrac{1}{2}(x - \mu_l)^{T} (\Sigma_\sigma + \Sigma_l)^{-1} (x - \mu_l)\right). \tag{36} $$

This last expression is in fact itself a Gaussian probability density function, $\phi(x \mid \mu_l, \Sigma_l + \Sigma_\sigma)$, and we hence obtain:

$$ \mu_p(x) := \lambda^2 \bigl(\sqrt{2\pi}\sigma\bigr)^{d} \sum_{l=1}^{L} \rho_l\, \phi\bigl(x \mid \mu_l, \Sigma_l + \Sigma_\sigma\bigr). \tag{37} $$

Finally, we once again use properties of Gaussians to obtain

$$ \begin{aligned} \int_{-\infty}^{\infty} \mu_p(x)\, p(x)\, \mathrm{d}x &= \int_{-\infty}^{\infty} \left[\lambda^2 \bigl(\sqrt{2\pi}\sigma\bigr)^{d} \sum_{l=1}^{L} \rho_l\, \phi\bigl(x \mid \mu_l, \Sigma_l + \Sigma_\sigma\bigr)\right] \times \left[\sum_{m=1}^{L} \rho_m\, \phi\bigl(x \mid \mu_m, \Sigma_m\bigr)\right] \mathrm{d}x \\ &= \lambda^2 \bigl(\sqrt{2\pi}\sigma\bigr)^{d} \sum_{l=1}^{L} \sum_{m=1}^{L} \rho_l \rho_m \int_{-\infty}^{\infty} \phi\bigl(x \mid \mu_l, \Sigma_l + \Sigma_\sigma\bigr)\, \phi\bigl(x \mid \mu_m, \Sigma_m\bigr)\, \mathrm{d}x \\ &= \lambda^2 \bigl(\sqrt{2\pi}\sigma\bigr)^{d} \sum_{l=1}^{L} \sum_{m=1}^{L} \rho_l \rho_m\, a_{lm}^{-1} = \lambda^2 \bigl(\sqrt{2\pi}\sigma\bigr)^{d} \sum_{l=1}^{L} \sum_{m=1}^{L} \rho_l \rho_m\, \phi\bigl(\mu_l \mid \mu_m, \Sigma_l + \Sigma_m + \Sigma_\sigma\bigr). \end{aligned} \tag{38} $$

Other combinations of kernel and density that give rise to an analytic mean element can be found in the references of [1].
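For the one-dimensional case, Eqns (37) and (38) translate directly into code. This is a sketch under the assumption $d = 1$; the helper `norm_pdf` and all variable names are our own:

```python
import numpy as np

def norm_pdf(x, mean, var):
    # univariate Gaussian density phi(x | mean, var)
    return np.exp(-(x - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def mean_element(x, lam, sigma, rho, mus, variances):
    # Eqn (37) with d = 1:
    #   mu_p(x) = lam^2 sqrt(2 pi) sigma * sum_l rho_l phi(x | mu_l, var_l + sigma^2)
    s = sum(r * norm_pdf(x, m, v + sigma ** 2)
            for r, m, v in zip(rho, mus, variances))
    return lam ** 2 * np.sqrt(2.0 * np.pi) * sigma * s

def integral_of_mean_element(lam, sigma, rho, mus, variances):
    # Eqn (38) with d = 1: p[mu_p] as a double sum over mixture components
    s = sum(rl * rm * norm_pdf(ml, mm, vl + vm + sigma ** 2)
            for rl, ml, vl in zip(rho, mus, variances)
            for rm, mm, vm in zip(rho, mus, variances))
    return lam ** 2 * np.sqrt(2.0 * np.pi) * sigma * s

# For a single standard-Gaussian component with lam = sigma = 1, these give
# mu_p(0) = 1/sqrt(2) and p[mu_p] = 1/sqrt(3).
```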

### Appendix D: Details of the Application to Proteomics Data

Description of the Model Choice Problem

The ‘CheMA’ methodology described in [19] contains several elements that we do not attempt to reproduce in full here; in particular we do not attempt to provide a detailed motivation for the mathematical forms presented below, as this requires elements from molecular chemistry. For our present purposes it will be sufficient to define the statistical models and to clearly specify the integration problems that are to be solved. We refer the reader to [19] and the accompanying supplementary materials for a full biological background.

Denote by $\mathcal{D}$ the dataset containing normalised measured expression levels $y_S(t_n)$ and $y^*_S(t_n)$ for, respectively, the unphosphorylated and phosphorylated forms of a protein of interest ('substrate') $S$ in a longitudinal experiment at time $t_n$. In addition, $\mathcal{D}$ contains normalised measured expression levels $y^*_{E_j}(t_n)$ for a set of possible regulator kinases ('enzymes', here phosphorylated proteins) that we denote by $E_1, \ldots, E_J$.

An important scientific goal is to identify the roles of enzymes (or 'kinases') in protein signalling; in this case the problem takes the form of variable selection and we are interested to discover which enzymes must be included in a model for regulation of the substrate $S$. Specifically, a candidate model $M_i$ specifies which enzymes in the set $\{E_1, \ldots, E_J\}$ are regulators of the substrate $S$, for example $M_i = \{E_1, E_2\}$. Following [19] we consider models containing at most two enzymes, as well as the model containing no enzymes.

Given a dataset $\mathcal{D}$ and model $M_i$, we can write down a likelihood function as follows:

$$ L(\theta_i; M_i) = \prod_{n=1}^{N} \phi\!\left(\frac{y^*_S(t_{n+1}) - y^*_S(t_n)}{t_{n+1} - t_n} \;\middle|\; -V_0 \frac{y^*_S(t_n)}{y^*_S(t_n) + K_0} + \sum_{E_j \in M_i} V_j \frac{y^*_{E_j}(t_n)\, y_S(t_n)}{y_S(t_n) + K_j},\; \sigma^2_{\mathrm{err}}\right). \tag{39} $$

Here the model parameters are $\theta_i = (V, K, \sigma_{\mathrm{err}})$, where $V$ collects the $V_j$ and $K$ collects the $K_j$; $\phi(\cdot \mid m, \sigma^2)$ is the normal p.d.f. and the mathematical forms arise from the Michaelis-Menten theory of enzyme kinetics. The $V_j$ are known as 'maximum reaction rates' and the $K_j$ are known as 'Michaelis-Menten parameters'. This is classical chemical notation, not to be confused with the kernel matrix $K$ from the main text. The final parameter $\sigma_{\mathrm{err}}$ defines the error magnitude for this 'approximate gradient-matching' statistical model.
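A sketch of the log of the likelihood (39); the array layout and names (`yS`, `ySp`, `yE`, `model`) are our own assumptions about how the data might be stored, not part of the CheMA implementation:

```python
import numpy as np

def log_likelihood(t, yS, ySp, yE, V, K, sigma_err, model):
    """log L(theta_i; M_i) from Eqn (39).

    t, yS, ySp : (N+1,) arrays of times, y_S(t_n) and y*_S(t_n)
    yE         : dict mapping enzyme index j to a (N+1,) array of y*_{E_j}(t_n)
    V, K       : dicts of maximum reaction rates V_j and Michaelis-Menten constants K_j
    model      : iterable of enzyme indices j with E_j in M_i
    """
    grad = np.diff(ySp) / np.diff(t)              # finite-difference gradient
    mean = -V[0] * ySp[:-1] / (ySp[:-1] + K[0])   # substrate self-term
    for j in model:
        mean += V[j] * yE[j][:-1] * yS[:-1] / (yS[:-1] + K[j])
    resid = grad - mean
    n = resid.size
    return (-0.5 * n * np.log(2.0 * np.pi * sigma_err ** 2)
            - 0.5 * np.sum(resid ** 2) / sigma_err ** 2)
```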

The prior specification proposed in [19] and followed here is

$$ \begin{aligned} K &\sim \phi_T\bigl(K \mid 1, 2^{-1} I\bigr), &&(40) \\ \sigma_{\mathrm{err}} \mid K &\sim p(\sigma_{\mathrm{err}}) \propto 1/\sigma_{\mathrm{err}}, &&(41) \\ V \mid K, \sigma_{\mathrm{err}} &\sim \phi_T\bigl(V \mid 1, N \sigma^2_{\mathrm{err}} (X(K)^T X(K))^{-1}\bigr), &&(42) \end{aligned} $$

where $\phi_T$ denotes a Gaussian distribution, truncated so that its support is the non-negative orthant (since kinetic parameters cannot be negative). Here $X(K)$ is the design matrix associated with the linear regression that is obtained by treating the $K_j$ as known constants; we refer to [19] for further details.

Due to its careful design, the likelihood in Eqn. 39 is partially conjugate, so the following integral can be evaluated in closed form:

 L(K,Mi)=∫∞0∫∞0L(θi,Mi)p(V,