1 Overview
In “A Bayesian conjugate gradient method”, Cockayne, Oates, Ipsen, and Girolami add to the recent body of work that provides probabilistic/inferential perspectives on deterministic numerical tasks and algorithms. In the present work, the authors consider a conjugate gradient (CG) method for the solution of a finite-dimensional linear system $A x^\star = b$ for $x^\star \in \mathbb{R}^d$, given an assuredly invertible symmetric matrix $A \in \mathbb{R}^{d \times d}$ and $b \in \mathbb{R}^d$, with $d \in \mathbb{N}$.
The authors derive and test an algorithm, BayesCG, that returns a sequence of normal distributions
\[
\mu_m = \mathcal{N}(x_m, \Sigma_m)
\]
for $m = 0, 1, \dots, d$, starting from a prior distribution $\mu_0 = \mathcal{N}(x_0, \Sigma_0)$. This sequence of normal distributions is defined using a recursive relationship similar to that defining the classical CG method, and indeed the BayesCG mean coincides with the output of CG upon choosing $\Sigma_0 = A^{-1}$; this choice is closely related to what the authors call the “natural prior covariance”, $\Sigma_0 = (A^\top A)^{-1}$. The distribution $\mu_m$ is intended to be an expression of posterior belief about the true solution $x^\star = A^{-1} b$ of the linear system under the prior belief $\mu_0$, given the first $m$ BayesCG search directions $s_1, \dots, s_m$. Like CG, BayesCG terminates in at most $d$ steps, at which point its mean is the exact solution $x^\star$ and it expresses complete confidence in this belief by having $\Sigma_d = 0$. The convergence and frequentist coverage properties of the algorithm are investigated in a series of numerical experiments.

The field of probabilistic perspectives on numerical tasks has been enjoying a resurgence of interest in the last few years; see e.g. Oates and Sullivan (2019) for a recent survey of both historical and newer work. The authors’ contribution is a welcome addition to the canon, showing as it does how classical methods (in this case CG; cf. the treatment of Runge–Kutta methods for ODEs by Schober et al. (2014)) can be seen as point estimators of particular instances of inferential procedures. It is particularly encouraging to see contributions coming from authors with both statistical and numerical-analytical expertise, and the possibilities for generalisation and further work are most interesting.
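Under the hood, this inferential step is ordinary Gaussian conditioning on finitely many linear functionals of the solution. The following sketch is mine, not the authors’ Algorithm 1: it uses the batch form of the conditional Gaussian and generic random directions in place of BayesCG’s recursively generated conjugate search directions, but it illustrates the collapse of the posterior onto the truth after $d$ independent directions.

```python
import numpy as np

def bayescg_batch_posterior(A, b, x0, Sigma0, S):
    """Condition the Gaussian prior N(x0, Sigma0) on the noiseless linear
    observations S.T @ A @ x = S.T @ b.  BayesCG builds x_m and Sigma_m
    recursively with conjugate search directions, but the resulting
    distribution is of this conditional-Gaussian type."""
    r0 = b - A @ x0                            # initial residual
    Lam = S.T @ A @ Sigma0 @ A.T @ S           # Gram matrix of the data
    K = Sigma0 @ A.T @ S @ np.linalg.inv(Lam)  # "gain" matrix
    x_m = x0 + K @ (S.T @ r0)                  # posterior mean
    Sigma_m = Sigma0 - K @ (S.T @ A @ Sigma0)  # posterior covariance
    return x_m, Sigma_m

# A small symmetric positive-definite test problem.
rng = np.random.default_rng(0)
d = 5
M = rng.standard_normal((d, d))
A = M @ M.T + d * np.eye(d)
x_star = rng.standard_normal(d)
b = A @ x_star

# After d linearly independent directions, the posterior collapses onto
# the exact solution with zero covariance.
S = rng.standard_normal((d, d))
x_d, Sigma_d = bayescg_batch_posterior(A, b, np.zeros(d), np.eye(d), S)
```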
2 Questions and directions for generalisation
The article raises a number of natural questions and directions for generalisation and further investigation, which Cockayne et al. might use their rejoinder to address.
Symmetry and generalisations.
It is interesting that, for the most part, BayesCG does not require that $A$ be symmetric, and works instead with $A \Sigma_0 A^\top$. This prompts a question for the authors to consider in their rejoinder: How does BayesCG behave in the case that $A$ is square but not invertible? Could one even show that the BayesCG posterior concentrates, within at most $d$ iterations, to a distribution centred at the minimum-norm solution (in some norm on $\mathbb{R}^d$) of the linear system? One motivation for this question is that, if such a result could be established, then BayesCG could likely be applied to rectangular $A \in \mathbb{R}^{n \times d}$ and produce a sequence of normally-distributed approximate solutions to the minimum-norm least-squares problem, i.e. a probabilistically-motivated theory of Moore–Penrose pseudoinverses. (Recall that the Moore–Penrose pseudoinverse $A^\dagger$ can be characterised as the solution operator for the minimum-norm least-squares problem, i.e.,
\[
A^\dagger b = \operatorname*{arg\,min} \Bigl\{ \| x \| \Bigm| x \in \operatorname*{arg\,min}_{z \in \mathbb{R}^d} \| A z - b \| \Bigr\}
\]
for the usual norms on $\mathbb{R}^d$ and $\mathbb{R}^n$.)
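As a quick numerical illustration of this characterisation (the rank-deficient example below is mine), NumPy’s SVD-based routines return exactly this minimum-norm least-squares solution:

```python
import numpy as np

# The pseudoinverse solution pinv(A) @ b of a rectangular, rank-deficient
# system is the least-squares solution of minimal Euclidean norm.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))
A[:, 4] = A[:, 0] + A[:, 1]        # force rank deficiency (rank 4)
b = rng.standard_normal(8)

x_pinv = np.linalg.pinv(A) @ b
# numpy's lstsq also returns the minimum-norm least-squares solution:
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

# A null-space direction of A; moving along it leaves the residual
# unchanged but can only increase the norm of the solution.
v = np.array([1.0, 1.0, 0.0, 0.0, -1.0])
```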
The authors could also comment on whether they expect BayesCG to generalise easily to infinite-dimensional Hilbert spaces, analogously to CG (Fortuna, 1977, 1979; Málek and Strakoš, 2015), since experience has shown that analysis of infinite-dimensional statistical algorithms can yield powerful dimension-independent algorithms for the finite-dimensional setting (Cotter et al., 2013; Chen et al., 2018). A more esoteric direction for generalisation would be to consider fields other than $\mathbb{R}$. The generalisation of BayesCG to $\mathbb{C}$ would appear to be straightforward via a complex normal prior, but how about fields of finite characteristic?
Conditioning relations.
Could the authors use their rejoinder to provide some additional clarity about the relationship of BayesCG to exact conditioning of the prior normal distribution $\mu_0 = \mathcal{N}(x_0, \Sigma_0)$, and in particular whether BayesCG is exactly replicating the ideal conditioning step, approximating it, or doing something else entirely?
To make this question more precise, fix a weighted inner product on $\mathbb{R}^d$, e.g. the Euclidean, $A$-weighted, $A^\top A$-weighted, or $\Sigma_0^{-1}$-weighted inner product. With respect to this inner product, let $P_m$ be the (self-adjoint) orthogonal projection onto the Krylov space $K_m := \operatorname{span} \{ s_1, \dots, s_m \}$ and $Q_m := \mathrm{id} - P_m$ the (self-adjoint) orthogonal projection onto its orthogonal complement. The product measure
\[
\nu_m := \delta_{P_m x^\star} \otimes (Q_m)_{\#} \mu_0
\]
on $\mathbb{R}^d \cong P_m \mathbb{R}^d \oplus Q_m \mathbb{R}^d$ is also a normal distribution on $\mathbb{R}^d$ that expresses complete confidence about the true solution to the linear system in the directions of the Krylov subspace and reverts to the prior in the complementary directions; $\nu_0$ is the prior, and $\nu_d$ is a Dirac on the truth. The obvious question is, for some choice of weighted inner product, does BayesCG basically output $\nu_m$, but in a clever way? Or is the output $\mu_m$ of BayesCG something else?
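To make the comparison concrete, here is a small numerical setup, with generic directions standing in for the BayesCG search directions and the Euclidean projection as one candidate inner product (both choices are illustrative assumptions of mine). Both constructions are exact on the observed subspace; whether they coincide for some weighting is precisely the open question.

```python
import numpy as np

rng = np.random.default_rng(2)
d, m = 6, 3
M = rng.standard_normal((d, d))
A = M @ M.T + d * np.eye(d)        # symmetric positive definite
x_star = rng.standard_normal(d)
b = A @ x_star
x0, Sigma0 = np.zeros(d), np.eye(d)
S = rng.standard_normal((d, m))    # stand-ins for the search directions

# Exact Gaussian conditioning on S.T @ A @ x = S.T @ b:
Lam = S.T @ A @ Sigma0 @ A.T @ S
K = Sigma0 @ A.T @ S @ np.linalg.inv(Lam)
x_m = x0 + K @ (S.T @ (b - A @ x0))
Sigma_m = Sigma0 - K @ (S.T @ A @ Sigma0)

# The product measure nu_m = delta_{P x*} (x) (Q)_# mu_0 for the
# *Euclidean* orthogonal projection P onto span(S), Q = I - P:
P = S @ np.linalg.inv(S.T @ S) @ S.T
Q = np.eye(d) - P
nu_mean = P @ x_star + Q @ x0
nu_cov = Q @ Sigma0 @ Q.T

# Whether (x_m, Sigma_m) equals (nu_mean, nu_cov) under *some* weighted
# inner product is the question posed above; here we just measure the
# Euclidean discrepancy between the two means.
gap = np.linalg.norm(x_m - nu_mean)
```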
Uncertainty quantification: cost, quality, and terms of reference.
Cockayne et al. point out that the computational cost of BayesCG is a small multiple (a factor of three) of the cost of CG, and this would indeed be a moderate price to pay for high-quality uncertainty quantification (UQ). However, based on the results presented in the paper, the UQ appears to be quite poorly calibrated. One could certainly try to overcome this shortcoming by improving the UQ. However, an alternative approach would be to choose a prior covariance structure that aggressively sparsifies and hence accelerates the linear-algebraic operations that BayesCG performs (particularly lines 7, 9, and 11 of Algorithm 1), while preserving the relatively poor UQ. Does this appear practical, in their view?
In fact, the whole question of “the UQ being well calibrated” is somewhat ill-defined. Some might argue that, since $x^\star$ is deterministic, there is no scope for uncertainty in this setting. However, it certainly makes sense to ask, as the authors do in Section 6.1, whether, when the problem setup is randomised, the frequentist coverage of BayesCG lines up with that implied by the randomisation of the problem. The statistic $Z$ that Cockayne et al. introduce is interesting, and already captures several scenarios in which the UQ is well calibrated and several in which it is not; I strongly encourage Cockayne et al. to address in their rejoinder the Bayesian accuracy of BayesCG, e.g. to exhaustively sample the true posterior on $x^\star$ given the data $s_1^\top b, \dots, s_m^\top b$ and see whether this empirical distribution is well approximated by the BayesCG distribution $\mu_m$.
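To make the frequentist question operational, assume (my reading of Section 6.1) a test statistic of squared-Mahalanobis form, $Z = (x^\star - x_m)^\top \Sigma_m^{\dagger} (x^\star - x_m)$. In the idealised setting sketched below, where the $m$ directions are fixed in advance independently of $x^\star$, exact Gaussian conditioning gives $Z \sim \chi^2_{d-m}$; BayesCG’s own search directions depend on $b$, which is one route by which calibration can fail.

```python
import numpy as np

rng = np.random.default_rng(4)
d, m, n_rep = 5, 2, 2000

M = rng.standard_normal((d, d))
A = M @ M.T + d * np.eye(d)        # symmetric positive definite
x0, Sigma0 = np.zeros(d), np.eye(d)
S = rng.standard_normal((d, m))    # directions fixed *before* seeing b

# Gain and posterior covariance for conditioning on S.T @ A @ x = S.T @ b.
Lam = S.T @ A @ Sigma0 @ A.T @ S
K = Sigma0 @ A.T @ S @ np.linalg.inv(Lam)
Sigma_m = Sigma0 - K @ (S.T @ A @ Sigma0)
Sigma_m_pinv = np.linalg.pinv(Sigma_m)   # rank d - m, so use pseudoinverse

# Repeatedly draw x* from the prior and compute the Z statistic.
Z = np.empty(n_rep)
for i in range(n_rep):
    x_star = x0 + rng.standard_normal(d)     # sample from N(x0, I)
    b = A @ x_star
    x_m = x0 + K @ (S.T @ (b - A @ x0))
    e = x_star - x_m
    Z[i] = e @ Sigma_m_pinv @ e

# If the UQ is calibrated, Z should look like a chi-squared sample with
# d - m degrees of freedom (mean d - m = 3 here).
```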
As a related minor question, can BayesCG be seen as an (approximate Gaussian) filtering scheme for the given prior and the filtration associated to the data stream $(s_m^\top b)_{m \geq 1}$?
Precision formulation.
As formulated, BayesCG expresses a Gaussian belief about the solution of the linear system in terms of mean and covariance; Gaussian measures can also be expressed in terms of their mean and precision. Do the authors have a sense of whether the BayesCG algorithm can be formulated as a sequence of precision updates, or does the singularity of the covariance $\Sigma_m$ in the Krylov directions essentially forbid this?
One motivation for seeking a precision formulation would be to render the “natural prior” of Section 4.1 more tractable, since this prior has an easily accessible precision, while accessing its covariance involves solving the original linear problem.
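For instance, writing the natural prior covariance as $\Sigma_0 = (A^\top A)^{-1}$ (my reading of Section 4.1; under this prior $A x$ is white noise), the precision $A^\top A$ can be applied with two matrix–vector products, whereas applying the covariance already amounts to a solve of the kind BayesCG is meant to perform:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 6
M = rng.standard_normal((d, d))
A = M @ M.T + d * np.eye(d)
v = rng.standard_normal(d)

# Precision of the natural prior: two matrix-vector products, no solve.
precision_v = A.T @ (A @ v)

# Covariance of the natural prior: applying (A.T A)^{-1} is itself a
# linear solve, i.e. essentially the original computational task.
covariance_v = np.linalg.solve(A.T @ A, v)
```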
As the authors note, their “natural prior” is closely related to one introduced by Owhadi (2015) and applied in Cockayne et al. (2017). It seems that working with images of Gaussian white noise, such as this “natural prior”, is presently producing considerable analytical and computational strides forward (Owhadi, 2017; Chen et al., 2018), and so this seems to be a topic worth further attention in the statistical community as a whole.

3 Minor comments
Rank and trace estimates.
Proposition 3, which states that $\operatorname{rank}(\Sigma_m) \leq d - m$, seems to miss the point slightly. It would be good to have a companion result to the effect that $\operatorname{rank}(\Sigma_m) = d - m$, and a more quantitative result for the trace such as $\operatorname{tr}(\Sigma_m) \leq C (1 - m/d) \operatorname{tr}(\Sigma_0)$ for some constant $C$ depending only on $A$ and $\Sigma_0$.
Posterior and Krylov spaces.
It seems natural to ask whether the posterior null space $\operatorname{null}(\Sigma_m)$ coincides with the Krylov space $K_m = \operatorname{span} \{ s_1, \dots, s_m \}$. Put another way, is the posterior column space $\operatorname{ran}(\Sigma_m)$ the same as the orthogonal complement of the Krylov space, in the Euclidean or some weighted inner product?
Square roots.
Is the square root introduced just before Proposition 5 required to be unique? Does it even matter whether a (unique) symmetric positive semidefinite square root or a Cholesky factor is chosen? This is related to the above discussion of symmetry and generalisations.
Interpretation of termination criteria.
In Section 5, the authors refer to “probabilistic termination criteria”. As a matter of semantics, the termination criterion that they propose is in fact deterministic, albeit based on a probabilistic interpretation of the algorithmic quantities.
Acknowledgements
TJS is supported by the Freie Universität Berlin within the Excellence Strategy of the German Research Foundation (DFG), including the ECMath/MATH+ transition project CH15 and project TrU2 of the Berlin Mathematics Research Center MATH+ (EXC2046/1, project ID 390685689).
References
Chen et al. [2018] V. Chen, M. M. Dunlop, O. Papaspiliopoulos, and A. M. Stuart. Dimension-robust MCMC in Bayesian inverse problems, 2018. arXiv:1803.03344.

Cockayne et al. [2017] J. Cockayne, C. J. Oates, T. J. Sullivan, and M. Girolami. Probabilistic numerical methods for PDE-constrained Bayesian inverse problems. In G. Verdoolaege, editor, Proceedings of the 36th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, volume 1853 of AIP Conference Proceedings, pages 060001-1–060001-8, 2017. doi:10.1063/1.4985359.

Cotter et al. [2013] S. L. Cotter, G. O. Roberts, A. M. Stuart, and D. White. MCMC methods for functions: modifying old algorithms to make them faster. Statist. Sci., 28(3):424–446, 2013. doi:10.1214/13-STS421.
Fortuna [1977] Z. Fortuna. Superlinear convergence of conjugate gradient method in Hilbert space. In Theory of Nonlinear Operators (Proc. Fourth Internat. Summer School, Acad. Sci., Berlin, 1975), Abh. Akad. Wiss. DDR Abt. Math.-Natur.-Tech., 1977, 1, pages 313–318. Akademie-Verlag, Berlin, 1977.
 Fortuna [1979] Z. Fortuna. Some convergence properties of the conjugate gradient method in Hilbert space. SIAM J. Numer. Anal., 16(3):380–384, 1979. doi:10.1137/0716031.
 Málek and Strakoš [2015] J. Málek and Z. Strakoš. Preconditioning and the conjugate gradient method in the context of solving PDEs, volume 1 of SIAM Spotlights. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2015. Chapter 5. doi:10.1137/1.9781611973846.ch5.
 Oates and Sullivan [2019] C. J. Oates and T. J. Sullivan. A modern retrospective on probabilistic numerics. Stat. Comput., 2019. To appear. arXiv:1901.04457.
 Owhadi [2015] H. Owhadi. Bayesian numerical homogenization. Multiscale Model. Simul., 13(3):812–828, 2015. doi:10.1137/140974596.
 Owhadi [2017] H. Owhadi. Multigrid with rough coefficients and multiresolution operator decomposition from hierarchical information games. SIAM Rev., 59(1):99–149, 2017. doi:10.1137/15M1013894.
Schober et al. [2014] M. Schober, D. K. Duvenaud, and P. Hennig. Probabilistic ODE solvers with Runge–Kutta means. In Advances in Neural Information Processing Systems 27, 2014. https://papers.nips.cc/paper/5451-probabilistic-ode-solvers-with-runge-kutta-means.