1 Introduction
The problem of inferring unknown parameters associated with the solution of (partial) differential equations (PDEs) is referred to as an inverse problem. In such a context, even when the forward problem is well-posed, the inverse problem is often ill-posed and challenging to solve, even numerically. The area has a long history and a large literature (see e.g.
[7, 14]), yet its intersection with statistics is still comparatively small, particularly given the significant overlap in methods and algorithms, as well as in objectives. If one adopts a Bayesian approach to the solution of the inverse problem, then the object of interest is a posterior distribution, and in particular expectations with respect to this distribution [8, 12]. While this provides an elegant solution and quantified uncertainty via a well-defined target distribution, it is more challenging to solve than its deterministic counterpart, requiring at least a Hessian in addition to a maximum a posteriori estimator for a Laplace approximation, if not more expensive Monte Carlo methods. Here we assume that accurate solution of the Bayesian inverse problem (BIP) requires computationally intensive Monte Carlo methods. We furthermore assume that the statistical model can only be defined up to some unknown parameters.
Consider a BIP with an unknown and data related through a PDE, and assume that the statistical model is known only up to some parameter (assumed finite-dimensional). In other words, the posterior distribution takes the form
Due to sensitivity with respect to the parameter and strong correlation with the unknown, such a posterior distribution can be highly complex and very challenging to sample from, even using quite advanced Markov chain Monte Carlo (MCMC) algorithms. In this article, the unknown is treated as a nuisance parameter and the goal is to maximize the marginal likelihood of the parameters. In such a scenario one is left with a finite-dimensional optimization problem, albeit with an objective function that is not available analytically. This intractability arises from two sources:

first, for a given parameter value, only a discretization of the likelihood can be evaluated;

second, the discretized marginal likelihood is a high-dimensional integral which itself must be approximated.
Moreover, the associated gradient of the log-likelihood, which may be of interest in optimization algorithms, is not available. In the following we will suppress the notation for the fixed observation and present the method generally. In particular, we write the unnormalized posterior density with respect to a finite measure on an infinitesimal volume element, which may or may not be Lebesgue measure. We will also denote its integral (the marginal likelihood) and the posterior accordingly.
In this article, we present a new scheme to provide finite-variance estimates of the gradient of the log-likelihood that are unbiased. To be precise, the proposed method provides an estimator whose expectation, taken with respect to the randomization induced by our numerical approach, is exactly the gradient of the log-likelihood with no discretization bias. Moreover, the estimator is constructed so that one only needs access to finite-resolution (discretized) approximations of the BIP. This scheme is of interest for several reasons:

Unbiased estimates of gradients help to facilitate stochastic gradient algorithms;

The method is easy to parallelize;

The method helps to provide a benchmark for other computations.
In terms of the first point, it is often simpler to verify the validity of stochastic gradient algorithms when the estimate of the noisy functional is unbiased. Whilst this is not always needed (see [13] for a special case, which does not apply in our context), it at least provides the user peace of mind when implementing optimization schemes. The second point is of interest for efficiency of application, especially relative to competing methods. The third point simply states that one can check the precision of biased methodology. We now explain the approach in a little more detail.
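As an illustration of the first point, the following is a minimal sketch of a stochastic gradient ascent loop driven by an unbiased but noisy gradient estimate; the toy Gaussian model, the function `grad_est` and the Robbins-Monro step sizes are hypothetical choices for illustration, not the estimator developed in this article.

```python
import numpy as np

def stochastic_gradient_ascent(grad_est, theta0, n_iters, a=0.5):
    # grad_est(theta) returns an unbiased (noisy) estimate of the gradient
    # of the log-likelihood at theta; steps a/k satisfy the usual
    # Robbins-Monro conditions (sum diverges, sum of squares converges).
    theta = float(theta0)
    for k in range(1, n_iters + 1):
        theta += (a / k) * grad_est(theta)
    return theta

# toy model: y ~ N(theta, 1), so the exact gradient at theta is (y - theta);
# we perturb it with noise to mimic an unbiased Monte Carlo estimate
rng = np.random.default_rng(0)
y = 2.0
theta_hat = stochastic_gradient_ascent(
    lambda th: (y - th) + rng.normal(scale=0.1), theta0=0.0, n_iters=5000)
```

With unbiased gradients, standard stochastic approximation arguments apply directly, which is precisely why unbiasedness simplifies the verification of such schemes.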
The method that we use is based upon a technique developed in [9]. In that article the authors consider the filtering of a class of diffusion processes which have to be discretized. The authors develop a method which allows one to approximate the filtering distribution unbiasedly, without any discretization error. The methodology used in [9] is a double randomization scheme based upon the approaches in [10, 11]. The work in [10, 11] provides a methodology to turn a sequence of convergent estimators into an unbiased estimator, using judicious randomization across the level of discretization. It is determined for the problem of interest in [9] that an additional randomization is required in order to derive efficient estimators, that is, estimators that are competitive with the existing state-of-the-art methods in the literature. In this article we follow the basic approach used in [9], except that one cannot use the same estimation methodology for the current problem. An approach is introduced in [2] which enables application of the related deterministic multilevel Monte Carlo identity [15] to a sequential Monte Carlo (SMC) sampler [5, 6] for inference in the present context. In this article, we consider such a strategy to allow the application of the approach in [9] to unbiasedly estimate the gradient of the log-likelihood for BIPs. The method of [2] is one of the most efficient techniques that could be used for estimation of the gradient of the log-likelihood for BIPs. However, this method is subject to discretization bias: its estimator targets the gradient of the log-likelihood at a chosen discretization level, so its expectation differs from the exact gradient. On the other hand, under assumptions, it is proven that the new method introduced here can produce an estimate with finite variance and with no such bias.
We also show that the cost to achieve a given variance is very similar to that of the multilevel SMC (MLSMC) approach of [2], with high probability. This is confirmed in numerical simulations. We furthermore numerically investigate the utility of our new estimator in the context of stochastic gradient algorithms, where it is shown that a substantial improvement in efficiency is possible. Our approach is one of the first which can, in general, provide unbiased and finite-variance estimators of the gradient of the log-likelihood for BIPs. A possible alternative would be the approach of [1]; however, the methodology in that article is not as general as that presented here and may be more challenging to implement.
This article is structured as follows. In Section 2 we explain the generic problem to which our approach is applicable. In particular, a concrete example in the context of Bayesian inverse problems is described. In Section 3 we present our methodology and the proposed estimator. In Section 4 we show that our proposed estimator is unbiased and of finite variance, and we consider the cost to obtain the estimate. In Section 5 several numerical examples are presented to investigate the performance of the estimator in practice, including its efficiency when used in the relevant context of a stochastic gradient algorithm for parameter estimation. The proofs of some of our theoretical results can be found in Appendix A.
2 Problem Setting
2.1 Generic Problem
Let be a measurable space, and define a probability measure on it as
where , and is a finite measure on . We are interested in computing
where we have defined . From here on, we will use the following shorthand notation, for a measure and a measurable, integrable function,
which should be understood as a column vector of integrals.
In practice, we assume that we must work with approximations of the above quantities. For a given level of discretization, set
where . We are now interested in computing
It is assumed explicitly that
2.2 Example of Problem
We will focus on the following particular problem. Let with convex and . Consider the following PDE on :
(1)  
where
Define , with
i.i.d. (the uniform distribution on
). This determines the prior distribution for . The state space is . Let denote the weak solution of for parameter value . The following will be assumed.
, , and there is a such that .
Note that this assumption guarantees uniformly in , hence there is a well-defined (weak) solution which will be bounded in uniformly in , in an appropriate space, e.g. .
Define the following vector-valued function
where are elements of the dual space (e.g. is sufficient), for . It is assumed that the data take the form
where the noise is a Gaussian random variable with the stated mean and covariance matrix, and is independent of the unknown. The unnormalized density of the data, for a fixed unknown, is then given by
(2)
the normalized density is given by
where , and the quantity of interest is defined as
(3) 
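To make the likelihood in (2) concrete, the sketch below evaluates a Gaussian unnormalized density of the form exp(-|y - G(u)|^2 / (2 sigma^2)); the isotropic covariance, the function name and its arguments are assumptions made here purely for illustration.

```python
import numpy as np

def unnormalized_likelihood(y, Gu, sigma2):
    # Gaussian likelihood for data y = G(u) + noise, noise ~ N(0, sigma2 * I):
    # proportional to exp(-|y - G(u)|^2 / (2 * sigma2)); the normalizing
    # constant is dropped since only ratios of this quantity are required.
    r = np.asarray(y, dtype=float) - np.asarray(Gu, dtype=float)
    return float(np.exp(-0.5 * (r @ r) / sigma2))
```

For example, a zero residual gives the value 1, and the value decays exponentially in the squared misfit.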
2.2.1 Particular setup
Let and and consider . For the prior specification of , we set , and for , let if is odd and if is even. The observation operator is , and the parameter in the observation noise covariance is taken to be . The PDE problem at resolution level is solved using a finite element method with piecewise linear shape functions on a uniform mesh of width , for . Thus, on the th level the finite-element basis functions are defined as (for ):
To solve the PDE, is plugged into (1), and projected onto each basis element:
resulting in the following linear system:
where we introduce the matrix with entries , and vectors with entries and , respectively.
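A minimal sketch of such a piecewise-linear finite element solve in one dimension is given below, assuming the domain (0, 1), homogeneous Dirichlet boundary conditions, a uniform mesh of width h = 2**-l and midpoint quadrature for the coefficient; all of these choices, and the function names, are illustrative rather than the authors' implementation.

```python
import numpy as np

def fem_solve_1d(a, f, l):
    # Solve -(a(x) p'(x))' = f(x) on (0, 1) with p(0) = p(1) = 0, using
    # piecewise linear elements on a uniform mesh of width h = 2**-l.
    h = 2.0 ** -l
    n = int(round(1.0 / h)) - 1              # number of interior nodes
    nodes = h * np.arange(1, n + 1)
    mids = h * (np.arange(n + 1) + 0.5)      # element midpoints
    am = a(mids)                             # coefficient per element
    # tridiagonal stiffness matrix: A[i, i] = (a_{i-1/2} + a_{i+1/2}) / h
    A = (np.diag(am[:-1] + am[1:])
         - np.diag(am[1:-1], 1) - np.diag(am[1:-1], -1)) / h
    b = h * f(nodes)                         # lumped load vector
    return nodes, np.linalg.solve(A, b)

# sanity check: a = 1, f = 1 has exact solution p(x) = x(1 - x)/2
x, p = fem_solve_1d(lambda s: np.ones_like(s), lambda s: np.ones_like(s), l=6)
```

In practice one would use a sparse tridiagonal solver; the dense solve above is only to keep the sketch short.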
Define . Denote the corresponding approximated unnormalized density by
(4) 
and the approximated normalized density by
where . We further define
(5) 
3 Methodology for Unbiased Estimation
We now describe our methodology for computing an unbiased estimate of the gradient of the log-likelihood. For simplicity of exposition we will suppose that each component of the relevant test function belongs to the collection of bounded, measurable and real-valued functions on the state space. This constraint is not needed for the numerical implementation of the method, but it reduces much of the technical exposition to follow. As remarked in the introduction, the basic approach follows that in [9], with some notable differences. We now detail how the approach will work.
3.1 Methodology in [9]
The underlying approach of [9] is a type of double randomization scheme. The first step is to use the single-term estimator as developed in [11]. Suppose one wants to estimate a limiting quantity, but only has access to a methodology that can approximate it at each fixed level. Let a positive probability mass function on the levels be given, and suppose that one can construct a sequence of random variables such that
(6)  
(7) 
and that
(8) 
where the above denotes the norm. Now if one draws a level according to this probability mass function, then the correspondingly renormalized term is an unbiased and finite-variance estimator. It should be noted that (6)-(7) are not necessary conditions, but are sufficient to ensure the unbiasedness of the estimator.
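The single-term construction just described can be sketched as follows; the geometric probability mass function and the deterministic toy increments in the check are hypothetical stand-ins for the random variables above.

```python
import numpy as np

def single_term_estimator(xi, p, rng):
    # xi(l) returns a random variable whose expectation is the increment
    # pi_l(phi) - pi_{l-1}(phi), with pi_{-1}(phi) = 0; p is a positive pmf
    # over levels 0, ..., len(p) - 1. Drawing L ~ p and returning
    # xi(L) / p[L] gives an unbiased estimator of the telescoping sum.
    L = rng.choice(len(p), p=p)
    return xi(L) / p[L]

# toy check: deterministic increments 2^{-(l+1)} sum to 1 - 2^{-11}
rng = np.random.default_rng(0)
p = 2.0 ** -np.arange(11)
p /= p.sum()
est = np.mean([single_term_estimator(lambda l: 2.0 ** -(l + 1), p, rng)
               for _ in range(2000)])
```

The variance of such an estimator is finite precisely when a condition of the type (8) holds relative to the chosen probability mass function.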
In the context of interest, it can be challenging to obtain a sequence of random variables which possess the properties (6)-(8). We will detail one possible approach at a high level and then explain in detail how one can actually construct a simulation method to achieve this high-level description.
3.2 High Level Approach
The objective of this section is to highlight the generic procedure used in [9] for producing estimates that satisfy (6)-(7). The basic idea is to apply another layer of randomization to construct such unbiased estimators from a consistent sequence of estimators. In particular, consider a given increasing sequence with for each , and . Then, we suppose that one can construct Monte Carlo (type) estimators for , such that almost surely the following consistency results hold
(9)  
(10) 
For a given , we do not require and to be independent, nor do we require unbiasedness of the individual estimators.
Now set
For given, set
Let , , be a positive probability mass function with . Now if
(11)  
(12) 
and , then
(13) 
will allow the resulting terms to satisfy (6)-(7), where expectations are understood to be with respect to the additional randomization, which is suppressed in the notation. Moreover, they will have finite variances. This result follows as we are simply using the coupled-sum estimator as in [11] and appealing to [15, Theorem 5], for instance, to verify the required conditions.
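The coupled-sum construction above can be sketched in code as follows; the truncated geometric probability mass function and the deterministic toy sequence in the check are hypothetical, standing in for the coupled Monte Carlo estimators described in the text.

```python
import numpy as np

def coupled_sum_estimator(xi, pmf, rng):
    # Coupled-sum estimator in the spirit of [11]: draw P ~ pmf and return
    # sum_{p=0}^{P} (xi(p) - xi(p-1)) / P(P >= p), with xi(-1) = 0; this is
    # unbiased for the limit of E[xi(p)] under conditions such as (11)-(12).
    pmf = np.asarray(pmf, dtype=float)
    tail = np.cumsum(pmf[::-1])[::-1]        # tail[p] = P(P >= p)
    P = rng.choice(len(pmf), p=pmf)
    prev, total = 0.0, 0.0
    for p in range(P + 1):
        cur = xi(p)                          # coupled estimator at index p
        total += (cur - prev) / tail[p]
        prev = cur
    return total

# toy check with a deterministic convergent sequence xi(p) = 1 - 2^{-(p+1)}
rng = np.random.default_rng(0)
pmf = 2.0 ** -np.arange(10)
pmf /= pmf.sum()
vals = [coupled_sum_estimator(lambda q: 1.0 - 2.0 ** -(q + 1), pmf, rng)
        for _ in range(4000)]
```

Averaging many such draws recovers the limiting value, while each individual draw only ever evaluates finitely many of the estimators.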
3.3 Details of the Approach
We will now describe how to obtain the sequence of estimators at a fixed level.
3.3.1 MLSMC Method of [2]
To introduce our approach, we first consider the MLSMC method in [2] which will form the basis for our estimation procedure. Define for
and for , let there be given a Markov kernel leaving the corresponding distribution invariant; that is, for any
(14) 
Define for (the collection of probability measures on ),
(15) 
Noting that
(16) 
equations (14) and (15) lead to the recursion
Consider the MLSMC algorithm used in [2], slightly modified to keep the number of samples fixed across levels, up to some given level. Details are given in Algorithm 1.

Initialization: For sample from . If stop; otherwise set and go to step 2.

Resampling and Sampling: For sample from . This consists of sampling with probability mass function
and then sampling from . If stop; otherwise set and return to the start of step 2.
This algorithm yields samples distributed according to the following joint law
(17) 
where for . One can compute an estimate of as
Following from (16), for , one can estimate with
The reason for using the samples generated at level to estimate as well as is to construct estimators which satisfy conditions such as (8). Standard results (for instance in [5]) allow one to prove that, almost surely,
Note that in general one has
where is an expectation associated to the probability in (17).
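The resampling-and-sampling step of Algorithm 1 can be sketched generically as follows; the multinomial resampling scheme and the interface of the mutation kernel are illustrative simplifications.

```python
import numpy as np

def resample_and_move(samples, weights, kernel, rng):
    # One resampling + mutation step of an SMC sampler: draw ancestor indices
    # with probabilities proportional to the (unnormalized) importance
    # weights, then move each resampled particle through a Markov kernel
    # leaving the current target invariant.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    idx = rng.choice(len(samples), size=len(samples), p=w)
    return np.array([kernel(samples[i], rng) for i in idx])

# degenerate check: all weight on the middle particle, identity kernel
rng = np.random.default_rng(0)
out = resample_and_move(np.array([0.0, 1.0, 2.0]), [0.0, 1.0, 0.0],
                       lambda x, r: x, rng)
```

In practice lower-variance resampling schemes (systematic, residual) are often preferred; multinomial resampling is shown only because it matches the description above.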
3.3.2 Approach for Constructing
In order to calculate our approximation, we will consider the following approach, which was also used in [9]. Given any , we will run Algorithm 2 in order to obtain .
3.4 Method
The new method is now presented in Algorithm 3.
For :

Generate and .

Run Algorithm 2 with and .
Return the estimate:
(23) 
The estimate of is given by (23). In Section 4 we will prove that it is both unbiased and of finite variance, and we will investigate the cost of computing the estimate.
There are several points of practical interest to be made at this stage (the first two were noted already in [9]). First, the loop over the number of independent samples in Algorithm 3 can be easily parallelized. Second, one does not need to make and independent; this is only assumed for simplicity of presentation, but is not required. Third, the current method uses only the level marginal of (18). All the samples for and associated empirical measures (19) are discarded and only the level empirical measure is utilized. This differs from [2], where all the lower-level empirical measures are used. It is possible that these samples could be utilized to improve the accuracy of the method, but it is not necessary and so is not investigated further here. The potential efficiency of the double randomization scheme, as well as a discussion of the overall efficiency of the approach, is given in [9, Section 2.5].
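The outer loop of Algorithm 3 can be sketched generically as follows; `run_algorithm_2` is a placeholder for the inner estimator (returning the doubly-incremented quantity at the sampled indices), and the probability mass functions and toy check are hypothetical.

```python
import numpy as np

def outer_loop(run_algorithm_2, pmf_l, pmf_p, M, rng):
    # Draw (L_i, P_i) independently for i = 1, ..., M, run the inner
    # estimator at those indices, and average the importance-reweighted
    # single terms; the loop over i is trivially parallelizable.
    total = 0.0
    for _ in range(M):
        L = rng.choice(len(pmf_l), p=pmf_l)
        P = rng.choice(len(pmf_p), p=pmf_p)
        total += run_algorithm_2(L, P) / (pmf_l[L] * pmf_p[P])
    return total / M

# toy check: deterministic product increments sum to (1 - 2^{-5})^2
rng = np.random.default_rng(0)
pmf = 2.0 ** -np.arange(5)
pmf /= pmf.sum()
est = outer_loop(lambda L, P: 2.0 ** -(L + 1) * 2.0 ** -(P + 1),
                 pmf, pmf, M=200, rng=rng)
```

Each iteration touches only finitely many resolution levels, so the scheme requires nothing beyond finite-resolution approximations of the BIP.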
4 Theoretical Results
Our main objective is to show that the estimator defined in (13), with increments as in (20), satisfies (6)-(8). To that end, one must first show that the increments satisfy (21)-(22), which certainly verifies (6)-(7), and then one must establish that (8) holds. We make the following assumptions.

For each , there exist such that

For each , , there exists a such that for any , ,

For each , there exists a such that for each