## I Introduction

Recently, a new 3D imaging technology called magnetic resonance force microscopy (MRFM) has been developed. The principles of MRFM were introduced by Sidles [Sidles1991, Sidles1992, Sidles1995], who described its potential for achieving 3D atomic-scale resolution. In 1992 and 1996, Rugar *et al.* [Rugar1992, Zuger1996] reported experiments that demonstrated the feasibility of MRFM and produced the first MRFM images. More recently, nanoscale volumetric spatial resolution has been demonstrated for imaging a biological sample [Degen2009]. The signal provided by MRFM is a so-called force map that is the 3D convolution of the atomic spin distribution and the point spread function (PSF) [Chao2004].

This formulation casts the estimation of the spin density from the force map as an inverse problem. Several approaches have been proposed to solve this inverse problem, i.e., to reconstruct the unknown image from the measured force map. Basic algorithms rely on Wiener filters [Zuger1993, Zuger1994, Zuger1996], whereas others are based on iterative least-squares reconstruction approaches [Chao2004, Degen2009, Degen2009compl]. More recently, this problem has been addressed within the Bayesian estimation framework [Ting2009, Dobigeon2009a]. However, all of these reconstruction techniques require prior knowledge of the device response, namely the PSF. As shown by Mamin *et al.* [Mamin2003], this PSF is a function of several parameters specified by the physics of the device, including the mass of the cantilever probe, the ferromagnetic constant of the probe tip, and the external field strength. Unfortunately, in practice the physical parameters that tune the response of the MRFM tip are only partially known. In such circumstances, the PSF used in the reconstruction algorithm is mismatched to the true PSF of the microscope, and the quality of standard MRFM image reconstruction will suffer if one does not account for this mismatch. Estimating the unknown image and the partially known PSF jointly is usually referred to as semi-blind [Makni2004, Pillonetto2007] or myopic [Sarri1998, Chenegros2007] deconvolution, and this is the approach taken in this paper.

The myopic image restoration problem was previously studied within a hierarchical Bayesian framework [Molina1994] with partially known blur functions in many applications, including natural and astronomical imaging [Galatsanos2000, Galatsanos2002]. This previous work [Galatsanos2000, Galatsanos2002] models the deviation of the PSF as uncorrelated zero-mean Gaussian noise. The authors of [Molina2006] considered an extension of this model to a non-sparse, simultaneous autoregression (SAR) prior model for both the image and the point spread function. Compared to [Molina2006], recent papers on single-motion deblurring in photography [Fergus2006, Shan2008]

use heavier-tailed distributions for the gradient of images and an exponential distribution for the PSF. In addition, the algorithm in [Fergus2006] separately identifies the PSF using a multi-scale approach before performing conventional image restoration. The authors of [Shan2008] proposed an image prior to reduce ringing artifacts in blind deconvolution of photo images. This paper considers an alternative model, appropriate to MRFM but significantly different from photography, that imposes sparsity on the image and an empirical Bayes prior on the PSF.

To mitigate the effects of PSF mismatch on MRFM sparse image reconstruction, a non-Bayesian alternating minimization
(AM) algorithm [Herrity2008b] was proposed by Herrity *et al.*, which showed robust performance. In this paper, we propose a hierarchical Bayesian approach to semi-blind image deconvolution that exploits prior information on the PSF model. This is a semi-blind modification of the Bayesian MRFM reconstruction approach of Dobigeon *et al.* [Dobigeon2009a] that uses an adaptive tuning scheme to produce a Bayesian estimate of the PSF and a Bayesian reconstruction of the image.

The contribution of this paper is a novel Bayesian approach to the joint estimation of PSFs and images. We represent the PSF on a truncated orthogonal basis, where the basis elements are the singular vectors in the singular value decomposition of the family of perturbed nominal PSFs. A Gaussian prior model specifies a log-quadratic Bayes prior on deviations from the nominal PSF. Our approach is related to the recent papers of Tzikas *et al.* [Tzikas2009] and Orieux *et al.* [Orieux2010]. In [Tzikas2009] a pixel-wise, space-invariant Gaussian kernel basis is assumed with a gradient-based image prior. Orieux *et al.* introduced a Metropolis-within-Gibbs algorithm to estimate the parameters that tune the device response. The strategy of [Orieux2010] focuses on reconstruction with smoothness constraints and requires recomputation of the entire PSF at each step of the algorithm. This is computationally expensive, especially for complex PSF models such as in the MRFM instrument. Here, we propose an alternative that consists of estimating the deviation from a given nominal PSF. More precisely, the nominal point response of the device is assumed known and the true PSF is modeled as a small perturbation about the nominal response. Since we only need to estimate linear perturbations about the nominal PSF relative to a low-dimensional, precomputed, and truncated basis set, our approach reduces computational complexity and improves convergence as compared to [Tzikas2009] and [Orieux2010].

We approximate the full posterior distribution of the PSF and the image using samples generated by a Markov chain Monte Carlo algorithm. Simulations and comparisons to other state-of-the-art blind deconvolution algorithms are presented and quantify the advantages of our algorithm for myopic sparse image reconstruction. We then apply it to real MRFM tobacco virus data made available by our IBM collaborators.

## II Forward Imaging and PSF Model

Let X denote the unknown n-D positive spin density image to be recovered (e.g., n = 2 or 3) and let x denote the vectorized version of X, with N the number of pixels. This image is to be reconstructed from a collection of M measurements y via the following noisy transformation:

y = T(κ, x) + n    (1)

where T(κ, x) is the n-dimensional convolution of the image with the kernel κ, i.e., the mean response function, n is an M×1 observation noise vector, and κ is the kernel modeling the response of the imaging device.
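As a concrete illustration of the observation model above, the following pure-Python sketch forms a toy measurement by convolving a small sparse image with a PSF kernel and adding Gaussian noise. All sizes, values, and function names are illustrative, not from the paper.

```python
import random

def convolve2d_full(image, kernel):
    """'Full' 2-D discrete convolution of image with kernel (pure Python)."""
    ni, mi = len(image), len(image[0])
    nk, mk = len(kernel), len(kernel[0])
    out = [[0.0] * (mi + mk - 1) for _ in range(ni + nk - 1)]
    for i in range(ni):
        for j in range(mi):
            if image[i][j] == 0.0:
                continue  # sparse image: skip zero pixels
            for p in range(nk):
                for q in range(mk):
                    out[i + p][j + q] += image[i][j] * kernel[p][q]
    return out

def forward_model(image, kernel, sigma, rng):
    """y = T(kappa, x) + n, with i.i.d. Gaussian noise of std sigma."""
    clean = convolve2d_full(image, kernel)
    return [[v + rng.gauss(0.0, sigma) for v in row] for row in clean]

# A toy sparse 4x4 spin density and a 3x3 PSF (illustrative values).
x = [[0.0] * 4 for _ in range(4)]
x[1][2] = 1.0
psf = [[0.05, 0.1, 0.05], [0.1, 0.4, 0.1], [0.05, 0.1, 0.05]]
y = forward_model(x, psf, sigma=0.01, rng=random.Random(0))
```

Because the image is sparse, the inner loop skips zero pixels, which already hints at the computational savings exploited later in Appendix A.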

A typical PSF for MRFM is shown in Mamin *et al.* [Mamin2003] for horizontal and vertical MRFM tip configurations. In (1), n is an additive Gaussian noise sequence, independent of x, distributed according to N(0, σ²I). The PSF κ is assumed to be known up to a perturbation Δκ about a known nominal PSF κ0:

κ = κ0 + Δκ    (2)

In the MRFM application the PSF is described by an approximate parametric function that depends on the experimental setup. Based on the physical parameters (gathered in a parameter vector) tuned during the experiment (external magnetic field, mass of the probe, etc.), an approximation κ0 of the PSF can be derived. However, due to model mismatch and experimental errors, the true PSF κ may deviate from the nominal PSF κ0.

If the vector of nominal parameter values for the parametric PSF model were known, then direct estimation of the parameter deviation could be performed by evaluating the parametric model at perturbed parameter values. However, in MRFM, as shown by Mamin *et al.* [Mamin2003], the PSF is a nonlinear function of many parameters that are required to satisfy ‘resonance conditions’ to produce a meaningful MRFM PSF. This makes direct estimation of the PSF difficult.

In this paper, we take an approach similar to the ‘basis expansions’ of [StatisticalLearning:2001, Chap. 5] and [Tzikas2009], approximating the PSF deviation by a linear model: the deviation Δκ is assumed to be expressible as a linear combination of elements of an *a priori* known basis {κ_1, …, κ_K},

Δκ = c_1 κ_1 + ⋯ + c_K κ_K    (3)

where {κ_1, …, κ_K} is a set of basis functions for the PSF perturbations and c_1, …, c_K are unknown coefficients, gathered in the vector c. To emphasize the influence of these coefficients on the actual PSF, κ will be denoted κ(c). With these notations, (1) can be rewritten as:

y = H(c) x + n    (4)

where H(c) is an M×N matrix that describes convolution by the PSF kernel κ(c).
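The perturbation model of (2)–(3) is bilinear: the response to a perturbed kernel equals the nominal response plus a coefficient-weighted sum of the basis responses. A minimal 1-D sketch (values illustrative) builds κ(c) and checks this identity numerically:

```python
def conv1d(kernel, x):
    """1-D 'full' convolution: T(kernel, x)."""
    out = [0.0] * (len(kernel) + len(x) - 1)
    for i, k in enumerate(kernel):
        for j, v in enumerate(x):
            out[i + j] += k * v
    return out

def perturbed_kernel(kappa0, basis, c):
    """kappa(c) = kappa0 + sum_k c_k * kappa_k, cf. (2)-(3)."""
    kappa = list(kappa0)
    for ck, kk in zip(c, basis):
        for i, v in enumerate(kk):
            kappa[i] += ck * v
    return kappa

kappa0 = [0.2, 0.6, 0.2]                       # nominal PSF (illustrative)
basis = [[0.1, -0.2, 0.1], [-0.1, 0.0, 0.1]]   # perturbation basis kappa_k
c = [0.3, -0.5]                                # perturbation coefficients
x = [0.0, 1.0, 0.0, 2.0]

# Bilinearity: T(kappa(c), x) = T(kappa0, x) + sum_k c_k T(kappa_k, x)
lhs = conv1d(perturbed_kernel(kappa0, basis, c), x)
base = conv1d(kappa0, x)
parts = [conv1d(kk, x) for kk in basis]
rhs = [base[i] + sum(c[k] * parts[k][i] for k in range(len(c)))
       for i in range(len(base))]
```

This identity is what lets the sampler later update only the coefficients c rather than recompute the full PSF response.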

We next address the problem of estimating the unobserved image x and the PSF perturbation coefficients c under sparsity constraints, given the measurement y and the bilinear function T(·, ·).

## III Hierarchical Bayesian model

### III-A Likelihood function

Under the hypothesis that the noise in (1) is Gaussian, the observation model likelihood function takes the form

f(y | x, c, σ²) = (2πσ²)^(−M/2) exp(−‖y − H(c) x‖² / (2σ²))    (5)

where ‖·‖ denotes the standard ℓ2 norm: ‖x‖² = xᵀx. This likelihood function will be denoted f(y | θ), where θ = {x, c, σ²}.

### III-B Parameter prior distributions

In this section, we introduce prior distributions for the parameters θ = {x, c, σ²}.

#### III-B1 Image prior

As the prior distribution for each pixel x_i, we adopt a mixture of a mass at zero and a single-sided exponential distribution:

f(x_i | w, a) = (1 − w) δ(x_i) + (w/a) exp(−x_i/a) 1_{ℝ+}(x_i)    (6)

where δ(·) is the Dirac function, ℝ+ = (0, +∞) is the open interval of positive reals, and 1_E(·) is the indicator function of the set E:

1_E(x) = 1 if x ∈ E, and 0 otherwise    (7)

By assuming the components x_i (i = 1, …, N) to be conditionally independent given w and a, the following conditional prior distribution is obtained for the image x:

f(x | w, a) = ∏_{i=1}^{N} f(x_i | w, a)    (8)

This image prior is similar to the LAZE distribution (weighted average of a Laplacian probability density function (pdf) and an atom at zero) used, for example, in Ting *et al.* [Ting2006icip, Ting2009]. As motivated by Ting *et al.* and Dobigeon *et al.* [Ting2009, Dobigeon2009a], the image prior in (6) has the interesting property of enforcing some pixel values to be zero, reflecting the natural sparsity of MRFM images. Furthermore, the proposed prior in (6) ensures positivity of the pixel values (spin density) to be estimated.

#### III-B2 PSF parameter prior

We assume that the parameters c_1, …, c_K are *a priori* independent and uniformly distributed over known intervals associated with error tolerances centered at 0. Specifically, define the interval

S_k = [−δ_k, δ_k], k = 1, …, K    (9)

and define the prior distribution of c = [c_1, …, c_K]ᵀ as

f(c) = ∏_{k=1}^{K} (1 / (2δ_k)) 1_{S_k}(c_k)    (10)

with fixed tolerances δ_k. In our experiments, the δ_k’s are set large enough to be non-informative, i.e., to approximate an improper, flat prior.

#### III-B3 Noise variance prior

A non-informative Jeffreys’ prior is selected as prior distribution for the noise variance:

f(σ²) ∝ 1/σ²    (11)

This model is equivalent to an inverse gamma prior with a non-informative Jeffreys’ hyperprior, which can be seen by integrating out the variable corresponding to the hyperprior [Dobigeon2009a].

### III-C Hyperparameter priors

Define the hyperparameter vector associated with the image and noise variance prior distributions as Φ = {a, w}. In our hierarchical Bayesian framework, the estimation of these hyperparameters requires prior distributions on the hyperparameters. These priors are defined in [Dobigeon2009a], but for completeness their definitions are reproduced below.

#### III-C1 Hyperparameter a

A conjugate inverse-gamma distribution is assumed for the hyperparameter a:

f(a) = IG(a; α0, α1)    (12)

with fixed (α0, α1). The hyperparameters α0 and α1 have been chosen to produce a vague prior, i.e., one with a large variance.

#### III-C2 Hyperparameter w

A uniform distribution on the simplex [0, 1] is selected as the prior distribution for the mean proportion w of non-zero pixels:

f(w) = 1_{[0,1]}(w)    (13)

Assuming that the individual hyperparameters are independent, the full hyperparameter prior distribution for Φ = {a, w} can be expressed as:

f(Φ) = f(a) f(w)    (14)

### III-D Posterior distribution

The posterior distribution of {θ, Φ} is:

f(θ, Φ | y) ∝ f(y | θ) f(θ | Φ) f(Φ)    (15)

with

f(θ | Φ) = f(x | w, a) f(c) f(σ²)    (16)

where f(y | θ) and f(Φ) have been defined in (5) and (14). The conjugacy of priors in this hierarchical structure allows one to integrate out σ², a, and w in the full posterior distribution (15), yielding:

(17)

where B(·, ·) is the beta function and Γ(·) is the gamma function.

The next section presents the Metropolis-within-Gibbs algorithm [Robert1999] that generates samples distributed according to the posterior distribution in (17). These samples are then used to estimate the image x and the PSF κ(c).

## IV Metropolis-within-Gibbs algorithm for semi-blind sparse image reconstruction

We describe in this section a Metropolis-within-Gibbs sampling strategy that generates samples distributed according to the posterior distribution in (17). As sampling directly from (17) is a difficult task, we instead generate samples distributed according to the joint posterior f(x, c, σ² | y). Sampling from this posterior distribution is done by alternately sampling one of x, c, and σ² conditioned on all other variables [Bremaud2001, Dobigeon2009a]. The contribution of this work relative to [Dobigeon2009a] is an algorithm that simultaneously estimates both the image and the PSF. The algorithm produces consistently stable output images and PSFs.

The main steps of our proposed sampling algorithm are given in subsections IV-A through IV-C (see also Algorithm 1).

### IV-A Generation of samples according to f(x | c, σ², y)

To generate samples distributed according to f(x | c, σ², y), it is convenient to sample according to f(x, a, w | c, σ², y) by the following 3-step procedure.

#### IV-A1 Generation of samples according to f(a | x)

The conditional posterior distribution of the hyperparameter a is

f(a | x) ∝ a^(−(α0 + n1) − 1) exp(−(α1 + ‖x‖1)/a)    (18)

where n1 = ‖x‖0 is the number of non-zero pixels of x and ‖x‖1 = Σ_i x_i. Therefore, generation of samples according to f(a | x) is achieved as follows:

a | x ∼ IG(α0 + n1, α1 + ‖x‖1)    (19)

#### IV-A2 Generation of samples according to f(w | x)

The joint posterior distribution (15) yields:

w | x ∼ Be(1 + n1, 1 + (N − n1))    (20)

#### IV-A3 Generation of samples according to f(x_i | w, a, σ², x_{−i}, c, y)

The posterior distribution of each component x_i (i = 1, …, N) given all other variables is derived as:

f(x_i | w, a, σ², x_{−i}, c, y) ∝ (1 − w_i) δ(x_i) + w_i φ+(x_i; μ_i, η_i²)    (21)

where x_{−i} stands for the vector x whose ith component has been removed and the quantities w_i and μ_i are defined as follows:

(22)

with the remaining quantities defined in Appendix A.

In (21), φ+(·; μ_i, η_i²) stands for the pdf of the truncated Gaussian distribution defined on (0, +∞) with hidden mean μ_i and hidden variance η_i². Therefore, from (21), x_i | w, a, σ², x_{−i}, c, y is a Bernoulli truncated-Gaussian variable. To summarize, generation of samples distributed according to f(x | c, σ², y) can be performed by updating the coordinates of x using N Gibbs moves (requiring generation of Bernoulli truncated-Gaussian variables). A pixel-wise fast and recursive sampling strategy is presented in Appendix A, and an accelerated sparsity-enforcing simulation scheme is described in Appendix B.
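A single Bernoulli truncated-Gaussian draw of the form (21) can be sketched as follows. Rejection sampling of the truncated Gaussian is a simple illustrative choice (adequate when the hidden mean is not far below zero; specialized samplers exist for the deep tail), and all numerical values are illustrative:

```python
import random

def sample_trunc_gauss_pos(mu, eta, rng):
    """Rejection-sample N(mu, eta^2) truncated to (0, inf)."""
    while True:
        z = rng.gauss(mu, eta)
        if z > 0.0:
            return z

def sample_pixel(w_i, mu_i, eta_i, rng):
    """Bernoulli truncated-Gaussian draw for x_i, cf. (21):
    x_i = 0 with probability 1 - w_i, else a positive truncated Gaussian."""
    if rng.random() >= w_i:
        return 0.0
    return sample_trunc_gauss_pos(mu_i, eta_i, rng)

rng = random.Random(2)
draws = [sample_pixel(0.3, 1.0, 0.5, rng) for _ in range(20000)]
frac_zero = sum(d == 0.0 for d in draws) / len(draws)
```

The atom at zero is what lets the Gibbs sampler set pixels exactly to zero, enforcing sparsity sample by sample.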

### IV-B Generation of samples according to f(c | x, σ², y)

The posterior distribution of the coefficient c_k conditioned on the unknown image x, the noise variance σ², and the other PSF coefficients c_{−k} is

f(c_k | c_{−k}, x, σ², y) ∝ exp(−‖y − H(c) x‖² / (2σ²)) 1_{S_k}(c_k)    (23)

with c_{−k} = [c_1, …, c_{k−1}, c_{k+1}, …, c_K]ᵀ. We summarize in Algorithm 2 a procedure for generating samples distributed according to the posterior in (23) using a Metropolis-Hastings step with a random-walk proposal [Robert1999] drawn from a centered Gaussian distribution. A detailed procedure for choosing an appropriate variance of the proposal distribution for efficient sampling is described in Appendix C. At iteration t of the algorithm, the acceptance probability of a proposed state c_k* is:

ρ = min{1, r}    (24)

with

r = exp((‖y − H(c) x‖² − ‖y − H(c*) x‖²) / (2σ²)) 1_{S_k}(c_k*),    (25)

where c* denotes the vector c with its kth component replaced by the proposal c_k*.

Computing the transformation T(κ(c), x) at each step of the sampler can be computationally costly. Appendix A provides a recursive strategy to sample efficiently according to (23).
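The random-walk Metropolis-Hastings update for a single PSF coefficient can be sketched on a toy 1-D problem with one basis element. The setup, step size, and prior bound are illustrative, not taken from the paper:

```python
import math
import random

def conv1d(kernel, x):
    """1-D 'full' convolution T(kernel, x)."""
    out = [0.0] * (len(kernel) + len(x) - 1)
    for i, k in enumerate(kernel):
        for j, v in enumerate(x):
            out[i + j] += k * v
    return out

def neg_log_post(c_k, y, x, kappa0, kappa1, sigma2):
    """||y - T(kappa0 + c_k kappa1, x)||^2 / (2 sigma^2), up to a constant."""
    kappa = [a + c_k * b for a, b in zip(kappa0, kappa1)]
    r = [yy - zz for yy, zz in zip(y, conv1d(kappa, x))]
    return sum(v * v for v in r) / (2.0 * sigma2)

def mh_step(c_k, step, bound, y, x, kappa0, kappa1, sigma2, rng):
    """One random-walk Metropolis-Hastings move on c_k; the uniform prior
    on [-bound, bound] makes out-of-interval proposals rejected outright."""
    c_star = c_k + rng.gauss(0.0, step)
    if abs(c_star) > bound:
        return c_k
    log_ratio = (neg_log_post(c_k, y, x, kappa0, kappa1, sigma2)
                 - neg_log_post(c_star, y, x, kappa0, kappa1, sigma2))
    if math.log(rng.random()) < log_ratio:
        return c_star
    return c_k

# Toy problem: data generated with true coefficient 0.4 (noiseless).
rng = random.Random(3)
kappa0, kappa1 = [0.2, 0.6, 0.2], [0.1, -0.2, 0.1]
x = [0.0, 1.0, 0.0, 2.0, 0.0]
y = conv1d([a + 0.4 * b for a, b in zip(kappa0, kappa1)], x)
sigma2 = 1e-4
c_k = 0.0
for _ in range(2000):
    c_k = mh_step(c_k, 0.05, 1.0, y, x, kappa0, kappa1, sigma2, rng)
```

After burn-in the chain concentrates near the true coefficient; in the full algorithm the residual energy would be updated recursively (Appendix A) rather than recomputed by full convolution.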

### IV-C Generation of samples according to f(σ² | x, c, y)

Samples of σ² are generated according to the inverse-gamma posterior

σ² | x, c, y ∼ IG(M/2, ‖y − H(c) x‖² / 2)    (26)
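Under the Jeffreys prior this conditional draw is a standard inverse-gamma update. A minimal sketch, using the fact that the reciprocal of a gamma variate is inverse-gamma distributed (the residual values are illustrative):

```python
import random

def sample_sigma2(residual, rng):
    """Draw sigma^2 from IG(M/2, ||r||^2 / 2), the posterior of the noise
    variance under the Jeffreys prior 1/sigma^2 (a sketch of the update)."""
    m = len(residual)
    scale = sum(v * v for v in residual) / 2.0
    # If g ~ Gamma(shape, 1), then scale / g ~ InvGamma(shape, scale).
    return scale / rng.gammavariate(m / 2.0, 1.0)

rng = random.Random(4)
residual = [0.1] * 400                 # toy residual, ||r||^2 = 4.0
draws = [sample_sigma2(residual, rng) for _ in range(5000)]
mean_draw = sum(draws) / len(draws)
```

Here IG(a, b) has mean b/(a − 1), so the sample mean should sit near 2.0/199 ≈ 0.01, close to the empirical residual variance, as expected.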

## V Experiments

In this section we present simulation results that compare the proposed semi-blind Bayesian deconvolution algorithms
of Section IV with the non-blind method [Dobigeon2009a], the AM algorithm [Herrity2008b], and
other blind deconvolution methods. Here a nominal PSF was selected that corresponds to the mathematical MRFM
point response model proposed by Mamin *et al.* [Mamin2003].

Using the MCMC algorithm described in Sec. IV, the MMSE estimators of the image and PSF are approximated by ensemble averages over the samples generated after the burn-in period. The joint MAP estimator is selected among the drawn samples, after the stationary distribution is reached, as the sample that maximizes the posterior distribution [Marin2007].

**Table I.** Physical parameters of the MRFM setup used to compute the nominal PSF [Mamin2003].

| Description | Name | Value |
| --- | --- | --- |
| Amplitude of external magnetic field | | |
| Value of the field in the resonant slice | | |
| Radius of tip | | |
| Distance from tip to sample | | |
| Cantilever tip moment | | |
| Peak cantilever oscillation | | |
| Maximum magnetic field gradient | | |

### V-A Simulation on simulated sparse images

We performed simulations of MRFM measurements for PSF and image models similar to those described in Dobigeon *et al.* [Dobigeon2009a]. The signal-to-noise ratio was set to dB. Several synthetic sparse images, one of which is depicted in Fig. 1(a), were used to produce the data and were estimated using the proposed Bayesian method. The assumed PSF was generated following the physical model described in Mamin *et al.* [Mamin2003], with the physical parameters tuned to the values displayed in Table I. This yields the convolution kernel represented in Fig. 2(a). We assume that the true PSF comes from the same physical model, but with the radius of the tip and the distance from tip to sample mis-specified. This leads to the convolution kernel depicted in Fig. 2(b). The observed measurements y are shown in Fig. 1(b).

The proposed algorithm requires the definition of the basis vectors κ_k, k = 1, …, K, that span a subspace representing possible perturbations Δκ. We empirically determined this basis using the following PSF variational eigendecomposition approach. A set of experimental PSFs was generated following the model described in Mamin *et al.* [Mamin2003], with the relevant physical parameters randomly drawn according to Gaussian distributions centered at their nominal values.¹ Then a standard principal component analysis (PCA) of the residuals between these PSFs and the nominal PSF was used to identify the principal axes that are associated with the basis vectors κ_k. The necessary number of basis vectors, here 4, was determined empirically by detecting a knee in the scree plot shown in Fig. 3. The first four eigenfunctions, corresponding to the four largest eigenvalues, explain most of the observed perturbations. The 4 principal basis-vector patterns are depicted in Fig. 4.

¹ We used a PSF generator provided by Dan Rugar’s group at IBM [Mamin2003]. The variances of the Gaussian distributions were carefully tuned so that their standard deviations produce a minimal volume ellipsoid that contains the set of valid PSFs of the form specified in [Mamin2003].

The proposed semi-blind Bayesian reconstruction algorithm was applied to estimate both the sparse image and the PSF coefficients c_k, using the prior in (6). From the observation shown in Fig. 1(b), the PSF estimated by the proposed algorithm is shown in Fig. 5(a) and is in good agreement with the true one. The corresponding maximum *a posteriori* (MAP) estimate of the unknown image is depicted in Fig. 6(a). The obtained coefficients of the PSF eigenfunctions are close to the true coefficients (Fig. 7).
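The basis-construction step (perturb the nominal PSF, subtract it, run PCA on the residuals) can be sketched as follows. For brevity this pure-Python sketch extracts only the leading principal component by power iteration, whereas the text above retains four; the kernel, perturbation direction, and noise level are all illustrative:

```python
import random

def leading_pc(residuals, iters=200):
    """Leading principal component of residual vectors via power
    iteration on the (uncentered) scatter matrix R^T R."""
    n = len(residuals[0])
    v = [1.0] + [0.0] * (n - 1)          # simple deterministic start
    for _ in range(iters):
        # w = (R^T R) v, computed as sum_l (r_l . v) r_l
        w = [0.0] * n
        for r in residuals:
            dot = sum(a * b for a, b in zip(r, v))
            for i, a in enumerate(r):
                w[i] += dot * a
        norm = sum(u * u for u in w) ** 0.5
        v = [u / norm for u in w]
    return v

rng = random.Random(5)
direction = [0.1, -0.2, 0.1]             # dominant perturbation mode (toy)
# Residuals of simulated perturbed PSFs about the nominal one:
# random weight along the dominant mode, plus small noise.
residuals = []
for _ in range(100):
    wgt = rng.gauss(0.0, 1.0)
    residuals.append([wgt * d + rng.gauss(0.0, 0.01) for d in direction])
pc1 = leading_pc(residuals)
```

The recovered unit vector aligns (up to sign) with the dominant perturbation mode, which is exactly the role the basis vectors κ_k play in (3).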

### V-B Comparison to Other Methods

For comparison to a non-blind method, Fig. 6(b) shows the estimate obtained using the Bayesian non-blind method [Dobigeon2009a] with a mismatched PSF. Fig. 6(c) shows the estimate generated by the AM algorithm. The nominal PSF described in Section V-A is used in the AM algorithm and hereafter in the other semi-blind algorithms, and the parameter values of the AM algorithm were set empirically according to the procedure in [Herrity2008b]. Our proposed algorithm appears to outperform the others (Fig. 6) while preserving fast convergence (Fig. 7).

Quantitative comparisons were obtained by generating different noise realizations in independent trials for a fixed true image. Here, six different true images with six corresponding sparsity levels were tested. Fig. 8 presents histograms of the results for the six image sets under two error criteria; both indicate that our method performs better and is more stable than the other two methods.

Fig. 9 shows reconstruction error performance for several measures of error used in Ting *et
al.* [Ting2009] and Dobigeon *et al.* [Dobigeon2009a] to compare different reconstruction algorithms
for sparse MRFM images. Notably, compared to the AM algorithm, which aims to compensate for the ‘blindness’ of the unknown PSF,
and the previous non-blind method, our method reveals a significant performance gain under most of the investigated
performance criteria and sparsity conditions.

In addition to the AM and non-blind comparisons shown in Fig. 8, we made direct comparisons
between our sparse MRFM reconstruction method and several state-of-the-art blind image reconstruction methods
[Amizic2010, Almeida2010, Tzikas2009, Fergus2006, Shan2008]. In all cases the algorithms were
initialized with the nominal, mismatched PSF and were applied
to a sparse MRFM-type image similar to the one in Fig. 1.
For a fair comparison, we made a sparse-prior modification in the image model of the other algorithms. The total variation (TV) based prior for the PSF suggested by Amizic *et al.* [Amizic2010] was also implemented. The PSF obtained from this method was considerably worse than the one estimated by our proposed method (see Fig. 5(b)), resulting in a very poor quality image deconvolution.²

² Because this PSF is wrongly estimated and close to a delta function, the image estimate looks similar to a denoised version of the observation, so we omit the image estimate.

The recent blind deconvolution method proposed by Almeida *et al.* [Almeida2010] utilizes the ‘sharp edge’ property of natural images, with initial high regularization, in order to effectively evaluate the PSF. This iterative approach sequentially decreases the regularization parameter to reconstruct fine details of the image. Adapted to sparse images, this method performs worse than our method in terms of both image and PSF estimation errors. The PSF and image estimates from Almeida’s method are presented in Fig. 5(c) and Fig. 6(d), respectively.

Tzikas *et al.* [Tzikas2009] use a PSF model similar to ours, based on basis kernels. However, no sparse image prior was assumed in [Tzikas2009], making it unsuitable for sparse reconstruction problems such as the MRFM problem considered in this paper. For a fair comparison, we applied the suggested PSF model [Tzikas2009] along with our sparse image prior. The results of PSF and image estimation and the performance graph are shown in Fig. 5(d), Fig. 6(e), and Fig. 9, respectively. In terms of PSF estimation error, our algorithm outperforms the others.

We also compared against the mixture-model-based algorithm of Fergus *et al.* [Fergus2006] and the related method of Shan *et al.* [Shan2008], which were proposed for blind deconvolution of camera shake/motion blurs and do not incorporate any sparsity penalization. When applied to the sparse MRFM reconstruction problem, the algorithms of [Fergus2006] and [Shan2008] performed very poorly (they produced divergent or trivial solutions; not shown due to space limitations). This poor performance is likely due to the fact that the shape of the MRFM PSF and the sparse image model are significantly different from those arising in blind deconvolution of camera shake/motion blurs. The generalized PSF model of [Fergus2006] and [Shan2008] combined with the sparse image prior is Tzikas’ model [Tzikas2009], which is described above.

We used the Iterative Shrinkage/Thresholding (IST) algorithm with the true PSF to obtain a performance bound for our myopic reconstruction algorithm. The IST algorithm effectively reconstructs images under a sparsity constraint [Daubechies2004]. From Fig. 9(b), the performance of our method is as good as that of the oracle IST. In Table II we present comparisons of the computation time³ of the proposed sparse semi-blind Bayes reconstruction with that of several other algorithms.

³ Matlab was used on an HP Z200 (quad-core, 2.66 GHz) platform under Windows 7 Enterprise.

**Table II.** Computation time comparison.

| Method | Time |
| --- | --- |
| Proposed method | 19.06 |
| Bayesian non-blind [Dobigeon2009a] | 3.61 |
| IST [Daubechies2004] | 0.09 |
| AM [Herrity2008b] | 0.40 |
| Almeida’s method [Almeida2010] | 5.63 |
| Amizic’s method [Amizic2010] | 5.69 |
| Tzikas’ method [Tzikas2009] | 20.31 |
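For reference, the oracle IST baseline mentioned above can be sketched on a toy 1-D deconvolution problem. The kernel, signal, threshold, and step size are illustrative; the scheme is a plain gradient step on the residual followed by soft thresholding and clipping to enforce positivity (cf. [Daubechies2004]):

```python
def conv1d(kernel, x):
    """1-D 'full' convolution H x."""
    out = [0.0] * (len(kernel) + len(x) - 1)
    for i, k in enumerate(kernel):
        for j, v in enumerate(x):
            out[i + j] += k * v
    return out

def conv1d_adjoint(kernel, r, n):
    """Adjoint H^T applied to a length-(n + len(kernel) - 1) residual."""
    return [sum(k * r[i + j] for i, k in enumerate(kernel)) for j in range(n)]

def ist(y, kernel, n, lam, step, iters):
    """Iterative shrinkage/thresholding with positivity: a gradient step on
    ||y - Hx||^2, then soft threshold at step*lam, then clip at zero."""
    x = [0.0] * n
    for _ in range(iters):
        r = [yy - zz for yy, zz in zip(y, conv1d(kernel, x))]
        g = conv1d_adjoint(kernel, r, n)
        x = [max(0.0, xi + step * gi - step * lam) for xi, gi in zip(x, g)]
    return x

kernel = [0.2, 0.6, 0.2]                       # "true" PSF (oracle setting)
x_true = [0.0, 0.0, 1.0, 0.0, 0.0, 2.0, 0.0, 0.0]
y = conv1d(kernel, x_true)                     # noiseless toy measurement
x_hat = ist(y, kernel, len(x_true), lam=0.0005, step=1.0, iters=500)
```

With the true PSF and a small threshold, the two spikes are recovered almost exactly, which is why IST with the oracle PSF serves as a lower bound on reconstruction error.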

### V-C Application to 3D MRFM image reconstruction

In this section, we apply the semi-blind Bayesian reconstruction algorithm to the D MRFM tobacco virus data [Degen2009] shown in Fig. 10. The necessary modification for our algorithm to apply to D data is simple because the extension of our D pixel-wise sampling method requires only one more added dimension to extend to 3D basis vectors and 3D convolution kernel. As seen in Appendix A, the voxel-wise update of a vectorized image can be generalized to D data. This scalability is another benefit of our algorithm. The implementation of AM algorithm is impractical due to its slow convergence rates [Herrity2008b]. Here we only consider Bayesian methods. The additive noise is assumed Gaussian consistently with [Rugar1992, Degen2009], so the noise model in paragraph III-A is applied here.

The PSF basis vectors were obtained from a standard PCA, and the number of principal components (PCs) in the PSF perturbation was selected as 4 by detecting the knee in a scree plot. The same interpolation method as used in [Dobigeon2009a] was adopted to account for unequal spatial sampling rates in the supplied data for the PSF domain and the image domain.

Along the z-axis, the grid in PSF signal space is 3 times finer than the observation sampling density, because the PSF sampling rate along the z-axis is 3 times higher than the data sampling rate. To interpolate this lower-sampled data, we implemented a version of the Bayes MC reconstruction that compensates for unequal projection sampling using the interpolation procedure of Dobigeon *et al.* [Dobigeon2009a].

To demonstrate that the proposed 3D MCMC semi-blind reconstruction algorithm is capable of reconstruction in the presence of significant MRFM PSF mismatch, we first applied it to a simulated version of the experimental data shown in Fig. 10. We used the scanning electron microscope (SEM) virus image reported in Degen *et al.* [Degen2009] to create a synthetic 3D MRFM virus image, one slice of which is shown in Fig. 11(a).
This 3D image was then passed through the MRFM forward model, shown in Fig. 12(a), and 10 dB Gaussian noise was added. The mismatched PSF (Fig. 12(b)) was used to initialize our proposed semi-blind reconstruction algorithm. After 40 iterations, the algorithm substantially reduced the initial normalized PSF error. Fig. 11(b) and Fig. 12(c) show the estimated image and the estimated PSF, respectively.

We next applied the proposed semi-blind reconstruction algorithm to the actual experimental data shown in Fig. 10, for which there is ground truth neither on the MRFM image nor on the MRFM PSF. The image reconstruction results are shown in Fig. 13. The small difference between the nominal PSF and the estimated PSF indicates that the estimated PSF is close to the assumed PSF. We empirically validated this small difference by verifying that multiple runs of the Gibbs sampler gave low-variance PSF residual coefficients. We conclude from this finding that the PSF model of Degen *et al.* [Degen2009] is in fact nearly Bayes-optimal for this experimental data. The semi-blind image reconstruction shown in Fig. 13 is similar to the image reconstruction in Degen *et al.* [Degen2009] obtained by applying the Landweber reconstruction algorithm with the nominal PSF.

Using the MCMC-generated posterior distribution obtained from the experimental MRFM data, we computed confidence intervals, the posterior mean, and the posterior variance of the pixel intensities of the unknown virus image. The posterior mean and variance are presented in Fig. 14 for selected pixels. In addition, to demonstrate the match between the estimated region occupied by the virus particle and the actual region, we evaluated Bayesian p-values for object regions. The Bayesian p-value for a specific region having non-zero intensity is the posterior probability, computed via indicator functions at each voxel in the region, that the region contains non-zero intensity. Assuming voxel-wise independence, the p-values are easily computed from the posterior distribution and provide a level of a posteriori confidence in the statistical significance of the reconstruction. We found that over 95% of the Bayesian p-values were greater than 0.5 for the non-zero regions of the reconstruction.

### V-D Discussion

Joint identifiability is a common issue underlying all blind deconvolution methods (e.g., scale ambiguity). Even though the unicity of our solution is not proven, given the conditions that 1) the perturbation basis does not include a delta-function kernel, and 2) the PSF solution is restricted to the linear span of the basis elements, the MAP criterion associated with (17) promises a reasonable solution that avoids trivial degeneracies. Due to this restriction and the sparse nature of the image to be estimated, we can reasonably expect that the solution provided by the algorithm is close to the true PSF. A study of unicity of the solution would be worthwhile but is beyond the scope of this paper, as it would require study of the complicated and implicit fixed points of the proposed Bayes objective function.

Note that the proposed sparse image reconstruction algorithm can be extended to exploit sparsity in other domains, such as the wavelet domain. In this case, if we define a transformation matrix mapping the image to its transform coefficients, the proposed semi-blind approach can be applied to reconstruct the transformed signal. However, instead of assigning the single-sided exponential distribution as the prior, a double-sided Laplacian distribution might be used to cover the negative values of the transform coefficients. The estimation procedures for the PSF coefficients, noise level, and hyperparameters would remain the same. For image estimation, the vector used in (22) would be replaced by the corresponding column of the transformation matrix.

## VI Conclusion

We have proposed an extension of the non-blind Bayes reconstruction method of Dobigeon *et al.* [Dobigeon2009a] that simultaneously estimates a partially known PSF and the unknown but sparse image. The method uses a prior model on the PSF that reflects a nominal PSF and the uncertainty about it. In our algorithm, the parameters of the convolution kernel are estimated by a Metropolis-within-Gibbs algorithm, with an adaptive mechanism for tuning the random-walk step size for fast convergence. Our approach can be used to empirically evaluate the accuracy of assumed nominal PSF models in the presence of model uncertainty. In our sparse reconstruction simulations, we demonstrated that the semi-blind Bayesian algorithm has improved performance compared to the AM reconstruction, other blind deconvolution algorithms, and the non-blind Bayes method under several criteria.

Possible extensions of the proposed method include enforcing sparsity constraints on the estimated PSF and the eigenfunctions by using sparse-PCA-type algorithms. Also, even with the selective sampling strategy that speeds up the sampling, the MCMC methods are slower than non-stochastic methods. This will be addressed in future work.

## Acknowledgement

The authors gratefully acknowledge Dr. Dan Rugar for providing the tobacco virus data and for his insightful comments on this work. They also thank the reviewers for their comments, which helped significantly improve the quality of this paper.

## Appendix A Fast recursive sampling strategy

In iterative MRFM reconstruction algorithms such as AM and the proposed Bayesian method, repeated evaluations of the transformation T(κ, x) can be computationally costly. For example, at each iteration of the proposed Bayesian myopic deconvolution algorithm, one must generate each x_i from its conditional distribution in (21), which requires the calculation of T(κ, x) with the ith element of x replaced by its candidate value. Moreover, sampling according to the conditional posterior distributions of σ² and c in (26) and (23) requires computation of ‖y − T(κ(c), x)‖².

By exploiting the bilinearity of the transformation , we can reduce the complexity of the algorithm. We describe below a strategy, similar to those presented in [Dobigeon2009a, App. B], which only requires computation of at most times. First, let denote the identity matrix and its th column. In a first step of the analysis, the vectors ()

(27) |

and vectors ()

(28) |

are computed. Then one can compute the quantities and at any stage of the Gibbs sampler without evaluating , based on the following decomposition:

(29) |

The resulting procedure to update the th coordinate of the vector is described in Algorithm 3 below.

At iteration of the Gibbs sampler, for , update the th coordinate of the vector via the following steps:

Note that in step 7 of the algorithm above, is computed recursively. Once all the coordinates have been updated, the current can be used directly to sample according to the posterior distribution of the noise variance in (26). Moreover, this quantity can be used to sample according to the conditional posterior distribution of in (23). More precisely, the quantity in the acceptance probability (25) can be evaluated recursively as follows:

(30) |
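The recursion above can be illustrated with a small sketch. The code below maintains the residual between the data and the current reconstruction incrementally, so that updating a single image coordinate costs one column operation instead of a full re-convolution; all names (`y`, `H`, `x`) are generic stand-ins for the paper's measurement vector, convolution matrix, and image, and the placeholder Gaussian draw stands in for the coordinate's actual conditional posterior.

```python
import numpy as np

def gibbs_sweep_incremental(y, H, x, rng):
    """One Gibbs sweep over the image coordinates.

    Maintains the residual e = y - H @ x incrementally so that each
    single-coordinate update costs one column operation instead of a
    full matrix-vector product, mirroring the recursive strategy of
    Appendix A.  Names are illustrative, not the paper's notation.
    """
    e = y - H @ x                       # computed once per sweep
    for i in range(x.size):
        h_i = H[:, i]                   # column = response of pixel i
        old = x[i]
        # placeholder draw; the actual algorithm samples from the
        # coordinate's conditional posterior distribution
        new = rng.normal(old, 0.1)
        # recursive residual update: no full re-convolution needed
        e -= h_i * (new - old)
        x[i] = new
    return x, e
```

After a full sweep, the maintained residual equals the exact residual of the updated image, which is what makes the recursion safe to reuse when sampling the noise variance.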

## Appendix B Sparsity enforcing selective sampling

Since we have estimated the ‘overall sparsity’, , of from (19), we can expedite the sampling procedure of by selectively sampling only the significant pixels of . As a result, we expect of the pixel domain of to be zero; these pixels need not be re-sampled.

At time , in order to approximate the quantile of in (21), we evaluate the quantile value of the previously obtained . This approximation accelerates the computation because the exact calculation of requires sampling all of the ’s. Let and . When for from (21) is less than , then is not updated, or equivalently is set to zero. Because MCMC sampling is computationally expensive, especially for large images, this strategy can be restricted to the burn-in period to save computation. In our experiments, selective sampling of applied after the 3rd or 4th iteration produced results equally good as those of conventional MCMC sampling, while reducing computation time by for non-blind sparse reconstruction with a fixed PSF and by for semi-blind sparse reconstruction.
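A minimal sketch of this selective-sampling rule is given below, assuming the estimated sparsity is expressed as the fraction of nonzero pixels. Pixels whose previous magnitude falls below the corresponding quantile are frozen at zero and skipped; the function name and arguments are illustrative, not the paper's notation.

```python
import numpy as np

def selective_update_mask(x_prev, sparsity_level):
    """Return a boolean mask of pixels worth re-sampling.

    `sparsity_level` is the estimated fraction of nonzero pixels (the
    'overall sparsity' of the image).  Pixels below the matching
    magnitude quantile of the previous sample are left at zero, so
    only the significant pixels are re-drawn at the next iteration.
    """
    q = np.quantile(np.abs(x_prev), 1.0 - sparsity_level)
    return np.abs(x_prev) >= q
```

With a sparsity level of 0.5, roughly half of the pixels (the largest in magnitude) are re-sampled and the rest are skipped, which is where the reported computational savings come from.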

## Appendix C Adaptive tuning of an acceptance rate in the random-walk sampling

For efficient sampling of from the desired distribution , we need to properly tune the acceptance rate of the samples drawn from the proposal distribution. A careful selection of the step size is critical for convergence of the method. If the step size is too large, most proposals will be rejected and the sampler will be inefficient. On the other hand, if the step size is too small, most random-walk moves are accepted, but these moves cover the support of the desired distribution slowly, and the method is once again inefficient.

The transition density of Metropolis–Hastings sampling is , where is the proposal density from and is the acceptance probability for the move from to . Here, for simplicity, we write without a subscript. We set to be a Gaussian density in , denoted , with mean and variance , which produces a random-walk sample path. Since is symmetric, , as derived in (24). The acceptance probability from a parameter value is then . The acceptance rate with a scale parameter , acting as a step size, can be expressed as .

We evaluate these integrals using Monte Carlo methods: with , and with . In practice, this empirical version of the integral is evaluated as

(31) |

after the burn-in period. Therefore, we can evaluate the acceptance rate with by averaging the Boolean variables over a time-frame window of length with realizations . In short, we iteratively update to achieve an appropriate acceptance rate , as described in Algorithm 4:

In practice, we fix the variance of the instrumental distribution at the end of the burn-in period. Consequently, the transition kernel is fixed thereafter, which guarantees both ergodicity and a stationary distribution. In our experiments, we set a conservative acceptance range , following [Robert1999], and . This strategy can also be applied to the direct parameter estimation described in Appendix D.
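The adaptation rule can be sketched as follows. A window of recent accept/reject outcomes gives an empirical acceptance rate, and the step size is scaled up or down to push that rate back into the target range; the adaptation runs only during burn-in, so the final kernel is fixed and ergodicity is preserved. The target range and scaling factor below are illustrative choices, not the paper's exact values.

```python
import numpy as np

def tune_step_size(step, recent_accepts, target=(0.3, 0.6), factor=1.5):
    """Adapt the random-walk standard deviation from a window of
    accept/reject booleans (burn-in only, so the final transition
    kernel stays fixed).  `target` and `factor` are assumed values.
    """
    rate = np.mean(recent_accepts)
    if rate > target[1]:        # accepting too often: steps too small
        step *= factor
    elif rate < target[0]:      # rejecting too often: steps too large
        step /= factor
    return step
```

Calling this every few iterations during burn-in drives the acceptance rate into the chosen range without any manual tuning of the proposal variance.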

## Appendix D Direct sampling of PSF parameter values

As described in Section I, direct estimation of the PSF parameters in MRFM experiments is difficult
because of the nonlinearity of and the slow evaluation of given a
candidate value . However, if is simple and can be evaluated quickly, then direct sampling
of the parameter values can be performed. To apply this sampling method, instead of calculating a linearized convolution
kernel , we evaluate the exact model PSF in
(23) and (25). The proposed parameter vector
then replaces the coefficient vector , and the updated PSF is used in
the estimation of the other variables. This strategy turns out to be similar to the approach adopted by Orieux *et
al.* in [Orieux2010].
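A single step of this direct sampler can be sketched as a random-walk Metropolis move on the parameter vector. The `logpost` callable stands in for the log posterior of the parameters, whose evaluation requires regenerating the exact model PSF; as noted above, this is only practical when that evaluation is cheap. All names are generic placeholders, not the paper's notation.

```python
import numpy as np

def mh_psf_parameter_step(theta, logpost, step, rng):
    """One random-walk Metropolis step on a PSF parameter vector.

    `logpost(theta)` must return the log posterior of the parameters,
    which internally regenerates the exact model PSF for the candidate
    value.  With a symmetric Gaussian proposal, the Hastings ratio
    reduces to the posterior ratio.
    """
    prop = theta + step * rng.standard_normal(theta.shape)
    log_alpha = logpost(prop) - logpost(theta)   # log acceptance ratio
    if np.log(rng.uniform()) < log_alpha:
        return prop, True                        # move accepted
    return theta, False                          # move rejected
```

Combined with the adaptive step-size rule of Appendix C, this replaces the linearized-kernel update whenever the exact PSF model is cheap enough to evaluate inside the loop.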