# Posterior Mean Super-resolution with a Causal Gaussian Markov Random Field Prior

We propose a Bayesian image super-resolution (SR) method with a causal Gaussian Markov random field (MRF) prior. SR is a technique for estimating a spatially high-resolution image from multiple given low-resolution images. An MRF model with a line process supplies a preferable prior for natural images with edges. We improve the existing image transformation model, the compound MRF model, and its hyperparameter prior model. We also derive the optimal estimator -- not the joint maximum a posteriori (MAP) or the marginalized maximum likelihood (ML), but the posterior mean (PM) -- from the objective function of the L2-norm (mean square error) -based peak signal-to-noise ratio (PSNR). Point estimates such as MAP and ML are generally not stable in ill-posed high-dimensional problems because of overfitting, while PM is a stable estimator because all the parameters in the model are evaluated as distributions. The estimator is numerically determined by using variational Bayes. Variational Bayes is a widely used method that approximately determines a complicated posterior distribution, but it is generally hard to use because it needs a conjugate prior. We solve this problem with simple Taylor approximations. Experimental results show that the proposed method is more accurate than, or comparable to, existing methods.


## I Introduction

Super-resolution (SR) is an information processing technique that makes it possible to infer a spatially high-resolution (HR) image of a scene from corresponding multiple low-resolution (LR) images that are affected by warping, blurring, and noise. SR can be applied to a variety of images; e.g., still images extracted from several sequential video frames. SR requires the registration of the LR images in addition to the image restoration of the registered LR images. Since the earliest work by Tsai and Huang [1], SR has been achieved using various methods [3, 10, 6, 8, 5, 2, 9, 4, 7], and good overviews of these methods are given in [11, 13, 12, 14, 15, 16]. Generally, SR is an ill-posed inverse problem because inverting the blur process without amplifying the effect of the noise is difficult [13]. In other words, the degrees of freedom of the HR image and the pixel-wise observation noise are always higher than the dimensionality of the observed LR images, so complete determination of an HR image is impossible. Therefore, the HR image is frequently inferred as the most preferable image within the framework of probabilistic information processing, and we handle SR using this framework in this paper. Probabilistic information processing has three key features: 1) the model, 2) the objective function, and 3) the optimization method. In the SR problem, the model includes the observation model and the prior model. The observation model consists of warping, blurring, downsampling, and noise models. The prior model, necessary for the Bayesian framework, mainly consists of an HR image prior, and sometimes includes both the hyperparameter prior for the HR image prior and the registration prior. The objective function evaluates how good or bad an estimator is. The estimator usually represents the inferred HR image, and sometimes includes auxiliary parameters; e.g., the registration parameters and edge information. The optimization method numerically maximizes/minimizes the objective function and determines the estimator. An optimization method is not necessary for simple problems in which an analytical exact solution can be obtained. In probabilistic information processing, SR can be categorized according to these three key features.

To deal with warping, blurring, and downsampling, a linear transformation model is frequently used [3, 6, 8, 10]. Warping is usually limited to planar rotation and parallel translation. Blurring is defined by using a point spread function (PSF); a square or Gaussian type PSF is common. Downsampling denotes sampling from an HR image to construct an LR image. Downsampling sometimes includes anti-aliasing. Since these three transformations are linear, they can be combined into a single transformation matrix. As for the noise model, pixel-independent additive white Gaussian noise (AWGN) is usually employed.

The Bayesian framework, especially the HR image prior, is quite useful for SR. The HR image prior provides appropriate smoothness between neighboring pixel luminances. A common type of HR image prior imposes an L2-norm penalty on differences between horizontally and vertically adjacent pixel luminances (the first derivative). The L1-norm of the first derivative is sometimes used, and it has the advantage of robust inference against outliers. The total variation (TV) prior [10] employs the L1-norm of the gradient vector. The Huber prior [5] is a mixture prior of L1- and L2-norms. The SAR model [2, 17, 9] employs the response of a two-dimensional Laplacian filter (the second derivative). The Gaussian process prior [3] has neighboring pixels spread according to a Gaussian distribution. Besides the degree of smoothness between neighboring pixels, information regarding the discontinuity, or equivalently, the edges or line process, is also useful for inference. A common type of prior implementing edges is the compound Markov random field (MRF) prior, which was introduced by Geman & Geman [18] and is widely used [4, 6, 8]. With respect to the compound MRF prior [19, 20], the normalizing constant, or equivalently, the partition function, is usually difficult to calculate because it has an exponential calculation cost with respect to the dimensionality of the line process. Recently, Kanemura et al. [6, 8] introduced a "causal" type of Gaussian MRF prior whose calculation cost is polynomial. We try to improve this prior in this paper.

The SR estimator should be derived from an objective function. As the objective function, a posterior distribution has been widely employed. Since the posterior distribution usually includes both the HR image and registration parameters, the joint maximum a posteriori (MAP) solution [2] is a suitable estimator for this objective function. Other than the joint MAP, the use of the marginalized maximum likelihood (ML) [3, 6] or marginalized MAP [5] has been proposed. Tipping et al. [3] and Kanemura et al. [6, 8] determine the registration parameters by using ML inference, where the HR image is marginalized out, and determine the HR image by using MAP inference. Pickup et al. [5] determine the HR image by using MAP inference, wherein the registration uncertainties are marginalized out, and assume that the registration parameters are pre-registered by using standard registration techniques. Marginalized ML is also called type-II ML, evidence approximation, or empirical Bayes. Marginalized ML has no registration prior, unlike marginalized MAP. Pickup et al. [5] reported that marginalized MAP is superior to both joint MAP and marginalized ML. We evaluate the accuracy of SR methods in terms of the L2-norm (mean square error) -based peak signal-to-noise ratio (PSNR). Therefore, we think it is natural to employ PSNR as the objective function. For this objective function, the posterior mean (PM) is a suitable estimator. The variational Bayes [21] approach [10] seems to approximately determine the PM of the HR image, although the authors assume some registration parameters are known and use point-estimate model parameters obtained by ML inference. To determine the exact PM of the HR image, all parameters other than the HR image should be marginalized out over the joint posterior distribution.

The type of optimization method to use is not as substantial a problem as the choice of model and objective function, but it is still important. Since almost all good estimators cannot be exactly determined because of difficult analytical integration or an exponential calculation cost, some approximation methods need to be introduced. Also, parameter tuning is necessary in many numerical optimization methods; e.g., of the initial value and the step-width settings in gradient methods. Specifically, in early work done on image restoration, an annealing method was used for the joint MAP solution [18, 22]. For marginalized ML and marginalized MAP solutions, the scaled conjugate gradients algorithm was used [3, 5]. In recent work, the variational expectation-maximization (EM) algorithm has been applied, which includes the gradient method in the M step [6, 8]. The variational Bayes approach has also been applied [10]. This method includes nested optimization of the majorization-minimization approach. This majorization-minimization approach seems to affect both the HR image prior and the estimator. Specifically, it modifies the TV prior to include a discontinuity parameter (called local spatial activity). In addition, this parameter is point-estimated when the HR image is inferred.

In this paper, we propose a new SR method that employs a “causal” Gaussian MRF prior and utilizes variational Bayes to calculate the optimal estimator, PM, with respect to the objective function of the L2-norm-based PSNR. This is a straightforward approach, but it was not proposed earlier possibly because an important limitation of variational Bayes is that a conjugate prior is needed. We solve this problem through simple Taylor approximations. In Section II, we define models, where we introduce a novel unified warping, blurring and downsampling model, an improved HR image prior, an improved hyperparameter prior, and a registration prior. In Section III, we employ PSNR as the objective function and derive the optimal estimator, PM, from this objective function. In Section IV, we determine the PM by using variational Bayes and Taylor approximations. In Section V, we evaluate the proposed method by comparing it with existing methods. We discuss the proposed method in Section VI and conclude in Section VII.

## II Model

### II-A Definitions

First, we define the gamma, Bernoulli, and Gaussian distributions used in this paper:

$$
\begin{aligned}
\mathrm{Gamma}(x;a,b) &\equiv \frac{b^{a}}{\Gamma(a)}\,x^{a-1}\mathrm{e}^{-bx} \quad (x>0),\\
\mathrm{Bernoulli}(x;\mu) &\equiv \mu^{x}(1-\mu)^{1-x} \quad (x\in\{0,1\}),\\
\mathcal{N}(\mathbf{x};\boldsymbol{\mu},\boldsymbol{\Sigma}) &\equiv |2\pi\boldsymbol{\Sigma}|^{-\frac{1}{2}}\,\mathrm{e}^{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^{\top}\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu})} \quad (\mathbf{x}\in\mathbb{R}^{d}).
\end{aligned}
$$

Here, $\Gamma(\cdot)$ is the gamma function, $|\cdot|$ denotes the determinant of a given matrix, superscript $\top$ denotes the transpose, $\mathbb{R}$ is the real number field, and $d$ is the dimension of $\mathbf{x}$. The logistic function and the Kullback-Leibler (KL) divergence from a distribution $p$ to a distribution $q$ are respectively defined as

$$
\begin{aligned}
\mathrm{logistic}(x) &\equiv \frac{1}{1+\mathrm{e}^{-x}},\\
D_{\mathrm{KL}}(p(\mathbf{x})\,\|\,q(\mathbf{x})) &\equiv \left\langle \ln\frac{p(\mathbf{x})}{q(\mathbf{x})} \right\rangle_{p(\mathbf{x})},
\end{aligned}
$$

where the angle brackets $\langle\cdot\rangle_{p(\mathbf{x})}$ denote the expectation with respect to the distribution $p(\mathbf{x})$. Additionally, $\mathrm{tr}$ denotes the trace of a given matrix, $\mathrm{diag}[\cdot]$ denotes a diagonal matrix, $\mathbf{I}$ is an identity matrix of appropriate size, and $\mathbf{0}$ is a zero vector or a zero matrix of appropriate size. All the vectors in this paper are column vectors. The notation $\|\cdot\|_2$ denotes the L2-norm of a given vector. At this point, these variables have absolutely nothing to do with the variables that appear later.
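The two definitions above are easy to check numerically. Below is a minimal illustrative sketch (the function names `logistic` and `kl_divergence` are ours; the KL divergence here is for discrete distributions given as probability lists):

```python
import math

def logistic(x):
    # logistic(x) = 1 / (1 + e^{-x})
    return 1.0 / (1.0 + math.exp(-x))

def kl_divergence(p, q):
    # D_KL(p || q) = <ln p(x)/q(x)>_{p(x)}, here for discrete distributions;
    # terms with p(x) = 0 contribute nothing to the expectation
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)
```

Note that the KL divergence is asymmetric, `kl_divergence(p, q) != kl_divergence(q, p)` in general, which is exactly why the two minimization directions discussed in Section IV differ.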

### II-B Observation Model

Our task is to estimate an HR grayscale image, $\mathbf{x}$, from the observed multiple LR grayscale images, $\mathbf{Y}\equiv\{\mathbf{y}_l\}_{l=1}^{L}$. Images $\mathbf{x}$ and $\mathbf{y}_l$ are regarded as lexicographically stacked vectors. The number of pixels of each LR image, $N_{\mathbf{y}}$, is assumed to be less than that of the HR image, $N_{\mathbf{x}}$; i.e., $N_{\mathbf{y}}<N_{\mathbf{x}}$. We do this estimation using an SR technique whose resolution enhancement factor is $\alpha$. Although we define the range of a pixel luminance value as infinite, we use $-1$ for black, $1$ for white, and values between $-1$ and $1$ for gradual gray.

The image observation process is modeled as shown in Fig. 1; the HR image $\mathbf{x}$ is geometrically warped, blurred, downsampled, and corrupted by noise to form the observed LR image $\mathbf{y}_l$:

$$
\mathbf{y}_l \equiv \mathbf{W}(\boldsymbol{\phi}_l)\mathbf{x} + \boldsymbol{\epsilon}_l, \tag{1}
$$

or, more strictly,

$$
p(\mathbf{Y}|\mathbf{x},\beta,\boldsymbol{\Phi}) \equiv \prod_{l=1}^{L}\mathcal{N}(\mathbf{y}_l;\mathbf{W}(\boldsymbol{\phi}_l)\mathbf{x},\beta^{-1}\mathbf{I}). \tag{2}
$$

Here, $\boldsymbol{\epsilon}_l$ is AWGN with precision (inverse variance) $\beta$, and $\mathbf{W}(\boldsymbol{\phi}_l)$ is the transformation matrix that is simultaneously used for warping, blurring, and downsampling. It is defined as

$$
\mathbf{W}(\boldsymbol{\phi}_l)_{j,i} \equiv \frac{\mathcal{N}(\vec{\chi}(\theta_l,\vec{o}_l,\vec{\zeta}_j,\vec{\xi}_i);\mathbf{0},\gamma_l^{-1}\mathbf{I})}{\sum_{i'}\mathcal{N}(\vec{\chi}(\theta_l,\vec{o}_l,\vec{\zeta}_j,\vec{\xi}_{i'});\mathbf{0},\gamma_l^{-1}\mathbf{I})}, \tag{3}
$$
$$
\vec{\chi}(\theta,\vec{o},\vec{\zeta},\vec{\xi}) \equiv \begin{bmatrix}\cos\theta & \sin\theta\\ -\sin\theta & \cos\theta\end{bmatrix}(\alpha\vec{\zeta}-\vec{o})-\vec{\xi}, \tag{4}
$$

where $i'$ ranges over the extent of the summation (explained in the next paragraph), and the vectors $\vec{\xi}_i$ and $\vec{\zeta}_j$ respectively denote the two-dimensional positions of the $i$-th pixel of the original HR image and the $j$-th pixel of the observed LR image. We define the center of each image as the origin, and the size of each HR pixel is $1$ by $1$; the positions $\vec{\xi}_i$ thus form a square grid of unit spacing centered at the origin. The parameters $\theta_l$ and $\vec{o}_l$ represent the warping parameters of the $l$-th LR image: the rotational motion parameter and the translational motion parameter, respectively. The Gaussian distribution in (3) represents a Gaussian PSF that defines the blur, and $\gamma_l$ represents its precision parameter. In this paper, we assume $\gamma_l$ also differs for each observed image. These transformation parameters are packed into $\boldsymbol{\Phi}$, which is defined as

$$
\boldsymbol{\Phi}\equiv\{\boldsymbol{\phi}_l\}_{l=1}^{L}, \qquad \boldsymbol{\phi}_l\equiv[\phi_{l,k}]_{k=1}^{4}\equiv[\theta_l,[\vec{o}_l]_h,[\vec{o}_l]_v,\gamma_l]^{\top}, \tag{5}
$$

where subscripts $h$ and $v$, respectively, denote horizontal and vertical positions on the image.

In previous works [3, 6, 8], the extent of the summation in (3) was defined as the extent of the HR image. According to this definition, however, the shape of the PSF is no longer Gaussian. For example, at the corner of the HR image, the shape is not omnidirectional but limited in a way such as that of a quadrant. In this paper, the extent of the summation is defined as infinite, and the luminance values outside the HR image are defined as $0$ (middle gray). This normalization term faithfully represents the Gaussian PSF. We also found that this normalization term is exactly given by using the elliptic theta function $\vartheta_3$, and we can rewrite $\mathbf{W}(\boldsymbol{\phi}_l)_{j,i}$ as

$$
\mathbf{W}(\boldsymbol{\phi}_l)_{j,i} = \frac{\mathcal{N}(\vec{\chi}(\theta_l,\vec{o}_l,\vec{\zeta}_j,\vec{\xi}_i);\mathbf{0},\gamma_l^{-1}\mathbf{I})}{\vartheta_3\!\big([\vec{\chi}(\theta_l,\vec{o}_l,\vec{\zeta}_j,\vec{\xi}_i)]_h,\mathrm{e}^{-2\pi^{2}\gamma_l^{-1}}\big)\,\vartheta_3\!\big([\vec{\chi}(\theta_l,\vec{o}_l,\vec{\zeta}_j,\vec{\xi}_i)]_v,\mathrm{e}^{-2\pi^{2}\gamma_l^{-1}}\big)}, \tag{6}
$$
$$
\vartheta_3(u,q)\equiv 1+2\sum_{n=1}^{\infty}q^{n^{2}}\cos 2n\pi u. \tag{7}
$$

The elliptic theta function includes an infinite series, but it is easily determined numerically because the convergence is quite fast. In (6), the normalization term (the denominator of the right-hand side) seems to depend on $i$ because $\vec{\chi}$ includes $\vec{\xi}_i$, but this is not true. Because the elliptic theta function is a periodic function with respect to the argument $u$ with period $1$, and $[\vec{\chi}]_h$ and $[\vec{\chi}]_v$ can only change in steps of size $1$ as $i$ varies in the horizontal and vertical directions, the normalization term has the same value with respect to $i$.
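The identity behind the theta-function normalization can be checked numerically: summing the Gaussian PSF over an integer grid reproduces the theta function via Poisson summation. The sketch below is illustrative only; the truncation counts `terms` and `shifts` are arbitrary choices of ours:

```python
import math

def theta3(u, q, terms=50):
    # elliptic theta function (7): theta3(u, q) = 1 + 2 sum_{n>=1} q^(n^2) cos(2 n pi u)
    return 1.0 + 2.0 * sum(q ** (n * n) * math.cos(2.0 * n * math.pi * u)
                           for n in range(1, terms))

def gaussian_comb(u, gamma, shifts=50):
    # direct (truncated) sum of N(u - n; 0, gamma^{-1}) over integer shifts n
    c = math.sqrt(gamma / (2.0 * math.pi))
    return sum(c * math.exp(-0.5 * gamma * (u - n) ** 2)
               for n in range(-shifts, shifts + 1))

# Poisson summation: sum_n N(u - n; 0, gamma^{-1}) = theta3(u, exp(-2 pi^2 / gamma))
```

Because `theta3` is periodic in `u` with period 1, the normalization takes the same value for every HR pixel index, which is the claim made in the paragraph above.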

### II-C HR Image Prior

Here, we introduce a "causal" Gaussian MRF prior for the HR image and additional latent variables. These latent variables are called the line process, which controls the local correlation among pixel luminances. The introduction of the latent variables enables explicit expression of the possible discontinuity in the HR image. The line process, $\boldsymbol{\eta}$, consists of binary variables $\eta_{i,j}\in\{0,1\}$ for all adjacent pixel pairs $i$ and $j$. Its size equals $N_{\boldsymbol{\eta}}$. We define the prior as

$$
\begin{aligned}
p(\mathbf{x},\boldsymbol{\eta}|\lambda,\rho,\kappa) &\equiv p(\mathbf{x}|\boldsymbol{\eta},\rho,\kappa)\,p(\boldsymbol{\eta}|\lambda) \tag{8}\\
&= \exp\Big[-\lambda\sum_{i\sim j}(1-\eta_{i,j})-\frac{\rho}{2}\sum_{i\sim j}\eta_{i,j}(x_i-x_j)^2-\frac{\kappa}{2}\|\mathbf{x}\|_2^2\\
&\qquad\quad +\frac{1}{2}\ln\Big|\frac{\mathbf{A}(\boldsymbol{\eta},\rho,\kappa)}{2\pi}\Big| + N_{\boldsymbol{\eta}}\ln\mathrm{logistic}(\lambda)\Big], \tag{9}
\end{aligned}
$$

where

$$
\begin{aligned}
p(\boldsymbol{\eta}|\lambda) &\equiv \prod_{i\sim j}\mathrm{Bernoulli}(\eta_{i,j};\mathrm{logistic}(\lambda)), \tag{10}\\
p(\mathbf{x}|\boldsymbol{\eta},\rho,\kappa) &\equiv \mathcal{N}(\mathbf{x};\mathbf{0},\mathbf{A}(\boldsymbol{\eta},\rho,\kappa)^{-1}), \tag{11}\\
\mathbf{A}(\boldsymbol{\eta},\rho,\kappa)_{i,j} &\equiv
\begin{cases}
\rho\sum_{k\sim i}\eta_{i,k}+\kappa, & i=j,\\
-\rho\,\eta_{i,j}, & i\sim j,\\
0, & \text{otherwise}.
\end{cases} \tag{12}
\end{aligned}
$$

Here, the summation $\sum_{i\sim j}$ is taken over all pairs of adjacent pixels. The notation $i\sim j$ means that the $i$-th and $j$-th pixels are adjacent in the upward, downward, leftward, and rightward directions. The line process switches the local characteristics of the prior. It indicates whether two adjacent pixels take similar values or independent values. When $\eta_{i,j}=1$, the $i$-th and the $j$-th pixels are strongly smoothed according to the quadratic penalty, whereas there is no smoothing when $\eta_{i,j}=0$. The hyperparameter $\lambda$ is an edge penalty parameter that prevents $\boldsymbol{\eta}$ from excessively taking edges. Note that $\lambda$ is restricted to positive values because a negative $\lambda$ leads to a reward rather than a penalty for taking edges. $\rho$ is a smoothness parameter that prevents the differences in adjacent pixel luminances from becoming large, and $\kappa$ is a contrast parameter that prevents $\mathbf{x}$ from taking an improperly large absolute value. On the other hand, in previous works [6, 8], $\kappa$ is assumed to be $0$, which results in an improper normalizing constant (see Discussion). $\mathbf{A}(\boldsymbol{\eta},\rho,\kappa)$ is the precision matrix of $p(\mathbf{x}|\boldsymbol{\eta},\rho,\kappa)$.
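The structure of the precision matrix in (12) can be sketched directly. The illustrative Python below (the function and argument names are ours) builds $\mathbf{A}(\boldsymbol{\eta},\rho,\kappa)$ for a small image stored row-major; each row of $\mathbf{A}$ sums to $\kappa$, which makes concrete why $\kappa=0$ yields a singular precision matrix and hence an improper prior:

```python
def precision_matrix(eta, rho, kappa, h, w):
    # A(eta, rho, kappa) from (12) for an h-by-w image, pixels indexed row-major.
    # eta maps each unordered adjacent pair (min(i,j), max(i,j)) to a 0/1 value.
    n = h * w
    A = [[0.0] * n for _ in range(n)]

    def neighbors(i):
        r, c = divmod(i, w)
        for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                yield rr * w + cc

    for i in range(n):
        for j in neighbors(i):
            e = eta[(min(i, j), max(i, j))]
            A[i][i] += rho * e          # diagonal: rho * sum of eta over neighbors
            A[i][j] = -rho * e          # off-diagonal: -rho * eta_{i,j}
        A[i][i] += kappa                # contrast term kappa
    return A
```

With all $\eta_{i,j}=1$ and $\kappa>0$, $\mathbf{A}$ is symmetric and strictly diagonally dominant, hence positive definite, so the Gaussian in (11) is proper.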

We have defined the introduced causal Gaussian MRF prior in the joint distribution form of $\mathbf{x}$ and $\boldsymbol{\eta}$, i.e., $p(\mathbf{x},\boldsymbol{\eta})=p(\mathbf{x}|\boldsymbol{\eta})p(\boldsymbol{\eta})$. We call such a model "causal" because $\boldsymbol{\eta}$ seems to cause $\mathbf{x}$. The MRF model is defined as having the property

$$
p(x_i|\mathbf{x}\setminus x_i,\boldsymbol{\eta}) = p(x_i|\mathbf{x}_{\mathcal{L}(i)},\boldsymbol{\eta}_{i,\mathcal{L}(i)}) \tag{13}
$$

in this case; i.e., the conditional distribution of a random variable, $x_i$, given all the other variables, $\mathbf{x}\setminus x_i$ and $\boldsymbol{\eta}$, equals the conditional distribution of the random variable, $x_i$, given its "neighboring" variables, $\mathbf{x}_{\mathcal{L}(i)}$ and $\boldsymbol{\eta}_{i,\mathcal{L}(i)}$. If this conditional distribution is a Gaussian distribution, such an MRF is called a Gaussian MRF.

The “compound” MRF prior is usually defined in the form of the Gibbs distribution [18],

$$
\tilde{p}(\mathbf{x},\boldsymbol{\eta}) \equiv \frac{\exp(-\tilde{H}(\mathbf{x},\boldsymbol{\eta}))}{\sum_{\boldsymbol{\eta}}\int\exp(-\tilde{H}(\mathbf{x},\boldsymbol{\eta}))\,\mathrm{d}\mathbf{x}}, \tag{14}
$$

which is based on some microstate energy function, or equivalently, a Hamiltonian, such as

$$
\tilde{H}(\mathbf{x},\boldsymbol{\eta}) \equiv \lambda\sum_{i\sim j}(1-\eta_{i,j})+\frac{\rho}{2}\sum_{i\sim j}\eta_{i,j}(x_i-x_j)^2+\frac{\kappa}{2}\|\mathbf{x}\|_2^2. \tag{15}
$$

In addition to the property of (13), a compound MRF also has the property of

$$
\tilde{p}(\eta_{i,j}|\mathbf{x},\boldsymbol{\eta}\setminus\eta_{i,j}) = \tilde{p}(\eta_{i,j}|x_i,x_j), \tag{16}
$$

whereas the introduced “causal” Gaussian MRF prior does not. Therefore, we do not call the introduced prior a “compound” MRF prior, even though (8) and (14) have similar forms. Furthermore, the introduced “causal” Gaussian MRF prior is a generative model, whereas the “compound” MRF is not. A generative model has the advantage of reducing the calculation cost (see Discussion).

### II-D Hyperparameter Prior

Generally, prior distributions should be non-informative unless we have explicit reasons because an informative prior leads to heuristics. Actually, we define the prior distributions for the hyperparameters of the HR image prior to be as non-informative as possible:

$$
\begin{aligned}
p(\lambda,\rho,\kappa,\beta) \equiv{}& \mathrm{Gamma}(\lambda;a^{(0)}_{\lambda},b^{(0)}_{\lambda})\,\mathrm{Gamma}(\rho;a^{(0)}_{\rho},b^{(0)}_{\rho})\\
&\times\mathrm{Gamma}(\kappa;a^{(0)}_{\kappa},b^{(0)}_{\kappa})\,\mathrm{Gamma}(\beta;a^{(0)}_{\beta},b^{(0)}_{\beta}), \tag{17}\\
a^{(0)}_{\lambda}\equiv 10^{-2},\ b^{(0)}_{\lambda}\equiv 10^{-2},\quad & a^{(0)}_{\rho}\equiv 10^{-2},\ b^{(0)}_{\rho}\equiv 10^{-2},\\
a^{(0)}_{\kappa}\equiv 10^{-2},\ b^{(0)}_{\kappa}\equiv 10^{-2},\quad & a^{(0)}_{\beta}\equiv 10^{-2},\ b^{(0)}_{\beta}\equiv 10^{-2}. \tag{18}
\end{aligned}
$$

For a gamma distribution, the number of effective prior observations in the Bayesian framework is equal to two times the shape parameter $a$. As shown in the Appendix, the number of observations corresponding to each of the hyperparameters $\lambda$, $\rho$, $\kappa$, and $\beta$ in this SR is far larger than $2a^{(0)}=2\times 10^{-2}$. Therefore, the above settings are considered sufficiently non-informative. The superscript $(0)$ is added because we use these parameters as the initial values of variational Bayes later.

### II-E Registration Prior

For the registration parameters including the blurring parameter, we also define the corresponding prior as

$$
p(\boldsymbol{\Phi}) \equiv \prod_{l=1}^{L}\mathcal{N}(\boldsymbol{\phi}_l;\boldsymbol{\mu}^{(0)}_{\boldsymbol{\phi}_l},\boldsymbol{\Sigma}^{(0)}_{\boldsymbol{\phi}_l}), \tag{19}
$$
$$
\boldsymbol{\mu}^{(0)}_{\boldsymbol{\phi}_l} \equiv [0,0,0,12/\alpha^{2}]^{\top}, \qquad \boldsymbol{\Sigma}^{(0)}_{\boldsymbol{\phi}_l} \equiv \mathrm{diag}[10^{-3},10^{0},10^{0},10^{-3}]. \tag{20}
$$

For the rotational motion parameter $\theta_l$, the prior assumes a standard deviation of $\sqrt{10^{-3}}\approx 0.03$ rad (about $2$ degrees). This assumption is considered suitable for this SR task. Similarly, an assumption of a standard deviation of $1$ pixel for the translational motion parameters $[\vec{o}_l]_h$ and $[\vec{o}_l]_v$ is considered suitable. For the blurring parameter $\gamma_l$, the mean $12/\alpha^{2}$ is taken to be the value equivalent to the anti-aliasing of the scale factor $\alpha$.

## III Objective Function and Estimator

### III-A Peak Signal-to-Noise Ratio (PSNR)

First, we confirm that the joint distribution of all random variables can now be explicitly given as

$$
\begin{aligned}
p(\mathbf{Y},\mathbf{z}) &= p(\mathbf{Y}|\mathbf{x},\beta,\boldsymbol{\Phi})\,p(\mathbf{x},\boldsymbol{\eta}|\lambda,\rho,\kappa)\,p(\lambda,\rho,\kappa,\beta)\,p(\boldsymbol{\Phi}), \tag{21}\\
\mathbf{z} &\equiv [\mathbf{x},\boldsymbol{\eta},[\lambda,\rho,\kappa,\beta],\boldsymbol{\Phi}]. \tag{22}
\end{aligned}
$$

Once the joint distribution is obtained, we can derive all the marginal and conditional distributions; e.g., the posterior distribution $p(\mathbf{z}|\mathbf{Y})$ and the joint distribution of the HR and LR images, $p(\mathbf{x},\mathbf{Y})$.

One of the most commonly used evaluation functions of the inferred image is the L2-norm (mean square error) -based PSNR. It is defined as

$$
\mathrm{PSNR}(\hat{\mathbf{x}};\mathbf{x}) \equiv 10\log_{10}\frac{2^{2}}{\frac{1}{N_{\mathbf{x}}}\|\hat{\mathbf{x}}-\mathbf{x}\|_2^2}, \tag{23}
$$

where $\hat{\mathbf{x}}$ is the estimator of the HR image and $\mathbf{x}$ is the true HR image. Since only the LR images, $\mathbf{Y}$, are available to the estimator, we sometimes explicitly express it in function form, $\hat{\mathbf{x}}(\mathbf{Y})$. Now, our objective function (functional) to be maximized with regard to the estimator is defined as

$$
\big\langle \mathrm{PSNR}(\hat{\mathbf{x}}(\mathbf{Y});\mathbf{x}) \big\rangle_{p(\mathbf{x},\mathbf{Y})}. \tag{24}
$$

This is because we prefer good estimator performance on average over various HR images and the corresponding LR images. Here, we assume that the occurrence rate of HR and LR images exactly coincides with the model we just introduced.

### Iii-B Posterior Mean (PM)

Using the above objective function, we can explicitly derive the best estimator of the HR image as the PM,

$$
\hat{\mathbf{x}}_{\mathrm{PM}}(\mathbf{Y}) \equiv \langle \mathbf{x} \rangle_{p(\mathbf{x}|\mathbf{Y})}. \tag{25}
$$

Here, we used the well-known fact that the PM coincides with the minimum mean square error estimator in the Bayesian framework. Note that $p(\mathbf{x}|\mathbf{Y})$ requires marginalization of all the parameters other than $\mathbf{x}$ over $p(\mathbf{z}|\mathbf{Y})$. If the PM of the line process or other model parameters is necessary, it can also be determined in the same manner.
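The fact used above, that the posterior mean minimizes the expected squared error, can be illustrated with samples. In this sketch, Monte Carlo samples stand in for draws from a posterior (the variable names are ours); the sample mean beats any shifted candidate in average squared error:

```python
import random

random.seed(0)
# samples standing in for draws from a posterior p(x | Y)
samples = [random.gauss(0.2, 0.1) for _ in range(10000)]
posterior_mean = sum(samples) / len(samples)

def avg_sq_err(c):
    # Monte Carlo estimate of <(x - c)^2> over the sampled posterior
    return sum((s - c) ** 2 for s in samples) / len(samples)
```

Shifting the candidate away from the sample mean in either direction strictly increases the average squared error, mirroring the optimality of (25).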

## IV Optimization Method

### IV-A Variational Bayes

Though we could derive the optimal estimator, we cannot obtain analytical solutions of the posterior distribution $p(\mathbf{z}|\mathbf{Y})$ and the marginalized posterior distribution $p(\mathbf{x}|\mathbf{Y})$. Consequently, we have to rely on approximations. Here, we employ variational Bayes.

Variational Bayes [21] provides a trial distribution that approximates the true posterior. We impose a factorization assumption on the trial distribution,

$$
q(\mathbf{z}) \equiv q(\mathbf{x})\,q(\boldsymbol{\eta})\,q(\lambda,\rho,\kappa,\beta)\,q(\boldsymbol{\Phi}). \tag{26}
$$

Note that, at this moment, the distribution family of each factorized distribution is not limited. We identify the optimal trial distribution that minimizes the KL divergence between the trial and the true distributions as the best approximation of the true distribution:

$$
\hat{q}(\mathbf{z}) \equiv \mathop{\mathrm{argmin}}_{q(\mathbf{z})} D_{\mathrm{KL}}(q(\mathbf{z})\,\|\,p(\mathbf{z}|\mathbf{Y})). \tag{27}
$$

Actually, the trial distribution that minimizes the KL divergence in the opposite direction, i.e., not from $q$ to $p$ but from $p$ to $q$, coincides with the product of the exact marginal distributions:

$$
\mathop{\mathrm{argmin}}_{q(\mathbf{z})} D_{\mathrm{KL}}(p(\mathbf{z}|\mathbf{Y})\,\|\,q(\mathbf{z})) = \prod_i p(z_i|\mathbf{Y}), \tag{28}
$$

but this minimization is difficult to calculate.

Under the factorization assumption of the trial distribution and the extremal condition of the KL divergence, each optimal trial distribution should satisfy the self-consistent equations,

$$
\hat{q}(z_i) \propto \exp\big\langle \ln p(\mathbf{z}|\mathbf{Y}) \big\rangle_{\prod_{j\neq i}\hat{q}(z_j)}. \tag{29}
$$

In the common style of variational Bayes [23, 10], this equation is solved by making repetitive updates,

$$
\begin{aligned}
q^{(0)}(z_i) &\equiv p(z_i), \tag{30}\\
q^{(t+1)}(z_i) &\propto \exp\big\langle \ln p(\mathbf{z}|\mathbf{Y}) \big\rangle_{\prod_{j\neq i}q^{(t)}(z_j)}. \tag{31}
\end{aligned}
$$

Each factorized trial distribution is supposed to converge to the optimal distribution. Sometimes, some $q^{(t+1)}$s are used instead of $q^{(t)}$s for the distributions on the right-hand side of (31); this depends on the hierarchical structure of the model. Similarly, some of the initializations in (30) may not be necessary.
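As a concrete instance of the updates (30) and (31), the following sketch runs variational Bayes on a deliberately simple conjugate toy model (Gaussian observations with unknown mean and precision, factorized as q(mu) q(tau)), not on the SR model itself; all variable names are ours:

```python
import random

random.seed(1)
xs = [random.gauss(1.0, 0.5) for _ in range(200)]  # observed data
n, sx = len(xs), sum(xs)

# priors: mu ~ N(m0, s0^2), tau ~ Gamma(a0, b0)
m0, s02 = 0.0, 100.0
a0, b0 = 1e-2, 1e-2

# initialize the trial distributions from the priors, as in (30)
m, s2 = m0, s02
a, b = a0, b0

for _ in range(100):
    e_tau = a / b                                   # <tau> under q(tau)
    # update q(mu) = N(m, s2) given <tau>, as in (31)
    s2 = 1.0 / (1.0 / s02 + n * e_tau)
    m = s2 * (m0 / s02 + e_tau * sx)
    # update q(tau) = Gamma(a, b) given the first two moments of q(mu)
    a = a0 + 0.5 * n
    b = b0 + 0.5 * sum((x - m) ** 2 for x in xs) + 0.5 * n * s2
```

After convergence, `m` is close to the sample mean and `a / b` approximates the inverse of the sample variance; each update conditions on the current moments of the other factor, which is exactly the structure of (31).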

### Iv-B Taylor Approximations

Although variational Bayes is a widely used general framework, its application is difficult in practice because it requires a conjugate prior. The prior distributions we have introduced are not conjugate priors. However, we have found that simple Taylor approximations make them conjugate and enable exact analytical evaluation of the expectations in (31).

Here, to simplify the notation, we write the mean values of the latent variables $[\mathbf{x},\boldsymbol{\eta}]$, the hyperparameters $[\lambda,\rho,\kappa,\beta]$, and the registration parameters $\boldsymbol{\Phi}$ over the trial distributions at step $t$ of the variational Bayes updates as $\boldsymbol{\mu}^{(t)}_{\mathbf{x}}$, $\boldsymbol{\mu}^{(t)}_{\boldsymbol{\eta}}$, $\mu^{(t)}_{\lambda}$, $\mu^{(t)}_{\rho}$, $\mu^{(t)}_{\kappa}$, $\mu^{(t)}_{\beta}$, and $\boldsymbol{\mu}^{(t)}_{\boldsymbol{\phi}_l}$.

Specifically, we use first-order Taylor approximations for three non-linear terms. $\mathbf{W}(\boldsymbol{\phi}_l)$ is approximated around $\boldsymbol{\phi}_l=\boldsymbol{\mu}^{(t)}_{\boldsymbol{\phi}_l}$,

$$
\mathbf{W}(\boldsymbol{\phi}_l) \simeq \mathbf{W}^{(t)}_{l} + \sum_{k=1}^{4}\big[\boldsymbol{\phi}_l-\boldsymbol{\mu}^{(t)}_{\boldsymbol{\phi}_l}\big]_k \mathbf{W}'^{(t)}_{l,k}, \tag{32}
$$

where

$$
\begin{aligned}
\mathbf{W}^{(t)}_{l} &\equiv \mathbf{W}(\boldsymbol{\mu}^{(t)}_{\boldsymbol{\phi}_l}), \tag{33}\\
\mathbf{W}'^{(t)}_{l,k} &\equiv \left.\frac{\partial \mathbf{W}(\boldsymbol{\phi}_l)}{\partial \phi_{l,k}}\right|_{\boldsymbol{\phi}_l=\boldsymbol{\mu}^{(t)}_{\boldsymbol{\phi}_l}}. \tag{34}
\end{aligned}
$$

Similarly, $\ln|\mathbf{A}(\boldsymbol{\eta},\rho,\kappa)|$ is approximated around $[\boldsymbol{\eta},\ln\rho,\ln\kappa]=[\boldsymbol{\mu}^{(t)}_{\boldsymbol{\eta}},\ln\mu^{(t)}_{\rho},\ln\mu^{(t)}_{\kappa}]$,

$$
\begin{aligned}
\ln\big|\mathbf{A}(\boldsymbol{\eta},\rho,\kappa)\big| \simeq{}& \ln\big|\mathbf{A}(\boldsymbol{\mu}^{(t)}_{\boldsymbol{\eta}},\mu^{(t)}_{\rho},\mu^{(t)}_{\kappa})\big|\\
&+\mathrm{tr}\,\mathbf{A}(\boldsymbol{\mu}^{(t)}_{\boldsymbol{\eta}},\mu^{(t)}_{\rho},\mu^{(t)}_{\kappa})^{-1}\Big[\mu^{(t)}_{\rho}\mathbf{A}(\boldsymbol{\eta}-\boldsymbol{\mu}^{(t)}_{\boldsymbol{\eta}},1,0)\\
&\quad+(\ln\rho-\ln\mu^{(t)}_{\rho})\,\mu^{(t)}_{\rho}\mathbf{A}(\boldsymbol{\mu}^{(t)}_{\boldsymbol{\eta}},1,0)+(\ln\kappa-\ln\mu^{(t)}_{\kappa})\,\mu^{(t)}_{\kappa}\mathbf{I}\Big]. \tag{35}
\end{aligned}
$$

We also use a similar approximation, around the corresponding current means, where the same log-determinant term reappears in the update equations. In addition, $\ln\mathrm{logistic}(\lambda)$ is approximated around $\ln\lambda=\ln\mu^{(t)}_{\lambda}$,

$$
\begin{aligned}
\ln\mathrm{logistic}(\lambda) \simeq{}& \ln\mathrm{logistic}(\mu^{(t)}_{\lambda})\\
&+(\ln\lambda-\ln\mu^{(t)}_{\lambda})\,\mu^{(t)}_{\lambda}\,\mathrm{logistic}(-\mu^{(t)}_{\lambda}). \tag{36}
\end{aligned}
$$
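The three linearizations above share one pattern: replace a non-linear term by its value and first derivative at the current mean. The following sketch checks this pattern on the scalar term $\ln\mathrm{logistic}(\lambda)$ expanded in $\ln\lambda$ as in (36); it is illustrative only and uses our own function names:

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def log_logistic_taylor(lam, mu):
    # first-order expansion of ln logistic(lambda) in ln(lambda) around mu, cf. (36);
    # the derivative d/d(ln lambda) ln logistic(lambda) equals lambda * logistic(-lambda)
    return math.log(logistic(mu)) + (math.log(lam) - math.log(mu)) * mu * logistic(-mu)
```

The expansion is exact at the expansion point and its error shrinks as $\lambda$ approaches $\mu^{(t)}_{\lambda}$, which is what makes the approximation tighten as the variational Bayes updates converge.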

### Iv-C Update Equations

The trial distributions are obtained from (30) and (31), together with the Taylor approximations (32), (35), and (36), as follows:

$$
\begin{aligned}
q^{(t)}(\boldsymbol{\eta}) &= \prod_{i\sim j}\mathrm{Bernoulli}(\eta_{i,j};\mu^{(t)}_{\eta_{i,j}}), \tag{37}\\
q^{(t)}(\mathbf{x}) &= \mathcal{N}(\mathbf{x};\boldsymbol{\mu}^{(t)}_{\mathbf{x}},\boldsymbol{\Sigma}^{(t)}_{\mathbf{x}}), \tag{38}\\
q^{(t)}(\lambda,\rho,\kappa,\beta) &= \mathrm{Gamma}(\lambda;a^{(t)}_{\lambda},b^{(t)}_{\lambda})\,\mathrm{Gamma}(\rho;a^{(t)}_{\rho},b^{(t)}_{\rho})\\
&\quad\times\mathrm{Gamma}(\kappa;a^{(t)}_{\kappa},b^{(t)}_{\kappa})\,\mathrm{Gamma}(\beta;a^{(t)}_{\beta},b^{(t)}_{\beta}), \tag{39}\\
q^{(t)}(\boldsymbol{\Phi}) &= \prod_{l=1}^{L}\mathcal{N}(\boldsymbol{\phi}_l;\boldsymbol{\mu}^{(t)}_{\boldsymbol{\phi}_l},\boldsymbol{\Sigma}^{(t)}_{\boldsymbol{\phi}_l}). \tag{40}
\end{aligned}
$$

For (30) and (31), we update these distributions in turn, in a fixed order within each step. We simply compute only the parameters of those distributions because we can compute the expectations in (31) analytically by using the Taylor approximations in (32), (35), and (36). Specific update equations are described in the Appendix.

For the initial parameters of the trial distributions of $\boldsymbol{\eta}$ and $\mathbf{x}$, we use non-informative values,

$$
\boldsymbol{\mu}^{(0)}_{\boldsymbol{\eta}} \equiv \mathbf{0}, \qquad \boldsymbol{\mu}^{(0)}_{\mathbf{x}} \equiv \mathbf{0}, \qquad \boldsymbol{\Sigma}^{(0)}_{\mathbf{x}} \equiv \mathbf{0}. \tag{41}
$$

For the initial parameters of the trial distributions of the hyperparameters $\lambda$, $\rho$, $\kappa$, $\beta$ and the registration parameters $\boldsymbol{\Phi}$, we use the same values as their priors' parameters.

We obtain the well-approximated PM of $\mathbf{x}$ as $\lim_{t\to\infty}\boldsymbol{\mu}^{(t)}_{\mathbf{x}}$. Realistically, instead of taking $t\to\infty$, we use $\boldsymbol{\mu}^{(t+1)}_{\mathbf{x}}$ when the following convergence conditions hold for $\mathbf{x}$ and each $\phi_{l,k}$,

$$
\begin{aligned}
\frac{1}{N_{\mathbf{x}}}\big\|\boldsymbol{\mu}^{(t+1)}_{\mathbf{x}}-\boldsymbol{\mu}^{(t)}_{\mathbf{x}}\big\|_2^2 &< 10^{-4},\\
\frac{1}{L}\sum_{l=1}^{L}\frac{\big(\mu^{(t+1)}_{\phi_{l,k}}-\mu^{(t)}_{\phi_{l,k}}\big)^2}{[\boldsymbol{\sigma}^{2}_{\boldsymbol{\phi}}]_k} &< 10^{-4} \quad (k=1,2,3,4), \tag{42}
\end{aligned}
$$

where we defined $\boldsymbol{\sigma}^{2}_{\boldsymbol{\phi}}$ as the scaling constant.

## V Experimental Results

The proposed method was evaluated using five gray-scale images, as shown in Fig. 2. From each image, $L$ LR images were created by using (1) and (2) with the following parameter settings. The resolution enhancement factor $\alpha$ was 4. The transformation parameter $\boldsymbol{\Phi}$ was randomly created according to the prior distribution in (19). The noise level parameter $\beta$ was set to realize three signal-to-noise ratio (SNR) levels for each image. Samples of the created images are shown in Fig. 3.

Figure 4 shows the images estimated under one of the SNR conditions. The resolution of each image appeared to be better than that of the corresponding observed image in Fig. 3.

Table I lists the quantitative results compared to those from the methods of bilinear interpolation, Kanemura et al. [6], and Babacan et al. [10]. Note that we added a slight modification to these methods because they employ slightly different models. For example, the original method [10] assumes the blurring parameter is known, so we set it as the mean value of the true distribution for this method. Also, we introduced a strong prior for the edge penalty parameter in the Kanemura method [6], in contrast to the original method, because this parameter sometimes becomes negative. We evaluated the results with regard to the expectation and the standard deviation of the improvement in signal-to-noise ratio (ISNR) over repeated experiments on each image and for each SNR. ISNR is the relative PSNR defined as

$$
\mathrm{ISNR} \equiv \mathrm{PSNR}(\hat{\mathbf{x}};\mathbf{x}) - \mathrm{PSNR}(\tilde{\mathbf{x}};\mathbf{x}), \tag{43}
$$

where $\mathbf{x}$ is the true HR image, $\hat{\mathbf{x}}$ is the image estimated by the proposed method, and $\tilde{\mathbf{x}}$ is the image estimated by the compared method. A higher ISNR value means better improvement of the estimate over the estimate of the compared method. We see that the ISNRs of the proposed method were mostly higher than those of the other methods, except for the comparison with Babacan's method on the Pepper image.
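The evaluation metrics (23) and (43) amount to a few lines of code. A minimal sketch, with our own function names and images represented as flat lists of luminances in $[-1, 1]$ (so the peak-to-peak range is 2):

```python
import math

def psnr(x_hat, x, peak=2.0):
    # PSNR (23): 10 log10( peak^2 / mean squared error )
    mse = sum((a - b) ** 2 for a, b in zip(x_hat, x)) / len(x)
    return 10.0 * math.log10(peak ** 2 / mse)

def isnr(x_hat, x_tilde, x):
    # ISNR (43): PSNR of the proposed estimate minus PSNR of the compared estimate
    return psnr(x_hat, x) - psnr(x_tilde, x)
```

A positive ISNR means the proposed estimate is closer to the truth, in the L2 sense, than the compared one.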

Table II lists the root mean square errors (RMSE) of the registration parameters estimated by our method and the other methods. To evaluate the estimated registration parameters, we took the RMSEs over all experiments (the repeated experiments times 5 images) for each noise level. Of course, a lower RMSE value means a better estimate. We see that the RMSEs of the proposed method were mostly lower than those of the other methods.

The calculation time of the proposed method was on the order of minutes on an Intel Core i7 2600 processor. The proposed method was a little slower than the method of Babacan et al. [10] and a little faster than the method of Kanemura et al. [6].

## VI Discussion

With regard to the observation model, we used a linear transformation and AWGN. The use of the linear transformation model is advantageous since an arbitrary transformation matrix can be employed because of the Taylor approximation. The transformation matrix can be constructed by multiplying three matrices: the warping, blurring, and downsampling matrices [10]. A disadvantage of this is that sub-pixel errors might accumulate. We prefer matrix construction via a continuous function [3]. We improved the construction by introducing an elliptic theta function for the normalizing constant in (6). This normalizing constant provides fair pixel weights for both marginal and central areas of the HR image and faithfully represents the Gaussian PSF.

With regard to the HR image prior, we used a causal type of prior, which was first introduced by Kanemura et al. [6, 8]. The compound MRF prior of (14), based on a microstate energy function, or equivalently, a Hamiltonian, offers the advantage of easy construction, but it usually has an exponential calculation cost, $O(2^{N_{\boldsymbol{\eta}}})$, for the normalizing constant or, equivalently, the partition function, and this is an obstacle to direct calculation of the PM solution. The MAP solution has been used in work elsewhere because it is not affected by the normalizing constant. In contrast, the introduced causal type of prior of (8) has only a polynomial calculation cost, which enables us to successfully apply the variational Bayes method to this problem.

With regard to the hyperparameter priors, we also improved the existing method. As the edge penalty parameter $\lambda$, Kanemura et al. [6] implicitly allowed negative values, which can lead to a negative $\lambda$ and consequently results in an edge-strewn image. We assumed $\lambda>0$ by setting its prior according to a gamma distribution, resulting in an appropriate inference. As the smoothness parameter $\rho$, they practically fixed its value with a strongly informative prior. We chose a non-informative prior for $\rho$. We show the box and whisker plot of the PM of each hyperparameter over the repeated experiments on each image at a fixed noise level in Fig. 5. As can be seen, the inferred value of the PM of $\rho$ showed wide variation, with an approximately 10-fold maximum-to-minimum ratio, depending on the original image. This result can be interpreted as meaning it is worth inferring $\rho$ for each HR image. Furthermore, two of the other hyperparameters respectively showed approximately 2-fold and 4-fold ranges of variation. Regarding the contrast parameter $\kappa$, they assumed $\kappa=0$, which leads to a singular precision matrix, and this results in an improper normalizing constant. Since we assume $\kappa>0$, which leads to a proper normalizing constant, we can consequently take the corresponding log-determinant term into account in the update equations of the variational Bayes.

With regard to the prior distribution for the blurring parameter, we used a Gaussian distribution even though the parameter is a positive real number, because this yields a simpler expression. We also tried a gamma-distribution prior, but the improvement was small. One disadvantage of the Gaussian model is that a non-informative setting may lead to a nonsensical result in which the inferred blurring parameter is negative. Moreover, we employed a somewhat informative prior for it, because the blurring parameter and the smoothness hyperparameter are somewhat complementary, which makes their simultaneous estimation difficult. Indeed, Tipping et al. [3], Kanemura et al. [6], and Babacan et al. [9] each fixed one of these parameters.

With regard to the estimator, we logically derived the optimal estimator, the PM, from the objective function of the L2-norm-based PSNR. The widely used joint MAP estimator can be considered the optimal estimator for the all-or-none type of objective function,

\[
\operatorname*{argmax}_{\hat{\mathbf{z}}} \bigl\langle \delta(\hat{\mathbf{z}} - \mathbf{z}) \bigr\rangle_{p(\mathbf{z} \mid \mathbf{Y})} = \operatorname*{argmax}_{\mathbf{z}} p(\mathbf{z} \mid \mathbf{Y}), \tag{44}
\]

where \(\delta\) denotes the Dirac delta function for continuous variables or the Kronecker delta for discrete ones. Generally, this type of objective function is nonsensical for continuous variables because the event of an exact match has measure zero. If all the random variables in the posterior distribution are discrete, or if we can assume some smoothness of the posterior distribution, the joint MAP solution becomes meaningful. Instead of the L2-norm-based objective function of the PSNR, the L1-norm (mean absolute error)-based PSNR is sometimes employed; in such cases, the median of the posterior distribution is generally the optimal estimator. In the case of marginalized ML, or equivalently, type-II ML or empirical Bayes, the registration parameters and other hyperparameters are first inferred as

\[
[\hat{\lambda}, \hat{\rho}, \hat{\kappa}, \hat{\beta}, \hat{\mathbf{\Phi}}] \equiv \operatorname*{argmax}_{\lambda, \rho, \kappa, \beta, \mathbf{\Phi}} p(\mathbf{Y} \mid \lambda, \rho, \kappa, \beta, \mathbf{\Phi}). \tag{45}
\]

If these parameters have priors, such a method is called marginalized MAP. The HR image, and sometimes the edge information, are then inferred via MAP,

\[
\hat{\mathbf{x}} \equiv \operatorname*{argmax}_{\mathbf{x}} \max_{\boldsymbol{\eta}} p(\mathbf{x}, \boldsymbol{\eta} \mid \mathbf{Y}, \hat{\lambda}, \hat{\rho}, \hat{\kappa}, \hat{\beta}, \hat{\mathbf{\Phi}}), \tag{46}
\]

or via PM. For such a two-step inference, it is difficult to trace back a single objective function that the overall procedure optimizes.
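The loss-estimator correspondence discussed above (L2 loss → posterior mean, L1 loss → posterior median) can be checked numerically. The following sketch is a generic illustration on a skewed sample standing in for a posterior, not the paper's model: it scans a grid of candidate point estimates and shows which one minimizes each loss.

```python
import numpy as np

rng = np.random.default_rng(0)
# A skewed stand-in "posterior": mean and median differ noticeably.
samples = rng.gamma(shape=2.0, scale=1.0, size=200_000)

# Evaluate the expected L2 and L1 losses on a grid of point estimates.
grid = np.linspace(0.01, 10.0, 1000)
l2 = np.array([np.mean((samples - g) ** 2) for g in grid])
l1 = np.array([np.mean(np.abs(samples - g)) for g in grid])

best_l2 = grid[int(np.argmin(l2))]  # minimizer of mean square error
best_l1 = grid[int(np.argmin(l1))]  # minimizer of mean absolute error

print(best_l2, samples.mean())      # L2-optimal point ~ sample mean
print(best_l1, np.median(samples))  # L1-optimal point ~ sample median
```

For a symmetric unimodal posterior the mean, median, and mode coincide, which is why the distinction among PM, median, and MAP only matters for skewed or multimodal posteriors such as those arising in ill-posed SR problems.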

With regard to the Taylor approximation for the transformation matrix, we used the first-order approximation in (32) because it is more stable than the second-order one. This first-order approximation was proposed by Villena et al. [9]. The second-order approximation was proposed by Pickup et al. [5], who obtained good results with it. We also tried the second-order approximation, but it sometimes made the algorithm unstable because it occasionally failed to produce a positive definite covariance matrix.
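A first-order expansion of a parameterized transformation matrix can be sketched as follows. This is a generic illustration with a hypothetical 2x2 rotation standing in for the warping part of the transformation and a numerically differentiated Jacobian; it is not the paper's (32).

```python
import numpy as np

def warp_matrix(theta):
    """A hypothetical 2x2 rotation standing in for W(Phi)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

theta0, dtheta = 0.3, 1e-3

# First-order Taylor expansion around theta0, with the derivative
# taken by central finite differences.
eps = 1e-6
dW = (warp_matrix(theta0 + eps) - warp_matrix(theta0 - eps)) / (2 * eps)
W_lin = warp_matrix(theta0) + dW * dtheta

# The linearization error is O(dtheta^2), i.e. very small here.
err = np.abs(W_lin - warp_matrix(theta0 + dtheta)).max()
print(err)
```

The appeal of the first-order form is exactly what the text notes: the linearized matrix enters the Gaussian updates linearly, so the resulting covariance stays positive definite, whereas quadratic correction terms can push it outside the positive definite cone.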

With regard to the remaining Taylor approximations, we introduced first-order approximations around the current posterior means in (IV-B). Note that our key idea for solving the conjugate prior problem lies in the choice of the expansion variable. Indeed, we could successfully derive the terms originating from the normalizing constant in the update equations ((52), (60), and (62) in the Appendix). Kanemura et al. [6, 8] ignored this term because of its high calculation cost, which would result in less accurate inference. As for the edge variables, we implicitly treated the edge vector as continuous rather than binary when differentiating; this assumption is based on (12). Under an alternative but equivalent rewriting of (12), the meaning is unchanged, but the result of the Taylor approximation would differ from the current form.

With regard to the experimental results, the proposed method outperformed the other methods in terms of the ISNR for most images and noise levels. Moreover, its estimation of the registration parameters was more accurate than that of the other methods under most conditions. We therefore conclude that the proposed method is, on the whole, superior to the other methods. Compared with bilinear interpolation and Kanemura's method, the superiority of the proposed method was clear. Compared with Babacan's method, the superiority was rather slight; in particular, for the Pepper image at one noise level, the proposed method was worse than Babacan's method. This inferiority is considered to be caused by unstable estimation of the complementary blurring and smoothness parameters, whereas Babacan's method fixed one of these parameters to its true expected value in our implementation. Intuitively, the Pepper image is smoother than the other images and has fewer edges, and this feature is considered to be less favorable for those complementary parameters.

With regard to the calculation cost, the cost of the proposed algorithm is dominated by two matrix inversions: the one in (55) and the one in (52) and (62) (see Appendix), each cubic in the corresponding matrix dimension. We found that a simple approximation, such as setting all the off-diagonal elements to zero, reduces the calculation time but clearly degrades accuracy. We hope to solve this problem in future work.
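The accuracy penalty of such a diagonal approximation is easy to demonstrate on a toy problem. The sketch below solves a strongly coupled symmetric positive definite system exactly (the cubic-cost step) and compares it with the cheap approximation that ignores all off-diagonal elements; the tridiagonal matrix is a made-up stand-in for the paper's precision matrices.

```python
import numpy as np

n = 100
# A made-up, strongly coupled SPD precision matrix (tridiagonal),
# standing in for the matrices inverted in the update equations.
precision = (np.diag(np.full(n, 2.0))
             + np.diag(np.full(n - 1, -0.9), 1)
             + np.diag(np.full(n - 1, -0.9), -1))
b = np.ones(n)

mean_exact = np.linalg.solve(precision, b)  # the O(n^3)-type step
mean_diag = b / np.diag(precision)          # O(n) diagonal shortcut

rel_err = (np.linalg.norm(mean_exact - mean_diag)
           / np.linalg.norm(mean_exact))
print(rel_err)  # large: off-diagonal coupling dominates the solution
```

When the off-diagonal coupling is strong, as between neighboring pixels in an MRF, the diagonal shortcut misses most of the solution, which matches the degradation reported above.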

## VII Conclusion

In this paper, we proposed a Bayesian image super-resolution (SR) method with a causal Gaussian Markov random field (MRF) prior. We improved existing models with respect to three points: 1) the combined transformation model, through a preferable normalization term using the elliptic theta function; 2) the causal Gaussian MRF model, through the introduction of a contrast parameter, which provides an effective normalizing constant; and 3) the hyperparameter prior model, through application of a gamma distribution to the edge penalty parameter, which prevents an unfavorable edge-strewn image. We then logically derived the optimal estimator, namely, not the joint maximum a posteriori (MAP) or marginalized maximum likelihood (ML) estimator but the posterior mean (PM), from the objective function of the L2-norm (mean square error)-based peak signal-to-noise ratio (PSNR). The estimator is numerically determined by variational Bayes. We solved the conjugate prior problem in variational Bayes by introducing three Taylor approximations. Other than these, we did not use any further approximations such as ignoring terms of the normalizing constant. Experimental results showed that the proposed method is mostly superior in accuracy to existing methods.

## Appendix

Here, we show the details of the variational Bayes update equations of Section IV-C.

The mean values of the hyperparameters over the trial distributions are given by

\[
\mu_\lambda^{(t)} = \frac{a_\lambda^{(t)}}{b_\lambda^{(t)}}, \quad
\mu_\rho^{(t)} = \frac{a_\rho^{(t)}}{b_\rho^{(t)}}, \quad
\mu_\kappa^{(t)} = \frac{a_\kappa^{(t)}}{b_\kappa^{(t)}}, \quad
\mu_\beta^{(t)} = \frac{a_\beta^{(t)}}{b_\beta^{(t)}}. \tag{47}
\]
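These are simply the means of gamma trial distributions with shape \(a\) and rate \(b\), which can be verified by Monte Carlo; the particular values of `a` and `b` below are arbitrary.

```python
import numpy as np

# Mean of a Gamma(shape=a, rate=b) distribution is a/b; the update (47)
# reads off this moment for each hyperparameter's trial distribution.
a, b = 3.5, 2.0  # arbitrary illustrative values
rng = np.random.default_rng(2)
# NumPy parameterizes the gamma by shape and *scale* = 1/rate.
draws = rng.gamma(shape=a, scale=1.0 / b, size=1_000_000)
print(draws.mean(), a / b)  # both ~1.75
```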

The update equation for \(\boldsymbol{\eta}\) is given as

 q(t+1)(\bmeta) ∝exp⟨lnp(\bz|\bY)⟩q(t)(\bx,λ,ρ,κ,β,\bPhi) ∝exp(∑i∼j⎧⎨⎩c(t