# Bayesian Image Restoration for Poisson Corrupted Image using a Latent Variational Method with Gaussian MRF

We treat an image restoration problem with a Poisson noise channel using a Bayesian framework. Poisson randomness often appears when observing low-contrast objects in the field of imaging, and such noisy observations are hard to handle in theoretical analysis. In our formulation, we interpret the observation through the Poisson noise channel as a likelihood, and bound it with a Gaussian function using a latent variable method. We then introduce a Gaussian Markov random field (GMRF) as the prior for the Bayesian approach, and derive the posterior as a Gaussian distribution. The latent parameters in the likelihood and the hyperparameter in the GMRF prior can be treated as hidden parameters, so we propose an algorithm to infer them in the expectation-maximization (EM) framework using loopy belief propagation (LBP). We confirm the ability of our algorithm in computer simulations, and compare it with the results of other image restoration frameworks.


## 1 Introduction

Noise reduction, called image restoration in the field of digital image processing, is an important pre-processing technique. In order to reduce noise, we should focus on several clues about image properties. Classical methods, such as the Gaussian and median filters, focus on the similarity of neighboring pixels. In recent decades, many image restoration procedures have also been proposed. Introducing image-block similarity instead of pixel-pair similarity, Dabov et al. proposed a block matching method called BM3D that reduces noise while degrading edge-like features less[1].

From the theoretical viewpoint of statistical inference, these clues can be considered as knowledge about natural images, which is called a prior. Thus, it is natural to introduce Bayesian inference into image restoration. In the framework of Bayesian image restoration, additive white Gaussian noise (AWGN) has mainly been discussed as the image corrupting process [2][3][4][5][6], since the analytical solution can be derived explicitly for AWGN. However, in the real world, the noise corruption process often cannot be described by such a Gaussian observation. For example, we can treat observations of low-contrast objects, such as night vision, as Poisson-noised observations, since the arrival of photons can be expressed as a rare event. Poisson-noised observations also appear in some kinds of medical imaging like positron emission tomography (PET).

Poisson image restoration methods have also been proposed in recent decades [7][8][9][10][11]. Figueiredo & Bioucas-Dias designed an objective function as the likelihood function with several penalty terms, and optimized it with the alternating direction method of multipliers[9]. Ono & Yamada proposed optimizing a similar objective function by use of the hybrid steepest descent method[11]. Other methods also designed similar objective functions for their applications.

These objective functions are defined by the likelihood function of the Poisson observation with some penalty term. In the Bayesian manner, regarding such a penalty term as a prior, we can consider the penalized method as a maximum a posteriori (MAP) method. The MAP method is an effective strategy for image restoration; however, the balance of strength between the prior and the likelihood is hard to determine in its framework. On the contrary, Bayesian inference can determine the strength of the penalty term naturally through hyperparameter inference[2][4][5][6].

In this study, we treat a Poisson corrupted image restoration problem, and solve it in the manner of the Bayesian approach. The Bayesian approach requires both a likelihood and a prior probability. We introduce the observation process as the likelihood, and introduce a Gaussian Markov random field (GMRF) as the prior, after the fashion of several previous works [2][4].

Assuming Poisson-corrupted observation makes it difficult to derive the posterior probability in analytic form, since a Poisson variable takes discrete and non-negative values. Thus, we introduce a latent variational approximation into the inference derivation [12][13][14][15][16][17][18]. In this study, we transform the Poisson corruption process into the corresponding Bernoulli process, and introduce local latent variables to approximate the observation process as a Gaussian function for the likelihood in the Bayesian approach[15]. Once we evaluate the observation likelihood as a Gaussian function, we can derive the posterior probability easily[17][18]. In this formulation, we introduce several latent parameters to describe the observation. In order to infer them, we introduce an expectation-maximization (EM) algorithm [19][20], which requires iterative inference. Our previous work shows the preliminary results[17][18] of this paper. In this paper, we refine the formulation of the Bayesian inference of [17], and evaluate the accelerated results of [18] using several images, comparing with other methods.

In the following, we first formulate the Bayesian image restoration framework in Section 2. After that, we confirm the abilities of our approach with computer simulations in Section 3. At last, we conclude and summarize our approach in Section 4.

The source code for this paper is available from the following site: https://github.com/shouno/PoissonNoiseReduction

## 2 Bayesian Formulation

Our method is based on the Bayesian approach, so we explain both the image observation process and the prior probability in the following. Before the formulation, we define several notations. We consider a 2-dimensional image, and denote its total number of pixels as $M$.

### 2.1 Image Observation process

A digital image is usually defined by a 2-dimensional pixel array. We assume the observation of each pixel is independent, so we consider a single-pixel observation first. We consider that each pixel has a Poisson parameter $\lambda_i$, where $i$ means the position of the pixel. Denoting the observed pixel value as $z_i$, which means the number of photons at pixel position $i$, we regard the observation process as the following Poisson distribution:

$$p(z_i \mid \lambda_i) = \frac{\lambda_i^{z_i}}{z_i!}\exp(-\lambda_i). \tag{1}$$

Considering the Poisson process, Watanabe et al. treated the corruption process as a Bernoulli process, which counts the number of on-off events in proper time bins[15]. Thus, we can translate eq.(1) into the binomial distribution form:

$$p(z_i \mid \rho_i) = \binom{K}{z_i}\,\rho_i^{z_i}\,(1-\rho_i)^{K-z_i}, \tag{2}$$

where $\rho_i = \lambda_i / K$, and $K$ means the upper limit of the counting. In this formulation, we can confirm that eq.(2) converges to the Poisson distribution eq.(1) under the condition $K \to \infty$.
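As a quick numerical sanity check of this convergence (the helper names below are ours, for illustration only), the binomial form eq.(2) with $\rho_i = \lambda_i/K$ approaches the Poisson pmf eq.(1) as $K$ grows:

```python
import math

def poisson_pmf(z, lam):
    # Eq. (1): probability of photon count z under intensity lam
    return lam ** z / math.factorial(z) * math.exp(-lam)

def binomial_pmf(z, K, rho):
    # Eq. (2): K time bins with per-bin success probability rho
    return math.comb(K, z) * rho ** z * (1.0 - rho) ** (K - z)

lam = 3.0
for K in (10, 100, 10000):
    rho = lam / K                       # rho_i = lambda_i / K
    print(K, abs(binomial_pmf(2, K, rho) - poisson_pmf(2, lam)))
```

The printed gap shrinks as $K$ increases, which is the $K \to \infty$ limit used above.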

The parameter $\rho_i$ in eq.(2) is a non-negative parameter, which is hard to treat directly, so we introduce the logit transform of the parameter $\rho_i$, that is:

$$x_i = \frac{1}{2}\ln\frac{\rho_i}{1-\rho_i}, \tag{3}$$

and obtain the conditional probability of $z_i$ under the condition $x_i$ as

$$p(z_i \mid x_i) = \binom{K}{z_i}\exp\bigl((2z_i - K)x_i - K\ln 2\cosh x_i\bigr). \tag{4}$$

Hence, the image corruption process can be interpreted as observing $z_i$ under the condition of $x_i$.
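The algebra behind eq.(4) can be checked numerically: substituting $\rho_i = e^{x_i}/(e^{x_i}+e^{-x_i})$ into eq.(2) gives exactly the logit form. A minimal sketch (function names are ours):

```python
import math

def binom_form(z, K, rho):
    # Eq. (2), binomial parameterization
    return math.comb(K, z) * rho**z * (1.0 - rho)**(K - z)

def logit_form(z, K, x):
    # Eq. (4), after the transform x = (1/2) ln(rho / (1 - rho))
    return math.comb(K, z) * math.exp((2*z - K)*x - K*math.log(2.0*math.cosh(x)))

rho = 0.3
x = 0.5 * math.log(rho / (1.0 - rho))   # Eq. (3)
print(binom_form(4, 20, rho), logit_form(4, 20, x))  # analytically identical
```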

#### 2.1.1 Evaluation with Local latent variables

The term “$\ln 2\cosh x_i$” in eq.(4) is hard to treat analytically. Thus, in this study, we introduce a latent variable evaluation [12][15]. Palmer et al. proposed evaluating the lower bound of super-Gaussians with a multiplied form of a Gaussian distribution and a concave parameter function[12]; that is, any super-Gaussian, which is denoted by $p(u) = \exp(-g(u^2))$ where $g$ is a concave function, can be described as

$$p(u) = \exp(-g(u^2)) \tag{5}$$
$$p(u) = \sup_{\xi>0}\,\varphi(\xi)\,\mathcal{N}(u \mid 0, \xi^{-1}), \tag{6}$$
$$\varphi(\xi) = \sqrt{2\pi/\xi}\,\exp\bigl(g^*(\xi/2)\bigr). \tag{7}$$

The function pair $g$ and $g^*$ has a convex conjugate relationship, which is derived from Legendre's transform:

$$g(u) = \inf_{\xi>0}\bigl[\xi u - g^*(\xi)\bigr], \tag{8}$$
$$g^*(\xi) = \inf_{u>0}\bigl[\xi u - g(u)\bigr]. \tag{9}$$

Eq.(6) consists of the Gaussian part for $u$ and the non-Gaussian part described by the function $\varphi(\xi)$, where $\xi$ is a latent parameter. Introducing the latent parameter form, we obtain the upper bound:

$$\ln 2\cosh x_i \le \frac{\tanh\xi_i}{2\xi_i}\,(x_i^2 - \xi_i^2) + \ln 2\cosh\xi_i, \tag{10}$$

where $\xi_i$ is the latent parameter for the $i$-th pixel. Thus, introducing the latent variational method into eq.(4), we obtain the lower bound of the likelihood function:

$$p(z_i \mid x_i) \ge \binom{K}{z_i}\exp\left((2z_i - K)x_i - K\left(\frac{\tanh\xi_i}{2\xi_i}(x_i^2 - \xi_i^2) + \ln 2\cosh\xi_i\right)\right) = p_{\xi_i}(z_i \mid x_i). \tag{11}$$
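The bound in eq.(10) follows from the concavity of $\ln 2\cosh\sqrt{u}$ in $u = x^2$, and it is tight at $x = \xi$. A small numerical verification (illustrative code, not from the paper):

```python
import math

def rhs(x, xi):
    # Right-hand side of Eq. (10)
    return math.tanh(xi) / (2.0 * xi) * (x*x - xi*xi) + math.log(2.0 * math.cosh(xi))

for xi in (0.3, 1.0, 2.5):
    for x in (-2.0, -0.1, 0.7, 3.0):
        assert math.log(2.0 * math.cosh(x)) <= rhs(x, xi) + 1e-12   # bound holds
    assert abs(rhs(xi, xi) - math.log(2.0 * math.cosh(xi))) < 1e-12  # tight at x = xi
print("Eq.(10) verified on the test grid")
```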

Assuming independence of each pixel observation, we can easily evaluate the lower bound of the whole image corruption process:

$$p(z \mid x) = \prod_i p(z_i \mid x_i) \ge \left[\prod_i \binom{K}{z_i}\right]\exp\left(-\frac{1}{2}x^{\mathrm T}\Xi x + z'^{\mathrm T}x\right)\exp\left(\frac{1}{2}\xi^{\mathrm T}\Xi\xi - K\sum_i \ln 2\cosh\xi_i\right) = p_\xi(z \mid x), \tag{12}$$

where $z'$ means the observation vector

$$z' = (2z_1 - K, \cdots, 2z_i - K, \cdots, 2z_M - K)^{\mathrm T}, \tag{13}$$

$\xi$ means the collection of the latent parameters $\{\xi_i\}$, and the matrix $\Xi$ means a diagonal matrix whose components are $\Xi_{ii} = K\tanh\xi_i / \xi_i$. Thus, we regard the lower bound $p_\xi(z \mid x)$ of the likelihood as the observation process, which is denoted as a Gaussian form of $x$.

### 2.2 Prior probability

Introducing Bayesian inference requires a prior probability for the image in order to compensate for the loss of information through the observation. In this study, we assume a Gaussian Markov random field (GMRF)[2] as the prior. The GMRF prior is one of the popular priors in the field of image restoration; it is not the state-of-the-art prior in the sense of noise reduction performance, but it is easy to treat analytically, so we apply it in this study. Usually, we define the GMRF through the sum of squared differences of neighboring parameters $(x_i - x_j)^2$, where $x_i$ and $x_j$ are neighboring parameters. The energy function and the prior probability for the GMRF can be described as follows:

$$H_{\mathrm{pri}}(x;\alpha,h) = \frac{\alpha}{2}\sum_{(i,j)}(x_i - x_j)^2 + \frac{h}{2}\sum_i x_i^2 \tag{14}$$
$$= \frac{1}{2}x^{\mathrm T}(\alpha\Lambda + hI)x, \tag{15}$$
$$p(x \mid \alpha, h) = \frac{1}{Z(\alpha,h)}\exp\bigl(-H_{\mathrm{pri}}(x;\alpha,h)\bigr), \tag{16}$$
$$Z(\alpha,h) = \int dx\,\exp\bigl(-H_{\mathrm{pri}}(x;\alpha,h)\bigr) = \sqrt{\bigl|2\pi(\alpha\Lambda + hI)^{-1}\bigr|}, \tag{17}$$

where the sum over $(i,j)$ runs over neighboring pixel index pairs, the matrix $\Lambda$ is determined by the adjacency of the pixels (the graph Laplacian of the neighborhood structure), and $I$ is the identity matrix. In eq.(14), the first term means the GMRF part and the second means a zero-centered Gaussian prior introduced for stable calculation.
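Equality of the sum form (14) and the quadratic form (15) can be illustrated on a tiny grid. In the sketch below (our own helper, with $\Lambda$ built as the graph Laplacian of the 4-neighbor structure), the two energies coincide:

```python
import numpy as np

def gmrf_precision(edges, M, alpha, h):
    # alpha * Lambda + h * I of Eqs. (14)-(15), where Lambda is the
    # graph Laplacian of the pixel neighborhood structure
    Lam = np.zeros((M, M))
    for i, j in edges:
        Lam[i, i] += 1.0; Lam[j, j] += 1.0
        Lam[i, j] -= 1.0; Lam[j, i] -= 1.0
    return alpha * Lam + h * np.eye(M)

edges = [(0, 1), (2, 3), (0, 2), (1, 3)]          # 2x2 image, 4-neighbor edges
x = np.array([0.2, -1.0, 0.5, 0.3])
alpha, h = 1.5, 0.1
H_sum = alpha/2 * sum((x[i] - x[j])**2 for i, j in edges) + h/2 * (x**2).sum()
H_quad = 0.5 * x @ gmrf_precision(edges, 4, alpha, h) @ x
print(H_sum, H_quad)   # the two energies agree
```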

### 2.3 Image restoration algorithm with Posterior

From the observation model (12) and the prior (16), we can derive the approximated posterior as

$$p_\xi(x \mid z, \alpha, h) \propto p_\xi(z \mid x)\,p(x \mid \alpha, h), \tag{18}$$

and since the observation is evaluated in the latent-variable form, we can derive the approximated posterior as a Gaussian distribution:

$$p_\xi(x \mid z, \alpha, h) = \mathcal{N}\bigl(x \mid m, S^{-1}\bigr), \tag{19}$$
$$S = \alpha\Lambda + hI + \Xi, \tag{20}$$
$$m = S^{-1}z'. \tag{21}$$

Taking the inferred value of $x$ as the posterior mean $\langle x \rangle$, we can obtain it explicitly:

$$\langle x \rangle = \int dx\, x\, p_\xi(x \mid z, \alpha, h) = m. \tag{22}$$
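On a small problem, the posterior mean (22) can be computed directly from eqs.(19)-(21). The sketch below uses a toy 1-D chain with illustrative values (all numbers are ours, not from the paper):

```python
import numpy as np

M, K, alpha, h = 4, 10, 1.0, 0.01
z = np.array([2, 7, 6, 3])                  # observed counts
xi = np.ones(M)                             # latent parameters, held fixed here
Xi = np.diag(K * np.tanh(xi) / xi)          # Xi_ii = K tanh(xi_i) / xi_i
# chain (1-D) graph Laplacian standing in for Lambda
Lam = np.diag([1, 2, 2, 1]).astype(float) - np.eye(M, k=1) - np.eye(M, k=-1)
S = alpha * Lam + h * np.eye(M) + Xi        # Eq. (20)
m = np.linalg.solve(S, 2*z - K)             # Eq. (21) with z' = 2z - K
print(m)
```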

### 2.4 Inference of Hyperparameters and Latent variables

In order to obtain an appropriate restoration with (22), the hyperparameters $\alpha$, $h$ and the latent variables $\xi$ should be adjusted properly. Hereafter we introduce the notation $\theta = \{\alpha, h, \xi\}$ for convenience. In order to solve this, we apply an expectation-maximization (EM) algorithm for inferring these parameters $\theta$. The EM algorithm consists of two alternating steps for a system that has hidden variables [19][20]. Denoting each time step by $t$, the EM algorithm can be described as the following two steps:

• E-Step: Calculate the Q-function, which means the average of the log-likelihood for the given parameter $\theta^{(t)}$:

$$Q(\theta \mid \theta^{(t)}) = \bigl\langle \ln p(x, z \mid \theta) \bigr\rangle_{x \mid \theta^{(t)}} \tag{23}$$

• M-Step: Maximize the Q-function with respect to $\theta$, and set the maximizing argument as the next hyperparameter $\theta^{(t+1)}$:

$$\theta^{(t+1)} = \operatorname*{argmax}_\theta\, Q(\theta \mid \theta^{(t)}) \tag{24}$$

Neglecting the terms constant with respect to the parameter $\theta$, we can derive the Q-function in the E-step as:

$$Q(\theta \mid \theta^{(t)}) = -\frac{1}{2}\left(m^{(t)\mathrm T} S\, m^{(t)} + \operatorname{Tr} S S^{(t)-1}\right) + \frac{1}{2}\ln\bigl|\alpha\Lambda + hI\bigr| + \frac{1}{2}\xi^{\mathrm T}\Xi\xi - K\sum_i \ln 2\cosh\xi_i, \tag{25}$$
$$S^{(t)} = \alpha^{(t)}\Lambda + h^{(t)}I + \Xi^{(t)}, \tag{26}$$
$$m^{(t)} = S^{(t)-1}z'. \tag{27}$$

In order to maximize the Q-function in the M-step, we solve the saddle point equations $\partial Q/\partial\alpha = 0$, $\partial Q/\partial h = 0$, and $\partial Q/\partial\xi_i = 0$ for any $i$. Thus, we obtain

$$\sum_i \frac{\eta_i}{\alpha\eta_i + h} = m^{(t)\mathrm T}\Lambda m^{(t)} + \operatorname{Tr}\Lambda S^{(t)-1}, \tag{28}$$
$$\sum_i \frac{1}{\alpha\eta_i + h} = \bigl\|m^{(t)}\bigr\|^2 + \operatorname{Tr} S^{(t)-1}, \tag{29}$$
$$\xi_i = \sqrt{m_i^{(t)2} + S_{ii}^{(t)-1}}, \tag{30}$$

where $\eta_i$ are the eigenvalues of the matrix $\Lambda$, and $S_{ii}^{(t)-1}$ is the $i$-th diagonal component of the matrix $S^{(t)-1}$.

In order to obtain the exact hyperparameters $\alpha$ and $h$, we would have to solve eqs.(28) and (29) simultaneously; however, this increases the computational cost. Thus, hereafter, we assume the hyperparameter $h$ is fixed at a sufficiently small value ($h \to 0$). Then, we obtain the inference of the hyperparameter $\alpha$ as

$$\frac{1}{\alpha} = \frac{1}{M-1}\left(m^{(t)\mathrm T}\Lambda m^{(t)} + \operatorname{Tr}\Lambda S^{(t)-1}\right), \tag{31}$$

since the eigenvalues $\eta_i$ of the matrix $\Lambda$ have only one zero component, and the other components are positive values.
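This spectral fact — one zero eigenvalue, the rest positive for a connected pixel lattice — can be checked directly on a small grid (sketch code; our construction of $\Lambda$ as the grid's graph Laplacian):

```python
import numpy as np

n = 3; M = n * n                       # 3x3 pixel grid, M = 9
Lam = np.zeros((M, M))
for r in range(n):
    for c in range(n):
        i = r * n + c
        for j in [i + 1] * (c + 1 < n) + [i + n] * (r + 1 < n):
            # add the 4-neighbor edge (i, j) to the Laplacian
            Lam[i, i] += 1.0; Lam[j, j] += 1.0
            Lam[i, j] -= 1.0; Lam[j, i] -= 1.0
eig = np.linalg.eigvalsh(Lam)
print(sum(e < 1e-10 for e in eig), float(eig[1]))   # one zero mode, rest positive
```

Because exactly one $\eta_i$ vanishes, the effective number of terms in eq.(28) with $h \to 0$ is $M-1$, which is where the $M-1$ factor of eq.(31) comes from.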

#### 2.4.1 Approximating Posterior Mean with Loopy Belief Propagation

In the algorithm of the previous section[18], each E-step requires the inverse of the accuracy matrix $S^{(t)}$ to calculate the parameters. In general, computing the inverse of an $M \times M$ matrix requires $O(M^3)$ cost. Since the restoring image can be large, in the sense of calculation scalability, reduction of this cost is important for applications.

In order to reduce the calculation cost, we introduce loopy belief propagation (LBP) into the E-step of the algorithm. In the manner of Gaussian graphical models, the efficacy of LBP has been confirmed[4][21][18]. Our approximated posterior, that is eq.(18), is expressed as a kind of Gaussian form, so we can apply LBP for the restoration. For applying LBP, we modify the evaluation of the restoration value described in (22) to the marginal posterior mean (MPM):

$$x_i^* = \langle x_i \rangle_{\mathrm{MPM}} = \int dx_i\, x_i\, p_\xi(x_i \mid z, \alpha, h). \tag{37}$$

To obtain the marginal posterior mean, we apply a local message passing algorithm defined by LBP. Hereafter, for convenience, we introduce the following notations:

$$\beta_i = \frac{K\tanh\xi_i}{\xi_i}, \tag{38}$$
$$y_i = \frac{2z_i - K}{\beta_i}. \tag{39}$$

Then we obtain the observation likelihood (11) for the $i$-th node as

$$p(y_i \mid x_i) \propto \exp\left(-\frac{\beta_i}{2}(y_i - x_i)^2\right). \tag{40}$$

The LBP algorithm is a kind of local message passing. Here, we denote the message from the $j$-th node to the $i$-th node as $M_{j\to i}(x_i)$. In each LBP iteration, this message passing is carried out for each connection. In the GMRF case, the message can be derived as

$$M_{j\to i}(x_i) \propto \int dx_j\, p(y_j \mid x_j)\exp\left(-\frac{\alpha}{2}(x_i - x_j)^2 - \frac{h}{2}x_j^2\right)\prod_{k\in N(j)\setminus i} M_{k\to j}(x_j), \tag{41}$$

where $N(j)$ means the collection of the units connected to the $j$-th unit, and $N(j)\setminus i$ means that collection excluding the $i$-th unit. From the form of the integral in eq.(41), we can regard the message from the $j$-th node to the $i$-th node as the following Gaussian:

$$M_{j\to i}(x_i) \propto \mathcal{N}\bigl(x_i \mid \mu_{j\to i}, \gamma_{j\to i}^{-1}\bigr). \tag{42}$$

Substituting the message form eq.(42) into eq.(41), we can derive the message update rules as

$$\mu_{j\to i} = \frac{\beta_j y_j + \sum_{k\in N(j)\setminus i}\gamma_{k\to j}\mu_{k\to j}}{\beta_j + \sum_{k\in N(j)\setminus i}\gamma_{k\to j} + h}, \tag{43}$$
$$\frac{1}{\gamma_{j\to i}} = \frac{1}{\alpha} + \frac{1}{\beta_j + \sum_{k\in N(j)\setminus i}\gamma_{k\to j} + h}. \tag{44}$$
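The update rules (43)-(44) can be exercised on a toy chain, where LBP is exact, and compared against the direct posterior mean $m = S^{-1}z'$. The sketch below is our illustration (values and scheduling are ours; node $i$'s own $h$-term is kept in the marginal):

```python
import numpy as np

M, K, alpha, h = 4, 10, 1.0, 0.1
z = np.array([2, 7, 6, 3])
xi = np.ones(M)
beta = K * np.tanh(xi) / xi                    # Eq. (38)
y = (2*z - K) / beta                           # Eq. (39)
nbrs = {i: [j for j in (i - 1, i + 1) if 0 <= j < M] for i in range(M)}

mu = {(j, i): 0.0 for j in nbrs for i in nbrs[j]}
gam = {(j, i): 1.0 for j in nbrs for i in nbrs[j]}
for _ in range(50):                            # sweeps of Eqs. (43)-(44)
    for j in range(M):
        for i in nbrs[j]:
            a = beta[j] + h + sum(gam[k, j] for k in nbrs[j] if k != i)
            b = beta[j]*y[j] + sum(gam[k, j]*mu[k, j] for k in nbrs[j] if k != i)
            gam[j, i] = 1.0 / (1.0/alpha + 1.0/a)
            mu[j, i] = b / a

# marginal means from Eq. (45), combining local likelihood and incoming messages
m_lbp = np.array([(beta[i]*y[i] + sum(gam[j, i]*mu[j, i] for j in nbrs[i]))
                  / (beta[i] + h + sum(gam[j, i] for j in nbrs[i]))
                  for i in range(M)])

# exact mean m = S^{-1} z' for comparison; on a chain (a tree) LBP is exact
Lam = np.diag([1, 2, 2, 1]).astype(float) - np.eye(M, k=1) - np.eye(M, k=-1)
m_exact = np.linalg.solve(alpha*Lam + h*np.eye(M) + np.diag(beta), 2*z - K)
print(np.abs(m_lbp - m_exact).max())   # agreement to numerical precision
```

On loopy (grid) graphs the means are approximate rather than exact, which is the usual trade-off LBP makes for its $O(M)$ per-sweep cost.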

The LBP requires iterations for convergence of the message values. After convergence, the marginal posteriors required for the EM algorithm can be evaluated as

$$p(x_i \mid y, \alpha, h) \propto p(y_i \mid x_i)\prod_{j\in N(i)} M_{j\to i}(x_i), \tag{45}$$
$$p(x_i, x_j \mid y, \alpha, h) \propto p(y_i \mid x_i)\,p(y_j \mid x_j)\exp\left(-\frac{\alpha}{2}(x_i - x_j)^2 - \frac{h}{2}(x_i^2 + x_j^2)\right)\prod_{k\in N(i)\setminus j} M_{k\to i}(x_i)\prod_{l\in N(j)\setminus i} M_{l\to j}(x_j). \tag{46}$$

Thus, the Q-function for the proposed EM algorithm is

$$Q(\theta \mid \theta^{(t)}) = \bigl\langle \ln p(y \mid x, \theta) \bigr\rangle_{\mathrm{MPM}} + \frac{M-1}{2}\ln\alpha - \frac{\alpha}{2}\sum_{(i,j)}\bigl\langle (x_i - x_j)^2 \bigr\rangle_{\mathrm{MPM}}, \tag{47}$$

where $\langle\cdot\rangle_{\mathrm{MPM}}$ means the average over the marginal posteriors (MPM) denoted in eqs.(45) and (46). In deriving eq.(47), we also assume the hyperparameter $h$ is small enough ($h \to 0$).

Putting it all together, the proposed LBP-approximated solution is shown as Algorithm 1.

## 3 Computer Simulation Results

We evaluate the restoration performance with computer simulations. In the following, we compare our latent variational restoration with LBP solution (Algorithm 1) against the conventional median filter restoration and the standard Gaussian LBP (GLBP) solution[4].

For the evaluation, we extract several image patches from the standard images, called “cameraman”, “lena”, “barbara”, “fingerprint” and “peppers”.

We resample each image to half size with weak Gaussian blurring in order to increase smoothness, since we assume the observed object is smooth. We regard the resampled images as the observed images, and extract several image patches from them. The Poisson corruption process is influenced by the contrast of the observed images, so we control the maximum and minimum of the image in the simulation. Here, we regard the patch image as $\{I_i\}$, where $i$ means the position of the pixel. In order to control the contrast, we introduce the pixel value range $[\lambda_{\mathrm{Min}}, \lambda_{\mathrm{Max}}]$, which means the minimum and the maximum values of the Poisson parameter image. Denoting the minimum and the maximum values of the patch image as $I_{\mathrm{Min}}$ and $I_{\mathrm{Max}}$ respectively, we define the source image value of the $i$-th pixel as a linear transform:

$$\lambda_i^* = \frac{\lambda_{\mathrm{Max}} - \lambda_{\mathrm{Min}}}{I_{\mathrm{Max}} - I_{\mathrm{Min}}}\,(I_i - I_{\mathrm{Min}}) + \lambda_{\mathrm{Min}}. \tag{48}$$

Thus, as the difference between $\lambda_{\mathrm{Min}}$ and $\lambda_{\mathrm{Max}}$ becomes large, the source image becomes high contrast, which means the low noise case. Hereafter, we fix $\lambda_{\mathrm{Min}}$, and only control the parameter $\lambda_{\mathrm{Max}}$ as the strength of the accuracy of the observation.
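The contrast control of eq.(48) is a plain linear rescaling; a minimal sketch (the helper name is ours):

```python
import numpy as np

def rescale(I, lam_min, lam_max):
    # Eq. (48): map pixel values I into Poisson intensities [lam_min, lam_max]
    I = np.asarray(I, dtype=float)
    return (lam_max - lam_min) / (I.max() - I.min()) * (I - I.min()) + lam_min

lam = rescale([0, 64, 128, 255], 2.0, 30.0)
print(lam)   # endpoints map to lam_min = 2.0 and lam_max = 30.0
```

A larger `lam_max` spreads the intensities and thus raises the effective signal-to-noise of the Poisson channel, matching the role of $\lambda_{\mathrm{Max}}$ in the experiments.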

### 3.1 Evaluation of LBP restoration abilities

To compare restoration abilities, we apply the peak signal-to-noise ratio (PSNR). The PSNR is defined as a kind of similarity between the reference image and the test image as:

$$\mathrm{PSNR}(\lambda, \lambda^*) = 10\log_{10}\frac{(\max\lambda^* - \min\lambda^*)^2}{\mathrm{MSE}(\lambda, \lambda^*)}, \tag{49}$$
$$\mathrm{MSE}(\lambda, \lambda^*) = \frac{1}{M}\sum_i (\lambda_i - \lambda_i^*)^2, \tag{50}$$

where $M$ means the image size (the number of pixels).
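Eqs.(49)-(50) translate directly into code (our helper names; the peak range is taken from the ground-truth image, as in eq.(49)):

```python
import numpy as np

def mse(lam, lam_star):
    # Eq. (50): mean squared error over the M pixels
    return float(np.mean((np.asarray(lam) - np.asarray(lam_star))**2))

def psnr(lam, lam_star):
    # Eq. (49): peak taken as the ground-truth dynamic range
    lam_star = np.asarray(lam_star, dtype=float)
    peak = lam_star.max() - lam_star.min()
    return 10.0 * np.log10(peak**2 / mse(lam, lam_star))

truth = np.array([2.0, 10.0, 20.0, 30.0])
print(psnr(truth + 0.5, truth))
```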


### 3.2 Comparison with Other Image Restoration Methods

In this section, we compare the restored image quality of Algorithm 1 with other image restoration methods: the conventional median filter, the BM3D method[1], and the Gaussian LBP (GLBP) solution[4]. In these restoration methods, we assume restoration of the parameters $\lambda$ from the observed values $z$. Thus, we apply the median filter, BM3D and GLBP methods to the observation $z$ in order to obtain the restoration result. In particular, the Gaussian LBP solution assumes the observed pixel values are the result of observing the corresponding parameters through a Gaussian channel, that is,

$$p(z \mid \lambda) = \prod_i p(z_i \mid \lambda_i, \beta_G), \tag{51}$$
$$p(z_i \mid \lambda_i, \beta_G) = \sqrt{\frac{\beta_G}{2\pi}}\exp\left(-\frac{\beta_G}{2}(z_i - \lambda_i)^2\right), \tag{52}$$

instead of eq.(1). In the GLBP solution, we also adopt the GMRF as the prior:

$$p(\lambda \mid \alpha_G, h_G) = \frac{1}{Z_{\mathrm{pri}}^G(\alpha_G, h_G)}\exp\bigl(-H_{\mathrm{pri}}^G(\lambda \mid \alpha_G, h_G)\bigr), \tag{53}$$
$$H_{\mathrm{pri}}^G(\lambda \mid \alpha_G, h_G) = \frac{\alpha_G}{2}\sum_{(i,j)}(\lambda_i - \lambda_j)^2 + \frac{h_G}{2}\sum_i \lambda_i^2, \tag{54}$$
$$Z_{\mathrm{pri}}^G(\alpha_G, h_G) = \int d\lambda\,\exp\bigl(-H_{\mathrm{pri}}^G(\lambda \mid \alpha_G, h_G)\bigr). \tag{55}$$

The hyperparameters $\alpha_G$, $h_G$, and $\beta_G$ are inference parameters, which are solved by the EM algorithm using LBP[4].

Fig.2 shows a comparison of example results. In the figure, we show several cropped images from “cameraman”, “barbara”, “fingerprint”, and “peppers”. In the evaluation, we fix the contrast control parameter $\lambda_{\mathrm{Max}}$ that controls the noise strength in the Poisson corruption. The first column shows the original images, and the second one shows the Poisson corrupted images. The third shows the conventional median filter restoration results. The fourth shows the restoration results with the BM3D method[1]. The BM3D method is one of the state-of-the-art methods for AWGN; however, in this case, the variance of the noise is not uniform in the image, so the estimation of the thresholding parameter looks insufficient. The fifth shows the restoration results with the GLBP method, and the sixth shows those with our LBP method. We can see our result images are smoother than those of the GLBP results. Table 1 shows the PSNR evaluations for each original image. Our latent variational method shows better restoration results than those of the GLBP solutions. We also show restoration results with the conventional median filter in Table 1. The median filter restoration makes the image too smooth, so the PSNR evaluation tends to be small.

In order to compare quantitative restoration evaluations, we introduce the following improvement-in-SNR (ISNR) index for two restoration results $\lambda_1$ and $\lambda_2$:

$$\mathrm{ISNR}(\lambda_1, \lambda_2; \lambda^*) = \mathrm{PSNR}(\lambda_2, \lambda^*) - \mathrm{PSNR}(\lambda_1, \lambda^*) = 10\log_{10}\frac{\mathrm{MSE}(\lambda_1, \lambda^*)}{\mathrm{MSE}(\lambda_2, \lambda^*)}, \tag{56}$$

where $\lambda^*$ means the ground-truth source image. This index shows the improvement of $\lambda_2$ over $\lambda_1$ in the sense of PSNR; a positive index shows that $\lambda_2$ improves on $\lambda_1$.
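Since eq.(56) depends only on the two MSE values, a one-line helper suffices (sketch, our naming):

```python
import numpy as np

def isnr(mse_before, mse_after):
    # Eq. (56): positive when the second restoration improves on the first
    return 10.0 * np.log10(mse_before / mse_after)

print(isnr(4.0, 1.0))   # quartering the MSE gains about 6 dB
```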

We evaluate the ISNR between the noised image and our result, and also evaluate the ISNR of the other restoration results, that is, the median filter result, the BM3D restoration result, and the GLBP result, against our result. In the image preparation, we crop patch images from random locations of each original image shown in Fig.1. In the evaluation, we apply several cases of the contrast parameter $\lambda_{\mathrm{Max}}$.

Fig.3 shows the improvement of our results over the corrupted images. The contrast parameter $\lambda_{\mathrm{Max}}$, which is the horizontal value of the figure, controls the noise strength of the Poisson noise. In the figure, the boxplot shows the median, described as the thick line in the box, with quantiles for each contrast level. We obtain an improvement over the corrupted image in the sense of the median. Over the range of the control parameter $\lambda_{\mathrm{Max}}$, our method shows good performance; however, the results show large quantile variance in some cases. The reason for the large variance comes from the input image properties: the low improvement results come only from the “fingerprint” input image, so high spatial frequency combined with low contrast might prevent restoration.

We also show comparison results applying the conventional median filter. We apply the median filter to the noised image, and evaluate the PSNR against the original Poisson parameters. Fig.4 shows the results. The horizontal axis shows the same range as Fig.3, and the vertical one shows the ISNR in dB. In many cases, applying the median filter makes the restored image too smooth, so the PSNR evaluation becomes worse.

Fig.5 shows the improvement of our results over the BM3D restoration[1]. The horizontal axis is identical to Figs.3 and 4, and the vertical one shows the ISNR in dB. In the figure, the white circles show the outliers, which differ by twice the standard deviation or more from each average. In the lowest-contrast case, the BM3D restoration looks better than ours. However, in the other cases, our method shows better performance.

Fig.6 shows the improvement of our results over the GLBP restoration. This result also shows that our method gives better results than the GLBP over most of the range of $\lambda_{\mathrm{Max}}$; we obtain an improvement over the GLBP solution in the sense of the median. On the contrary, in the lowest-contrast case, almost all the restoration results of the GLBP are better than ours. We consider these results come from the accuracy of the hyperparameter inference. In the GLBP method, the noise variance, which is the inverse of the accuracy parameter $\beta_G$ in eq.(51), is a single parameter. In the low $\lambda_{\mathrm{Max}}$ case, the variances of the observations might be described by a single value. However, the differences among the variances of the observed values become large when the contrast parameter becomes large. As a result, the Gaussian model of eq.(51) cannot describe the observed values with a single hyperparameter $\beta_G$, so in many cases of large $\lambda_{\mathrm{Max}}$ the inferred observation accuracy $\beta_G$ becomes large, which makes the efficacy of the prior low.

## 4 Conclusion

In this study, we propose an image restoration method for Poisson corrupted images. Introducing the latent variable method, we derive the corruption process, which denotes the likelihood function, in a Gaussian form. Using the Bayesian image restoration framework, we derive the posterior probability for the restoration, introducing the GMRF as a prior. The posterior includes several hyperparameters $\alpha$, $h$, and latent variables $\xi$. In order to solve the restoration problem while determining these parameters, we construct an algorithm in the manner of the EM method.

Thus, our algorithm can determine all the parameters from the observed data $z$. The original EM algorithm requires $O(M^3)$ computational cost. Hence, in order to accelerate the algorithm, we approximate the posterior mean by the marginal posterior mean, and derive the LBP method for hyperparameter inference described as Algorithm 1. We introduce the two-body marginal posterior described in eq.(46) in order to infer the correlation between connected units denoted in eq.(34). Without the two-body interactions, the inference ignores the correlations, that is, $\langle x_i x_j \rangle = \langle x_i \rangle\langle x_j \rangle$ for any indices; this is identical to the naive mean field approximation. The naive mean field approximation, which only applies the single-body marginal posterior, causes underestimation of the hyperparameter $\alpha$. The correlation update rule is derived in eq.(34), which only requires the local messages, so the cost of the inference does not increase much. Solving the exact correlation between two units requires considering not only the connected bodies but also all the other bodies; this is the reason why the inverse of the accuracy matrix is required in the EM algorithm. We only consider the two-body effect; however, the hyperparameter inference appears to work well, and the restoration performance is the same as or better than that of the exact solution in the previous work. Hence, we propose that the LBP method is a good approximation for our Poisson corrupted image restoration framework.

Then, we compare our algorithm with the other methods, that is, the median filter, BM3D, and GLBP restorations[1][4]. The BM3D and GLBP methods regard the obtained variables as observed through a Gaussian noise channel. In these solutions, the very low contrast images show slightly better restoration results than ours; however, the larger the contrast becomes, the lower their relative performance becomes. From eq.(40), our method can express the observation accuracy for each pixel value; however, the GLBP solution has only a single hyperparameter $\beta_G$ in eq.(51). BM3D also assumes that the variance of the pixels in an image can be denoted by a single parameter in its algorithm. The variance and the mean of the Poisson observation, however, both depend on the single parameter $\lambda_i$, so the observation assumptions of GLBP and BM3D might not be satisfied. As a result, in the GLBP case, $\beta_G$ tends to be overestimated. In the numerical evaluation, our method shows better performance than the GLBP except in the lowest-contrast case. Thus, our results suggest that considering the correct Poisson observation model is as important as the choice of the prior.

Our latent variable method evaluates the Poisson likelihood function in a Gaussian form. Thus, in future works, we can easily extend our method to other image restoration frameworks. For example, we could express the parameter values of $x$ by some linear transformation:

$$x = As, \tag{57}$$

where $A$ and $s$ are the transformation matrix and the expression vector, respectively. Then, we could substitute it into eq.(11) and consider a prior for the transformed vector, such as a sparse prior like K-SVD[22].

## Acknowledgments

We appreciate K. Takiyama, Tamagawa-Gakuen University, and Professor M. Okada, University of Tokyo, for fruitful discussions. This work is partly supported by MEXT/JSPS KAKENHI Grant Numbers 25330285 and 26120515.

## References

• [1] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian. Image Denoising by Sparse 3D Transform-Domain Collaborative Filtering. IEEE Transactions on Image Processing, 16(8):2080–2095, Aug. 2007.
• [2] K. Tanaka. Statistical-mechanical approach to image processing. Journal of Physics A: Mathematical and General, 35(37):R81–R150, 2002.
• [3] J. Portilla, V. Strela, M. J. Wainwright, and E. P. Simoncelli. Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Transactions on Image Processing, 12:1338–1351, 2003.
• [4] K. Tanaka and D. M. Titterington. Statistical trajectory of an approximate EM algorithm for probabilistic image processing. Journal of Physics A: Mathematical and Theoretical, 40(37):11285, 2007.
• [5] H. Shouno and M. Okada. Bayesian Image Restoration for Medical Images Using Radon Transform. Journal of the Physical Society of Japan, 79:074004, 2010.
• [6] H. Shouno, M. Yamasaki, and M. Okada. A Bayesian Hyperparameter Inference for Radon-Transformed Image Reconstruction. International Journal of Biomedical Imaging, Article ID 870252, 10 pages, 2011.
• [7] I. Rodrigues, J. Sanches, and J. Bioucas-Dias. Denoising of medical images corrupted by Poisson noise. In IEEE International Conference on Image Processing, pages 1756–1759, 2008.
• [8] A. de Decker, J. A. Lee, and M. Verleysen. Variance stabilizing transformations in patch-based bilateral filters for Poisson noise image denoising. In Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pages 3673–3676, 2009.
• [9] M. A. T. Figueiredo and J. Bioucas-Dias. Restoration of Poissonian Images Using Alternating Direction Optimization. IEEE Transactions on Image Processing, 19(12):3133–3145, Dec. 2010.
• [10] B. S. Hadj, L. Blanc-Feraud, G. Aubert, and G. Engler. Blind restoration of confocal microscopy images in presence of a depth-variant blur and Poisson noise. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 915–919, 2013.
• [11] S. Ono and I. Yamada. Poisson image restoration with likelihood constraint via hybrid steepest descent method. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5929–5933, 2013.
• [12] J. A. Palmer, K. Kreutz-Delgado, D. P. Wipf, and B. D. Rao. Variational EM algorithms for non-Gaussian latent variable models. In Advances in Neural Information Processing Systems 18, volume 18, pages 1059–1066. MIT Press, 2005.
• [13] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
• [14] M. W. Seeger. Bayesian inference and optimal design for the sparse linear model. Journal of Machine Learning Research, 9:759–813, 2008.
• [15] K. Watanabe and M. Okada. Approximate Bayesian estimation of varying binomial process. IEICE Transactions, E94-A(12):2879–2885, 2011.
• [16] H. Sawada, H. Kameoka, S. Araki, and N. Ueda. Efficient algorithms for multichannel extensions of Itakura-Saito nonnegative matrix factorization. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 261–264, 2012.
• [17] H. Shouno and M. Okada. Poisson observed image restoration using a latent variational approximation with gaussian mrf. In PDPTA, pages 201–208. PDPTA 2013, 7 2013.
• [18] H. Shouno and M. Okada. Acceleration of Poisson corrupted image restoration with loopy belief propagation. In PDPTA, volume 1, pages 165–170. PDPTA 2014, 7 2014.
• [19] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood estimation from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 39:1–38, 1977.
• [20] D. J. C. MacKay. Hyperparameters: optimize, or integrate out? In Maximum Entropy and Bayesian Methods, Santa Barbara, pages 43–60. Kluwer, 1996.
• [21] K. P. Murphy, Y. Weiss, and M. I. Jordan. Loopy belief propagation for approximate inference: An empirical study. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, UAI'99, pages 467–475, San Francisco, CA, USA, 1999. Morgan Kaufmann Publishers Inc.
• [22] M. Aharon, M. Elad, and A. Bruckstein. K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation. IEEE Transactions on Signal Processing, 54(11):4311–4322, Nov. 2006.