1 Introduction
The sharpening of blurred images is a standard problem in many imaging applications. A great variety of approaches to this severely ill-posed inverse problem have been developed over time, which differ in the assumptions they make and in their suitability for different application contexts.
Blur of an image is modelled by a point-spread function (PSF) which describes the redistribution of light energy in the image domain. When blurring acts equally at all locations, one has a space-invariant PSF which acts by convolution. Accounting also for the impact of noise, a typical blur model (with additive noise) then reads
$f = u * h + n$ (1)
where $f$ is the observed image, and $u$ the unknown sharp image.
In the more general case of a space-variant blur, one needs a point-spread function with two arguments, $h(x, y)$, and the convolution $u * h$ is replaced with the integral operator
$(\mathcal{H} u)(x) = \int_\Omega h(x, y)\, u(y)\, \mathrm{d}y$ (2)
such that
$f = \mathcal{H} u + n$ (3)
which subsumes the space-invariant case by setting $h(x, y) = h(x - y)$. We denote by $h^*$ the adjoint of the point-spread function $h$, which is given by $h^*(x, y) = h(y, x)$. Conservation of energy generally implies that $\int_\Omega h(x, y)\, \mathrm{d}x = 1$ for all $y$; however, this condition may be violated near image boundaries due to blurring across the boundary.
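The operator view above can be made concrete by a small discretisation. The following sketch (1-D, with an assumed box-shaped PSF whose width varies across the domain; all names are my own) represents $\mathcal{H}$ as a matrix whose column $y$ holds the PSF of source pixel $y$; the adjoint is then the transpose, and conservation of energy becomes a column-sum condition:

```python
import numpy as np

def blur_matrix(n, widths):
    """Discretised space-variant blur: column y holds the PSF h(., y) of
    source pixel y (here a box of half-width widths[y]), normalised so
    that each column sums to 1 -- the discrete conservation of energy."""
    H = np.zeros((n, n))
    for y in range(n):
        lo, hi = max(0, y - widths[y]), min(n, y + widths[y] + 1)
        H[lo:hi, y] = 1.0 / (hi - lo)
    return H

n = 8
H = blur_matrix(n, widths=[0, 0, 1, 1, 2, 2, 1, 0])
u = np.arange(1.0, n + 1.0)          # a "sharp" test signal
f = H @ u                            # blurred signal, f = H u (noise-free)
# the adjoint of H is its transpose; energy conservation reads H^T 1 = 1
assert np.allclose(H.T @ np.ones(n), np.ones(n))
assert np.isclose(f.sum(), u.sum())  # total intensity is preserved
```

With the per-column normalisation used here, the column-sum condition holds exactly even at the boundary; omitting the renormalisation of clipped boundary columns would reproduce the boundary violation mentioned above.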
In deblurring, we want to obtain a restored image that approximates $u$, with the degraded image $f$ and the PSF $h$ as input. This is the case of non-blind deconvolution (as opposed to blind deconvolution, which aims at inferring the sharp image and the PSF simultaneously from the degraded image).
Some approaches to the deconvolution problem are presented in more detail in Section 2.
Our contribution.
The main subject of this paper is to discuss a modification of the Richardson-Lucy (RL) deconvolution method [22, 27] by robust data terms. Building on the known variational interpretation of RL deconvolution [30], we replace the asymmetric penaliser function in the data term in such a way that larger residual errors are penalised less than with the standard Csiszár divergence term.
Using robust data terms together with a regulariser similar to [11, 12], we obtain a robust and regularised Richardson-Lucy variant that unites the high restoration quality of variational deconvolution methods with an efficiency not far from that of the original Richardson-Lucy iteration. This method has already been used for experiments on recovering information from diffuse reflections in [1] and, combined with interpolation, for the enhancement of confocal microscopy images [13, 26]. It has also been used to achieve efficient deconvolution under real-time or almost-real-time conditions [36, 38]. We demonstrate that both robust data terms and regularisers contribute substantially to its performance. An earlier version of the present work is the technical report [35].
Related work.
The omnipresence of deblurring problems has led researchers to address this problem for a long time [32, 42]. From the abundant literature on this topic, the most relevant work in our present context includes Richardson-Lucy deconvolution [22, 27], variational methods [8, 24, 28, 43], and their interplay [11, 12, 30]; see also [7] for another approach to combining Richardson-Lucy deconvolution with regularisation.
Fundamental theoretical results on existence and uniqueness of solutions of deconvolution problems can be found in the work of Bertero et al. [6].
Robust data terms in deconvolution go back to Zervakis et al. [44] in statistical models, and have recently been used intensively in the variational context by Bar et al. [4, 5] and Welk et al. [37, 40]. Positivity constraints were studied in discrete iterative deconvolution by Nagy and Strakoš [23] and in a variational framework by Welk and Nagy [37]. The extension of variational approaches to multi-channel images has been studied in [15] and more specifically in deconvolution in [2, 37].
Structure of the paper.
In Section 2 we recall approaches to image deconvolution which form the background for our approach, and discuss some aspects of noise models and robustness. Section 3 recalls the embedding of Richardson-Lucy deconvolution into a variational context. Exploiting this connection, the RL algorithm can be modified in order to increase restoration quality and robustness with respect to noise and perturbations. An experimental comparison of the deconvolution techniques under consideration, based on both synthetic and real-world data, is provided in Section 4. Conclusions in Section 5 end the paper.
2 Deconvolution Methods
We start by recalling selected deconvolution approaches from the literature which we will refer to later.
2.1 Richardson-Lucy Deconvolution
Richardson-Lucy (RL) deconvolution [22, 27] is a nonlinear iterative method originally motivated from statistical considerations. It is based on the assumption of positive grey-values and a Poisson noise distribution. If the degraded and sharp images $f$, $u$ and the point-spread function $h$ are smooth functions over $\Omega$ with positive real values, one uses the iteration
$u^{k+1} = u^k \cdot \mathcal{H}^*\!\left( \dfrac{f}{\mathcal{H} u^k} \right)$ (4)
to generate a sequence of successively sharpened images $u^k$ from the initial image $u^0 = f$.
In the absence of noise, the sharp image $u$ is a fixed point of (4), as in this case the multiplier equals the constant function $1$.
The single parameter of the procedure is the number of iterations. While greater sharpness is achieved with an increasing number of iterations, the degree of regularisation is reduced, which leads to an amplification of artifacts that in the long run dominate the filtered image. This phenomenon is known as semi-convergence.
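In the space-invariant case, the iteration (4) can be sketched in a few lines of NumPy. The following is a 1-D toy implementation with periodic boundary conditions and an assumed small blur kernel (all names are my own), not the paper's implementation:

```python
import numpy as np

def conv(a, k):
    """Periodic convolution of two equally long 1-D signals."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(k)))

def richardson_lucy(f, h, iterations):
    """RL iteration: u^{k+1} = u^k * (h~ conv (f / (u^k conv h))),
    with the adjoint (mirrored) kernel h~(x) = h(-x) and u^0 = f."""
    h_adj = np.roll(h[::-1], 1)          # h(-x) on the periodic grid
    u = f.copy()
    for _ in range(iterations):
        u = u * conv(f / conv(u, h), h_adj)
    return u

u_true = np.array([0.1, 0.1, 1.0, 2.0, 1.0, 0.1, 0.1, 0.1])
h = np.zeros(8); h[[0, 1, 7]] = [0.6, 0.2, 0.2]  # normalised blur kernel
f = conv(u_true, h)                               # noise-free blurred signal
u_rec = richardson_lucy(f, h, 200)
# on noise-free data, iterating reduces the error towards the sharp signal
assert np.mean((u_rec - u_true)**2) < np.mean((f - u_true)**2)
```

Positivity of all iterates is preserved automatically, since the multiplier is a ratio of positive quantities.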
2.2 Variational Deconvolution
Variational methods [24, 43] address the deconvolution task by minimising a functional that consists of two parts: a data term that enforces the match between the sought image and the observed image via the blur model, and a smoothness term or regulariser that brings in regularity assumptions about the unknown sharp image. The strength of variational approaches lies in their great flexibility and in the explicit way of expressing the assumptions made. They often achieve an excellent reconstruction quality, but their computational cost tends to be rather high.
A general model for variational deconvolution of a grey-value image $f$ with known point-spread function $h$ is based on minimising the energy functional
$E[u] = \int_\Omega \left( \Phi\!\left( (u * h - f)^2 \right) + \alpha\, \Psi\!\left( |\nabla u|^2 \right) \right) \mathrm{d}x$ (5)
in which the data term penalises the reconstruction error or residual $u * h - f$ to suppress deviations from the blur model, while the smoothness term penalises roughness of the reconstructed image. The regularisation weight $\alpha > 0$ balances the influences of both terms. $\Phi$ and $\Psi$ are increasing penalty functions.
In the simplest case, $\Phi$ is the identity, thus imposing a quadratic penalty on the data model. Doing the same in the regulariser, one has Whittaker-Tikhonov regularisation [31, 41] with $\Psi(|\nabla u|^2) = |\nabla u|^2$. As this choice leads to a blurring that directly counteracts the desired sharpening, it is often avoided in favour of regularisers with edge-preserving properties. A popular representative is total variation deconvolution [8, 24, 33] which uses $\Psi(|\nabla u|^2) = |\nabla u|$. Edge-enhancing regularisers (e.g. of the Perona-Malik [25] type) were studied in [3, 39]. In [18, 19, 20], non-local regularisation was proposed.
For the actual minimisation, gradient descent or lagged-diffusivity-type minimisation schemes are often employed. Iterative schemes with advantageous convergence behaviour are derived from a half-quadratic approach in [34] (for the total variation regulariser) and [21] (for certain non-convex regularisers).
Using in (5) a penaliser $\Phi$ with less-than-quadratic growth leads to robust data terms [4, 44]. For example, [4] use a regularised $L^1$ penaliser of the form $\Phi(s^2) = \sqrt{s^2 + \varepsilon^2}$. The concept of robustness originates from statistics [17] and will be discussed in more detail below. Spatially variant robust deconvolution models were investigated in [5, 40].
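The effect of less-than-quadratic growth can be seen numerically. In this small sketch, a regularised $L^1$ penaliser (an assumed example in the spirit of the robust data terms above, with an illustrative $\varepsilon$) is compared with the quadratic penaliser on an outlier residual:

```python
import numpy as np

eps = 1e-3
phi_quad = lambda s2: s2                       # quadratic penaliser (Gaussian model)
phi_robust = lambda s2: np.sqrt(s2 + eps**2)   # regularised L1 penaliser, sub-linear growth

# penalty assigned to a moderate residual (1.0) versus an outlier (10.0):
# the quadratic penalty grows by a factor 100, the robust one only by ~10,
# so outliers influence the minimiser far less under the robust data term
assert phi_quad(10.0**2) / phi_quad(1.0**2) == 100.0
assert abs(phi_robust(10.0**2) / phi_robust(1.0**2) - 10.0) < 0.01
```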
Even Richardson-Lucy deconvolution can be interpreted as a fixed point iteration for an optimisation problem [30], using Csiszár's information divergence [10] as an (asymmetric) penaliser function. In a space-continuous formulation, one has the energy functional
$E[u] = \int_\Omega \left( u * h - f - f \ln \dfrac{u * h}{f} \right) \mathrm{d}x$ (6)
No regularisation term is included, which is linked to the above-mentioned semi-convergence behaviour. Nevertheless, the energy formulation permits modifying Richardson-Lucy deconvolution in the same flexible manner as the standard variational approach (5) by introducing robust data terms and edge-preserving, or even edge-enhancing, regularisers. While an edge-preserving regulariser in combination with Richardson-Lucy deconvolution has been used by Dey et al. [11, 12] (in a space-invariant setting), the possibility of a robust data term has so far not been studied in detail, although it has been used successfully in applications [1, 13, 26, 36, 38].
2.3 Data Terms, Noise Models and Robustness
Statistical considerations [14, 44] link particular data terms to specific noise models, in the sense that minimising the so constructed energy functional yields a maximum likelihood estimator under the corresponding type of noise. For example, quadratic penalisation pertains to Gaussian noise, while the penaliser $\Phi(s^2) = |s|$ matches Laplacian noise. The asymmetric penaliser in (6) is related to Poisson noise. This relation allows one to use an optimally adapted energy model whenever one has full control over the imaging process and can therefore establish an accurate noise model. In practice, one does not always have perfect control and knowledge of the imaging process. This is where the concept of robustness comes into play. According to Huber [17, p. 1], “robustness signifies insensitivity to small deviations from the assumptions”. In particular, “distributional robustness” means that “the shape of the underlying distribution deviates slightly from the assumed model” [17, p. 1]. This obviously applies to imaging processes in which no exact noise model is known, but further violations of model assumptions can also be subsumed here, such as imprecise estimates of the PSF, or errors near the image boundary due to blurring across the boundary, see [40].
It is not always feasible to incorporate every single influence factor into a model. Robust models are designed to cope with the remaining deviations and still produce usable results. In view of the uncertainty about the true distribution of noise, they are often based on data terms that match types of noise that are assumed to be “worse” than the real noise. A crucial point is to suppress the effect of outliers, which is achieved e.g. by data terms that penalise outliers less. Models are then adapted to distributions with “pessimistically” heavy tails.
To evaluate robustness experimentally, it is not only legitimate but even necessary to test such models against severe, maybe even unrealistic, types of noise that do not exactly match the model. A frequently used test case is impulse noise, and it turns out that variational approaches actually designed for Laplacian noise can cope with it well in practice [4]. Of course, one can no longer expect to establish optimality in the maximum-likelihood sense with respect to the true noise.
Furthermore, any comparison between methods that are optimised for different noise models inevitably involves testing at least one of them with non-matching noise. For example, this happens already in any comparison of Wiener filtering (Gaussian noise) against Richardson-Lucy (Poisson noise).
2.4 Gradient Descent Methods
Functionals of type (5) are often minimised using gradient descent [39, 40]. Alternatively, elliptic iteration schemes can be used [4].
Standard gradient descent.
To derive a gradient descent equation for the energy $E[u]$ from (5), one computes the first variation $\frac{\mathrm{d}}{\mathrm{d}\varepsilon} E[u + \varepsilon v] \big|_{\varepsilon = 0}$ by the usual Euler-Lagrange formalism for a small additive perturbation $\varepsilon v$. The variational gradient (Gâteaux derivative) $\delta E / \delta u$ is obtained from it by the requirement that $\frac{\mathrm{d}}{\mathrm{d}\varepsilon} E[u + \varepsilon v] \big|_{\varepsilon = 0} = \langle \delta E / \delta u, v \rangle$ holds for all test functions $v$, where $\langle \cdot, \cdot \rangle$ is the standard inner product of functions.
In an elliptic approach, one now solves the Euler-Lagrange equation $\delta E / \delta u = 0$. A gradient descent for (5) is given by the integro-differential equation (compare [40])
$\partial_t u = \left( \Phi'\!\left( (u * h - f)^2 \right) ( f - u * h ) \right) * h^* + \alpha\, \mathrm{div}\!\left( \Psi'\!\left( |\nabla u|^2 \right) \nabla u \right)$ (7)
Complementing this equation with e.g. the blurred image $f$ as initial condition and suitable boundary conditions, one has an initial-boundary value problem which is then approximated numerically until a steady state is reached, for example via an explicit Euler scheme. As each iteration involves two convolution operations, the computational expense is fairly high. Even more sophisticated approaches like semi-implicit schemes with conjugate gradients for (7) or the elliptic approach do not change the computational cost substantially, as the number of iterations for convergence in any scheme depends primarily on the extent and structure of the PSF.
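An explicit Euler scheme of this type might look as follows in 1-D. The penaliser choices (regularised $L^1$ for both data term and regulariser), step size, and parameters are illustrative assumptions, and each step indeed costs two convolutions, one with the kernel and one with its adjoint:

```python
import numpy as np

def conv(a, k):
    """Periodic convolution via the Fourier domain (1-D sketch)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(k)))

def descent_step(u, f, h, h_adj, alpha, tau, eps=0.1):
    """One explicit Euler step for a 1-D analogue of the gradient descent,
    with assumed penalisers Phi(s^2) = Psi(s^2) = sqrt(s^2 + eps^2)."""
    r = conv(u, h) - f                               # residual u*h - f
    data = conv(-r / np.sqrt(r**2 + eps**2), h_adj)  # robust data-term force
    ux = np.roll(u, -1) - u                          # forward difference
    flux = ux / np.sqrt(ux**2 + eps**2)              # Psi'(|u_x|^2) u_x
    reg = flux - np.roll(flux, 1)                    # discrete divergence
    return u + tau * (data + alpha * reg)

def energy(u, f, h, alpha, eps=0.1):
    """Discrete energy corresponding to the descent above."""
    r = conv(u, h) - f
    ux = np.roll(u, -1) - u
    return np.sum(np.sqrt(r**2 + eps**2)) + alpha * np.sum(np.sqrt(ux**2 + eps**2))

u_true = np.array([0.1, 0.1, 1.0, 2.0, 1.0, 0.1, 0.1, 0.1])
h = np.zeros(8); h[[0, 1, 7]] = [0.6, 0.2, 0.2]
h_adj = np.roll(h[::-1], 1)                          # adjoint kernel h(-x)
f = conv(u_true, h)
u = f.copy()
for _ in range(100):
    u = descent_step(u, f, h, h_adj, alpha=0.05, tau=0.05)
assert energy(u, f, h, 0.05) < energy(f, f, h, 0.05)  # the energy decreases
```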
Positivity-constrained gradient descent.
From modelling considerations one can often infer additional information that helps to mitigate the ill-posedness of the deconvolution problem. For instance, in a scalar-valued image with grey-values proportional to radiance, only positive values are admissible.
A strategy to incorporate this positivity constraint into the minimisation of (5) has been described in [37]. Based on earlier work [23], it is proposed to reparametrise the grey-values by the substitution $u = \mathrm{e}^w$. The values of the new image function $w$ are unconstrained in $\mathbb{R}$. Instead of (7), one obtains thereby the gradient descent
$\partial_t u = u \left( \left( \Phi'\!\left( (u * h - f)^2 \right) ( f - u * h ) \right) * h^* + \alpha\, \mathrm{div}\!\left( \Psi'\!\left( |\nabla u|^2 \right) \nabla u \right) \right)$ (8)
As detailed in [37], performing the gradient descent based on the variation of the function $w$ comes down to a function space transformation. Essentially, the positivity-constrained gradient descent turns out to be a gradient descent in a hyperbolic metric. From this viewpoint, zero and negative grey-values are avoided because they are put at infinite distance from any positive values. This reinterpretation can immediately be transferred to interval constraints with a suitable metric on the open interval $(a, b)$.
We want to point out here another reinterpretation which is obtained by using a multiplicative gradient descent. To this end, one considers a multiplicative perturbation $u (1 + \varepsilon v)$ with a test function $v$ and computes the variation $\frac{\mathrm{d}}{\mathrm{d}\varepsilon} E[u(1 + \varepsilon v)] \big|_{\varepsilon = 0}$. Analogous to the ordinary Gâteaux derivative above, a “multiplicative gradient” $\delta_{\mathrm{m}} E / \delta u$ is then derived from the requirement $\frac{\mathrm{d}}{\mathrm{d}\varepsilon} E[u(1 + \varepsilon v)] \big|_{\varepsilon = 0} = \langle \delta_{\mathrm{m}} E / \delta u, v \rangle$. One finds $\delta_{\mathrm{m}} E / \delta u = u\, \delta E / \delta u$, which constitutes a new derivation of the gradient descent (8) without the substitution $u = \mathrm{e}^w$.
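A minimal sketch of the positivity-preserving effect of the multiplicative descent: freezing the descent direction $d$ over one step and integrating $u_t = u \cdot d$ exactly yields an exponential update. The exact-integration choice is my own illustration, not the scheme used in the paper:

```python
import numpy as np

def multiplicative_step(u, d, tau):
    """Integrating u_t = u * d exactly over a step of size tau (with d
    frozen) gives u * exp(tau * d): values can approach 0 but never
    cross it, so positivity is preserved for any step size."""
    return u * np.exp(tau * d)

u = np.array([0.5, 1.0, 2.0])
d = np.array([-3.0, 0.0, 3.0])       # some descent direction, signs arbitrary
u1 = multiplicative_step(u, d, tau=1.0)
assert (u1 > 0).all()                # positivity preserved even for large steps
assert np.isclose(u1[1], 1.0)        # zero direction leaves the value unchanged
```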
3 Robust and Regularised Richardson-Lucy Deconvolution
For given $f$, $h$, equation (4) can be understood as a fixed point iteration associated to the minimisation of the functional (6), compare [30]. The integrand of (6) is the so-called information divergence introduced by Csiszár [10]. The asymmetric penaliser function
$\varphi_f(v) = v - f - f \ln \dfrac{v}{f}$ (9)
is strictly convex for $v > 0$ with its minimum at $v = f$.
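These properties of the penaliser $\varphi_f(v) = v - f - f \ln(v/f)$ can be checked numerically; this is a small sanity check of mine, not part of the paper:

```python
import numpy as np

def csiszar_penaliser(v, f):
    """Asymmetric penaliser phi_f(v) = v - f - f*log(v/f), defined for v > 0."""
    return v - f - f * np.log(v / f)

f = 2.0
vs = np.linspace(0.5, 5.0, 1000)
vals = csiszar_penaliser(vs, f)
assert abs(vs[vals.argmin()] - f) < 0.01       # minimum located at v = f
assert abs(vals.min()) < 1e-6                  # with minimal value 0
assert np.all(np.diff(vals, 2) > 0)            # strictly convex on the grid
```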
As a necessary condition for $u$ to be a minimiser of (6), one can compute an Euler-Lagrange equation, which in this case becomes particularly simple as no derivatives of $u$ are present in the integrand. In view of the positivity requirement for $u$, we start by a multiplicative perturbation of (6) with a test function $v$,
$\frac{\mathrm{d}}{\mathrm{d}\varepsilon} E[u (1 + \varepsilon v)] \Big|_{\varepsilon = 0} = \int_\Omega \left( 1 - \frac{f}{\mathcal{H} u} \right) \mathcal{H}(u v)\, \mathrm{d}x$ (10)
With (2) for the integral operator this becomes
$\int_\Omega \left( 1 - \frac{f(x)}{(\mathcal{H} u)(x)} \right) \int_\Omega h(x, y)\, u(y)\, v(y)\, \mathrm{d}y\, \mathrm{d}x$ (11)
and, after changing the order of integration and rewriting $h(x, y)$ into $h^*(y, x)$,
$\int_\Omega u(y)\, v(y) \int_\Omega h^*(y, x) \left( 1 - \frac{f(x)}{(\mathcal{H} u)(x)} \right) \mathrm{d}x\, \mathrm{d}y$ (12)
Requiring that this expression vanishes for all test functions $v$ yields the minimality condition
$u \cdot \mathcal{H}^*\!\left( 1 - \frac{f}{\mathcal{H} u} \right) = 0$ (13)
Because of the energy conservation property $\mathcal{H}^* 1 = 1$, one sees that (4) is a fixed point iteration for (13).
3.1 Regularisation
In the presence of noise, the functional (6) is not minimised by a smooth function $u$; in fact, the fixed point iteration shows the above-mentioned semi-convergence behaviour and diverges for $k \to \infty$. From the variational viewpoint, the functional (6) needs to be regularised. In standard Richardson-Lucy deconvolution, this regularisation is provided implicitly by stopping the iteration after a finite number of steps. The earlier the iteration is stopped, the higher is the degree of regularisation. Although this sort of regularisation is not represented in the functional (6), the variational picture is advantageous because (6) can be modified in the same flexible way as standard variational approaches. The structure of the iterative minimisation procedure is preserved throughout these modifications, which leads to good computational efficiency.
Let us first note that by limiting the growth of high-frequency signal components, regularisation has a smoothing effect that in deconvolution problems acts contrary to the intended image sharpening. It is desirable to steer this effect in such a way that it interferes as little as possible with the enhancement of salient image structures, such as edges.
Implicit regularisation by stopping, however, allows little control over the way it affects image structures. For this reason, it makes sense to introduce a variational regularisation term into the objective functional. This yields the functional
$E[u] = \int_\Omega \left( \mathcal{H} u - f - f \ln \frac{\mathcal{H} u}{f} + \alpha\, \Psi\!\left( |\nabla u|^2 \right) \right) \mathrm{d}x$ (14)
in which the Richardson-Lucy data term is complemented by a regulariser whose influence is weighted by the regularisation weight $\alpha > 0$. Concerning the penalisation function $\Psi$ in the regulariser, our discussion from Section 2 applies analogously. With the total variation regulariser given by $\Psi(|\nabla u|^2) = |\nabla u|$, the energy functional (14) corresponds to the method proposed (in space-invariant formulation) by Dey et al. [11, 12]; compare also the more recent work [29] for a similar approach.
The Euler-Lagrange equation for (14) under multiplicative perturbation is given by
$u \left( \mathcal{H}^*\!\left( 1 - \frac{f}{\mathcal{H} u} \right) - \alpha\, \mathrm{div}\!\left( \Psi'\!\left( |\nabla u|^2 \right) \nabla u \right) \right) = 0$ (15)
which combines (13) with the same multiplicative gradient for the regulariser as in (8), compare [37].
In converting this into a fixed point iteration, we evaluate the divergence expression with $u = u^k$, yielding $g := \mathrm{div}\left( \Psi'(|\nabla u^k|^2)\, \nabla u^k \right)$. Dependent on whether the factor with which the divergence term in (15) is multiplied is chosen as $u^k$ or $u^{k+1}$, the right-hand side of (4) either receives the additional summand $\alpha\, g\, u^k$, or is divided by $1 - \alpha\, g$. However, $g$ can have either sign, and a negative value in the numerator or denominator will lead to a violation of the positivity requirement. For this reason, we choose the outer factor for the divergence term as $u^k$ if $g \ge 0$, or $u^{k+1}$ if $g < 0$. Using the abbreviations $g^+ := \max(g, 0)$, $g^- := \min(g, 0)$, we can therefore write our final fixed point iteration as
$u^{k+1} = \dfrac{u^k\, \mathcal{H}^*\!\left( \frac{f}{\mathcal{H} u^k} \right) + \alpha\, g^+\, u^k}{1 - \alpha\, g^-}$ (16)
We will refer to this method as regularised RL.
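A 1-D sketch of one regularised RL step with the sign-dependent splitting described above; the TV divergence discretisation, the toy data, and all parameters are illustrative assumptions of mine:

```python
import numpy as np

def conv(a, k):
    """Periodic convolution of two equally long 1-D signals."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(k)))

def tv_divergence(u, eps=1e-2):
    """div( Psi'(|u_x|^2) u_x ) for a regularised TV penaliser, 1-D discretisation."""
    ux = np.roll(u, -1) - u
    flux = ux / np.sqrt(ux**2 + eps**2)
    return flux - np.roll(flux, 1)

def regularised_rl_step(u, f, h, h_adj, alpha):
    """One regularised RL step: the divergence term g is split by sign --
    its positive part enters the numerator, its negative part the
    denominator -- so both stay positive for any alpha >= 0."""
    g = tv_divergence(u)
    mult = conv(f / conv(u, h), h_adj)           # standard RL multiplier
    return (u * mult + alpha * np.maximum(g, 0.0) * u) / (1.0 - alpha * np.minimum(g, 0.0))

u_true = np.array([0.1, 0.1, 1.0, 2.0, 1.0, 0.1, 0.1, 0.1])
h = np.zeros(8); h[[0, 1, 7]] = [0.6, 0.2, 0.2]
h_adj = np.roll(h[::-1], 1)
f = conv(u_true, h)
u = f.copy()
for _ in range(200):
    u = regularised_rl_step(u, f, h, h_adj, alpha=0.01)
assert (u > 0).all()                             # positivity is never violated
```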
3.2 Robust Data Terms
Up to scaling and shifting, the asymmetric penaliser function (9) equals the logarithmic density of a Gamma distribution. Minimisation of the integral (6) thus corresponds to a Bayesian estimation of the sharp image assuming a Poisson distribution for the intensities, with the Gamma distribution as conjugate prior.
In variational deconvolution, it has turned out to be useful to replace quadratic data terms that mirror a Gaussian noise model by robust data terms associated with “heavy-tailed” noise distributions. Not only can the resulting model handle extreme noise, but it can also cope with imprecisions in the blur model. Following this idea, we replace the data term of (14) by one that is adapted to a broader distribution. To retain the structure of the fixed point iterations (4) and (16), we keep the integrand (9) in the data term, but apply to it a penaliser function $\Phi$ that grows less than linearly.
Our modified functional therefore reads
$E[u] = \int_\Omega \left( \Phi\!\left( \mathcal{H} u - f - f \ln \frac{\mathcal{H} u}{f} \right) + \alpha\, \Psi\!\left( |\nabla u|^2 \right) \right) \mathrm{d}x$ (17)
3.3 Multi-Channel Images
Assume now that the blurred image is a multi-channel image with a channel index set $C$, e.g. an RGB colour image, whose channels are uniformly blurred, i.e. the PSF is equal for all channels. Replacing the expressions $\mathcal{H} u$ and $f$ in the arguments of $\Phi$ and $\Phi'$ with their sums over image channels, $\sum_{c \in C} \mathcal{H} u_c$ and $\sum_{c \in C} f_c$, we obtain as multi-channel analogue of (19) the functional
$E[(u_c)_{c \in C}] = \int_\Omega \left( \Phi\!\left( \sum_{c \in C} \mathcal{H} u_c - \sum_{c \in C} f_c - \Big( \sum_{c \in C} f_c \Big) \ln \frac{\sum_{c \in C} \mathcal{H} u_c}{\sum_{c \in C} f_c} \right) + \alpha \sum_{c \in C} \Psi\!\left( |\nabla u_c|^2 \right) \right) \mathrm{d}x$ (20)
This yields as the iteration rule for multi-channel RRRL
$u_c^{k+1} = \dfrac{u_c^k\, \mathcal{H}^*\!\left( \Phi'(s^k)\, \frac{f_c}{\mathcal{H} u_c^k} \right) + \alpha\, g_c^+\, u_c^k}{\mathcal{H}^*\!\left( \Phi'(s^k) \right) - \alpha\, g_c^-}$ (21)

where $s^k$ denotes the channel-summed argument of $\Phi$ from (20) evaluated at $u^k$, and $g_c := \mathrm{div}\left( \Psi'(|\nabla u_c^k|^2)\, \nabla u_c^k \right)$ with $g_c^+ := \max(g_c, 0)$, $g_c^- := \min(g_c, 0)$.
The same procedure works for the non-robust and/or non-regularised RL variants. Note that in the case of standard RL this boils down to channel-wise application.
3.4 Stability
The regularised RL model by Dey et al. [12] places the regularisation only in the denominator of the fixed point iteration rule. This imposes a tight bound on $\alpha$: as soon as $\alpha\, g$ exceeds $1$, positivity is violated, and the iteration becomes unstable. In our iteration rule, the sign-dependent distribution of the divergence expressions to the numerator and denominator prevents positivity violations, thus enabling larger values of $\alpha$.
Nevertheless, for substantially larger values of $\alpha$, instabilities are observed which can be attributed to amplified perturbations of high spatial frequency in almost homogeneous image regions. In [26], therefore, a modification of the fixed point iteration to optimise (17) has been proposed that guarantees stability for arbitrary $\alpha$. A detailed analysis of stability bounds on $\alpha$ in the iteration (19) will be given in a forthcoming publication.
4 Experiments
We now turn to evaluating the performance of our newly developed deconvolution methods, and compare them to existing approaches. Methods chosen for comparison include classical Richardson-Lucy deconvolution, variational gradient descent methods, and the methods from [21, 34] which are advocated for efficient deblurring in several recent papers, see e.g. [16].
4.1 Deconvolution of Synthetic Data
We start our tests on grey-value images that are synthetically blurred by convolution. Since in this case the correct sharp image is known, we can rate restoration quality by the signal-to-noise ratio
$\mathrm{SNR}(\tilde{u}, u) = 10 \log_{10} \dfrac{\mathrm{Var}(u)}{\frac{1}{|\Omega|} \int_\Omega (\tilde{u} - u)^2\, \mathrm{d}x}$ (22)
Here, $u$ and $\tilde{u}$ are the original sharp image and the restored image, respectively. By $\mathrm{Var}(u)$ we denote the variance of the image $u$. One should be aware that SNR measurements often do not capture the visual quality of deblurred images very well. The parameters in our experiments are optimised primarily for visual quality, not for SNR. We remark also that in synthetically blurring images, we use convolution via the Fourier domain, which involves treating the image domain as periodic. In contrast, we use convolution in the spatial domain in the variational deconvolution procedures as well as in the Richardson-Lucy method and its modifications. This discrepancy in the convolution procedure and boundary treatment is on purpose: it helps to prevent “inverse crimes” [9] that could unduly embellish results. Moreover, our implementations can thereby easily be adapted to spatially variant blurs.
(The methods from [21, 34], in contrast, require computations via Fourier transforms by design.)
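The SNR measure can be computed as follows; the exact normalisation in (22) is assumed here to be the common variance-over-mean-squared-error definition in decibels:

```python
import numpy as np

def snr(u_orig, u_restored):
    """Signal-to-noise ratio in dB: variance of the ground truth divided
    by the mean squared restoration error (an assumed, common definition)."""
    mse = np.mean((u_restored - u_orig)**2)
    return 10.0 * np.log10(np.var(u_orig) / mse)

u = np.array([0.0, 1.0, 2.0, 3.0])
noisy = u + np.array([0.1, -0.1, 0.1, -0.1])
# a smaller restoration error yields a higher SNR
assert snr(u, noisy) > snr(u, u + 0.5)
```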
First series of experiments.
In this series of experiments (Figure 1), we blur the cameraman image, Figure 1(a), by an irregularly shaped point-spread function of moderate size and apply impulsive (uniform) noise (b). The image degraded in this way can be deblurred to some extent by standard RL, (c), but noise is severely amplified and dominates the result when the iteration count is increased for further sharpening. Regularised RL visibly reduces the noise effect, see frame (d).
Turning to classical variational approaches, we consider in frame (e) the iterative method from [21]. Minimising an energy functional with a quadratic data term and an edge-enhancing regulariser, it still suffers from severe noise effects in our example. The closely related method from [34] as well as other TV deconvolution approaches lead to similar results.
Robust variational deconvolution achieves a significantly better restoration, see frame (f), where the method from [40] without positivity constraint is used, and frame (g), with positivity constraint as in [37] and Section 3. Visually, both variational approaches lead to a comparable restoration quality. In this example, the positivity constraint brings no improvement, due to the fact that artifact suppression by the positivity constraint takes effect mainly in dark image regions, which play only a minor role in the cameraman image.
Using robust RL without regularisation as in frame (h) reduces the noise to an extent comparable to the regularised method in (d). With more iterations, however, the noise still becomes dominant, thus limiting the possible deconvolution quality. Robust and regularised RL allows fairly good deblurring while only small rudiments of noise remain, see Figure 1(i). Even if the iteration count is drastically increased, such that the implicit regularisation by stopping has only little effect, the explicit regularisation remains effective (k) and keeps the noise level low. Indeed, the reconstruction quality comes close, both visually and in terms of SNR, to that achieved by state-of-the-art robust variational deconvolution as shown before in (f) and (g).
Second series of experiments.
Encouraged by the previous findings, we increase blur and noise in our second series of experiments (Figures 2–3). Under the influence of a drastically larger simulated motion blur and doubled noise intensity, see Figure 2(a), classic RL no longer gives usable results (c). The same is true for the method from [21], see Subfigure (b). Regularised RL and robust RL can again cope better with the noise, but their outcomes are far from satisfactory, see Figures 2(d) and 3(a). A significant improvement is obtained with robust and regularised RL, Figure 3(b), as well as with robust variational deconvolution without constraints (c) or with positivity constraint (d). Although the restoration quality is still imperfect, the three last-mentioned methods are on a par in terms of visual quality and SNR.
Table 1.

Method       | Iterations | Run time (s) | Cost factor w.r.t. RL
Standard RL  | 20         | 8.9          | 1.0
Variational  | 1500       | 792          | 89.0
RRRL (TV)    | 80         | 59.6         | 6.7
RRRL (PM)    | 300        | 221          | 24.8
In order to achieve its high restoration quality, RRRL required in both synthetic experiments significantly higher computational effort than standard RL. However, run times still remained well below those for the robust variational model from the literature. This is caused by the favourable structure of the minimality condition and the fixed point iteration obtained from it. In contrast, minimisation of the classical variational model becomes very slow when getting close to the optimum, thus requiring many more iterations.
4.2 Deconvolution of Real-World Images
Our last experiment (Figures 4–5) is based on real-world data. The colour photograph shown in Figure 4(a) was blurred during acquisition with an unknown point-spread function that is inferred approximately from the shape of a point light source. For restoration, we use the multi-channel versions of our methods.
Restoration by standard RL achieves a decent acuity at moderate computational cost, see the detail view, Figure 5(a). Increasing the number of iterations quickly leads to ringing artifacts that are visible as shadows in the vicinity of all high-contrast image structures, see Figure 5(b).
Variational deconvolution with a robust data term and Perona-Malik regulariser allows a visible improvement in acuity over RL deconvolution while reducing artifacts, see the detail view in Figure 5(c). Using the positivity-constrained gradient descent (8) brings about a further significant improvement, see Figure 5(f). Due to the better suppression of ringing artifacts, the regularisation weight $\alpha$ could be reduced by half here; in contrast, unconstrained variational deconvolution with the same reduced weight creates much stronger artifacts, see Figure 5(d). Imposing the constraint but retaining the larger weight, see Figure 5(e), already improves acuity but still smoothes out more fine details than in Figure 5(f).
The excellent restoration quality of variational deconvolution, however, comes at the cost of the significantly increased computation time needed to approximate the steady state. Robust and regularised Richardson-Lucy deconvolution, as shown in the last rows of Figure 5, provides an attractive compromise between standard RL and the variational gradient descent. Figure 5(g) shows an RRRL result with the TV regulariser, which can be computed fairly fast. With the Perona-Malik regulariser instead, see Figures 4(b) and 5(h), more iterations are required in order for the edge-enhancing properties of the regulariser to pay off, but the computation time is still lower than with the gradient descent algorithm, compare also Table 1. In terms of restoration quality, both RRRL results range between the variational deconvolution without and with constraints. We remark that the test image, consisting of large dark regions with few highlights, makes the positivity constraint particularly valuable.
5 Conclusions
In this paper, we have investigated Richardson-Lucy deconvolution from the variational viewpoint. Based on the observation [30] that the RL method can be understood as a fixed point iteration associated to the minimisation of the information divergence [10], it is embedded into the framework of variational methods. This in turn allows us to apply to it the modifications that have made variational deconvolution the flexible and high-quality deconvolution tool that it is. Besides regularisation, which has been proposed before in [11], we have introduced robust data terms into the model. As a result, we have obtained a novel robust and regularised Richardson-Lucy deconvolution method that competes in quality with state-of-the-art variational methods, while in terms of numerical efficiency it moves considerably closer to Richardson-Lucy deconvolution.
Acknowledgement. Part of the work on this paper was done while the author was affiliated with MIA group, Saarland University, Saarbrücken, Germany, see [35].
References
 [1] M. Backes, T. Chen, M. Dürmuth, H. Lensch, and M. Welk. Tempest in a teapot: compromising reflections revisited. In Proc. 30th IEEE Symposium on Security and Privacy, pages 315–327, Oakland, California, USA, 2009.

[2] L. Bar, A. Brook, N. Sochen, and N. Kiryati. Color image deblurring with impulsive noise. In N. Paragios, O. Faugeras, T. Chan, and C. Schnörr, editors, Variational and Level Set Methods in Computer Vision, volume 3752 of Lecture Notes in Computer Science, pages 49–60. Springer, Berlin, 2005.
[3] L. Bar, N. Sochen, and N. Kiryati. Variational pairing of image segmentation and blind restoration. In T. Pajdla and J. Matas, editors, Computer Vision – ECCV 2004, Part II, volume 3022 of Lecture Notes in Computer Science, pages 166–177. Springer, Berlin, 2004.
 [4] L. Bar, N. Sochen, and N. Kiryati. Image deblurring in the presence of saltandpepper noise. In R. Kimmel, N. Sochen, and J. Weickert, editors, Scale Space and PDE Methods in Computer Vision, volume 3459 of Lecture Notes in Computer Science, pages 107–118. Springer, Berlin, 2005.
 [5] L. Bar, N. Sochen, and N. Kiryati. Restoration of images with piecewise spacevariant blur. In F. Sgallari, F. Murli, and N. Paragios, editors, Scale Space and Variational Methods in Computer Vision, volume 4485 of Lecture Notes in Computer Science, pages 533–544. Springer, Berlin, 2007.
[6] M. Bertero, T. A. Poggio, and V. Torre. Ill-posed problems in early vision. Proceedings of the IEEE, 76(8):869–889, Aug. 1988.
[7] E. Bratsolis and M. Sigelle. A spatial regularization method preserving local photometry for Richardson-Lucy restoration. Astronomy and Astrophysics, 375(3):1120–1128, 2001.
 [8] T. F. Chan and C. K. Wong. Total variation blind deconvolution. IEEE Transactions on Image Processing, 7:370–375, 1998.
 [9] D. Colton and R. Kress. Inverse Acoustic and Electromagnetic Scattering Theory. Springer, Berlin, 1992.
[10] I. Csiszár. Why least squares and maximum entropy? An axiomatic approach to inference for linear inverse problems. Annals of Statistics, 19(4):2032–2066, 1991.
[11] N. Dey, L. Blanc-Féraud, C. Zimmer, Z. Kam, J.-C. Olivo-Marin, and J. Zerubia. A deconvolution method for confocal microscopy with total variation regularization. In Proc. IEEE International Symposium on Biomedical Imaging (ISBI), April 2004.
[12] N. Dey, L. Blanc-Féraud, C. Zimmer, P. Roux, Z. Kam, J.-C. Olivo-Marin, and J. Zerubia. Richardson-Lucy algorithm with total variation regularization for 3D confocal microscope deconvolution. Microscopy Research and Technique, 69:260–266, 2006.
 [13] A. Elhayek, M. Welk, and J. Weickert. Simultaneous interpolation and deconvolution model for the 3D reconstruction of cell images. In R. Mester and M. Felsberg, editors, Pattern Recognition, volume 6835 of Lecture Notes in Computer Science, pages 316–325. Springer, Berlin, 2011.
 [14] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721–741, 1984.
 [15] G. Gerig, O. Kübler, R. Kikinis, and F. A. Jolesz. Nonlinear anisotropic filtering of MRI data. IEEE Transactions on Medical Imaging, 11:221–232, 1992.
 [16] M. Hirsch, C. J. Schuler, S. Harmeling, and B. Schölkopf. Fast removal of non-uniform camera shake. In Proc. IEEE International Conference on Computer Vision, pages 463–470, Barcelona, 2011.
 [17] P. J. Huber. Robust Statistics. Wiley, New York, 1981.
 [18] M. Jung, X. Bresson, T. F. Chan, and L. A. Vese. Nonlocal Mumford–Shah regularizers for color image restoration. IEEE Transactions on Image Processing, 20(6):1583–1598, 2011.
 [19] M. Jung, A. Marquina, and L. A. Vese. Variational multiframe restoration of images degraded by noisy (stochastic) blur kernels. Journal of Computational and Applied Mathematics, 240(1):123–134, 2013.
 [20] M. Jung and L. A. Vese. Nonlocal variational image deblurring models in the presence of Gaussian or impulse noise. In X.-C. Tai, K. Mørken, M. Lysaker, and K.-A. Lie, editors, Scale Space and Variational Methods in Computer Vision, volume 5567 of Lecture Notes in Computer Science, pages 402–413. Springer, Berlin, 2009.
 [21] D. Krishnan and R. Fergus. Fast image deconvolution using hyper-Laplacian priors. In Advances in Neural Information Processing Systems, pages 1033–1041, 2009.
 [22] L. B. Lucy. An iterative technique for the rectification of observed distributions. The Astronomical Journal, 79(6):745–754, June 1974.
 [23] J. G. Nagy and Z. Strakoš. Enforcing nonnegativity in image reconstruction algorithms. In D. C. Wilson, H. D. Tagare, F. L. Bookstein, F. J. Preteux, and E. R. Dougherty, editors, Advanced Signal Processing Algorithms, Architectures, and Implementations, volume 4121 of Proceedings of SPIE, pages 182–190. SPIE Press, Bellingham, 2000.
 [24] S. Osher and L. Rudin. Total variation based image restoration with free local constraints. In Proc. 1994 IEEE International Conference on Image Processing, pages 31–35, Austin, Texas, 1994.
 [25] P. Perona and J. Malik. Scale space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12:629–639, 1990.
 [26] N. Persch, A. Elhayek, M. Welk, A. Bruhn, S. Grewenig, K. Böse, A. Kraegeloh, and J. Weickert. Enhancing 3D cell structures in confocal and STED microscopy: a joint model for interpolation, deblurring and anisotropic smoothing. Measurement Science and Technology, in press, 2013.
 [27] W. H. Richardson. Bayesian-based iterative method of image restoration. Journal of the Optical Society of America, 62(1):55–59, 1972.
 [28] L. I. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D, 60:259–268, 1992.
 [29] A. Sawatzky and M. Burger. Edge-preserving regularization for the deconvolution of biological images in nanoscopy. In G. Psihoyios, T. Simos, and C. Tsitouras, editors, Proc. 8th International Conference of Numerical Analysis and Applied Mathematics, volume 1281 of Conference Proceedings, pages 1983–1986. AIP, September 2010.
 [30] D. Snyder, T. J. Schulz, and J. A. O'Sullivan. Deblurring subject to nonnegativity constraints. IEEE Transactions on Signal Processing, 40(5):1143–1150, 1992.
 [31] A. N. Tikhonov. Solution of incorrectly formulated problems and the regularization method. Soviet Mathematics Doklady, 4:1035–1038, 1963.
 [32] P. H. van Cittert. Zum Einfluß der Spaltbreite auf die Intensitätsverteilung in Spektrallinien. II. Zeitschrift für Physik, 65:298–308, 1933.
 [33] C. R. Vogel and M. E. Oman. Fast, robust total variation-based reconstruction of noisy, blurred images. IEEE Transactions on Image Processing, 7:813–824, 1998.
 [34] Y. Wang, J. Yang, W. Yin, and Y. Zhang. A new alternating minimization algorithm for total variation image reconstruction. SIAM Journal on Imaging Sciences, 1(3):248–272, 2008.
 [35] M. Welk. Robust variational approaches to positivity-constrained image deconvolution. Technical Report 261, Department of Mathematics, Saarland University, Saarbrücken, Germany, Mar. 2010.
 [36] M. Welk and M. Erler. Algorithmic optimisations for iterative deconvolution methods. In J. Piater and A. Rodríguez-Sánchez, editors, Proceedings of the 37th Annual Workshop of the Austrian Association for Pattern Recognition (ÖAGM/AAPR), 2013. arXiv:1304.7211 [cs.CV], 2013.
 [37] M. Welk and J. G. Nagy. Variational deconvolution of multi-channel images with inequality constraints. In J. Martí, J. M. Benedí, A. M. Mendonça, and J. Serrat, editors, Pattern Recognition and Image Analysis, volume 4477 of Lecture Notes in Computer Science, pages 386–393. Springer, Berlin, 2007.
 [38] M. Welk, P. Raudaschl, T. Schwarzbauer, M. Erler, and M. Läuter. Fast and robust linear motion deblurring. Signal, Image and Video Processing, 2013, DOI 10.1007/s11760-013-0563-x (online first).

 [39] M. Welk, D. Theis, T. Brox, and J. Weickert. PDE-based deconvolution with forward-backward diffusivities and diffusion tensors. In R. Kimmel, N. Sochen, and J. Weickert, editors, Scale Space and PDE Methods in Computer Vision, volume 3459 of Lecture Notes in Computer Science, pages 585–597. Springer, Berlin, 2005.
 [40] M. Welk, D. Theis, and J. Weickert. Variational deblurring of images with uncertain and spatially variant blurs. In W. Kropatsch, R. Sablatnig, and A. Hanbury, editors, Pattern Recognition, volume 3663 of Lecture Notes in Computer Science, pages 485–492. Springer, Berlin, 2005.
 [41] E. T. Whittaker. On a new method of graduation. Proceedings of the Edinburgh Mathematical Society, 41:63–75, 1923.
 [42] N. Wiener. Extrapolation, Interpolation and Smoothing of Stationary Time Series with Engineering Applications. MIT Press, Cambridge, MA, 1949.
 [43] Y.-L. You and M. Kaveh. Anisotropic blind image restoration. In Proc. 1996 IEEE International Conference on Image Processing, volume 2, pages 461–464, Lausanne, Switzerland, Sept. 1996.
 [44] M. E. Zervakis, A. K. Katsaggelos, and T. M. Kwon. A class of robust entropic functionals for image restoration. IEEE Transactions on Image Processing, 4(6):752–773, June 1995.