On Convergent Finite Difference Schemes for Variational - PDE Based Image Processing

We study an adaptive anisotropic Huber functional based image restoration scheme. By using a combination of L2-L1 regularization functions, an adaptive Huber functional based energy minimization model provides denoising with edge preservation in noisy digital images. We study a convergent finite difference scheme based on continuous piecewise linear functions and use a variable splitting scheme, namely the split Bregman method, to obtain the discrete minimizer. Experimental results are given in image denoising, and comparisons with additive operator splitting, dual fixed point, and projected gradient schemes illustrate that the best convergence rates are obtained for our algorithm.





1 Introduction

Variational and partial differential equation (PDE) based schemes are popular in image and video processing problems. In particular, in image restoration, adaptive edge preserving smoothing can be achieved by carefully choosing regularization functions or, equivalently, diffusion coefficients. This has been the object of study for the last three decades; we mention the seminal work of Perona and Malik [32] as the starting point of PDE based image processing, and the connections to variational formulations and robust statistics have also been considered later [3, 17, 51]. We refer to the recent monographs [1, 44] for an overview of these methods.

Based on the smoothness or regularity assumptions on the true image, various regularization functions can be used. The Tikhonov regularization function [49], which is based on quadratic growth, i.e., L2-gradient minimization, suppresses gradients and thus is effective in removing noise. Unfortunately, gradients can also represent edges which are important for further pattern recognition tasks. To avoid this over-smoothing, total variation or L1-gradient minimization, widely known as the total variation (TV) regularization model, has been advocated [43]. Recently, there are efforts to combine both the L2 and L1 based functionals into one common minimization problem, such as the Huber function [4, 36] and inf-sup convolution [11, 6]. Adaptive versions of the variational - PDE models are gaining popularity [15, 16, 38, 39, 34, 40] and can give better restoration results than non-adaptive schemes in terms of edge preservation. Discrete approximations to the continuous variational - PDE schemes from image processing using finite difference and finite element based schemes have been studied [7, 18, 52, 13, 8, 47, 27, 53, 54]. Convergence of finite differences for various PDEs is a classic area within numerical analysis (Semen Aronovich Geršgorin's work [21] in 1930 was the first paper to treat the important topic of the convergence of finite-difference approximations to the solution of Laplace-type equations) and is still an active area of research in application areas such as image processing [10, 30, 5, 55].

In this paper we consider convergent finite difference schemes for an adaptive Huber type functional based energy minimization model. We provide a comparison with other convex variational regularization functions and use an edge indicator function guided regularization model. By using piecewise continuous linear functions along with the discrete energy we study the convergence of the discrete minimizer to the continuous solution. To solve the corresponding discrete convex optimization problem various solvers exist, such as dual minimization [9], primal-dual [12], alternating direction method of multipliers, and operator splitting [45]. Here we use the split Bregman method studied by Goldstein and Osher [25, 24] for computing the discrete energy minimizer, as it is the fastest in terms of computational complexity, and then prove a convergence result for the class of weakly regular images. We utilize an image adaptive inverse gradient based regularization parameter for better denoising without destroying salient edges. Experimental results on real and synthetic noisy images are given to highlight the noise removal property of the proposed model. A comparison with different discrete optimization models is undertaken, and further visualizations are provided to support the split Bregman based solution.

The rest of the paper is organized as follows. Section 2 provides the background on an adaptive Huber variational - PDE model along with some basic results on bounded variation space. Section 3 details a convergent numerical scheme for the variational scheme. Section 4 provides comparative numerical results on noisy images and Section 5 concludes the paper.

2 Continuous variational - PDE model

(a) Regularizers
(b) Diffusivities
Figure 1: Regularization and diffusion functions. (a) The regularization function (6) lies between the quadratic and linear curves depending on the parameter. (b) The corresponding diffusion functions.

Let the given input be a noisy image; we assume additive Gaussian noise. We consider the following continuous variational-PDE scheme for image restoration. (Note that in the space of bounded variation functions the gradient is in fact a Radon measure and is understood in the sense of distributions; the equality with the usual gradient holds for sufficiently regular functions.)


The corresponding PDE can be written in terms of the Euler-Lagrange equation,


The adaptive discontinuity function is chosen to be an even function. Note that the PDE in Eqn. (2) is a generalized Perona-Malik equation [32],


where the diffusion function is related to the regularization function. The diffusion coefficient decides how much smoothing occurs and helps in rejecting noisy pixels (outliers). Various choices exist in the literature, see [19, 20, 31, 14] and [48] for a recent review. Note that under the Gaussian noise assumption the data fidelity term (also called the likelihood term) in Eqn. (1) is quadratic and hence convex. Thus, if the regularization term is also convex then we are guaranteed well-posedness of the energy minimization scheme given in (1). There are regularization functions which are non-convex [20, 31, 4, 42], behaving quadratically near the origin and asymptotically linear at infinity. This can cause unstable behavior as the scheme can be plagued with local minima. In this paper, we concentrate on convex regularization functions and study a stable and convergent scheme.
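For concreteness, the two classical Perona-Malik diffusivities from the cited literature can be sketched in Python/NumPy. This is a minimal illustration with an assumed contrast parameter K; the paper's own diffusion coefficient is derived from its regularization function and may differ.

```python
import numpy as np

def pm_exponential(s, K):
    """Perona-Malik exponential diffusivity g(s) = exp(-(s/K)^2)."""
    return np.exp(-(s / K) ** 2)

def pm_rational(s, K):
    """Perona-Malik rational diffusivity g(s) = 1 / (1 + (s/K)^2)."""
    return 1.0 / (1.0 + (s / K) ** 2)
```

Both diffusivities equal one at zero gradient (full smoothing in flat regions) and decay for large gradients, which is exactly the outlier-rejection behavior described above.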

Remark 1.

There are other ways to incorporate adaptive weights inside the regularization function or, equivalently, the diffusion coefficient, for example by weighting the total variation term with an edge indicator, as in adaptive total variation. The main difference lies in the way the regularization function is weighted anisotropically, and the final results change according to the formulation utilized. The main convergence result in Section 3 holds true for these types of adaptive functions as well.

Two of the most obvious choices for the regularization function are the Tikhonov (L2-gradient) and the total variation (TV, L1-gradient) functions, see Figure 1(a). Both of these functions have advantages and drawbacks, as illustrated by the synthetic noisy step image restoration example given in Fig. 2. To further highlight the smoothing properties we show in Figure 3 a line taken across the image and the corresponding results. (Evolution of the synthetic edge image mesh under different schemes is available as movies in the supplementary material.) The Tikhonov regularization, though effective in removing noise, penalizes high gradients and hence can smooth the step edge excessively, as can be seen in the result of Fig. 2(c). On the other hand, the TV regularization better preserves edges, but some additional regions in the homogeneous parts can be enhanced, which is known as the 'staircasing' artifact, see Fig. 2(d). Hence, a robust regularizer is required for effective smoothing for denoising while edges are preserved. For example, motivated by robust statistics, we consider the classical M-estimators, Huber's min-max function [26] and Tukey's bisquare function [50], which are given by,


respectively. Note that the threshold parameter determines the region of transition between low and high gradients, thereby providing a separation of homogeneous (flat) regions and edges (jumps). To study the fine properties of the Huber and Tukey regularization functions on the final restoration result, we consider a simple 1D signal which consists of a sharp peak-like edge and ramp edges along with flat regions.
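The displayed equations (4) and (5) were lost in extraction; the standard textbook forms of these two M-estimators can be sketched as follows (the constants here are the usual ones from robust statistics, not necessarily the paper's normalization):

```python
import numpy as np

def huber(s, delta):
    """Huber min-max function: quadratic for |s| <= delta, linear beyond."""
    s = np.asarray(s, dtype=float)
    quad = 0.5 * s ** 2
    lin = delta * np.abs(s) - 0.5 * delta ** 2
    return np.where(np.abs(s) <= delta, quad, lin)

def tukey(s, delta):
    """Tukey bisquare function: smooth inside |s| <= delta, constant beyond."""
    s = np.asarray(s, dtype=float)
    inside = (delta ** 2 / 6.0) * (1.0 - (1.0 - (s / delta) ** 2) ** 3)
    return np.where(np.abs(s) <= delta, inside, delta ** 2 / 6.0)
```

The convex Huber function keeps growing linearly for large gradients, whereas the non-convex Tukey function saturates at a constant, which matches the behavior discussed in the bullets below.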

(a) Original,
(b) Noisy,
(c) Tikhonov
(d) TV
(e) Our
Figure 2: Synthetic image showing the effects of the choice of regularization function on the final restoration results. The L2-gradient scheme (Tikhonov) over-smoothes the edge, whereas the L1-gradient scheme (TV), though edge-preserving, can introduce oscillations known as staircasing in homogeneous regions. An adaptive combination via (6) balances smoothing with edge preservation.
Figure 3: One dimensional signal (line) taken across the middle of synthetic image in Figure 2. The proposed adaptive scheme provides smoothing with edge preservation when compared with Tikhonov (over-smoothing) and TV (staircasing) regularization approaches.
(a) Original & noisy signals
(b) Huber Restorations
Figure 4: Dependence on the threshold parameter for the Huber function given in (4). (a) Original signal with an impulse edge and a step edge, with additive Gaussian noise added (unit variance). (b) Restoration using the variational minimization (1) with the Huber function (4) for a small and a large threshold value.
(a) Original signals
(b) Noisy signals
(c) Tukey Restorations
Figure 5: Instability when using the Tukey function given in (5). (a) Two slightly perturbed signals with different magnitudes (b) The signals with additive Gaussian noise (unit variance) (c) Restoration using the minimization (1) with the Tukey function (5).
  • The Huber function (4) is convex and has a linear response to noisy pixels (outliers), and it strongly depends on the threshold parameter for this. Figure 4 shows how this dependence strongly affects the final restoration on a 1-D noisy signal with two types of discontinuities given in Fig. 4(a). If the threshold is too small much of the noise remains and there is no smoothing, whereas if it is too big then smoothing occurs indiscriminately (Fig. 4(b)) and edges are blurred as in the quadratic regularization (equivalent to Gaussian filtering) case. From this we can conclude that setting a small value for the threshold captures edges as well as outliers corresponding to noise. Since we do not know a priori when and where jumps (edges) occur, and the input image is corrupted with additive noise, there is a need to include an image adaptive measurement for choosing the threshold.

  • On the other hand, the Tukey function (5) is non-convex and gives a constant response to outliers (Fig. 1(a)); this can be a drawback in a scenario where the edges and outliers have the same high frequency content. To illustrate, we consider the same 1-D signal but perturb it slightly to obtain another 1-D signal copy, see Fig. 5(a). The two original signals are of the same type but of different amplitude. After adding Gaussian noise to both signals (Fig. 5(b)) we use the Tukey function (5) based minimization scheme (1) and obtain the results in Fig. 5(c). This shows that even a slight perturbation of the input signal can produce a very different output due to the instability associated with the non-convex nature of the regularization function.

Motivated by the above arguments, and to avoid both the over- and under-smoothing and local minima issues, in this paper we use the following regularization function [36],


where the free parameter is chosen so as to make the function lie between the quadratic case of Tikhonov and Huber's min-max function, see Fig. 1(a). This also makes the function lie between the quadratic and linear growth regimes while remaining strictly convex. Thus the energy minimization in (1) is well posed. For completeness we outline the theorem here. We denote by BV the set of all bounded variation functions [22] on the image domain, which is usually a rectangle in the plane.

Theorem 1 (Well-posedness).

Let the initial image be given. If the regularization function is strictly convex then the energy minimization problem in (1) is well posed in the space of bounded variation functions. Moreover, the maximum and minimum principles hold true.


From (1), the first term is strictly convex. Thus if the regularization function is also strictly convex then the well-posedness and the maximum-minimum principle follow from [36]. ∎

Remark 2.

Note that in a limiting regime of the parameters in (6) we approach Tukey's bisquare function continuously, but we lose convexity, see Fig. 1(a). Hence we stay in the strictly convex regime and use an adaptive selection of the threshold parameter, see Section 4.1.

Further, to reduce the dependence on the threshold we use the following adaptive edge indicator function,


where the smoothing is performed by convolution with a Gaussian kernel of a given width. Theorem 1 guarantees that the regularization function in (6) with the continuous variational minimization problem (1) is well-posed in the sense of Hadamard. Note that the data fidelity or Lagrangian parameter in (1) can be made adaptive, so that when we use an iterative scheme as in Section 3 it is made smaller as the iterations increase. This helps in reducing the regularization as the noise level decreases. An adaptive way to select this parameter in the numerical simulations is given in Section 4. As we will see in the denoising examples, this allows our scheme to adjust according to the image information at the current iteration and gives better restoration results overall. If the parameter is data adaptive (see Eqn. (1)), then the above theorem holds true if it is bounded and continuous; in our case this is true, see Eqn. (19) below.
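Since Eqn. (7) itself was lost in extraction, a common form of such a Gaussian-smoothed edge indicator can be sketched as follows; the rational form, the helper names, and the parameters sigma and K below are our assumptions, not necessarily the paper's exact formula:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Normalized 1-D Gaussian kernel of the given width."""
    if radius is None:
        radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def smooth(f, sigma):
    """Separable Gaussian smoothing of a 2-D image (zero padding at borders)."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, f)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)
    return out

def edge_indicator(f, sigma=1.0, K=0.1):
    """Edge indicator g = 1 / (1 + |grad(G_sigma * f)|^2 / K^2), in (0, 1]."""
    fs = smooth(f, sigma)
    gy, gx = np.gradient(fs)
    return 1.0 / (1.0 + (gx ** 2 + gy ** 2) / K ** 2)
```

The indicator is close to one in flat regions (full regularization) and close to zero across strong edges, which is the behavior the adaptive weighting above relies on.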

3 A convergent finite difference scheme

3.1 Discretized functional

The digital image has a natural rectangular grid and without loss of generality we assume a square image. The domain is then divided into square subdomains of uniform side length, with vertices at the grid points. We use the following finite difference approximations for the gradients,


and similarly for the vertical direction, to obtain the forward and backward discrete gradients respectively. Then the discretized functional over the grid is written as,


where the discrete operator is applied to the input image. The discrete regularizer in the above equations is,


with the discrete version of the edge indicator function (7) computed using the discrete gradient and a discrete window based Gaussian function.
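The forward and backward difference operators of Eqns. (8) can be sketched as follows. This is a minimal NumPy illustration with replicate (homogeneous Neumann) boundary handling, which is one common convention; the paper's exact boundary treatment is not recoverable from the text:

```python
import numpy as np

def forward_diff(u, axis):
    """Forward difference D+ u with replicate (Neumann) boundary."""
    last = np.take(u, [-1], axis=axis)
    return np.diff(u, axis=axis, append=last)

def backward_diff(u, axis):
    """Backward difference D- u with replicate (Neumann) boundary."""
    first = np.take(u, [0], axis=axis)
    return np.diff(u, axis=axis, prepend=first)

def grad_mag(u):
    """Magnitude of the forward discrete gradient."""
    return np.sqrt(forward_diff(u, 1) ** 2 + forward_diff(u, 0) ** 2)
```

With the replicate boundary the difference vanishes at the last row/column, so a constant image has zero discrete gradient, consistent with a Neumann boundary condition.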

3.2 Split Bregman method

We recall the split Bregman method used to solve the discrete energy functional in Eqn. (9). We sketch the main parts of the algorithm here and refer to [24] and [25] for a general treatment of the split Bregman approach. This is a very fast scheme, faster than other numerical schemes reported in the literature, as we will see for example in the image denoising tasks of Section 4. An auxiliary variable is introduced in the model with a quadratic penalty function. That is, to solve the TV minimization,


we consider the following unconstrained minimization problem,


The above problem is solved by using an alternating minimization scheme, which includes the addition of a Bregman vector inside the quadratic functional. That is, the algorithm reduces to the following sequence of unconstrained problems,


First, a minimization with respect to the primal variable is performed using a Gauss-Seidel method. Next, a minimization with respect to the auxiliary variable is done using a shrinkage operation. Finally, the Bregman vector is updated using (14). The following steps summarize the algorithm,

  1. Initialize the variables.

  2. For each outer iteration until convergence:

    1. Compute in turn the Gauss-Seidel primal update, the shrinkage step, and the Bregman vector update (14).

The shrinkage operation is given by,

It can be shown that this algorithm converges quickly even when an approximate solution is used in Eqn. (13). The split Bregman algorithm for solving our functional (9) can be derived similarly. Note that in our case the shrinkage becomes


where the adaptive edge indicator function given in Eqn. (7) enters as a spatial weight.
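The isotropic vector shrinkage used in the split Bregman iteration has the standard closed form shrink(d, gamma) = (d/|d|) * max(|d| - gamma, 0); a sketch, with gamma allowed to be a spatially varying array as the adaptive edge indicator weighting suggests:

```python
import numpy as np

def shrink(dx, dy, gamma):
    """Isotropic (coupled) soft shrinkage of the vector field (dx, dy):
    returns (d / |d|) * max(|d| - gamma, 0) pointwise in space.
    gamma may be a scalar or an array (spatially varying threshold)."""
    mag = np.sqrt(dx ** 2 + dy ** 2)
    scale = np.maximum(mag - gamma, 0.0) / np.maximum(mag, 1e-12)
    return scale * dx, scale * dy
```

Vectors with magnitude below the threshold are set to zero, while larger vectors are shortened by the threshold without changing their direction.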

3.3 Convergence

The digital image is interpolated using continuous piecewise linear functions on the uniform quadrangulation of the domain. Similarly, we define a piecewise constant extension and the sampling operator.

To prove the convergence of the interpolated function to the continuous solution we first introduce some basic notation. In what follows we use the standard notation for Lebesgue spaces and for functions of bounded variation. We define the translation of a set and of a function by a given vector in the usual way. Let us recall the definition of the Lp-modulus of continuity of a function. Note that the modulus of continuity gives a quantitative account of the continuity property of functions.

Definition 1 (Weakly regular functions).

We say that a function is weakly regular (Lipschitz of fractional order) if its Lp-modulus of continuity is bounded by a constant multiple of a power of the translation length.
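In standard notation the elided definitions read as follows; this is a reconstruction under assumed symbols (the modulus, constant, and exponent letters are our own choices, not necessarily the paper's):

```latex
% L^p-modulus of continuity of u (translation by h):
\omega_p(u; t) \;=\; \sup_{|h| \le t} \, \| u(\cdot + h) - u \|_{L^p}
% Weak regularity (alpha-Lipschitz condition), for some C > 0 and 0 < \alpha \le 1:
\omega_p(u; t) \;\le\; C \, t^{\alpha} \qquad \text{for all } t > 0 .
```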

The main convergence theorem is stated as follows.

Theorem 2 (Convergence).

Let the noisy input image be weakly regular (Lipschitz in the sense of Definition 1) and consider its discretization with respect to a uniform quadrangulation. Let the minimizer of the discretized functional, which is obtained using the split Bregman scheme in Section 3.2, and the minimizer of the continuous functional (1) be given. Then,

  1. The interpolated solution of the discrete model converges to the continuous solution,

  2. The discrete minimum energy converges to the continuous minimum energy as the mesh size tends to zero.

We derive some preliminary results required for proving the main theorem. We use a generic constant which can change from line to line.

Lemma 1 (Bounds on solutions).
  • Continuous: Let be a solution of the energy minimization (1) with the adaptive regularization function (6). If , then

  • Discrete: Let be the minimizer of the discretized functional in (9). Then


(1) The inequality follows from the fact that the adaptive regularization (6) based energy minimization functional in Eqn. (1) is subdifferentiable.

(2) We first note that

Then the inequality (17) follows from,

Lemma 2 (Convolution bound).

Let be the minimizer of the discretized functional in (9). Let be the mollified extension of the image function to . Then


First note that


Then the inequality follows from,


Proof of Theorem 2:
Let and . From Eqn. (16),

Using Lemma 1 and Lemma 2 respectively for the two difference terms we obtain,


Since the image is weakly regular, the above inequality yields a bound in terms of the mesh size.

Since this bound vanishes as the mesh size tends to zero, we have the result. ∎

4 Experimental results and discussion

4.1 Parameters

(a) Noise free
(b) Noisy
Figure 6: Original gray scale test image used in our experiments and its edge map computed with gradients. (a) Noise free image (b) Gaussian noise added image (c) Gradient image, inverted for better visualization (d) Adaptive parameter from Eqn. (19) after several iterations. Notice that the edges are preserved whereas the noise is removed in homogeneous regions as the iterations are increased.

We set the step size and the parameters of our regularization function in (6) to fixed values, and the thresholding parameter is determined using the median absolute deviation (MAD) from robust statistics [41],

where the constant is derived from the fact that the MAD of a zero-mean normal distribution with unit variance is approximately 0.6745. For the discrete functional (9), the edge indicator is computed using the gradient magnitude, for which we used the same finite difference approximations introduced before, see Eqns. (8). All the test images are normalized to the unit intensity range.
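The MAD-based robust scale estimate described above can be sketched as follows; the constant 1.4826 ≈ 1/0.6745 is the usual normal-consistency factor, and applying it to the gradient magnitudes is our assumption about the paper's exact statistic:

```python
import numpy as np

def mad_scale(x, c=1.4826):
    """Robust scale estimate via the median absolute deviation:
    c * median(|x - median(x)|), where c = 1 / Phi^{-1}(3/4) ~ 1.4826
    makes the estimate consistent for the standard deviation of a
    normal distribution."""
    x = np.ravel(x)
    return c * np.median(np.abs(x - np.median(x)))
```

Unlike the sample standard deviation, this estimate is insensitive to a small fraction of large outliers, which is why it is a natural choice for setting the edge/noise threshold.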

We further introduce an iteration and pixel adaptive regularization parameter using the gradient information at the current iteration via


where a small constant is added to avoid numerical instabilities. Note that this reduces the influence of the regularization term at edges and makes the scheme an image adaptive method. This also reduces the dependence on the threshold used to decide upon the outlier part (compare this with Huber's min-max function (4) and Fig. 4). Since Theorem 1 implies stability, we are guaranteed a good reconstruction even if the input is perturbed significantly (compare this with the Tukey bisquare function (5) and Fig. 5). In Fig. 7 we consider the same 1-D signal shown earlier in Fig. 4(a). The restoration result exhibits the strong smoothing property of our adaptive regularization function with edge preservation. Exact locations of the true discontinuities are preserved and noise is completely removed in homogeneous regions.

Figure 6 shows the gray-scale image used in our later comparison results. We add Gaussian white noise of a given standard deviation and mean zero (using the MATLAB command imnoise with the 'gaussian' option). Figure 6 (b) & (c) show the gradient image (computed using the formulae (8)) from the initial noisy image and the adaptive parameter computed using Eqn. (19) after several iterations, showing the improvement in the edge map.

4.2 Restoration results

Figure 7: Restoration of a 1-D signal by our scheme: the original signal, the noisy signal, and the dark line, which is the restored curve using our adaptive minimization scheme (1) with the function (6). Compare this with the corresponding results for the Huber and Tukey functions in Figure 4(b) and Figure 5(c) respectively.
(a) Color scene and , gray scale photo
(b) Smoothed images at iteration
Figure 8: Restoration by our adaptive regularization scheme on some real images with unknown noise strength. (Top row) Original images (Bottom row) Our adaptive regularization scheme results.
(a) Original
(b) Input
(c) Result
(d) Edges
Figure 9: Color image restoration result. (a) Original RGB image (b) Noisy image (c) Restored by our method (d) Edges computed in all three channels (RGB) using the Canny edge detector.

In Fig. 8 we restore three real images: an original color still (film grain noise, medium granularity), an image taken by a mobile camera (containing an unknown amount of shot noise), and an old gray-scale photograph (noise type unknown). Note that for (RGB) color images we use the scheme (1) for each of the channels red, green, and blue and combine the final restoration result. The restored results in Fig. 8 (b) exhibit marked improvements. Since fine texture details are lost in Fig. 8 (b) (background wall, goat, hair and shirt), we may need to include further statistical information about textures in our scheme. Apart from this, our scheme overall performs well and has strong edge preserving smoothing properties. The strong smoothing nature of our adaptive regularization (6) can be seen in another piecewise smooth image shown in Fig. 9 (a). This color image consists of a flat background with strong curved edges, and the result in Fig. 9 (c) indicates local smoothing due to a Gaussian filtering effect in nearly flat regions and edge preserving TV filtering in other areas. Figure 9 (d) shows the Canny edge map computed in all three color channels (using the MATLAB command edge with the 'canny' option; note that the Canny edge detector employs non-maximal suppression to avoid small scale edges; the edges are computed for each of the red, green, and blue channels and the final result is shown by combining them).

4.3 Comparison results

(a) True image
(b) Huber
(c) Tukey
(d) Our
(e) Noisy image
(f) Noisy contour
Figure 10: Restoration result for a color image with our adaptive regularization scheme: (a) Original image (b-d) Results of the Huber, Tukey, and our adaptive regularization function based schemes respectively (e) Gaussian noise corrupted image (f-h) Residual noise/method noise images (i-j) Contour maps showing the restoration on level lines.
(a) Iteration Vs Mean Error
(b) Noise Vs Peak Signal-to-Noise Ratio
Figure 11: Comparison of our proposed scheme with the function in (6) against Huber's (4) and Tukey's (5) for the color image in Figure 10. (a) Number of iterations vs. mean error (b) Noise level vs. peak signal-to-noise ratio for different noise levels.

In Figure 10 we show a comparison of restoration results for the color image. As can be seen from the method noise and contour maps, our adaptive regularization scheme outperforms the other schemes in terms of noise removal and edge preservation. The level lines are smoothed without reducing their edginess, and flat regions are preserved without staircasing artifacts. Figure 11 shows the ME and PSNR comparisons illustrating the versatility of our adaptive scheme (1) with the proposed regularization function (6) against other functions. Also note that the ME error curve for our method outperforms the Huber and Tukey function based regularizations and quickly converges to a desired solution (a small number of iterations is usually sufficient). On the other hand, our function (6) is robust when compared to the other two classical functions, as can be seen from the PSNR comparison in Fig. 11 (b) as well. The topmost PSNR curve indicates that the scheme proposed in this paper surpasses the other two as the noise level increases. Note that the highest level considered is a high level of noise, and our scheme (1) does a good job of distinguishing between outliers corresponding to noise and true edges, due to the adaptive nature of (19).

We next provide comparison with primal dual hybrid gradient (PDHG) [56], projected averaged gradient (Proj. Grad) [57], fast gradient projection (FGP) [2], alternating direction method of multipliers (ADMM) [23], and split Bregman (Split Breg.) based schemes. The following error metrics are used to compare the convergence and performance of different algorithms for the discrete minimization Eqn. (9).

  • Relative duality gap:


    where the primal and dual objective functions appear respectively. This is used as a stopping criterion for the iterative schemes.

  • Peak Signal-to-Noise (PSNR) ratio,


    The higher the PSNR the better the restoration result.

  • The mean error (ME):

    The mean error needs to be small for restored images.
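The PSNR and mean error metrics can be computed as follows; this is a minimal sketch in which the peak value and the exact ME definition (here the mean absolute difference to the ground truth) are assumptions consistent with the descriptions above:

```python
import numpy as np

def psnr(u, u_true, peak=1.0):
    """Peak signal-to-noise ratio in dB for images with values in [0, peak]."""
    mse = np.mean((u - u_true) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def mean_error(u, u_true):
    """Mean absolute error between restored and true images."""
    return np.mean(np.abs(u - u_true))
```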

(a) Tikhonov
(b) TV
(c) Our
(d) PSNR (dB) comparison
Figure 12: Image restoration of the noisy image (Figure 6 (b)) and PSNR (dB) comparison of results for the (a) Tikhonov, (b) TV, and (c) our regularization function (6) based schemes. (d) The PSNR comparison shows that the proposed adaptive scheme performs better across different noise levels.
Algorithm tol tol tol
PDHG 14 70 310
Proj. Grad. 46 721 14996
FGP 24 179 1264
ADMM 97 270 569
Split Breg. 10 28 55
Table 1: Comparison with primal dual hybrid gradient (PDHG), projected averaged gradient (Proj. Grad), fast gradient projection (FGP), alternating direction method of multipliers (ADMM), and split Bregman based scheme. Iterations required for denoising of the image (, noise level ) with different numerical schemes for the relative duality gap .
Noise PDHG Proj. Grad. FGP ADMM Split Breg.
15 21.61 (28s, ) 21.58 (30s, ) 20.21 (20s, ) 21.85 (24s, ) 25.40 (10s, )
20 20.46 (28s, ) 20.29 (30s, ) 20.12 (20s, ) 20.05 (24s, ) 23.82 (10s, )
25 17.01 (28s, ) 16.88 (30s, ) 16.26 (20s, ) 17.73 (24s, ) 17.92 (10s, )
30 10.77 (28s, ) 11.71 (30s, ) 11.93 (20s, ) 11.05 (24s, ) 12.67 (10s, )
Table 2: Comparison of different algorithms in terms of noise level () for the gray scale image. The results are given in terms of best possible PSNR (computational time in seconds, maximum iterations). Each scheme is terminated if the maximum number of iterations exceeded or when the duality gap is less than .

The first comparative example, in Fig. 12, compares the restoration results for the noisy gray scale image from Fig. 6 (b). As can be seen, the adaptive Huber function performs better than the classical TV and Tikhonov schemes. Moreover, the improvement in PSNR across different noise levels (see Fig. 12 (d)) indicates the success of our scheme in terms of noise removal. Table 1 shows the number of iterations taken by different optimization schemes for solving the discrete regularization scheme (9) with the relative duality gap error (20) as a stopping criterion. The split Bregman based implementation outperforms all the other schemes by reducing the relative duality gap within very few iterations. Next, Table 2 provides a comparison of PSNR (time in seconds, maximum iterations) for different noise levels and different optimization schemes for the noisy image. The experiments were performed on a Mac Pro laptop with a 2.3GHz Intel Core i7 processor and 8GB memory, and MATLAB R2012a was used for visualizations. The split Bregman minimization outperforms all the related schemes in terms of PSNR (dB) as well as in timing, as can be seen from the table. Similar analysis for image deblurring and deconvolution requires a delicate treatment of the boundary conditions [46] and is treated elsewhere. Other avenues of exploration are higher order models [54, 28], multigrid [47] and FEM [29] based schemes, and their convergence analysis.

5 Conclusion

In this paper we considered an adaptive Huber type regularization function based image restoration scheme. Using a discrete split Bregman scheme we proved the convergence of the discrete minimizer to the continuous formulation. Experimental results on real images are given to illustrate the results presented. Compared with other schemes, the splitting based scheme provides faster convergence as well as good restoration results. The scheme can be extended to handle multispectral images by using inter-channel correlations [37, 33, 35], and this defines our future work.


  • [1] G. Aubert and P. Kornprobst. Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations. Springer-Verlag, New York, USA, 2006.
  • [2] A. Beck and M. Teboulle. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Transactions on Image Processing, 18(11):2419–2434, 2009.
  • [3] M. J. Black and A. Rangarajan. On the unification of line processes, outlier rejection, and robust statistics with applications in early vision. International Journal of Computer Vision, 19(1):57–91, 1996.
  • [4] M. J. Black, G. Sapiro, D. H. Marimont, and D. Heeger. Robust anisotropic diffusion. IEEE Transactions on Image Processing, 7(3):421–432, 1998.
  • [5] E. Carlini and R. Ferretti. A semi-Lagrangian approximation for the AMSS model of image processing. Applied Numerical Mathematics, 2012.
  • [6] V. Caselles, G. Sapiro, and D. H. Chung. Vector median filters, inf-sup operations, and coupled PDE’s: Theoretical connections. Journal of Mathematical Imaging and Vision, 12(2):109–119, 2000.
  • [7] A. Chambolle. Image segmentation by variational methods: Mumford and Shah functional and the discrete approximations. SIAM Journal on Applied Mathematics, 55(3):827–863, 1995.
  • [8] A. Chambolle. Finite–differences discretizations of the Mumford–Shah functional. M2AN Mathematical Modeling and Numerical Analysis, 33(2):261–288, 1999.
  • [9] A. Chambolle. An algorithm for total variation minimization and applications. Journal of Mathematical Imaging and Vision, 20(1–2):89–97, 2004.
  • [10] A. Chambolle, S. Levine, and B. J. Lucier. An upwind finite-difference method for total variation based image smoothing. SIAM Journal on Imaging Sciences, 4(1):277–299, 2011.
  • [11] A. Chambolle and P. L. Lions. Image recovery via total variation minimization and related problems. Numerische Mathematik, 76(2):167–188, 1997.
  • [12] A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40(1):120–145, 2011.
  • [13] T. F. Chan and P. Mulet. On the convergence of the lagged diffusivity fixed point method in total variation image restoration. SIAM Journal on Numerical Analysis, 36(2):354–367, 1999.
  • [14] T. F. Chan and J. Shen. Image processing and analysis: Variational, PDE, wavelet, and stochastic methods. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2005.
  • [15] K. Chen. Adaptive smoothing via contextual and local discontinuities. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(10):1552–1567, 2005.
  • [16] Y. Chen, S. Levine, and M. Rao. Variable exponent, linear growth functionals in image restoration. SIAM Journal on Applied Mathematics, 66(4):1383–1406, 2006.
  • [17] C. K. Chu, I. K. Glad, F. Godtliebsen, and J. S. Marron. Edge-preserving smoothers for image processing. Journal of the American Statistical Association, 93(442):526–541, 1998.
  • [18] D. C. Dobson and C. R. Vogel. Convergence of an iterative method for total variation denoising. SIAM Journal on Numerical Analysis, 34(5):1779–1791, 1997.
  • [19] S. Geman and D. Geman. Stochastic relaxation, Gibbs distribution and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(6):721–741, 1984.
  • [20] S. Geman and D. McClure. Statistical methods for tomographic image reconstruction. In Proceedings of the 46th Session of the ISI, Bulletin of the ISI, volume 52, pages 22–26, 1987.
  • [21] S. A. Gersgorin. Fehlerabschätzung für das Differenzenverfahren zur Lösung partieller Differentialgleichungen. Z. Angew. Math. Mech., 10:373–382, 1930.
  • [22] E. Giusti. Minimal Surfaces and Functions of Bounded Variation. Birkhäuser, Basel, Switzerland, 1984.
  • [23] R. Glowinski and A. Marrocco. Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité, d'une classe de problèmes de Dirichlet non linéaires. Rev. Française d'Aut. Inf. Rech. Opér., R(2):41–76, 1975.
  • [24] T. Goldstein, X. Bresson, and S. Osher. Geometric applications of the split Bregman method: segmentation and surface reconstruction. Journal of Scientific Computing, 45(1–3):272–293, 2010.
  • [25] T. Goldstein and S. Osher. The split Bregman method for L1-regularized problems. SIAM Journal on Imaging Sciences, 2(2):323–343, 2009.
  • [26] P. J. Huber. Robust Statistics. Wiley, New York, NY, USA, 1981.
  • [27] R.-Q. Jia, H. Q. Zhao, and W. Zhao. Convergence analysis of the Bregman method for the variational model of image denoising. Applied and Computational Harmonic Analysis, 27(3):367–379, 2009.
  • [28] Q. Jiang. Correspondence between frame shrinkage and high-order nonlinear diffusion. Applied Numerical Mathematics, 62(1):51–66, 2012.
  • [29] J. Kačur and K. Mikula. Solution of nonlinear diffusion appearing in image smoothing and edge detection. Applied Numerical Mathematics, 17(1):47–59, 1995.
  • [30] M.-J. Lai and L. M. Messi. Piecewise linear approximation of the continuous Rudin–Osher–Fatemi model for image denoising. SIAM Journal on Numerical Analysis, 50(5):2446–2466, 2012.
  • [31] S. Z. Li. Markov Random Field Modeling in Computer Vision. Springer, Berlin, Germany, 1995.
  • [32] P. Perona and J. Malik. Scale-space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(7):629–639, 1990.
  • [33] V. B. S. Prasath. Weighted Laplacian differences based multispectral anisotropic diffusion. In IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pages 4042–4045, Vancouver BC, Canada, July 2011.
  • [34] V. B. S. Prasath. A well-posed multiscale regularization scheme for digital image denoising. International Journal of Applied Mathematics and Computer Science, 21(4):769–777, 2011.
  • [35] V. B. S. Prasath, J. C. Moreno, and K. Palaniappan. Color image denoising by chromatic edges based vector valued diffusion. Preprint, 2013. Available at http://arxiv.org/abs/1304.5587.
  • [36] V. B. S. Prasath and A. Singh. A hybrid convex variational model for image restoration. Applied Mathematics and Computation, 215(10):3655–3664, 2010.
  • [37] V. B. S. Prasath and A. Singh. Multispectral image denoising by well-posed anisotropic diffusion scheme with channel coupling. International Journal of Remote Sensing, 31(8):2091–2099, 2010.
  • [38] V. B. S. Prasath and A. Singh. Well-posed inhomogeneous nonlinear diffusion scheme for digital image denoising. Journal of Applied Mathematics, Article ID 763847, 14 pages, 2010.
  • [39] V. B. S. Prasath and A. Singh. An adaptive anisotropic diffusion scheme for image restoration and selective smoothing. International Journal of Image and Graphics, 12(1):18pp, 2012.
  • [40] V. B. S. Prasath and D. Vorotnikov. On a system of adaptive coupled PDEs for image restoration. Journal of Mathematical Imaging and Vision, Online First, 2012. Available at arXiv:1112.2904.
  • [41] W. J. J. Rey. Introduction to Robust and Quasirobust Statistical Methods. Springer-Verlag, Berlin, Germany, 1983.
  • [42] M. Rivera and J. L. Marroquin. Efficient half-quadratic regularization with granularity control. Image and Vision Computing, 21(4):345–357, 2003.
  • [43] L. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D, 60(1–4):259–268, 1992.
  • [44] O. Scherzer, M. Grasmair, H. Grossauer, M. Haltmeier, and F. Lenzen. Variational Methods in Imaging. Springer-Verlag, New York, USA, 2009.
  • [45] S. Setzer. Operator splittings, Bregman methods and frame shrinkage in image processing. International Journal of Computer Vision, 92(3):265–280, 2011.
  • [46] Y. Shi and Q. Chang. Acceleration methods for image restoration problem with different boundary conditions. Applied Numerical Mathematics, 58(5):602–614, 2008.
  • [47] R. M. Spitaleri, R. March, and D. Arena. A multigrid finite-difference method for the solution of Euler equations of the variational image segmentation. Applied Numerical Mathematics, 39(2):181–189, 2001.
  • [48] R. Szeliski, R. Zabih, D. Scharstein, O. Veksler, V. Kolmogorov, A. Agarwala, M. Tappen, and C. Rother. A comparative study of energy minimization methods for Markov random fields with smoothness based priors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(6):1068–1080, 2008.
  • [49] A. N. Tikhonov and V. Y. Arsenin. Solutions of Ill-Posed Problems. John Wiley, New York, NY, USA, 1977.
  • [50] J. W. Tukey. Exploratory Data Analysis. Addison-Wesley, 1977.
  • [51] J. Weickert. Anisotropic Diffusion in Image Processing. B. G. Teubner-Verlag, Stuttgart, Germany, 1998.
  • [52] J. Weickert, B. M. H. Romeny, and M. A. Viergever. Efficient and reliable schemes for nonlinear diffusion filtering. IEEE Transactions on Image Processing, 7(3):398–410, 1998.
  • [53] P. Weiss, L. Blanc-Féraud, and G. Aubert. Efficient schemes for total variation minimization under constraints in image processing. SIAM Journal on Scientific Computing, 31(3):2047–2080, 2009.
  • [54] T.-T. Wu, Y.-F. Yang, and Z.-F. Pang. A modified fixed-point iterative algorithm for image restoration using fourth-order PDE model. Applied Numerical Mathematics, 62(2):79–90, 2012.
  • [55] G. Xu. Consistent approximations of several geometric differential operators and their convergence. Applied Numerical Mathematics, 69:1–12, 2013.
  • [56] M. Zhu and T. F. Chan. An efficient primal-dual hybrid gradient algorithm for total variation image restoration. Technical Report 08–34, UCLA CAM, 2008.
  • [57] M. Zhu, S. J. Wright, and T. F. Chan. Duality-based algorithms for total-variation-regularized image restoration. Computational Optimization and Applications, 47(3):377–400, 2010.