ℓ_0TV: A Sparse Optimization Method for Impulse Noise Image Restoration

02/27/2018 ∙ by Ganzhao Yuan, et al. ∙ King Abdullah University of Science and Technology

Total Variation (TV) is an effective and popular prior model in the field of regularization-based image processing. This paper focuses on total variation for removing impulse noise in image restoration. This type of noise frequently arises in data acquisition and transmission due to many reasons, e.g. a faulty sensor or analog-to-digital converter errors. Removing this noise is an important task in image restoration. State-of-the-art methods such as Adaptive Outlier Pursuit (AOP) [59], which is based on TV with ℓ_02-norm data fidelity, only give sub-optimal performance. In this paper, we propose a new sparse optimization method, called ℓ_0TV-PADMM, which solves the TV-based restoration problem with ℓ_0-norm data fidelity. To effectively deal with the resulting non-convex non-smooth optimization problem, we first reformulate it as an equivalent biconvex Mathematical Program with Equilibrium Constraints (MPEC), and then solve it using a proximal Alternating Direction Method of Multipliers (PADMM). Our ℓ_0TV-PADMM method finds a desirable solution to the original ℓ_0-norm optimization problem and is proven to be convergent under mild conditions. We apply ℓ_0TV-PADMM to the problems of image denoising and deblurring in the presence of impulse noise. Our extensive experiments demonstrate that ℓ_0TV-PADMM outperforms state-of-the-art image restoration methods.




1 Introduction

Image restoration is an inverse problem, which aims at estimating the original clean image u from a blurry and/or noisy observation b. Mathematically, this problem is formulated as:

b = (Ku + n_a) ⊙ n_m,    (1)

where K is a linear operator, n_a and n_m are the noise vectors, and ⊙ denotes an elementwise product. Let 1 and 0 be column vectors of all entries equal to one and zero, respectively. When n_m = 1 and n_a ≠ 0 (or n_a = 0 and n_m ≠ 1), (1) corresponds to the additive (or multiplicative) noise model. For convenience, we adopt the vector representation for images, where a 2D image of size l × h is column-wise stacked into a vector of length n = l × h; thus u, b, n_a, n_m ∈ R^n. Before proceeding, we present an image restoration example on the well-known ‘barbara’ image using our proposed method for solving impulse noise removal in Figure 1.

Figure 1: An example of an image recovery result using our proposed ℓ_0TV-PADMM method. Left column: corrupted image. Middle column: recovered image. Right column: absolute residual between these two images.

In general image restoration problems, K represents a certain linear operator, e.g. convolution, wavelet transform, etc., and recovering u from b is known as image deconvolution or image deblurring. When K is the identity operator, estimating u from b is referred to as image denoising [50]. The problem of estimating u from b is called a linear inverse problem which, for most scenarios of practical interest, is ill-posed due to the singularity and/or the ill-conditioning of K. Therefore, in order to stabilize the recovery of u, it is necessary to incorporate prior-enforcing regularization on the solution. Image restoration can thus be modelled globally as the following optimization problem:

min_u ℓ(Ku, b) + λ Ω(∇x u, ∇y u),    (2)

where ℓ(Ku, b) measures the data fidelity between Ku and the observation b, ∇x and ∇y are two suitable linear transformation matrices such that ∇x u and ∇y u compute the discrete gradients of the image along the x-axis and y-axis, respectively (in practice, one does not need to compute and store the matrices ∇x and ∇y explicitly: since the adjoint of the gradient operator ∇ is the negative divergence operator −div, i.e., ⟨∇u, p⟩ = ⟨u, −div p⟩ for any u and p, the inner products can be evaluated efficiently; for more details on the computation of the ∇ and div operators, please refer to [14, 51, 4]), Ω(∇x u, ∇y u) is the regularizer on ∇x u and ∇y u, and λ is a positive parameter used to balance the two terms for minimization. Apart from regularization, other prior information such as bound constraints [5, 70] or hard constraints can be incorporated into the general optimization framework in (2).

Data Fidelity Function : Noise and References
‖Ku − b‖_2^2 : add. Gaussian noise [47, 14]
‖Ku − b‖_1 : add. Laplace noise [60, 23]
‖Ku − b‖_∞ : add. uniform noise [22, 51]
Poisson log-likelihood : mul. Poisson noise [36, 49]
Gamma log-likelihood : mul. Gamma noise [3, 53]
Rayleigh log-likelihood : mul. Rayleigh noise [48, 2]
ℓ_02 (combined ℓ_0 and ℓ_2) : mixed Gaussian impulse noise [59]
‖Ku − b‖_0 : add./mul. impulse noise [ours]
Table I: Data Fidelity Models

1.1 Related Work

This subsection presents a brief review of existing TV methods, from the viewpoint of data fidelity models, regularization models and optimization algorithms.

Data Fidelity Models: The fidelity function ℓ(·,·) in (2) usually penalizes the difference between Ku and b by using different norms/divergences. Its form depends on the assumed distribution of the noise model. Some typical noise models and their corresponding fidelity terms are listed in Table I. The classical TV model [47] only considers TV minimization involving the squared ℓ_2-norm fidelity term for recovering images corrupted by additive Gaussian noise. However, this model is far from optimal when the noise is not Gaussian. Other works [60, 23] extend classical TV to use the ℓ_1-norm in the fidelity term. Since the ℓ_1-norm fidelity term coincides with the probability density function of the Laplace distribution, it is suitable for image restoration in the presence of Laplace noise. Moreover, additive uniform noise [22, 51], multiplicative Poisson noise [36], and multiplicative Gamma noise [53] have been considered in the literature. Some extensions have been made to deal with mixed Rayleigh impulse noise and mixed Poisson impulse noise in [2]. Recently, a sparse noise model using an ℓ_0-norm for data fidelity has been investigated in [59] to remove impulse and mixed Gaussian impulse noise. In this paper, we consider ℓ_0-norm data fidelity and show that it is particularly suitable for reconstructing images corrupted with additive/multiplicative impulse noise (impulse noise has a discrete nature, corrupted or uncorrupted, so it can be viewed as either additive or multiplicative).

Regularization Models: Several regularization models have been studied in the literature (see Table II). The Tikhonov-like regularization [1] function is quadratic and smooth, and therefore relatively inexpensive to minimize with first-order smooth optimization methods. However, since this method tends to overly smooth images, it often erodes strong edges and texture details. To address this issue, the total variation (TV) regularizer was proposed by Rudin, Osher and Fatemi in [47] for image denoising. Several other variants of TV have been extensively studied. The original TV norm in [47] is isotropic, while an anisotropic variation is also used. From a numerical point of view, the isotropic and anisotropic TV norms cannot be directly minimized since they are not differentiable. A popular workaround is to use their smooth approximations (see [46] for details). Very recently, the Potts model [29, 42, 9], which is based on the ℓ_0-norm, has received much attention. It has been shown to be particularly effective for image smoothing [56] and motion deblurring [57].

Regularization Function : Description and References
‖∇x u‖_2^2 + ‖∇y u‖_2^2 : Tikhonov-like [1]
Σ_i √((∇x u)_i^2 + (∇y u)_i^2) : Isotropic TV [47, 53]
‖∇x u‖_1 + ‖∇y u‖_1 : Anisotropic TV [50, 60]
Σ_i √((∇x u)_i^2 + (∇y u)_i^2 + ε^2) : smooth TV [18, 51]
‖∇x u‖_0 + ‖∇y u‖_0 : Potts model [56, 57]
Huber function of the gradient : Huber-like [46]
Table II: Regularization Models

Optimization Algorithms: The optimization problems involved in TV-based image restoration are usually difficult due to the non-differentiability of the TV norm and the high dimensionality of the image data. In the past several decades, a plethora of approaches have been proposed, which include PDE methods based on the Euler-Lagrange equation [47], the interior-point method [18], the semi-smooth Newton method [45], the second-order cone optimization method [31], the splitting Bregman method [32, 69], the fixed-point iterative method [21], Nesterov’s first-order optimal method [44, 5], and alternating direction methods [50, 20, 53]. Among these methods, some solve the TV problem in its primal form [50], while others consider its dual or primal-dual forms [18, 23]. In this paper, we handle the TV problem with ℓ_0-norm data fidelity using a primal-dual formulation, where the resulting equality-constrained optimization is solved using a proximal Alternating Direction Method of Multipliers (PADMM). It is worthwhile to note that the Penalty Decomposition Algorithm (PDA) in [39] can also solve our problem; however, it lacks numerical stability. This motivates us to design a new ℓ_0-norm optimization algorithm in this paper.

1.2 Contributions and Organization

The main contributions of this paper are two-fold. (1) ℓ_0-norm data fidelity is proposed to address the TV-based image restoration problem (we are also aware of Ref. [19], where ℓ_0-norm data fidelity is considered; however, their interpretation from the MAP viewpoint is not correct). Compared with existing models, our model is particularly suitable for image restoration in the presence of impulse noise. (2) To deal with the resulting ℓ_0-norm optimization, which is NP-hard [43] since it is equivalent to NP-complete subset selection problems, we propose a proximal ADMM to solve an equivalent MPEC form of the problem. A preliminary version of this paper appeared in [63].

The rest of the paper is organized as follows. Section 2 presents the motivation and formulation of the problem for impulse noise removal. Section 3 presents the equivalent MPEC problem and our proximal ADMM solution. Section 4 discusses the connection between our method and prior work. Section 5 provides extensive and comparative results in favor of our ℓ_0TV method. Finally, Section 6 concludes the paper.

2 Motivation and Formulations

2.1 Motivation

This work focuses on image restoration in the presence of impulse noise, which is very common in data acquisition and transmission due to faulty sensors, analog-to-digital converter errors, etc. Moreover, scratches in photos and video sequences can also be viewed as a special type of impulse noise. However, removing this kind of noise is not easy, since corrupted pixels are randomly distributed in the image and the intensities at corrupted pixels are usually indistinguishable from those of their neighbors. There are two main types of impulse noise in the literature [23, 35]: random-valued and salt-and-pepper impulse noise. Let [u_min, u_max] be the dynamic range of an image, where u_min = 0 and u_max = 1 in this paper. We also denote the original and corrupted intensity values at position i as u_i and b_i, respectively.

Random-valued impulse noise: A certain percentage of pixels are altered to take on a uniform random number d_i ∈ [u_min, u_max]:

b_i = d_i with probability r, and b_i = u_i with probability 1 − r.    (3)

Salt-and-pepper impulse noise: A certain percentage of pixels are altered to be either u_min or u_max:

b_i = u_min with probability r/2, b_i = u_max with probability r/2, and b_i = u_i with probability 1 − r.    (4)
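The two corruption processes above are easy to simulate. The following Python sketch (NumPy; the helper name `add_impulse_noise` is ours, and the noise level r is passed as `density`) injects either type of impulse noise into an image with intensities in [0, 1]:

```python
import numpy as np

def add_impulse_noise(u, density, kind="salt-and-pepper", rng=None):
    """Corrupt a fraction `density` of the pixels of image u (values in [0, 1]).

    kind = "random-valued": corrupted pixels take a uniform random value in [0, 1].
    kind = "salt-and-pepper": corrupted pixels become 0 or 1, each with probability 1/2.
    Uncorrupted pixels are left untouched.
    """
    rng = np.random.default_rng(rng)
    b = u.copy()
    mask = rng.random(u.shape) < density            # pixels to corrupt
    m = np.count_nonzero(mask)
    if kind == "random-valued":
        b[mask] = rng.random(m)
    elif kind == "salt-and-pepper":
        b[mask] = (rng.random(m) < 0.5).astype(float)
    else:
        raise ValueError(kind)
    return b
```

For example, `b = add_impulse_noise(u, 0.3)` reproduces the 30% salt-and-pepper setting used in the experiments later in the paper.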
The above definition means that impulse noise corrupts a portion of pixels in the image while keeping the other pixels unaffected. Maximum a posteriori (MAP) estimation can be used to recover u by maximizing the conditional posterior probability P(u|b), i.e., the probability that u occurs when b is observed. By Bayes’ theorem, we have that

P(u|b) ∝ P(b|u) P(u).

Taking the negative logarithm of the above equation, the MAP estimate is a solution of the following minimization problem:

min_u − log P(b|u) − log P(u).    (5)
We now focus on the two terms in (5). (i) The expression − log P(b|u) can be viewed as a fidelity term measuring the discrepancy between the estimate u and the noisy image b. The choice of the likelihood P(b|u) depends upon the properties of the noise. From the definition of impulse noise given above, we have that

P(b|u) ∝ r^{‖Ku − b‖_0} (1 − r)^{n − ‖Ku − b‖_0},

where r is the noise density level as defined in (3) and (4) and ‖·‖_0 counts the number of non-zero elements in a vector. (ii) The term − log P(u) in (5) is used to penalize a solution that has a low prior probability. We use a prior P(u) which has the Gibbs form: P(u) ∝ exp(−α E_TV(u)) with α > 0. Here, E_TV(u) is the TV prior energy functional, the normalization factor makes the TV prior a probability, and α is the free parameter of the Gibbs measure. Replacing P(b|u) and P(u) into (5) and ignoring constants, we obtain the following model:

min_u ‖Ku − b‖_0 + λ Σ_{i=1}^n ‖∇_i u‖_p,

where λ is a positive number related to α and r. The parameter p can be 1 (anisotropic TV) or 2 (isotropic TV), and (∇x u)_i and (∇y u)_i denote the ith components of the vectors ∇x u and ∇y u, respectively. For convenience, we define ∇_i u := ((∇x u)_i, (∇y u)_i).
In order to make use of more prior information, we consider the following box-constrained model:

min_{0 ≤ u ≤ 1} ‖o ⊙ (Ku − b)‖_0 + λ Σ_{i=1}^n ‖∇_i u‖_p,    (6)

where o ∈ {0, 1}^n is specified by the user. When o_i is 0, it indicates that the pixel at position i is an outlier, while when o_i is 1, it indicates that the pixel at position i is a potential outlier. For example, in our experiments, we set o = 1 for random-valued impulse noise, while for salt-and-pepper impulse noise o is determined from b: only pixels attaining the extreme values u_min or u_max are treated as potential outliers. In what follows, we focus on optimizing the general formulation in (6).

2.2 Equivalent MPEC Reformulations

In this section, we reformulate the problem in (6) as an equivalent MPEC from a primal-dual viewpoint. First, we provide a variational characterization of the ℓ_0-norm using the following lemma.

Lemma 1.

For any given x ∈ R^n, it holds that

‖x‖_0 = min_{0 ≤ v ≤ 1} ⟨1, 1 − v⟩, s.t. v ⊙ |x| = 0,    (7)

and v* = 1 − sign(|x|) is the unique optimal solution of the problem in (7). Here, the standard signum function sign is applied componentwise, and sign(0) = 0.

Proof. The total number of zero elements in x can be computed as ⟨1, v⟩, where v ∈ {0, 1}^n satisfies v ⊙ |x| = 0. Note that when x_i = 0, v_i = 1 will be achieved by the maximization, and when x_i ≠ 0, v_i = 0 will be enforced by the constraint. Thus, v* = 1 − sign(|x|). Since the objective function is linear, the maximization is always achieved at the boundaries of the feasible solution space, so the constraint v ∈ {0, 1}^n can be relaxed to 0 ≤ v ≤ 1; we have: ‖x‖_0 = n − max_{0 ≤ v ≤ 1, v ⊙ |x| = 0} ⟨1, v⟩ = min_{0 ≤ v ≤ 1, v ⊙ |x| = 0} ⟨1, 1 − v⟩. ∎
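The claim of Lemma 1 is easy to verify numerically. A small Python check (NumPy; illustrative values only) confirms that v* = 1 − sign(|x|) is feasible for (7) and that its objective value ⟨1, 1 − v*⟩ equals ‖x‖_0:

```python
import numpy as np

# Lemma 1: ||x||_0 = min_{0 <= v <= 1, v .* |x| = 0} <1, 1 - v>,
# attained at the unique minimizer v* = 1 - sign(|x|).
x = np.array([0.0, -1.5, 0.0, 2.0, 0.3])
v_star = 1.0 - np.sign(np.abs(x))             # v*_i = 1 iff x_i == 0

assert np.all(v_star * np.abs(x) == 0)        # complementarity constraint holds
assert np.all((0 <= v_star) & (v_star <= 1))  # box constraint holds
l0 = np.count_nonzero(x)
assert np.sum(1.0 - v_star) == l0             # objective value equals ||x||_0
```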

The result of Lemma 1 implies that the ℓ_0-norm minimization problem in (6) is equivalent to

min_{0 ≤ u ≤ 1, 0 ≤ v ≤ 1} ⟨1, 1 − v⟩ + λ Σ_{i=1}^n ‖∇_i u‖_p, s.t. v ⊙ |o ⊙ (Ku − b)| = 0.    (8)

If u* is a global optimal solution of (6), then (u*, 1 − sign(|o ⊙ (Ku* − b)|)) is globally optimal to (8). Conversely, if (u*, v*) is a global optimal solution of (8), then u* is globally optimal to (6).

Although the MPEC problem in (8) is obtained by increasing the dimension of the original ℓ_0-norm problem in (6), this does not lead to additional local optimal solutions. Moreover, compared with (6), (8) is still a non-smooth non-convex minimization problem, but its non-convexity is only caused by the complementarity constraint v ⊙ |o ⊙ (Ku − b)| = 0.

Such a variational characterization of the ℓ_0-norm was proposed in [25, 34, 27, 7, 6], but it has not been used to develop optimization algorithms for ℓ_0-norm problems. We argue that, from a practical perspective, improved solutions to (6) can be obtained by reformulating the ℓ_0-norm in terms of complementarity constraints [40, 63, 65, 64, 66, 67]. In the following section, we develop an algorithm to solve (8) based on proximal ADMM and show that such a “lifting” technique achieves a desirable solution of the original ℓ_0-norm optimization problem.

3 Proposed Optimization Algorithm

This section is devoted to the solution of (8). This problem is rather difficult to solve, because it is neither convex nor smooth. Our solution is based on the proximal ADM method, which iteratively updates the primal and dual variables of the augmented Lagrangian function of (8).

First, we introduce two auxiliary vectors to split the coupled terms and reformulate (8) as an equality-constrained problem, which we refer to as (9).

Let L(·) denote the augmented Lagrangian function of (9), in which a vector of Lagrange multipliers is associated with each of the equality constraints and β > 0 is the penalty parameter. The detailed iteration steps of the proximal ADM for (9) are described in Algorithm 1. In simple terms, ADM updates are performed by optimizing over one set of primal variables at a time, while keeping all other primal and dual variables fixed. The dual variables are updated by gradient ascent on the resulting dual problem.

(S.0) Choose a starting point and set k = 0. Select the step size and the proximal and penalty parameters.

(S.1) With the remaining variables held fixed, solve the following minimization subproblems:


(S.2) Update the Lagrange multipliers:


(S.3) If the stopping criterion is satisfied, terminate. (S.4) Otherwise, set k = k + 1 and go to Step (S.1).

Algorithm 1 (ℓ_0TV-PADMM) A Proximal ADMM for Solving the Biconvex MPEC Problem (8)

Next, we focus our attention on the solutions of the subproblems in (10) and (11) arising in Algorithm 1. We will show that the computation required in each iteration of Algorithm 1 is insignificant.

(i) u-subproblem. Proximal ADM introduces a convex proximal term to the objective. The specific form of this term is chosen to expedite the computation of a closed-form solution, and its introduction guarantees strong convexity of the subproblems.

The u-subproblem in (10) reduces to a strongly convex quadratic minimization problem. After an elementary calculation, this subproblem can be simplified, and the solution of (10) admits a closed-form expression. Here, the proximal parameter depends on the spectral norms of the linear matrices K and ∇. Using the definition of ∇ and the classical finite-difference bounds ‖∇x‖^2 ≤ 4 and ‖∇y‖^2 ≤ 4 (see [4, 14, 70]), the spectral norm of ∇ can be bounded by ‖∇‖^2 ≤ 8.
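The bound ‖∇‖^2 ≤ 8 can be checked numerically. The sketch below builds forward-difference matrices with a zeroed last row (one common boundary convention; the paper's exact discretization may differ) and verifies the largest eigenvalue of ∇ᵀ∇ on a small image grid:

```python
import numpy as np

def forward_diff(n):
    """1-D forward-difference matrix with a zeroed last row (Neumann boundary)."""
    D = -np.eye(n) + np.eye(n, k=1)
    D[-1, :] = 0.0
    return D

n = 16                       # a small n x n image
D = forward_diff(n)
I = np.eye(n)
Gx = np.kron(I, D)           # discrete gradient along the x-axis
Gy = np.kron(D, I)           # discrete gradient along the y-axis

# ||grad||^2 = lambda_max(Gx'Gx + Gy'Gy) <= 8, the classical finite-difference bound
lam_max = np.linalg.eigvalsh(Gx.T @ Gx + Gy.T @ Gy).max()
assert lam_max <= 8.0
```

For this 16 x 16 grid the largest eigenvalue is close to, but strictly below, the bound of 8.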

(ii) v-subproblem. The v-subproblem in (10) reduces to another simple minimization problem, whose solution can also be computed in closed form.

(iii) Subproblems in (11). The first variable in (11) is updated by solving a proximal problem involving the TV term; it is not difficult to check that its closed-form solution takes different forms for p = 1 and p = 2. The second variable in (11) is updated by solving a separable problem, and a simple computation yields its closed-form solution as well.

Proximal ADM has excellent convergence behavior in practice. The global convergence of ADM for convex problems was established by He and Yuan [33, 20] under the variational inequality framework. However, since our optimization problem in (8) is non-convex, the convergence analysis for ADM requires additional conditions. By imposing some mild conditions, Wen et al. [52] managed to show that the sequence generated by ADM converges to a KKT point. Along a similar line, we establish the convergence properties of proximal ADM. Specifically, we have the following convergence result.

Theorem 1.

Convergence of Algorithm 1. Let {w^k} denote the sequence of primal and dual variables generated by Algorithm 1. Assume that the multiplier sequence is bounded and that the successive differences of the multipliers are square-summable. Then any accumulation point of the sequence {w^k} satisfies the KKT conditions of (9).


Proof. Please refer to Appendix A. ∎

Remark 1. The square-summability condition holds when the multipliers do not change in two consecutive iterations. By the boundedness of the penalty parameter β and Eqs. (12-14), this condition also indicates that the equality constraints in (9) are satisfied. The assumption can be checked by measuring the violation of the equality constraints. Theorem 1 indicates that when the equality constraints hold, PADMM converges to a KKT point. Though not fully satisfactory, it provides some assurance of the convergence of Algorithm 1.

Remark 2. Two reasons explain the good performance of our method. (i) It targets a solution to the original problem in (6). (ii) It has monotone and self-penalizing properties owing to the complementarity constraints introduced by the MPEC. Our method directly handles the complementarity constraints in (9). These constraints are the only source of non-convexity for the optimization problem, and they characterize the optimality of the KKT solution of (6). These special properties of MPECs distinguish them from general nonlinear optimization [65, 66, 64, 67]. We penalize the complementarity error (which is always non-negative) and ensure that the error decreases in every iteration.

4 Connection with Existing Work

In this section, we discuss the connection between the proposed method ℓ_0TV-PADMM and prior work.

4.1 Sparse Plus Low-Rank Matrix Decomposition

Sparse plus low-rank matrix decomposition [54, 35] has become a powerful tool for correcting large errors in structured data over the last decade. It aims at decomposing a given corrupted image B (in matrix form) into a sparse component S and a low-rank component L by solving: min_{L, S} rank(L) + λ ‖S‖_0, s.t. B = L + S. Here the sparse component S represents the foreground of an image, which can be treated as outliers or impulse noise, while the low-rank component L corresponds to the background, which is highly correlated. This is equivalent to the following optimization problem:

min_L ‖B − L‖_0 + λ rank(L),

which is also based on ℓ_0-norm data fidelity. While they consider the low-rank prior in their objective function, we consider the Total Variation (TV) prior in ours.

4.2 Convex Optimization Method

The goal of image restoration in the presence of impulse noise has been pursued by a number of authors (see, e.g., [60, 23]) using ℓ_1TV, which can be formulated as follows:

min_{0 ≤ u ≤ 1} ‖Ku − b‖_1 + λ Σ_{i=1}^n ‖∇_i u‖_p.

It is generally believed that ℓ_1TV is able to remove the impulse noise properly. This is because the ℓ_1-norm provides the tightest convex relaxation of the ℓ_0-norm over the unit ball in the sense of the ℓ_∞-norm. It is shown in [12] that the problem of minimizing ‖Ku − b‖_1 is equivalent to ℓ_0-norm minimization with high probability, under the assumptions that (i) Ku − b is sparse at the optimal solution u* and (ii) K is a random Gaussian matrix that is sufficiently “incoherent” (i.e., its number of rows is greater than its number of columns). However, these two assumptions do not necessarily hold for our optimization problem. Specifically, when the noise level of the impulse noise is high, Ku − b may not be sparse at the optimal solution. Moreover, the matrix K is a square identity or ill-conditioned matrix. Generally, ℓ_1TV will only lead to a sub-optimal solution.

4.3 Adaptive Outlier Pursuit Algorithm

Very recently, Yan [59] proposed the following model for image restoration in the presence of impulse noise and mixed Gaussian impulse noise:

min_{u, o} ½ ‖o ⊙ (Ku − b)‖_2^2 + λ Σ_{i=1}^n ‖∇_i u‖_p, s.t. o ∈ {0, 1}^n, ⟨1, 1 − o⟩ ≤ k,    (17)

where λ is the regularization parameter and k bounds the number of corrupted pixels. They further reformulate the problem above and then solve it using an Adaptive Outlier Pursuit (AOP) algorithm. The AOP algorithm is actually an alternating minimization method, which separates the minimization problem over u and o into two steps. By iteratively restoring the image and updating the set of damaged pixels, it is shown that the AOP algorithm outperforms existing state-of-the-art methods for impulse noise denoising by a large margin.

Despite the merits of the AOP algorithm, we must point out that it incurs three drawbacks, which are unappealing in practice. (i) The formulation in (17) is only suitable for mixed Gaussian impulse noise, i.e. it produces a sub-optimal solution when the observed image is corrupted by pure impulse noise. (ii) AOP is a multiple-stage algorithm. Since the minimization sub-problem over u (which reduces to an ℓ_2TV-type optimization problem) needs to be solved exactly in each stage, the algorithm may suffer from slow convergence. (iii) As a by-product of (i), AOP inevitably introduces an additional parameter (that specifies the Gaussian noise level), which is not necessarily readily available in practical impulse denoising problems.

In contrast, our proposed ℓ_0TV method is free from these problems. Specifically, (i) as analyzed in Section 2, our ℓ_0-norm model is optimal for impulse noise removal; thus, our method is expected to produce higher quality restorations, as seen in our results. (ii) We integrate ℓ_0-norm minimization into a unified proximal ADM optimization framework, so it is expected to be faster than the multiple-stage approach of AOP. (iii) Lastly, while the optimization problem in (17) contains two parameters, our model contains only a single parameter.

4.4 Other -Norm Optimization Techniques

Actually, the optimization technique for the ℓ_0-norm regularization problem is the key to removing impulse noise. However, existing solutions are not appealing. The ℓ_0-norm problem can be reformulated as a mixed integer programming problem [8], which can be solved by a tailored branch-and-bound algorithm, but this involves high computational complexity. Simple projection methods are inapplicable to our model since they assume the objective function is smooth. Similar to the ℓ_1 relaxation, convex methods such as the k-support norm relaxation [41], the k-largest norm relaxation [62], and QCQP and SDP relaxations [15] only provide loose approximations of the original problem. Non-convex methods such as the Schatten ℓ_p norm [28, 37], the re-weighted ℓ_1 norm [13], ℓ_p norm DC (difference of convex) approximations [61], the Smoothly Clipped Absolute Deviation (SCAD) penalty method [68], and the Minimax Concave Penalty (MCP) method [26] only produce sub-optimal results, since they give approximate solutions to the ℓ_0 problem or incur high computational overhead.

Take the ℓ_p norm approximation method as an example; it suffers from two issues. First, it involves an additional hyper-parameter p, which may not be appealing in practice. Second, the ℓ_p regularized problem for general p can be difficult to solve. Existing solvers include the iterative re-weighted least squares method [38] and the proximal point method. The former approximates the ℓ_p norm by a smooth surrogate with a small smoothing parameter and solves the resulting re-weighted least squares sub-problem, which reduces to a weighted ℓ_2 problem. The latter needs to evaluate a relatively expensive proximal operator in general, except that closed-form solutions exist for some special values such as p = 1/2 and p = 2/3 [58].

Recently, Lu et al. proposed a Penalty Decomposition Algorithm (PDA) for solving the ℓ_0-norm optimization problem [39]. As remarked in [39], direct ADM on the ℓ_0-norm problem can also be used for ℓ_0 minimization, simply by replacing the quadratic penalty functions in the PDA with augmented Lagrangian functions. Nevertheless, as observed in our preliminary experiments and theirs, the practical performance of direct ADM is worse than that of PDA.

Actually, in our experiments, we found PDA to be unstable. Its penalty function can reach very large values, and the solution can be degenerate when the minimization of the augmented Lagrangian function in each iteration is not exactly solved. This motivates us to design a new ℓ_0-norm optimization algorithm in this paper. We adopt a proximal ADM algorithm for the MPEC formulation of the ℓ_0-norm, since it has a primal-dual interpretation. Extensive experiments demonstrate that proximal ADM applied to the “lifted” MPEC formulation of the ℓ_0-norm produces better image restoration quality.

Figure 2: Asymptotic behavior for optimizing (6) to denoise and deblur the corrupted ’cameraman’ image. We plot the value of the objective function (solid blue line) and the SNR value (dashed red line) against the number of optimization iterations. At specific iterations (i.e. 1, 10, 20, 40, 80, and 160), we also show the denoised and deblurred image. Clearly, the corrupting noise is being effectively removed throughout the optimization process.

5 Experimental Validation

In this section, we provide empirical validation for our proposed ℓ_0TV-PADMM method by conducting extensive image denoising experiments and performing a thorough comparative analysis with the state-of-the-art.

In our experiments, we use 5 well-known test images. All code is implemented in MATLAB on a machine with a 3.20GHz CPU and 8GB RAM. Since past studies [11, 21] have shown that the isotropic TV model performs better than the anisotropic one, we choose p = 2 for the TV norm here. In our experiments, we apply the following algorithms:

(i) BM3D is an image denoising strategy based on an enhanced sparse representation in transform-domain. The enhancement of the sparsity is achieved by grouping similar 2D image blocks into 3D data arrays [24].

(ii) MFM, Median Filter Methods. We utilize adaptive median filtering to remove salt-and-pepper impulse noise and adaptive center-weighted median filtering to remove random-valued impulse noise.

(iii) ℓ_1-SBM, the Split Bregman Method (SBM) of [32], which has been implemented in [30]. We use this convex optimization method as our baseline implementation.

(iv) TSM, the Two-Stage Method [16, 17, 10]. The method first detects the damaged pixels by MFM and then solves the TV image inpainting problem.

(v) ℓ_p-ADMM (direct). We directly use ADMM (Alternating Direction Method of Multipliers) to solve the non-smooth non-convex ℓ_p problem, with the proximal operator computed analytically. We only consider p = 1/2 in our experiments [58].

(vi) ℓ_02-AOP, the Adaptive Outlier Pursuit (AOP) method described in [59]. We use the implementation provided by the author. Here, we note that AOP iteratively calls the SBM procedure mentioned above.

(vii) ℓ_0-PDA, the Penalty Decomposition Algorithm (PDA) [39] for solving the ℓ_0 optimization problem in (6).

(viii) ℓ_0TV-PADMM, the proximal ADMM described in Algorithm 1 for solving the optimization problem in (6). We set the relaxation parameter to 1.618; the strongly convex (proximal) parameter is kept fixed. All MATLAB codes to reproduce the experiments of this paper are available online at the authors’ research webpages.

5.1 Experiment Setup

For the denoising and deblurring test, we use the following strategies to generate artificial noisy images.

(a) Denoising problem. We corrupt the original image by injecting random-valued, salt-and-pepper, and mixed (half random-valued and half salt-and-pepper) impulse noise with different densities (10% to 90%) into the images.

(b) Deblurring problem. Although blurring kernel estimation has been pursued by many studies (e.g. [55]), here we assume that the blurring kernel is known beforehand. We blur the original images with a Gaussian blurring kernel and add impulse noise with different densities (10% to 90%). We use a short MATLAB script to generate a blurring kernel of radius r (r is set to 7 in the experiments).


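The paper's original MATLAB snippet did not survive extraction. As a substitute sketch, the following Python code constructs a normalized (2r+1) x (2r+1) Gaussian kernel; the standard deviation `sigma = r/2` is our assumption, not the paper's setting:

```python
import numpy as np

def gaussian_kernel(r, sigma=None):
    """(2r+1) x (2r+1) Gaussian blurring kernel, normalized to sum to 1.

    sigma defaults to r/2 here; the paper's exact parameter choice is
    not recoverable from the text, so this value is an assumption.
    """
    if sigma is None:
        sigma = r / 2.0
    x, y = np.meshgrid(np.arange(-r, r + 1), np.arange(-r, r + 1))
    K = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return K / K.sum()          # normalize so blurring preserves mean intensity

P = gaussian_kernel(7)          # radius r = 7, as in the experiments
```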
We run all the previously mentioned algorithms on the generated noisy and blurry images. For ℓ_02-AOP, we adapt the author’s image denoising implementation to the image deblurring setting. Since both BM3D and the Median Filter Methods (MFM) are not convenient for solving deblurring problems, we do not test them on the deblurring problem. We terminate ℓ_0TV-PADMM whenever its stopping tolerances on the iterates and the constraint violations are reached. For ℓ_p-ADMM, ℓ_0-PDA, and ℓ_0TV-PADMM, we use the same stopping criterion to terminate the optimization. For ℓ_1-SBM and ℓ_02-AOP, we adopt the default stopping conditions provided by the authors. For the regularization parameter λ, we swept over a range of candidate values. For the additional parameter in ℓ_02-AOP, we swept over a range of Gaussian noise levels and set k to the number of corrupted pixels.

To evaluate these methods, we compute their Signal-to-Noise Ratios (SNRs). Since the corrupted pixels follow a Bernoulli-like distribution, it is generally hard to measure the data fidelity between the original images and the recovered images. Therefore, we consider three ways to measure the SNR.

where ũ is the original clean image, μ is the mean intensity value of ũ, and ‖·‖_{0,θ} is the soft ℓ_0-norm, which counts the number of elements whose magnitude is greater than a threshold θ. We adopt a fixed small threshold θ in our experiments.
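The three SNR formulas themselves were lost in extraction. As an illustration only, the sketch below implements the standard ℓ_2-based SNR and the soft ℓ_0 count described in the text; the paper's remaining measures and its exact threshold θ are not reproduced here:

```python
import numpy as np

def snr_l2(u, u_clean):
    """Standard SNR in dB: 10 log10(||u_clean - mean||^2 / ||u - u_clean||^2)."""
    mu = u_clean.mean()
    return 10.0 * np.log10(np.sum((u_clean - mu) ** 2)
                           / np.sum((u - u_clean) ** 2))

def soft_l0(x, theta):
    """Soft l0-norm: number of entries whose magnitude exceeds theta."""
    return int(np.count_nonzero(np.abs(x) > theta))
```

For instance, `soft_l0(u - u_clean, theta)` counts the pixels still visibly corrupted after restoration, which is the quantity the soft ℓ_0-based measure builds on.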

Img. \ Alg. | BM3D | MFM | ℓ_1-SBM | TSM | ℓ_p-ADMM | ℓ_02-AOP | ℓ_0-PDA | ℓ_0TV-PADMM
Random-Valued Impulse Noise
walkbridge+10% 93/7.1/11.0 95/12.3/15.6 92/7.7/12.3 95/11.8/12.9 96/12.8/16.6 95/12.1/13.8 97/14.1/16.9 97/13.8/15.9
walkbridge+30% 76/3.7/7.1 89/8.6/11.0 82/6.1/10.3 85/5.8/7.8 89/8.4/12.1 89/7.8/11.5 91/9.6/12.8 91/9.5/11.9
walkbridge+50% 59/2.2/4.3 76/4.9/5.7 67/4.1/7.0 69/2.7/4.8 76/5.4/8.1 79/5.4/8.7 84/7.0/10.1 85/7.0/9.2
walkbridge+70% 42/1.0/1.9 56/2.0/1.7 45/2.0/3.3 50/1.3/2.2 53/2.5/4.0 59/3.0/5.0 65/4.0/6.2 76/5.1/7.0
walkbridge+90% 26/-0.1/-0.1 32/-0.2/-1.1 28/0.3/0.5 30/0.0/-0.0 31/0.4/0.8 30/0.4/0.8 34/0.7/1.3 57/2.7/3.9
pepper+10% 67/5.0/9.9 99/19.1/21.5 99/15.0/22.2 97/13.5/15.8 74/5.4/11.3 99/13.6/20.3 100/20.2/24.6 99/18.0/21.0
pepper+30% 55/3.7/7.0 96/12.3/13.6 96/11.4/16.3 87/6.3/9.5 72/5.2/10.7 98/12.0/16.8 98/15.1/19.7 98/14.6/18.3
pepper+50% 44/2.4/4.5 85/6.7/6.7 85/7.0/9.7 71/3.5/5.5 65/4.5/8.9 94/9.7/13.1 96/11.8/15.7 96/11.6/14.4
pepper+70% 33/1.2/2.1 63/2.8/2.1 59/3.1/4.4 52/1.6/2.4 51/2.7/4.7 79/5.2/6.2 84/6.8/8.9 93/9.0/11.4
pepper+90% 24/0.2/0.1 35/0.1/-1.0 30/0.6/0.6 31/0.3/0.1 28/0.7/1.1 35/0.9/1.0 39/1.3/1.7 76/4.2/4.8
mandrill+10% 74/3.3/6.0 89/8.1/9.0 92/6.9/6.9 93/9.6/9.6 84/3.7/7.4 93/9.6/9.6 95/11.1/11.5 95/10.8/10.3
mandrill+30% 63/2.0/3.6 83/5.9/6.6 76/3.8/5.9 83/4.7/4.9 73/3.0/5.5 85/5.8/6.8 87/6.8/7.4 86/6.4/6.5
mandrill+50% 50/1.1/2.2 73/3.6/3.7 65/2.9/4.6 69/2.0/3.4 61/2.2/4.0 74/3.6/5.0 77/4.6/5.6 78/4.4/4.6
mandrill+70% 36/0.4/0.8 57/1.4/0.6 51/1.5/2.4 52/0.9/1.5 47/1.2/2.2 62/2.3/3.4 64/2.9/3.9 70/3.1/3.5
mandrill+90% 28/-0.3/-0.6 36/-0.6/-1.9 37/0.2/0.4 34/-0.1/-0.4 33/0.1/0.3 39/0.5/0.9 42/0.8/1.2 58/1.9/2.5
lake+10% 92/6.9/12.5 98/16.9/21.3 96/11.3/17.7 97/14.0/15.0 97/8.7/16.1 98/14.3/19.2 98/17.2/21.1 98/16.7/19.5
lake+30% 75/4.3/8.1 93/11.3/13.9 91/9.3/14.4 86/7.1/10.0 92/7.9/13.9 95/10.5/15.0 95/12.7/16.7 95/12.0/14.3
lake+50% 58/2.6/4.9 79/6.5/7.2 71/5.9/9.4 69/3.7/5.9 78/6.2/10.2 88/8.3/11.7 91/10.0/13.7 90/9.5/11.5
lake+70% 41/1.3/2.3 54/2.9/2.6 42/2.5/4.1 47/1.8/2.8 43/2.8/4.6 60/4.7/7.0 68/5.8/8.6 84/7.4/9.0
lake+90% 24/0.3/0.3 26/0.5/-0.4 25/0.6/0.8 26/0.5/0.4 24/0.6/1.0 13/0.7/1.1 26/1.1/1.7 62/4.2/5.3
jetplane+10% 39/2.5/6.1 99/17.5/21.0 98/11.5/17.5 98/12.8/13.3 39/3.4/8.3 99/13.1/19.1 99/17.0/20.0 98/15.6/17.0
jetplane+30% 32/0.7/2.6 95/10.3/11.5 94/9.0/13.3 87/5.0/7.3 38/3.2/7.5 97/10.4/15.0 97/12.4/15.7 97/11.5/12.6
jetplane+50% 27/-0.6/-0.1 80/4.5/4.0 75/4.2/6.7 69/1.5/2.8 34/2.4/5.2 92/7.9/10.6 94/9.3/12.2 94/9.0/10.0
jetplane+70% 22/-1.7/-2.4 53/0.6/-0.7 42/0.2/0.9 47/-0.5/-0.5 23/-0.6/-0.3 67/3.2/4.8 74/4.4/6.4 90/6.7/7.4
jetplane+90% 18/-2.5/-4.1 25/-1.8/-3.6 25/-1.7/-2.5 26/-1.8/-2.9 18/-2.3/-3.4 14/-1.6/-2.2 26/-1.2/-1.5 74/3.4/3.7
Salt-and-Pepper Impulse Noise
walkbridge+10% 90/5.4/9.9 96/12.9/17.3 90/7.6/12.4 98/15.8/19.9 98/16.3/20.7 98/15.8/19.9 99/17.2/22.7 99/17.5/23.2
walkbridge+30% 71/3.0/4.5 94/10.4/14.3 83/6.3/9.8 96/11.7/16.4 94/10.5/15.2 96/11.7/16.4 96/12.0/17.1 97/12.3/17.5
walkbridge+50% 51/-0.1/-1.7 89/8.1/11.4 71/4.0/5.4 92/9.3/14.0 88/7.8/11.8 92/9.3/13.9 92/9.2/13.8 93/9.5/14.3
walkbridge+70% 32/-2.0/-4.6 82/6.1/8.7 49/1.4/2.7 87/7.3/11.5 69/4.4/6.9 87/7.3/11.5 85/6.9/11.0 87/7.4/11.6
walkbridge+90% 15/-3.2/-6.2 67/3.7/5.1 26/0.2/0.6 73/4.8/7.8 36/0.9/1.6 73/4.8/7.7 56/3.3/5.8 74/4.8/7.8
pepper+10% 68/4.9/9.6 99/14.8/20.1 99/15.0/21.8 100/20.5/24.9 74/5.4/11.4 100/20.5/24.9 100/23.2/30.5 100/23.9/31.0
pepper+30% 52/3.1/4.8 98/14.6/18.3 95/10.8/13.6 99/16.8/22.9 73/5.4/11.2 99/16.8/22.9 99/17.7/24.8 100/18.5/25.6
pepper+50% 38/0.3/-1.1 97/12.9/16.1 84/6.1/7.0 99/14.9/21.5 71/5.2/10.6 99/14.8/21.5 99/14.5/21.1 99/15.4/22.4
pepper+70% 25/-1.5/-3.9 95/10.6/13.3 57/2.1/3.4 98/12.5/18.5 61/3.9/7.4 98/12.5/18.5 96/11.4/16.9 98/12.7/18.7
pepper+90% 14/-2.7/-5.5 89/7.2/8.5 27/0.4/0.6 93/8.8/12.7 32/1.2/1.9 93/8.8/12.5 75/4.8/7.9 93/9.0/12.9
mandrill+10% 77/2.7/4.9 93/9.8/11.3 90/4.5/6.9 97/13.1/14.3 87/4.2/9.2 97/13.1/14.3 98/14.4/17.1 98/14.5/17.2
mandrill+30% 61/1.5/2.3 90/7.8/9.0 75/4.0/5.9 92/8.9/10.7 79/3.6/7.2 92/8.9/10.7 93/9.3/11.8 93/9.4/11.9
mandrill+50% 44/-0.9/-2.8 84/5.7/6.6 67/2.7/3.3 87/6.6/8.5 68/2.8/5.2 87/6.6/8.5 87/6.7/8.8 88/6.8/8.8
mandrill+70% 27/-2.7/-5.6 76/3.8/4.3 48/1.1/1.9 80/4.9/6.5 54/2.0/3.6 80/4.9/6.5 79/4.8/6.6 80/4.9/6.5
mandrill+90% 10/-3.8/-7.2 63/2.0/1.9 36/0.3/0.6 69/3.1/4.3 35/0.4/0.8 69/3.1/4.3 59/2.4/3.8 69/3.1/4.4
lake+10% 91/6.6/11.9 99/16.4/22.9 96/11.3/17.6 99/19.6/25.9 99/9.0/17.2 99/19.6/25.7 100/20.3/27.5 100/20.6/27.9
lake+30% 71/3.9/5.6 97/13.6/18.7 90/9.1/12.8 98/15.0/21.4 97/8.6/16.0 98/15.0/21.3 98/15.1/21.7 99/15.4/22.3
lake+50% 52/1.2/-0.4 94/11.2/15.3 76/5.7/6.8 97/12.5/18.3 91/7.7/13.6 97/12.5/18.2 96/12.2/17.9 97/12.7/18.6
lake+70% 33/-0.5/-3.0 90/9.0/12.1 52/2.4/3.7 93/10.4/15.2 63/5.0/8.2 93/10.4/15.2 91/9.7/14.4 94/10.4/15.2
lake+90% 18/-1.6/-4.5 80/6.2/7.5 26/0.5/0.9 84/7.3/10.1 25/1.1/1.9 83/7.3/10.1 51/4.3/7.3 84/7.4/10.2
jetplane+10% 49/2.5/6.0 100/17.0/23.4 98/11.6/17.3 100/20.4/26.8 39/3.4/8.5 100/20.4/26.8 100/20.7/28.0 100/21.3/29.2
jetplane+30% 39/0.6/1.2 98/13.6/17.9 93/8.3/10.4 99/15.5/21.9 40/3.4/8.3 99/15.5/21.9 99/15.3/21.6 99/15.9/22.7
jetplane+50% 33/-1.4/-4.1 96/10.9/14.1 79/4.0/5.1 98/12.7/18.4 39/3.1/7.2 98/12.7/18.4 98/12.1/17.3 98/12.9/18.5
jetplane+70% 30/-2.8/-6.4 93/8.5/10.5 53/0.3/1.2 96/10.2/14.6 32/1.2/3.0 96/10.2/14.6 94/9.2/13.3 96/10.3/14.6
jetplane+90% 28/-3.7/-7.9 87/5.6/6.0 26/-1.7/-2.1 89/6.6/8.6 29/-1.9/-2.8 89/6.6/8.6 54/2.4/4.8 89/6.8/8.7
Mixed Impulse Noise (Half Random-Value Noise and Half Salt-and-Pepper Noise)
walkbridge+10% 91/6.1/10.1 93/10.6/14.7 91/7.5/12.3 96/12.6/13.3 96/12.5/16.0 96/12.6/13.3 98/14.8/17.8 98/15.1/17.9
walkbridge+30% 73/3.6/6.7 90/8.4/11.8 83/6.3/10.3 88/6.6/8.3 89/8.6/12.2 92/8.6/12.2 93/10.2/13.5 93/10.2/12.9
walkbridge+50% 55/1.5/1.9 81/5.7/7.0 70/4.3/6.8 76/3.5/5.7 78/5.7/8.7 85/6.3/10.0 86/7.6/10.8 87/7.6/10.1
walkbridge+70% 37/-0.5/-1.8 63/2.4/1.9 50/2.0/2.9 58/1.9/3.2 56/2.8/4.9 72/4.4/7.2 74/5.1/7.9 80/5.7/7.9
walkbridge+90% 21/-1.9/-4.0 34/-0.6/-2.1 30/0.1/0.4 34/0.3/0.5 31/0.6/1.3 38/1.2/2.0 40/1.3/2.3 63/3.3/4.9
pepper+10% 68/5.0/9.7 98/13.9/19.5 99/15.0/22.0 98/14.3/16.0 74/5.4/11.3 99/14.4/19.9 100/21.0/25.6 99/19.9/23.4
pepper+30% 54/3.7/6.8 97/12.7/16.0 96/11.4/15.4 91/7.5/10.8 72/5.3/10.8 98/12.8/18.5 99/15.8/20.7 98/14.9/18.4
pepper+50% 41/1.8/2.3 92/8.5/8.6 86/7.0/8.9 80/4.5/7.0 68/4.8/9.5 97/11.2/16.1 97/12.6/17.1 97/12.6/15.7
pepper+70% 29/-0.1/-1.2 73/3.6/2.4 62/3.0/3.6 63/2.5/3.8 54/3.3/5.9 90/8.1/10.7 92/9.1/12.5 94/10.1/12.8
pepper+90% 19/-1.4/-3.4 39/-0.2/-2.0 33/0.4/0.5 37/0.6/0.7 31/1.0/1.5 53/2.1/2.5 49/2.2/2.9 82/5.6/6.6
mandrill+10% 76/3.0/5.3 86/6.8/8.3 91/5.5/6.8 95/10.4/10.1 83/3.6/7.3 95/10.5/10.3 96/12.1/12.4 96/11.7/11.2
mandrill+30% 63/1.8/3.4 82/5.4/6.6 74/3.9/6.0 85/5.3/5.1 73/2.9/5.3 88/6.5/7.4 89/7.3/8.1 89/