 # Controlled Total Variation regularization for inverse problems

This paper presents a new algorithm for solving inverse problems, based on the minimization of the L² norm and on the control of the Total Variation. It consists in relaxing the role of the Total Variation in the classical Total Variation minimization approach, which permits us to obtain better approximations to the solutions of inverse problems. Numerical results on the deconvolution problem show that our method outperforms some previous ones.


## 1 Introduction

Environmental effects and imperfections of image acquisition devices tend to degrade the quality of imagery data, thereby making the problem of image restoration an important part of modern imaging sciences. Very often the images are distorted by some linear transformations. In this paper, we consider the reconstruction of the unknown function

from the following inverse problem:

 Y(x) = Kf(x) + ϵ(x),  x ∈ I = {0, ..., N−1}², (1)

where Y is the observed data, K is a continuous linear operator from ℝ^I to ℝ^I, f is the original image, ϵ is a white Gaussian noise of mean 0 and variance σn², and N ≥ 1 is an integer. We want to recover the original image f starting from the observed one Y. When the operator K is formulated as a convolution with a kernel, this inverse problem reduces to the image deconvolution model in the presence of noise. Of particular interest is the case where the operator K is the convolution with the Gaussian kernel:

 Kf(x) = (gσb ∗ f)(x) = ∑t∈I gσb(x−t) f(t), (2)

where

 gσb(x) = c exp(−|x|²/(2σb²)), (3)

c being a normalizing constant and σb > 0 the standard deviation parameter. This type of inverse problems appears in medical imaging, astronomical and laser imaging, microscopy, remote sensing, photography, etc.
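The degradation model (1)–(3) is easy to simulate. The following minimal sketch uses scipy's `gaussian_filter` (with periodic boundary conditions) as the blur operator K; the image size, seed and parameter values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

N = 64                              # image is N x N, so I = {0, ..., N-1}^2
sigma_b = 1.0                       # standard deviation of the Gaussian blur
sigma_n = 10.0                      # standard deviation of the noise

f = np.zeros((N, N))                # toy "original image": a bright square
f[24:40, 24:40] = 255.0

def K(img):
    """Blur operator: convolution with a normalized Gaussian kernel."""
    return gaussian_filter(img, sigma=sigma_b, mode="wrap")

eps = sigma_n * rng.standard_normal((N, N))   # white Gaussian noise
Y = K(f) + eps                                # observed data, model (1)
```

Since the kernel is normalized, the blur preserves the total mass of the image (with periodic boundaries), which is a quick sanity check on the operator.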

The problem (1) can be decomposed into two sub-problems: the denoising problem

 Y(x)=u(x)+ϵ(x),x∈I (4)

and the possibly ill-posed inverse problem

 u(x)=Kf(x),x∈I. (5)

Thus, the restoration of the signal from the model (1) will be performed in two steps: first, remove the Gaussian noise in the model (4); second, find a solution to the problem (5).

For the first step, there are many efficient methods, see e.g. Buades, Coll and Morel (2005), Kervrann (2006), Lou, Zhang, Osher and Bertozzi (2010), Polzehl and Spokoiny (2006), Garnett, Huegerich and Chui (2005), Cai, Chan and Nikolova (2008), Katkovnik, Foi, Egiazarian and Astola (2010), Dabov, Foi, Katkovnik and Egiazarian (2006), and Jin, Grama and Liu (2011).

The second step consists in finding a convenient function ˜f satisfying (exactly or approximately)

 K˜f−u=0, (6)

where u is obtained by the denoising algorithm of the first step. To find a solution to the ill-posed problem (6), one usually considers some constraints which are believed to be satisfied by natural images. Such constraints may include, in particular, the minimization of the L² norm of ˜f or of its gradient ∇˜f, and the minimization of the Total Variation.

The famous Tikhonov regularization method (1977 and 1996) consists in solving the following minimization problem:

 ˆf = argmin˜f ∥˜f∥₂² (7)

subject to

 ∥K˜f−u∥₂² = 0. (8)

Using the usual Lagrange multipliers approach, this leads to

 ˆf = argmin˜f ∥˜f∥₂² + (λ/2)∥K˜f−u∥₂²,

where λ > 0 is interpreted as a regularization parameter.

An alternative regularization (1977) is to replace the norm ∥˜f∥₂ in (7) with the semi-norm ∥∇˜f∥₂, which leads to the well-known Wiener filter for image deconvolution:

 ˆf = argmin˜f ∥∇˜f∥₂² (9)

subject to (8), or, again by the Lagrange multipliers approach,

 ˆf = argmin˜f ∥∇˜f∥₂² + (λ/2)∥K˜f−u∥₂², (10)

where λ > 0 is a regularization parameter.
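When K is a periodic convolution, the quadratic problem (10) can be solved in closed form, since both the discrete Laplacian and K are diagonalized by the 2-D FFT: the critical-point equation (2(−Δ) + λK∗K)f = λK∗u becomes a pointwise division in the Fourier domain. The sketch below assumes the (λ/2) weighting of the data term, a blur kernel stored with its center at index (0, 0), and the 5-point discrete Laplacian; it is an illustration, not the authors' implementation.

```python
import numpy as np

def h1_deconvolve(u, k, lam=1.0):
    """Solve argmin ||grad f||_2^2 + (lam/2) ||k * f - u||_2^2 for a periodic
    convolution kernel k, by diagonalizing both terms with the 2-D FFT."""
    N1, N2 = u.shape
    K_hat = np.fft.fft2(k, s=u.shape)       # transfer function of the blur
    # Fourier symbol of the (negative) 5-point discrete Laplacian
    w1 = 2 * np.pi * np.fft.fftfreq(N1)
    w2 = 2 * np.pi * np.fft.fftfreq(N2)
    lap = 4 - 2 * np.cos(w1)[:, None] - 2 * np.cos(w2)[None, :]
    U_hat = np.fft.fft2(u)
    # critical-point equation: (2*lap + lam*|K|^2) f_hat = lam * conj(K) * U
    F_hat = lam * np.conj(K_hat) * U_hat / (2 * lap + lam * np.abs(K_hat) ** 2)
    return np.real(np.fft.ifft2(F_hat))
```

As λ → ∞ the data term dominates and the filter approaches a plain inverse filter; small λ gives a heavily smoothed solution.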

The Total Variation (TV) regularization, called the ROF model (see Rudin, Osher and Fatemi), is a modification of (9) with ∥∇˜f∥₂² replaced by ∥∇˜f∥₁:

 ˆf = argmin˜f ∥∇˜f∥₁ (11)

subject to (8), or, again by the Lagrange multipliers approach,

 ˆf = argmin˜f ∥∇˜f∥₁ + (λ/2)∥K˜f−u∥₂², (12)

where λ > 0 is a regularization parameter.

Due to its virtue of preserving edges, TV regularization is widely used in image processing tasks such as blind deconvolution [6, 10, 15] and inpainting. Lou et al. combined TV-based methods with the Non-Local means approach to preserve fine structures, details and textures.

The classical approach to resolve (12), for a fixed parameter λ, consists in searching for a critical point ˜f characterized by

 L(˜f)+λK∗(K˜f−u)=0, (13)

where L(˜f) = −div(∇˜f/|∇˜f|), with |∇˜f| being the Euclidean norm of the gradient ∇˜f. The usual technique to find a solution of (13) is the gradient descent approach, which employs the iteration procedure

 fk+1=fk−h(Lfk+λK∗(Kfk−u)), (14)

where h > 0 is a fixed parameter. If the algorithm converges, say fk → ˜f, then ˜f satisfies the equation (13). However, this algorithm cannot resolve exactly the initial deconvolution problem (6), for, usually, L(˜f) ≠ 0, so that, by (13), K∗(K˜f−u) ≠ 0. This will be shown in the next sections by simulation results.
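The iteration (14) can be sketched in a few lines. This is an illustrative implementation, not the authors' code: the curvature term L uses periodic forward/backward differences with the usual β-smoothing of |∇f| to avoid division by zero, and the blur and its adjoint are passed in as callables (for a symmetric kernel such as the Gaussian, K∗ = K, so the same function can be used for both).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tv_operator(f, beta=1e-3):
    """Discrete L(f) = -div(grad f / |grad f|), with the usual smoothing
    |grad f| ~ sqrt(fx^2 + fy^2 + beta^2), on a periodic grid."""
    fx = np.roll(f, -1, axis=0) - f          # forward differences
    fy = np.roll(f, -1, axis=1) - f
    norm = np.sqrt(fx ** 2 + fy ** 2 + beta ** 2)
    px, py = fx / norm, fy / norm
    div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
    return -div

def classic_tv(u, K, Kadj, lam=1.0, h=0.1, n_iter=500):
    """Gradient descent (14): f_{k+1} = f_k - h (L f_k + lam K*(K f_k - u))."""
    f = u.copy()
    for _ in range(n_iter):
        f = f - h * (tv_operator(f) + lam * Kadj(K(f) - u))
    return f
```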

In this paper, we propose an improvement of the algorithm (14), which permits us to obtain an exact solution of the deconvolution problem (6) when the solution exists, and the closest approximation to the original image when the exact solution to (6) does not exist.

Precisely, we shall search for a solution of the equation

 K∗(K˜f−u)=0,

with a small enough TV. Notice that the last equation characterizes the critical points of the minimization problem which, in turn, whose minimiser gives the closest approximation to the deconvolution problem (6), even if it does not have any solution. In the proposed algorithm, we do not search for the minimization of the TV. Instead of this, we introduce a factor in the iteration process (14) to have a control on the evolution of the TV. Compared with the usual TV minimization approach, the new algorithm retains the image details with higher fidelity, as the TV of the restored image is closer to the TV of the original image.

Our experimental results confirm this point, and show that our new algorithm outperforms recent ones proposed in the literature that are based on the minimization problems (10) and (12).

## 2 A new algorithm for the inverse problem

### 2.1 Controlled Total Variation regularization algorithm

We would like to find satisfying the deconvolution equation (6) with a small Total Variation. Since an exact solution to (6) may not exist, instead of the initial problem we consider the minimization problem

 min˜f ∥K˜f−u∥₂². (15)

This leads us to search for critical points characterized by the equation

 K∗(K˜f−u)=0, (16)

whose TV is small enough. As explained in the introduction, the usual algorithm (14) (classical TV) does not lead to a solution of (16) even when one exists. In fact, we have observed by simulation that ∥Lfk∥₁ usually does not converge to 0 as k → ∞, so that ∥K∗(Kfk−u)∥₁ does not tend to 0 either.

For example, taking "Lena" as test image blurred by the Gaussian convolution kernel (3) with standard deviation σb=1, the value of ∥Lfk∥₁ increases as k increases (see Figure 1 (b), where the classical TV curve corresponds to the dashed line). On the other hand, Figure 1 (c) shows that the value of ∥K∗(Kfk−u)∥₁ decreases during the first iterations, but it then remains bounded from below by a positive constant and does not decrease towards 0 as k grows further.

These considerations lead us to the following modified version of (14):

 fk+1=fk−h(θkLfk+λK∗(Kfk−u)), (17)

where h and λ are as in (14) and (θk) is a sequence of positive numbers decreasing to 0. We shall use θk = θ^k for some θ ∈ (0, 1). This ensures that, if the algorithm converges, say fk → ˜f, then

 K∗(K˜f−u)=0. (18)

Notice that, compared with the initial algorithm (14), we put a small weight θk on the TV term in the minimization problem (12), thus diminishing the role of this term in the minimization, in order to get an exact solution to the ill-posed problem (18). Here we do not search for the minimization of the TV, but just for a control on the TV. The control of the TV, rather than its minimization, permits us to better preserve image edges and details.

The new algorithm given by (17) will be called Controlled Total Variation regularization algorithm (CTV).
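The CTV iteration (17) can be sketched as follows. Again this is a sketch under assumptions, not the authors' code: periodic differences, a β-smoothed discrete TV gradient, a symmetric blur so that K∗ = K, and the decay θk = θ^k with parameter values mirroring those reported in Figure 1.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tv_grad(f, beta=1e-3):
    """Smoothed discrete TV gradient L(f) = -div(grad f / |grad f|),
    periodic forward/backward differences."""
    fx = np.roll(f, -1, axis=0) - f
    fy = np.roll(f, -1, axis=1) - f
    norm = np.sqrt(fx ** 2 + fy ** 2 + beta ** 2)
    px, py = fx / norm, fy / norm
    return -((px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1)))

def ctv(u, K, Kadj, lam=1.0, h=0.1, theta=0.98, n_iter=1000):
    """CTV iteration (17): f_{k+1} = f_k - h (theta^k L(f_k) + lam K*(K f_k - u)).
    The weight theta^k decreases to 0, so a limit point satisfies the normal
    equation K*(K f - u) = 0 of the least-squares problem (15)."""
    f = u.copy()
    w = 1.0                       # theta_k = theta^k
    for _ in range(n_iter):
        f = f - h * (w * tv_grad(f) + lam * Kadj(K(f) - u))
        w *= theta
    return f
```

Once θ^k is negligible, the iteration reduces to plain gradient descent on (15), so the data residual ∥Kf−u∥₂² keeps decreasing instead of stagnating as in the classical TV iteration.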

We show by simulations that the new algorithm improves the quality of image restoration. In Figures 1 and 2, we see that the TV of the image restored by the proposed CTV algorithm is larger than that of the image restored by the classical TV minimization approach, but closer to the TV of the original image. This also shows that in the classical TV minimization algorithm the TV of the restored image is too small, and that the classical approach may not give the best solution.

On the other hand, by simulation we have also seen that the TV term in the iterative process (17) cannot be suppressed completely, i.e. we cannot take θk ≡ 0.

In the case of the Gaussian blur kernel, Figure 1 (c) shows that the values of ∥K∗(Kfk−u)∥₁ computed by the new algorithm CTV and those by the classical TV algorithm are nearly the same during the first iterations. As k grows, however, the values of ∥K∗(Kfk−u)∥₁ given by the classical TV algorithm remain constant, while those computed by our algorithm continue to decrease towards 0. Accordingly, we see in Figure 1 (d) that the PSNR values (for the definition of PSNR see Section 3.1) given by the classical TV algorithm and by the newly proposed one are almost the same at first, imitating the evolution of ∥K∗(Kfk−u)∥₁; afterwards, the PSNR value of the classical TV algorithm does not increase any more, while the corresponding PSNR of the new algorithm continues to rise. In the case of the box average blur kernel, Figure 2 displays evolutions similar to those of the Gaussian blur kernel case. Figure 3 shows the images restored by our method and by the classical TV method respectively. Our method keeps many more details of the image than the classical TV method.

Figure 1: Comparison between classical TV and CTV. The test image "Lena" is convoluted with a Gaussian blur kernel of standard deviation σb=1. We choose h=0.1, λ=1, maxIter=3000 and θ=0.98. (a) The evolution of TV(fk) as a function of logk. (b) The evolution of log(||Lfk||1+1) as a function of logk. (c) The evolution of log(||K∗(Kfk−u)||1+1) as a function of logk. (d) The evolution of the PSNR value as a function of logk.

Figure 2: Comparison between classical TV and CTV. The test image "Lena" is convoluted with a 9×9 box average blur kernel. We choose h=0.1, λ=1, maxIter=3000 and θ=0.998. (a) The evolution of TV(fk) as a function of logk. (b) The evolution of log(||Lfk||1+1) as a function of logk. (c) The evolution of log(||K∗(Kfk−u)||1+1) as a function of logk. (d) The evolution of the PSNR value as a function of logk.
Figure 3: (a) and (b) are the images restored from the test image ”Lena” convoluted with Gaussian blur kernel with σb=1 by our method and by classical TV method respectively. (c) and (d) are the images restored from the test image ”Lena” convoluted with 9×9 box average blur kernel by our method and by classical TV method respectively.

Suppose now that K is a linear operator from ℝ^I to ℝ^I. To solve the deconvolution problem (6), inspired by (17), we can also consider the following modified version, which we call Direct Gradient Descent (DGD):

 fk+1=fk−h(θkLfk+λ(Kfk−u)), (19)

where h, λ and θk are as in (17). If the algorithm converges, say fk → ˜f, then ˜f satisfies exactly the equation (6). Therefore we could expect this algorithm to give a better solution to the deconvolution problem. By simulation, this is shown to be the case when K is the Gaussian blur kernel and the noise is absent. In this case we find that the DGD approach restores the blurred image more efficiently than the classical TV and the CTV regularization approaches introduced above. The results of the comparison between classical TV, CTV and DGD are displayed in Figures 1 and 4.
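The DGD iteration (19) differs from (17) only in the data term: K∗(Kfk−u) is replaced by Kfk−u. A hedged sketch under the same kind of discretization assumptions (periodic differences, β-smoothed TV gradient, illustrative parameters):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tv_grad(f, beta=1e-3):
    """Smoothed discrete TV gradient -div(grad f / |grad f|), periodic grid."""
    fx = np.roll(f, -1, axis=0) - f
    fy = np.roll(f, -1, axis=1) - f
    norm = np.sqrt(fx ** 2 + fy ** 2 + beta ** 2)
    px, py = fx / norm, fy / norm
    return -((px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1)))

def dgd(u, K, lam=1.0, h=0.1, theta=0.98, n_iter=1000):
    """DGD iteration (19): f_{k+1} = f_k - h (theta^k L(f_k) + lam (K f_k - u)).
    A fixed point satisfies K f = u exactly, but K must map R^I to itself,
    and the scheme is fragile under noise (see the discussion in the text)."""
    f = u.copy()
    w = 1.0
    for _ in range(n_iter):
        f = f - h * (w * tv_grad(f) + lam * (K(f) - u))
        w *= theta
    return f
```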

However, our simulations also show that the DGD algorithm is very sensitive to noise and often may not converge. For example, when K is the Gaussian blur kernel but noise is present, the method does not give good restoration results; when K is the box average kernel, the algorithm diverges and the restoration results are not good with or without noise (see Figure 5).

Notice also that the CTV regularization approach applies even if the linear operator K is from ℝ^I to ℝ^J, where J is possibly different from I, contrary to the DGD approach, which applies only when J = I.

The aforementioned arguments show that the CTV regularization approach (17) should be preferred to the Direct Gradient Descent algorithm (19), especially when the blurring is accompanied by noise contamination.

Figure 4: Comparison between CTV and DGD. The test image "Lena" is convoluted with a Gaussian blur kernel of standard deviation σb=1. We choose h=0.1, λ=1, maxIter=3000 and θ=0.98. (a) The evolution of TV(fk) as a function of logk. (b) The evolution of log(||Lfk||1+1) as a function of logk. (c) The evolution of ||fk+1−fk||1 as a function of logk. (d) The evolution of the PSNR value as a function of logk.

Figure 5: The test image "Lena" is convoluted with a 9×9 box average blur kernel. We choose h=0.1, λ=1, maxIter=3000 and θ=0.998. (a) The evolution of log(TV(fk)+1) as a function of logk. (b) The evolution of log(||fk+1−fk||1+1) as a function of logk.

## 3 Numerical simulation

### 3.1 Computational algorithm

In this section we present some numerical results to compare different deblurring methods. We shall use the CTV regularization algorithm to restore several standard blurred and noisy images, and compare it to the Tikhonov + NL/H¹ (Tikhonov regularization followed by a Non-Local weighted H¹ approach) and Tikhonov + NL/TV (Tikhonov regularization followed by a Non-Local weighted TV approach) algorithms presented in Lou et al. (2010). When no noise is present, we shall also give simulation results obtained by the DGD approach.

Algorithm: BM3D + CTV (or OPF + CTV)

Input: Data Y and the degradation model (1).

Step 1. Denoising.

Remove the Gaussian noise from Y by BM3D or by the Optimization Weights Filter (OPF) to get the denoised image u.

Step 2. Deconvolution.

Set f0 = u and choose the parameters h, λ, θ and maxIter.

Initialization: k = 0.

(a) Compute fk+1 by the gradient descent equation (17).

(b) If the stopping condition is not satisfied and k < maxIter, set k = k+1 and go to (a).

Output: final solution ˆf = fk+1.

Note that if there is no noise, we just do Step 2.

In the algorithm we use a control condition on ∥K∗(Kfk+1−u)∥₁ as the stopping rule, since we would like to ensure that the limit satisfies K∗(K˜f−u) = 0. This condition could be replaced by one on ∥fk+1−fk∥₁.

The performance of the CTV regularization algorithm is measured by the usual Peak Signal-to-Noise Ratio (PSNR) in decibels (dB), defined as

 PSNR = 10 log₁₀ (255² / MSE),

where

 MSE = (1/card I) ∑x∈I (f(x) − ˜f(x))²,

with f the original image and ˜f the estimated one.
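The PSNR and MSE formulas above translate directly into code. A small helper for 8-bit images (peak value 255), assuming the two images share the same shape:

```python
import numpy as np

def psnr(f, f_est):
    """Peak Signal-to-Noise Ratio in dB: 10 log10(255^2 / MSE), where the
    MSE averages the squared error over all pixels x in I."""
    mse = np.mean((np.asarray(f, dtype=float) - np.asarray(f_est, dtype=float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

For a uniform error of 1 gray level, MSE = 1 and the PSNR equals 20 log₁₀ 255 ≈ 48.13 dB; larger errors give lower values.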

We test all the methods on four images: a synthetic image, Lena, Barbara and Cameraman (see Figure 6), with several kinds of blur kernels and several values of the noise variance, as listed in Table 1. The synthetic image is referred to as Shape since it contains geometric figures. The Cameraman image is of high contrast and has many edges, while the Lena and Barbara images contain more textures.

### 3.2 Simulation results

We use BM3D or the Optimization Weights Filter to remove the Gaussian noise. Then we use our CTV algorithm to invert the denoised blurred image. For each method, we present the PSNR results along with the applied parameters. We list the PSNR values of the tested methods in Table 2. The Direct Gradient Descent method inverts the image convoluted with the Gaussian blur kernel excellently when no noise is present, and its PSNR values are much higher than those of the other algorithms. However, this method works well only for the Gaussian blur kernel without any noise. For the noisy and blurred images, the method BM3D + CTV is the best way to reconstruct the images. Figures 7, 8, 9 and 10 display the images restored by the different methods and the square of the difference between the restored and original images. Figures 7, 8 and 10 show that the method BM3D + CTV is the best one at removing the noise and blur. From Figure 9 we see that BM3D + CTV works very well in reconstructing the image blurred with the Gaussian blur kernel without any noise, but the method DGD works much better than BM3D + CTV.

Figure 7: A 150×150 image with Gaussian blur σb=2 and Gaussian noise σn=10, the reconstructed image by different methods and their square errors.

Figure 8: Lena 256×256 with Gaussian blur σb=1 and Gaussian noise σn=10, the reconstructed image by different methods and their square errors.

Figure 9: Barbara 256×256 with Gaussian blur σb=1 and Gaussian noise σn=0, the reconstructed image by different methods and their square errors.

Figure 10: Cameraman 256×256 with box average kernel and Gaussian noise σn=20, the reconstructed image by different methods and their square errors.

## 4 Conclusion

The Total Variation (TV) approach is a classical method to reconstruct images from noisy or blurred ones. We have thoroughly reexamined this approach and proposed a modified version which we call Controlled Total Variation (CTV) regularization. The proposed algorithm is based on a relaxation of the TV term in the classical TV minimization approach, which permits us to find the closest approximation to the solution of inverse problems. Numerical simulations show that our CTV algorithm gives superior restoration results compared to known methods.

## References

•  H.C. Andrews and B.R. Hunt. Digital Image Restoration. Prentice-Hall, Englewood Cliffs, 1977.
•  A. Buades, B. Coll, and J.M. Morel. A review of image denoising algorithms, with a new one. Multiscale Model. Simul., 4(2):490–530, 2006.
•  T. Buades, Y. Lou, J.M. Morel, and Z. Tang. A note on multi-image denoising. In Int. Workshop on Local and Non-Local Approximation in Image Processing, pages 1–15, August 2009.
•  J.F. Cai, R.H. Chan, and M. Nikolova. Two-phase approach for deblurring images corrupted by impulse plus Gaussian noise. Inverse Probl. Imag., 2(2):187–204, 2008.
•  T.F. Chan and J. Shen. Mathematical models for local nontexture inpaintings. SIAM J. Appl. Math., pages 1019–1043, 2001.
•  T.F. Chan and C.K. Wong. Total variation blind deconvolution. IEEE Trans. Image Process., 7(3):370–375, 1998.
•  K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process., 16(8):2080–2095, 2007.
•  H.W. Engl, M. Hanke, and A. Neubauer. Regularization of inverse problems, volume 375. Springer Netherlands, 1996.
•  R. Garnett, T. Huegerich, C. Chui, and W. He. A universal noise removal algorithm with an impulse detector. IEEE Trans. Image Process., 14(11):1747–1754, 2005.
•  L. He, A. Marquina, and S.J. Osher. Blind deconvolution using TV regularization and Bregman iteration. Int. J. Imaging Syst. Technol., 15(1):74–83, 2005.
•  Q.Y. Jin, I. Grama, and Q.S. Liu. Removing Gaussian noise by optimization of weights in non-local means. http://arxiv.org/abs/1109.5640.
•  V. Katkovnik, A. Foi, K. Egiazarian, and J. Astola. From local kernel to nonlocal multiple-model image denoising. Int. J. Comput. Vis., 86(1):1–32, 2010.
•  C. Kervrann and J. Boulanger. Optimal spatial adaptation for patch-based image denoising. IEEE Trans. Image Process., 15(10):2866–2878, 2006.
•  Y. Lou, X. Zhang, S. Osher, and A. Bertozzi. Image recovery via nonlocal operators. J. Sci. Comput., 42(2):185–197, 2010.
•  A. Marquina. Nonlinear inverse scale space methods for total variation blind deconvolution. SIAM J. Imaging Sci., 2:64, 2009.
•  A. Marquina and S.J. Osher. Image super-resolution by TV-regularization and Bregman iteration. J. Sci. Comput., 37(3):367–382, 2008.
•  J. Polzehl and V. Spokoiny. Propagation-separation approach for local likelihood estimation. Probab. Theory Rel., 135(3):335–362, 2006.
•  L.I. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1-4):259–268, 1992.
•  A. Tikhonov and V. Arsenin. Solution of Ill-Posed Problems. Wiley, New York, 1977.