Optimal Interpolation Data for PDE-based Compression of Images with Noise

by   Zakaria Belhachmi, et al.

We introduce and discuss shape-based models for finding the best interpolation data in the compression of images with noise. The aim is to reconstruct missing regions by means of minimizing a data fitting term in the L^2-norm between the images and their reconstructed counterparts using time-dependent PDE inpainting. We analyze the proposed models in the framework of the Γ-convergence from two different points of view. First, we consider a continuous stationary PDE model, obtained by focusing on the first iteration of the discretized time-dependent PDE, and get pointwise information on the "relevance" of each pixel by a topological asymptotic method. Second, we introduce a finite dimensional setting of the continuous model based on "fat pixels" (balls with positive radius), and we study by Γ-convergence the asymptotics when the radius vanishes. Numerical computations are presented that confirm the usefulness of our theoretical findings for non-stationary PDE-based image compression.






The aim of PDE-based compression is to reconstruct a given image by inpainting, from a small set of "relevant pixels", with a suitable partial differential operator. The compression is a two-step process consisting of a coding part, that is, the choice of this set of pixels, followed by a decoding phase in which the image is entirely recovered. Intuitively, the key to the success of PDE-based compression is therefore a balance between the quality of the chosen set, subject to constraints such as keeping its "size" as small as possible and placing its pixels well, and the achievable accuracy of the reconstructed image. We quote a passage from [Schmaltz2014] which expresses this idea nicely: "PDE-based data compression suffers from poverty, but enjoys liberty [Bae2010, Belhachmi2009, Galic2008, Schmaltz2009]: Unlike in pure inpainting research [Masnou1998, Bertalmio2000], one has an extremely tight pixel budget for reconstructing some given image. However, one is free to choose where and how one spends this budget." Besides that, any image compression approach should take into account the nature of the images considered (e.g., noisy, textured, cartoons) and measure its impact on the selection of the pixel set.

The goal of the present article is to optimize the choice of such sets and to obtain, as far as possible, an analytic criterion to build them, in the spirit of [Belhachmi2009], but when the images are noisy. Optimizing over sets is a well-known topic in shape optimization, and many advanced theories and analytic works have been developed for various kinds of constraints on the shapes and on the differential operators. Our approach fits into this general framework and shows the deep links between this field and mathematical image analysis.

We emphasize that a comprehensive and satisfactory treatment of PDE-based compression must include the choice of the pixels, of the grey (or color) values stored, and of the inpainting operator. Indeed, we know from several previous works (e.g., [Galic2005, Tschumperle2005, Bornemann2007, Galic2008, Belhachmi2009, Schmaltz2009, Bae2010]) that:

  • Optimal sets, seen as optimal shapes, do not account for all the constraints that might be relevant for image compression (e.g., easy storage, sparsity).

  • The stability of an "optimal" set with respect to perturbations, when it holds, is rather weak, as it requires topologies of convergence of sets. In particular, this stability remains under-investigated in the case of noisy data or when some stored values are changed.

  • An optimal set, in the sense of optimal shapes, is highly dependent on the inpainting operator, whereas a "good" operator may compensate for a sub-optimal choice of pixels.

Nevertheless, finding an analytic optimal set remains, in our opinion, a very reasonable objective for strengthening PDE-based compression methods.

Related works

Several works, notably in the field of PDE-based image compression, were undertaken to optimize the choice of pixels to store in the coding phase, so as to ensure a high reconstruction quality with as few selected points as possible; we refer the reader to [Bae2010] and the references therein. In particular, in [Belhachmi2009] the authors studied the choice of "the best set" of pixels as that of an optimal shape minimizing a semi-norm of the difference between the reconstructed solution and the initial noiseless image. They obtained such an optimal shape in the framework of the Γ-convergence approach and gave an analytic expression to build it from topological asymptotics. In [Hoeltgen2015], the authors introduced a mixed probabilistic and PDE-based approach to deal with finding both optimal pixels and tonal data for discrete homogeneous harmonic inpainting. Loosely summarized, they start with a data sparsification step which consists of selecting a set of pixels at random; they then correct this choice within an iterative procedure consisting of a nonlocal exchange of pixels. Lastly, they optimize the grey values at these inpainting points by a least-squares minimization method. This procedure is more complete than the first one, as it considers both the choice of pixels and the grey values. Notice that, for a fixed set of pixels, harmonic inpainting is an elliptic problem, and a small perturbation of the data (grey values) leads to a small perturbation of the reconstructed solution; thus, optimizing the selection of the set appears more critical for the final outcome, whereas a small perturbation of the sets themselves is only "weakly stable" (in the sense of γ-convergence of sequences of sets, see [Belhachmi2009]). Therefore, it seems reasonable to seek the more general problem of finding an optimal set with some stability properties.

In this paper, we consider a shape-based analysis taking noisy data into account. We study and analyze the problem of finding a fixed pixel set for time-harmonic linear diffusion, extending in this way the approach of [Belhachmi2009]. We obtain selection criteria which are suited to the noise level, and we compare the proposed methods with existing ones from the related literature in the presence of noise.

Let us now give a mathematical formulation of the problem considered. Let Ω be the support of an image (say, a rectangle) and let f be an image assumed to be known only on some region K ⊂ Ω. There are several PDE models to interpolate and approximate the missing data. One of the most basic is to approximate f by the solution of the heat equation, with Dirichlet boundary data f on K and homogeneous Neumann boundary conditions on the boundary of Ω, i.e. to solve

Problem 0.1.

For , find in such that


For simplicity, we assume the data f to be sufficiently regular, though in practice an image is a function of bounded variation with a non-trivial jump set. In fact, the whole analysis in the paper extends to that case.

To ensure the compatibility conditions with the non-homogeneous "boundary" conditions, we take the initial condition equal to the known data. Thus we may rewrite the problem as

Problem 0.2.

For , find in such that


Denoting by u the solution of Problem 0.2, the question is to identify the region K which gives the "best" approximation of f, in a suitable sense, for example minimizing some Lebesgue or Sobolev norm, e.g. in [Belhachmi2009]

(associated to a harmonic interpolation of the data in the inpainting region). As we want to take noisy images into account and, at the same time, to perform the inpainting with denoising, a better choice a priori is to minimize norms of the error and of its gradient that are known to provide good filters for a large class of noises. In this article, we restrict ourselves to linear time-harmonic reconstruction, and thus we only consider the L²-norm for the data term. The choice of the set K, that is to say the coding part, being performed at the first step, we associate a semi-implicit discrete system to solve Problem 0.2. Omitting the indices and looking for the set at the first iteration, with the initial condition given by the data, we are led to consider the elliptic equation:

Problem 0.3.

Find in such that


for τ > 0, the time step, or equivalently

Problem 0.4.

Find in such that


Thus, finding the set of pixels which gives the best approximation in the L² sense is a shape analysis problem for the state equation of Problem 0.2 (i.e. Problem 0.3). We recall that if u is the solution of Problem 0.3, then u is the minimizer of

which is equivalent to

Following [Belhachmi2009], we develop two directions for finding that optimal set. The first is to set up a continuous PDE model and extract pointwise information by a topological asymptotic method. The second is to simulate, in the continuous frame, a finite-dimensional shape optimization problem by imposing that the set be the union of a finite number of "fat pixels". Performing the asymptotic analysis by Γ-convergence as the number of pixels increases (while at the same time the fatness vanishes), we obtain useful information about the optimal distribution of the best interpolation pixels.
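To make the reconstruction step concrete, the elliptic problem of Problem 0.3 (one semi-implicit time step of size τ, with the known data imposed as Dirichlet values on the pixel set and reflecting boundaries elsewhere) can be sketched numerically as follows. The five-point Laplacian, the damped Jacobi sweeps, and the parameter values are illustrative assumptions, not the scheme used in our experiments.

```python
import numpy as np

def inpaint_first_step(f, mask, tau=1.0, n_iter=500):
    """Sketch of Problem 0.3: solve u - tau*Laplacian(u) = f off the mask,
    with u = f on the mask (Dirichlet) and reflecting (Neumann) boundaries,
    by fixed-point (Jacobi) sweeps on a 5-point stencil."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        p = np.pad(u, 1, mode="edge")          # replication => Neumann boundary
        nb = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
        # fixed point of (1 + 4*tau) u = f + tau * (sum of the 4 neighbours)
        u = (f + tau * nb) / (1.0 + 4.0 * tau)
        u[mask] = f[mask]                       # Dirichlet data on the pixel set
    return u
```

Since the iteration matrix is strictly diagonally dominant, the sweeps converge, and the discrete maximum principle keeps the reconstruction within the range of the data.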

Organization of the article

In Section 1, we introduce a mathematical model of the compression problem and its relaxed formulation. In Section 2, we compute the topological gradient of our minimization problem in order to find a mathematical criterion for constructing our set of interpolation points. In Section 3, we change our point of view by considering "fat pixels" instead of a general set of interpolation points. Finally, in Section 4, we present some numerical results.

1 The Continuous Model

1.1 Min-max formulation

Let Ω be a bounded open subset of ℝ². We consider the shape optimization problem


where , is defined by


is a measure, to be chosen. We notice that the cost functional (6) corresponds to a data-fitting term with Tikhonov regularization [Tikhonov1977]; hence it is the simplest, and a widely used, denoising PDE model. The image compression problem aims to find an optimal set of pixels from which an accurate reconstruction of the (noisy) image will be performed. Actually, the data term does not affect the Γ-convergence analysis, so we only consider the simplest case and drop the corresponding index in the notation of the energy. Thanks to Proposition A.2 below, the analysis of the continuous model is similar to the semi-norm case in [Belhachmi2009]; we give the main result and the main steps of the proof in the next section.

Let u be the solution of Problem 0.3; it is straightforward to obtain

Proposition 1.1.

The optimization problem (5) is equivalent to

Finally, problem (5) can be rewritten in the following unconstrained form:


for .

The well-posedness of (5) depends on the choice of the measure. In [Belhachmi2009], it has been proven that, in the Laplacian case, choosing the capacity as measure leads to the existence of a relaxed formulation and to the well-posedness of this optimization problem. Consequently, we will study (5) when the measure is the capacity. The next section is devoted to the analysis within the γ-convergence approach (see Appendix A), which follows the same lines as in [Belhachmi2009] with slight changes.

1.2 Analysis of the model

The optimization problem (5) can be rewritten by penalizing the Dirichlet boundary condition in

It is well known that such shape optimization problems do not always have a solution (e.g. [Belhachmi2009]); we therefore seek a relaxed formulation which, under the capacity constraint, yields a relaxed solution, that is to say, a capacitary measure. Thus, we consider the problem

where is in . As the - norm is continuous, referring to Proposition A.2, we may drop from the following -convergence analysis, the term

For every in and in , we define , from into , by

We have that is equi-coercive with respect to , for any in . Indeed, let be in such that , we have

For every in , we define , from into , by

For a given in , corresponds to the energy of

Problem 1.1.

Find in such that


Thus, if u is a solution of Problem 1.1 for a given measure, then the function satisfies the maximum principle. Since in the next sections we want to consider balls centered at points of the domain which are not too close to its boundary, we introduce the following notations:


Let us consider the problem

Using the compactness of for the -convergence (Proposition A.4) and the locality of the -convergence (Proposition A.3), we have the following result

Proposition 1.2 (-compactness of ).

The set defined above is compact with respect to the -convergence.

We also have the following density theorem (the proof is given in Appendix B).

Theorem 1.1.

We have

i.e., is dense into with respect to the -convergence.

Similarly to Lemma 3.4 in [Belhachmi2009], we have

Theorem 1.2.

Let . If -converge to , then .

Theorem 1.3.

If in -converges to , then is in and -converges to in .

The proof of the last theorem is also given in Appendix B. Finally, we can state the main result of this section (proof in Appendix B).

Theorem 1.4.

We have

Replacing with , and with Proposition A.2, we get the existence of an optimal solution to the relaxed formulation.


In order to solve the relaxed problem

we may use a shape derivative with respect to the measures. However, such a method yields diffuse measures, and thus sets that are too thick, whereas we seek discrete sets of pixels.

In the next two sections, we aim to find an explicit characterization of the optimal set using topological asymptotics.

2 Topological Gradient

Here, we aim to compute the solution of our optimization problem (5) by using a topological gradient-based algorithm, as in [Larnier2012, Garreau2001]. This kind of algorithm starts from the full domain and determines how making small holes in it affects the cost functional, in order to find the balls whose removal has the greatest decreasing effect. To this end, let us define the compact set obtained by removing a small closed ball of given center and radius from the domain. From now on, we consider the functional:

or equivalently,

where . Finally, we denote by the minimizer of . Then, we have

Proposition 2.1.

With the notations above, we have, as the radius tends to 0,


The weak formulation of Problem 0.3 leads to

We have , and hence

It is enough to compute the leading term in the asymptotic expansion of this expression, which is done using Proposition C.1. ∎

For a sufficiently small radius, the result above thus suggests keeping the points where the criterion is maximal. From a practical point of view, this is the main result of our local shape analysis. In the next section, we will see that such a strict thresholding rule may be relaxed.
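In practice (see Section 4, where the mask is built from the locations where the Laplacian is maximal), the hard-threshold rule can be sketched as follows: compute a discrete Laplacian of the data and keep a prescribed fraction of the pixels where its magnitude is largest. The five-point stencil and the `fraction` parameter are illustrative assumptions.

```python
import numpy as np

def laplacian_mask(f, fraction=0.05):
    """Hard-threshold selection suggested by the topological gradient:
    keep the given fraction of pixels where |Laplacian f| is largest.
    A sketch; the paper works with the continuous criterion."""
    p = np.pad(f, 1, mode="edge")                      # reflecting boundary
    lap = (p[:-2, 1:-1] + p[2:, 1:-1]
           + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * f)
    k = max(1, int(fraction * f.size))
    thresh = np.partition(np.abs(lap).ravel(), -k)[-k]  # k-th largest value
    return np.abs(lap) >= thresh
```

On a piecewise-constant image, this selects exactly the pixels adjacent to the jump, which matches the observation below that the masks concentrate on and near the edges.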

3 Optimal Distribution of Pixels: The “Fat Pixels” Approach

In this section, we change our point of view by considering "fat pixels" instead of a general set of interpolation points. In the sequel, we follow [Belhachmi2009, Buttazzo2006] and restrict our class of admissible sets to unions of balls which represent pixels. For a given number of balls and a given radius, we define

where is the -neighborhood of . The following analysis remains unchanged in , but for the sake of simplicity we restrict ourselves to the case . We consider problem (5) for every i.e.

As in the previous section, we use the penalized setting. This last optimization problem can be reformulated as a compliance optimization problem:


where , like in the previous sections. Here, we do not need to specify a size constraint on our admissible domains. Indeed, imposing implies a volume constraint and a geometrical constraint on since is formed by a finite number of balls with radius . We deal with Neumann boundary conditions on . However, it is possible to cover the boundary with balls so that we have formally homogeneous Dirichlet boundary conditions on . The well-posedness of such a problem has been studied in the Laplacian case in [Buttazzo2006]. Without significant change we have

Theorem 3.1.

If is an open bounded subset of and if is in , then the problem (9) admits a unique solution.

If we denote by the corresponding solution, then it γ-converges to the full-domain solution as the radius tends to 0. However, the number of pixels to keep also goes to infinity; thus, this gives no relevant information on the distribution of the points to retain. As pointed out in [Bucur2005], the local density of the optimal set can be obtained by using a different topology for the Γ-convergence of the rescaled energies. In this new frame the minimizers are unchanged, but their behavior is seen from a different point of view. We define the probability measure

for a given set in by

We define the functional from into by

The following Γ-convergence theorem is similar to Theorem 2.2 in [Buttazzo2006].

Theorem 3.2.

If , then the sequence of functionals , defined above, -converge with respect to the weak topology in to

where is the Radon–Nikodym–Lebesgue decomposition of ([Folland2013], Theorem 3.8) with respect to the Lebesgue measure and

, solution of Problem 0.3.

As a consequence of the Γ-convergence stated in the theorem above, the empirical measures converge weakly to a minimizer of the limit functional. Unfortunately, the limit density is not known explicitly. We establish hereafter that it is positive, non-increasing, and vanishes after some point, which will be enough for practical purposes. The next theorem gives an estimate of this function; the proof is given in Appendix D.

Theorem 3.3.

We have, for in ,

where , and are constants depending on .


We can extend the results above to any since we may formally split the discussion on the sets and .

These estimates suggest that, to minimize the limit energy, the pixel density should be large where the criterion is large, so as to be close to its vanishing point, and may be small where the criterion is small. The formal Euler-Lagrange equation and the estimates above give the following information: to minimize the limit energy, one has to take

This introduces a soft thresholding with respect to the first approach. To sum up, we can choose the interpolation data such that the pixel density increases with the criterion. This soft-thresholding rule can be enforced with standard digital halftoning. According to [Belhachmi2009, Ulichney1987, Adler2003], digital halftoning is a rendering method that converts a continuous-tone image into a binary image, for example a black-and-white image, while giving the illusion of tone continuity. This continuity is simulated for the human eye by a spatial distribution of black and white pixels. Two kinds of halftoning algorithms exist: dithering and error-diffusion halftoning. The first is based on a so-called dithering mask function, while the second propagates the quantisation error between the new binary value and the old one. An ideal digital halftoning method conserves the average grey value while giving the illusion of continuity.
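The error-diffusion variant mentioned above can be sketched as follows; this is the classical Floyd-Steinberg scheme [Floyd1976] applied to a pixel-density map, not the authors' exact implementation.

```python
import numpy as np

def floyd_steinberg_mask(density):
    """Turn a pixel-density map in [0, 1] (e.g. the soft-threshold density
    of Section 3) into a binary inpainting mask by Floyd-Steinberg error
    diffusion: quantise each pixel and push the error to unvisited
    neighbours with the classical 7/16, 3/16, 5/16, 1/16 weights."""
    d = density.astype(float).copy()
    h, w = d.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            old = d[i, j]
            new = 1.0 if old >= 0.5 else 0.0
            mask[i, j] = new >= 0.5
            err = old - new
            if j + 1 < w:
                d[i, j + 1] += err * 7 / 16
            if i + 1 < h and j > 0:
                d[i + 1, j - 1] += err * 3 / 16
            if i + 1 < h:
                d[i + 1, j] += err * 5 / 16
            if i + 1 < h and j + 1 < w:
                d[i + 1, j + 1] += err * 1 / 16
    return mask
```

Up to small losses at the image border, the mask preserves the average of the input density, which is the "average grey value" conservation property mentioned above.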

4 Numerical Results

In this section, we present some numerical simulations to validate the previous theoretical analysis, and we compare with other commonly used methods of image compression. We discretize the PDEs with a standard implicit finite-difference scheme on a quasi-uniform mesh in order to make the comparisons easy. We consider the methods presented in this article, which we denote by L2-methods: more precisely, we call L2-T the algorithm based on hard thresholding with the criterion obtained in Section 2, and L2-H the algorithm based on the fat-pixels variant (soft thresholding), the algorithms being used with, respectively without, the halftoning based on the Floyd-Steinberg dithering algorithm [Floyd1976]. The methods that we use for comparison purposes are the B-Tree algorithm [Distasi1997] and a random mask selection. Next, we discuss and present some extensions of the method in several directions: first, we allow a data modification on the compression set to test how, under the same framework and analysis, the selected masks may be improved. Secondly, we consider images corrupted with salt-and-pepper noise, for which the L²-norm based reconstructions are less efficient. Finally, we consider the case of color images. Throughout, we distinguish the initial image, its noisy version, and the reconstructed one.

4.1 Numerical simulations and Comparisons

For the L2-methods, we implement the hard-threshold criterion, namely we select the pixels where the criterion is maximal, and the soft-threshold algorithm of the fat-pixels approach, where the selected pixels are chosen according to the theoretical density. The latter algorithm uses a dithering procedure [Floyd1976].

In Table 1, Table 2 and Table 3, we give the errors between the original and reconstructed images, as a function of the noise level, for each method. We notice that the errors of our methods are better than those of the B-Tree and random choices when the noise magnitude of the data is not too high, whereas they deteriorate increasingly with the noise. In fact, the locations where the criterion is high include more and more noisy pixels, which is reflected in the mask selection. This effect of selecting noisy pixels is amplified as the compression ratio increases. We emphasize that our comparisons are only concerned with the influence of noise on the coding phase of compression and are by no means exhaustive. In particular, when the noise level is too high, the criterion based on the locations where the Laplacian is maximal appears less efficient than B-Tree (which by construction includes an amount of denoising) or even the random choice of pixels; we will see how to improve the criterion in these cases.

In Figure 2, Figure 3 and Figure 4, we present various masks obtained and the corresponding reconstructed images. We notice that with no noise, or at low noise levels, the masks consist of pixels located on, and close to, the edges, which is intuitively expected. The soft-threshold method also includes a few pixels from the homogeneous areas, leading to better reconstruction results. As the noise magnitude grows, increasingly many noisy pixels are selected in the mask, leading to poor reconstructions.

Noise L2-T L2-H B-tree Rand
0 39.17 9.56 10000 9.88 15.02
0.03 13.47 12.14 30 10.61 15.49
0.05 17.10 15.12 70 11.80 16.13
0.1 31.43 23.98 87 15.50 19.01
0.2 75.48 41.92 68 24.87 27.64
Table 1: Error between the original image and the reconstruction (built from the noisy data) with a fixed fraction of the total pixels saved.
Noise L2-T L2-H B-tree Rand
0 25.57 4.92 10000 6.44 10.94
0.03 9.36 8.59 80 7.56 11.85
0.05 13.91 12.66 62 9.57 13.14
0.1 27.39 23.19 56 15.16 16.94
0.2 61.46 43.29 51 26.73 27.89
Table 2: Error between the original image and the reconstruction with a fixed fraction of the total pixels saved.
Noise L2-T L2-H B-tree Rand
0 18.21 3.32 10000 4.55 9.03
0.03 8.21 7.67 51 6.54 9.93
0.05 13.05 12.07 15 8.91 11.35
0.1 25.91 23.11 22 15.36 16.78
0.2 54.66 43.34 12 28.20 28.77
Table 3: Error between the original image and the reconstruction with a fixed fraction of the total pixels saved.
(a) Input image.
(b) , .
(c) , .
(d) , .
Figure 1:

Input images with and without Gaussian noise of various standard deviations.

(a) Mask with L2-T method.
(b) Reconstruction with L2-T method.
(c) Mask with L2-H method.
(d) Reconstruction with L2-H method.
(e) Mask with B-TREE method.
(f) Reconstruction with B-TREE method.
(g) Mask with RAND method.
(h) Reconstruction with RAND method.
Figure 2: Masks and reconstructions for Table 2 when the input image is noiseless ().
(a) Mask with L2-T method.
(b) Reconstruction with L2-T method.
(c) Mask with L2-H method.
(d) Reconstruction with L2-H method.
(e) Mask with B-TREE method.
(f) Reconstruction with B-TREE method.
(g) Mask with RAND method.
(h) Reconstruction with RAND method.
Figure 3: Masks and reconstructions for Table 2 when the input image is affected by Gaussian noise.
(a) Mask with L2-T method.
(b) Reconstruction with L2-T method.
(c) Mask with L2-H method.
(d) Reconstruction with L2-H method.
(e) Mask with B-TREE method.
(f) Reconstruction with B-TREE method.
(g) Mask with RAND method.
(h) Reconstruction with RAND method.
Figure 4: Masks and reconstructions for Table 2 when the input image is affected by Gaussian noise.

4.2 Deeper Comparison with B-Tree

B-Tree algorithms seem, in some examples, to perform slightly better in terms of the error. Actually, this is not surprising, as the B-Tree approach builds its compression masks by optimizing with respect to this very norm. However, this is not a disadvantage of our approach when we compare with respect to other criteria, e.g.:

  • the cost of the compression is higher than with our method in terms of CPU time; this becomes worse for high-resolution images,

  • B-Tree works only with regular grids, which is a serious limitation for images where important features (e.g. edges) are located outside the grid; for images with high anisotropy this shortcoming is more critical,

  • B-Tree algorithms require refinements, such as the choice of parameters, which make them more image-dependent.

Our approach gives an analytic criterion which allows us to overcome most of these difficulties. We added more comparison elements between the two approaches to give a more complete picture of their outcomes (Table 4 and Figure 6).

Noise L2-H B-tree
time (s) time (s)
0 9.36 0.37 10000 15.65 198.23
0.03 12.19 0.38 19.59 16.16 33.92
0.05 15.91 0.38 12.20 16.42 20.58
0.1 23.26 0.39 16.57 17.80 13.71
0.2 36.30 0.37 10.00 21.85 20.70
Table 4: Error between the original image and the reconstruction (built from the noisy data) with a fixed fraction of the total pixels saved.
(a) Input image.
(b) , .
(c) , .
(d) , .
Figure 5: Input images with and without Gaussian noise of various standard deviations.
(a) Mask with L2-H method.
(b) Reconstruction with L2-H method.
(c) Mask with B-Tree method.
(d) Reconstruction with B-Tree method.
(e) Mask with L2-H method.
(f) Reconstruction with L2-H method.
(g) Mask with B-Tree method.
(h) Reconstruction with B-Tree method.
(i) Mask with L2-H method.
(j) Reconstruction with L2-H method.
(k) Mask with B-Tree method.
(l) Reconstruction with B-Tree method.
(m) Mask with L2-H method.
(n) Reconstruction with L2-H method.
(o) Mask with B-Tree method.
(p) Reconstruction with B-Tree method.
Figure 6: Masks and reconstructions for Table 4.

4.3 Improving the selection criteria

The issue raised by the above experiments and analysis is how to improve the mask selection, as it appears to be very sensitive to the noise magnitude. One way, considered in [Mainberger2012, Hoeltgen2015], is to resort to tonal optimization, where the stored data are modified simultaneously with the mask. In this paper, we first investigate a more basic idea: how to modify the criterion while keeping the analysis presented in the previous sections. It is clear that any change of the data in the Dirichlet condition causes a modification (e.g. a correction) of the final criterion. Intuitively, taking as Dirichlet data a version of the image that is either less noisy or has enhanced edges should lead to a better pixel selection. The simplest ways of doing this are presented now.

4.3.1 Sharpening the edges

We replace the image by an edge-enhanced version of itself. The criterion then becomes:

(a) .
(b) .
(c) Mask with .
(d) Reconstruction with .
Figure 7: With L2-H , .
(a) .
(b) .
(c) Mask with .
(d) Reconstruction with .
Figure 8: With L2-T , .
(a) .
(b) .
(c) Mask with .
(d) Reconstruction with .
Figure 9: With L2-H , .

We notice that sharpening the edges leads to a slight improvement in the accuracy of the reconstruction; however, this effect decreases as the noise level increases. This is due to the sharpening operators, which enhance the edges but also amplify the noise.
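A minimal sketch of such an edge-enhancement step, assuming standard unsharp masking (the exact operator and its weight are not fixed by the discussion above and are illustrative here):

```python
import numpy as np

def sharpen(f, alpha=0.5):
    """Edge enhancement before computing the selection criterion, as a
    stand-in for the paper's operator: unsharp masking via
    f - alpha * Laplacian(f), with an assumed weight alpha."""
    p = np.pad(f, 1, mode="edge")
    lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * f
    return f - alpha * lap
```

On a step edge the jump is amplified while flat regions are untouched; on noisy data, the same amplification also magnifies the noise, which explains the diminishing benefit observed above.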

4.3.2 (Pre)-Filtering the data

The idea here is to perform a small amount of filtering of the initial data. This may be done on a coarse mesh, with the goal of slightly reducing the noise level. Thus, we replace the data by the solution of a short linear diffusion. The criterion becomes:

We notice a significant improvement in the accuracy and in the obtained mask. It is also important to note that there is no need for blind and strong denoising: the linear filter is applied on a coarse mesh with the aim of slightly reducing the noise in the homogeneous areas, where intuitively we expect few pixels in the mask, and of a similar action near the edges, where even some amount of blurring does not affect the selection criterion too much.
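A minimal sketch of this mild pre-filtering, assuming a few explicit heat-equation steps (the filter, mesh, and step parameters are illustrative assumptions, not the settings of our experiments):

```python
import numpy as np

def prefilter(f, steps=5, dt=0.2):
    """Light linear (heat-equation) pre-smoothing of the noisy data before
    evaluating the Laplacian criterion: a few explicit diffusion steps
    with reflecting boundaries; dt <= 0.25 keeps the scheme stable."""
    u = f.astype(float).copy()
    for _ in range(steps):
        p = np.pad(u, 1, mode="edge")
        lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u
        u = u + dt * lap
    return u
```

With reflecting boundaries the mean grey value is conserved exactly, while the variance of the noise strictly decreases, which is precisely the "small reduction of the noise" intended here.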

We emphasize that other possible improvements within the same framework may be considered, but a more systematic study in the spirit of tonal optimization is certainly the better choice in this direction.

(a) .
(b) .
(c) Mask with .
(d) Reconstruction with .
Figure 10: With L2-H , .

4.4 Impulse Noise

We now consider images corrupted with impulse noise. In Figure 11, Figure 12 and Figure 13, we present some experiments with salt-and-pepper noise, with only salt noise, and with only pepper noise, respectively. These numerical simulations show that the L2-methods do not give a satisfying reconstruction for this sort of noise. In fact, the Laplacian takes large values at the noisy pixels, so such pixels are selected in the inpainting mask, whereas linear diffusion denoising, as is well known, does not perform well here (e.g. the large stains in Figure 11 (c)). This suggests minimizing L¹-type errors instead [Nikolova2002, Nikolova2004] to remove impulse noise such as salt-and-pepper noise, as shown in the figures.

(a) Original image with of salt and pepper noise.
(b) L2-H mask.
(c) Reconstruction.
(d) -error minimizer.
Figure 11: Image reconstruction with of total pixels saved and of salt and pepper noise applied to the input image.
(a) Original image with of salt noise.
(b) L2-H mask.
(c) Reconstruction.
(d) -error minimizer.
Figure 12: Image reconstruction with of total pixels saved and of salt noise applied to the input image.
(a) Original image with of pepper noise.
(b) L2-H mask.
(c) Reconstruction.
(d) -error minimizer.
Figure 13: Image reconstruction with of total pixels saved and of pepper noise applied to the input image.

4.5 Colored Images

A color image can be modeled by a vector-valued function whose three components represent the red, green, and blue channels respectively (Figure 14). Our strategy is to create three masks, one for each channel. This is done in Figure 15, where (a) is the original image and (c) is the reconstruction obtained by keeping a fixed fraction of the total pixels for each mask. Since we compute a mask with a fixed number of pixels for each channel, the final mask, in which the three masks are combined, may not have that same number of pixels: the three masks may share all, some, or none of their pixels. More efficient strategies, using the YCbCr color space instead of the RGB space, have been investigated in [Peter2014, Peter2017, Mohideen2020].
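The per-channel strategy described above can be sketched as follows, reusing the Laplacian criterion for each RGB channel. The criterion, the fixed per-channel fraction, and the union used to combine the masks are illustrative assumptions.

```python
import numpy as np

def channel_masks(rgb, fraction=0.05):
    """Build one inpainting mask per color channel from the Laplacian
    criterion, then combine them; 'fraction' is the assumed share of
    pixels kept per channel, and the combination is a simple union."""
    masks = []
    for c in range(3):
        f = rgb[..., c].astype(float)
        p = np.pad(f, 1, mode="edge")
        lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * f
        k = max(1, int(fraction * f.size))
        t = np.partition(np.abs(lap).ravel(), -k)[-k]
        masks.append(np.abs(lap) >= t)
    combined = masks[0] | masks[1] | masks[2]   # channels may or may not overlap
    return masks, combined
```

When the channels have edges in different places, the combined mask is larger than each individual one, illustrating the remark that the three masks need not share pixels.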

(a) Original image.
(b) Red channel to grayscale.
(c) Green channel to grayscale.
(d) Blue channel to grayscale.
Figure 14: Reconstructions for colored images.
(a) Mask L2-STA-T.
(b) Reconstruction L2-STA-T.
(c) Mask L2-STA-H.
(d) Reconstruction L2-STA-H.
Figure 15: Reconstructions for colored images.

Summary and Conclusions

We have considered the problem of finding the best interpolation data in PDE-based compression for images with noise. We aim at a unified framework for both compression and denoising, since it is not clear that doing these two tasks separately leads to satisfying results. We introduced a geometric variational model to determine a set of pixels which minimizes the distance between the initial image and its reconstruction from the data stored on that set. We extended the shape optimization approach introduced in [Belhachmi2009], based on the analysis in the framework of Γ-convergence. In particular, we studied the two approaches considered there, which differ in the way a single pixel is modelled. Both theoretical findings emphasize the importance of the Laplacian of appropriate data and highlight the deep connection between the geometric set and the inpainting operator (in our case, time-harmonic). We have performed several numerical tests and comparisons which demonstrate the efficiency of the approach in handling images with noise. Ongoing research addresses, on the one hand, a systematic study of tonal optimization techniques as a further step towards a drastic reduction of the "size" of the stored set without loss of accuracy and, on the other hand, the extension of the shape analysis methods to nonlinear reconstruction operators, which will open exciting perspectives in the field of PDE-based image compression.

This work may have several applications, such as the compression of "real world" images (since they always contain noise), video compression using only variational methods [Andris2016], microscopy imaging, or denoising by inpainting [Adam2017].


Appendix A Framework of the Γ-convergence

For the sake of completeness, we recall the definitions of the p-capacity, for 1 < p < +∞, of the γ-convergence and of the Γ-convergence, as written in [Belhachmi2009]. More details about the p-capacity or the shape optimization tools can be found in [DalMaso1993, Bucur2005]. Let us start with some definitions.

Definition A.1 (p-capacity of a set).

Let Ω be a smooth bounded open set of ℝ^N and 1 < p < +∞. We define the p-capacity of a subset E in Ω by

cap_p(E, Ω) = inf { ∫_Ω |∇u|^p dx : u ∈ W_0^{1,p}(Ω), u ≥ 1 a.e. in a neighborhood of E }.
We notice that if, for a given set E and some p, we have cap_p(E, Ω) = 0, then we have cap_q(E, Ω) = 0 for every q. Thus, the sets of vanishing capacity are the same for all p. That is why we will drop the p and simply write cap(E, Ω) instead of cap_p(E, Ω).
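
For instance (a standard computation, not specific to the paper, but relevant to its "fat pixels" viewpoint): a single point x_0 ∈ Ω ⊂ ℝ² has zero capacity, which is why isolated pixels must be thickened into balls of positive radius to be seen by the reconstruction operator. With p = 2 and a ball B_R(x_0) ⊂ Ω, the truncated logarithmic cutoffs are admissible test functions:

```latex
% Admissible test functions concentrating on the point x_0:
\[
  u_\varepsilon(x) \;=\; \min\!\left(1,\;
     \frac{\log\big(R/|x-x_0|\big)}{\log(R/\varepsilon)}\right),
  \qquad u_\varepsilon = 1 \text{ on } B_\varepsilon(x_0),
  \quad u_\varepsilon \in H_0^1\big(B_R(x_0)\big).
\]
% Their Dirichlet energy vanishes as \varepsilon \to 0:
\[
  \int_{B_R} |\nabla u_\varepsilon|^2 \,dx
  \;=\; \frac{2\pi}{\log(R/\varepsilon)} \;\longrightarrow\; 0,
  \qquad\text{hence}\qquad \mathrm{cap}\big(\{x_0\},\Omega\big) = 0 .
\]
```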

Definition A.2 (quasi-everywhere property).

We say that a property holds quasi-everywhere if it holds for all x in Ω except for the elements of a set E ⊂ Ω such that cap(E, Ω) = 0. We write q.e.

Definition A.3 (quasi-open set).

We say that a subset A of Ω is quasi-open if for every ε > 0 there exists an open subset A_ε of Ω such that A ⊆ A_ε and cap(A_ε ∖ A, Ω) < ε.

We introduce the set M_0(Ω), which is denoted differently in [DalMaso1987].

Definition A.4 (The set M_0(Ω)).

We denote by M_0(Ω) the set of all non-negative Borel measures μ on Ω such that

  • μ(B) = 0, for every Borel set B ⊂ Ω with cap(B, Ω) = 0,

  • μ(B) = inf { μ(A) : A quasi-open, B ⊆ A }, for every Borel subset B of Ω.

Definition A.5 (p-capacity of a measure).

The p-capacity, for 1 < p < +∞, of a measure μ of M_0(Ω) is defined by

cap_p(μ, Ω) = inf { ∫_Ω |∇u|^p dx + ∫_Ω |1 − u|^p dμ : u ∈ W_0^{1,p}(Ω) }.
The next proposition gives us a natural way to identify a set with a measure of M_0(Ω).

Proposition A.1.

Let E be a Borel subset of Ω. We denote by ∞_E the measure of M_0(Ω) defined by

∞_E(B) = 0 if cap(B ∩ E, Ω) = 0, and ∞_E(B) = +∞ otherwise, for every Borel set B ⊂ Ω.

For a given Borel subset E of Ω, we have cap_p(∞_E, Ω) = cap_p(E, Ω).

Definition A.6 (Γ-convergence).

Let X be a topological space. We say that the sequence of functionals F_n, from X into ℝ ∪ {+∞}, Γ-converges to F in X if

  • for every x in X, there exists a sequence (x_n) in X such that x_n → x in X and limsup_n F_n(x_n) ≤ F(x),

  • for every sequence (x_n) in X such that x_n → x in X, we have F(x) ≤ liminf_n F_n(x_n).

We sometimes write F = Γ-lim_n F_n.
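
As a classical illustration (not taken from the paper), the sequence F_n(x) = sin(nx) on X = ℝ Γ-converges to the constant function −1, although it has no pointwise limit:

```latex
% Liminf inequality: F_n \ge -1 everywhere, so for any x_n \to x,
%   \liminf_{n\to\infty} F_n(x_n) \ge -1 .
% Recovery sequence: move x_n to the nearest point where the sine equals -1,
\[
  x_n \;=\; \frac{1}{n}\left(2\pi\Big\lfloor \tfrac{nx}{2\pi} \Big\rfloor
    + \tfrac{3\pi}{2}\right) \;\longrightarrow\; x,
  \qquad F_n(x_n) = \sin(nx_n) = -1 .
\]
% Hence \Gamma\text{-}\lim_n F_n \equiv -1: the \Gamma-limit records the
% lowest values the functionals can approach near each point.
```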

Let us recall the following property:

Proposition A.2.

Let (F_n) be a Γ-convergent sequence, in a Hilbert space X, towards a limit F, and let G : X → ℝ be continuous; then F_n + G Γ-converges to F + G.

Definition A.7 (γ-convergence).

We say that a sequence (μ_n) of measures in M_0(Ω) γ-converges to a measure μ in M_0(Ω) with respect to W_0^{1,p}(Ω) (or γ_p-converges to μ) if the associated functionals F_{μ_n}(u) = ∫_Ω |∇u|^p dx + ∫_Ω |u|^p dμ_n Γ-converge in L^p(Ω) to F_μ.

We give a locality result for the γ-convergence, then the γ-compactness of M_0(Ω), from [DalMaso1997] and [DalMaso1987] respectively.

Proposition A.3 (Locality of the γ-convergence).

Let (μ_n) and (ν_n) be two sequences of measures in M_0(Ω) which γ-converge to μ and ν respectively. Assume that μ_n and ν_n coincide q.e. on a subset A of Ω, for every n. Then μ and ν coincide q.e. on A.

Proposition A.4 (γ-compactness of M_0(Ω)).

The set M_0(Ω) is compact with respect to the γ-convergence. Moreover, the class of measures of the form