Image restoration often requires solving an inverse problem: it amounts to estimating an image $f$ from a measurement
$y = Uf + w,$
obtained through a non-invertible linear degradation operator $U$, and contaminated by an additive noise $w$. Typical degradation operators include masking, subsampling on a uniform grid, and convolution; the corresponding inverse problems are often named inpainting (or interpolation), zooming, and deblurring, respectively. Estimating $f$ requires some prior information on the image, or equivalently image models. Finding good image models is therefore at the heart of image estimation.
Mixture models are often used as image priors since they enjoy the flexibility of signal description by assuming that the signals are generated by a mixture of probability distributions. Gaussian mixture models (GMM) have been shown to provide powerful tools for data classification and segmentation applications (see for example [14, 30, 54, 58]); however, they have not yet been shown to generate state-of-the-art results in a general class of inverse problems. Ghahramani and Jordan have applied GMM to learning from incomplete data, i.e., images degraded by a masking operator, and have shown good classification results; however, this does not lead to state-of-the-art inpainting. Portilla et al. have shown impressive image denoising results by assuming Gaussian scale mixture models (deviating from GMM by assuming different scale factors in the mixture of Gaussians) on wavelet representations, and have recently extended this approach to image deblurring. Recently, Zhou et al. have developed a nonparametric Bayesian approach using more elaborate models, such as beta and Dirichlet processes, which leads to excellent results in denoising and inpainting.
The now popular sparse signal models, on the other hand, assume that the signals can be accurately represented with a few coefficients selecting atoms in some dictionary. Recently, very impressive image restoration results have been obtained with local patch-based sparse representations calculated with dictionaries learned from natural images [2, 23, 43]. Relative to pre-fixed dictionaries such as wavelets, curvelets, and bandlets, learned dictionaries enjoy the advantage of being better adapted to the images, thereby enhancing the sparsity. However, dictionary learning is a large-scale and highly non-convex problem. It requires high computational complexity, and its mathematical behavior is not yet well understood. With the aforementioned dictionaries, the actual sparse image representation is calculated with relatively expensive non-linear estimations, such as $\ell^1$ minimizations or matching pursuits [19, 22, 48]. More importantly, as will be reviewed in Section III-A, with a full degree of freedom in selecting the approximation space (atoms of the dictionary), non-linear sparse inverse problem estimation may be unstable and imprecise due to the coherence of the dictionary.
Structured sparse image representation models further regularize the sparse estimation by assuming dependency in the selection of the active atoms. One simultaneously selects blocks of approximation atoms, thereby reducing the number of possible approximation spaces [4, 25, 26, 35, 36, 59]. These structured approximations have been shown to improve signal estimation in a compressive sensing context for random operators. However, for more unstable inverse problems such as zooming or deblurring, this regularization by itself is not sufficient to reach state-of-the-art results. Recently some good image zooming results have been obtained with structured sparsity based on directional block structures in wavelet representations. However, this directional regularization is not general enough to be extended to solve other inverse problems.
This work shows that Gaussian mixture models (GMM), estimated via a MAP-EM (maximum a posteriori expectation-maximization) algorithm, lead to results in the same ballpark as the state-of-the-art in a number of imaging inverse problems, at a lower computational cost. The MAP-EM algorithm is described in Section II. After briefly reviewing sparse inverse problem estimation approaches, a mathematical equivalence between the proposed piecewise linear estimation (PLE) from GMM/MAP-EM and structured sparse estimation is shown in Section III. This connection shows that PLE stabilizes the sparse estimation with a structured learned overcomplete dictionary composed of a union of PCA (Principal Component Analysis) bases, and with collaborative prior information incorporated in the eigenvalues, which privileges in the estimation the atoms that are more likely to be important. This interpretation also suggests an effective, dictionary-motivated initialization for the MAP-EM algorithm. In Section IV we support the importance of different components of the proposed PLE via some initial experiments. Applications of the proposed PLE in image inpainting, zooming, and deblurring are presented in Sections V, VI, and VII respectively, and are compared with previous state-of-the-art methods. Conclusions are drawn in Section VIII.
II Piecewise Linear Estimation
This section describes the Gaussian mixture models (GMM) and the MAP-EM algorithm, which lead to the proposed piecewise linear estimation (PLE).
II-A Gaussian Mixture Model
Natural images include rich and non-stationary content, whereas when restricted to local windows, image structures appear to be simpler and are therefore easier to model. Following some previous works [2, 10, 43], an image is decomposed into overlapping local patches
$y_i = U_i f_i + w_i, \quad 1 \le i \le I,$
where $U_i$ is the degradation operator restricted to the $i$-th patch, and $y_i$, $f_i$ and $w_i$ are respectively the degraded image patch, the original image patch and the noise restricted to the patch, with $I$ being the total number of patches. Treated as a signal, each of the patches $f_i$ is estimated, and their corresponding estimates are finally combined and averaged, leading to the estimate of the image.
GMM describes local image patches with a mixture of Gaussian distributions. Assume there exist $K$ Gaussian distributions $\{\mathcal{N}(\mu_k, \Sigma_k)\}_{1 \le k \le K}$ parametrized by their means $\mu_k$ and covariances $\Sigma_k$. Each image patch $f_i$ is independently drawn from one of these Gaussians with an unknown index $k_i$, whose probability density function is
$p(f_i) = \frac{1}{(2\pi)^{N/2} |\Sigma_{k_i}|^{1/2}} \exp\left(-\frac{1}{2}(f_i - \mu_{k_i})^T \Sigma_{k_i}^{-1} (f_i - \mu_{k_i})\right). \quad (2)$
Estimating $f_i$ from $y_i$ can then be cast into the following problems:
Estimate the Gaussian parameters $\{(\mu_k, \Sigma_k)\}_{1 \le k \le K}$ from the degraded data $\{y_i\}_{1 \le i \le I}$.
Identify the Gaussian distribution $k_i$ that generates each patch $f_i$, $1 \le i \le I$.
Estimate $f_i$ from its corresponding Gaussian distribution $\mathcal{N}(\mu_{k_i}, \Sigma_{k_i})$, $1 \le i \le I$.
These problems are overall non-convex. The next section presents a maximum a posteriori expectation-maximization (MAP-EM) algorithm that calculates a local-minimum solution.
II-B MAP-EM Algorithm
Following an initialization, addressed in Section III-C, the MAP-EM algorithm is an iterative procedure that alternates between two steps:
In the E-step, assuming that the estimates of the Gaussian parameters $(\tilde{\mu}_k, \tilde{\Sigma}_k)$ are known (following the previous M-step), for each patch one calculates the maximum a posteriori (MAP) estimates with all the Gaussian models, and selects the best Gaussian model $\tilde{k}_i$ to obtain the estimate $\tilde{f}_i$ of the patch.
In the M-step, assuming that the Gaussian model selection $\tilde{k}_i$ and the signal estimate $\tilde{f}_i$, $1 \le i \le I$, are known (following the previous E-step), one estimates (updates) the Gaussian models $(\tilde{\mu}_k, \tilde{\Sigma}_k)$.
II-B1 E-step: Signal Estimation and Model Selection
In the E-step, the estimates of the Gaussian parameters $(\tilde{\mu}_k, \tilde{\Sigma}_k)$ are assumed to be known. To simplify the notation, we assume without loss of generality that the Gaussians have zero means, $\mu_k = 0$, as one can always center the image patches with respect to the means.
For each image patch $y_i$, the signal estimation and model selection is calculated to maximize the log a-posteriori probability:
$(\tilde{f}_i, \tilde{k}_i) = \arg\max_{f,k} \log p(f, k \mid y_i) = \arg\max_{f,k} \left( \log p(y_i \mid f, k) + \log p(f \mid k) \right)$
$= \arg\min_{f,k} \left( \|U_i f - y_i\|^2 + \sigma^2 f^T \Sigma_k^{-1} f + \sigma^2 \log |\Sigma_k| \right), \quad (3)$
where the second equality follows from the Bayes rule and the third one is derived with the assumption that $w_i \sim \mathcal{N}(0, \sigma^2 \mathrm{Id})$, with $\mathrm{Id}$ the identity matrix, and by dropping the terms that do not depend on $f$ and $k$.
The maximization is first calculated over $f$ and then over $k$. Given a Gaussian signal model $f \sim \mathcal{N}(0, \Sigma_k)$, it is well known that the MAP estimate
$\tilde{f}_i^k = \arg\min_f \left( \|U_i f - y_i\|^2 + \sigma^2 f^T \Sigma_k^{-1} f \right) \quad (4)$
is calculated with a linear filtering,
$\tilde{f}_i^k = W_{i,k}\, y_i, \quad (5)$
where
$W_{i,k} = (U_i^T U_i + \sigma^2 \Sigma_k^{-1})^{-1} U_i^T \quad (6)$
is a Wiener filter matrix. Since $U_i^T U_i$ is positive semi-definite, $U_i^T U_i + \sigma^2 \Sigma_k^{-1}$ is positive definite and its inverse is well defined, if $\Sigma_k$ is full rank.
The best Gaussian model $\tilde{k}_i$ that generates the maximum MAP probability among all the models is then selected with the estimated $\tilde{f}_i^k$,
$\tilde{k}_i = \arg\min_k \left( \|U_i \tilde{f}_i^k - y_i\|^2 + \sigma^2 (\tilde{f}_i^k)^T \Sigma_k^{-1} \tilde{f}_i^k + \sigma^2 \log |\Sigma_k| \right). \quad (7)$
The signal estimate is obtained by plugging the best model $\tilde{k}_i$ into the MAP estimate (5),
$\tilde{f}_i = \tilde{f}_i^{\tilde{k}_i}. \quad (8)$
The whole E-step is basically calculated with a set of linear filters. For typical applications such as zooming and deblurring, where the degradation operators are translation-invariant and do not depend on the patch index $i$, i.e., $U_i = U$ for all $i$, the Wiener filter matrices (6) can be precomputed for the $K$ Gaussian distributions. Calculating (5) thus requires only $O(KN^2)$ floating-point operations (flops), where $N$ is the image patch dimension. For a translation-variant degradation $U_i$, random masking for example, (6) needs to be calculated at each position where $U_i$ changes. Since $U_i^T U_i + \sigma^2 \Sigma_k^{-1}$ is positive definite, the matrix inversion can be implemented with $O(N^3)$ flops through a Cholesky factorization. All this makes the E-step computationally efficient.
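To make the E-step concrete, the following NumPy sketch computes the Wiener-filtered MAP estimate (5)-(6) and the model selection cost (7) for a single patch. The function name and interface are ours, not from the paper, and the Gaussian means are handled explicitly rather than assumed zero:

```python
import numpy as np

def e_step(y, U, means, covs, sigma2):
    """MAP estimate and model selection for one patch (sketch of eqs. 4-8).

    y: degraded patch (M,), U: degradation operator (M, N),
    means/covs: lists of K Gaussian parameters, sigma2: noise variance.
    Returns the selected model index and the patch estimate.
    """
    best_k, best_cost, best_f = None, np.inf, None
    for k, (mu, Sigma) in enumerate(zip(means, covs)):
        Sigma_inv = np.linalg.inv(Sigma)
        yc = y - U @ mu                        # center w.r.t. the Gaussian mean
        # Wiener filter (6): (U^T U + sigma2 * Sigma^-1)^-1 U^T
        W = np.linalg.solve(U.T @ U + sigma2 * Sigma_inv, U.T)
        f = W @ yc                             # linear MAP estimate (5), centered
        # Model selection cost (7): fidelity + prior + log-determinant terms
        _, logdet = np.linalg.slogdet(Sigma)
        cost = (np.sum((U @ f - yc) ** 2)
                + sigma2 * f @ Sigma_inv @ f
                + sigma2 * logdet)
        if cost < best_cost:
            best_k, best_cost, best_f = k, cost, f + mu
    return best_k, best_f
```

With two anisotropic Gaussians and a patch drawn along the dominant direction of the first one, the first model is selected and the estimate is close to a Wiener-shrunk version of the observation.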
II-B2 M-step: Model Estimation
In the M-step, the Gaussian model selection $\tilde{k}_i$ and the signal estimate $\tilde{f}_i$ of all the patches are assumed to be known. Let $\mathcal{C}_k$ be the ensemble of the patch indices $i$ that are assigned to the $k$-th Gaussian model, i.e., $\mathcal{C}_k = \{i : \tilde{k}_i = k\}$, and let $|\mathcal{C}_k|$ be its cardinality. The parameters of each Gaussian model are estimated with the maximum likelihood estimate using all the patches assigned to that Gaussian cluster. With the Gaussian model (2), one can easily verify that the resulting estimate is the empirical estimate
$\tilde{\mu}_k = \frac{1}{|\mathcal{C}_k|} \sum_{i \in \mathcal{C}_k} \tilde{f}_i, \qquad \tilde{\Sigma}_k = \frac{1}{|\mathcal{C}_k|} \sum_{i \in \mathcal{C}_k} (\tilde{f}_i - \tilde{\mu}_k)(\tilde{f}_i - \tilde{\mu}_k)^T. \quad (10)$
The empirical covariance estimate may be improved through regularization when there is a lack of data (for a typical $8 \times 8$ patch size, the dimension of the covariance matrix is $64 \times 64$, while $|\mathcal{C}_k|$ is typically in the order of a few hundred). A simple and standard eigenvalue-based regularization is used here, $\tilde{\Sigma}_k \leftarrow \tilde{\Sigma}_k + \varepsilon \mathrm{Id}$, where $\varepsilon$ is a small constant. The regularization also guarantees that the estimate of the covariance matrix is full-rank, so that the Wiener filter (6) is always well defined. This is important for the Gaussian model selection (7) as well, since if $\Sigma_k$ is not full rank, then $\log |\Sigma_k| = -\infty$, biasing the model selection. The computational complexity of the M-step is negligible with respect to the E-step.
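A minimal M-step in the same vein, implementing the empirical estimates (10) together with the $\Sigma + \varepsilon\mathrm{Id}$ regularization (again a sketch with our own naming; clusters are assumed non-empty for simplicity):

```python
import numpy as np

def m_step(patches, labels, K, eps=1e-2):
    """Update Gaussian parameters from clustered patch estimates (eq. 10).

    patches: (I, N) array of patch estimates, labels: (I,) model indices,
    K: number of Gaussian models, eps: regularization constant.
    """
    N = patches.shape[1]
    means, covs = [], []
    for k in range(K):
        cluster = patches[labels == k]          # patches assigned to model k
        mu = cluster.mean(axis=0)               # empirical mean
        centered = cluster - mu
        Sigma = centered.T @ centered / len(cluster)   # empirical covariance
        covs.append(Sigma + eps * np.eye(N))    # eps*Id guarantees full rank
        means.append(mu)
    return means, covs
```

The regularization term makes every returned covariance strictly positive definite, so the Wiener filters of the next E-step are always well defined.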
As the MAP-EM algorithm described above iterates, the joint MAP probability of the observed signals always increases. This can be observed by interpreting the E- and M-steps as a coordinate descent optimization. In the experiments, the convergence of the patch clustering and of the resulting PSNR is always observed.
III PLE and Structured Sparse Estimation
The MAP-EM algorithm described above requires an initialization. A good initialization is highly important for iterative algorithms that try to solve non-convex problems, and remains an active research topic [5, 29]. This section describes a dual structured sparse interpretation of GMM and MAP-EM, which suggests an effective dictionary motivated initialization for the MAP-EM algorithm. Moreover, it shows that the resulting piecewise linear estimate stabilizes traditional sparse inverse problem estimation.
The sparse inverse problem estimation approaches will be first reviewed. After describing the connection between MAP-EM and structured sparsity via estimation in PCA bases, an intuitive and effective initialization will be presented.
III-A Sparse Inverse Problem Estimation
Traditional sparse super-resolution estimation in dictionaries provides effective non-parametric approaches to inverse problems, although the coherence of the dictionary and its large degree of freedom may become sources of instability and errors. These algorithms are briefly reviewed in this section. "Super-resolution" is loosely used here, as these approaches try to recover information that is lost after the degradation.
A signal $f$ is estimated by taking advantage of prior information which specifies a dictionary $D = [d_1, \ldots, d_T]$, having $T$ columns corresponding to atoms $d_m$, where $f$ has a sparse approximation. This dictionary may be a basis or some redundant frame, with $T \ge N$. Sparsity means that $f$ is well approximated by its orthogonal projection over a subspace $V_\Lambda$ generated by a small number $|\Lambda| \ll N$ of column vectors $d_m$ of $D$:
$f = D x_\Lambda + \varepsilon, \quad (11)$
where $x$ is the transform coefficient vector, $x_\Lambda$ selects the coefficients in $\Lambda$ and sets the others to zero, $D x_\Lambda$ multiplies the matrix $D$ with the vector $x_\Lambda$, and $\varepsilon$ is a small approximation error.
Sparse inversion algorithms try to estimate from the degraded signal $y$ the support $\Lambda$ and the coefficients $x_\Lambda$ that specify the projection of $f$ in the approximation space $V_\Lambda$. It results from (11) that
$y = Uf + w = U D x_\Lambda + U\varepsilon + w. \quad (12)$
This means that $y$ is well approximated by the same sparse set of atoms and the same coefficients in the transformed dictionary $D' = UD$, whose columns are the transformed vectors $d'_m = U d_m$.
Since $U$ is not an invertible operator, the transformed dictionary $D'$ is redundant, with column vectors which are linearly dependent. It results that $y$ has an infinite number of possible decompositions in $D'$. A sparse approximation of $y$ can be calculated with a basis pursuit algorithm which minimizes a Lagrangian penalized by a sparse $\ell^1$ norm [16, 61],
$\tilde{x} = \arg\min_x \left( \|D' x - y\|^2 + \lambda \|x\|_1 \right), \quad (13)$
or with faster greedy matching pursuit algorithms. The resulting sparse estimate of $f$ is
$\tilde{f} = D \tilde{x}. \quad (14)$
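For reference, the estimate (13)-(14) can be sketched with a simple iterative soft-thresholding (ISTA) solver for the $\ell^1$ Lagrangian; ISTA is one standard choice of basis pursuit solver, not necessarily the one used in the cited works, and the function name is ours:

```python
import numpy as np

def sparse_sr(y, U, D, lam=0.1, n_iter=500):
    """Sketch of (13)-(14): ISTA on the transformed dictionary D' = U D,
    then reconstruction f = D x with the recovered sparse code."""
    Dp = U @ D                                   # transformed dictionary D'
    L = np.linalg.norm(Dp, 2) ** 2               # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = Dp.T @ (Dp @ x - y)               # gradient of the data term
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return D @ x                                 # sparse estimate (14)
```

In the trivial case where $D$ and $U$ are both the identity, the solver reduces to a single soft-thresholding of $y$, which is a useful sanity check.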
As we explain next, this simple approach is not straightforward and often not as effective as it seems. The restricted isometry property of Candès and Tao and Donoho is a strong sufficient condition which guarantees the correctness of the $\ell^1$ penalized estimation. This restricted isometry property is valid for certain classes of operators $U$, but not for important structured operators such as subsampling on a uniform grid or convolution. For structured operators, the precision and stability of this sparse inverse estimation depend upon the "geometry" of the approximation support $\Lambda$ of $f$, which is not well understood mathematically, despite some sufficient exact recovery conditions proved for example by Tropp, and many others (mostly related to the coherence of the equivalent dictionary). Nevertheless, some necessary qualitative conditions for a precise and stable sparse super-resolution estimate (14) can be deduced as follows [45, 47]:
Sparsity. $D$ provides a sparse representation for the image $f$.
Recoverability. The transformed atoms $d'_m = U d_m$ have non-negligible norms. If the degradation operator $U$ applied to an atom $d_m$ leaves no "trace," i.e., $\|U d_m\| \approx 0$, the corresponding coefficient $x[m]$ cannot be recovered from $y$ with (13). We will see in the next subsection that this recoverability property of transformed relevant atoms having sufficient energy is critical for the GMM/MAP-EM introduced in the previous section as well.
Stability. The transformed dictionary $D'$ is incoherent enough. Sparse inverse problem estimation may be unstable if some columns in $D'$ are too similar. To see this, let us imagine a toy example where a constant-value atom and a highly oscillatory atom (with values alternating between $+1$ and $-1$), after a subsampling of factor two, become identical. The sparse estimation (13) cannot distinguish between them, which results in an unstable inverse problem estimate (14). The coherence of $D' = UD$ depends on $D$ as well as on the operator $U$. Regular operators such as subsampling on a uniform grid and convolution usually lead to a coherent $D'$, which makes accurate inverse problem estimation difficult.
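The toy example above is easy to reproduce numerically: after a factor-two subsampling, the constant atom and the alternating atom produce exactly the same measurements, so no estimator can tell them apart from $y$ alone.

```python
import numpy as np

# Toy instability example: a constant atom and an alternating atom
# become identical after 2x subsampling.
n = 8
d_const = np.ones(n) / np.sqrt(n)             # constant-value atom
d_osc = np.array([1.0, -1.0] * (n // 2))      # highly oscillatory atom
d_osc /= np.linalg.norm(d_osc)

U = np.eye(n)[::2]                            # keep every other sample
assert np.allclose(U @ d_const, U @ d_osc)    # identical after degradation
```

The two transformed columns of $D'$ coincide, so any sparse code supported on one of them fits the data exactly as well as the same code supported on the other.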
Several authors have applied this sparse super-resolution framework (13)-(14) to image inverse problems. Sparse estimation in dictionaries of curvelet frames and DCT has been applied successfully to image inpainting [24, 27, 33]. However, for uniform grid interpolations, Section VI shows that the resulting interpolation estimates are not as precise as simple linear bicubic interpolations. A contourlet zooming algorithm can provide a slightly better PSNR than a bicubic interpolation, but the results are considerably below the state-of-the-art. Learned dictionaries of image patches have generated good inpainting results [43, 71]. In some recent works, sparse super-resolution algorithms with learned dictionaries have been studied for zooming and deblurring [41, 65]. As shown in Sections VI and VII, although they sometimes produce good visual quality, they often generate artifacts and the resulting PSNRs are not as good as those of more standard methods.
Another source of instability of these algorithms comes from their full degree of freedom. The non-linear approximation space $V_\Lambda$ is estimated by selecting the approximation support $\Lambda$, with basically no constraint. A selection of $|\Lambda|$ atoms from a dictionary of $T$ atoms thus corresponds to a choice of an approximation space among $\binom{T}{|\Lambda|}$ possible subspaces. In a local patch-based sparse estimation, typical dictionary sizes and sparsity levels lead to a huge degree of freedom, further stressing the inaccuracy of estimating $f$ from the degraded $y$.
These issues are addressed with the proposed PLE framework and its mathematical connection with structured sparse models described next.
III-B Structured Sparse Estimation in PCA Bases
The PCA bases bridge the GMM/MAP-EM framework presented in Section II with the sparse estimation described above. For signals following a statistical distribution with covariance $\Sigma_k$, a PCA basis is defined as the matrix that diagonalizes the data covariance matrix,
$\Sigma_k = B_k S_k B_k^T, \quad (15)$
where $B_k$ is the PCA basis and $S_k = \mathrm{diag}(\lambda_1^k, \ldots, \lambda_N^k)$ is a diagonal matrix, whose diagonal elements $\lambda_1^k \ge \lambda_2^k \ge \cdots \ge \lambda_N^k$ are the sorted eigenvalues. It can be shown that the PCA basis is orthonormal, i.e., $B_k^T B_k = \mathrm{Id}$, each of its columns $b_m^k$, $1 \le m \le N$, being an atom that represents one principal direction. The eigenvalues are non-negative, $\lambda_m^k \ge 0$, and measure the energy of the signals in each of the principal directions $b_m^k$.
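Numerically, (15) is a symmetric eigendecomposition of the empirical covariance. A small helper (our own naming, with eigenvalues sorted in decreasing order) makes this concrete:

```python
import numpy as np

def pca_basis(patches):
    """Diagonalize the empirical covariance (eq. 15):
    Sigma = B diag(lam) B^T, with lam sorted in decreasing order."""
    mu = patches.mean(axis=0)
    centered = patches - mu
    Sigma = centered.T @ centered / len(patches)   # empirical covariance
    lam, B = np.linalg.eigh(Sigma)                 # ascending eigenvalues
    order = np.argsort(lam)[::-1]                  # re-sort in decreasing order
    return B[:, order], lam[order], mu
```

The returned basis is orthonormal and reconstructs the covariance exactly, which can be checked directly on random data.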
Transforming the patch into a PCA basis, $f = B_k a$, the MAP estimate (4) can be rewritten as
$\tilde{f}_i^k = B_k \tilde{a}_i^k, \quad (16)$
where, following simple algebra and calculus, the MAP estimate of the PCA coefficients is obtained by
$\tilde{a}_i^k = \arg\min_a \left( \|U_i B_k a - y_i\|^2 + \sigma^2 \sum_{m=1}^{N} \frac{|a[m]|^2}{\lambda_m^k} \right). \quad (17)$
Comparing (17) with (13), the MAP-EM estimation can thus be interpreted as a structured sparse estimation. As illustrated in Figure 1, the proposed dictionary shares with traditional learned overcomplete dictionaries the advantages of being overcomplete and adapted to the image under test, thanks to the Gaussian model estimation in the M-step (which is equivalent to updating the PCAs), but the resulting piecewise linear estimator (PLE) is more structured than the traditional non-linear sparse estimation. PLE is calculated with a linear estimation in each basis and a non-linear best basis selection:
Nonlinear block sparsity. The dictionary is composed of a union of $K$ PCA bases. To represent an image patch, the non-linear model selection (3) in the E-step restricts the estimation to only one basis ($N$ atoms out of the $KN$ are selected, in a group), and has a degree of freedom equal to $K$, sharply reduced from that of the traditional sparse estimation, which has the full degree of freedom in atom selection.
Linear collaborative filtering. Inside each PCA basis, the atoms are pre-ordered by their associated eigenvalues (which decay very fast, as we will later see, leading to sparsity inside the block as well). In contrast to the non-linear $\ell^1$ sparse estimation (13), the MAP estimate (17) implements the regularization with a weighted $\ell^2$ norm of the coefficients, the weights being the inverses of the eigenvalues $\lambda_m^k$, and is calculated with a linear filtering (5)-(6). The eigenvalues are computed from all the signals in the same Gaussian distribution class. The resulting estimation therefore implements a collaborative filtering which incorporates the information from all the signals in the same cluster. The weighting scheme privileges the coefficients $a[m]$ corresponding to the principal directions with large eigenvalues $\lambda_m^k$, where the energy is likely to be high, and penalizes the others. For ill-posed inverse problems, the collaborative prior information incorporated in the eigenvalues further stabilizes the estimate. (Note that this collaborative weighting is fundamentally different from the standard one used in iterative weighted approaches to sparse coding.)
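Since the weighted $\ell^2$ penalty in (17) is quadratic, the estimate has a closed-form, ridge-regression-like solution. A sketch (our own naming):

```python
import numpy as np

def ple_coeffs(y, U, B, lam, sigma2):
    """Eigenvalue-weighted MAP coefficients, a sketch of (17):
    argmin_a ||U B a - y||^2 + sigma2 * sum_m a[m]^2 / lam[m]."""
    D = U @ B                                   # transformed PCA basis
    A = D.T @ D + sigma2 * np.diag(1.0 / lam)   # quadratic (ridge-type) system
    a = np.linalg.solve(A, D.T @ y)
    return a                                    # patch estimate is B @ a
```

The eigenvalue weighting behaves as expected: directions with large eigenvalues are barely shrunk, while directions with near-zero eigenvalues are driven toward zero.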
As described in Section II, the complexity of the MAP-EM algorithm is dominated by the E-step. For an image patch of dimension $N$, it costs $O(KN^2)$ flops for translation-invariant degradation operators such as uniform subsampling and convolution, and $O(KN^3)$ flops for translation-variant operators such as random masking, where $K$ is the number of PCA bases. The overall complexity is therefore tightly upper bounded by $O(LKN^2)$ or $O(LKN^3)$, where $L$ is the number of iterations. As will be shown in Section IV, the algorithm converges fast for image inverse problems, typically in 3 to 5 iterations. On the other hand, the complexity of the $\ell^1$ minimization with the same dictionary is of the same order per iteration, with typically a large factor in front, as the $\ell^1$ minimization converges slowly in practice. The MAP-EM algorithm is thus typically one or two orders of magnitude faster than the $\ell^1$ sparse estimation.
To conclude, let us come back to the recoverability property mentioned in the previous section. We see from (17) that if an eigenvector of the covariance matrix is killed by the operator $U_i$, i.e., $U_i b_m^k \approx 0$, then its contribution to the recovery of $y_i$ is virtually null, while it pays a price proportional to the inverse of the corresponding eigenvalue. Then, it will not be used in the optimization (17), and thereby in the reconstruction of the signal following (16). This means that the wrong model might be selected and an improper reconstruction obtained. This further stresses the importance of a correct design of the dictionary elements, which, from the description just presented, is equivalent to the correct design of the covariance matrices, including their initialization, which is described next.
III-C Initialization of MAP-EM
The PCA formulation just described not only reveals the connection between the MAP-EM estimator and structured sparse modeling, but it is crucial for understanding how to initialize the Gaussian models as well.
As explained in Section III-A, for the sparse inverse problem estimation model to have the super-resolution ability, the first requirement on the dictionary is to be able to provide sparse representations of the image. It has been shown that capturing image directional regularity is highly important for sparse representations [2, 11, 46]. In dictionary learning, for example, the most prominent atoms look like local edges, good at representing contours, as illustrated in Figure 2-(a). Therefore the initial PCAs in our framework, which following (15) will lead to the initial Gaussians, are designed to capture image directional regularity.
The initial directional PCA bases are calculated following a simple numerical procedure. Directions from $0$ to $\pi$ are uniformly sampled into a set of angles, and one PCA basis is calculated per angle. The calculation of the PCA basis at an angle $\theta$ uses a synthetic black-and-white edge image following the same direction, as illustrated in Figure 2-(b). Local patches that touch the contour are collected and are used to calculate the PCA basis (following (10) and (15)). The first atom, which is almost DC, is replaced by DC, and a Gram-Schmidt orthogonalization is applied to the other atoms to ensure the orthogonality of the basis. The patches contain edges that are translation-invariant. As the covariance of a stationary process is diagonalized by the Fourier basis, unsurprisingly, the first few important atoms of the resulting PCA basis are similar to cosine atoms oscillating in the direction $\theta$, from low frequency to high frequency, as shown in Figure 2-(c). Compared with the Fourier vectors, these PCAs enjoy the advantage of being free of the periodic boundary issue, so that they can provide sparse representations for local image patches. The eigenvalues of all the bases are initialized with the same ones obtained from the synthetic contour image, which have fast decay, Figure 2-(d). These, following (15), complete the covariance initialization. The Gaussian means are initialized with zeros.
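The construction of one directional basis can be sketched as follows; the synthetic step-edge image and the patch-collection rule are our simplified assumptions, and the DC replacement and Gram-Schmidt steps described above are omitted for brevity:

```python
import numpy as np

def directional_pca(theta, img_size=32, patch=8):
    """Sketch of the directional PCA initialization: build a synthetic
    black-and-white edge image at angle theta, collect patches touching
    the contour, and diagonalize their empirical covariance."""
    # Synthetic step edge: pixels on one side of the line through the
    # center at angle theta are 1, the others 0.
    ii, jj = np.mgrid[0:img_size, 0:img_size] - img_size / 2.0
    edge_img = (np.cos(theta) * jj - np.sin(theta) * ii > 0).astype(float)

    patches = []
    for i in range(img_size - patch):
        for j in range(img_size - patch):
            p = edge_img[i:i + patch, j:j + patch]
            if 0 < p.mean() < 1:               # patch touches the contour
                patches.append(p.ravel())
    X = np.array(patches)
    Sigma = np.cov(X.T)
    lam, B = np.linalg.eigh(Sigma)
    order = np.argsort(lam)[::-1]              # decreasing eigenvalues
    return B[:, order], lam[order]
```

Running this for a set of angles, and keeping the same fast-decaying eigenvalues for all bases, yields the initial covariances via (15).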
It is worth noting that this directional PCA basis not only provides sparse representations for contours and edges, but also captures well textures of the same directionality. Indeed, in the space corresponding to patches of size $8 \times 8$, the first few atoms illustrated in Figure 2-(c) absorb most of the energy of local patterns following the same direction in real images, as indicated by the fast decay of the eigenvalues (very similar to Figure 2-(d)).
A typical patch size is $8 \times 8$, as selected in previous works [2, 23]. The number of directions in a local patch is limited due to the pixelization. A DCT basis is also included, in competition with the directional bases, to capture isotropic image patterns. Our experiments have shown that in image inverse problems, there is a significant average gain in PSNR when the number of models $K$ grows from one to a few (when $K = 1$, the dictionary is initialized with only a DCT basis and all the patches are assigned to the same cluster), which shows that one Gaussian model, or equivalently a single linear estimator, is not enough to accurately describe the image. When $K$ increases further, the gain reduces and gets stabilized. Compromising between performance and complexity, $K = 19$ (18 directions plus a DCT basis), which corresponds to a $10°$ angle sampling step, is selected in all the following experiments.
Figures 3-(a) and (b) illustrate the Lena image and the corresponding patch clustering, i.e., the model selection $\tilde{k}_i$, obtained for the above initialization, calculated with (7) in the E-step described in Section II. The patches are densely overlapped, and each pixel in Figure 3-(b) represents the model selected for the patch around it, different colors encoding different values of $\tilde{k}_i$, from 1 to 19 (18 directions plus a DCT). One can observe, for example on the edges of the hat, that patches where the image patterns follow similar directions are clustered together, as expected. Let us note that on the uniform regions such as the background, where there is no directional preference, all the bases provide equally sparse representations. As the term $\log |\Sigma_k|$ in the model selection (7) is initialized as identical for all the Gaussian models, the clustering is random in these regions. The clustering will improve as the MAP-EM progresses.
The oscillatory atoms illustrated in Figure 2-(c) are spread in space. Therefore, for operators that are diagonal in space such as masking and subsampling, they satisfy well the recoverability condition for super-resolution described in Section III-A. However, as these oscillatory atoms have Dirac supports in Fourier, for convolution operators the recoverability condition is violated: for a convolution operator $U$, recoverability requires that the atoms have a spread Fourier spectrum, and therefore be localized in space. Following a similar numerical scheme to the one described above, patches touching the edge at a fixed position are extracted from synthetic edge images with different amounts of blur. The resulting PCA basis, named position PCA basis hereafter, contains localized atoms of different polarities and at different scales, following the same direction $\theta$, as illustrated in Figure 4 (they look like wavelets along the appropriate direction). For each direction $\theta$, a family of localized PCA bases is calculated at all the positions translating within the patch. The eigenvalues of all the position PCA bases are initialized with the same fast-decay values as illustrated in Figure 2-(d). Each pixel in Figure 3-(c) represents the model selected for the patch around it, different colors encoding different position values, from 1 to 12. The rainbow-like color transitions on the edges show that the position bases are accurately fitted to the image structures.
III-C3 Wiener Filtering Interpretation
Figure 5 illustrates some typical Wiener filters, which are the rows of $W_{i,k}$ in (6), calculated with the initial PCA bases described above for zooming and deblurring. The filters have intuitive interpretations, for example a directional interpolator for zooming and a directional deconvolution for deblurring, confirming the effectiveness of the initialization.
III-D Additional Comments on Related Works
Before proceeding with experimental results and applications, let us further comment on some related works, in addition to those already addressed in Section I.
MAP-EM algorithms using various probability distributions, such as Gaussian, Laplacian, Gamma and Gibbs, have been widely applied in medical image reconstruction and analysis (see for example [70, 40]). Following the Gaussian mixture models, the MAP-EM algorithm proposed here alternates between image patch estimation and clustering, and Gaussian model estimation. Clustering-based estimation has been shown to be effective for image restoration, and an appropriate clustering is at its heart. In a denoising setting, where images are noisy but not degraded by a linear operator $U$, clustering with block matching, i.e., calculating the Euclidean distance between image patch gray-levels [10, 17, 42], and with image segmentation algorithms such as k-means on local image features, has been shown to improve the denoising results. For inverse problems where the observed images are degraded, for example images with holes in an inpainting setting, clustering becomes more difficult. The generalized PCA models and segments data using an algebraic subspace clustering technique based on polynomial fitting and differentiation; while it has been shown effective in image segmentation, it does not reach the state-of-the-art in image restoration. In the recent non-parametric Bayesian approach, an image patch clustering is implemented with probability models, which improves the denoising and inpainting results, although still underperforming, in quality and computational cost, the framework introduced here. The clustering in the MAP-EM procedure enjoys the advantage of being completely consistent with the signal estimation, and in consequence leads to state-of-the-art results in a number of imaging inverse problem applications.
IV Initial Supportive Experiments
Before proceeding with detailed experimental results for a number of applications of the proposed framework, this section shows through some basic experiments the effectiveness and importance of the initialization proposed above, the evolution of the representations as the MAP-EM algorithm iterates, as well as the improvement brought by the structure in PLE with respect to traditional sparse estimation.
Following some recent works, an image is decomposed into regions, each region treated with the MAP-EM algorithm separately. The idea is that image contents are often more coherent semi-locally than globally, and Gaussian model estimation or dictionary learning can be slightly improved in semi-local regions. This also saves memory and enables the processing to proceed as the image is being transmitted. Parallel processing of image regions is also possible when the whole image is available. Regions are half-overlapped to eliminate the boundary effect between the regions, and their estimates are averaged at the end to obtain the final estimate.
Different initializations are compared in the context of different inverse problems, inpainting, zooming and deblurring. The reported experiments are performed on some typical image regions, Lena’s hat with sharp contours and Barbara’s cloth rich in texture, as illustrated in Figure 6.
Inpainting. In the addressed case of inpainting, the image is degraded by a random masking operator $U$ which randomly sets pixel values to zero. The initialization described above is compared with a random initialization, which initializes all the missing pixel values with zeros in the E-step and starts with a random patch clustering. Figures 6-(a) and (b) compare the PSNRs obtained by the MAP-EM algorithm with these two initializations. The algorithm with the random initialization converges to a PSNR about 0.4 dB lower than that with the proposed initialization, and the convergence takes much longer (about 6 iterations) than with the latter (about 3 iterations).
It is worth noting that on the contours of Lena's hat, with the proposed initialization the resulting PSNR is stable from the initialization on, which already produces an accurate estimation, since the initial directional PCA bases themselves are calculated over synthetic contour images, as described in Section III-C.
Zooming. In the context of zooming, the degradation is a subsampling operator on a uniform grid, much more structured than the masking for inpainting. The MAP-EM algorithm with the random initialization completely fails to work: it gets stuck in the initialization and does not lead to any change in the degraded image. Instead of initializing the missing pixels with zeros, a bicubic initialization is tested, which initializes the missing pixels with a bicubic interpolation. Figure 6-(c) shows that, as the MAP-EM algorithm iterates, it significantly improves the PSNR over the bicubic initialization; however, the PSNR after a slower convergence is still about 0.5 dB lower than that obtained with the proposed initialization.
Deblurring. In the deblurring setting, the degradation is a convolution operator, which is very structured, and the image is further contaminated with white Gaussian noise. Four initializations are under consideration: the initialization with directional PCAs (the directional bases plus a DCT basis), which is exactly the same as that used for the inpainting and zooming tasks; the proposed initialization with the position PCA bases for deblurring described in Section III-C2 (several positions for each of the directions, all with the same eigenvalues as for the directional PCA initialization); and two random initializations with the blurred image itself as the initial estimate and a random patch clustering with two different numbers of clusters. As illustrated in Figure 6-(d), the algorithm with the directional PCA initialization gets stuck in a local minimum from the second iteration, and converges to a PSNR 1.5 dB lower than that obtained with the position PCA initialization. Indeed, since the recoverability condition for deblurring, as explained in Section III-C2, is violated with directional PCA bases alone, the resulting images remain quite blurred. The better of the two random initializations is still 0.7 dB worse than the proposed initialization with position PCAs.
These experiments confirm the importance of the initialization of the MAP-EM algorithm for solving inverse problems. The sparse coding dual interpretation of GMM/MAP-EM helps to deduce effective initializations for different inverse problems. While for inpainting with random masking operators trivial initializations slowly converge to a solution moderately worse than that obtained with the proposed initialization, for more structured degradation operators, such as uniform subsampling and convolution, simple initializations either completely fail to work or lead to significantly worse results than the proposed initialization.
IV-B Evolution of Representations
Figure 7 illustrates, in an inpainting context on Barbara’s cloth, which is rich in texture, the evolution of the patch clustering as well as that of a typical PCA basis as the MAP-EM algorithm iterates. The clustering gets cleaned up as the algorithm iterates (see Figures 3-(d) and (e) for another example). Some high-frequency atoms are promoted to better capture the oscillatory patterns, resulting in a significant PSNR improvement of more than 3 dB. On contour images such as Lena’s hat, illustrated in Figure 6, on the contrary, although the patch clustering is also cleaned up as the algorithm iterates, the resulting local PSNR evolves little after the initialization, which already produces an accurate estimate, since the directional PCA bases themselves are calculated over synthetic contour images, as described in Section III-C. The eigenvalues retain a fast decay as the iterations proceed, visually similar to the plot in Figure 2-(d). The resulting PSNRs typically converge in 3 to 5 iterations.
IV-C Estimation Methods
Figure 8 panels: (a) Original image. (b) Low-resolution image. (c) Global ℓ1: 22.70 dB. (d) Global OMP: 28.24 dB. (e) Block ℓ1: 26.35 dB. (f) Block OMP: 29.27 dB. (g) Block weighted ℓ1: 35.94 dB. (h) Block weighted ℓ2: 36.45 dB.
From the sparse coding point of view, the gain of introducing structure into sparse inverse problem estimation, as described in Section III, is now shown through some experiments. An overcomplete dictionary composed of a family of PCA bases, illustrated in Figure 1-(b), is learned as described in Section II, and is then fed to the following estimation schemes. (i) Global ℓ1 and OMP: the ensemble of the PCA bases is used as an overcomplete dictionary, and the zooming estimate is calculated with the sparse estimate (13) through, respectively, an ℓ1 minimization or an orthogonal matching pursuit (OMP). (ii) Block ℓ1 and OMP: the sparse estimate is calculated in each PCA basis through, respectively, an ℓ1 minimization and an OMP, and the best estimate is selected with a model selection procedure similar to (7), thereby reducing the degrees of freedom in the estimation with respect to the global ℓ1 and OMP. (iii) Block weighted ℓ1: on top of the block ℓ1, weights are included for each coefficient amplitude in the regularizer,
with the weights derived from the eigenvalues of the selected PCA basis. The weighting scheme penalizes the atoms that are less likely to be important, following the spirit of the weighted ℓ1 deduced from the MAP estimate. (iv) Block weighted ℓ2: the proposed PLE. Compared with (18), the difference is that the weighted ℓ2 (17) takes the place of the weighted ℓ1, thereby transforming the problem into a stable and computationally efficient piecewise linear estimation.
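The block weighted ℓ2 estimate in a given basis has a closed form; a minimal sketch of this linear (Wiener-type) filter, in our own notation (y observation, U degradation, B basis, lam eigenvalues, mu mean, sigma noise level), not the authors' code:

```python
import numpy as np

def weighted_l2_estimate(y, U, B, lam, mu, sigma):
    """Closed-form block weighted-l2 estimate in one PCA basis.

    Solves min_a ||U (B a + mu) - y||^2 + sigma^2 * sum_i a_i^2 / lam_i,
    a sketch of the linear filter behind PLE (variable names are ours).
    """
    UB = U @ B
    # Normal equations of the quadratic objective; the eigenvalues act as
    # per-coefficient weights penalizing unlikely atoms.
    A = UB.T @ UB + sigma ** 2 * np.diag(1.0 / lam)
    a = np.linalg.solve(A, UB.T @ (y - U @ mu))
    return B @ a + mu
```

With no masking (U the identity) and negligible noise, the estimate reduces to the observation itself, as expected of a Wiener filter.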
The comparison on a typical region of Lena in the image zooming context is shown in Figure 8. The global ℓ1 and OMP produce some clear artifacts along the contours, which degrade the PSNRs. The block ℓ1 or OMP considerably improves the results (especially for the ℓ1). Compared with the block ℓ1 or OMP, a very significant improvement is achieved by adding the collaborative weights on top of the block ℓ1. The proposed PLE with the block weighted ℓ2, computed with linear filtering, further improves the estimation accuracy over the block weighted ℓ1, at a much lower computational cost.
In the following sections, PLE will be applied to a number of inverse problems, including image inpainting, zooming and deblurring. The experiments are performed on some standard gray-level and color images, illustrated in Figure 9.
V Inpainting
In the addressed case of inpainting, the original image is masked with a random masking operator, a diagonal matrix whose diagonal entries are randomly either 1 or 0, keeping or killing the corresponding pixels. Note that this can be considered as a particular case of compressed sensing, or, when collectively considering all the image patches, as matrix completion (and, as demonstrated here, in contrast with the recent literature on the subject, a single subspace is not sufficient).
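The masking operator described above is easy to materialize on vectorized patches; a small sketch (our own helper names):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mask(n_pixels, keep_ratio):
    """Diagonal masking operator: diagonal entries are 1 (keep) or 0 (kill)."""
    keep = rng.random(n_pixels) < keep_ratio
    return np.diag(keep.astype(float))

U = random_mask(64, 0.3)        # mask for an 8x8 patch, ~30% of the data kept
x = rng.standard_normal(64)     # a vectorized patch (stand-in data)
y = U @ x                       # observed patch: zeros where pixels are killed
```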
The experiments are performed on the gray-level images Lena, Barbara, House, and Boat, and the color images Castle, Mushroom, Train and Horses. Uniform random masks retaining different ratios of the pixels are used. The masked images are then inpainted with the algorithms under consideration.
For gray-level images, a standard patch size is used when a moderate ratio of the data is available, and larger patches are used when the images are heavily masked. For color images, patches spanning the RGB color channels are used to exploit the redundancy among the channels. To simplify the initialization in color image processing, the E-step in the first iteration is calculated with “gray-level” patches on each channel, but with a unified model selection across the channels: the same model selection is performed throughout the channels by minimizing the sum of the model selection energy (7) over all the channels, while the signal estimation is calculated in each channel separately. The M-step then estimates the Gaussian models with the “color” patches, based on the model selection and the signal estimate previously obtained in the E-step. Starting from the second iteration, both the E- and M-steps are calculated with “color” patches, treating each patch as a single vector across the three channels. The patch width is set to 6 for color images, as in previous works [43, 71]. The MAP-EM algorithm runs for 5 iterations. The noise standard deviation is set to 3, which corresponds to the typical noise level in these images. The constant in the covariance regularization is set to 30 in all the experiments.
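The vectorization of color patches across channels, with the 6-pixel patch width mentioned above, can be sketched as follows (our own helper name):

```python
import numpy as np

def color_patches(image, w):
    """Extract all w x w patches across the channels of an H x W x 3 image
    and vectorize each one as a length 3*w*w vector (sketch, our naming)."""
    H, W, C = image.shape
    patches = [image[i:i + w, j:j + w, :].reshape(-1)
               for i in range(H - w + 1) for j in range(W - w + 1)]
    return np.stack(patches)

img = np.zeros((10, 10, 3))     # stand-in RGB image
P = color_patches(img, 6)       # each row is a 6*6*3 = 108-dimensional vector
```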
The PLE inpainting is compared with a number of recent methods, including “MCA” (morphological component analysis), “ASR” (adaptive sparse reconstructions), “ECM” (expectation conditional maximization), “KR” (kernel regression), “FOE” (fields of experts), “BP” (beta process), and “K-SVD”. MCA and ECM compute the sparse inverse problem estimate in a dictionary that combines a curvelet frame, a wavelet frame and a local DCT basis. ASR calculates the sparse estimate with a local DCT. BP infers a nonparametric Bayesian model from the image under test (the noise level is automatically estimated). Using a natural image training set, FOE and K-SVD learn, respectively, a Markov random field model and an overcomplete dictionary that gives sparse representations for the images. The results of MCA, ECM, KR, and FOE are generated by the original authors’ software, with the parameters manually optimized, and those of ASR are calculated with our own implementation. The PSNRs of BP and K-SVD are cited from the corresponding papers. K-SVD and BP currently generate the best inpainting results in the literature.
Table I-left gives the inpainting results on gray-level images. PLE considerably outperforms the other methods in all the cases, with a PSNR improvement of about 2 dB on average over the second best algorithms (BP, FOE and MCA). On Barbara, which is rich in texture, it gains as much as about 3 dB over MCA, 4 dB over ECM, and 6 dB over all the other methods. Let us remark that when the missing data ratio is high, MCA generates quite good results, as it benefits from the curvelet atoms that have large support relative to the local patches used by the other methods.
Figure 10 compares the results of the different algorithms. All the methods lead to good inpainting results on the smooth regions. MCA and KR are good at capturing contour structures. However, when the curvelet atoms are not correctly selected, MCA produces noticeable elongated curvelet-like artifacts that degrade the visual quality and offset its gain in PSNR (see for example the face of Barbara). MCA restores textures better than BP, ASR, FOE and KR. PLE leads to accurate restoration of both the directional structures and the textures, producing the best visual quality with the highest PSNRs. An additional PLE inpainting example is shown in Figure 7.
Figure 10 panels: (a) Original; (b) Masked; (c) MCA (24.18 dB); (d) ASR (21.84 dB); (e) KR (21.55 dB); (f) FOE (21.92 dB); (g) BP (25.54 dB); (h) PLE (27.65 dB).
Table I-right compares the PSNRs of the PLE color image inpainting results with those of BP (to our knowledge, the only method in the literature that reports a comprehensive comparison). Again, PLE generates higher PSNRs in all the cases. The gain is especially large, at about 6 dB, when the available data ratio is high; for the other masking rates, it is mostly between 0.5 and 1 dB. Both methods use only the image under test to learn the dictionaries.
Figure 11 illustrates the PLE inpainting result on Castle from heavily masked data. Calculated with a much reduced computational complexity, the resulting 30.07 dB PSNR surpasses the highest PSNR reported in the literature, 29.65 dB, produced by K-SVD, which uses a dictionary learned from a natural image training set, followed by the 29.12 dB given by BP (BP has recently been improved by adding spatial coherence in the code, unpublished results). As shown in the zoomed region, PLE accurately restores the details of the castle from the heavily masked image. Let us remark that inpainting with random masks on color images is in general more favorable than on gray-level images, thanks to the information redundancy among the color channels.
Figure 11 panels: (a) Original; (b) Masked; (c) PLE.
VI Interpolation Zooming
Interpolation zooming is a special case of inpainting with regular subsampling on a uniform grid. As explained in Section III-A, the regular subsampling operator may result in a highly coherent transformed dictionary. Calculating an accurate sparse estimate for interpolation zooming is therefore more difficult than for inpainting with random masks.
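This coherence effect is easy to reproduce numerically. A small sketch of our own, with a Fourier dictionary: under factor-2 uniform subsampling, the atoms at frequencies k and k + n/2 alias to the same vector, so the transformed dictionary reaches coherence 1, whereas the full basis is orthonormal (coherence 0).

```python
import numpy as np

def mutual_coherence(D):
    """Max absolute normalized inner product between distinct columns."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.conj().T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

n = 32
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # orthonormal Fourier basis
mask = np.zeros(n)
mask[::2] = 1.0                          # uniform factor-2 subsampling
UF = np.diag(mask) @ F                   # transformed dictionary

# Full basis: coherence ~0. Subsampled: aliased atoms coincide, coherence ~1.
print(mutual_coherence(F), mutual_coherence(UF))
```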
The experiments are performed on the gray-level images Lena, Peppers, Mandril, Cameraman, Boat, and Straws, and the color images Lena, Peppers, Kodak05 and Kodak20. The color images are treated in the same way as for inpainting. These high-resolution images are down-sampled without anti-aliasing filtering. The resulting low-resolution images are aliased, which corresponds to the reality of television images, which are usually aliased since this improves their visual perception. The low-resolution images are then zoomed by the algorithms under consideration. When an anti-aliasing blurring operator is included before subsampling, zooming can be cast as a deconvolution problem and will be addressed in Section VII.
The PLE interpolation zooming is compared with linear interpolators [8, 37, 63, 52] as well as recent super-resolution algorithms: “NEDI” (new edge directed interpolation), “DFDF” (directional filtering and data fusion), “KR” (kernel regression), “ECM” (expectation conditional maximization), “Contourlet”, “ASR” (adaptive sparse reconstructions), “FOE” (fields of experts), “SR” (sparse representation), “SAI” (soft-decision adaptive interpolation) and “SME” (sparse mixing estimators). KR, ECM, ASR and FOE are generic inpainting algorithms that have been described in Section V. NEDI, DFDF and SAI are adaptive directional interpolation methods that take advantage of the image directional regularity. Contourlet is a sparse inverse problem estimator as described in Section III-A, computed in a contourlet frame. SR is also a sparse inverse estimator, which learns the dictionaries from a training image set. SME is a recent zooming algorithm that exploits directional structured sparsity in wavelet representations. Among the previously published algorithms, SAI and SME currently provide the best PSNRs for spatial image interpolation zooming [47, 69]. The results of ASR are generated with our own implementation, and those of all the other algorithms are produced by the original authors’ software, with the parameters manually optimized. As the anti-aliasing operator is not included in the interpolation zooming model, to obtain correct results with SR, the anti-aliasing filter used in the original authors’ SR software is deactivated in both the dictionary training (with the authors’ original training dataset of 92 images) and the super-resolution estimation. PLE is configured in the same way as for inpainting, as described in Section V.
Table II gives the PSNRs generated by all the algorithms on the gray-level and the color images. Bicubic interpolation provides nearly the best results among all the tested linear interpolators, including cubic splines, MOMS and others, due to the aliasing produced by the down-sampling. PLE gives moderately higher PSNRs than SME and SAI for all the images, with one exception where SAI produces a slightly higher PSNR. The gain of these three methods over all the other algorithms is significantly larger.
Figure 12 compares an interpolated image obtained with the baseline bicubic interpolation and with the algorithms that generate the highest PSNRs, SAI and PLE. The local PSNRs on the cropped images produced by all the methods under consideration are reported as well. Bicubic interpolation produces some blur and jaggy artifacts in the zoomed images. These artifacts are reduced to some extent by the NEDI, DFDF, KR and FOE algorithms, but the image quality is still lower than with the PLE, SAI and SME algorithms, as also reflected in the PSNRs. SR yields an image that looks sharp; however, due to the coherence of the transformed dictionary, as explained in Section III-A, when the approximating atoms are not correctly selected, it produces artifact patterns along the contours, which degrade its PSNR. The PLE algorithm restores regular geometrical structures slightly better than SAI and SME, as can be observed on the upper and lower propellers, as well as on the fine lines on the side of the plane indicated by the arrows.
Figure 12 panels: (a) HR; (b) LR; (c) Bicubic; (d) SAI; (e) PLE.
VII Deblurring
An image is blurred by a convolution operator and contaminated by additive noise. Image deblurring aims at estimating the sharp image from the blurred and noisy observation.
VII-A Hierarchical PLE
As explained in Section III-C2, the recoverability condition of sparse super-resolution estimates for deblurring requires a dictionary comprising atoms with spread Fourier spectra and thus localized in space, such as the position PCA bases illustrated in Figure 4. To reduce the computational complexity, a model selection with a hierarchy of directional PCA bases and position PCA bases is proposed. Figure 13-(a) illustrates the hierarchical PLE with a cascade of two layers of model selection: the first layer selects the direction and, given the direction, the second layer further specifies the position.
In the first layer, the model selection procedure is identical to that used in image inpainting and zooming, i.e., it is calculated with the Gaussian models corresponding to the directional PCA bases of Figure 2-(c). In this layer, a directional PCA basis of some orientation is selected for each patch. Given the directional basis selected in the first layer, the second layer recalculates the model selection (7), this time with a family of position PCA bases corresponding to the same direction as the selected directional basis, with the atoms in each basis localized at one position and the bases translating in space to cover the whole patch. The image patch estimate (8) is obtained in the second layer. This hierarchical calculation significantly reduces the computational complexity, since only the position bases of the selected direction need to be evaluated.
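The two-layer selection above can be sketched generically as follows (our own helper names; each "model" is abstracted as a function returning its selection energy for a patch):

```python
import numpy as np

def hierarchical_select(patch, directional_models, position_models):
    """Two-layer model selection sketch.

    directional_models: list of K energy functions, one per direction.
    position_models[k]: list of P energy functions sharing direction k.
    Instead of evaluating all K*P models, evaluate K, then P.
    """
    energies = [m(patch) for m in directional_models]       # first layer
    k = int(np.argmin(energies))                            # best direction
    pos_energies = [m(patch) for m in position_models[k]]   # second layer
    p = int(np.argmin(pos_energies))
    return k, p
```

With toy quadratic energies this selects the closest direction, then the closest position within it.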
For deblurring, boundary issues on the patches need to be addressed. Since the convolution operator is non-diagonal, the deconvolution of each pixel in the blurred image involves pixels in a neighborhood whose size depends on the blurring kernel. As patch-based methods deal with local patches, for a given patch the information outside of it is missing, and it is therefore impossible to obtain an accurate deconvolution estimate on the patch boundaries. To circumvent this boundary problem, a larger patch is considered and the deconvolution is cast as a deconvolution-plus-inpainting problem. Consider the patches of the original image, the degraded image, and the noise, together with the corresponding larger patches whose support is centered at the same position and extended by a boundary whose width matches that of the blurring kernel, as illustrated in Figure 13-(b). The convolution operator is extended to the larger support, and a masking operator defined on the larger support keeps all the pixels in the central part and kills the rest. If the width of the boundary is larger than the radius of the blurring kernel, one can show that the blurring operation can be rewritten locally as an extended convolution on the larger support followed by a masking. Estimating the larger sharp patch from the observed central patch can thus be calculated following exactly the same algorithm, now treating the compounded masking-plus-convolution operator as the degradation to be inverted. The boundary pixels of the resulting estimate can be interpreted as an extrapolation from the central data, and are therefore less reliable; the deblurring estimate is obtained by discarding them.
Local patch based deconvolution algorithms become less accurate when the blurring kernel support is large relative to the patch size. In the deconvolution experiments reported below, the patch size and the boundary width are fixed, and the blurring kernels are restricted to a limited support.
VII-B Deblurring Experiments
The deblurring experiments are performed on the gray-level images Lena, Barbara, Boat, House, and Cameraman, with different amounts of blur and noise. The PLE deblurring is compared with a number of deconvolution algorithms: “ForWaRD” (Fourier-wavelet regularized deconvolution), “TVB” (total variation based), “TwIST” (two-step iterative shrinkage/thresholding), “SP” (sparse prior), “SA-DCT” (shape adaptive DCT), “BM3D” (3D transform-domain collaborative filtering), and “DSD” (direction sparse deconvolution). ForWaRD, SA-DCT and BM3D first calculate the deconvolution with a regularized Wiener filter in Fourier, and then denoise the Wiener estimate with, respectively, a thresholding estimator in wavelet and SA-DCT representations, and with non-local 3D collaborative filtering. TVB and TwIST deconvolutions regularize the estimate with the image total variation prior. SP assumes a sparse prior on the image gradient. DSD is a recently developed sparse inverse problem estimator, described in Section III-A. In previously published works, BM3D and SA-DCT are among the deblurring methods that produce the highest PSNRs, followed by SP. The results of TVB, TwIST, SP, SA-DCT and DSD are generated by the authors’ original software, with the parameters manually optimized, and those of ForWaRD are calculated with our own implementation. The proposed algorithm runs for 5 iterations.
Table III gives the ISNRs (improvement in PSNR relative to the input image) of the different algorithms for restoring images blurred with truncated Gaussian kernels of two standard deviations and contaminated by white Gaussian noise. BM3D produces the highest ISNRs, followed closely by SA-DCT and PLE, whose ISNRs are comparable and are moderately higher than SP’s on average. Let us remark that BM3D and SA-DCT apply an empirical Wiener filtering as a post-processing, which boosts the ISNR by nearly 1 dB. The empirical Wiener technique can be plugged into other sparse transform-based methods such as PLE and ForWaRD as well. Without this post-processing, PLE outperforms BM3D and SA-DCT on average.
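The ISNR used in Table III reduces to a ratio of mean squared errors; a minimal sketch (our own helper, not the authors' code):

```python
import numpy as np

def isnr(original, degraded, restored):
    """Improvement in SNR (dB): PSNR(restored) - PSNR(degraded), which
    simplifies to 10 * log10(MSE_degraded / MSE_restored)."""
    ref = np.asarray(original, float)
    mse_deg = np.mean((ref - degraded) ** 2)
    mse_res = np.mean((ref - restored) ** 2)
    return 10.0 * np.log10(mse_deg / mse_res)

# Halving the MSE gains 10 * log10(2) ~ 3.01 dB.
x = np.zeros(100)
print(round(isnr(x, x + 2.0, x + np.sqrt(2.0)), 2))
```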
Figure 14 shows a deblurring example. All the algorithms under consideration reduce the amount of blur and attenuate the noise. BM3D generates the highest ISNR, followed by SA-DCT, PLE and SP, all producing similar visual quality, moderately better than the other methods. DSD accurately restores sharp image structures when the atoms are correctly selected; however, some artifacts due to incorrect atom selection offset its gain in ISNR. The empirical Wiener filtering post-processing in BM3D and SA-DCT efficiently removes some artifacts and significantly improves the visual quality and the ISNR. More examples of PLE deblurring will be shown in the next section.
Table III columns: kernel size and input PSNR | ForWaRD | TVB | TwIST | SA-DCT | BM3D | SP | DSD* | PLE.
Figure 14 panels: (a) Original; (b) Blurred and noisy; (c) BM3D; (d) PLE.
VII-C Zooming Deblurring
When an anti-aliasing filtering is taken into account, image zooming-out can be formulated as an anti-aliasing convolution followed by a subsampling on a uniform grid, applied to the high-resolution image to produce the low-resolution image. Image zooming aims at estimating the high-resolution image from the low-resolution one, which amounts to inverting the combination of the two operators.
Image zooming can be calculated differently under different amounts of blur introduced by the anti-aliasing filtering. Let us distinguish between three cases. (i) If the anti-aliasing filtering removes enough high frequencies so that the subsampled image is free of aliasing, then the subsampling operator can be perfectly inverted with a linear interpolation. In this case, zooming can be calculated as a deconvolution problem, where one seeks to invert the anti-aliasing convolution. In reality, however, camera and television images always contain a certain amount of aliasing, since it improves the visual perception, i.e., the anti-aliasing filtering does not eliminate all the high frequencies. (ii) When the filtering removes only a small amount of high frequencies, which is often the case in reality, zooming can be cast as an interpolation problem [39, 47, 51, 60, 68, 69], where one seeks to invert only the subsampling, as addressed in Section VI. (iii) When the filtering removes an intermediate amount of high frequencies, the optimal zooming solution is to invert the convolution and the subsampling together as a compounded operator.
This section introduces a possible solution for case (iii) based on the PLE deblurring. A linear interpolation is first applied to partially invert the subsampling operator. Due to the aliasing, the linear interpolation does not perfectly restore the pre-filtered image; nevertheless it remains rather accurate, i.e., the interpolated image is close to the blurred image, which has limited high frequencies in case (iii). The PLE deblurring framework is then applied to deconvolve the anti-aliasing filter from the interpolated image. Inverting the convolution alone is simpler than inverting the compounded convolution-plus-subsampling operator. As the linear interpolation in the first step is accurate enough in case (iii), the deconvolution results in accurate zooming estimates.
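As a toy illustration of the deconvolution step (a simple Fourier-domain regularized inverse of our own, standing in for the PLE estimator, which operates patch-wise with Gaussian models):

```python
import numpy as np

def wiener_deconvolve(y, kernel, eps=1e-6):
    """Regularized Fourier inverse of a circular convolution; a simple
    stand-in for the deconvolution step (not the PLE estimator itself)."""
    H = np.fft.fft2(kernel, s=y.shape)
    X = np.conj(H) * np.fft.fft2(y) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 16))        # stand-in "sharp" image
k = np.zeros((16, 16))
k[:2, :2] = 0.25                         # small blurring kernel
y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(k)))  # blurred image
xhat = wiener_deconvolve(y, k)
# The deconvolved image is closer to x than the blurred observation is.
```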
In the experiments below, the anti-aliasing filter is a Gaussian convolution and the image is then uniformly subsampled. It has been shown that a pre-filtering with a sufficiently wide Gaussian kernel guarantees that the following subsampling generates a quasi aliasing-free image. The anti-aliasing filtering used here leads to an amount of aliasing and a visual quality similar to those of common camera pictures.
Figure 15 panels: (a) Original (crop); (b) Pre-filtered; (c) Low-resolution; (d) Interpolated; (e) PLE; (f) SR.
Figure 15 illustrates an experiment on the image Lena. Figures 15-(a) to (c) show, respectively, a crop of the original image, its pre-filtered version, and the low-resolution image after subsampling. As the amount of aliasing is limited thanks to the anti-aliasing filtering, a cubic spline interpolation is more accurate than lower-order interpolations such as bicubic, and is therefore applied to upsample the low-resolution image; the resulting image is illustrated in Figure 15-(d). A visual inspection confirms that the interpolated image is very close to the pre-filtered image, the PSNR between them being as high as 50.02 dB. The PLE deblurring is then applied to calculate the final zooming estimate by deconvolving the anti-aliasing filter from the interpolated image. (As no noise is added after the anti-aliasing filter, the noise standard deviation is set to a small value.) As illustrated in Figure 15-(e), the resulting image is much sharper, without noticeable artifacts, and improves by 3.12 dB with respect to the interpolated image. Figure 15-(f) shows the result obtained with “SR” (sparse representation). SR implements a sparse inverse problem estimator that tries to invert the compounded operator, with a dictionary learned from a natural image dataset. The experiments were performed with the authors’ original software and training image set; the dictionaries were retrained with the anti-aliasing filter described above. The resulting image looks sharp, and the restoration is accurate when the atom selection is correct. However, due to the coherence of the dictionaries, as explained in Section III-A, some noticeable artifacts along the edges are produced when the atoms are incorrectly selected, which also offsets its gain in PSNR.
Figure 16 shows another set of experiments, on the image Girl. Again PLE efficiently reduces the blur of the interpolated image and leads to a sharp zoomed image without noticeable artifacts. SR produces similarly good visual quality; however, some slight but noticeable artifacts (near the end of the nose, for example) due to incorrect atom selection offset its gain in PSNR.
Table IV gives the PSNR comparison on the color images Lena, Girl and Flower. The PLE deblurring applied to the cubic spline interpolation improves the PSNR by 1 to 2 dB over the interpolated images. Although SR is able to restore sharp images, its gain in PSNR is offset by the noticeable artifacts.
Figure 16 panels: (a) HR; (b) LR; (c) Cubic spline; (d) PLE; (e) SR.
VIII Conclusion and Future Works
This work has shown that Gaussian mixture models (GMM) and a MAP-EM algorithm provide general and computationally efficient solutions for inverse problems, leading to results in the same ballpark as the state of the art in various image inverse problems. A dual mathematical interpretation of the framework with structured sparse estimation is described, which shows that the resulting piecewise linear estimate stabilizes and improves the traditional sparse inverse problem approach. This connection also suggests an effective, dictionary-motivated initialization for the MAP-EM algorithm. In a number of image restoration applications, including inpainting, zooming, and deblurring, the same simple and computationally efficient algorithm (its core, (4), (5), (7) and (8), can be written in 4 lines of Matlab code) produces results that are equal to, often significantly better than, or within a very small margin of the best published ones, with a computational complexity typically one or two orders of magnitude smaller than that of sparse estimation. Similar results (on average just 0.1 dB lower than BM3D) are obtained for the simpler problem of denoising, where the degradation operator is the identity matrix.
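The per-model linear estimate and model selection at the core of the algorithm can be sketched as follows (our own notation and a simplified selection energy, not the authors' 4-line Matlab code):

```python
import numpy as np

def map_em_estep(y, U, mus, Sigmas, sigma):
    """E-step sketch of MAP-EM: for each Gaussian (mu_k, Sigma_k), compute
    the linear MAP (Wiener) estimate of the patch x from y = U x + w,
    w ~ N(0, sigma^2 I), then select the model with the smallest energy."""
    best = None
    for mu, Sig in zip(mus, Sigmas):
        S = U @ Sig @ U.T + sigma ** 2 * np.eye(len(y))
        x = mu + Sig @ U.T @ np.linalg.solve(S, y - U @ mu)
        # selection energy: data fit + Gaussian prior + log-determinant
        r = y - U @ x
        d = x - mu
        E = (r @ r) / sigma ** 2 + d @ np.linalg.solve(Sig, d) \
            + np.linalg.slogdet(Sig)[1]
        if best is None or E < best[0]:
            best = (E, x)
    return best[1]
```

With two toy Gaussians whose energy is concentrated along different axes, the selection picks the model aligned with the signal and the Wiener step recovers it.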
As described in Section II, the proposed algorithm is calculated with classic statistical tools, namely MAP-EM clustering and empirical covariance estimation. As possible future work, its performance may be further improved with more sophisticated statistical instruments, for example stochastic EM algorithms and more advanced covariance regularization methods, at the cost of higher computational complexity.
Acknowledgements: Work supported by NSF, NGA, ONR, ARO and NSSEFF. We thank Stéphanie Allassonnière for helpful discussions, in particular about MAP-EM and covariance regularization.
-  J. Abernethy, F. Bach, T. Evgeniou, and J.P. Vert. A new approach to collaborative filtering: Operator estimation with spectral regularization. The Journal of Machine Learning Research, 10:803–826, 2009.
-  M. Aharon, M. Elad, and A. Bruckstein. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. on Signal Proc., 54(11):4311, 2006.
-  S. Allassonniere, Y. Amit, and A. Trouvé. Towards a coherent statistical framework for dense deformable template estimation. J.R. Statist. Soc. B, 69(1):3–29, 2007.
-  R.G. Baraniuk, V. Cevher, M.F. Duarte, and C. Hegde. Model-based compressive sensing. Preprint, 2008.
-  J.P. Baudry, A.E. Raftery, G. Celeux, K. Lo, and R. Gottardo. Combining mixture components for clustering. Accepted to Journal of Computational and Graphical Statistics, 2010.
-  J.M. Bioucas-Dias and M.A.T. Figueiredo. A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration. IEEE Trans. on Image Proc., 16(12):2992–3004, 2007.
-  J.M. Bioucas-Dias, M.A.T. Figueiredo, and J.P. Oliveira. Total variation-based image deconvolution: a majorization-minimization approach. In ICASSP, volume 2, 2006.
-  T. Blu, P. Thévenaz, and M. Unser. MOMS: Maximal-order interpolation of minimal support. IEEE Trans. on Image Proc., 10(7):1069–1080, 2001.
-  S.P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
-  A. Buades, B. Coll, and J.M. Morel. A review of image denoising algorithms, with a new one. Multiscale Modeling and Simulation, 4(2):490–530, 2006.
-  E.J. Candes and D.L. Donoho. New tight frames of curvelets and optimal representations of objects with C2 singularities. Comm. Pure Appl. Math, 56:219–266, 2004.
-  E.J. Candes and T. Tao. Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Trans. on Info. Theo., 52(12):5406–5425, 2006.
-  G. Celeux and G. Govaert. A classification EM algorithm for clustering and two stochastic versions. Computational Statistics and Data Analysis, 14(3):315–332, 1992.
-  G. Celeux and G. Govaert. Gaussian parsimonious clustering models. Pattern Recognition, 28(5):781–793, 1995.
-  P. Chatterjee and P. Milanfar. Clustering-based denoising with locally learned dictionaries. IEEE Trans. on Image Proc., 18(7), 2009.
-  S.S. Chen, D.L. Donoho, and M.A. Saunders. Atomic decomposition by basis pursuit. SIAM J. on Sci. Comp., 20:33, 1999.
-  K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. on Image Proc., 16(8):2080–2095, 2007.
-  K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian. Image restoration by sparse 3D transform-domain collaborative filtering. In SPIE Electronic Imaging, 2008.
-  I. Daubechies, M. Defrise, and C. De Mol. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Comm. Pure Appl. Math, 57:1413–1457, 2004.
-  I. Daubechies, R. DeVore, M. Fornasier, and S. Güntürk. Iteratively re-weighted least squares minimization for sparse recovery. Comm. Pure Appl. Math, page 35, 2009.
-  D.L. Donoho. Compressed sensing. IEEE Trans. on Info. Theo., 52(4):1289–1306, 2006.
-  B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32(2):407–451, 2004.
-  M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. on Image Proc., 15(12):3736–3745, 2006.
-  M. Elad, J.L. Starck, P. Querre, and D.L. Donoho. Simultaneous cartoon and texture image inpainting using morphological component analysis (MCA). Applied and Computational Harmonic Analysis, 19(3):340–358, 2005.
-  Y.C. Eldar, P. Kuppinger, and H. Bölcskei. Compressed sensing of block-sparse signals: uncertainty relations and efficient recovery. arXiv: 0906.3173, submitted to IEEE Trans. on Signal Proc., 2009.
-  Y.C. Eldar and M. Mishali. Robust recovery of signals from a union of structured subspaces. IEEE Trans. Inform. Theory, 55(11):5302–5316, 2009.
-  M.J. Fadili, J.L. Starck, and F. Murtagh. Inpainting and zooming using sparse representations. The Comp. J., 52(1):64, 2009.
-  A. Foi, K. Dabov, V. Katkovnik, and K. Egiazarian. Shape-adaptive DCT for denoising and image reconstruction. In Proceedings of SPIE, volume 6064, pages 203–214, 2006.
-  C. Fraley and A.E. Raftery. How many clusters? Which clustering method? Answers via model-based cluster analysis. The Computer Journal, 41(8):578–588, 1998.
-  N. Friedman and S. Russell. Image segmentation in video sequences: a probabilistic approach. In Uncertainty in Artificial Intelligence: Proceedings of the Thirteenth Conference, volume 94720, page 175, 1997.
-  Z. Ghahramani and M.I. Jordan. Supervised learning from incomplete data via an EM approach. NIPS, page 120, 1994.
-  J.A. Guerrero-Colon, L. Mancera, and J. Portilla. Image restoration using space-variant Gaussian scale mixtures in overcomplete pyramids. IEEE Trans. on Image Proc., 17(1):27, 2008.
-  O.G. Guleryuz. Nonlinear approximation based image recovery using adaptive sparse reconstructions and iterated denoising–Part II: Adaptive algorithms. IEEE Trans. on Image Proc., 15(3):555–571, 2006.
-  R.J. Hathaway. Another interpretation of the EM algorithm for mixture distributions. Stat. & Prob. Letters, 4(2):53–56, 1986.
-  J. Huang, T. Zhang, and D. Metaxas. Learning with structured sparsity. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 417–424. ACM, 2009.
-  R. Jenatton, J.Y. Audibert, and F. Bach. Structured variable selection with sparsity-inducing norms. arXiv:0904.3523v2, 2009.
-  R. Keys. Cubic convolution interpolation for digital image processing. IEEE Trans. on Acoustics, Speech and Signal Proc., 29(6):1153–1160, 1981.
-  A. Levin, R. Fergus, F. Durand, and W.T. Freeman. Deconvolution using natural image priors. Tech. report, CSAIL/MIT, 2007.
-  X. Li and M. T. Orchard. New edge-directed interpolation. IEEE Trans. on Image Proc., 10(10):1521–1527, 2001.
-  Z. Liang and S. Wang. An EM approach to MAP solution of segmenting tissue mixtures: a numerical analysis. IEEE Trans. on Medical Imaging, 28(2):297–310, 2009.
-  Y. Lou, A. Bertozzi, and S. Soatto. Direct sparse deblurring. Technical report, CAM-UCLA, 2009.
-  J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman. Non-local sparse models for image restoration. In Proceedings of the IEEE International Conference on Computer Vision, 2009.
-  J. Mairal, M. Elad, and G. Sapiro. Sparse representation for color image restoration. IEEE Trans. on Image Proc., 17, 2008.
-  J. Mairal, G. Sapiro, and M. Elad. Learning multiscale sparse representations for image and video restoration. SIAM Multiscale Modeling and Simulation, 7(1):214–241, 2008.
-  S. Mallat. A Wavelet Tour of Signal Proc.: The Sparse Way, 3rd edition. Academic Press, 2008.
-  S. Mallat and G. Peyré. Orthogonal bandlet bases for geometric images approximation. Comm. Pure Appl. Math, 61(9):1173–1212, 2008.
-  S. Mallat and G. Yu. Super-resolution with sparse mixing estimators. Accepted to IEEE Trans. on Image Proc., 2010.
-  S.G. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Trans. on Signal Proc., 41(12):3397–3415, 1993.
-  G.J. McLachlan and K.E. Basford. Mixture Models: Inference and Applications to Clustering. New York: Dekker, 1988.
-  J.M. Morel and G. Yu. On the consistency of the SIFT method. Accepted to Inverse Problems and Imaging, 2009.
-  N. Mueller, Y. Lu, and M.N. Do. Image interpolation using multiscale geometric representations. In Computational Imaging V, Proceedings of the SPIE, volume 6498, page 64980A, 2007.
-  A. Muñoz, T. Blu, and M. Unser. Least-squares image resizing using finite differences. IEEE Trans. on Image Proc., 10(9):1365–1378, 2001.
-  R. Neelamani, H. Choi, and R. Baraniuk. Forward: Fourier-wavelet regularized deconvolution for ill-conditioned systems. IEEE Trans. on Signal Proc., 52(2):418–433, 2004.
-  H. Permuter, J. Francos, and I.H. Jermyn. Gaussian mixture models of texture and colour for image database retrieval. In Proc. ICASSP, volume 1, pages 25–88, 2003.
-  J. Portilla, V. Strela, M.J. Wainwright, and E.P. Simoncelli. Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Trans. on Image Proc., 12(11):1338–1351, 2003.
-  S. Roth and M.J. Black. Fields of experts. International Journal of Computer Vision, 82(2):205–229, 2009.
-  J. Schafer and K. Strimmer. A shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics. Statistical Applications in Genetics and Molecular Biology, 4(1):1175, 2005.
-  C. Stauffer and W.E.L. Grimson. Adaptive background mixture models for real-time tracking. In IEEE Conference on Computer Vision and Pattern Recognition, volume 2, 1999.
-  M. Stojnic, F. Parvaresh, and B. Hassibi. On the reconstruction of block-sparse signals with an optimal number of measurements. IEEE Trans. on Signal Proc., 57(2):3075–3085, 2009.
-  H. Takeda, S. Farsiu, and P. Milanfar. Kernel regression for image processing and reconstruction. IEEE Trans. on Image Proc., 16(2):349–366, 2007.
-  R. Tibshirani. Regression shrinkage and selection via the lasso. J. of the Royal Stat. Society, pages 267–288, 1996.
-  J. A. Tropp. Greed is good: Algorithmic results for sparse approximation. IEEE Trans. on Info. Theo., 50:2231–2242, 2004.
-  M. Unser. Splines: A perfect fit for signal and image processing. IEEE Signal Proc. Magazine, 16(6):22–38, 1999.
-  R. Vidal, Y. Ma, and S. Sastry. Generalized principal component analysis (GPCA). IEEE Trans. on Pattern Analysis and Machine Intelligence, 27(12):1945–1959, 2005.
-  J. Yang, J. Wright, T. Huang, and Y. Ma. Image super-resolution via sparse representation. Accepted to IEEE Trans. on Image Proc., 2010.
-  G. Yu and S. Mallat. Sparse super-resolution with space matching pursuits. Proc. SPARS’09, Saint-Malo, 2009.
-  G. Yu, G. Sapiro, and S. Mallat. Image modeling and enhancement via structured sparse model selection. Accepted to ICIP, 2010.
-  L. Zhang and X. Wu. An edge-guided image interpolation algorithm via directional filtering and data fusion. IEEE Trans. on Image Proc., 15(8):2226–2238, 2006.
-  X. Zhang and X. Wu. Image interpolation by adaptive 2-D autoregressive modeling and soft-decision estimation. IEEE Trans. on Image Proc., 17(6):887–896, 2008.
-  J. Zhou, J.L. Coatrieux, A. Bousse, H. Shu, and L. Luo. A Bayesian MAP-EM algorithm for PET image reconstruction using wavelet transform. IEEE Trans. on Nuclear Science, 54(5):1660–1669, 2007.
-  M. Zhou, H. Chen, J. Paisley, L. Ren, L. Li, Z. Xing, D. Dunson, G. Sapiro, and L. Carin. Nonparametric Bayesian dictionary learning for analysis of noisy and incomplete images. Submitted to IEEE Trans. on Image Proc., 2010.