Smoothness-Constrained Image Recovery from Block-Based Random Projections

Giulio Coluccia et al., 10/08/2013

In this paper we address the problem of the visual quality of images reconstructed from block-wise random projections. Independent reconstruction of the blocks can severely affect visual quality by producing artifacts along block borders. We propose a method to enforce smoothness across block borders by modifying the sensing and reconstruction process so as to employ partially overlapping blocks. The proposed algorithm accomplishes this by computing a fast preview from the blocks, whose purpose is twofold. On the one hand, it allows enforcing a set of constraints to drive the reconstruction algorithm towards a smooth solution, imposing the similarity of block borders. On the other hand, the preview is used as a predictor of the entire block, allowing the recovery of the prediction error only. The quality improvement over independent reconstruction can be easily assessed both visually and in terms of PSNR and SSIM index.


I Introduction

Conventional image acquisition and compression schemes usually rely on the sampling of a huge number of pixels satisfying the classic Shannon/Nyquist theorem. The size of the acquisition is then reduced by means of a more energy-compacting signal representation, usually consisting of the projection of blocks of pixels onto a convenient basis, such as the DCT or the wavelet basis. Recently, the theory of Compressed Sensing (CS) [1, 2] has proposed a new paradigm. According to this theory, a sparse or compressible signal (as natural images are) can be sensed by acquiring a small number of random projections, called measurements, and recovered from these measurements using algorithms promoting sparsity. While the acquisition process is really simple, the complexity is transferred to the decoder, where the optimum reconstruction algorithm, based on $\ell_0$ norm minimization, has combinatorial complexity, while its $\ell_1$ norm approximation has a complexity which is roughly cubic in the size of the signal. Indeed, this and even computationally simpler algorithms, such as OMP [3], are far too expensive to deal with the reconstruction of a signal whose dimension is large, as is the case of images or, more generally, multidimensional signals.

Different approaches have been proposed for applying CS to multidimensional signals, based on partitioning the signal to be acquired in order to reduce the size of the reconstruction problem. For example, in [4] a raster scanning approach has been combined with a reconstruction scheme based on linear predictors.

A different approach to the problem is to partition the image into blocks, as is done in the JPEG and MPEG standards for image and video encoding, respectively, where an orthonormal transform is applied to non-overlapping blocks of pixels of the image or of a frame. The same approach has been proposed for compressed image sensing, where it was labelled Block Compressed Sensing (BCS) after [5]. In that paper, each block of pixels is acquired using the same sensing matrix. At the decoder, each block is processed independently. This approach usually leads to poor visual quality due to blocking artifacts, especially when the number of measurements per block is low. To overcome this problem, the authors of [5] propose to apply two additional processing stages to the reconstructed image, in which the entire image undergoes two sequential iterative algorithms based on Projections onto Convex Sets (POCS) and Iterative Hard Thresholding (IHT). This approach was further improved in [6], where directional transforms were used instead of the conventional DCT or DWT, along with an improved thresholding criterion.

In this paper, we propose a different approach to the reconstruction process. In particular, we propose to improve the reconstruction by imposing smoothness constraints between adjacent blocks, in addition to sparsity constraints for each block, in order to “drive” the reconstruction process towards solutions promoting both smoothness and sparsity at the same time. This result is obtained by the efficient computation of a preview of each block, whose purpose is twofold. First, it supplies an estimate of the block borders, used to impose additional smoothness constraints. Second, it is used as a predictor of the block, allowing the reconstruction of the prediction error only. We dub our algorithm Smoothness-Constrained Block Compressed Sensing (SC-BCS). Simulation results show a significant performance improvement with respect to independent block reconstruction, in terms of PSNR, structural similarity (SSIM) index [7] and visual quality assessment, demonstrating the validity of this approach. Compared to the results obtained in [6], our algorithm achieves better results especially when the number of measurements is low. Moreover, with respect to [5] and [6], both the acquisition and the reconstruction stages can be parallelized, since each block of the image is acquired and reconstructed independently.

II Background

II-A Notation and definitions

We denote (column-) vectors and matrices by lowercase and uppercase boldface characters, respectively. The $(i,j)$-th element of a matrix $\mathbf{A}$ is $A_{ij}$. The transpose of a matrix $\mathbf{A}$ is $\mathbf{A}^\mathsf{T}$. The stack operator $\operatorname{vec}(\mathbf{A})$ denotes the column vector obtained by stacking the columns of $\mathbf{A}$ on top of each other, from left to right. The notations $\|\mathbf{x}\|_0$, $\|\mathbf{x}\|_1$ and $\|\mathbf{x}\|_2$ denote the number of nonzero elements, the $\ell_1$-norm and the Euclidean norm of vector $\mathbf{x}$, respectively. The notation $x \sim \mathcal{N}(\mu, \sigma^2)$ denotes a Gaussian random variable $x$ with mean $\mu$ and variance $\sigma^2$.

II-B Compressed Sensing

In the standard CS framework, introduced in [1], a signal $\mathbf{x} \in \mathbb{R}^N$ which has a sparse representation in some basis $\boldsymbol{\Psi} \in \mathbb{R}^{N \times N}$, i.e., $\mathbf{x} = \boldsymbol{\Psi}\boldsymbol{\theta}$ with $\|\boldsymbol{\theta}\|_0 \ll N$, can be recovered from a smaller vector $\mathbf{y} \in \mathbb{R}^M$, $M < N$, of linear measurements $\mathbf{y} = \boldsymbol{\Phi}\mathbf{x}$, where $\boldsymbol{\Phi} \in \mathbb{R}^{M \times N}$ is the sensing matrix. The optimum solution, seeking the sparsest vector compliant with the measurements, i.e., minimizing $\|\boldsymbol{\theta}\|_0$ subject to $\mathbf{y} = \boldsymbol{\Phi}\boldsymbol{\Psi}\boldsymbol{\theta}$, is an NP-hard problem, but one can resort to a linear programming reconstruction by minimizing the $\ell_1$ norm

$$\hat{\boldsymbol{\theta}} = \arg\min_{\boldsymbol{\theta}} \|\boldsymbol{\theta}\|_1 \quad \text{s.t.} \quad \mathbf{y} = \boldsymbol{\Phi}\boldsymbol{\Psi}\boldsymbol{\theta} \qquad (1)$$

provided that $M$ is sufficiently large with respect to the sparsity of $\boldsymbol{\theta}$.
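
For concreteness, problem (1) can be solved as a linear program; the following minimal sketch (our own illustration, not code from the paper) uses SciPy with the standard positive/negative split $\boldsymbol{\theta} = \mathbf{u} - \mathbf{v}$. The signal sizes and the Gaussian sensing matrix in the toy example are arbitrary choices.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||theta||_1 s.t. A @ theta = y as a linear program.

    Standard reformulation: theta = u - v with u, v >= 0, so that
    ||theta||_1 equals sum(u + v) at the optimum.
    """
    m, n = A.shape
    c = np.ones(2 * n)                       # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])                # A @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

# Toy example: recover a 3-sparse vector from random Gaussian measurements.
rng = np.random.default_rng(0)
n, m, k = 64, 32, 3
theta = np.zeros(n)
theta[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
theta_hat = basis_pursuit(A, A @ theta)
print("recovery error:", np.linalg.norm(theta_hat - theta))
```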

The same algorithm can be used to reconstruct signals which are not exactly sparse, but rather compressible, meaning that the magnitude of their sorted coefficients (in some basis $\boldsymbol{\Psi}$) decays rapidly.

It has been shown in [8] that extracting the elements of $\boldsymbol{\Phi}$ at random from any sub-Gaussian distribution allows a correct reconstruction with overwhelming probability.

An alternative approach to $\ell_1$ norm minimization is the minimization of the Total Variation (TV) norm of the image. The TV norm of a bidimensional signal $\mathbf{X}$, in its isotropic version, can be defined as

$$\|\mathbf{X}\|_{TV} = \sum_{i,j} \sqrt{\left(X_{i+1,j} - X_{i,j}\right)^2 + \left(X_{i,j+1} - X_{i,j}\right)^2}. \qquad (2)$$

Seeking to minimize the TV norm relies on the assumption that the gradient of the image is approximately sparse, hence the TV norm should be small. The reconstruction problem (1) then becomes

$$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \|\mathbf{x}\|_{TV} \quad \text{s.t.} \quad \mathbf{y} = \boldsymbol{\Phi}\mathbf{x}. \qquad (3)$$
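
As a reference for (2), a short NumPy sketch of the isotropic TV norm follows (our own illustration; the handling of the last row and column is a common convention not specified above).

```python
import numpy as np

def tv_norm(X):
    """Isotropic total variation of a 2-D array, as in (2)."""
    dx = np.diff(X, axis=1)[:-1, :]   # horizontal differences X[i, j+1] - X[i, j]
    dy = np.diff(X, axis=0)[:, :-1]   # vertical differences   X[i+1, j] - X[i, j]
    return np.sum(np.sqrt(dx ** 2 + dy ** 2))

# A piecewise-constant image has a much smaller TV norm than a noisy one.
rng = np.random.default_rng(0)
print(tv_norm(np.ones((32, 32))), tv_norm(rng.standard_normal((32, 32))))
```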

III Proposed Algorithm

Traditional BCS techniques rely on independent blockwise image acquisition and independent blockwise reconstruction. This may result in discontinuities in the reconstruction of adjacent blocks, especially when the number of measurements acquired for each block is low; see for example Fig. 1, obtained with a low number of measurements per block. The blocking artifacts have a large impact on the visual quality of the reconstruction. To overcome this problem, we propose a technique, detailed in the following sections, based on additional constraints in the reconstruction problem. The idea is to enforce smoothness between the reconstruction of block borders (here and in the following, the term block borders refers to the pixels belonging to the first and last rows and to the first and last columns of a block) and the contiguous borders of adjacent blocks, in order to reduce blocking artifacts and obtain overall homogeneity. We dub our algorithm Smoothness-Constrained Block Compressed Sensing (SC-BCS).

Fig. 1: Standard independent blockwise reconstruction with a low number of measurements per block

III-A Image acquisition

As we will explain in section III-B, our idea relies on an initial estimation of block borders. We propose the acquisition of the image in an overlapping block fashion, in order to have an initial estimate of satisfactory quality. For each non-overlapping block, we consider 1 extra pixel per side, as depicted in Fig. 2, resulting in an overlap of 2 pixels between adjacent blocks. Each block has size $B \times B$ pixels. It is raster scanned and then measured with a sensing matrix $\mathbf{A}_i$ as $\mathbf{y}_i = \mathbf{A}_i \mathbf{x}_i$, where $\mathbf{y}_i$ is the vector collecting the $M$ measurements of block $i$, $\mathbf{x}_i$ is the raster-scanned block and $\mathbf{A}_i$ is the sensing matrix of block $i$. As for the sensing matrix, we choose a matrix allowing us to obtain an estimate of the block borders before performing CS reconstruction of the block. For this reason, we employ the Dual-Scale Sensing (DSS) matrix [9]. The key property of the DSS matrix is the ability to generate a low-resolution preview of the image with low complexity, while at the same time preserving the properties that enable CS reconstruction. A DSS matrix is obtained as

$$\mathbf{A} = \mathbf{H}\mathbf{D} + \mathbf{E},$$

where $\mathbf{H}$ is an $M \times M$ Hadamard matrix, having elements $H_{ij} \in \{-1, +1\}$, $\mathbf{D}$ is an $M \times B^2$ downsampling operator and $\mathbf{E}$ contains a random pattern. This construction preserves the matrix properties required from the CS point of view, thanks to the contribution of matrix $\mathbf{E}$, and also allows obtaining a fast preview of the image at low resolution, since it minimizes the error between the downsampled version of the signal and the computed preview. The preview is generated as

$$\mathbf{x}_p = \mathbf{H}^{-1}\mathbf{y}, \qquad (4)$$

where $\mathbf{x}_p$ contains the rows of the preview stacked on top of each other. This operation is fast since $\mathbf{H}$ is a Hadamard matrix and the inversion can be implemented by the fast Walsh–Hadamard transform algorithm [10]. Moreover, the use of a Hadamard matrix imposes some constraints on the number of measurements $M$. In particular, $M$ must be a power of 2 and a perfect square, since the preview will be of size $\sqrt{M} \times \sqrt{M}$ pixels.
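
The sketch below illustrates the two ingredients described above: a DSS-like matrix built as a Hadamard part plus a random pattern, and the fast preview of (4). The specific scaling of the random pattern and the 1-D pixel grouping used for downsampling are simplifying assumptions of ours, not the exact construction of [9].

```python
import numpy as np
from scipy.linalg import hadamard

def dss_like_matrix(M, n_pixels, noise_scale=0.1, rng=None):
    """Toy dual-scale sensing matrix A = H @ D + E.

    H: M x M Hadamard matrix (M must be a power of 2).
    D: M x n_pixels averaging downsampler (1-D grouping for simplicity;
       a real implementation would average 2-D pixel neighbourhoods).
    E: small random +/-1 pattern providing CS-friendly randomness.
    """
    rng = np.random.default_rng() if rng is None else rng
    H = hadamard(M).astype(float)
    group = n_pixels // M
    D = np.kron(np.eye(M), np.ones((1, group))) / group
    E = noise_scale * rng.choice([-1.0, 1.0], size=(M, n_pixels))
    return H @ D + E, H

def fast_preview(y, H):
    """Preview of (4): x_p = H^{-1} y, with H^{-1} = H.T / M for a Hadamard H.

    A plain matrix product is used here; the same result can be obtained
    with a fast Walsh-Hadamard transform [10]."""
    return H.T @ y / H.shape[0]

# Toy usage: a 64-measurement preview of a raster-scanned 32x32 block.
rng = np.random.default_rng(1)
block = rng.random(32 * 32)
A, H = dss_like_matrix(64, block.size, rng=rng)
x_preview = fast_preview(A @ block, H)   # approximately the downsampled block
```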

Fig. 2: Overlapping blockwise image acquisition

III-B Image reconstruction

SC-BCS aims at maximizing the reconstruction quality by minimizing the visual effect of blocking artifacts. This is achieved by adding additional smoothness constraints among neighbouring blocks to the reconstruction problem (3). In particular (see Fig. 3 for reference), the reconstruction of the current block is obtained as the solution of the following problem

$$\begin{aligned}
\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \ & \|\mathbf{x}\|_{TV} \\
\text{s.t.} \ & \mathbf{y} = \mathbf{A}\mathbf{x}, \\
& \|\mathbf{x}_T - \mathbf{b}_T\|_2 \le \epsilon_T, \quad \|\mathbf{x}_B - \mathbf{b}_B\|_2 \le \epsilon_B, \\
& \|\mathbf{x}_L - \mathbf{b}_L\|_2 \le \epsilon_L, \quad \|\mathbf{x}_R - \mathbf{b}_R\|_2 \le \epsilon_R,
\end{aligned} \qquad (5)$$

where $\mathbf{x}_T$ is the top row of the current block, $\mathbf{x}_B$ is the bottom row of the current block, $\mathbf{x}_L$ is the leftmost column of the current block, $\mathbf{x}_R$ is the rightmost column of the current block, $\mathbf{b}_T$ is the bottom row of the block on top of the current one, $\mathbf{b}_B$ is the top row of the block below the current one, $\mathbf{b}_L$ is the rightmost column of the block to the left of the current one and $\mathbf{b}_R$ is the leftmost column of the block to the right of the current one. The parameters $\epsilon_T$, $\epsilon_B$, $\epsilon_L$, $\epsilon_R$ are automatically evaluated from the initial border estimation, as explained later. The principle behind (5) is to “drive” the minimization towards solutions enhancing the smoothness between adjacent blocks. The reconstruction problem (5) raises two implementation problems, to which we propose solutions in the following.
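
A convex-programming sketch in the spirit of (5) is given below, using CVXPY. The choice of $\ell_2$ balls for the border constraints, the variable names and the solver defaults are our assumptions for illustration; note that cp.vec stacks columns, so the sensing matrix must follow the same pixel ordering.

```python
import cvxpy as cp
import numpy as np

def reconstruct_block(A, y, borders, eps, B):
    """Smoothness-constrained TV reconstruction of one B x B block.

    A       : (M, B*B) sensing matrix acting on the column-stacked block
    y       : (M,) measurements of the block
    borders : dict of length-B preview estimates of the facing borders of the
              neighbouring blocks, keys 'top', 'bottom', 'left', 'right'
    eps     : dict of tolerances with the same keys, estimated from the preview
    """
    X = cp.Variable((B, B))
    constraints = [
        A @ cp.vec(X) == y,                                       # measurements
        cp.norm(X[0, :] - borders["top"], 2) <= eps["top"],       # top row
        cp.norm(X[-1, :] - borders["bottom"], 2) <= eps["bottom"],
        cp.norm(X[:, 0] - borders["left"], 2) <= eps["left"],
        cp.norm(X[:, -1] - borders["right"], 2) <= eps["right"],
    ]
    problem = cp.Problem(cp.Minimize(cp.tv(X)), constraints)
    problem.solve()
    return X.value
```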

Fig. 3: Similarity constraints between adjacent block borders

The first problem is to obtain an estimate of the borders of all blocks, to be plugged into (5) as $\mathbf{b}_T$, $\mathbf{b}_B$, $\mathbf{b}_L$ and $\mathbf{b}_R$ in order to obtain the reconstruction of each block. The answer to this problem is given by the fast preview enabled by the use of the DSS matrix. Hence, before solving (5) for each block, we first reconstruct a low-resolution preview using (4), then we interpolate it to obtain a preview of the block at the original resolution. Since the blocks are acquired in an overlapping fashion, we merge the previews of the blocks by properly averaging neighbouring borders to obtain a preview of the entire image at the original size.
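
For completeness, a small NumPy sketch of the merging step follows (our illustration; the block geometry, the one-pixel overlap and the handling of image boundaries are assumptions).

```python
import numpy as np

def merge_previews(previews, interior=32, pad=1):
    """Average-merge a grid of interpolated block previews into one image.

    `previews` is a list of lists of (interior + 2*pad)-sized square arrays,
    one per block, laid out on a regular grid. Overlapping pixels are
    averaged; the sizes here are illustrative assumptions.
    """
    rows, cols = len(previews), len(previews[0])
    side = interior + 2 * pad
    # Enlarged canvas: blocks extend `pad` pixels beyond the tiled interiors.
    canvas = np.zeros((rows * interior + 2 * pad, cols * interior + 2 * pad))
    weight = np.zeros_like(canvas)
    for r in range(rows):
        for c in range(cols):
            y0, x0 = r * interior, c * interior
            canvas[y0:y0 + side, x0:x0 + side] += previews[r][c]
            weight[y0:y0 + side, x0:x0 + side] += 1.0
    merged = canvas / weight
    return merged[pad:-pad, pad:-pad]   # crop back to the original image size
```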

The second problem is related to the fact that the measurement vector $\mathbf{y}$ contains measurements of pixels belonging to the current block as well as to the borders of its neighbouring blocks, while problem (5) aims to reconstruct the $i$-th block without employing the pixels belonging to neighbouring blocks. Again, to solve this problem we use the previously estimated preview to subtract the contribution of the pixels belonging to neighbouring blocks from $\mathbf{y}$:

$$\tilde{\mathbf{y}} = \mathbf{y} - \mathbf{A}_{out}\,\mathbf{x}_{p,out},$$

where $\mathbf{x}_{p,out}$ contains only the pixels of the preview belonging to neighbouring blocks and $\mathbf{A}_{out}$ contains the corresponding columns of the sensing matrix.

Finally, we solve (5) for each block, using $\tilde{\mathbf{y}}$ and $\mathbf{A}_{in}$, where $\mathbf{A}_{in}$ contains the columns of the sensing matrix whose indexes correspond to the pixels of the $i$-th block (excluding the pixels belonging to neighbouring blocks due to the overlap). Fig. 4 visually explains the raster scanning of a block, highlighting the indexes belonging to the block borders and their corresponding columns in the sensing matrix.
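
The sketch below shows the corresponding column split and subtraction (index conventions and names are our own; inside_mask marks the pixels of the current block within the raster-scanned overlapping block).

```python
import numpy as np

def split_and_subtract(A, y, preview_block, inside_mask):
    """Remove the contribution of pixels borrowed from neighbouring blocks.

    A             : (M, B*B) sensing matrix of the overlapping block
    y             : (M,) measurements of the overlapping block
    preview_block : (B*B,) raster-scanned preview of the overlapping block
    inside_mask   : (B*B,) boolean, True for pixels of the current block,
                    False for the 1-pixel frame borrowed from neighbours
    """
    A_in, A_out = A[:, inside_mask], A[:, ~inside_mask]
    y_tilde = y - A_out @ preview_block[~inside_mask]   # subtract neighbours
    return A_in, y_tilde
```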

Fig. 4: Separation between the block interior and its borders

Finally, we propose to exploit the entire available preview (and not only its borders) as a predictor of the block, reconstructing with (5) only the prediction error. This is justified by the assumption that, if the predictor is accurate enough, the prediction error will be more compressible than the signal itself. The reconstruction problem then becomes

$$\begin{aligned}
\hat{\mathbf{e}} = \arg\min_{\mathbf{e}} \ & \|\mathbf{e}\|_{TV} \\
\text{s.t.} \ & \tilde{\mathbf{y}} - \mathbf{A}_{in}\mathbf{x}_{p,in} = \mathbf{A}_{in}\mathbf{e}, \\
& \|\mathbf{e}_T + \mathbf{x}_{p,T} - \mathbf{b}_T\|_2 \le \epsilon_T, \quad \|\mathbf{e}_B + \mathbf{x}_{p,B} - \mathbf{b}_B\|_2 \le \epsilon_B, \\
& \|\mathbf{e}_L + \mathbf{x}_{p,L} - \mathbf{b}_L\|_2 \le \epsilon_L, \quad \|\mathbf{e}_R + \mathbf{x}_{p,R} - \mathbf{b}_R\|_2 \le \epsilon_R,
\end{aligned} \qquad (6)$$

where $\mathbf{X}_p$ corresponds to the rearrangement of the prediction $\mathbf{x}_{p,in}$ in matrix form, $\mathbf{e}_T$, $\mathbf{e}_B$, $\mathbf{e}_L$ and $\mathbf{e}_R$ refer to the error matrix $\mathbf{E}$ (and $\mathbf{x}_{p,T}$, $\mathbf{x}_{p,B}$, $\mathbf{x}_{p,L}$, $\mathbf{x}_{p,R}$ to $\mathbf{X}_p$) in the same way as the corresponding quantities refer to the block in (5), and $\epsilon_T$, $\epsilon_B$, $\epsilon_L$, $\epsilon_R$ are evaluated as in (5). The final block reconstruction is then obtained as $\hat{\mathbf{x}} = \mathbf{x}_{p,in} + \hat{\mathbf{e}}$.
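
For clarity, the substitution behind the residual measurement constraint in (6) is the following (our own rephrasing of the step, with the notation introduced above):

```latex
\tilde{\mathbf{y}} = \mathbf{A}_{in}\,\mathbf{x}
                   = \mathbf{A}_{in}\left(\mathbf{x}_{p,in} + \mathbf{e}\right)
\;\Longrightarrow\;
\tilde{\mathbf{y}} - \mathbf{A}_{in}\,\mathbf{x}_{p,in} = \mathbf{A}_{in}\,\mathbf{e},
\qquad
\hat{\mathbf{x}} = \mathbf{x}_{p,in} + \hat{\mathbf{e}}.
```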

SC-BCS reconstruction is summarized in Algorithm 1. We remark that the most complex operation, consisting in the solution of (6), does not require any interaction among the blocks and hence can be run fully in parallel. On the other hand, other approaches such as BCS-SPL in [6] require global operations that need synchronization among blocks.

for each block $i$ do
     Compute the low-resolution preview $\mathbf{x}_{p,i} = \mathbf{H}^{-1}\mathbf{y}_i$ using (4)
     Interpolate $\mathbf{x}_{p,i}$ to the full block size
end for
Merge the overlapping previews to obtain a preview of the original image
for each block $i$ do
     Subtract from $\mathbf{y}_i$ the contribution of the pixels belonging to neighbouring blocks
     Reconstruct the block using (6)
end for
Algorithm 1: Proposed algorithm

IV Numerical Results

| Scheme | Meas. per block | Lena | Goldhill | Peppers | Barbara | Mandrill | Boat | Couple |
| Independent | 73 | 25.79 / 0.800 | 25.56 / 0.737 | 25.16 / 0.808 | 21.95 / 0.691 | 19.98 / 0.553 | 23.49 / 0.713 | 23.46 / 0.693 |
| SC-BCS | 64 | 28.49 / 0.893 | 27.32 / 0.837 | 27.59 / 0.898 | 23.01 / 0.768 | 20.82 / 0.662 | 25.27 / 0.815 | 25.07 / 0.803 |
| BCS-SPL-DDWT | 73 | 26.54 / 0.847 | 26.14 / 0.761 | 27.54 / 0.880 | 22.10 / 0.731 | 20.25 / 0.551 | 24.09 / 0.749 | 24.00 / 0.724 |
| SC-BCS Baseline | 64 | 28.31 / 0.889 | 27.30 / 0.832 | 27.56 / 0.899 | 23.13 / 0.773 | 20.94 / 0.658 | 25.20 / 0.810 | 25.07 / 0.798 |
| SC-BCS Genie | 64 | 28.79 / 0.903 | 27.64 / 0.848 | 28.54 / 0.912 | 23.38 / 0.789 | 21.04 / 0.678 | 25.65 / 0.834 | 25.52 / 0.822 |
| Independent | 289 | 31.08 / 0.939 | 29.61 / 0.911 | 30.14 / 0.928 | 24.56 / 0.846 | 22.62 / 0.813 | 28.48 / 0.899 | 27.72 / 0.900 |
| SC-BCS | 256 | 32.21 / 0.950 | 29.93 / 0.919 | 31.29 / 0.939 | 24.06 / 0.822 | 22.81 / 0.809 | 28.34 / 0.906 | 28.34 / 0.909 |
| BCS-SPL-DDWT | 289 | 33.21 / 0.960 | 30.13 / 0.917 | 33.49 / 0.961 | 25.38 / 0.875 | 22.71 / 0.798 | 29.24 / 0.916 | 28.33 / 0.906 |
| SC-BCS Baseline | 256 | 31.63 / 0.946 | 29.99 / 0.918 | 30.93 / 0.940 | 24.58 / 0.842 | 22.48 / 0.815 | 28.19 / 0.905 | 28.11 / 0.902 |
| SC-BCS Genie | 256 | 31.78 / 0.957 | 30.22 / 0.933 | 31.14 / 0.949 | 24.82 / 0.870 | 22.85 / 0.839 | 28.47 / 0.922 | 28.40 / 0.925 |
TABLE I: Obtained PSNR (dB) / SSIM index for various settings
Fig. 5: Peppers, top to bottom: blockwise independent reconstruction (73 measurements per block) and SC-BCS (64 measurements per block)

SC-BCS has been tested using several standard grayscale test images. Each image is divided into blocks whose non-overlapping interiors tile the image, with 1 extra pixel on each side, so that each block of $B \times B$ pixels overlaps its neighbours by 2 pixels; this corresponds to splitting the image into a grid of overlapping blocks. We test the proposed algorithm with $M = 64$ and $M = 256$ measurements per block, corresponding to two different compression rates. We compare with two systems performing non-overlapping acquisition using a Gaussian random sensing matrix, whose elements are drawn independently from a zero-mean Gaussian distribution. The first one reconstructs each block independently using (3), while the second one is the BCS-SPL-DDWT scheme [6] (the software provided by the authors at http://www.ece.msstate.edu/fowler/BCSSPL/ has been used), where independent reconstruction is followed by a global iterative image processing stage employing Wiener filtering and thresholding of the coefficients of the image projected on a directional transform. For a fair comparison, since the number of non-overlapping blocks is lower, for these systems we take a greater number of measurements per block (73 and 289 for the two settings, respectively) so that the total number of measurements is the same as in the overlapping case.

Table I summarizes the results in terms of PSNR and SSIM index [7]. SC-BCS Baseline refers to the scheme not using the preview as a predictor, i.e., (5), while SC-BCS Genie refers to an ideal scheme where reconstruction (5) is performed with perfect knowledge of the block borders. Results show that the proposed idea leads to significant gains. When the number of measurements per block is low, SC-BCS provides significant gains with respect to the independent case, proving the validity of the approach; moreover, it outperforms BCS-SPL-DDWT on all images. On the other hand, when the number of measurements per block is high, the gains with respect to independent reconstruction are lower and BCS-SPL-DDWT achieves the same or better performance. The reason is that when $M$ is large, even independent reconstruction leads to good estimates, hence the contribution of the additional smoothness constraints is less significant. The performance obtained by SC-BCS Baseline and SC-BCS Genie is reported to show that imperfect knowledge of the block borders implies a loss of less than 1 dB with respect to the ideal case where the borders are perfectly known.
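
As a reference for how the figures in Table I can be computed, a short sketch using scikit-image follows (our choice of library; the paper does not specify its implementation of the metrics, and 8-bit images in [0, 255] are assumed).

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(reference, reconstruction):
    """Compute the PSNR (dB) and SSIM index used in Table I.

    Both inputs are assumed to be grayscale images with values in [0, 255]."""
    ref = reference.astype(np.float64)
    rec = reconstruction.astype(np.float64)
    psnr = peak_signal_noise_ratio(ref, rec, data_range=255)
    ssim = structural_similarity(ref, rec, data_range=255)
    return psnr, ssim
```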

We conclude by showing some reconstructed images to assess the visual quality of the reconstruction. In fact, we stress that we are addressing a visual quality problem, and an increase in PSNR does not necessarily imply a reduction in the blockiness of the image. We report only the low-measurement case, where the differences among the various reconstruction schemes are more easily noticeable; however, we remark that fewer blocking artifacts can be noticed in the high-measurement case as well. Fig. 5 (top) shows the result of independent blockwise reconstruction, while Fig. 5 (bottom) shows the reconstructed image provided by SC-BCS. It can be clearly noticed that independent reconstruction yields the poorest visual quality, while our proposed scheme shows fewer blocking artifacts, with an acceptable visual quality at a high compression rate. Fig. 6 shows a visual comparison among independent recovery, SC-BCS and BCS-SPL-DDWT. It can be noticed that SC-BCS achieves the best visual quality, with fewer blocking artifacts with respect to the independent case. On the other hand, in this low-measurement regime BCS-SPL-DDWT reconstruction suffers from localized clusters of out-of-scale pixel values, which significantly deteriorate the quality of the image, and tends to flatten image details.

V Conclusions

In this paper we proposed a method to address a visual quality issue of block-based compressed sensing. Blocking artifacts due to independent reconstruction of the blocks can severely affect the quality of the final image. We showed that imposing additional constraints in the reconstruction problem makes it possible to exploit prior information on the smoothness of the image across block borders. We solved the dependency problem whereby the constraints depend on the reconstructions themselves by using a fast preview of each block; by combining the overlapping parts of the previews, it was possible to enforce a smooth reconstruction. Finally, we showed that the already available preview can be used as an additional signal model besides sparsity: this information was used as a prediction of the final recovered image to improve the quality of the reconstruction.

References

  • [1] E. Candès and T. Tao, “Near-optimal signal recovery from random projections: Universal encoding strategies?” IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406–5425, 2006.
  • [2] D. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
  • [3] J. Tropp and A. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.
  • [4] G. Coluccia, S. K. Kuiteing, A. Abrardo, M. Barni, and E. Magli, “Progressive compressed sensing and reconstruction of multidimensional signals using hybrid transform/prediction sparsity model,” IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 2, no. 3, pp. 340–352, 2012.
  • [5] L. Gan, “Block compressed sensing of natural images,” in Proc. 15th International Conference on Digital Signal Processing (DSP), 2007, pp. 403–406.
  • [6] S. Mun and J. E. Fowler, “Block compressed sensing of images using directional transforms,” in Proc. 16th IEEE International Conference on Image Processing (ICIP), 2009, pp. 3021–3024.
  • [7] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
  • [8] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, “A simple proof of the restricted isometry property for random matrices,” Constructive Approximation, vol. 28, no. 3, pp. 253–263, 2008.
  • [9] A. C. Sankaranarayanan, C. Studer, and R. G. Baraniuk, “CS-MUVI: Video compressive sensing for spatial-multiplexing cameras,” in Proc. IEEE International Conference on Computational Photography (ICCP), 2012, pp. 1–10.
  • [10] B. Fino and V. Algazi, “Unified matrix treatment of the fast Walsh–Hadamard transform,” IEEE Transactions on Computers, vol. C-25, no. 11, pp. 1142–1146, 1976.