Review of Algorithms for Compressive Sensing of Images

08/05/2019 · by Yoni Sher, et al.

We provide a comprehensive review of classical algorithms for compressive sensing of images, focused on total variation methods, with a view to application in LiDAR systems. Our primary focus is providing a full review for beginners in the field, as well as simulating the kind of noise found in real LiDAR systems. To this end, we provide an overview of the theoretical background, a brief discussion of various considerations that come into play in compressive sensing, and a standardized comparison of off-the-shelf methods, intended as a quick-start guide to choosing algorithms for compressive sensing applications.







1 Introduction and theoretical background

This paper is intended as a "how-to" guide for beginners in the field of compressive sensing, giving a broad introduction to the field and the classical algorithms available. The comparative section is written in the spirit of [15, 2] and others, but with a focus on the running time of the algorithms "as provided", for the benefit of users without the resources to develop and deploy custom software packages. We focus on classical algorithms requiring no parallelization or special hardware support, suitable for generic workstation environments.
We begin with a short explanation of Compressive Sensing: the theoretical background and the assumptions it is based on. We then give a qualitative discussion of the major algorithms for sparse image reconstruction, from standard algorithms available with every scientific computation package to optimized special-purpose methods. Finally, we present the results of comprehensive simulations of selected algorithms under a variety of parameters, and a quantitative comparison of the different methods.

1.1 Sub-sampling theory

Compressive Sensing is a method of reconstructing images from fewer measurements than the number of pixels in the image. It is based on two assumptions: that the image has a sparse representation in some basis, and that the measurement basis is incoherent with the basis in which the image is sparse [3].
Sparsity gives rise to the following argument: the Shannon entropy of an image with N pixels of which only S are non-zero is [6]

    H ≈ S · log₂(N/S)

We use this as a lower bound for the number of measurements required to reconstruct a signal. We will show that we can get very close to this bound.
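As a concrete check, the following minimal Python sketch (assuming the bound takes the standard form S·log₂(N/S), as above) computes this information lower bound for a given sparsity level:

```python
import math

def information_bound(n_pixels, n_nonzero):
    """Shannon-entropy lower bound on the amount of measurement
    information needed for an S-sparse signal of length N: S * log2(N / S)."""
    return n_nonzero * math.log2(n_pixels / n_nonzero)

# A 1024-pixel image with 64 non-zero coefficients needs at least
# 64 * log2(1024 / 64) = 256 bits of measurement information.
print(information_bound(1024, 64))  # -> 256.0
```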
Incoherence [4, 6]: two bases are mutually incoherent if the inner product of any vector from one with any vector from the other is not too large. If Φ and Ψ are bases of dimension n, the mutual incoherence is:

    μ(Φ, Ψ) = √n · maxᵢ,ⱼ |⟨φᵢ, ψⱼ⟩|

Natural images tend to be sparse in frequency bases: Fourier or wavelet, for example. Random measurement bases will be approximately incoherent with any basis with high probability (and are therefore suitable for compressed sensing).
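The incoherence measure is easy to compute numerically. The sketch below (a Python illustration, using a Sylvester-construction Hadamard basis as a stand-in for a generic measurement basis) verifies that the pixel basis and the normalized Hadamard basis are maximally incoherent, μ = 1:

```python
import numpy as np

def mutual_incoherence(phi, psi):
    """mu(Phi, Psi) = sqrt(n) * max |<phi_i, psi_j>| for two orthonormal
    bases given as n x n matrices with basis vectors in the rows."""
    n = phi.shape[0]
    return np.sqrt(n) * np.max(np.abs(phi @ psi.T))

def hadamard(n):
    """Normalized Hadamard matrix (n a power of 2), via Sylvester's recursion."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h / np.sqrt(n)

# Every inner product between a pixel-basis vector and a Hadamard-basis
# vector has magnitude 1/sqrt(n), so mu = sqrt(n) * 1/sqrt(n) = 1.
mu = mutual_incoherence(np.eye(8), hadamard(8))
print(mu)
```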

1.2 Reconstruction theory

Let x be some signal, and Ψ be some basis in which x is S-sparse. We use Ψ to mean the basis and its matrix representation interchangeably. Let y be a set of m measurements of x in some basis Φ, and let

    m ≥ C · μ²(Φ, Ψ) · S · log(n/δ)

Then with probability greater than 1 − δ:

    x = argmin ||Ψx̃||₁  subject to  Φx̃ = y

This implies that if we take O(S log n) measurements of x in some random basis, we can expect to reconstruct x with high probability.
Practically, a small multiple of S measurements is usually sufficient to reconstruct the signal exactly, and signals can be approximately reconstructed from even fewer measurements.
The measurements used in this paper are random elements of the Dragon wavelet group [5], which can be computed efficiently (in O(n log n) time) and in-place for optimal memory use. The Dragon wavelets resemble fractal noise patterns, and so are highly incoherent with natural images as well as other signals we would be likely to measure.
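The acquisition model itself can be simulated in a few lines. The sketch below uses random ±1 patterns as a stand-in for the Dragon wavelet patterns (which are not reproduced here); each measurement is the inner product of the scene with one pattern:

```python
import numpy as np

# Sketch of the acquisition model: each measurement is the inner product
# of the scene with one random +/-1 pattern (a stand-in for the Dragon
# wavelet patterns used in the paper).
rng = np.random.default_rng(0)

n = 64 * 64          # number of pixels
m = n // 4           # 4x compression: a quarter as many measurements
scene = rng.random(n)

patterns = rng.choice([-1.0, 1.0], size=(m, n))   # measurement matrix A
measurements = patterns @ scene                    # y = A x

print(measurements.shape)  # -> (1024,)
```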

1.3 Comparison scenarios

Next we will present the ideas behind a number of reconstruction methods, a short discussion of how they work and a quantitative comparison between them. This is done for three scenarios:

  • Algorithms for reconstruction under the assumption of sparsity: This is the original premise of compressive sensing; however, it is only applicable to signals that are already sparse. Natural images are not sparse in the ℓ0 sense, but usually have a sparse representation in other bases such as the Discrete Cosine Transform (DCT, used in JPEG compression) or various wavelets (such as Haar wavelets, used for JPEG2000 compression). This leads to the proposal to "sparsify" an image and then use compressive sensing; a proposal which is discussed in section 2.

  • Algorithms for reconstruction under Total Variation (TV) sparsity constraints: Total variation is an over-complete basis in which natural images are sparse or nearly sparse (in essence, total variation is the ℓ1 norm of the discrete gradient of an image). While the standard proof of correct reconstruction cannot be applied, the argument from incoherence holds. In general, algorithms that do not depend directly on the orthogonality of the measurement basis can also be easily modified to take advantage of any sparsifying norm or pseudo-norm.

  • Reconstruction under noise: Most of the literature on compressive sensing focuses on the pure theoretical problem, where measurements in the compressive base are noiseless. This approach is easy to simulate, but does not cover the use-case where compressive sensing is most appropriate: noisy environments where a complete set of measurements is not available or not practical. A prime example is low-light imaging, where measurement noise is inversely proportional to the time over which averaging is done. In these scenarios, the improvement in acquisition time provided by compressive sensing would be negated by the increased time for each measurement required to achieve low noise.
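For reference, the TV pseudo-norm used in the second scenario can be computed directly as the sum of absolute differences between neighbouring pixels. A minimal sketch (anisotropic TV, one of several common variants):

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation: the l1 norm of the discrete gradient,
    i.e. the sum of absolute differences between neighbouring pixels."""
    dx = np.abs(np.diff(img, axis=1)).sum()  # horizontal differences
    dy = np.abs(np.diff(img, axis=0)).sum()  # vertical differences
    return dx + dy

# A flat image has zero TV; a single vertical step edge of height 1
# spanning 4 rows has TV = 4.
flat = np.zeros((4, 4))
edge = np.hstack([np.zeros((4, 2)), np.ones((4, 2))])
print(total_variation(flat), total_variation(edge))  # -> 0.0 4.0
```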

2 ℓ1 norm minimizing algorithms

In order for compressive sensing minimizing the ℓ1 norm of a signal to work, the signal we are reconstructing must be sparse. This is not true for natural images, although the success of image compression algorithms based on re-sampling (such as JPEG and JPEG2000) indicates that there exists a basis in which most natural images are sparse. Abusing notation as we did in the theoretical background, let Φ be our sampling basis as a matrix and Ψ be a sparsifying basis (such as the DCT) as a matrix. We now create a new reconstruction problem, denoting z = Ψx, the image in the sparsifying basis, so that y = ΦΨ⁻¹z are our measurements. The optimization problem becomes:

    min ||z||₁  subject to  ΦΨ⁻¹z = y

However, we cannot get direct access to z without already having the original signal x. We would like to re-sample using Φ and then apply the sparsifying basis; however, Ψ does not in general commute with Φ, so ΨΦx ≠ ΦΨx. For the sake of simulation, we can ignore this fact and attempt to reconstruct by computing Ψx and Ψ⁻¹z as pre- and post-calculation steps, as is done in the comparison part of this section, but this is not applicable to a real-world system.
Another option (inspired by [10]) is to move the sensing matrix, rather than the signal, into the sparsifying basis. This takes the form:

    min ||z||₁  subject to  (ΨΦΨ⁻¹)z = y

except that Φ is not a square matrix. While mathematical tools exist to approximate such a conjugated operator, they do not have a closed form and are not efficiently computable.
To conclude this section, we give a short history of relevant optimization algorithms, focusing on the incremental improvements each one provides. However, ℓ1 minimization is not in general optimal for imaging applications, and the algorithms here are included merely for completeness.

2.1 Off-the-shelf algorithms

First, a brief review of the most standard optimization algorithms, available as library functions in most scientific programming environments. Many of these algorithms (the simplex method, interior-point methods, and the ellipsoid algorithm, for example) are packaged together behind a single library function, as in MATLAB. The disadvantages of these methods are their slow convergence rate and prohibitive memory requirements, which make them unsuitable for large problems. Additionally, these methods have little to no tolerance for noisy measurements: when applied to imperfect data, they often fail to converge. We therefore start our more in-depth review with more advanced algorithms.

2.1.1 Orthogonal Matching Pursuit (OMP)

OMP is conceptually based on building a dictionary of simple image elements, and recovering a new image by combining the best matching elements of the dictionary [13, 12]. An iterative process finds the best matching element in the dictionary and adds it to the reconstruction until the residual measurement is sufficiently small.

This method is often proposed for compressive sensing reconstruction, though it is not entirely appropriate: firstly, for a dictionary lookup method to work well the dictionary should be over-complete, which means we must take many more samples than indicated by the information bound. Secondly, the reconstruction guarantee is not independent of the signal as it is in linear programming. For these reasons, OMP is better suited to image de-noising, where it is often used.
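The OMP iteration described above can be sketched in a few lines: greedily pick the dictionary column most correlated with the residual, then re-fit by least squares on the selected support. The dictionary and signal here are illustrative:

```python
import numpy as np

def omp(A, y, n_nonzero, tol=1e-10):
    """Orthogonal Matching Pursuit sketch: greedy atom selection followed
    by a least-squares re-fit restricted to the chosen support."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        if np.linalg.norm(residual) < tol:
            break
        # atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # least-squares fit on the selected atoms
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(A.shape[1])
        x[support] = coef
        residual = y - A @ x
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
A /= np.linalg.norm(A, axis=0)        # unit-norm dictionary columns
x_true = np.zeros(40)
x_true[[3, 17, 29]] = [2.0, -1.5, 1.0]
y = A @ x_true

x_hat = omp(A, y, n_nonzero=3)
print(np.linalg.norm(y - A @ x_hat))  # residual of the fit
```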

2.1.2 Least absolute shrinkage and selection operator (LASSO)

LASSO is a small alteration of the standard linear optimization problem that compensates for noisy measurements [4, 6]. Formally, the problem is defined as:

    min ||Ax − y||₂²  subject to  ||x||₁ ≤ τ

where τ is the assumed bound on the ℓ1 norm of the signal. In principle this can be seen as searching for a signal with a small ℓ1 norm that matches the measurements with minimal noise. The current state-of-the-art LASSO algorithms are significantly faster and use less memory than standard linear programming and OMP, but memory usage still scales prohibitively with image size, with mega-pixel images well beyond the resources of widely available computing systems.
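For illustration, the penalized form of LASSO (equivalent to the constrained form above for a suitable pairing of the parameters) can be solved with a simple iterative shrinkage-thresholding loop. This is a minimal sketch, not a state-of-the-art solver:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Iterative shrinkage-thresholding for the penalized LASSO problem
    min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x

# With A = I the LASSO solution is just soft-thresholding of the data.
x_lasso = ista(np.eye(2), np.array([3.0, 0.5]), lam=1.0)
print(x_lasso)  # -> [2. 0.]
```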

2.2 Gradient projection for sparse reconstruction (GPSR)

GPSR is a newer algorithm developed specially for compressed sensing [6, 9]. This approach formulates the problem as a bound-constrained quadratic program. By defining x = u − v with u, v ≥ 0, we get ||x||₁ = 1ᵀu + 1ᵀv, and we can recast the optimization problem in the form:

    min ½||A(u − v) − y||₂² + λ(1ᵀu + 1ᵀv)  subject to  u, v ≥ 0
The GPSR algorithm now performs each minimization step by searching along the negative gradient of the loss function, projected onto the non-negative orthant, for a point that is within the problem bounds. There is also a variant of the algorithm based on the approach of Barzilai and Borwein [1], which does not guarantee a decrease in the loss function at every step but tends to converge faster.
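The splitting described above can be sketched directly: write x = u − v with u, v ≥ 0 and run projected gradient descent on the resulting bound-constrained quadratic program. This is a bare-bones illustration, without GPSR's line search or Barzilai-Borwein steps:

```python
import numpy as np

def gpsr_sketch(A, y, lam, n_iter=500):
    """GPSR-style reformulation: x = u - v with u, v >= 0, so that
    ||x||_1 = sum(u) + sum(v); solved by projected gradient descent."""
    n = A.shape[1]
    u = np.zeros(n)
    v = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        r = A @ (u - v) - y                     # residual
        grad_u = A.T @ r + lam
        grad_v = -A.T @ r + lam
        u = np.maximum(u - step * grad_u, 0.0)  # project onto u >= 0
        v = np.maximum(v - step * grad_v, 0.0)  # project onto v >= 0
    return u - v

# Same toy problem as the LASSO sketch: with A = I the answer is
# soft-thresholding of the data.
x_gpsr = gpsr_sketch(np.eye(2), np.array([3.0, 0.5]), lam=1.0)
print(x_gpsr)  # -> [2. 0.]
```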

2.3 Spectral Projected Gradient for ℓ1 minimization (SPGL1)

SPGL1 is based on the observation that in, e.g., the LASSO formulation, if τ is a controllable parameter, the expression to minimize defines a trade-off curve between the ℓ1 norm of the signal and the least-squares fit to the measurements. The crucial realization for this algorithm relies on introducing another formulation of the problem, Basis Pursuit De-Noising (BPDN):

    min ||x||₁  subject to  ||Ax − y||₂ ≤ σ

It can now be shown that the dual solution of the LASSO subproblem yields information on how to update the parameter τ to obtain a solution much closer to the one given by the BPDN formulation. These two solutions meet on the desired trade-off curve, giving an optimal solution for the chosen values of τ and σ.

2.3.1 Theoretical conclusions

GPSR and SPGL1 both present a striking advantage over their predecessors: the iterative step only requires matrix-vector products with A and Aᵀ, making them efficient in memory and computation time. This property can be utilized further by providing measurement matrices that allow for fast vector products via recursive structure, such as the Hadamard and Fourier transforms. The algorithms introduced in the next section also have this advantage.
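An example of such a recursively structured transform is the fast Walsh-Hadamard transform, sketched below, which applies an n × n measurement matrix in O(n log n) time without ever storing it:

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform (unnormalized, natural order),
    O(n log n) for n a power of two: the kind of recursive-structure
    transform that lets GPSR and SPGL1 avoid storing A explicitly."""
    a = np.asarray(a, dtype=float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

# The unnormalized Hadamard matrix satisfies H @ H = n * I, so applying
# the transform twice and dividing by n recovers the input.
x = np.array([1.0, 0.0, 2.0, -1.0])
print(fwht(fwht(x)) / len(x))
```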

2.4 Practical comparison

The following performance comparison is provided for completeness: GPSR and SPGL1 (described here) are compared with L1 Magic and NESTA (described in the next section) for a variety of images, benchmarked for speed (Fig. 1). The times shown are for a compression ratio of 80%; reconstruction time increases with the number of measurements. Code profiling shows two causes for this: firstly, longer measurement vectors directly affect calculation time, because more operations are needed to process them. Secondly, with less constraint information the optimum described by the measurements is reached in fewer iterations and convergence stops.
Reconstruction quality is identical between the algorithms in this setting (Fig. 2), and one example is shown for illustration (Fig. 3). All the images used for comparison can be found in the appendix.

Figure 1: Comparison of runtime for the algorithms presented.
Figure 2: ℓ2 error for different images, for different numbers of measurements.
(a) Image #1, reconstructed using NESTA
(b) Image #7, reconstructed using NESTA
Figure 3: Sample images. Even at this number of measurements, the reconstruction is imperfect. While the reasons for this are beyond the scope of this paper, some ideas are given in section 5.

3 Algorithms capable of total variation minimization

3.1 L1 Magic

The L1 Magic package is not a single algorithm but several, developed together as an educational tool to prove the computational tractability of compressive sensing and reconstruction from incomplete measurements. While these algorithms are neither optimized nor fully robust, they are the package of choice for many educators in the field of compressive sensing. The package can be divided into two primary parts: a set of solvers using a primal-dual approach for linear problems (similar in concept to 2.3), and a pseudo-norm solver using a log-barrier algorithm, formulated as second-order cone problems. The algorithms within each part differ only by the Newton step used to generate the gradient followed in each iteration until convergence is reached.
This package implements standard algorithms remarkably well and is very suitable for educational purposes thanks to its simple structure. However, its lack of optimization and robustness makes it suboptimal for large-scale problems. Specifically, both the ℓ1 and TV implementations fail entirely for ill-posed problems, such as sampling near the information content of the image (as described in section 1.2) or measurements with noise, while other algorithms show varying degrees of success (section 4).

3.2 Nesterov's algorithm (NESTA)

The NESTA algorithm is based on a method developed by Yurii Nesterov for deriving optimization algorithms that achieve a convergence rate he had proved optimal two decades earlier [8]. The method combines smoothing with an accelerated gradient to achieve a convergence rate of ε ∝ 1/k² (where ε is the residual and k is the number of steps) [2].

Nesterov's algorithm can be applied to any smooth, convex loss function over a convex feasible set. The function's derivative is assumed to be Lipschitz, and the algorithm then iteratively estimates three sequences:

xₖ: the minimum of the loss function,
yₖ: a Lipschitz-bounded next guess for xₖ, and
zₖ: a smoothed gradient step, which takes into account all the previous steps and a proximity function that enforces the convex set's bounds (for constrained problems).
NESTA is the only algorithm presented in this paper with a theoretical convergence rate better than O(1/k), and it is capable of both ℓ1 and total variation minimization.
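The core accelerated-gradient pattern underlying this method (a gradient step at an extrapolated point followed by a momentum update) can be sketched as follows. This is plain Nesterov acceleration on a smooth loss, not the full NESTA algorithm with smoothing and proximity functions:

```python
import numpy as np

def nesterov_minimize(grad, L, x0, n_iter=2000):
    """Nesterov's accelerated gradient sketch for a smooth convex loss
    with L-Lipschitz gradient: gradient step at the extrapolated point,
    then momentum extrapolation. Achieves the optimal O(1/k^2) rate."""
    x_prev = x0.copy()
    y = x0.copy()
    x = x0.copy()
    for k in range(n_iter):
        x = y - grad(y) / L                     # gradient step at y_k
        y = x + (k / (k + 3.0)) * (x - x_prev)  # momentum extrapolation
        x_prev = x
    return x

# Minimize 0.5*x'Qx - b'x with Q = diag(1, 10); the minimizer is Q^{-1} b.
Q = np.diag([1.0, 10.0])
b = np.array([1.0, 10.0])
x_min = nesterov_minimize(lambda x: Q @ x - b, L=10.0, x0=np.zeros(2))
print(x_min)  # close to [1, 1]
```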

3.3 TVAL3

The TVAL3 algorithm is based on an augmented Lagrangian multiplier approach, using an alternating-direction non-monotone line search to minimize an augmented Lagrangian type loss function [6, 7, 15]. TVAL3 can solve any optimization problem of the form:

    min f(w, x)  subject to  g(w, x) = 0

under the assumption that g is differentiable with respect to w and x and f is differentiable with respect to x (but not necessarily with respect to w). In our case, we use f(w, x) = Σᵢ ||wᵢ||₁ and the constraints wᵢ = Dᵢx (where Dᵢ is the discrete gradient at pixel i) and Ax = y. The problem is then approximated by the following augmented Lagrangian form:

    L(w, x) = Σᵢ ( ||wᵢ||₁ − σᵢᵀ(Dᵢx − wᵢ) + (βᵢ/2)||Dᵢx − wᵢ||₂² ) − λᵀ(Ax − y) + (μ/2)||Ax − y||₂²

which can be minimized iteratively under the assumption of uneven complexity: minimizing with respect to w is of much lower complexity than minimizing with respect to x.
TVAL3 alternates between minimizing the augmented Lagrangian with the current Lagrange multipliers and updating the Lagrange multipliers σᵢ and λ, until sufficient convergence is reached.
While difficult to compare analytically to other algorithms on account of the alternation between the primal and dual problems, TVAL3 is remarkably fast. The algorithm's inability to optimize ℓ1 problems is not a shortcoming for imaging applications, as noted at the beginning of section 2.
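The alternating pattern (block minimizations followed by a multiplier update) can be illustrated on a toy equality-constrained problem; this sketch shows the mechanism only and is not TVAL3 itself:

```python
# Toy augmented-Lagrangian alternation (not TVAL3 itself): solve
#   min 0.5*(x - a)^2 + 0.5*(z - b)^2  subject to  x = z
# by alternating exact minimizations in x and z, then updating the
# scaled multiplier u. The constrained optimum is x = z = (a + b) / 2.
a, b, rho = 0.0, 4.0, 1.0
x = z = u = 0.0
for _ in range(200):
    x = (a + rho * (z - u)) / (1.0 + rho)   # minimize over x with z fixed
    z = (b + rho * (x + u)) / (1.0 + rho)   # minimize over z with x fixed
    u += x - z                              # multiplier (dual) update
print(x, z)  # both converge to 2.0
```

The same three-phase loop, with the TV term in one block and the data-fit term in the other, is the shape of the TVAL3 iteration described above.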

3.4 Practical comparison

The comparisons between algorithms for total variation minimization are shown in figures 4, 5 and 6. TVAL3 is nearly three times as fast as NESTA and ten times faster than L1 Magic, so it is the obvious choice for well-conditioned problems. TVAL3's shortcoming lies in situations where the number of measurements is close to the information limit or where there is significant measurement noise (as described in section 4), where the quality of reconstruction decreases rapidly.

3.4.1 Comparison to ℓ1 algorithms

The most striking difference between ℓ1 and total variation algorithms is the number of measurements required to achieve good-quality reconstruction. While the values for ℓ2 error are similar in the examples shown here, the total variation algorithms need a smaller percentage of the measurements, and the reconstructed image is more similar to the original even when the ℓ2 error indicates the reconstruction qualities are comparable. The time required to reconstruct an image also shows consistent variation between the algorithm classes: total variation algorithms take approximately three times longer to reconstruct an image than ℓ1 methods, and profiling shows that nearly all the difference is due to calculating the total variation pseudo-norm, while the number of iterations is almost identical (between algorithms of similar structure).

Figure 4: Comparison of runtime for the algorithms presented.
(a) L1 Magic
(c) TVAL3
Figure 5: ℓ2 error for different images, for different numbers of measurements. Here we find that while the algorithms show identical results for well-posed problems, as the number of measurements decreases TVAL3 is less able to reconstruct the image. This will become more of an issue with noisy measurements, as shown in section 4. Note that while the error does decrease as the number of measurements grows, the difference is less pronounced. This effect can be seen clearly in the example images (Fig. 6).
(a) Image #1, reconstructed using NESTA
(b) Image #7, reconstructed using NESTA
Figure 6: Sample images. The visual quality is greatly improved over ℓ1 reconstruction with a fraction of the measurements, while the ℓ2 error is comparable. Some thoughts on this difference can be found in section 5.

4 Reconstruction under noise

Finally, we would like to present the effects of measurement noise on reconstruction quality, to complete the preparation for using compressive sensing in real-world systems. The model used here is additive white Gaussian noise with zero mean and standard deviation chosen to provide the desired SNR. While the low SNR values used might seem extreme, they were chosen to provide conditions similar to experimental data sets such as the ones in [11]. The number of measurements was chosen as a fixed fraction of the number of pixels for the same reason.
An interesting feature of noisy reconstruction not shown in the graphs is that reconstruction takes longer for higher SNR values. Here the length of the measurement vectors is fixed, so we conclude that without enough information to reconstruct the image correctly the algorithm "gives up" early. The relative times between different algorithms are similar to the ones without noise.
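Simulating this noise model is straightforward: choose the noise standard deviation from the signal's RMS power and the target SNR in dB. A minimal sketch (the 20 dB target and toy measurement vector are illustrative):

```python
import numpy as np

def noise_sigma(signal, snr_db):
    """Standard deviation of additive white Gaussian noise that yields
    the requested SNR (in dB) against the RMS power of the signal."""
    rms = np.sqrt(np.mean(signal ** 2))
    return rms / 10 ** (snr_db / 20.0)

rng = np.random.default_rng(0)
measurements = np.full(1000, 2.0)          # toy measurement vector, RMS = 2
sigma = noise_sigma(measurements, snr_db=20.0)
noisy = measurements + rng.normal(0.0, sigma, size=measurements.shape)

print(sigma)  # -> 0.2
```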

(b) TVAL3
Figure 7: ℓ2 error for different images, for different SNR values. There is a pronounced difference between NESTA and TVAL3 at very low SNR: while NESTA reconstructs an image that maintains the overall features and can be improved using post-processing tools, TVAL3 fails completely. This is crucial for experimental systems, and is illustrated in the example images (Fig. 8).
(a) Image #1, reconstructed using NESTA
(b) Image #1, reconstructed using TVAL3
Figure 8: Image 1, reconstructed using different algorithms. At high SNR both algorithms perform well. At low SNR, both algorithms perform poorly (as expected), but NESTA better maintains the major features, allowing for improvement via post-processing.

5 Summary

Our first conclusion is that total variation minimization is more appropriate for natural images than ℓ1 minimization under a sparsifying basis. Beyond the experimental data, we justify this with the following intuition: in both cases (TV and ℓ1 under a wavelet transform) the vector we reconstruct is not mathematically sparse; rather, many of its entries are very small, and the reconstruction takes them to be zero. Since wavelet transforms are global operations, this leaves interference patterns that require more measurements to eliminate. Under total variation minimization, by contrast, the zeroed elements are local, leading to image distortions that have a smaller ℓ2 distance from the original and are less noticeable to the eye.
Of the algorithms described here, two stand out: NESTA and TVAL3. Both are fast and efficient, and both use only matrix-vector products, allowing them to leverage efficiently calculable bases (such as Hadamard or Dragon wavelets). This difference is especially significant as the problem scales to mega-pixel images, where speed and memory requirements become prevalent concerns. Both algorithms are also robust: their performance is not dependent on the exact values of the tunable input parameters and is only weakly dependent on the image provided. For experimental uses the trade-off between noise tolerance and speed is significant, and will ultimately decide which algorithm is appropriate on an application-specific basis.

6 Future work

A major point left out in this work is comparison of different sampling bases, such as can be found in [14]. The choice of Dragon noiselets is based on ease of use, and as such was never fully quantified. Even within the family of self-similar noiselets, scrambled Hadamard sampling vectors present superior memory performance and are nearly as easy to implement. Additionally, we do not discuss parallelization or other recent optimization techniques, which would surely play a part in any industrial-scale application.
Finally, while this paper was written using insights gained from real-world data, its scope does not extend far enough to contain a quantitative discussion of reconstruction of images outside the virtual realm.

7 Acknowledgments

We would like to thank Yair Weiss for the background and guidance enabling this work, and Hagai Eisenberg from the Racah Institute of Physics for providing the framework and resources to apply these conclusions to real data.


  • [1] Jonathan Barzilai and Jonathan M Borwein. Two-point step size gradient methods. IMA Journal of Numerical Analysis, 8(1):141–148, 1988.
  • [2] Stephen Becker, Jérôme Bobin, and Emmanuel J Candès. Nesta: a fast and accurate first-order method for sparse recovery. SIAM Journal on Imaging Sciences, 4(1):1–39, 2011.
  • [3] Emmanuel Candes and Justin Romberg. Sparsity and incoherence in compressive sampling. Inverse problems, 23(3):969, 2007.
  • [4] Emmanuel J Candès and Michael B Wakin. An introduction to compressive sampling. IEEE signal processing magazine, 25(2):21–30, 2008.
  • [5] Roni Feldman, Yair Weiss, and Yonina C Eldar. Power-efficient cameras using natural image statistics. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 36–44, 2016.
  • [6] Gregory A Howland. Compressive sensing for quantum imaging. PhD thesis, University of Rochester, 2014.
  • [7] Chengbo Li, Wotao Yin, Hong Jiang, and Yin Zhang. An efficient augmented lagrangian method with applications to total variation minimization. Computational Optimization and Applications, 56(3):507–530, 2013.
  • [8] Yurii Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k²). In Doklady AN SSSR, volume 269, pages 543–547, 1983.
  • [9] Robert D Nowak, Stephen J Wright, et al. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE Journal of selected topics in signal processing, 1(4):586–597, 2007.
  • [10] Encyclopedia of Mathematics. Heisenberg representation. (Accessed: 22 November 2018).
  • [11] Yoni Sher, Lior Cohen, Daniel Istrati, and Hagai S Eisenberg. Low intensity lidar using compressed sensing and a photon number resolving detector. In Emerging Digital Micromirror Device Based Systems and Applications X, volume 10546, page 105460J. International Society for Optics and Photonics, 2018.
  • [12] J Tropp and Anna C Gilbert. Signal recovery from partial information via orthogonal matching pursuit, 2005.
  • [13] Joel A Tropp and Anna C Gilbert. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Transactions on information theory, 53(12):4655–4666, 2007.
  • [14] Zhongmin Wang and Gonzalo R Arce. Variable density compressed image sampling. IEEE Transactions on image processing, 19(1):264–270, 2010.
  • [15] Junfeng Yang and Yin Zhang. Alternating direction algorithms for ℓ1-problems in compressive sensing. SIAM Journal on Scientific Computing, 33(1):250–278, 2011.

Appendix A Images used for comparison

All the images used in the paper are included here, with the identifying numbering used above. Some features of the reconstruction are easy to see: e.g., image 32 (Figure 40) is random noise and the reconstruction error is high, as expected. Image 16 (Figure 24) also has high information content in the TV sense because of its sharp edges. The code used for this paper is available on the author's website.

Figure 9: Image #1
Figure 10: Image #2
Figure 11: Image #3
Figure 12: Image #4
Figure 13: Image #5
Figure 14: Image #6
Figure 15: Image #7
Figure 16: Image #8
Figure 17: Image #9
Figure 18: Image #10
Figure 19: Image #11
Figure 20: Image #12
Figure 21: Image #13
Figure 22: Image #14
Figure 23: Image #15
Figure 24: Image #16
Figure 25: Image #17
Figure 26: Image #18
Figure 27: Image #19
Figure 28: Image #20
Figure 29: Image #21
Figure 30: Image #22
Figure 31: Image #23
Figure 32: Image #24
Figure 33: Image #25
Figure 34: Image #26
Figure 35: Image #27
Figure 36: Image #28
Figure 37: Image #29
Figure 38: Image #30
Figure 39: Image #31
Figure 40: Image #32
Figure 41: Image #33
Figure 42: Image #34
Figure 43: Image #35