Fast Binary Compressive Sensing via ℓ_0 Gradient Descent

01/30/2018 ∙ by Tianlin Liu, et al. ∙ Jacobs University Bremen

We present a fast Compressive Sensing algorithm for the reconstruction of {0,1}-valued binary signals from their linear measurements. The proposed algorithm minimizes a non-convex penalty function given by a weighted sum of smoothed ℓ_0 norms under the [0,1] box constraint. It is experimentally shown that the proposed algorithm is not only significantly faster than linear-programming-based convex optimization algorithms, but also achieves a better recovery performance under several different metrics.

I Introduction

Compressive Sensing (CS) is a signal processing technique that aims to reconstruct signals from a relatively small number of measurements. It has been shown that sparse signals can be reconstructed at a sampling rate far below the Nyquist rate by exploiting their sparsity [1].

In this paper, we focus on Binary Compressive Sensing (BCS), which restricts the signals of interest to binary {0,1}-valued signals. Such signals are widely used in engineering applications, such as fault detection [2], single-pixel image reconstruction [3], and digital communications [4]. Related works are as follows: Nakarmi and Rahnavard [5] designed a sensing matrix tailored for binary signal reconstruction. Wang et al. [6] combined the ℓ_1 norm with the ℓ_∞ norm to reconstruct sparse binary signals. Nagahara [7] exploited a sum of weighted ℓ_1 norms to effectively reconstruct signals whose entries are integer-valued and, in particular, binary signals and bitonal images. Keiper et al. [8] analyzed the phase transition of binary Basis Pursuit.

We note that most of the previous work on BCS is based on convex optimization. Indeed, convex-optimization-based algorithms allow performance guarantees via rich mathematical tools. However, they are notoriously slow in large-scale applications compared to greedy methods such as Orthogonal Matching Pursuit (OMP) [9]. On the other hand, greedy methods like OMP are fast but often have a worse recovery rate than convex optimization methods. In this work, we propose a fast BCS algorithm with a high recovery rate. Taking the binariness of the signals into account, our algorithm is a gradient descent method based on the smoothed ℓ_0 norm [10]. Through numerical experiments, we show that the proposed algorithm compares favorably against previously proposed CS and BCS algorithms in terms of recovery rate and speed.

The rest of the paper is organized as follows. We give a short review of CS/BCS algorithms in Section II and present our algorithm in Section III. In Section IV, we present experimental results comparing the performance of the proposed algorithm with that of other algorithms. We conclude the paper with some remarks in Section V.

Notations: For a vector x = (x_1, …, x_N) ∈ ℝ^N and 1 ≤ p < ∞, the ℓ_p norm of x is denoted by ‖x‖_p := (Σ_{n=1}^N |x_n|^p)^{1/p}; for p = ∞, let ‖x‖_∞ := max_{1≤n≤N} |x_n|. The number of non-zero entries of x is denoted by ‖x‖_0. The probability of an event E is denoted by P[E]. We denote by 1_N the N-dimensional vector with all entries equal to 1.

II Binary Compressive Sensing (BCS)

In the standard CS scheme, one aims to recover a sparse signal x ∈ ℝ^N from its linear measurements. The constraints posed by the measurements can be formulated as

Ax = b,    (1)

where A ∈ ℝ^{M×N}, M < N, is the measurement matrix and b ∈ ℝ^M is the measurement of a sparse signal x ∈ ℝ^N. CS algorithms exploit the fact that x is sparse and seek a sparse solution satisfying (1).

The BCS scheme considers binary signals x ∈ {0,1}^N. Note that a binary signal x ∈ {0,1}^N is sparse if and only if its complementary binary signal x̃ := 1_N − x is dense, i.e., x̃ is almost fully supported. As the measurement matrix A is known, the equation (1) converts equivalently to

A x̃ = b̃,    (2)

where x̃ = 1_N − x and b̃ = A 1_N − b. This shows that reconstructing a sparse binary signal under the constraint (1) is equivalent to reconstructing a dense binary signal under the constraint (2). For this reason, in contrast to the case of generic signals, binary signals that are dense can be recovered as well as those that are sparse.
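To make the equivalence concrete, the following NumPy sketch (with illustrative sizes and variable names) verifies that the complementary signal satisfies (2):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 20, 40
A = rng.standard_normal((M, N))            # measurement matrix
x = (rng.random(N) < 0.9).astype(float)    # a dense binary signal (mostly ones)
b = A @ x                                  # measurements of the dense signal

# Complementary sparse problem: A @ x_tilde = b_tilde with x_tilde = 1_N - x
x_tilde = 1.0 - x
b_tilde = A @ np.ones(N) - b
assert np.allclose(A @ x_tilde, b_tilde)   # constraint (2) holds
```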

Two types of models for binary signals have been considered in the literature (e.g., [11, 6, 7]): (i) x is a deterministic vector which is binary and sparse, i.e., most of its entries are 0 and only a few are 1; (ii) x is a random vector whose entries are independent and identically distributed (i.i.d.) with probability distribution

P[x_n = 1] = p  and  P[x_n = 0] = 1 − p,  n = 1, …, N,

for some fixed 0 ≤ p ≤ 1. If p is small, a realization of x is likely a sparse binary signal.

In this work, we shall consider the second model, which can accommodate dense binary signals as well as sparse binary signals.

Below we give a short review of CS/BCS methods that are related to our work.

II-A ℓ_0 Minimization (L0)

A naive approach to finding sparse solutions is the ℓ_0 minimization,

minimize ‖x‖_0 subject to Ax = b.    (P0)

This method works generally for continuous-valued signals that are sparse, i.e., signals whose entries are mostly zero. However, solving the ℓ_0 minimization requires a combinatorial search and is therefore NP-hard [12].
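To see why, note that an exact ℓ_0 solver must search over candidate supports; the following toy sketch (a hypothetical helper, not a practical algorithm) makes the exponential cost explicit:

```python
import numpy as np
from itertools import combinations

def l0_min_bruteforce(A, b, tol=1e-9):
    """Exact l0 minimization by exhaustive search over supports.
    The cost grows as sum_k C(N, k): infeasible beyond tiny N."""
    M, N = A.shape
    for k in range(N + 1):                       # try sparsity levels 0, 1, 2, ...
        for support in combinations(range(N), k):
            S = list(support)
            if not S:                            # empty support: solution x = 0
                if np.linalg.norm(b) < tol:
                    return np.zeros(N)
                continue
            # least-squares fit restricted to the candidate support
            xs, *_ = np.linalg.lstsq(A[:, S], b, rcond=None)
            if np.linalg.norm(A[:, S] @ xs - b) < tol:
                x = np.zeros(N)
                x[S] = xs
                return x                         # sparsest consistent solution
    return None
```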

II-B Smoothed ℓ_0 Minimization (SL0)

Smoothed ℓ_0 minimization (SL0) [10] replaces the ℓ_0 norm in (P0) with a non-convex relaxation:

minimize Σ_{n=1}^N (1 − e^{−x_n²/(2σ²)}) subject to Ax = b.

This is motivated by the observation

lim_{σ→0} e^{−t²/(2σ²)} = 1 if t = 0, and 0 if t ≠ 0,

which implies that for any x ∈ ℝ^N,

‖x‖_0 = lim_{σ→0} Σ_{n=1}^N (1 − e^{−x_n²/(2σ²)}).    (3)

Noticing that x ↦ Σ_{n=1}^N (1 − e^{−x_n²/(2σ²)}) is a smooth function for any fixed σ > 0, Mohimani et al. [10] proposed an algorithm based on the gradient descent method. The algorithm iteratively obtains an approximate solution by decreasing σ.
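A minimal sketch of this scheme (with illustrative parameter values; not the reference implementation of [10]):

```python
import numpy as np

def sl0(A, b, c=0.7, J=30, L=5, mu=2.0):
    """Smoothed-l0 sketch: gradient steps on sum(1 - exp(-x^2/(2 sigma^2))),
    each followed by projection back onto the affine space {x : Ax = b}."""
    pinv = A.T @ np.linalg.inv(A @ A.T)   # used for the affine projection
    x = pinv @ b                          # minimum l2-norm solution as initial guess
    sigma = 2.0 * np.max(np.abs(x))
    for _ in range(J):                    # anneal sigma from coarse to fine
        for _ in range(L):
            grad = (x / sigma**2) * np.exp(-x**2 / (2 * sigma**2))
            x = x - mu * sigma**2 * grad          # descent step (scaled by sigma^2)
            x = x - pinv @ (A @ x - b)            # project onto Ax = b
        sigma *= c
    return x
```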

Mohammadi et al. [13] adapted the SL0 algorithm particularly to non-negative signals. Their algorithm, called the Constrained Smoothed ℓ_0 method (CSL0), incorporates the non-negativity constraint by introducing weight functions into the cost function. Empirically, CSL0 shows better performance than SL0 in the reconstruction of non-negative signals.

II-C Basis Pursuit (BP)

A well-known and by now standard relaxation of (P0) is the ℓ_1 minimization, also known as Basis Pursuit (BP) [14]:

minimize ‖x‖_1 subject to Ax = b.    (P1)

Similar to (P0), this method works generally for continuous-valued signals that are sparse.

II-D Boxed Basis Pursuit (Boxed BP)

Donoho et al. [11] proposed the Boxed Basis Pursuit (Boxed BP) for the reconstruction of k-simple bounded signals:

minimize ‖x‖_1 subject to Ax = b and x ∈ [0,1]^N.

The intuition behind Boxed BP is straightforward: the ℓ_1 norm minimization promotes sparsity of the solution, while the restriction x ∈ [0,1]^N reduces the set of feasible solutions. Recently, Keiper et al. [8] analyzed the performance of Boxed BP for reconstructing binary signals.
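Because the box forces x ≥ 0, ‖x‖_1 reduces to the plain sum of entries, so Boxed BP is a linear program; a minimal sketch using SciPy's linprog (problem data A, b assumed given):

```python
import numpy as np
from scipy.optimize import linprog

def boxed_bp(A, b):
    """Boxed BP: minimize sum(x) subject to Ax = b, 0 <= x <= 1.
    (On the box [0,1]^N, the l1 norm equals the sum of entries.)"""
    N = A.shape[1]
    res = linprog(c=np.ones(N), A_eq=A, b_eq=b, bounds=[(0.0, 1.0)] * N)
    return res.x if res.success else None
```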

II-E Sum of Norms (SN)

Wang et al. [6] introduced the following optimization problem, which combines the ℓ_1 and ℓ_∞ norms:

minimize ‖x‖_1 + λ‖x‖_∞ subject to Ax = b.

Minimizing ‖x‖_1 promotes sparsity of x, while minimizing ‖x‖_∞ forces the entries to be small and of equal magnitude (see Fig. 1). The two terms are balanced by a tuning parameter λ > 0.

Fig. 1: Left: the minimization of ‖x‖_1 finds sparse solutions. Right: the minimization of ‖x‖_∞ forces the entries to be small and of equal magnitude.

Fig. 2: The function f given in (4).

II-F Sum of Absolute Values (SAV)

Nagahara [7] proposed the following method for the reconstruction of discrete signals whose entries are chosen independently from a finite alphabet with an a priori known probability distribution. In the special case of binary signals, SAV is formulated as

minimize (1−p)‖x‖_1 + p‖x − 1_N‖_1 subject to Ax = b,

where p = P[x_n = 1], n = 1, …, N, is the probability distribution of the entries of x. If p ≈ 0, i.e., if x is sparse, then (1−p)‖x‖_1 + p‖x − 1_N‖_1 ≈ ‖x‖_1, so that SAV performs similarly to BP. We note that

(1−p)‖x‖_1 + p‖x − 1_N‖_1 = Σ_{n=1}^N f(x_n),

where

f(t) := (1−p)|t| + p|t − 1|.    (4)
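The penalty (4) is straightforward to evaluate; a two-line sketch:

```python
import numpy as np

def sav_penalty(t, p):
    """f(t) = (1-p)|t| + p|t-1|: small near 0 when p is small, small near 1 when p is large."""
    return (1 - p) * np.abs(t) + p * np.abs(t - 1)
```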

III Box-Constrained Sum of Smoothed ℓ_0 (BSSL0)

L0 and SL0 utilize the ℓ_0 norm and its smoothed version, respectively; however, they do not take into account that x is binary. On the other hand, Boxed BP, SN, and SAV utilize the ℓ_1 norm in one way or another and are specifically adjusted to the binary setting. A natural question arises: can we achieve a better recovery rate for binary signals by adjusting L0 and SL0 to the binary setting?

We note that Boxed BP takes into account the binariness of x by imposing the restriction x ∈ [0,1]^N. It is straightforward to apply this trick to L0 and SL0, and we will call the resulting algorithms Boxed L0 and Boxed SL0, respectively. Boxed L0 is still NP-hard like L0, but Boxed SL0 shows a clear improvement over SL0 while requiring a similar amount of run time (Fig. 3). However, the recovery rate of Boxed SL0 is significantly worse than that of Boxed BP or SN.

In this paper, we aim to adapt the SAV method and the restriction x ∈ [0,1]^N to SL0, in order to achieve a better performance. A straightforward adaptation leads to the following formulation. For small σ > 0,

minimize F_σ(x) subject to Ax = b,    (5)

where

F_σ(x) := Σ_{n=1}^N f_σ(x_n)  with  f_σ(t) := (1−p)(1 − e^{−t²/(2σ²)}) + p(1 − e^{−(t−1)²/(2σ²)}),    (6)

and p = P[x_n = 1]. Note that by (3), we have

lim_{σ→0} F_σ(x) = (1−p)‖x‖_0 + p‖x − 1_N‖_0,

so that (1−p)‖x‖_0 + p‖x − 1_N‖_0 can be approximated by F_σ(x) with small σ > 0.
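A quick numerical check of this approximation (a sketch; the test signal and the values of σ are arbitrary):

```python
import numpy as np

def f_sigma(t, p, sigma):
    """Smoothed penalty (6): weighted sum of two Gaussian bumps centered at 0 and 1."""
    return ((1 - p) * (1 - np.exp(-t**2 / (2 * sigma**2)))
            + p * (1 - np.exp(-(t - 1)**2 / (2 * sigma**2))))

x = np.array([0.0, 0.0, 1.0, 1.0, 1.0])
p = 0.6
weighted_l0 = (1 - p) * np.count_nonzero(x) + p * np.count_nonzero(x - 1)  # = 2.4
for sigma in (1.0, 0.1, 0.01):
    print(sigma, f_sigma(x, p, sigma).sum())   # approaches weighted_l0 = 2.4 as sigma -> 0
```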

Next, we will use a weight function to incorporate the restriction x ∈ [0,1]^N into the function F_σ. For integers m ≥ 1, let

g_m(t) := (|t| + |t − 1|)^m.

For σ > 0 and integers m ≥ 1, we define

F_{σ,m}(x) := Σ_{n=1}^N g_m(x_n) f_σ(x_n).

Note that since g_m(t) = 1 for all t ∈ [0,1] while g_m(t) grows with m for all t ∉ [0,1], minimizing F_{σ,m} forces g_m(x_n) to be small so that all x_n's lie within [0,1]. In this way, the restriction x ∈ [0,1]^N is incorporated into the cost function. Our optimization problem now reads as follows: For σ > 0 small and m ∈ ℕ large,

minimize F_{σ,m}(x) subject to Ax = b.

To solve this problem, we propose an algorithm based on the gradient descent method, implemented similarly to the algorithms in [10, 13]. A major difference in our algorithm is that the cost function of SL0 is replaced with F_{σ,m}, which is designed specifically for binary signals by adapting the formulation of SAV [7].

1: Data: Measurement matrix A, observation b, probability distribution prior p.
2: Parameters: J and L are the number of iterations in the outer and inner loops respectively, μ > 0 is a step-size parameter for gradient descent, and c ∈ (0,1) is a decreasing factor for σ.
3: Initialization: x ← A^T(AA^T)^{−1} b;  σ ← 2 max_n |x_n|;  m ← 1;
4: for j = 1, …, J do
5:     for ℓ = 1, …, L do
6:         x ← x − μσ² ∇F_{σ,m}(x);  % gradient descent
7:         x ← x − A^T(AA^T)^{−1}(Ax − b);  % projection
8:     end for
9:     σ ← c σ;
10:    m ← m + 1;
11: end for
12: x ← round(x);  % round to a binary vector
Algorithm 1 Box-Constrained Sum of Smoothed ℓ_0 (BSSL0)

The proposed algorithm is comprised of two nested loops. In the outer loop, we iteratively search for an optimal solution from a coarse to a fine scale by slowly decreasing σ by a factor of c ∈ (0,1). As σ decreases, we also gradually increase m so that a larger penalty is put on solutions that have entries outside the range [0,1]. The inner loop performs a gradient descent of L iterations for the function F_{σ,m}, where σ and m are given from the outer loop. In each iteration of the gradient descent, the solution is projected onto the set of feasible solutions {x ∈ ℝ^N : Ax = b}.

Numerical experiments in Section IV show that for binary signals the proposed algorithm outperforms all other algorithms (BP, Boxed BP, SN, SAV, SL0, and Boxed SL0).

As already mentioned, our algorithm is implemented similarly to SL0 [10, 13]. The parameters used in our algorithm are essentially the same as in [10], except for those pertaining to the weight function g_m. As justified in [10, Section IV-B], we set the initial estimate of x as the minimum ℓ_2 norm solution of Ax = b, i.e., x = A^T(AA^T)^{−1}b. The initialization value for σ is discussed in [10, Remark 5 in Section III]. Also, the choice of the step size μσ² for gradient descent is justified in [10, Remark 2 in Section III] and the choice of the weight function in [13, Lemma 1].

The gradient of F_{σ,m} used in Algorithm 1 is given by

∇F_{σ,m}(x) = (h_{σ,m}(x_1), …, h_{σ,m}(x_N))^T,

where

h_{σ,m}(t) := g'_m(t) f_σ(t) + g_m(t) f'_σ(t),
f'_σ(t) = (1−p) (t/σ²) e^{−t²/(2σ²)} + p ((t−1)/σ²) e^{−(t−1)²/(2σ²)},
g'_m(t) = m (|t| + |t − 1|)^{m−1} (sgn(t) + sgn(t − 1)).

This is derived using the fact that (d/dt)|t| = sgn(t) for all t except t = 0; we have set sgn(0) = 0 in the implementation. Let us point out that the discontinuity of sgn at t = 0 does not deteriorate the performance of gradient descent. One can replace the function sgn with a smooth function, however, at the cost of increased run time.
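For concreteness, a compact Python rendering of Algorithm 1 (a sketch based on the reconstructions above, not the authors' MATLAB implementation [15]; the default parameter values and the exact form of g_m are assumptions):

```python
import numpy as np

def grad_F(x, p, sigma, m):
    """Gradient of F_{sigma,m}(x) = sum g_m(x_n) f_sigma(x_n), with the
    reconstructed weight g_m(t) = (|t| + |t-1|)^m; np.sign(0) = 0 as in the text."""
    e0 = np.exp(-x**2 / (2 * sigma**2))
    e1 = np.exp(-(x - 1)**2 / (2 * sigma**2))
    f = (1 - p) * (1 - e0) + p * (1 - e1)                       # f_sigma
    fp = (1 - p) * (x / sigma**2) * e0 + p * ((x - 1) / sigma**2) * e1
    s = np.abs(x) + np.abs(x - 1)
    g = s**m                                                    # g_m
    gp = m * s**(m - 1) * (np.sign(x) + np.sign(x - 1))         # g_m'
    return gp * f + g * fp

def bssl0(A, b, p, J=30, L=5, mu=2.0, c=0.7):
    """Box-constrained sum of smoothed l0 (Algorithm 1 sketch)."""
    pinv = A.T @ np.linalg.inv(A @ A.T)
    x = pinv @ b                                  # minimum l2-norm initialization
    sigma = 2.0 * np.max(np.abs(x))
    m = 1
    for _ in range(J):                            # outer loop: shrink sigma, grow m
        for _ in range(L):                        # inner loop: gradient descent
            x = x - mu * sigma**2 * grad_F(x, p, sigma, m)
            x = x - pinv @ (A @ x - b)            # project onto {x : Ax = b}
        sigma *= c
        m += 1
    return np.rint(np.clip(x, 0.0, 1.0))          # round to a binary vector
```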

IV Numerical Experiments

In this section, we compare the performance of our algorithm BSSL0 with other CS/BCS algorithms described in Section II. The MATLAB codes for the experiments are available in [15].

IV-A Experiment 1: Binary Sparse Signal Reconstruction

In this experiment, we tested BSSL0 on randomly generated binary signals and compared it with other CS/BCS algorithms. Random Gaussian matrices are considered for the measurement matrix A ∈ ℝ^{M×N}, that is, all entries of A are drawn independently from the standard normal distribution. The parameter p is varied over a grid of values, and for each p a binary signal x ∈ {0,1}^N is generated by drawing its entries independently with P[x_n = 1] = p and P[x_n = 0] = 1 − p. For fixed M and N, we compute the measurement vector b = Ax and run the respective algorithms introduced in Section II (BP, Boxed BP, SN, SAV, SL0, Boxed SL0, and BSSL0) to obtain a solution vector x̂ as an approximate reconstruction of x. Additionally, we consider the Orthogonal Matching Pursuit (OMP) [16], which is a fast greedy algorithm for sparse signal reconstruction. The following metrics are considered for the performance evaluation: (i) Failure of Perfect Reconstruction (FPR): FPR = 0 if x̂ = x (the signal is recovered perfectly) and FPR = 1 if x̂ ≠ x (recovery is not perfect); (ii) Noise-to-Signal Ratio (NSR): NSR = ‖x − x̂‖_2 / ‖x‖_2; (iii) run time. For each p, the experiment is repeated a number of times and the results are averaged. For SN, we set the tuning parameter λ as fine-tuned in [6]. For BSSL0, we fix the parameters J, L, μ, and c.
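A minimal harness for this setup might look as follows (a sketch: M, N, the p-grid, and the trial count are placeholders rather than the values used in the paper, and bssl0 refers to the sketch after Section III):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, trials = 50, 100, 20                         # placeholder problem sizes

for p in np.arange(0.05, 0.55, 0.05):
    fpr, nsr = 0.0, 0.0
    for _ in range(trials):
        A = rng.standard_normal((M, N))            # Gaussian measurement matrix
        x = (rng.random(N) < p).astype(float)      # Bernoulli(p) binary signal
        b = A @ x
        x_hat = bssl0(A, b, p)                     # solver sketched in Section III
        fpr += float(not np.array_equal(x_hat, x))           # failure of perfect reconstruction
        nsr += np.linalg.norm(x - x_hat) / max(np.linalg.norm(x), 1e-12)
    print(f"p={p:.2f}  FPR={fpr/trials:.2f}  NSR={nsr/trials:.3f}")
```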

Fig. 3: Results for Experiment 1.

In Fig. 3, BSSL0 shows a better recovery rate than the other CS/BCS algorithms, with a run time comparable to that of SL0.

IV-B Experiment 2: Bitonal Image Reconstruction

As in [7], we considered the reconstruction of the bitonal image given in Fig. 4 (left). Following the same setup as in [7], we added to each pixel a mean-zero random Gaussian noise, as shown in Fig. 4 (right).

Fig. 4: Original image (left) and the image corrupted by Gaussian noise (right).

The noisy image is represented by a real-valued matrix X, and we apply the discrete Fourier transform (DFT) to obtain Ẑ = F X F; equivalently, ẑ = (F ⊗ F) z, where z = vec(X) and F is the n-point DFT matrix, n being the side length of the image. As in [7], we randomly subsampled ẑ to obtain a half-sized vector b and set the measurement matrix A as the corresponding submatrix of F ⊗ F. Fig. 5 shows the images reconstructed by BP, SN, SAV, and BSSL0, all with entrywise rounding to {0,1}. For SN, an optimal tuning parameter λ was searched over a grid of values and the best-performing one was chosen. For SAV and BSSL0, as in [7], we chose the parameter p as a rough estimate of the sparsity of the bitonal image (see [7]). The parameters J, L, μ, and c of BSSL0 were set similarly to Experiment 1. The respective run times for BP, SN, SAV, and BSSL0 are given in Tab. I.
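A sketch of this measurement setup (the image side length and the bitonal test pattern are placeholders; scipy.linalg.dft supplies the DFT matrix):

```python
import numpy as np
from scipy.linalg import dft

rng = np.random.default_rng(0)
n = 16                                         # placeholder image side length
X = (rng.random((n, n)) < 0.2).astype(float)   # toy bitonal image
z = X.flatten()                                # z = vec(X), N = n*n pixels

F = dft(n)                                     # n-point DFT matrix
F2 = np.kron(F, F)                             # 2-D DFT as a Kronecker product
rows = rng.choice(n * n, size=n * n // 2, replace=False)  # keep half the frequencies
A = F2[rows, :]                                # measurement matrix: submatrix of F ⊗ F
b = A @ z                                      # subsampled DFT measurements
assert np.allclose(b, (F @ X @ F).flatten()[rows])  # matches the 2-D transform
```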

Fig. 5: Reconstructed images by BP (upper left), SN (upper right), SAV (lower left), and the proposed method BSSL0 (lower right).
Algorithm         | Run Time
Basis Pursuit     | 185.2044 seconds
SN                | 406.1007 seconds
SAV               | 191.5366 seconds
BSSL0 (proposed)  | 0.92577 seconds
TABLE I: Run time comparison.

V Conclusion

In this work, we proposed a fast algorithm (BSSL0) for the reconstruction of binary signals, based on the gradient descent method and smooth relaxation techniques. We showed that for binary signals our algorithm outperforms other CS/BCS methods in terms of recovery rate and speed. Future work includes a detailed analysis of the stability and robustness of BSSL0 and extensions to ternary and general finite-alphabet signals.

Acknowledgment

T. Liu and D. G. Lee acknowledge the support of the DFG Grant PF 450/6-1. The authors are grateful to Robert Fischer and Götz E. Pfander for their helpful suggestions. The authors thank anonymous reviewers for their comments.

References

  • [1] D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
  • [2] D. Bickson, D. Baron, A. Ihler, H. Avissar, and D. Dolev, “Fault identification via nonparametric belief propagation,” IEEE Transactions on Signal Processing, vol. 59, no. 6, pp. 2602–2613, 2011.
  • [3] M. F. Duarte, M. A. Davenport, D. Takbar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 83–91, 2008.
  • [4] K. Wu and X. Guo, “Compressive sensing of digital sparse signals,” in Proc. IEEE Wireless Communications and Networking Conference (WCNC), 2011, pp. 1488–1492.
  • [5] U. Nakarmi and N. Rahnavard, “BCS: Compressive sensing for binary sparse signals,” in Proc. IEEE Military Communications Conference, 2012, pp. 1–5.
  • [6] S. Wang and N. Rahnavard, “Binary compressive sensing via sum of ℓ_1-norm and ℓ_∞-norm regularization,” in Proc. IEEE Military Communications Conference, 2013, pp. 1616–1621.
  • [7] M. Nagahara, “Discrete signal reconstruction by sum of absolute values,” IEEE Signal Processing Letters, vol. 22, no. 10, pp. 1575–1579, 2015.
  • [8] S. Keiper, G. Kutyniok, D. G. Lee, and G. E. Pfander, “Compressed sensing for finite-valued signals,” Linear Algebra and its Applications, 2017.
  • [9] D. L. Donoho and Y. Tsaig, “Fast solution of ℓ_1-norm minimization problems when the solution may be sparse,” IEEE Transactions on Information Theory, vol. 54, no. 11, pp. 4789–4812, 2008.
  • [10] H. Mohimani, M. Babaie-Zadeh, and C. Jutten, “A fast approach for overcomplete sparse decomposition based on smoothed ℓ_0 norm,” IEEE Transactions on Signal Processing, vol. 57, no. 1, pp. 289–301, 2009.
  • [11] D. L. Donoho and J. Tanner, “Precise undersampling theorems,” Proceedings of the IEEE, vol. 98, no. 6, pp. 913–924, 2010.
  • [12] B. K. Natarajan, “Sparse approximate solutions to linear systems,” SIAM Journal on Computing, vol. 24, no. 2, pp. 227–234, 1995.
  • [13] M. Mohammadi, E. Fatemizadeh, and M. H. Mahoor, “Non-negative sparse decomposition based on constrained smoothed ℓ_0 norm,” Signal Processing, vol. 100, pp. 42–50, 2014.
  • [14] S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM Review, vol. 43, no. 1, pp. 129–159, 2001.
  • [15] T. Liu and D. G. Lee, “MATLAB codes for the BSSL0 (Box-Constrained Sum of Smoothed ℓ_0) algorithm,” https://github.com/liutianlin0121/BSSL0, 2018.
  • [16] J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.