Deep Learning Sparse Ternary Projections for Compressed Sensing of Images

08/28/2017 · by Duc Minh Nguyen, et al.

Compressed sensing (CS) is a sampling theory that allows reconstruction of sparse (or compressible) signals from an incomplete number of measurements, using a sensing mechanism implemented by an appropriate projection matrix. The CS theory is based on random Gaussian projection matrices, which satisfy recovery guarantees with high probability; however, sparse ternary {0, −1, +1} projections are more suitable for hardware implementation. In this paper, we present a deep learning approach to obtain very sparse ternary projections for compressed sensing. Our deep learning architecture jointly learns a pair of a projection matrix and a reconstruction operator in an end-to-end fashion. The experimental results on real images demonstrate the effectiveness of the proposed approach compared to state-of-the-art methods, with significant advantage in terms of complexity.


1 Introduction

Compressed sensing or compressive sampling (CS) is a theory [1, 2] that merges compression and acquisition, exploiting sparsity to recover signals that have been sampled at a drastically lower rate than what the Shannon/Nyquist theorem imposes. The results of CS have an important impact on numerous signal processing applications, including the efficient processing and analysis of high-dimensional data such as audio [3], images [4] and video [5].

Assume a finite-length, real-valued signal x ∈ ℝ^N. CS yields a compressed representation of the treated signal using a sensing mechanism that is realized by a sensing or projection matrix. The linear measurement process is described by

y = Φx,   (1)

where Φ ∈ ℝ^{M×N}, M < N, is the projection matrix and y ∈ ℝ^M is the vector containing the obtained measurements. In CS, we assume that either x is a sparse signal or that x has a sparse representation with respect to a suitable basis Ψ ∈ ℝ^{N×N}, that is, x = Ψs, s ∈ ℝ^N, ‖s‖₀ ≤ K, where ‖·‖₀ is the ℓ₀ quasi-norm counting the non-vanishing coefficients of the treated signal. Therefore, we obtain the underdetermined linear system

y = ΦΨs.   (2)

A sparse vector satisfying (2) can be obtained as the solution of the ℓ₀-minimization problem

min_s ‖s‖₀  subject to  y = ΦΨs,   (3)

employing well-known algorithms like Basis Pursuit [6].
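The measurement model (1) and a Basis Pursuit recovery can be sketched in a few lines of numpy/scipy. This is an illustrative toy (sizes, seed and the ℓ₁ relaxation of (3) are our choices, not the paper's): Basis Pursuit replaces the ℓ₀ norm with the ℓ₁ norm, which turns (3) into a linear program after splitting s = u − v with u, v ≥ 0.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, M, K = 40, 20, 2          # signal length, measurements, sparsity

# Ground-truth K-sparse signal and a random Gaussian projection matrix
s_true = np.zeros(N)
s_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
A = rng.standard_normal((M, N)) / np.sqrt(M)
y = A @ s_true               # compressed measurements, Eq. (1) with Psi = I

# Basis Pursuit: min ||s||_1  s.t.  A s = y, as an LP over (u, v) with s = u - v
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
s_hat = res.x[:N] - res.x[N:]

print(np.max(np.abs(s_hat - s_true)))   # error close to zero (exact recovery w.h.p.)
```

With enough Gaussian measurements relative to the sparsity, the ℓ₁ solution coincides with the sparsest one with high probability, which is what makes the relaxation practical.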

Conventional CS theory is based on random Gaussian or random Bernoulli matrices, which can be used to recover an N-dimensional K-sparse signal, provided that the number of measurements is M = O(K log(N/K)) [7]. An important issue when considering random matrices is that such matrices are typically difficult to build in hardware. The difficulty in storing these matrices and certain physical constraints on the measurement process make it challenging to realize CS in practice. Moreover, when multiplying arbitrary dense matrices with high-dimensional signal vectors, the lack of any fast matrix multiplication algorithm results in high computational cost.

Deep learning is an emerging field that learns multiple levels of representation of data and has been used successfully in image processing tasks. Existing work has been presented for image super-resolution [8], image denoising [9] and compressed sensing [10, 11]. Deep learning has also been applied to distributed CS [12], quantized CS [13] and video CS [14].

In this paper, we adopt a deep learning approach to learn an optimized projection matrix and a non-linear reconstruction mapping from the measurements to the original signal. In order to design projections suitable for hardware implementation, we focus on sparse matrices with elements in {0, −1, +1}, imposing sparsity and binary constraints on the proposed network architecture. The network is trained on image patches and the learned projection matrix is used to acquire images in a block-based manner. Our experimental results show that high-quality reconstruction can be achieved with projections containing only 5% nonzero entries.

The rest of the paper is organized as follows. In Section 2 we review related work in CS and deep learning. Section 3 includes the proposed approach and Section 4 our experimental results. Conclusions are drawn in Section 5.

2 Related Work and Our Contributions

2.1 Compressed Sensing

While compressed sensing was introduced utilizing random projection matrices, newer results concern matrices that are more efficient than random projections, leading to fewer necessary measurements or improved reconstruction performance [15, 16]. Alternative studies have proposed designs that leverage prior knowledge on the signal [17], akin to the theory of compressed sensing with prior information [18]. Important research directions focus on the hardware implementation of CS. In order to achieve efficient storage and fast encoding and decoding, structured matrices have been proposed [19, 20]. Unfortunately, these constructions come at the cost of poor recovery conditions. Binary and ternary matrices [21, 22, 23] yield fast computations; however, most of the proposed constructions are deterministic and impose restrictions on the matrix dimensions. Moreover, as proven in [24], LDPC-like sparse matrices may perform well as long as they are not too sparse.

Our approach addresses both hardware limitations and recovery requirements. The proposed projections can be extremely sparse (only 5% of their elements are nonzero), allowing highly efficient storage, and have ternary entries, yielding fast computations during acquisition (the source code of the proposed method is available at https://github.com/nmduc/deep-ternary). While such a sparse projection matrix would lead to unacceptable recovery performance when combined with conventional reconstruction algorithms, the joint optimization of the sensing mechanism and the reconstruction process yields state-of-the-art results in image recovery.
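To see why such projections are storage- and computation-friendly, consider a sketch (sizes and density are illustrative, and the matrix below is random, not the learned one): a {0, −1, +1} matrix with 5% nonzeros needs only the positions and signs of its nonzero entries, and acquisition reduces to additions and subtractions of input samples.

```python
import numpy as np
from scipy.sparse import csc_matrix

rng = np.random.default_rng(1)
N, M, density = 1024, 256, 0.05
K = int(density * N)                    # nonzeros per measurement (here 51)

# Random sparse ternary projection: each row has K entries with signs in {-1, +1}
rows = np.repeat(np.arange(M), K)
cols = np.concatenate([rng.choice(N, K, replace=False) for _ in range(M)])
signs = rng.choice([-1.0, 1.0], size=M * K)
Phi = csc_matrix((signs, (rows, cols)), shape=(M, N))

x = rng.standard_normal(N)
y = Phi @ x                             # acquisition: only adds/subtracts of x entries

# Sanity check against the dense computation
assert np.allclose(y, Phi.toarray() @ x)

# Storage: positions + signs of the nonzeros instead of M*N real numbers
print(Phi.nnz, "nonzeros vs", M * N, "dense entries")
```

Storing the support and signs of 13056 entries replaces 262144 real-valued weights, which is the kind of saving a hardware sensing device can exploit.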

2.2 Deep Learning

The first work concerning recovery from compressed measurements via deep learning was presented in [10]. The authors use stacked denoising autoencoders to jointly train a non-linear sensing operator and a non-linear reconstruction mapping. While the reconstruction quality is comparable to state-of-the-art algorithms, the gain in reconstruction time is considerable. Deep learning for CS has also been employed in [11]. Our approach follows similar principles as [10, 11], but focuses on simplifying the projection matrix. While the projection matrices in [10, 11] are dense matrices with real-valued entries, we enforce the projection matrix to be a sparse matrix with elements in {0, −1, +1}, so that it can be efficiently implemented.

The proposed algorithm is based on recent work on simplifying deep neural networks for image classification tasks, in which deep networks have achieved great progress. In [25], neural networks are trained with binary weights in {−1, +1}. The study in [26] extends [25] to fully binary neural networks (BNNs), with binary weights and binary hidden-unit activations. A different technique [27] adds scaling factors to compensate for the loss introduced by weight binarization. Another direction in simplifying deep neural networks is to compress pre-trained networks. The method proposed in [28] learns not only weights but also connections, producing sparsely-connected networks. Extending [28], the work in [29] employs connection pruning, weight quantization and Huffman coding to compress deep neural networks.

Our algorithm incorporates ideas from both directions. Nevertheless, compared to existing methods, our novelty is twofold: (i) We propose a sparsifying technique on the weights implementing the sensing layer; combined with binarization techniques, our method yields a highly sparse ternary projection matrix. The learned projections can be stored efficiently and allow fast computations during acquisition; therefore, they are suitable for hardware implementation. (ii) We only simplify the first layer of our network, which corresponds to the linear projection matrix, and allow the reconstruction module to be non-linear in order to achieve high performance.

3 Learning Sparse Ternary Projections

Next, we present our proposed method for efficient compressed sensing of images. The section starts with a description of our network architecture, followed by our proposed training algorithm.

3.1 Network Architecture

Figure 1: Our network architecture: the left block corresponds to the sensing module, while the right block (dashed) corresponds to the reconstruction module. W_b denotes the sparse binary sensing weights and dᵢ denotes the number of units of layer i. White blocks denote linear layers, while shaded blocks denote non-linear layers.

Our network architecture is illustrated in Fig. 1. It consists of a sensing and a reconstruction module. The network takes vectorized image patches of dimension N as input. The sensing layer projects the N-dimensional input signal to the M-dimensional measurement domain; thus, the number of units in this layer is M = ⌈rN⌉, where r is the sensing rate. The sensing layer has weights W ∈ ℝ^{N×M}, corresponding to the projection matrix Φ = Wᵀ ∈ ℝ^{M×N}. In order to learn a simple linear projection matrix, we do not put a bias or a non-linear activation function in this layer.

The first layer in the reconstruction module is a scaling layer, which linearly scales the outputs of the sensing layer by learned factors α ∈ ℝ^M. This layer consists of M hidden units connected “1-1” to the hidden units of the sensing layer, and it is employed to compensate for the loss induced by binarizing the sensing weights, as explained in Sec. 3.2. Nevertheless, in deployment, only the projection matrix is implemented in sensing devices and the learned scaling factors are kept in the reconstruction module. The scaling layer is followed by a number of non-linear hidden layers, which employ the Rectified Linear Unit (ReLU) activation function [30]. The output layer is a linear fully connected layer with size equal to the input dimension. All layers in the reconstruction module, except for the scaling layer, are fully connected and followed by a batch-normalization layer [31].
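As an illustration, a minimal numpy sketch of the forward pass described above; layer sizes, the ternary matrix and all weights are random placeholders (not the learned ones), and batch normalization is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, H = 64, 16, 128               # input dim, measurements, hidden width

# Sensing layer: linear, no bias, no activation (sparse ternary weights)
W_b = rng.choice([-1.0, 0.0, 1.0], size=(M, N), p=[0.025, 0.95, 0.025])
# Reconstruction module: scaling layer + one ReLU hidden layer + linear output
alpha = rng.random(M) + 0.5         # per-unit scale factors (placeholder)
W1, b1 = rng.standard_normal((H, M)) * 0.1, np.zeros(H)
W2, b2 = rng.standard_normal((N, H)) * 0.1, np.zeros(N)

def reconstruct(x):
    y = W_b @ x                     # acquisition (done in the sensing device)
    z = alpha * y                   # scaling layer: "1-1" connections
    h = np.maximum(W1 @ z + b1, 0.0)  # ReLU hidden layer
    return W2 @ h + b2              # linear output, same size as the input

x = rng.standard_normal(N)
x_hat = reconstruct(x)
assert x_hat.shape == x.shape
```

Note that only `W_b` lives in the sensing device; everything from the scaling layer onward runs in the reconstructor.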

3.2 Training Algorithm

In general, our network training follows the standard mini-batch gradient descent method. Denote the input and reconstructed patches by xᵢ and x̂ᵢ, respectively, i = 1, …, P. We employ the mean squared error between the input and the reconstruction as our loss function:

L = (1/P) Σᵢ ‖xᵢ − x̂ᵢ‖₂²,   (4)

where P is the number of sample patches. In contrast to the conventional mini-batch gradient descent method, we introduce a sparsifying and a binarization step in the training of the sensing layer.

More particularly, we first sparsify the continuous-valued sensing weights W to obtain the sparse weights W_s. For this step, we propose to retain in W_s only the entries that correspond to the top-K largest absolute values in W and to set all remaining entries to zero. We refer to this procedure as the TopK function. The selection of the top-K weights can be applied column-wise, row-wise or over the whole matrix. Nevertheless, since the update of the scaling layer, presented below, involves a column-wise operation on the sensing weights, we opt to perform TopK in a column-wise manner. Each neuron in the sensing layer is thus connected to K = ⌈γN⌉ elements of the input signal, where γ is the sparsity ratio. Implementation-wise, we construct a sparse binary mask M with entries equal to 1 at the positions of the K largest-magnitude weights in each column of W. The sparse sensing weights are updated according to W_s = W ⊙ M, where ⊙ represents the Hadamard product.
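The column-wise top-K selection and masking step can be sketched in a few lines of numpy (names are illustrative; columns of the toy weight matrix play the role of sensing units):

```python
import numpy as np

def topk_mask(W, K):
    """Column-wise mask: 1 at the K largest-magnitude entries of each column."""
    idx = np.argsort(-np.abs(W), axis=0)[:K]     # row indices of the K largest |w|
    M = np.zeros_like(W)
    np.put_along_axis(M, idx, 1.0, axis=0)
    return M

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 4))                  # toy sensing weights
K = 2
M = topk_mask(W, K)
W_s = W * M                                      # Hadamard product
assert (M.sum(axis=0) == K).all()                # exactly K nonzeros per column
assert np.count_nonzero(W_s) == K * W.shape[1]
```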

The binarization step involves a mapping of the sparse continuous-valued weights W_s to sparse binary weights W_b. Nevertheless, both the sparsifying and the binarization step introduce some loss, which we want to recover during reconstruction. For this reason, the sensing layer is followed by a scaling layer similar to the one proposed in [27]. This layer, with weights α, serves as an inverse mapping of the sparse binarized weights W_b to the continuous weights W. Thus, the output of the scaling layer is an approximation of the measurements corresponding to the continuous projections W. Let wⁱ and w_bⁱ be the i-th columns of W and W_b, respectively; wⁱ corresponds to the dense continuous weights of the i-th hidden unit in the sensing layer. We approximate wⁱ with αᵢ w_bⁱ, where αᵢ is a scale factor corresponding to the i-th entry of the scaling weights α. The values of w_bⁱ and αᵢ can be determined by minimizing the following mean squared error with respect to w_bⁱ, αᵢ:

J(w_bⁱ, αᵢ) = ‖wⁱ − αᵢ w_bⁱ‖₂².   (5)

By expanding (5), we have:

J(w_bⁱ, αᵢ) = αᵢ² (w_bⁱ)ᵀ w_bⁱ − 2αᵢ (wⁱ)ᵀ w_bⁱ + c,   (6)

where c = (wⁱ)ᵀ wⁱ is a constant. As αᵢ is a positive scalar, following [27] but also taking into account our sparsity constraint, we obtain the optimal sparse binary vector as the solution to the problem:

w_bⁱ* = argmax (wⁱ)ᵀ w_b   subject to   supp(w_b) = supp(w_sⁱ),   (7)

where w_sⁱ is the i-th column of W_s, and supp(·) denotes the positions of the nonzero entries of the treated vector. The solution of (7) is the vector containing the signs of w_sⁱ on its support. After obtaining the optimal w_bⁱ*, we can solve for the optimal αᵢ by setting the derivative of (6) with respect to αᵢ equal to zero. Considering that (w_bⁱ*)ᵀ w_bⁱ* = K, the optimal αᵢ is given by:

αᵢ* = (wⁱ)ᵀ w_bⁱ* / K = ‖w_sⁱ‖₁ / K.   (8)

After the sparsifying and binarization steps, the resulting W_b is sparse and has K nonzero entries in {−1, +1} in each of its columns.
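A small numpy check of the closed-form scale in (8) (variable names mirror the text; this is an illustration, not the paper's code): for the support-constrained sign vector, the least-squares-optimal scale (wᵀw_b)/(w_bᵀw_b) coincides with ‖w_s‖₁ / K.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 32, 4

w = rng.standard_normal(N)                  # dense continuous column of W
support = np.argsort(-np.abs(w))[:K]        # top-K support (the TopK step)
w_s = np.zeros(N)
w_s[support] = w[support]

w_b = np.sign(w_s)                          # solution of the sign problem
alpha_ls = (w @ w_b) / (w_b @ w_b)          # least-squares optimal scale
alpha_closed = np.abs(w_s).sum() / K        # closed-form value
assert np.isclose(alpha_ls, alpha_closed)

# Scaling helps: alpha * w_b approximates w at least as well as w_b alone
assert np.linalg.norm(w - alpha_closed * w_b) <= np.linalg.norm(w - w_b)
```

The second assertion always holds, since the unscaled vector w_b is just the special case α = 1 of the minimization.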

Following existing training algorithms for networks with binary weights [25, 26, 27], we use the sparse binary weights W_b during forward and backward propagation. The high-precision weights W, on the other hand, are used during the parameter update, to accommodate the small changes of the weights after each update step. In our training, W is updated using the gradient of the loss function with respect to W_b. It should be noted that even though W_b contains only discrete weights in {−1, 0, +1}, the gradient of the loss with respect to it still lies in the continuous domain. Denoting by W, W_s and W_b the continuous, sparse and sparse binary sensing weights, respectively, by α the scaling layer's weights, by W_r the reconstruction weights, by a superscript i the i-th column, and by a subscript t the value at step t, our training is summarized in Algorithm 1.

Input: The patches x₁, …, x_P; the weights W_t and W_r,t; the learning rate η
Output: The loss L; the updated weights W_{t+1}, W_r,t+1; the sparse binary weights W_b,t; the scaling weights α_t

1: procedure Sparsify and binarize sensing weights
2:     W_s,t ← TopK(W_t)
3:     w_b,tⁱ ← sign(w_s,tⁱ), for i = 1, …, M
4:     α_t,i ← ‖w_s,tⁱ‖₁ / K, for i = 1, …, M
5:     W_b,t ← [w_b,t¹, …, w_b,t^M]
6: procedure Forward propagation
7:     compute the loss L in (4), using W_b,t, α_t and W_r,t
8: procedure Backward propagation
9:     compute ∂L/∂W_b,t and ∂L/∂W_r,t
10: procedure Parameter update
11:     W_{t+1} ← W_t − η ∂L/∂W_b,t
12:     W_r,t+1 ← W_r,t − η ∂L/∂W_r,t
Algorithm 1: The proposed training algorithm for the sparse ternary projection matrix and the reconstruction weights at step t.
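A compact numpy sketch of one training iteration in the spirit of Algorithm 1, on a deliberately simplified model: everything here is illustrative. The real reconstructor is the deep non-linear network of Sec. 3.1; below it is replaced by a single linear layer so that the gradients can be hand-derived, and the straight-through trick (gradient w.r.t. W_b applied to W) is the only part carried over faithfully.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, P = 32, 8, 64                     # input dim, measurements, batch size
gamma = 0.25
K = int(np.ceil(gamma * N))
eta = 0.01

W = rng.standard_normal((N, M)) * 0.1   # continuous sensing weights (columns = units)
W_r = rng.standard_normal((N, M)) * 0.1 # toy *linear* reconstructor

def sparsify_binarize(W):
    idx = np.argsort(-np.abs(W), axis=0)[:K]
    mask = np.zeros_like(W)
    np.put_along_axis(mask, idx, 1.0, axis=0)
    W_s = W * mask
    W_b = np.sign(W_s)                          # signs on the top-K support
    alpha = np.abs(W_s).sum(axis=0) / K         # closed-form scale factors
    return W_b, alpha

X = rng.standard_normal((N, P))                 # batch of vectorized patches
losses = []
for _ in range(50):
    W_b, alpha = sparsify_binarize(W)           # sparsify + binarize
    Y = W_b.T @ X                               # forward: sensing
    Z = alpha[:, None] * Y                      # forward: scaling layer
    X_hat = W_r @ Z                             # forward: toy reconstruction
    losses.append(np.sum((X_hat - X) ** 2) / P) # MSE loss
    E = 2.0 * (X_hat - X) / P                   # backward (hand-derived,
    g_Wr = E @ Z.T                              #  valid for this linear toy only)
    g_Y = (W_r.T @ E) * alpha[:, None]
    g_Wb = X @ g_Y.T                            # gradient w.r.t. the binary weights
    W -= eta * g_Wb                             # straight-through: applied to W
    W_r -= eta * g_Wr
```

Despite the non-differentiable sparsify/binarize step, the loss decreases over the iterations because the continuous weights W accumulate the small updates between sign flips.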

4 Experimental results

In order to evaluate the proposed algorithm, we carried out experiments in image recovery. We employed the ILSVRC2012 validation set [32], including 50K images, for training, and tested our model on two testing sets. The first testing set consists of 10 images taken from the ILSVRC2014 [32] dataset and is provided by the authors of [10]. The second testing set is composed of 50 images randomly selected from the LabelMe dataset [33]. All the images were converted to grayscale in our experiments. To reduce the computational overhead, we ran our experiments on small image patches; in total, we randomly sampled several million patches to form our training set. During training, the input patches were preprocessed by subtracting the mean and dividing by the standard deviation. We trained our network using Algorithm 1, with the Adam parameter update [34] and a step-wise decaying learning rate; the training samples were randomly shuffled after each epoch. To avoid over-fitting, we employed weight regularization on the reconstruction module. During the testing phase, we sampled overlapping patches from each test image and determined the final image reconstruction as the average of the patches' reconstructions. The methods are evaluated using PSNR values, expressed in dB. Concerning the network architecture, we empirically set the number of non-linear hidden layers and the number of their hidden units, since this configuration produces a good trade-off between training time and reconstruction quality.
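The block-based evaluation pipeline (overlapping patches, averaged reconstructions, PSNR in dB) can be sketched as follows; the patch size, stride and the identity "reconstructor" are placeholders, not the paper's settings:

```python
import numpy as np

def reconstruct_image(img, recon_fn, patch=8, stride=4):
    """Average overlapping patch reconstructions back into an image."""
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    weight = np.zeros_like(img, dtype=float)
    for r in range(0, H - patch + 1, stride):
        for c in range(0, W - patch + 1, stride):
            p = img[r:r+patch, c:c+patch].reshape(-1)      # vectorize patch
            p_hat = recon_fn(p).reshape(patch, patch)      # reconstruct it
            out[r:r+patch, c:c+patch] += p_hat
            weight[r:r+patch, c:c+patch] += 1.0
    return out / np.maximum(weight, 1.0)                   # average overlaps

def psnr(ref, rec, peak=255.0):
    mse = np.mean((ref.astype(float) - rec) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else np.inf

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32)).astype(float)
rec = reconstruct_image(img, recon_fn=lambda p: p)  # identity: perfect recovery
assert np.allclose(rec, img)
```

Averaging the overlapping reconstructions suppresses blocking artifacts at patch boundaries, which is why a stride smaller than the patch size is used at test time.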

Sensing rate
First, we experiment with different sensing rates: we fix the sparsity ratio γ and vary the sensing rate r. The mean PSNR values on the first testing set are shown in Table 1. As shown in the table, the overall reconstruction quality improves with larger sensing rates, since more information from the signal is retained in the measurements.

PSNR (dB), for increasing r: 25.24, 26.11, 26.65, 27.52, 27.96
Table 1: Reconstruction performance when varying the sensing rate r (fixed γ).


Sparsity ratio
Next, we experiment with different sparsity ratios, fixing the sensing rate r and varying γ. The mean PSNR values on the first testing set are presented in Table 2; the number of nonzero entries in each column of the projection matrix is K = ⌈γN⌉, where N is the input dimension. As can be seen, acceptable reconstruction performance can be achieved using extremely sparse projection matrices with only a small percentage of nonzero entries. Increasing γ from the smallest tested value considerably improves the performance at first, after which the differences become negligible; the network reaches its peak performance at an intermediate γ and performs slightly worse for larger values. For the smallest tested ratios, the number of nonzero entries in the projection matrix is not enough to fully cover the N-dimensional input signal; we argue that this is the reason for the noticeable performance jump when γ first increases. During training, the network experiences over-fitting with large γ, which explains why an intermediate γ gives better performance than the largest tested values. As a result, the proposed sparse binary constraint can be considered as an extra regularizer for the network.

PSNR (dB), for increasing γ: 25.83, 26.96, 26.98, 27.61, 27.52, 27.40, 27.37
Table 2: Reconstruction performance when varying the sparsity ratio γ (fixed r).


Comparison with state of the art
As the proposed algorithm implements CS via deep learning, the next experiment involves a comparison with the method of [10], which employs a stacked denoising autoencoder to jointly learn the sensing layer and the reconstructor. We select the best algorithm from [10], referred to as O-NL-SDA, for our comparison. This algorithm uses a non-linear sensing mechanism with overlapping image patches. The results of O-NL-SDA on the first testing set are taken from [10]. To obtain the results on the second testing set, we trained an O-NL-SDA model on our training set using the configurations proposed in [10]. Results obtained with a conventional reconstruction algorithm, namely Basis Pursuit (BP) [6], using random ternary projections are also presented. Sparse binary and ternary constructions like the ones proposed in [22, 23] could not be employed in our experiments due to the constraints they impose on the matrix dimensions. To have a fair comparison with [10], we use the same sensing rate. We choose a sparsity ratio of γ = 0.05 for the proposed method, since it yields the best performance while producing a highly sparse projection matrix. The comparison between the selected methods on the first testing set is shown in Table 3. On the second testing set, the same ranking was observed, with the proposed algorithm achieving the highest mean PSNR, followed by O-NL-SDA [10] and BP [6].

O-NL-SDA [10] BP [6] Proposed
Damselfly 30.85 26.49 31.56
Birds 26.62 22.75 27.17
Rabbit 27.24 22.63 27.89
Turtle 34.65 26.31 35.59
Dog 21.55 16.97 22.33
Eagle Ray 26.57 22.65 27.20
Boat 33.11 25.48 33.94
Monkey 30.32 23.51 30.70
Panda 21.00 17.85 21.53
Snake 17.71 14.67 18.23
Mean PSNR 26.96 21.93 27.61
Table 3: Reconstruction performance (PSNR) of different algorithms on the first testing set.

As can be seen, the proposed algorithm yields significantly better results than the conventional reconstruction with BP [6] and ternary but non-sparse projections. Despite using a sparse ternary matrix with only 5% nonzero entries, our method outperforms O-NL-SDA [10] in terms of recovery performance. Concerning the speed of the reconstructor, as reported in [10], a reconstructor implemented as a feed-forward neural network can perform orders of magnitude faster than a convex optimization solver. Clearly, our method can provide a convenient hardware implementation of the sensing mechanism and a fast reconstructor with better reconstruction quality than the state of the art.

5 Conclusions

In this paper, we propose a novel algorithm to train a pair of a highly sparse ternary projection matrix and a reconstruction operator for compressed sensing of images. The sparse and ternary structure of the learned projection matrix can be exploited in efficient hardware implementations. Experimental results on real images show that a projection matrix with only 5% nonzero ternary entries and a corresponding reconstructor trained end-to-end with the proposed algorithm outperform state-of-the-art methods, while even sparser projection matrices learned using the same algorithm yield acceptable performance as well.

Acknowledgment

We would like to thank the authors of [10] for providing us with the test data and the code implementing the related method. The research has been supported by Fonds Wetenschappelijk Onderzoek (project no. G0A2617) and Vrije Universiteit Brussel (PhD bursary Duc Minh Nguyen, research programme M3D2).

References

  • [1] D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
  • [2] E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, 2006.
  • [3] A. Griffin and P. Tsakalides, “Compressed sensing of audio signals using multiple sensors,” in European Signal Processing Conference (EUSIPCO), 2008, pp. 1–5.
  • [4] L. Gan, “Block compressed sensing of natural images,” in International Conference on Digital Signal Processing (ICDSP), 2007, pp. 403–406.
  • [5] J. F. C. Mota, N. Deligiannis, A. C. Sankaranarayanan, V. Cevher, and M. R. D. Rodrigues, “Adaptive-rate reconstruction of time-varying signals with application in compressive foreground extraction,” IEEE Trans. Signal Process., vol. 64, no. 14, pp. 3651–3666, 2016.
  • [6] S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM J. Sci. Comput., vol. 20, no. 1, pp. 33–61, 1999.
  • [7] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, “A simple proof of the restricted isometry property for random matrices,” Constr. Approx, vol. 28, no. 3, pp. 253–263, 2008.
  • [8] C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 38, no. 2, pp. 295–307, 2016.
  • [9] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P. Manzagol, “Stacked Denoising Autoencoders: Learning useful representations in a deep network with a local denoising criterion,” J. Mach. Learn. Res., vol. 11, pp. 3371–3408, Dec. 2010.
  • [10] A. Mousavi, A. B. Patel, and R. G. Baraniuk, “A deep learning approach to structured signal recovery,” in Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2015, pp. 1336–1343.
  • [11] A. Adler, D. Boublil, M. Elad, and M. Zibulevsky, “A deep learning approach to block-based compressed sensing of images,” ArXiv e-prints, June 2016.
  • [12] H. Palangi, R. Ward, and L. Deng, “Exploiting correlations among channels in distributed compressive sensing with convolutional deep stacking networks,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2016, pp. 2692–2696.
  • [13] B. Sun, H. Feng, K. Chen, and X. Zhu, “A deep learning framework of quantized compressed sensing for wireless neural recording,” IEEE Access, vol. 4, pp. 5169–5178, 2016.
  • [14] M. Iliadis, L. Spinoulas, and A. K. Katsaggelos, “DeepBinaryMask: Learning a binary mask for video compressive sensing,” http://arxiv.org/abs/1607.03343, 2016.
  • [15] M. Elad, “Optimized projections for compressed sensing,” IEEE Trans. Signal Process., vol. 55, no. 12, pp. 5695–5702, 2007.
  • [16] E. Tsiligianni, L. P. Kondi, and A. K. Katsaggelos, “Construction of Incoherent Unit Norm Tight Frames with Application to Compressed Sensing,” IEEE Trans. Inf. Theory, vol. 60, no. 4, pp. 2319–2330, 2014.
  • [17] P. Song, J. F. C. Mota, N. Deligiannis, and M. R. D. Rodrigues, “Measurement matrix design for compressive sensing with side information at the encoder,” in Statistical Signal Processing Workshop (SSP). IEEE, 2016, pp. 1–5.
  • [18] J. F. C. Mota, N. Deligiannis, and M. R. D. Rodrigues, “Compressed sensing with prior information: Strategies, geometry, and bounds,” IEEE Trans. Inf. Theory, vol. 63, no. 7, pp. 4472–4496, 2017.
  • [19] L. Applebaum, S. D. Howard, S. Searle, and R. Calderbank, “Chirp sensing codes: Deterministic compressed sensing measurements for fast recovery,” App. Comp. Harm. Anal., vol. 26, no. 2, pp. 283–290, 2009.
  • [20] J. D. Haupt, W. U. Bajwa, G. Raz, and R. Nowak, “Toeplitz compressed sensing matrices with applications to sparse channel estimation,” IEEE Trans. Inf. Theory, vol. 56, no. 11, pp. 5862–5875, 2010.
  • [21] R. A. DeVore, “Deterministic constructions of compressed sensing matrices,” J. Complexity, vol. 23, no. 4-6, pp. 918–925, 2007.
  • [22] S. Li and G. Ge, “Deterministic construction of sparse sensing matrices via finite geometry,” IEEE Trans. Signal Process., vol. 62, no. 11, pp. 2850–2859, 2014.
  • [23] A. Amini and F. Marvasti, “Deterministic construction of Binary, Bipolar and Ternary compressed sensing matrices,” IEEE Trans. Inf. Theory, vol. 57, no. 4, pp. 2360–2370, 2011.
  • [24] D. Baron, S. Sarvotham, and R. G. Baraniuk, “Bayesian compressive sensing via belief propagation,” IEEE Trans. Signal Process., vol. 58, no. 1, pp. 269–280, 2010.
  • [25] M. Courbariaux, Y. Bengio, and J-P. David, “Binaryconnect: Training deep neural networks with binary weights during propagations,” in International Conference on Neural Information Processing Systems (NIPS), 2015, pp. 3123–3131.
  • [26] M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Bengio, “Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1,” ArXiv e-prints, Feb. 2016.
  • [27] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, “XNOR-net: Imagenet classification using binary convolutional neural networks,” in European Conference on Computer Vision (ECCV), 2016, pp. 525–542.
  • [28] S. Han, J. Pool, J. Tran, and W. J. Dally, “Learning both weights and connections for efficient neural networks,” in International Conference on Neural Information Processing Systems (NIPS), 2015, pp. 1135–1143.
  • [29] S. Han, H. Mao, and W. J. Dally, “Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding,” in International Conference on Learning Representations (ICLR), 2016.
  • [30] X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in International Conference on Artificial Intelligence and Statistics (AISTATS), 2011, pp. 315–323.
  • [31] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in International Conference on Machine Learning (ICML), 2015, pp. 448–456.
  • [32] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “Imagenet large scale visual recognition challenge,” Int. J. Comput. Vis., vol. 115, no. 3, pp. 211–252, 2015.
  • [33] B. C. Russell, A. Torralba, K. P. Murphy, and W. T. Freeman, “Labelme: A database and web-based tool for image annotation,” Int. J. Comput. Vis., vol. 77, no. 1-3, pp. 157–173, 2008.
  • [34] D. P. Kingma and J. L. Ba, “Adam: A method for stochastic optimization,” in International Conference on Learning Representations (ICLR), 2015.