deep-ternary
Source code for the paper "Deep Learning Sparse Ternary Projections For Compressed Sensing of Images"
Compressed sensing (CS) is a sampling theory that allows reconstruction of sparse (or compressible) signals from an incomplete number of measurements, using a sensing mechanism implemented by an appropriate projection matrix. The CS theory is based on random Gaussian projection matrices, which satisfy recovery guarantees with high probability; however, sparse ternary $\{0, -1, +1\}$ projections are more suitable for hardware implementation. In this paper, we present a deep learning approach to obtain very sparse ternary projections for compressed sensing. Our deep learning architecture jointly learns a projection matrix and a reconstruction operator in an end-to-end fashion. The experimental results on real images demonstrate the effectiveness of the proposed approach compared to state-of-the-art methods, with a significant advantage in terms of complexity.
Compressed sensing or compressive sampling (CS) is a theory [1, 2] that merges compression and acquisition, exploiting sparsity to recover signals that have been sampled at a drastically lower rate than the Shannon/Nyquist theorem imposes. The results of CS have an important impact on numerous signal processing applications, including the efficient processing and analysis of high-dimensional data such as audio [3], images [4] and video [5].

Consider a finite-length, real-valued signal $x \in \mathbb{R}^N$. CS yields a compressed representation of the signal using a sensing mechanism realized by a sensing or projection matrix. The linear measurement process is described by
$$y = \Phi x, \tag{1}$$
where $\Phi \in \mathbb{R}^{M \times N}$, $M \ll N$, is the projection matrix and $y \in \mathbb{R}^M$ is the vector containing the obtained measurements. In CS, we assume that either $x$ is a sparse signal or that $x$ has a sparse representation with respect to a suitable basis $\Psi \in \mathbb{R}^{N \times N}$, that is, $x = \Psi s$, $\|s\|_0 = K$, $K \ll N$, where $\|\cdot\|_0$ is the quasi-norm counting the non-vanishing coefficients of the treated signal. Therefore, we obtain the underdetermined linear system

$$y = \Phi \Psi s. \tag{2}$$
A sparse vector satisfying (2) can be obtained as the solution of the $\ell_1$-minimization problem

$$\hat{s} = \arg\min_{s} \|s\|_1 \quad \text{subject to} \quad y = \Phi \Psi s, \tag{3}$$

employing well-known algorithms like Basis Pursuit [6].
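As a toy illustration of (3), the $\ell_1$ problem can be cast as a linear program by splitting the unknown into its positive and negative parts. The sketch below uses SciPy's `linprog`; the dimensions and the random Gaussian matrix are illustrative choices, not the paper's setup:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, M, K = 64, 32, 3                      # toy signal dim, measurements, sparsity

# K-sparse ground-truth signal
s = np.zeros(N)
s[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

A = rng.standard_normal((M, N)) / np.sqrt(M)   # random Gaussian projection
y = A @ s

# Basis Pursuit:  min ||s||_1  s.t.  A s = y.
# Substitute s = u - v with u, v >= 0 to obtain a linear program.
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
s_hat = res.x[:N] - res.x[N:]

print(np.linalg.norm(s_hat - s) < 1e-4)  # exact recovery expected w.h.p. at these sizes
```

With $M$ well above the threshold discussed next, Basis Pursuit recovers the sparse vector exactly with overwhelming probability.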
Conventional CS theory is based on random Gaussian or random Bernoulli matrices, which can be used to recover an $N$-dimensional $K$-sparse signal, provided that the number of measurements is $M = O(K \log(N/K))$ [7]. An important issue when considering random matrices is that such matrices are typically difficult to build in hardware. The difficulty in storing these matrices and certain physical constraints on the measurement process make it challenging to realize CS in practice. Moreover, when multiplying arbitrary matrices with signal vectors of high dimension, the lack of any fast matrix multiplication algorithm results in high computational cost.
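To make the storage and computation argument concrete, here is a small numpy sketch contrasting a dense Gaussian projection with a sparse ternary one. The sizes and the 5% density are illustrative, and the ternary matrix below is randomly constructed, not the learned matrix proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 256, 1024          # illustrative measurement and signal dimensions
gamma = 0.05              # fraction of nonzero entries per row

# Dense Gaussian projection: M*N real numbers to store and multiply.
Phi_gauss = rng.standard_normal((M, N))

# Sparse ternary projection: only K positions and signs per row.
K = int(gamma * N)
Phi_tern = np.zeros((M, N))
for i in range(M):
    idx = rng.choice(N, K, replace=False)
    Phi_tern[i, idx] = rng.choice([-1.0, 1.0], size=K)

x = rng.standard_normal(N)
y = Phi_tern @ x          # acquisition needs only K additions/subtractions per row

print(np.count_nonzero(Phi_tern), M * N)  # 13056 vs 262144 stored entries
```

Since every nonzero is $\pm 1$, the matrix-vector product reduces to additions and subtractions, which is exactly the property exploited for hardware-friendly acquisition.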
Deep learning is an emerging field that learns multiple levels of representation of data and has been used successfully in image processing tasks, including image super-resolution [8], image denoising [9] and compressed sensing [10, 11]. Deep learning has also been applied to distributed CS [12], quantized CS [13] and video CS [14].

In this paper, we adopt a deep learning approach to learn an optimized projection matrix and a non-linear reconstruction mapping from measurements to the original signal. In order to design projections suitable for hardware implementation, we focus on sparse matrices with entries in $\{0, -1, +1\}$, imposing sparsity and binary constraints on the proposed network architecture. The network is trained on image patches and the learned projection matrix is used to acquire images in a block-based manner. Our experimental results show that high-quality reconstruction can be achieved with projections containing only a small percentage of nonzero entries.
While compressed sensing was introduced utilizing random projection matrices, newer results concern matrices that are more efficient than random projections, leading to fewer necessary measurements or improved reconstruction performance [15, 16]. Alternative studies have proposed designs that leverage prior knowledge of the signal [17], akin to the theory of compressed sensing with prior information [18]. Important research directions focus on the hardware implementation of CS. In order to achieve efficient storage, fast encoding and decoding, structured matrices have been proposed [19, 20]. Unfortunately, these constructions come at the cost of poor recovery conditions. Binary and ternary matrices [21, 22, 23] yield fast computations; however, most of the proposed constructions are deterministic and impose restrictions on the matrix dimensions. Moreover, as proven in [24], LDPC-like sparse matrices may perform well as long as they are not too sparse.
Our approach addresses both hardware limitations and recovery requirements. The proposed projections can be extremely sparse, with only a small fraction of their elements nonzero, allowing highly efficient storage, and have nonzero entries in $\{-1, +1\}$, yielding fast computations during acquisition.¹ While such a sparse projection matrix would lead to unacceptable recovery performance when combined with conventional reconstruction algorithms, the joint optimization of the sensing mechanism and the reconstruction process yields state-of-the-art results in image recovery.

¹The source code of the proposed method is available at https://github.com/nmduc/deep-ternary.
The first work concerning recovery from compressed measurements via deep learning is presented in [10]. The authors use stacked denoising autoencoders to jointly train a non-linear sensing operator and a non-linear reconstruction mapping. While the reconstruction quality is comparable to state-of-the-art algorithms, the gain in reconstruction time is considerable. Deep learning for CS has also been employed in [11]. Our approach follows principles similar to [10, 11], but focuses on simplifying the projection matrix. While the projection matrices in [10, 11] are dense matrices with real-valued entries, we constrain the projection matrix to be a sparse matrix with elements in $\{0, -1, +1\}$ so that it can be efficiently implemented.

The proposed algorithm builds on recent work on simplifying deep neural networks, applied to image classification tasks, in which deep networks have achieved great progress. In [25], neural networks are trained with binary weights in $\{-1, +1\}$. The study in [26] extends [25] to fully binary neural networks (BNNs), with binary weights and binary hidden-unit activations. A different technique [27] adds scaling factors to compensate for the loss introduced by weight binarization. Another direction in simplifying deep neural networks is to compress pre-trained networks. The method proposed in [28] learns not only weights but also connections, producing sparsely-connected networks. Extending [28], the work in [29] employs connection pruning, weight quantization and Huffman coding to compress deep neural networks.

Our algorithm incorporates ideas from both directions. Nevertheless, compared to existing methods, our novelty is twofold: (i) we propose a sparsifying technique on the weights implementing the sensing layer; combined with binarization techniques, our method yields a highly sparse ternary projection matrix. The learned projections can be stored efficiently and allow fast computations during acquisition; therefore, they are suitable for hardware implementation. (ii) We only simplify the first layer of the network, which corresponds to the linear projection matrix, and allow the reconstruction module to be non-linear in order to achieve high performance.
Next, we present our proposed method for efficient compressed sensing of images. The section starts with a description of our network architecture, followed by our proposed training algorithm.
Our network architecture is illustrated in Fig. 1. It consists of a sensing module and a reconstruction module. The network takes vectorized image patches as input. The sensing layer projects the $N$-dimensional input signal to the $m$-dimensional measurement domain, $m \ll N$; thus, the number of units in this layer is $m$, determined by the sensing rate $r = m/N$. The sensing layer has weights $W \in \mathbb{R}^{N \times m}$, corresponding to the projection matrix $\Phi$. In order to learn a simple linear projection matrix, we use neither a bias term nor a non-linear activation function in this layer.
The first layer in the reconstruction module is a scaling layer, which linearly scales the outputs of the sensing layer by learned factors $\alpha$. This layer consists of $m$ hidden units connected “1-1” to the hidden units of the sensing layer, and it is employed to compensate for the loss induced by binarizing the sensing weights, as explained in Sec. 3.2. In deployment, however, only the projection matrix is implemented in the sensing device, while the learned scaling factors are kept in the reconstruction module.
The scaling layer is followed by non-linear hidden layers employing the Rectified Linear Unit (ReLU) activation function [30]. The output layer is a linear fully connected layer with size equal to the input dimension. All layers in the reconstruction module, except for the scaling layer, are fully connected and followed by a batch-normalization layer [31].

In general, our network training follows the standard mini-batch gradient descent method. Denote the input and reconstructed patches by $x_i$ and $\hat{x}_i$, respectively. We employ the mean squared error between the input and the reconstruction as our loss function:
$$L = \frac{1}{T} \sum_{i=1}^{T} \|\hat{x}_i - x_i\|_2^2, \tag{4}$$
where $T$ is the number of sample patches. In contrast to the conventional mini-batch gradient descent method, we introduce a sparsifying and a binarization step in the training of the sensing layer.
More particularly, we first sparsify the continuous-valued sensing weights $W$ to obtain the sparse weights $W_s$. For this step, we propose to retain in $W_s$ only the entries that correspond to the $K$ largest absolute values in $W$ and to set all remaining entries to zero; we refer to this procedure as the top-$K$ function. The selection of the top-$K$ weights can be applied column-wise, row-wise or over the whole matrix. Nevertheless, since the update of the scaling layer, presented below, involves a column-wise operator on the sensing weights, we opt to perform the top-$K$ selection in a column-wise manner. Each neuron in the sensing layer is thus connected to $K = \gamma N$ elements of the input signal, where $\gamma$ is the sparsity ratio. Implementation-wise, we construct a sparse binary mask $M$ with entries equal to $1$ at the positions of the $K$ largest-magnitude weights in each column of $W$. The sparse sensing weights are updated according to $W_s = W \odot M$, where $\odot$ represents the Hadamard product.

The binarization step involves a mapping of the sparse continuous-valued weights $W_s$ to sparse binary weights $W_b$. Both the sparsifying and the binarization step introduce some loss, which we want to recover during reconstruction. For this reason, the sensing layer is followed by a scaling layer similar to the one proposed in [27]. This layer, with weights $\alpha$, serves as an inverse mapping from the sparse binarized weights $W_b$ to the continuous weights $W$: the output of the scaling layer approximates the measurements corresponding to the continuous projections $W$. Let $w_i$ and $b_i$ be the $i$-th columns of $W$ and $W_b$, respectively; $w_i$ corresponds to the dense continuous weights of the $i$-th hidden unit in the sensing layer. We approximate $w_i$ with $\alpha_i b_i$, where $\alpha_i > 0$ is a scale factor corresponding to the $i$-th entry of the scaling weights $\alpha$. The values of $\alpha_i$ and $b_i$ are determined by minimizing the following mean squared error with respect to $\alpha_i$ and $b_i$:
$$J(\alpha_i, b_i) = \|w_i - \alpha_i b_i\|_2^2. \tag{5}$$
By expanding (5), we have:
$$J(\alpha_i, b_i) = \alpha_i^2\, b_i^\top b_i - 2\alpha_i\, w_i^\top b_i + c, \tag{6}$$
where $c = w_i^\top w_i$ is a constant. As $\alpha_i$ is a positive scalar, following [27] but also taking into account our sparsity constraint, we obtain the optimal sparse binary vector $b_i^*$ as the solution of the problem:
$$b_i^* = \arg\max_{b_i}\; w_i^\top b_i \quad \text{s.t.}\quad b_i(j) \in \{-1, +1\}\ \forall j \in \mathrm{supp}(m_i),\quad b_i(j) = 0\ \forall j \notin \mathrm{supp}(m_i), \tag{7}$$
where $m_i$ is the $i$-th column of the mask $M$, and $\mathrm{supp}(\cdot)$ denotes the positions of the nonzero entries of the treated vector. The solution of (7) is the vector containing the signs of $w_i$ on $\mathrm{supp}(m_i)$. After obtaining the optimal $b_i^*$, we solve for the optimal $\alpha_i^*$ by setting the derivative of $J$ with respect to $\alpha_i$ equal to zero. Considering that $b_i^{*\top} b_i^* = K$, the optimal $\alpha_i^*$ is given by:
$$\alpha_i^* = \frac{w_i^\top b_i^*}{K} = \frac{1}{K} \sum_{j \in \mathrm{supp}(m_i)} |w_i(j)|. \tag{8}$$
After the sparsifying and binarization steps, the resulting $W_b$ is sparse, with $K$ nonzero entries in $\{-1, +1\}$ in each of its columns.
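The sparsify-binarize-scale computation of Eqs. (5)-(8) can be sketched in a few lines of numpy, applying the column-wise top-$K$ selection described above. The matrix sizes here are illustrative:

```python
import numpy as np

def sparsify_binarize(W, K):
    """Per-column top-K sparsification, sign binarization, and the
    closed-form optimal scales of Eq. (8) (a sketch of the steps above)."""
    N, m = W.shape
    mask = np.zeros_like(W)
    # Binary mask: 1 at the K largest-magnitude entries of each column.
    top = np.argpartition(-np.abs(W), K - 1, axis=0)[:K, :]
    mask[top, np.arange(m)] = 1.0
    W_s = W * mask                         # sparse continuous weights
    W_b = np.sign(W_s)                     # Eq. (7): signs on the support
    alpha = np.abs(W_s).sum(axis=0) / K    # Eq. (8): mean |w| over the support
    return W_s, W_b, alpha

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 256))
W_s, W_b, alpha = sparsify_binarize(W, K=51)
print(np.count_nonzero(W_b[:, 0]))  # 51 nonzeros in each column
```

Note that `alpha` coincides with $w_i^\top b_i^* / K$, since $b_i^*$ carries the signs of $w_i$ on the support.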
Following existing training algorithms for networks with binary weights [25, 26, 27], we use the sparse binary weights $W_b$ during forward and backward propagation. The high-precision weights $W$, on the other hand, are used during the parameter update, to accommodate the small changes of the weights after each update step. In our training, $W$ is updated using the gradient of the loss function with respect to $W_b$. It should be noted that even though $W_b$ contains only discrete weights in $\{0, -1, +1\}$, the gradient of the loss with respect to it still lies in the continuous domain. Denoting by $W^{(t)}$, $W_s^{(t)}$ and $W_b^{(t)}$ the continuous, sparse, and sparse binary sensing weights, by $\alpha^{(t)}$ the scaling layer's weights, and by $W_r^{(t)}$ the reconstruction weights at step $t$, our training is summarized in Algorithm 1.
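A minimal numpy sketch of this update scheme is given below. It substitutes a toy linear reconstructor for the paper's deep reconstruction module, and all sizes and the learning rate are illustrative; the point is only the split between the binary weights used in forward/backward passes and the continuous weights receiving the update:

```python
import numpy as np

rng = np.random.default_rng(0)
B, N, m, K, lr = 128, 32, 16, 4, 0.2      # toy batch size, dims, sparsity, lr

X = rng.standard_normal((B, N))           # toy training patches
W = rng.standard_normal((N, m)) * 0.1     # high-precision sensing weights
W_r = rng.standard_normal((m, N)) * 0.01  # toy linear "reconstruction" weights

def project(W, K):
    """Column-wise top-K followed by sign: the sparse binary weights W_b."""
    mask = np.zeros_like(W)
    top = np.argpartition(-np.abs(W), K - 1, axis=0)[:K, :]
    mask[top, np.arange(W.shape[1])] = 1.0
    return np.sign(W * mask)

losses = []
for step in range(500):
    W_b = project(W, K)                   # forward/backward use binary weights
    Y = X @ W_b
    X_hat = Y @ W_r
    losses.append(np.mean((X_hat - X) ** 2))
    G = 2.0 * (X_hat - X) / X.size        # dL/dX_hat
    grad_Wr = Y.T @ G
    grad_Wb = X.T @ (G @ W_r.T)           # gradient w.r.t. the binary weights...
    W_r -= lr * grad_Wr
    W -= lr * grad_Wb                     # ...applied to the continuous weights

print(losses[0] > losses[-1])             # training reduces the loss
```

Even though the forward pass only ever sees ternary sensing weights, the continuous shadow weights accumulate small gradient steps until signs or support positions flip.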
In order to evaluate the proposed algorithm, we carried out experiments in image recovery. We employed the ILSVRC2012 validation set [32] for training, and tested our model on two testing sets. The first testing set consists of 10 images taken from the ILSVRC2014 [32] dataset and is provided by the authors of [10]. The second testing set is composed of 50 images randomly selected from the LabelMe dataset [33]. All images were converted to grayscale in our experiments. To reduce the computational overhead, we ran our experiments on small image patches; in total, we randomly sampled several million patches to form our training set. During training, the input patches were preprocessed by subtracting the mean and dividing by the standard deviation. We trained our network using Algorithm 1, with the Adam parameter update [34]; the learning rate was decayed by a fixed factor at regular epoch intervals, and the training samples were randomly shuffled after each epoch. To avoid over-fitting, we employed regularization on the reconstruction module with a small weight. During the testing phase, we sampled overlapping patches from each test image with a fixed stride and determined the final image reconstruction as the average of the patches' reconstructions. The methods are evaluated using PSNR values, expressed in dB. Concerning the network architecture, we empirically set the number of non-linear hidden layers and their width to a configuration that produces a good trade-off between training time and reconstruction quality.

Table 1:

| PSNR | 25.24 | 26.11 | 26.65 | 27.52 | 27.96 |
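The block-based testing protocol (overlapping patches, averaged reconstructions, PSNR scoring) can be sketched as follows. The patch size and stride are illustrative, and the per-patch "reconstruction" is the identity so that the averaging machinery can be checked in isolation:

```python
import numpy as np

def extract_patches(img, p, stride):
    """All p x p patches at the given stride, as flattened rows."""
    H, W = img.shape
    patches, coords = [], []
    for i in range(0, H - p + 1, stride):
        for j in range(0, W - p + 1, stride):
            patches.append(img[i:i + p, j:j + p].ravel())
            coords.append((i, j))
    return np.array(patches), coords

def average_reconstruction(patches, coords, p, shape):
    """Place each reconstructed patch back and average the overlaps."""
    acc, cnt = np.zeros(shape), np.zeros(shape)
    for patch, (i, j) in zip(patches, coords):
        acc[i:i + p, j:j + p] += patch.reshape(p, p)
        cnt[i:i + p, j:j + p] += 1.0
    return acc / cnt

def psnr(ref, rec, peak=1.0):
    mse = np.mean((ref - rec) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
patches, coords = extract_patches(img, p=16, stride=8)
out = average_reconstruction(patches, coords, 16, img.shape)
print(np.allclose(out, img))       # identity per-patch mapping: exact recovery
print(psnr(img, img + 0.01))       # a constant 0.01 offset gives 40 dB
```

In the actual pipeline, each flattened patch would pass through the sensing and reconstruction modules before being placed back and averaged.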
Sparsity ratio
Next, we experiment with different sparsity ratios, keeping the sensing rate fixed and varying the sparsity ratio $\gamma$. The mean PSNR values on the first testing set are presented in Table 2; the number of nonzero entries in each column of the projection matrix grows proportionally with $\gamma$. As can be seen, acceptable reconstruction performance can be achieved using extremely sparse projection matrices. Increasing $\gamma$ from the smallest tested values considerably improves the performance, while the differences between intermediate values are negligible. The network reaches its peak performance at an intermediate sparsity ratio and performs slightly worse at higher ratios. It should be pointed out that, at the smallest tested ratios, the total number of nonzero entries in the projection matrix is not enough to fully cover the input signal dimension; we argue that this is the reason for the noticeable performance jump once the nonzeros suffice to cover the input. During training, the network experiences over-fitting at the largest sparsity ratios, which explains why an intermediate ratio gives better performance than denser settings. As a result, the proposed sparse binary constraint can be considered as an extra regularizer for the network.
Table 2 (mean PSNR in dB for increasing sparsity ratios):

| PSNR | 25.83 | 26.96 | 26.98 | 27.61 | 27.52 | 27.40 | 27.37 |
Comparison with state of the art
As the proposed algorithm implements CS via deep learning, the next experiment involves a comparison with the method of [10], which employs a stacked denoising autoencoder to jointly learn the sensing layer and the reconstructor. We select the best-performing algorithm from [10], referred to as O-NL-SDA, for our comparison. This algorithm uses a non-linear sensing mechanism with overlapping image patches. The results of O-NL-SDA on the first testing set are taken from [10]. To obtain the results on the second testing set, we train an O-NL-SDA model on our training set using the configurations proposed in [10].

Results obtained with a conventional reconstruction algorithm, namely Basis Pursuit (BP) [6], using random ternary projections are also presented. Sparse binary and ternary constructions like the ones proposed in [22, 23] could not be employed in our experiments due to the constraints they impose on the matrix dimensions.

For a fair comparison with [10], we use the same sensing rate. For the proposed method, we choose the sparsity ratio that yields the best performance while still producing a highly sparse projection matrix.

The comparison between the selected methods on the first testing set is shown in Table 3. On the second testing set, the mean PSNR values follow the same ordering: the proposed algorithm performs best, followed by O-NL-SDA [10] and BP [6].
| Image | O-NL-SDA [10] | BP [6] | Proposed |
|---|---|---|---|
| Damselfly | 30.85 | 26.49 | 31.56 |
| Birds | 26.62 | 22.75 | 27.17 |
| Rabbit | 27.24 | 22.63 | 27.89 |
| Turtle | 34.65 | 26.31 | 35.59 |
| Dog | 21.55 | 16.97 | 22.33 |
| Eagle Ray | 26.57 | 22.65 | 27.20 |
| Boat | 33.11 | 25.48 | 33.94 |
| Monkey | 30.32 | 23.51 | 30.70 |
| Panda | 21.00 | 17.85 | 21.53 |
| Snake | 17.71 | 14.67 | 18.23 |
| Mean PSNR | 26.96 | 21.93 | 27.61 |
As can be seen, the proposed algorithm yields significantly better results than conventional reconstruction with BP [6] using ternary but non-sparse projections. Despite employing a sparse ternary matrix with only 5% nonzero entries, our method outperforms O-NL-SDA [10] in terms of recovery performance. Concerning the speed of the reconstructor, as reported in [10], a reconstructor implemented as a feed-forward neural network can perform orders of magnitude faster than a convex optimization solver. Hence, our method provides a convenient hardware implementation of the sensing mechanism and a fast reconstructor with better reconstruction quality than the state of the art.
In this paper, we proposed a novel algorithm to jointly train a highly sparse ternary projection matrix and a reconstruction operator for compressed sensing of images. The sparse and ternary structure of the learned projection matrix can be exploited in efficient hardware implementations. Experimental results on real images show that a projection matrix with 5% nonzero binary entries and a corresponding reconstructor trained end-to-end with the proposed algorithm outperform state-of-the-art methods. Even sparser projection matrices learned using the same algorithm yield acceptable performance as well.
We would like to thank the authors of [10] for providing us with the test data and the code implementing their method. The research has been supported by Fonds Wetenschappelijk Onderzoek (project no. G0A2617) and Vrije Universiteit Brussel (PhD bursary Duc Minh Nguyen, research programme M3D2).
- J. Haupt, W. U. Bajwa, G. Raz, and R. Nowak, “Toeplitz compressed sensing matrices with applications to sparse channel estimation,” IEEE Trans. Inf. Theory, vol. 56, no. 11, pp. 5862–5875, 2010.
- M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, “XNOR-Net: ImageNet classification using binary convolutional neural networks,” in European Conference on Computer Vision (ECCV), 2016, pp. 525–542.
- X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in International Conference on Artificial Intelligence and Statistics (AISTATS), 2011, pp. 315–323.
- S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in International Conference on Machine Learning (ICML), 2015, pp. 448–456.