I Introduction
Compressive sensing (CS) [1, 2, 3] has led to real applications, including the single-pixel camera [4], the lensless camera [5, 6], video compressive sensing [7, 8, 9, 10, 11], depth compressive sensing [8, 12], hyperspectral compressive imaging [13, 14, 15], polarization compressive sensing [16], terahertz imaging [17], and millimeter-wave imaging [18]. In this paper, we focus on the two-dimensional (2D) image case, though similar techniques can be used for videos [19] and other modalities. Specifically, we develop our algorithm under the lensless compressive imaging architecture [5, 20], which has provided excellent reconstructed images from compressive measurements using simple, off-the-shelf hardware [20].
Diverse algorithms [21, 22, 23, 24, 25, 26] have been developed for compressive sensing recovery, which plays a pivotal role in CS: reconstructing the desired signal from the compressive measurements. Sparsity, the key ingredient of CS, has been investigated extensively in these algorithms. The wavelet transformation [27, 28, 29, 21] is widely used since it provides a sparse representation of an image with fast transforms (and is thus very efficient). A parallel line of research uses the total variation (TV) for CS recovery [23, 30], which provides good results for piecewise smooth signals. The aforementioned algorithms do not increase the number of unknown parameters significantly during the reconstruction, as the wavelet coefficients usually have a similar (or the same) count as the image pixels. Recently, researchers have found that better results can be achieved by exploiting the sparsity of local patches [31, 32]. In summary, CS recovery algorithms fall into three categories: 1) global-basis-based algorithms, e.g., using the wavelet transformation; 2) TV-based algorithms; and 3) local-basis-based algorithms, i.e., DCT, dictionary learning, or denoising based algorithms. State-of-the-art CS inversion results have been obtained in [33, 34], which generally lie within the third category. In this paper, we propose an alternative inversion algorithm that exploits the low-rank property of image patches.
Generally, CS recovery is an iterative process in which two steps are performed at each iteration [34]: i) projecting the measurements to the image level, which can be done by the majorization-minimization (MM) approach [22, 35], the Euclidean projection [36], or the alternating direction method of multipliers (ADMM) [37]; and ii) denoising this projected image and updating the recovery of the desired image. These two steps are performed iteratively until some criterion is satisfied. A general framework is developed in [34] under the approximate message passing (AMP) framework, into which diverse denoising algorithms [38] can be plugged. One key difference between denoising based CS inversion algorithms and wavelet based ones is that the former exploit local sparsity based on (overlapping) patches, as state-of-the-art denoising algorithms use sparse representations of local patches, e.g., [39]. The number of coefficients for the patches (under some basis or dictionary) is usually larger than the number of image pixels (because overlapping patches are used).
I-A Contributions
The proposed algorithm in this paper also lies in the third category mentioned earlier, i.e., it is based on a local basis (for image patches). The specific contributions of this work can be summarized as follows:

We investigate the low-rank property of image patches under the Gaussian mixture model (GMM) framework. Different from the low-rank model investigated in [33], which requires patch clustering (block matching) as an additional step, in our algorithm each patch is modeled by a GMM with different weights for the different Gaussian components, which can be seen as a soft clustering based on these weights. Furthermore, these weights are updated in each iteration.

We develop a general framework using ADMM to exploit the sparsity (and low-rank property) of patches in order to recover the desired signal.

We conduct experiments using our proposed algorithm and other leading algorithms on real data captured by our lensless camera. This verifies the performance of each algorithm in a real application.
I-B Organization of This Paper
We start with the derivation of an ADMM formulation for CS recovery that exploits the sparsity of local overlapping patches in Section II. The proposed low-rank GMM algorithm is developed in Section III and the joint reconstruction algorithm is summarized in Section IV. Extensive results on both simulation and real data are reported in Sections V and VI. Section VII concludes the paper.
II CS Inversion via Exploiting Sparsity of Patches
Under the CS framework, the problem we are solving can be formulated as:
(1)  
s.t.  (2) 
where  is the sensing matrix,  is the desired signal, and  are the coefficients, which are sparse under the basis . From , we can recover  easily via . The basis can be known a priori (e.g., a wavelet or DCT basis) or learned based on  during the reconstruction.
Considering the image case investigated here, let
denote the vectorized image and the sparse representation,
, is now modeled on image patches. Therefore, (2) can be reformulated as:(3) 
where  denotes the patch extraction and vectorization operation. Considering each patch, we have
(4) 
where  indexes the patches.
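As an illustration, the patch extraction and vectorization operators described above can be sketched in a few lines of NumPy. This is a sketch under assumed patch and stride sizes; the function and variable names are ours, not the paper's:

```python
import numpy as np

def extract_patches(img, p=4, stride=2):
    """Extract overlapping p x p patches with the given stride and
    vectorize each one (a sketch of the patch-extraction operators)."""
    H, W = img.shape
    patches = []
    for r in range(0, H - p + 1, stride):
        for c in range(0, W - p + 1, stride):
            patches.append(img[r:r + p, c:c + p].reshape(-1))
    return np.stack(patches, axis=1)  # one column per vectorized patch

img = np.arange(64.0).reshape(8, 8)
P = extract_patches(img, p=4, stride=2)  # 3 x 3 = 9 patches of 16 pixels each
```

Each column of `P` corresponds to one vectorized patch, so the collection of all columns plays the role of the patch set the equations operate on.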
The problem can be reformulated as:
(5)  
s.t.  (6) 
Note that this basis is shared by all patches in the current discussion; in the following analysis, similar patches can be grouped together [33] and each group can have its own basis.
Next, we develop an ADMM [37] formulation of (5) to solve the problem, which will also be used in our GMM formulation in Section IV. Introducing Lagrange multipliers and parameter in (5) results in the objective function
(7) 
Define ,
(8)  
The ADMM cyclically solves the following 3 subproblems:
(9)  
(10)  
(11) 
where denotes the iteration index.
Equation (9) is a quadratic optimization problem and can be simplified to
(12) 
which admits the closed-form solution,
(13) 
However, the dimension of  is large (equal to the number of pixels of the desired image), requiring a high computational workload. Alternatively, since  is invertible, the matrix inversion formula can be used to reduce the computational workload. As  is used to extract the th patch from an image,  is a diagonal matrix
(14) 
Each of the diagonal entries corresponds to an image pixel location and its value is the number of overlapping patches that cover that pixel. Therefore, and
(15) 
However, this is not necessarily easy to compute, though  can be precomputed;  also needs to be stored. Alternatively, (13) can be solved by the conjugate gradient algorithm [42].
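To make the diagonal structure concrete, the per-pixel overlap counts (the diagonal entries discussed above) can be computed by accumulating ones over the patch grid. This is a sketch under assumed patch and stride sizes, not the paper's code:

```python
import numpy as np

def overlap_counts(H, W, p=4, stride=2):
    """Number of overlapping patches covering each pixel, i.e. the
    diagonal entries of the diagonal matrix discussed in the text."""
    counts = np.zeros((H, W))
    for r in range(0, H - p + 1, stride):
        for c in range(0, W - p + 1, stride):
            counts[r:r + p, c:c + p] += 1.0
    return counts

cnt = overlap_counts(8, 8)
```

Because the matrix is diagonal, its inverse is simply `1.0 / cnt` applied element-wise, which is what makes this part of the update cheap.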
To mitigate this problem, we apply the ADMM again on (5) and introduce another auxiliary variable , leading to the following optimization problem:
(16)  
s.t.  (17) 
Following this,
(18) 
which can be simplified to (by setting ):
(19) 
The optimization of (II) consists of the following iterations:
(20)  
(21)  
(22)  
(23) 
For fixed {},  admits the following closed-form solution:
(24) 
which can be simplified to
(25) 
For the case considered in our work (as implemented in the lensless camera [5]), is the permuted Hadamard matrix and thus
is an identity matrix:
(26) 
Similarly, for fixed {},  admits the following closed-form solution:
(27) 
Recall that  is a diagonal matrix , thus  can be computed element-wise via
(28) 
where denotes the th entry of the vector inside .
Similar to (10), (22) can be considered as a dictionary learning model, where  is the dictionary. If an orthonormal transformation is used (e.g., the DCT),  can be solved by the shrinkage thresholding operation [22, 27].
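For the orthonormal-transform case, the shrinkage step has the familiar closed form: transform, soft-threshold, transform back. A minimal sketch with an orthonormal DCT basis (the threshold value and all names are illustrative assumptions, not the paper's):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; rows are basis vectors, so D @ D.T = I."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    D = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    D[0] *= np.sqrt(1.0 / n)
    D[1:] *= np.sqrt(2.0 / n)
    return D

def soft_threshold(v, tau):
    """Shrinkage operator: sign(v) * max(|v| - tau, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def shrink_patch(z, tau=0.1):
    """Soft-threshold the DCT coefficients of a vectorized patch."""
    D = dct_matrix(z.size)
    return D.T @ soft_threshold(D @ z, tau)
```

With an orthonormal basis the subproblem decouples coefficient by coefficient, which is why no matrix inversion is needed here.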
Since the key of this algorithm is to exploit the sparsity of local overlapping patches, we term this framework SLOPE (Shrinkage of Local Overlapping Patches Estimator), where 'local' refers to the local basis rather than a global basis such as the wavelet. The ADMM-SLOPE is summarized in Algorithm 1.
III The Gaussian Mixture Model
The Gaussian mixture model (GMM) has been re-recognized as an advanced dictionary learning approach and has achieved excellent results in image processing [40, 41] and video compressive sensing [43, 19]. Recall the image patches extracted from the 2D image, where the patch size is  and there are in total  patches. The th patch  is modeled by a GMM with  Gaussians [44]:
(29) 
where  represent the mean and covariance matrix of the th Gaussian, and  denotes the weights of these Gaussian components.
In this paper, we further impose that the GMM is low-rank; the model in (2) now becomes (29) and the problem to be solved becomes
(30)  
s.t.  (31) 
where  denotes the th patch from , which is a vectorized image;  symbolizes the low-rank GMM.
The problem that follows is to estimate this low-rank GMM. Recall from Section I that CS recovery is an iterative two-step procedure. In each iteration, one obtains an estimate from the projection of the measurements (details are discussed in Section IV). We then learn a (full-rank) GMM from this estimate and propose the eigenvalue thresholding approach to derive the low-rank GMM from this full-rank GMM.
III-A Low-Rank GMM
In the following, we provide a motivation for the next step in the algorithm. The random vector in (29) (modeled as a GMM) can be written as, dropping the subscript for simplicity,
(32) 
where each
is a random vector of multivariate normal distribution, given by
(33) 
The random vector
can be decomposed into independent random variables of normal distribution
[45, 43] as follows(34) 
where , is the rank of , and is a vector whose components are independent random variables.
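The decomposition above can be sketched directly from the eigendecomposition of the covariance matrix. This is illustrative code under assumed names, not the paper's implementation:

```python
import numpy as np

def gaussian_factor(Sigma):
    """Factor Sigma = B @ B.T with B = V diag(sqrt(lambda)), so that
    x = mu + B u (u a standard normal vector) has covariance Sigma."""
    w, V = np.linalg.eigh(Sigma)
    w = np.clip(w, 0.0, None)  # guard against tiny negative eigenvalues
    return V @ np.diag(np.sqrt(w))

Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
B = gaussian_factor(Sigma)
```

The number of nonzero columns of `B` equals the rank of `Sigma`, which is exactly the count of independent components that the argument below seeks to reduce.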
In order to reduce noise, we use a model in which  has a small number of independent random components, i.e., we require  to be small. This is equivalent to requiring  to have a reduced rank. More specifically, introducing the parameter , we solve the following minimization problem [46]:
(35) 
where  is the Frobenius norm, and
is the nuclear norm (the sum of the singular values). It is shown in [46] that the solution to (35) can be readily obtained by shrinkage on the singular values (which, for a covariance matrix, equal the eigenvalues) of . Specifically, consider
(36)
(37) 
We impose that via
(38)  
(39) 
We term this the eigenvalue thresholding (EVT). Following this,  is obtained by
(40) 
We further define
(41) 
Next, we define a new random vector
(42) 
which is modeled by a lowrank GMM, parameterized by .
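The EVT step can be sketched as a soft-thresholding of the eigenvalues of the covariance matrix, following the nuclear-norm solution of (35). This is illustrative code (the threshold value is an assumption; as noted later in the paper, the target rank can alternatively be fixed directly):

```python
import numpy as np

def evt(Sigma, lam):
    """Eigenvalue thresholding: soft-threshold the eigenvalues of a
    symmetric PSD covariance matrix to obtain a low-rank surrogate."""
    w, V = np.linalg.eigh(Sigma)
    w = np.maximum(w - lam, 0.0)  # shrink, clipping small eigenvalues to zero
    return V @ np.diag(w) @ V.T

Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
Sigma_lr = evt(Sigma, 1.0)  # only the leading eigenvalue survives
```

The output stays symmetric and positive semidefinite, so it remains a valid (now degenerate) covariance matrix.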
III-B Update Estimate via the Low-Rank GMM
Given an estimated image , the GMM in (29) can be learned via the Expectation-Maximization (EM) algorithm [44, 43] based on overlapping patches. Then, for each Gaussian component, we apply the EVT to the covariance matrix to obtain the low-rank GMM . Following this, the estimated image or patches can be updated via this low-rank GMM, to  or . Dropping the subscript , given , the conditional distribution for  may be evaluated as
(43)
Since  is a low-rank version of , we assume:
(44) 
where is modeled as an additive Gaussian noise, thus
(45) 
Plugging (45) into (43), we have
(46)  
(47)  
(48) 
which is an analytical solution with [44]
(49)  
(50)  
(51)  
Note that  is low-rank, obtained via EVT from , but by adding  (),  is invertible. While (48) provides a posterior distribution for , we obtain the point estimate of  via the posterior mean:
(52) 
which is a closed-form solution.
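This posterior-mean update has a Wiener-filter form. A minimal sketch assuming the additive-Gaussian model y = x + n with n ~ N(0, sigma2 I); symbol names are ours, not the paper's:

```python
import numpy as np

def posterior_mean(y, mu, Sigma, sigma2):
    """Closed-form posterior mean of x under the prior N(mu, Sigma)
    and the observation y = x + n with n ~ N(0, sigma2 I)."""
    G = Sigma @ np.linalg.inv(Sigma + sigma2 * np.eye(len(mu)))
    return mu + G @ (y - mu)

est = posterior_mean(np.array([2.0, 2.0]), np.zeros(2), np.eye(2), 1.0)
```

Adding sigma2 I keeps the matrix being inverted full-rank even when Sigma is a low-rank EVT output, which mirrors the invertibility remark above.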
The procedure of learning and updating the GMM can be summarized as follows:

Step 1: Learn a GMM (not low-rank) via EM from an estimate of , which can be obtained from IST, GAP or ADMM as described below in Section IV.
Step 2: Impose the low-rank constraint on each Gaussian component via the EVT in (36)-(40).
Step 3: Update the estimate of each patch via the posterior mean (52) under the low-rank GMM.
III-C Degrade to the Piecewise Linear Estimator
The piecewise linear estimator (PLE) proposed in [41]
has demonstrated excellent performance on diverse image processing tasks. If each patch is considered to be drawn from a single Gaussian distribution, our GMM degrades to the PLE and the weights (or ) are not required. Furthermore, the update equation of  becomes the Wiener filter. The MAP-EM procedure proposed in [41] can still be used to determine which Gaussian each patch lies in and to estimate the denoised version of each patch. However, this method has been shown to be very sensitive to the initialization, and selecting  is critical to its performance. We compare our proposed algorithm with the PLE experimentally in Section V-D. It is worth noting that, even for the PLE, the eigenvalue thresholding method used to obtain the low-rank Gaussian model is first proposed in this paper. In this case, the PLE model is very similar to NLRCS [33], where the low-rank constraint is imposed on each cluster of patches, while in the PLE the low-rank constraint is imposed on the patches belonging to the same Gaussian component, which can also be seen as a cluster.
Next, we review the MAP-EM algorithm proposed in [41] and adapt it to the current context. In the E-step, assuming that the estimates of the low-rank Gaussian parameters are known (following the previous M-step), for each patch one calculates the MAP estimates under all the Gaussian models and selects the best Gaussian model to obtain the estimate of the patch . In the M-step, assuming that the Gaussian model selection  and the signal estimate  are known (following the previous E-step), one updates the Gaussian models and then imposes them to be low-rank.

E-step: Signal Estimation and Model Selection:
For each image patch , the signal estimation and the model selection are calculated to maximize the log a posteriori probability :
(53) Recall that we consider  is a low-rank version of  and
(54) Therefore:
(55) (56) This maximization is first calculated over  and then over . Given a prior Gaussian signal model ,  can be estimated by the posterior mean
(57) The best Gaussian model that generates the maximum MAP probability among all the models is then selected with the estimated
(58) The signal estimate is obtained by plugging in the best model in the MAP estimate
(59) 
M-step: Model Estimation:
In the M-step, the Gaussian model selection  and the signal estimates  of all the patches are assumed to be known (derived from the IST, GAP or ADMM as shown in Section IV). The parameters of each Gaussian model are estimated with the maximum-likelihood (ML) estimate using all the patches assigned to the same Gaussian model:
(60) where denotes the ensemble of the patch indices that are assigned to the th Gaussian model and
(61) (62)
Returning to the low-rank model proposed in this paper: the E-step is the same as above, and one more step is added to the M-step. The full-rank (not low-rank) Gaussian models are first estimated via (60)-(62), and then each Gaussian model is constrained to be low-rank by thresholding its eigenvalues via (36)-(40). The new low-rank PLE algorithm can be summarized in the following three steps: 1) E-step: signal estimation and model selection via (53)-(59); 2) M-step: full-rank model estimation via (60)-(62); 3) EVT: impose the low-rank constraint on each Gaussian model via (36)-(40).
IV The Joint Reconstruction Algorithm
Section III presents an algorithm to obtain a better estimate (or a denoised version) of the signal given an initial estimate, utilizing the low-rank GMM. In this section, the GMM is wrapped into our joint reconstruction algorithm with different update methods to obtain the initial estimate, which can be considered as projecting the measurement  to the image plane . This is obtained by minimizing the following objective function
(63) 
Diverse algorithms have been proposed; we review two of them below and develop an ADMM formulation in Section IV-C. Other approaches, for example TwIST [27], can also be used.
IV-A Iterative Shrinkage Thresholding
By using the majorization-minimization approach [22] to minimize , we can avoid solving a system of linear equations. At each iteration of the MM approach, we find a function that coincides with  at  but otherwise upper-bounds . We choose a majorizer that can be minimized more easily (without having to solve a system of equations). The majorizer  is defined as
(64) 
where denotes the identity matrix and must be chosen to be equal to or greater than the maximum eigenvalue of . For the Hadamard sensing matrix used in our camera, the maximum eigenvalue of is easily obtained. The update equation of in this Iterative Shrinkage Thresholding (IST) algorithm [35] is given by:
(65) 
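One IST iterate is a gradient step on the data-fit term with step size 1/c, followed by a shrinkage step. A sketch with a plain soft-threshold denoiser standing in for the patch-wise shrinkage; all names and values are illustrative:

```python
import numpy as np

def soft(v, tau):
    """Soft-threshold operator used as a stand-in denoiser."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ist_step(x, y, A, c, tau):
    """One IST iterate: c must dominate the largest eigenvalue of
    A.T @ A so that the quadratic majorizer is valid."""
    z = x + (1.0 / c) * A.T @ (y - A @ x)  # gradient (Landweber) step
    return soft(z, tau)                    # shrinkage step

x1 = ist_step(np.zeros(2), np.array([1.0, 0.0]), np.eye(2), 1.0, 0.2)
```

For the Hadamard sensing matrix described in the text, the required eigenvalue bound for c is easy to obtain, which is what makes this update cheap in practice.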
IV-B Generalized Alternating Projection
The Generalized Alternating Projection (GAP) algorithm proposed in [36], which enjoys the anytime property and has demonstrated high performance in video compressive sensing [8], has the following update equation using the Euclidean projection:
(66) 
Under certain conditions on the sensing matrix , such as the Hadamard matrix used in our system,  is the identity matrix and thus (66) is the same as (65) with .
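The Euclidean projection used by GAP can be sketched in one line; when the Gram matrix of the rows is the identity (as for scaled Hadamard rows), the solve reduces to a transpose multiplication. Illustrative code, not the paper's implementation:

```python
import numpy as np

def gap_projection(x, y, A):
    """Euclidean projection of x onto the affine set {x : A x = y}."""
    return x + A.T @ np.linalg.solve(A @ A.T, y - A @ x)

# project [0, 5] onto the line where the first coordinate equals 2
x_proj = gap_projection(np.array([0.0, 5.0]), np.array([2.0]),
                        np.array([[1.0, 0.0]]))
```

The projected point satisfies the measurement constraint exactly, which is the source of GAP's anytime property: every iterate is measurement-consistent.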
IV-C An ADMM Formulation
Under the GMM framework, we do not have the sparse variable  as in (5); the objective function can be formulated as:
(69) 
where  is obtained from the low-rank GMM model. Following the procedure in (16) by introducing the auxiliary variable {}, we have
(70) 
The optimization of (IVC) consists of the following iterations:
(71)  
(72)  
(73) 
where the update of  is given by the low-rank GMM in (48)-(52). Under the sensing matrix considered here in our work, , the solution of (71) is given by (II). Eq. (72) can be solved by
(74) 
Similar to (28),  can be solved element-wise, but in one shot.
The proposed low-rank GMM algorithm, integrated with the three approaches to update  (projecting  to ), constitutes the LR-GMM-SLOPE algorithm summarized in Algorithm 2. Similarly, when the low-rank constraint is imposed on the PLE, we obtain LR-PLE-SLOPE in Algorithm 3.
Image  Method  CSr  CSr  CSr  CSr  CSr  CSr  CSr  CSr 

Proposed  21.7416  22.7603  23.5292  24.1613  24.6844  25.2192  25.6663  26.0390  
NLRCS  20.4887  21.8187  20.9706  21.6355  24.5237  25.4407  25.5486  25.7166  
DAMP  19.5424  20.4412  21.6364  22.2731  23.0846  23.8539  24.5367  25.0486  
GAPw  20.0870  20.9902  21.6160  22.2509  22.6795  23.0811  23.4164  23.7371  
barbara  TVAL3  18.1065  19.3655  20.6883  21.4713  22.0738  22.8414  23.0771  23.4208 
Proposed  22.7729  22.8600  24.6319  25.2106  25.9662  26.5287  27.1382  27.7893  
NLRCS  21.8342  21.4521  24.3866  16.9015  22.2784  24.5024  23.7634  23.7528  
DAMP  19.7960  20.9017  21.9684  22.8501  23.5793  24.2540  24.9395  25.5490  
GAPw  20.6312  21.2215  21.8709  22.3753  22.8921  23.3859  23.7879  24.1355  
boat  TVAL3  18.2669  19.4750  20.2451  21.5041  21.9831  22.3542  22.7917  23.1918 
Proposed  21.3706  22.1542  22.8186  23.4861  24.0118  24.4425  24.8875  25.1473  
NLRCS  12.0313  16.6251  16.0262  17.7892  17.8507  18.7355  19.2338  21.5345  
DAMP  18.3514  19.3660  20.3618  21.2030  22.0050  22.7786  23.3635  24.0456  
GAPw  19.1932  19.7725  20.2935  21.1823  21.5875  21.8644  22.1527  23.8935  
cameraman  TVAL3  17.8787  18.6441  20.1835  20.2041  20.8832  21.2041  21.2917  21.4903 
Proposed  28.8157  29.8322  30.5442  31.5101  32.0363  32.5592  33.2439  33.5244  
NLRCS  29.7873  30.4427  31.9318  33.4732  34.0312  34.8470  34.9993  34.4573  
DAMP  22.3218  24.4925  26.1075  27.5484  28.9518  30.2165  31.4011  32.3636  
GAPw  24.5972  25.5398  26.2436  26.8443  27.3006  27.8175  28.2215  26.6864  
foreman  TVAL3  19.3428  20.9050  23.1337  24.1152  24.9492  25.42780  25.9380  26.5072 
Proposed  26.7712  28.5080  29.5543  30.1185  31.0571  31.5114  32.1238  32.6598  
NLRCS  25.8497  28.1219  30.6011  27.6256  30.1692  31.5297  28.8471  29.2505  
DAMP  21.7965  23.7841  25.2089  26.6130  27.8643  28.9586  30.0785  31.1040  
GAPw  23.0012  23.8285  24.5941  25.3161  25.8258  26.3531  26.8563  27.2498  
house  TVAL3  19.0674  20.6872  21.7140  23.4273  23.7382  24.0901  24.5912  25.1538 
Proposed  22.9046  23.9496  24.6221  25.3661  25.9910  26.3633  26.9981  27.3694  
NLRCS  22.6851  22.6874  24.8531  23.2517  21.2840  24.2211  25.7213  27.2040  
DAMP  20.4629  21.7906  22.5929  23.6507  24.2494  24.7895  25.3561  25.8032  
GAPw  20.7381  21.4840  22.2093  22.6754  23.0767  23.5415  23.9218  24.2559  
lena  TVAL3  19.1813  19.7752  20.7421  21.7209  22.1211  22.6207  23.0676  23.6665 
Proposed  19.2072  20.1870  21.3428  22.3022  22.9918  23.6285  24.2536  24.7206  
NLRCS  16.1196  17.5697  17.1601  14.6306  15.5702  18.8192  20.8003  20.8957  
DAMP  16.8849  17.6731  18.8636  19.7654  20.8845  21.8400  22.7742  23.4913  
GAPw  17.3904  18.0572  18.8062  19.3374  19.8313  20.2766  20.7840  21.1890  
monarch  TVAL3  16.2031  17.4510  18.0521  18.6358  19.0864  19.6194  19.9829  20.4556 
Proposed  23.1402  24.1143  24.9261  25.5033  26.4840  26.9776  27.4935  28.1257  
NLRCS  20.4401  20.9168  22.6560  22.1715  22.1582  22.4706  24.1134  26.3153  
DAMP  19.5304  20.8630  21.9746  22.8995  23.9191  24.6991  25.5513  26.5082  
GAPw  20.9576  21.9057  22.5781  23.1853  23.7812  24.2326  24.7597  25.1206  
parrot  TVAL3  18.2386  19.7337  21.4420  22.1800  21.8906  22.6345  22.9478  23.5418 
Proposed  23.3141  24.4109  25.2446  25.9507  26.6528  27.1457  27.6911  28.1719  
NLRCS  21.1545  22.4543  23.5732  22.1848  23.4832  25.0708  25.2534  26.1408  
DAMP  19.8358  21.1640  22.3393  23.3504  24.3173  25.1738  26.0001  26.7392  
GAPw  20.8245  21.5999  22.2766  22.8427  23.3212  23.7845  24.2015  24.5660  
average  TVAL3  18.2857  19.5046  20.7751  21.6573  22.0907  22.5990  22.9610  23.4285 
Image  Method  CSr  CSr  CSr  CSr  CSr  CSr  CSr  CSr 

Proposed  22.0258  23.0449  23.8714  24.4931  25.0737  25.6280  26.0476  26.4061  
NLRCS  12.9754  13.8812  19.0401  17.2073  17.5413  18.9149  22.0660  21.8692  
DAMP  17.3405  18.9956  20.6016  21.8303  22.8833  23.8037  24.6248  25.1828  
GAPw  18.9621  19.8426  20.5282  21.0880  21.5961  21.9764  22.2893  22.5441  
TVAL3  15.9549  17.0403  18.0474  18.9911  19.4949  20.1248  20.4482  20.8150  
Proposed  22.4159  23.4245  24.4111  25.3072  26.1026  26.7247  27.3431  27.8475  
NLRCS  15.5817  19.0300  19.9201  19.4926  21.1154  23.7973  24.8067  23.6006  
DAMP  17.3405  18.9956  20.6018  21.8303  22.8833  23.8037  24.6248  25.1828  
GAPw  18.9621  19.8426  20.5282  21.0880  21.5961  21.9764  22.2893  22.5441  
TVAL3  15.9549  17.0403  18.0474  18.9911  19.4949  20.1248  20.4482  20.8150  
V Simulation Results
We test the proposed algorithm on simulated 2D images. The proposed algorithm is compared with other leading algorithms: 1) TVAL3 [30]; 2) GAP based on wavelets [36]; 3) DAMP [34] with BM3D denoising; and 4) NLRCS [33], which exploits the low rank of similar patches. State-of-the-art results have been obtained by [34, 33]. The number of Gaussian components in our mixture model is set to  for all the experiments, and an analysis of this number is provided in Section V-C. When updating , the accelerated GAP in (67) is used; the comparison of different approaches is shown in Section V-B. We obtain the low-rank GMM by setting the rank of each Gaussian component learned via the EM algorithm to half the full rank , where P = 64 for the patch size used in this paper. The proposed algorithm is further compared with JPEG compression in Section V-A.
Following the formulation of the lensless camera [5], the permuted Hadamard matrix is used as the sensing matrix. Each image is resized to and these images are shown in Figure 1. The CSr is defined as:
(75) 
where the number of columns in  equals the total number of pixels in the image. The sensing matrix is constructed from rows of a Hadamard matrix of order . The columns of the Hadamard matrix are permuted according to a predetermined random permutation (the same permutation is used for the real data captured by our lensless camera). For each CSr, we use the first CSr rows of the column-permuted Hadamard matrix as the sensing matrix. By selecting other rows of the permuted Hadamard matrix, we can obtain better results [47]; however, here we select the top rows to be consistent with the implementation of our lensless camera. Note that in this case  is an identity matrix, so the accelerated GAP update for  in (67) is very fast. We observed that the best results are obtained by the LR-GMM-SLOPE with GAP updates of , and these results are reported in this section. For the comparison of IST, GAP and ADMM, please refer to Section V-B.
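The sensing-matrix construction described above can be sketched as follows (Sylvester construction, normalized so that the retained rows are orthonormal; the order, number of rows, and random seed are illustrative assumptions):

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def sensing_matrix(n, m, rng):
    """First m rows of a column-permuted, normalized Hadamard matrix,
    mimicking the construction described in the text."""
    H = hadamard(n) / np.sqrt(n)       # orthogonal after normalization
    perm = rng.permutation(n)          # fixed, predetermined permutation
    return H[:, perm][:m]              # keep the top m rows

A = sensing_matrix(16, 4, np.random.default_rng(0))
```

Column permutation preserves row orthonormality, so the product of `A` with its transpose is the identity, which is the property the fast GAP update relies on.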
Since good results are achieved for most images when CSr=0.1 (see Figure 2 for one example), we focus here on extremely low CSr, in particular CSr, which may be of interest to real applications that detect anomalous events [48] without caring too much about image quality. The results are summarized in Table I. We observe that the best results are obtained by the proposed algorithm, NLRCS or DAMP. When CSr is less than 0.1, the proposed algorithm usually provides the best results, except on "foreman", where NLRCS is the best. We also observed that NLRCS is very sensitive to the parameters, and sometimes its PSNRs do not increase monotonically as the CSr increases, while the other algorithms, including the proposed one, do not have this problem. On average, our proposed algorithm works best when CSr. Though not reported here, when CSr, the proposed algorithm also provides comparable or better results than DAMP and NLRCS, but we need to tune the noise parameter used in ; for all the results presented here, we set it to the same value . In addition, we need to tune the rank threshold , for which we have observed that a higher CSr requires a larger .
In addition to the grayscale images tested above, as in other papers, we also test our proposed algorithm on RGB images, with results shown in Table II. Three sensors are simulated to capture the R, G and B components of the image. The "book" scene corresponds to the real data captured by our lensless camera. Again, our proposed algorithm provides the best results. One example with CSr is shown in Figure 3. It can be seen that our algorithm provides more details than NLRCS and DAMP, both of which present "blob" artifacts.
V-A Comparison with JPEG Compression
Quality  Size (bytes)  JPEG PSNR (dB)  CSr  CS PSNR (dB)  Difference (dB)
1  1,563  22.0683  0.0334  22.1507  0.0823 
3  1,716  22.7278  0.0367  22.4125  0.3154 
5  2,153  24.8416  0.0460  23.2433  1.5983 
7  2,559  26.0924  0.0547  23.8426  2.2548 
9  2,946  26.9623  0.0630  24.3308  2.6315 
10  3,141  27.2853  0.0672  24.5945  2.6908 
12  3,494  27.8379  0.0747  24.9514  2.8866 
14  3,815  28.2981  0.0816  25.3233  2.9748 
16  4,160  28.6912  0.0890  25.6880  3.0032 
18  4,478  29.0306  0.0958  25.9230  3.1077 
20  4,752 