Improvements in tomographic reconstruction from sparse measurements are widely sought after, because fewer measurements reduce exposure to ionizing X-rays. However, the quality of reconstruction is directly limited by the number of measurements captured. When sampling falls below the Nyquist limit, the data cannot be uniquely reconstructed by direct inversion methods. Hence, given an exceedingly small number of measurements, the reconstruction must rely on some known information about the data to achieve better fidelity. This information compensates for the sub-Nyquist sampling of the data; it could be, for example, the sparsity of the data under some transform. Compressive sensing (CS) routines exploit this information to recover the sparse coefficients of the underlying data, with proven guarantees on error bounds.
Our previous work partitioned a volume composed of consecutive 2D slices into local groups of slices before reconstructing this volume from a sparse set of projected views. The grouping of these slices was based on their structural similarity, as measured by comparing the invariant moments of their projected views. Acquiring local knowledge of the object structure enabled significant improvement, since the CS reconstruction could then be performed on groups of similar slices, which were reconstructed much better than by individual, slice-by-slice CS reconstruction. In addition, recent research has also focused on using known information that is specific to the type of data being measured, for example, an existing similar dataset, also known as the prior. Such a technique has been referred to as Prior Image Constrained Compressed Sensing (PICCS). Here, the similarity between the prior and the reconstructed data is explicitly enforced in the iterative routines. For a desired reconstruction quality, this offers a further reduction in sampling compared to plain CS.
However, in scenarios where the similarity between the prior and the test data is not guaranteed, applying PICCS will hurt the reconstruction. Moreover, selecting any one particular optimal slice or volume as a prior from a huge amount of available data can be challenging. To evade this shortcoming, a large amount of work has been done on using a set of slices or volumes as prior data. The test slice is then more likely to be similar to at least one of the templates. One application scenario for using a prior is long-term treatment, in which a subject might undergo CT scans multiple times. The subsequent scans can then be taken with fewer projections by exploiting the information available from all, or a few, of the previous good-quality scans.
Contributions: In this work, we build on the idea of using multiple templates by hypothesizing that the test slice may not be exactly similar to any of the slices in the template set, but lies in the subspace spanned by the templates. In that case, the test slice can be expressed as a linear combination of the principal components of the template slices. Hence, we form an eigenspace from a set of template slices and use this as our prior. We have compared our results with plain CS, filtered back-projection and patch-based dictionary methods, and observed a substantial improvement with highly undersampled and noisy measurements.
In Section II, we discuss the existing literature on prior-based reconstruction. In Section III, we describe the proposed global-prior-based technique, followed by the patch-based dictionary technique in Section IV. The results of our experiments on five CT datasets are described in Section V, followed by conclusions and future work in Section VI.
II Previous work
The quality of image reconstruction depends on the type of recovery algorithm used and on the way in which the prior is used. Among the various iterative recovery algorithms, Compressed Sensing (CS) is widely used in tomographic reconstruction due to its theoretical guarantees of robust data recovery from noisy sub-Nyquist measurements.
Among the prior-aided methods, a direct extension of PICCS is the usage of multiple priors. This has been demonstrated in work where the optimal weights for the set of priors are adaptively tuned after each iteration of reconstruction. Many methods have also been proposed recently to exploit local prior data. In one such method, volume reconstruction from low-dose CT was performed using a 3D patch-based overcomplete dictionary built offline from a set of good-quality scan volumes. The performance of this dictionary was shown to be superior to the Simultaneous Iterative Reconstruction Technique (SIRT), 3D total variation and a 2D slice-wise patch-based dictionary. There has also been previous work on updating the 2D patch-based dictionary during the reconstruction process, based on the intermediate reconstructions; this is, however, computationally very expensive. Dual dictionaries have also been built for 3D MRI volume reconstruction, one corresponding to the low-resolution volume and the other to the high-resolution volume, thereby exploiting the similarity between the volumes at different resolutions. Since this dual dictionary is fixed and sensitive to the parameters of reconstruction, an adaptive dual dictionary, operating on 2D patches, was later proposed. In another approach, both the MRI sampling positions and the weight given to the prior are varied in real time, based on the differences between a single template volume and the test volume. Instead of using a set of templates, one method used a single template image whose patches were classified into multiple classes based on their geometrical orientation; a separate dictionary was then learned for each class of patch. The intermediate reconstructions were used to update the template slice, and the whole process was repeated.
In contrast to all the previous work, we propose to use a more accurate and faster global Principal Component Analysis (PCA) based dictionary as part of the CS scheme.
III Computing the global statistical prior
We begin with the assumption that the test slice can be expressed as a compact linear combination of the principal components extracted from a set of similar slices (referred to here as ‘templates’). Hence, we build the prior using PCA, a widely used technique for representing data in terms of its significant modes or directions of variation.
In our algorithm, we choose a set of representative 2D slices as templates. These templates must represent a wide range of anatomy, so that they contribute to an eigenspace that can encompass the range of possible test-slice anatomy. If these slices come from different volumes, they must first be registered before computing the prior. The prior is built by computing the covariance matrix of the template set, as shown in Eq. 1. The space spanned by the eigenvectors of this covariance matrix (the eigenspace) is our global prior, and is assumed to contain any test slice that is similar, but not necessarily identical, to the templates.
The covariance matrix is given by:
\[
C = \frac{1}{N}\sum_{i=1}^{N} (t_i - \mu)(t_i - \mu)^T, \tag{1}
\]
where \(t_i\) denotes the \(i\)-th vectorized template, \(\mu\) denotes the mean of all templates and \(N\) denotes the number of templates chosen.
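As a concrete sketch, the offline eigenspace computation above can be written in a few lines of NumPy. The paper's experiments are in MATLAB; this Python version is only illustrative, and the function and variable names are our own:

```python
import numpy as np

def build_eigenspace(templates, n_components=None):
    """Build a global PCA prior from a stack of registered template slices.

    templates: array of shape (N, H, W) -- N registered 2D template slices.
    Returns (mu, V): the mean slice (flattened) and a matrix whose columns
    are the leading eigenvectors of the template covariance matrix.
    """
    N = templates.shape[0]
    T = templates.reshape(N, -1).astype(float)   # each row is a flattened slice
    mu = T.mean(axis=0)                          # mean template
    Tc = T - mu                                  # centred data
    # The right singular vectors of Tc are the eigenvectors of the
    # covariance C = (1/N) Tc^T Tc, so an SVD avoids forming C explicitly.
    _, _, Vt = np.linalg.svd(Tc, full_matrices=False)
    V = Vt.T                                     # columns are principal components
    if n_components is not None:
        V = V[:, :n_components]
    return mu, V
```

Computing the eigenvectors via the SVD of the centred data, rather than forming the (pixels × pixels) covariance matrix directly, keeps the cost proportional to the small number of templates.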
Once the prior eigenspace is learned offline, the reconstruction is performed iteratively by minimizing an objective function using Compressive Sensing (CS) based routines. Let \(\Phi\) denote the Radon measurement matrix under a chosen set of angles, \(\theta\) denote the sparse 2D-DCT (Discrete Cosine Transform) coefficients of the slice to be reconstructed, and \(U\) denote the inverse DCT basis matrix. The DCT is commonly used to sparsely represent naturally occurring images in the CS framework. The objective function to be minimized in the plain CS technique is given by:
\[
\min_{\theta}\; \|y - \Phi U \theta\|_2^2 + \lambda_1 \|\theta\|_1,
\]
where \(y\) denotes the vector of noisy measurements.
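This type of \(\ell_1\)-regularized least-squares objective can be minimized by iterative soft-thresholding (ISTA), sketched below in NumPy for a generic matrix `A` standing in for the product of the Radon matrix and the inverse DCT basis. Note that the paper itself uses the l1-ls solver; ISTA is shown here only as a self-contained illustration of the optimization:

```python
import numpy as np

def ista(A, y, lam, n_iter=500, step=None):
    """Minimise ||y - A @ theta||_2^2 + lam * ||theta||_1 by ISTA.

    This is an illustrative stand-in for the l1-ls solver used in the paper.
    """
    if step is None:
        # inverse Lipschitz constant of the gradient of the data term
        step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)
    theta = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ theta - y)     # gradient of the quadratic term
        z = theta - step * grad                # gradient step
        # soft-thresholding: proximal operator of the l1 penalty
        theta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return theta
```

Each iteration alternates a gradient step on the data-fidelity term with a shrinkage step that promotes sparsity of the DCT coefficients.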
After inclusion of the PCA-based prior, the objective function becomes:
\[
\min_{\theta,\alpha}\; \|y - \Phi U \theta\|_2^2 + \lambda_1 \|\theta\|_1 + \lambda_2 \|U\theta - (V\alpha + \mu)\|_2^2,
\]
where the columns of \(V\) are the principal components (eigenvectors), \(\mu\) is the template mean, and \(\alpha\) denotes the eigen-coefficients of \(U\theta\) (an estimate of the underlying test slice). We solve for \(\theta\) and \(\alpha\) by alternating minimization: we minimize over \(\theta\) with \(\alpha\) held fixed, and then over \(\alpha\) with \(\theta\) held fixed. The \(\theta\) sub-problem is solved using the l1-ls solver, one of the popular solvers in the literature. The \(\alpha\) sub-problem admits the closed-form update
\[
\alpha = V^T (U\theta - \mu),
\]
since the columns of \(V\) are orthonormal.
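The closed-form update for the eigen-coefficients is a one-line projection onto the eigenspace. A minimal NumPy sketch (names are our own):

```python
import numpy as np

def update_eigen_coefficients(x_est, mu, V):
    """Closed-form update for the eigen-coefficients alpha.

    For orthonormal principal components (columns of V), the minimiser of
    ||x_est - (V @ alpha + mu)||_2^2 over alpha is the projection of the
    centred slice estimate onto the eigenspace: alpha = V^T (x_est - mu).
    """
    return V.T @ (x_est - mu)
```

Because the eigenvectors are orthonormal, no linear system needs to be solved; the update is just a matrix-vector product.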
An optimal value of the prior weight is chosen empirically a priori, based on reconstructions of one of the template slices. The variation in relative MSE (relative mean squared error) for different weights of the template-based prior is shown in figure 2.
IV Comparison with patch-based techniques
We have compared the performance of the proposed PCA-based global prior with a patch-based dictionary prior, both within CS recovery routines. To build the patch-based dictionary, we use the K-SVD algorithm. The template slices are divided into patches and an overcomplete dictionary is learned from the extracted patches; K-SVD aims to represent all the patches sparsely using this dictionary. Once the dictionary is computed offline, the reconstruction of a new test slice is performed by minimizing the following objective function:
\[
\min_{\theta,\{\alpha_i\}}\; \|y - \Phi U \theta\|_2^2 + \lambda_1 \|\theta\|_1 + \lambda_2 \sum_{i=1}^{M} \|P_i U \theta - D\alpha_i\|_2^2 + \lambda_3 \sum_{i=1}^{M} \|\alpha_i\|_1,
\]
where \(D\) is the learned dictionary, \(M\) is the total number of patches, \(P_i\) is the operator that extracts the \(i\)-th patch from the intermediate reconstructed slice, and \(\alpha_i\) are the sparse codes of the patches. The reconstruction is performed by alternating minimization over \(\theta\) and over each \(\alpha_i\), both using the l1-ls solver. This method requires prior tuning of three parameters, \(\lambda_1\), \(\lambda_2\) and \(\lambda_3\), which is done by reconstructing one of the template slices a priori.
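The per-patch sparse-coding step of such a dictionary scheme can be illustrated with Orthogonal Matching Pursuit (OMP), a common greedy sparse coder. This is not the solver the paper uses (its per-patch sub-problems are solved with l1-ls), and the K-SVD training itself is omitted; the sketch only shows how one patch is coded against a fixed dictionary:

```python
import numpy as np

def omp(D, p, sparsity):
    """Sparse-code one patch p against dictionary D (columns are unit-norm
    atoms) by Orthogonal Matching Pursuit, an illustrative stand-in for the
    l1-based per-patch solves in the paper."""
    residual = p.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(sparsity):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit a least-squares solution on the selected atoms
        sol, *_ = np.linalg.lstsq(D[:, support], p, rcond=None)
        residual = p - D[:, support] @ sol
    coeffs[support] = sol
    return coeffs
```

In a full reconstruction loop, each extracted patch of the intermediate slice would be coded this way, and the coded patches would feed the next update of the slice estimate.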
V Experiments and Results
We have built the template set using 5-7 slices from one volume of each dataset. Using more slices may not necessarily help if a subset suffices to represent all the variations that the test slice could possess. The chosen test slices are similar, but not identical, to the template slices. However, in our experiments so far, the templates and test slices have been chosen from the same volume. Figures 0(a), 5(a), 7(a), 9(a) and 11(a) show the chosen template sets for each of the five datasets, and figures 0(b), 5(b), 7(b), 9(b) and 11(b) show the corresponding test slices. To build the patch-based dictionary, we chose one patch size for the pelvis, colon, shoulder and humerus datasets and a different patch size for the walnut dataset. In all the datasets, i.i.d. Gaussian noise with standard deviation equal to a fixed percentage of the mean of the projections was added to the measurements before reconstruction.
The global PCA-based method has been compared with the following methods: (a) filtered backprojection (using the ‘Cosine’ filter); (b) the plain CS technique, which uses only the sparsity of the DCT coefficients (i.e. with the prior weights set to zero); and (c) plain CS with the DCT prior plus a patch-based dictionary. The visual reconstruction results are shown in figures 5, 7, 9, 11 and 13, and the relative MSE and Structural SIMilarity (SSIM) index of the results are shown in tables II, III, IV, V and VI. Relative MSE refers to the relative mean squared error, computed over all pixels of the slice. SSIM is a better evaluation metric that computes the similarity in shape and structure of the contents of one image with respect to another.
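For reference, relative MSE can be computed as below; since the paper's exact normalisation was lost in extraction, this sketch assumes the common convention of normalising the squared error by the energy of the ground-truth slice:

```python
import numpy as np

def relative_mse(x_rec, x_true):
    """Relative mean squared error between a reconstruction and the ground
    truth: ||x_rec - x_true||_2^2 / ||x_true||_2^2 (normalisation by the
    ground-truth energy is an assumption, not taken from the paper)."""
    x_rec = np.asarray(x_rec, dtype=float)
    x_true = np.asarray(x_true, dtype=float)
    return np.sum((x_rec - x_true) ** 2) / np.sum(x_true ** 2)
```

A value of 0 indicates a perfect reconstruction, while a value near 1 means the error is as large as the signal itself.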
It is evident that the PCA-based prior (coupled within the CS framework) works best. The patch-based dictionary does not contribute much improvement over the plain CS technique. In previous work, the performance of patch-based dictionaries had not been compared with plain CS reconstruction, as has been done here.
All experiments were performed in MATLAB on a 14-core Xeon CPU at 2.6 GHz with 32 GB of RAM. Table VII shows the approximate time taken by the training and reconstruction phases, for a single slice reconstructed from measurements along 12 angles with added noise. Computationally, the global-prior-based method is much faster than the patch-based dictionary method during both the offline prior-generation process and the online reconstruction process.
|Method||Relative MSE||SSIM|
|Plain Compressive Sensing||0.7724||0.0230|
|Patch based dictionary prior||0.8511||0.0103|
|Global PCA based prior||0.0172||0.5150|
|Method||Relative MSE||SSIM|
|Plain Compressive Sensing||0.5266||0.0261|
|Patch based dictionary prior||0.7951||0.0192|
|Global PCA based prior||0.0280||0.1783|
|Method||Relative MSE||SSIM|
|Plain Compressive Sensing||0.2969||0.0743|
|Patch based dictionary prior||0.2839||0.0777|
|Global PCA based prior||0.2718||0.0759|
|Method||Relative MSE||SSIM|
|Plain Compressive Sensing||0.3131||0.0438|
|Patch based dictionary prior||0.0975||0.0656|
|Global PCA based prior||0.0274||0.1366|
|Method||Relative MSE||SSIM|
|Plain Compressive Sensing||0.3333||0.0428|
|Patch based dictionary prior||0.1154||0.0627|
|Global PCA based prior||0.0219||0.1389|
|Phase||Global PCA based prior||Patch based dictionary|
|Training||5 sec.||3 hrs.|
|Testing||5 min.||20 min.|
VI Conclusion and Future Work
With the aim of reconstructing data from exceedingly sparse measurements, we propose a PCA-based global prior within a Compressed Sensing framework. We have validated our method on 2D slices of multiple, diverse datasets. When sparse measurements at a small fraction of the full Nyquist sampling rate are used, our results are significantly better than the state of the art. In the future, the proposed method needs to be extended to the reconstruction of test slices from different subject volumes, and to three-dimensional datasets.
-  D.L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, April 2006.
-  Preeti Gopal, Sharat Chandran, Imants Svalbe, and Ajit Rajwade, “Multi-slice tomographic reconstruction: To couple or not to couple,” in Proceedings of the Tenth Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP ’16), New York, NY, USA, 2016, pp. 85:1–85:7, ACM.
-  Guang-Hong Chen, Jie Tang, and Shuai Leng, “Prior image constrained compressed sensing (PICCS): A method to accurately reconstruct dynamic CT images from highly undersampled projection data sets,” Medical Physics, vol. 35, no. 2, pp. 660–663, 2008.
-  J. Liu, Y. Hu, J. Yang, Y. Chen, H. Shu, L. Luo, Q. Feng, Z. Gui, and G. Coatrieux, “3D feature constrained reconstruction for low dose CT imaging,” IEEE Transactions on Circuits and Systems for Video Technology, vol. PP, no. 99, pp. 1–1, 2016.
-  Q. Xu, H. Yu, X. Mou, L. Zhang, J. Hsieh, and G. Wang, “Low-dose X-ray CT reconstruction via dictionary learning,” IEEE Transactions on Medical Imaging, vol. 31, no. 9, pp. 1682–1697, Sept 2012.
-  Ying Song, Zhen Zhu, Yang Lu, Qiegen Liu, and Jun Zhao, “Reconstruction of magnetic resonance imaging by three-dimensional dual-dictionary learning,” Magnetic Resonance in Medicine, vol. 71, no. 3, pp. 1285–1298, 2014.
-  Jiansen Li, Ying Song, Zhen Zhu, and Jun Zhao, “Highly undersampled MR image reconstruction using an improved dual-dictionary learning method with self-adaptive dictionaries,” Medical & Biological Engineering & Computing, pp. 1–16, 2016.
-  Lior Weizman, Yonina C. Eldar, and Dafna Ben Bashat, “Compressed sensing for longitudinal MRI: An adaptive-weighted approach,” Medical Physics, vol. 42, no. 9, pp. 5195–5208, 2015.
-  Emmanuel J. Candès, Justin Romberg, and Terence Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
-  H. Van Luong, J. Seiler, A. Kaup, and S. Forchhammer, “Sparse signal reconstruction with multiple side information using adaptive weights for multiview sources,” in 2016 IEEE International Conference on Image Processing (ICIP), Sept 2016, pp. 2534–2538.
-  Z. Zhan, J. F. Cai, D. Guo, Y. Liu, Z. Chen, and X. Qu, “Fast multiclass dictionaries learning with geometrical directions in MRI reconstruction,” IEEE Transactions on Biomedical Engineering, vol. 63, no. 9, pp. 1850–1861, Sept 2016.
-  “Walnut CT dataset,” https://www.uni-muenster.de/Voreen/, last viewed–April, 2017.
-  K. Koh, S.-J. Kim, and S. Boyd, “l1-ls: Simple matlab solver for l1-regularized least squares problems,” https://stanford.edu/~boyd/l1_ls/, last viewed–July, 2016.
-  “CT Colonography,” https://idash.ucsd.edu/data-collections, last viewed–April, 2017.
-  “Humerus CT dataset,” http://isbweb.org/data/vsj/humeral/, last viewed–July, 2016.
-  “Pelvis CT dataset,” https://medicine.uiowa.edu/, last viewed–June, 2017.
-  “Shoulder CT dataset,” https://medicine.uiowa.edu/, last viewed–June, 2017.
-  Zhou Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, April 2004.