1 Introduction
Current research seeks to significantly reduce the number of tomographic measurements required to reconstruct an object with adequate fidelity. Sub-Nyquist sampling requires prior information or assumptions about the object shape. Compressive Sensing (CS) [1] assumes that the computed image is sparse in some mathematical transform, exploiting information redundancy. The measurement set can be reduced further by prior-based techniques [2, 3, 4, 5, 6] that utilize previously scanned spatial data to reconstruct new volumes from sparse sets of additional measurements. A cost function used in these iterative reconstruction schemes penalizes any dissimilarity between the templates (prior) and the volume to be reconstructed (test). Another class of methods [7, 8] imposes information-theoretic similarity between the prior and the test volume. In all of these methods, it is critical and challenging to choose an optimal representative template. Recent work [9, 10, 11] relaxed the above limitation by building dictionary-based and eigenspace-based priors.
The work in [11] showed that an eigenspace-based prior better accommodates the variation in the match of a test volume to a set of templates. The technique assumes that the new test volume lies within the space spanned by the eigenvectors of multiple representative templates, effectively capturing the global properties of this set of templates. However, even with that approach, the global prior still imposes an inflexible, constant weight when reconstructing the data. This results in inaccurate reconstruction of those critical regions where the test data differ, even marginally, from all of the prior data. While all prior-based methods improve the overall reconstruction, they risk suppressing new features in the test object that may develop over time (refer Fig. 1). This scenario is particularly relevant in longitudinal studies, wherein the same object is scanned on multiple occasions to monitor small changes that may occur over time.
Contributions: In this work, we use sparse projections of the test volume in a novel way to first detect the regions of the test volume that may differ from all priors. These regions are the most likely to be of interest in any longitudinal study. We then compute spatially varying weights to temper the role played by the priors across different regions of the object. With these weights, we blend the advantages of existing reconstruction algorithms to create an optimal reconstruction. Our reconstructions are validated on new, real, biological 3D datasets and on existing medical datasets.
2 Weighted prior based reconstruction
When an object is scanned multiple times, a set of high quality reconstructions may be chosen as templates for the reconstruction of future scan volumes, which, in turn, may be acquired using far fewer measurements. The eigenspace of the prior templates is precomputed and used as an orthogonal basis to represent the unknown test volume. While the prior compensates very well for the sparsity of the new measurements, it dominates the regions containing new changes, masking their signal. Our weighted prior based reconstruction overcomes this limitation by minimizing:
$J(\theta, \alpha) = \|y - A\Phi\theta\|_2^2 + \lambda_1\|\theta\|_1 + \lambda_2\|W \odot (\Phi\theta - \mu - V\alpha)\|_2^2$   (1)

Here, $x = \Phi\theta$ denotes the reconstructed volume, $y$ its measured tomographic projections, $\theta$ the sparse coefficients of $x$, $\Phi$ the basis in which the data is assumed to be sparse, $A$ a matrix modelling the measurement geometry, $\mu$ the mean of the templates, $V$ the matrix whose columns are the principal components of the set of templates, $\alpha$ the eigen coefficients, and $\lambda_1$, $\lambda_2$ are tunable weights given to the sparsity and prior terms respectively. The unknowns $\theta$ and $\alpha$ are solved for by alternating minimization; $\theta$ is solved for using the basis pursuit CS solver [12].
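As a toy illustration of the alternating minimization described above, the sketch below evaluates a cost of the form in Eqn. 1 and performs the closed-form update of the eigen coefficients. All sizes, matrices, and symbol names ($A$, $\Phi$, $\mu$, $V$, $W$, $\lambda_1$, $\lambda_2$) are stand-ins, not the paper's actual operators:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 32, 4                 # voxels, measurements, eigenvectors (toy sizes)
A = rng.standard_normal((m, n))     # stand-in for the measurement geometry matrix
Phi = np.eye(n)                     # sparsifying basis (identity here for simplicity)
mu = rng.standard_normal(n)         # mean of the templates
V = np.linalg.qr(rng.standard_normal((n, k)))[0]  # orthonormal template eigenvectors
W = np.ones(n)                      # per-voxel prior weights (uniform in this toy)
y = A @ rng.standard_normal(n)      # simulated sparse measurements
lam1, lam2 = 0.1, 1.0               # sparsity and prior term weights

def cost(theta, alpha):
    x = Phi @ theta                                        # reconstructed volume
    return (np.sum((y - A @ x) ** 2)                       # data fidelity
            + lam1 * np.sum(np.abs(theta))                 # sparsity term
            + lam2 * np.sum((W * (x - mu - V @ alpha)) ** 2))  # weighted prior term

# One alternating step: with theta fixed and uniform W, the optimal alpha is the
# least-squares projection of (x - mu) onto the eigenvectors.
theta = np.zeros(n)
alpha = V.T @ (Phi @ theta - mu)
assert cost(theta, alpha) <= cost(theta, np.zeros(k))
```

In a full solver, this closed-form update of $\alpha$ would alternate with a basis pursuit step for $\theta$ (as done with the solver of [12]).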
The key to our method is the discovery of the weights $W$, a matrix with varying weights assigned to individual voxels of the prior. $W$ is first constructed using some preliminary reconstruction methods, following which Eqn. 1 is used to obtain the final reconstruction. In regions of change in the test data, we want lower weights for the prior than in regions that are similar to the prior.
2.1 Computation of weights matrix
The test volume is unknown to begin with. Hence, it is not possible to directly identify the precise regions in it that are new and different from all the templates. Schematic 1 describes the evolution of the procedure used to detect the new regions in the unknown volume. We start with $\hat{T}$, an initial backprojection reconstruction of the test volume computed with the Feldkamp-Davis-Kress (FDK) algorithm [13], in an attempt to discover the difference between the templates and the test volume. However, the difference between $\hat{T}$ and its projection onto the eigenspace will flag the new regions along with many false positives (false new regions). This is because $\hat{T}$ has many artefacts arising from the sparse measurements, whereas the eigenspace is built from high quality templates. To discover the unwanted artefacts of the imaging geometry, in a counter-intuitive way, we generate low quality reconstructions of the templates.
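The benefit of comparing against low quality template reconstructions (rather than the clean templates) can be seen in a toy linear model. The measurement matrix and the pseudo-inverse "reconstruction" below are illustrative stand-ins for the sparse-view geometry and FDK:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 15                              # 50 voxels, only 15 sparse measurements
A = rng.standard_normal((m, n))            # stand-in for the sparse-view geometry
recon = lambda meas: np.linalg.pinv(A) @ meas  # stand-in reconstruction method

template = rng.random(n)                   # the (known, high quality) prior
test = template.copy()
test[10:13] += 1.0                         # test = prior + a small new region

t_low = recon(A @ template)                # low quality template reconstruction
x_pilot = recon(A @ test)                  # pilot test reconstruction
# Same geometry and method -> shared artefacts largely cancel in the difference:
err_matched = np.abs(x_pilot - t_low).sum()
err_naive = np.abs(x_pilot - template).sum()   # vs. the clean template
assert err_matched < err_naive
```

The matched difference isolates the new region far better because both volumes carry the same geometry-induced artefacts, which cancel on subtraction.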
Schematic 1: Motivation behind our algorithm. (The plus and minus operators are placeholders; precise details appear in Section 2.2.)

Let the prior $P$ consist of the old regions ($O$), and let the test volume $T$ consist of the old regions ($O$) plus the new regions ($N$).

1. Compute a pilot reconstruction of $T$; call it $\hat{T}$. Then $\hat{T} = O + N + A_O + A_N$, where $A_O$ denotes the reconstruction artefacts that depend on the old regions, the imaging geometry and the reconstruction method, and $A_N$ denotes the reconstruction artefacts that depend on the new regions, the imaging geometry and the reconstruction method.

2. Note that $\hat{T} - P$ gives the new regions, but along with many artefacts due to the imaging geometry (sparse views). To eliminate these unwanted artefacts, compute $\hat{P}$ by simulating projections from $P$ using the same imaging geometry used to scan $T$, and then reconstructing a lower quality prior volume $\hat{P}$.

3. Note that $\hat{T} - \hat{P}$ contains the artefacts due to the new regions only. These differ between reconstruction methods. To eliminate these method-dependent artefacts, compute $\hat{T}$ and $\hat{P}$ using two different reconstruction methods. Let these be denoted by $(\hat{T}_1, \hat{P}_1)$ and $(\hat{T}_2, \hat{P}_2)$ respectively.

4. Compute $D_1 = \hat{T}_1 - \hat{P}_1$ and $D_2 = \hat{T}_2 - \hat{P}_2$.

5. New regions are obtained by retaining only what is common to $D_1$ and $D_2$, since the method-dependent artefacts are unlikely to coincide.

6. Finally, assign space-varying weights based on step 5.
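The logic of Schematic 1 can be mimicked in a toy 1-D experiment. The signals, noise levels, seeds (emulating method-dependent artefacts), and the detection threshold below are all illustrative stand-ins, not the paper's data:

```python
import numpy as np

old = np.zeros(100); old[20:40] = 1.0   # old regions O, present in prior and test
new = np.zeros(100); new[70:75] = 1.0   # new regions N, present in the test only

def recon(signal, seed):
    # Stand-in for a sparse-view reconstruction: the signal plus
    # method/seed-dependent artefacts.
    rng = np.random.default_rng(seed)
    return signal + 0.05 * rng.standard_normal(signal.size)

# Differences between the pilot test and low quality prior reconstructions,
# computed with two "methods" (different seeds emulate different artefacts).
d1 = np.abs(recon(old + new, 1) - recon(old, 11))
d2 = np.abs(recon(old + new, 2) - recon(old, 12))

# Artefacts of different methods rarely coincide, so keep the per-voxel
# minimum; what survives the threshold is (mostly) the new region.
detected = np.minimum(d1, d2) > 0.5
assert detected[70:75].all()
assert not detected[:60].any()
```

Only the genuinely new voxels survive the intersection of the two difference maps.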
2.2 Algorithm to find $W$

1. Perform a preliminary reconstruction of the test volume using FDK. Let this be denoted by $\hat{T}$.

2. Compute low quality template volumes. In Schematic 1, for ease of exposition, we assumed a single template. In the sequel, we assume $K$ templates $P_1, \ldots, P_K$ from which we build an eigenspace.

(a) Generate simulated measurements $y_k$ for every template $P_k$, using the exact same projection views and imaging geometry with which the measurements of the test volume were acquired, and

(b) perform a preliminary FDK reconstruction of each of the templates from $y_k$. Let these be denoted by $\hat{P}_k$.

3. Build an eigenspace $E$ from $\{\hat{P}_k\}$. Let $\hat{T}_E$ denote the projection of $\hat{T}$ onto $E$. The difference between $\hat{T}$ and $\hat{T}_E$ will not contain false positives due to the imaging geometry, but will contain false positives due to artefacts that are specific to the reconstruction method used. To resolve this, perform steps 4 and 5.

4. Project with multiple methods.

(a) From $y_k$, perform reconstructions of the templates using $R$ different reconstruction algorithms [14, 15, 16, 17], for each of the $K$ templates. Let this set be denoted by $\hat{P}_k^{(r)}$, where $k = 1, \ldots, K$ and $r = 1, \ldots, R$.

(b) For each of the $R$ algorithms (indexed by $r$), build an eigenspace $E_r$ from $\hat{P}_1^{(r)}, \ldots, \hat{P}_K^{(r)}$.

(c) Next, for each $r$, project $\hat{T}$ onto $E_r$. Let this projection be denoted by $\hat{T}_{E_r}$. To reiterate, this captures those parts of the test volume that lie in the subspace $E_r$ (i.e., are similar to the template reconstructions). The rest, i.e., the new changes and their reconstruction-method-dependent artefacts, are not captured by this projection, and the method-dependent artefacts among them must be eliminated.

5. To remove all reconstruction-method-dependent false positives, we compute the voxel-wise difference $d(v) = \min_{r} |\hat{T}(v) - \hat{T}_{E_r}(v)|$.

6. Finally, the weight given to the prior at each voxel coordinate $v$ is a decreasing function of $d(v)$. The weight must be low whenever the preliminary test reconstruction differs from its projection onto the prior's eigenspace for every method $r$, because it is unlikely that every algorithm would produce a significant artefact at the same voxel. A parameter $\sigma$ decides the sensitivity of the weights to the difference, and hence depends on the size of the new regions we want to detect. We found that our final reconstruction results obtained by solving Eqn. 1 were robust over a wide range of $\sigma$ values (please refer to the supplementary material, goo.gl/D4YjMQ, for details).
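The weight computation can be sketched as follows. The minimum-over-methods rule reflects the reasoning above (an artefact rarely appears in every method); the Gaussian fall-off and the value of sigma are illustrative assumptions, not the paper's exact expression:

```python
import numpy as np

def prior_weights(pilot, projections, sigma=0.2):
    # pilot: preliminary (FDK) test reconstruction.
    # projections[r]: its projection onto the eigenspace built with method r.
    # Keep the per-voxel minimum difference: a voxel counts as "new" only if
    # every method flags it, since artefacts rarely agree across methods.
    d = np.minimum.reduce([np.abs(pilot - p) for p in projections])
    return np.exp(-(d / sigma) ** 2)   # assumed Gaussian fall-off

pilot = np.array([0.0, 0.0, 1.0])
# Methods disagree at voxel 2 -> treated as an artefact, full prior weight:
w_disagree = prior_weights(pilot, [pilot, np.zeros(3)])
# Methods agree at voxel 2 -> likely a new region, prior weight suppressed:
w_agree = prior_weights(pilot, [np.zeros(3), np.zeros(3)])
assert w_disagree[2] == 1.0
assert w_agree[2] < 0.01
```

Voxels flagged by all methods receive a weight near zero, so they are reconstructed from the projection measurements alone.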
3 Results
Our algorithm is validated on new scans of biological specimens in a longitudinal setting; these scans and our code will be made available to the community. In all figures, 'plain prior' refers to optimizing Eqn. 1 with all entries of $W$ set to 1.
The first (Potato) dataset consisted of four scans of the humble potato, chosen for its simplicity (Fig. 2). While the first scan was taken of the undistorted potato, subsequent scans were taken of the same specimen, each time after drilling a new hole halfway into it. Projections were obtained using circular cone beam geometry. The specimen was kept in the same position throughout the acquisitions. In cases where this alignment is not available, all the template volumes must be pre-aligned before computing the eigenspace, and the test volume must be registered to the templates after its preliminary pilot reconstruction. Further, this alignment can be refined after every update of the reconstructed test volume. The ground truth consists of FDK reconstructions from the full set of measurements acquired from 900 projection views. The test volume was reconstructed using measurements from 45 projection views, i.e., 5% of the projection views from which the ground truth was reconstructed. The selected 3D ground truth of the template volumes and the test volume, as well as the 3D reconstructions, are shown in the supplementary material [18]. As seen in Fig. 4, our method reconstructs new structures while simultaneously reducing streak artefacts. The new regions detected via different reconstruction methods are shown in Fig. 3. The lower intensities denote new regions, which were assigned lower weights. This ensures that the new regions are reconstructed using the projection measurements alone.
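Where pre-alignment of templates is needed, even a simple correlation-based rigid registration illustrates the idea. This is a hypothetical 1-D sketch; real volumes would require 3-D rigid (or deformable) registration:

```python
import numpy as np

rng = np.random.default_rng(0)
ref = rng.standard_normal(64)              # reference template (1-D stand-in)
moved = np.roll(ref, 5)                    # another acquisition, shifted 5 voxels

# Estimate the integer shift that maximizes correlation with the reference,
# then undo it before building the eigenspace.
corr = [np.dot(ref, np.roll(moved, -s)) for s in range(64)]
shift = int(np.argmax(corr))
aligned = np.roll(moved, -shift)
assert shift == 5
assert np.allclose(aligned, ref)
```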
In order to test on data with intricate structures, a second (Okra) dataset consisting of five scans of an okra specimen was acquired (Fig. 5). The projections were obtained using circular cone beam geometry. Prior to the first scan, two holes were drilled into the surface of the specimen. This was followed by four scans, each after introducing one new cut. The ground truth consists of FDK reconstructed volumes from the full set of 450 projection views. The test volume was reconstructed from a partial set of 45 projections, i.e., 10% of the projection views from which the ground truth was reconstructed. The selected 3D ground truth of the template volumes and the test volume, as well as the 3D reconstructions, are shown in the supplementary material [18]. One of the slices of the reconstructed volumes is shown in Fig. 6. The red and green 3D RoIs in the video and images show the regions where new changes are present.
Based on the potato and the okra experiments, we see that our method is able to discover both the presence of a new structure and the absence of an existing structure.
The third (Sprouts) dataset consists of six scans of a sprouts specimen imaged at various stages of growth (Fig. 7). In contrast to the controlled experiments performed for the okra and the potato, where we introduced man-made defects, the changes here are purely the work of nature.
The ground truth consists of FDK reconstructed volumes from a set of 1800 projection views. The test volume was reconstructed from a partial set of 45 projections, i.e., 2.5% of the projection views from which the ground truth was reconstructed. The selected 3D ground truth of the template volumes and the test volume, as well as the 3D reconstructions, are shown in the supplementary material [18]. One of the slices of the reconstructed volumes is shown in Fig. 8. For the sake of exposition, the red and blue regions of interest (RoI) have been culled out from 7 consecutive slices in the 3D volume to indicate new structures; other changes can be viewed in the video.
Dataset  | FDK   | CS    | Plain prior | Our method
Potato   | 0.744 | 0.817 | 0.856       | 0.857
Okra     | 0.737 | 0.836 | 0.858       | 0.883
Sprouts  | 0.852 | 0.843 | 0.834       | 0.881
Table 1 shows the improvement in the Structural Similarity Index (SSIM) of the reconstructed new regions as compared to the other methods.
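For reference, a simplified single-window SSIM can be computed as below. This global form uses the standard SSIM constants for data in [0, 1]; the scores in Table 1 were presumably computed with the usual windowed variant:

```python
import numpy as np

def ssim_global(a, b, c1=0.01 ** 2, c2=0.03 ** 2):
    # Single-window (global) SSIM: compares means, variances, and covariance.
    ma, mb = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - ma) * (b - mb)).mean()
    return (((2 * ma * mb + c1) * (2 * cov + c2))
            / ((ma ** 2 + mb ** 2 + c1) * (va + vb + c2)))

img = np.random.default_rng(0).random((32, 32))
noisy = np.clip(img + 0.1 * np.random.default_rng(1).standard_normal((32, 32)), 0, 1)
assert abs(ssim_global(img, img) - 1.0) < 1e-9   # identical images score 1
assert ssim_global(img, noisy) < 1.0             # degradation lowers the score
```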
4 Conclusions
In a longitudinal study, the reconstruction of localized new information in the data should not be affected by the priors used, given that the new measurements are taken with substantially fewer views (approximately 2.5%–10% of the data). In this work, we have improved the state of the art with a weighted prior-based reconstruction that detects these regions of change and assigns low prior weights wherever necessary. The detection of a 'new region' is enhanced considerably by a novel combination of different reconstruction techniques. Our method is general, as shown in Schematic 1, but has been demonstrated on new, real, biological 3D datasets for longitudinal studies. The method is also largely robust to the number of templates used. We urge the reader to see the videos of the reconstructed volumes in the supplementary material [18]. In a medical setting, we note that the detection of the new regions would enable irradiating the patient only in the region of interest, by keeping specific detector bins active, thereby reducing the radiation exposure.

References
 [1] D.L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, April 2006.
[2] Guang-Hong Chen, Jie Tang, and Shuai Leng, "Prior image constrained compressed sensing (PICCS): A method to accurately reconstruct dynamic CT images from highly undersampled projection data sets," Medical Physics, vol. 35, no. 2, pp. 660–663, 2008.
[3] G. Chen, P. Theriault-Lauzier, J. Tang, B. Nett, S. Leng, J. Zambelli, Z. Qi, N. Bevins, A. Raval, S. Reeder, and H. Rowley, "Time-resolved interventional cardiac C-arm cone-beam CT: An application of the PICCS algorithm," IEEE Transactions on Medical Imaging, vol. 31, no. 4, pp. 907–923, April 2012.
[4] Meghan G. Lubner, Perry J. Pickhardt, Jie Tang, and Guang-Hong Chen, "Reduced image noise at low-dose multidetector CT of the abdomen with prior image constrained compressed sensing algorithm," Radiology, vol. 260, pp. 248–256, July 2011.
[5] J. Webster Stayman, Hao Dang, Yifu Ding, and Jeffrey H. Siewerdsen, "PIRPLE: A penalized-likelihood framework for incorporation of prior images in CT reconstruction," Physics in Medicine and Biology, vol. 58, no. 21, Nov 2013.
 [6] J. F. C. Mota, N. Deligiannis, and M. R. D. Rodrigues, “Compressed sensing with prior information: Strategies, geometry, and bounds,” IEEE Transactions on Information Theory, vol. 63, no. 7, pp. 4472–4496, July 2017.
 [7] Dominique Van de Sompel and Michael Brady, “Robust incorporation of anatomical priors into limited view tomography using multiple cluster modelling of the joint histogram,” 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 1279–1282, 2009.
 [8] S. Somayajula, C. Panagiotou, A. Rangarajan, Q. Li, S. R. Arridge, and R. M. Leahy, “PET image reconstruction using information theoretic anatomical priors,” IEEE Transactions on Medical Imaging, vol. 30, no. 3, pp. 537–549, March 2011.
 [9] J. Liu, Y. Hu, J. Yang, Y. Chen, H. Shu, L. Luo, Q. Feng, Z. Gui, and G. Coatrieux, “3D feature constrained reconstruction for low dose CT imaging,” IEEE Transactions on Circuits and Systems for Video Technology, vol. PP, no. 99, pp. 1–1, 2016.
[10] Q. Xu, H. Yu, X. Mou, L. Zhang, J. Hsieh, and G. Wang, "Low-dose X-ray CT reconstruction via dictionary learning," IEEE Transactions on Medical Imaging, vol. 31, no. 9, pp. 1682–1697, Sept 2012.
 [11] P. Gopal, R. Chaudhry, S. Chandran, I. Svalbe, and A. Rajwade, “Tomographic reconstruction using global statistical priors,” in Digital Image Computing: Techniques and Applications (DICTA), Sydney, Australia, Nov. 2017.
[12] K. Koh, S.-J. Kim, and S. Boyd, "l1_ls: Simple Matlab solver for l1-regularized least squares problems," https://stanford.edu/~boyd/l1_ls/, last viewed July 2016.
[13] Lee Feldkamp, L. C. Davis, and James Kress, "Practical cone-beam algorithm," J. Opt. Soc. Am. A, vol. 1, pp. 612–619, 1984.
 [14] Robert Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society. Series B (Methodological), vol. 58, no. 1, pp. 267–288, 1996.
[15] Richard Gordon, Robert Bender, and Gabor T. Herman, "Algebraic reconstruction techniques (ART) for three-dimensional electron microscopy and X-ray photography," Journal of Theoretical Biology, vol. 29, no. 3, pp. 471–481, Dec 1970.
[16] A. H. Andersen and A. C. Kak, "Simultaneous algebraic reconstruction technique (SART): A superior implementation of the ART algorithm," Ultrasonic Imaging, vol. 6, no. 1, pp. 81–94, 1984.
[17] Peter Gilbert, "Iterative methods for the three-dimensional reconstruction of an object from projections," Journal of Theoretical Biology, vol. 36, no. 1, pp. 105–117, 1972.
 [18] “Supplementary material,” https://www.dropbox.com/sh/mhc7grq9gcy2jws/AABNBpdswfLlz8UCNuxCuXKra?dl=0, last viewed–Oct, 2018.