
Learning from past scans: Tomographic reconstruction to detect new structures

12/23/2018
by   Preeti Gopal, et al.

The need for tomographic reconstruction from sparse measurements arises when the measurement process is potentially harmful, needs to be rapid, or is uneconomical. In such cases, prior information from previous longitudinal scans of the same or similar objects helps to reconstruct the current object whilst requiring significantly fewer `updating' measurements. However, a significant limitation of all prior-based methods is the possible dominance of the prior over the reconstruction of new localised information that has evolved within the test object. In this paper, we improve the state of the art by (1) detecting potential regions where new changes may have occurred, and (2) effectively reconstructing both the old and new structures by computing regional weights that moderate the local influence of the priors. We have tested the efficacy of our method on synthetic as well as real volume data. The results demonstrate that using weighted priors significantly improves the overall quality of the reconstructed data whilst minimising their impact on regions that contain new information.


1 Introduction

Current research seeks to significantly reduce the number of tomographic measurements required to reconstruct an object with adequate fidelity. Sub-Nyquist sampling requires prior information or assumptions about the object's structure. Compressive Sensing (CS) [1] assumes sparsity in the computed image, often via information redundancy that is optimised through some mathematical transform. The measurement set can be reduced further by prior-based techniques [2, 3, 4, 5, 6], which use previously scanned spatial data to reconstruct new volumes from sparse sets of additional measurements. A cost function used in these iterative reconstruction schemes penalizes any dissimilarity between the templates (prior) and the volume to be reconstructed (test). Another class of methods [7, 8] imposes information-theoretic similarity between the prior and the test volume. In all of these methods, it is critical and challenging to choose an optimal representative template. Recent work [9, 10, 11] relaxed this limitation by building dictionary-based and eigenspace-based priors.

[11] showed that the eigenspace-based prior is better able to accommodate the variation in the match of a test volume to a set of templates. The technique assumed the new test volume lies within the space spanned by the eigenvectors of multiple representative templates, effectively capturing the global properties of this set of templates. However, even with that approach, the global prior still imposes an inflexible constant weight when reconstructing the data. This results in inaccurate reconstruction for those critical regions where the test data may differ marginally from all of the prior data. In all prior-based methods the overall reconstruction improves; however, there is a risk of suppressing new features in the test object that may develop over time (see Fig. 1). This scenario is particularly relevant in longitudinal studies, wherein the same object is scanned on multiple occasions to monitor for small changes that may occur over time.

Figure 1: Overview of our method. (a) Reconstruction with existing methods: new structures are not reconstructed accurately. (b) Spatially varying weights map. (c) Our reconstruction.

Contributions: In this work, we use sparse projections of the test volume in a novel way to first detect the regions of the test volume that may differ from all of the priors. These regions are the most likely to be of interest in any longitudinal study. We then compute spatially varying weights to temper the role played by the priors across different regions of the object. With these weights, we blend the advantages of existing reconstruction algorithms to create an optimal reconstruction. Our reconstructions are validated on new, real, biological 3D datasets, and on existing medical datasets.

2 Weighted prior based reconstruction

When an object is scanned multiple times, a set of high quality reconstructions may be chosen as templates for the reconstruction of future scan volumes, which, in turn, may be acquired using far fewer measurements. The eigenspace of the prior templates is pre-computed and used as an orthogonal basis to represent the unknown test volume. While the prior compensates very well for the sparsity of the new measurements, it can dominate regions containing new changes, masking their signal. Our weighted prior-based reconstruction overcomes this limitation by minimizing:

$\|y - \Phi\Psi\theta\|_2^2 \;+\; \lambda_1\|\theta\|_1 \;+\; \lambda_2\,\big\|W \odot \big(\Psi\theta - \mu - \textstyle\sum_i \alpha_i V_i\big)\big\|_2^2 \qquad (1)$

Here, $x = \Psi\theta$ denotes the reconstructed volume, $y$ its measured tomographic projections, $\theta$ the sparse coefficients of $x$, $\Psi$ the basis in which the data are assumed to be sparse, $\Phi$ a matrix modelling the measurement geometry, $\mu$ the mean of the templates, $V_i$ the $i$-th principal component of the set of templates, $\alpha_i$ the eigen coefficients, $\odot$ voxel-wise multiplication, and $\lambda_1$, $\lambda_2$ are tunable weights given to the sparsity and prior terms respectively. The unknowns $\theta$ and $\alpha$ are solved for by alternating minimization; $\theta$ is solved for using the basis pursuit CS solver [12].
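For concreteness, the sketch below shows one way this alternating minimization could be organised in numpy, under the assumption that Eqn. 1 takes the weighted form written above (a data term, an L1 sparsity term, and a $W$-weighted prior term). The dense matrices, the ISTA inner loop standing in for the basis pursuit solver of [12], and all names (weighted_prior_recon, soft_threshold, n_outer, n_ista) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (element-wise soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def weighted_prior_recon(y, Phi, Psi, mu, V, W, lam1, lam2,
                         n_outer=20, n_ista=100):
    """Alternating minimization sketch for the weighted-prior objective.

    y   : measured projections            Phi : measurement geometry matrix
    Psi : sparsifying basis (n_vox x n)   mu  : mean of the templates (n_vox,)
    V   : principal components of the templates (n_vox x K)
    W   : per-voxel prior weights (same shape as the volume)
    """
    w = W.ravel()
    theta = np.zeros(Psi.shape[1])             # sparse coefficients
    alpha = np.zeros(V.shape[1])               # eigen coefficients

    M1 = Phi @ Psi                             # maps theta -> projections
    M2 = Psi * w[:, None]                      # maps theta -> weighted volume
    step = 1.0 / (np.linalg.norm(M1, 2) ** 2 + lam2 * np.linalg.norm(M2, 2) ** 2)

    for _ in range(n_outer):
        # alpha step: weighted least-squares fit of the eigen coefficients
        r = Psi @ theta - mu
        Vw = V * (w ** 2)[:, None]
        alpha = np.linalg.lstsq(V.T @ Vw, Vw.T @ r, rcond=None)[0]

        # theta step: ISTA stand-in for the basis pursuit solver
        b = w * (mu + V @ alpha)               # weighted prior volume
        for _ in range(n_ista):
            grad = M1.T @ (M1 @ theta - y) + lam2 * M2.T @ (M2 @ theta - b)
            theta = soft_threshold(theta - step * grad, step * lam1)

    return (Psi @ theta).reshape(W.shape)      # reconstructed volume x
```

In practice $\Phi$ would be the cone-beam projector of the scanner and the $\theta$ step would be delegated to a dedicated CS solver.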

The key to our method is the discovery of the weights $W$, a matrix with varying weights assigned to individual voxels of the prior. $W$ is first constructed using some preliminary reconstruction methods, after which Eqn. 1 is used to obtain the final reconstruction. In regions of change in the test data, we want lower weights for the prior than in regions that are similar to the prior.

2.1 Computation of the weights matrix $W$

The test volume is unknown to begin with. Hence, it is not possible to know in advance the precise regions within it that are new and different from all the templates. Schematic 1 describes the reasoning behind the procedure used to detect the new regions in the unknown volume. We start with $\tilde{x}$, the initial backprojection reconstruction of the test volume computed with the Feldkamp-Davis-Kress (FDK) algorithm [13], in an attempt to discover the difference between the templates and the test volume.

However, the difference between $\tilde{x}$ and its projection onto the eigenspace will contain the new regions along with many false positives (falsely detected new regions). This is because, while $\tilde{x}$ has many artefacts arising from the sparse measurements, the eigenspace is built from high quality templates. To identify the unwanted artefacts of the imaging geometry, in a counter-intuitive way, we generate low quality reconstructions of the templates.
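As a rough illustration of how such an eigenspace can be built and used, the following numpy sketch flattens the template volumes, computes their principal components via an SVD, and projects a pilot reconstruction onto the resulting subspace. The helper names and the choice of keeping all components are assumptions made for illustration; the paper does not prescribe a particular implementation.

```python
import numpy as np

def build_eigenspace(templates):
    """Mean and principal components of a list of K equally-shaped volumes."""
    X = np.stack([t.ravel() for t in templates])      # K x n_voxels
    mu = X.mean(axis=0)
    # Rows of Vt are orthonormal principal directions of the centred templates.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt

def project_onto_eigenspace(volume, mu, Vt):
    """Orthogonal projection of a volume onto the template eigenspace."""
    v = volume.ravel() - mu
    coeffs = Vt @ v                                    # eigen coefficients
    return (mu + Vt.T @ coeffs).reshape(volume.shape)
```

The difference between a pilot reconstruction and its projection obtained this way is the quantity that Schematic 1 and Section 2.2 operate on.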

Schematic 1: Motivation behind our algorithm. (The plus and minus operators are placeholders; precise details are given in Section 2.2.)

Let the prior $T$ = old regions ($O$).
Let the test volume $x$ = old regions ($O$) + new regions ($N$).

  1. Compute a pilot reconstruction of $x$; call it $\tilde{x}$.
    $\tilde{x} = O + N + a_O + a_N$, where
    $a_O$ denotes the reconstruction artefacts that depend on the old regions, the imaging geometry and the reconstruction method, and
    $a_N$ denotes the reconstruction artefacts that depend on the new regions, the imaging geometry and the reconstruction method.

  2. Note that $\tilde{x} - T$ gives the new regions, but along with many artefacts due to the imaging geometry (sparse views). To eliminate these unwanted artefacts, compute $\tilde{T}$ by simulating projections from $T$ using the same imaging geometry used to scan $x$, and then reconstructing a lower quality prior volume $\tilde{T} = O + a_O$.

  3. Note that $\tilde{x} - \tilde{T} = N + a_N$ contains the artefacts due to the new regions only. These are different for different reconstruction methods. To eliminate these method-dependent artefacts, compute $\tilde{x}$ and $\tilde{T}$ using different reconstruction methods. Let these be denoted by $\tilde{x}^{(m)}$ and $\tilde{T}^{(m)}$ respectively.

  4. Compute the difference $\tilde{x}^{(m)} - \tilde{T}^{(m)}$ for every method $m$.

  5. New regions are obtained by retaining only the differences that persist across all methods.

  6. Finally, assign space-varying weights based on step 5.
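The toy example below illustrates the core cancellation idea of Schematic 1 on a 2D phantom, using scikit-image's parallel-beam radon/iradon as a stand-in for the cone-beam FDK pipeline. The phantom, the 45 views and the variable names are illustrative assumptions only.

```python
import numpy as np
from skimage.transform import radon, iradon

def sparse_recon(image, angles):
    """Simulate sparse-view measurements of `image` and reconstruct them."""
    sino = radon(image, theta=angles)
    return iradon(sino, theta=angles)

angles = np.linspace(0.0, 180.0, 45, endpoint=False)      # sparse views

prior = np.zeros((128, 128)); prior[40:90, 40:90] = 1.0    # "old" structure
test = prior.copy();          test[60:70, 100:110] = 1.0   # adds a "new" region

test_pilot  = sparse_recon(test, angles)    # old + new + geometry artefacts
prior_pilot = sparse_recon(prior, angles)   # old + the same geometry artefacts

naive_diff   = np.abs(test_pilot - prior)        # streaks everywhere
matched_diff = np.abs(test_pilot - prior_pilot)  # new region stands out better
```

Because the prior is pushed through the same sparse geometry before subtraction, matched_diff should be noticeably cleaner around the inserted square than naive_diff, which is exactly the effect the low quality template reconstructions are meant to achieve.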

2.2 Algorithm to find $W$

  1. Perform a preliminary reconstruction $\tilde{x}$ of the test volume using FDK.

  2. Compute low quality template volumes $\tilde{T}_k$. In Schematic 1, for ease of exposition, we assumed a single template. In the sequel, we assume $K$ templates from which we build an eigenspace.

    a. Generate simulated measurements $\tilde{y}_k$ for every template $T_k$, using the exact same projection views and imaging geometry with which the measurements of the test volume were acquired, and

    b. perform a preliminary FDK reconstruction of each template from $\tilde{y}_k$. Let these be denoted by $\tilde{T}_k$.

  3. Build an eigenspace $\tilde{E}$ from $\{\tilde{T}_k\}$. Let $p$ denote the projection of $\tilde{x}$ onto $\tilde{E}$. The difference between $\tilde{x}$ and $p$ will not contain false positives due to the imaging geometry, but will still have false positives due to artefacts that are specific to the reconstruction method used. To resolve this, perform steps 4 and 5.

  4. Project with multiple methods.

    a. Perform pilot reconstructions of the test volume using $M$ different reconstruction algorithms (CS [14], ART [15], SART [16] and SIRT [17]). Let this set be denoted as $\{\tilde{x}^{(m)}\}$, where $m$ is an index for the reconstruction method, and

    b. from $\tilde{y}_k$, perform reconstructions of the templates using the afore-mentioned algorithms, for each of the $K$ templates. Let this set be denoted by $\{\tilde{T}_k^{(m)}\}$, where $k = 1, \dots, K$ and $m = 1, \dots, M$.

    c. For each of the $M$ algorithms (indexed by $m$), build an eigenspace $\tilde{E}^{(m)}$ from $\{\tilde{T}_k^{(m)}\}$.

    d. Next, for each $m$, project $\tilde{x}^{(m)}$ onto $\tilde{E}^{(m)}$. Let this projection be denoted by $p^{(m)}$. To reiterate, this captures those parts of the test volume that lie in the subspace (i.e., are similar to the template reconstructions). The rest, namely the new changes and their reconstruction-method-dependent artefacts, are not captured by this projection and need to be eliminated.

  5. To remove all reconstruction-method-dependent false positives, we retain, at each voxel, only the difference between $\tilde{x}^{(m)}$ and $p^{(m)}$ that is common to every method $m$.

  6. Finally, the weight given to the prior at each voxel coordinate is computed from this combined difference: the larger the difference, the lower the weight.

For each voxel $v$, the weight $W(v)$ must be low whenever the preliminary test reconstruction differs from its projection onto the prior's eigenspace for every method $m$. This is because it is unlikely that every algorithm would produce a significant artefact at the same voxel. A sensitivity parameter $\sigma$ decides how strongly the weights respond to this difference, and hence it depends on the size of the new regions we want to detect. We found that our final reconstruction results obtained by solving Eqn. 1 were robust over a wide range of $\sigma$ values (please refer to the supplementary material, goo.gl/D4YjMQ, for details).
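A compact sketch of how the per-voxel weights might be assembled from the per-method differences is given below. Taking the minimum over methods implements the argument above that it is unlikely every algorithm produces an artefact at the same voxel; the Gaussian fall-off controlled by sigma is one plausible choice of decreasing function, not necessarily the authors' exact formula.

```python
import numpy as np

def prior_weights(pilot_recons, eigen_projections, sigma):
    """Per-voxel prior weights in [0, 1]; low where a new change is likely.

    pilot_recons      : list of pilot test reconstructions, one per method m
    eigen_projections : list of their eigenspace projections, one per method m
    sigma             : sensitivity; larger sigma ignores smaller differences
    """
    diffs = [np.abs(x_m - p_m)
             for x_m, p_m in zip(pilot_recons, eigen_projections)]
    # A voxel is flagged as "new" only if *every* method disagrees with its
    # eigenspace projection there, so keep the smallest difference per voxel.
    d = np.minimum.reduce(diffs)
    return np.exp(-(d / sigma) ** 2)
```

The resulting array would then be used as $W$ when minimizing Eqn. 1.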

Figure 2: Potato 3D dataset: one slice each from the templates (the first three from the left) and a slice from the test volume (extreme right). Notice the appearance of the fourth hole in the test slice.
Figure 3: New regions (shown in lower intensities) detected by different reconstruction methods: (a) FDK, (b) CS, (c) ART, (d) SART, (e) SIRT, (f) our method. These differ because the reconstruction artefacts of the new structures are different for every reconstruction method used.

Figure 4: 3D reconstruction of the potato from 45 projection views: (a) test (ground truth), (b) FDK shows strong streak artefacts and an unclear shadow of the potato, (c) CS is largely blurred, (d) the plain prior detects no new information (the prior dominates), and (e) our method detects the new information while simultaneously reducing streak artefacts.

3 Results

Our algorithm is validated on new scans of biological specimens in a longitudinal setting (these scans and our code will be made available to the community). In all figures, 'plain prior' refers to optimizing Eqn. 1 with a spatially constant $W$.

The first (Potato) dataset consisted of four scans of the humble potato, chosen for its simplicity (Fig. 2). While the first scan was taken of the undistorted potato, subsequent scans were taken of the same specimen, each time after drilling a new hole halfway into it. Projections were obtained using circular cone beam geometry. The specimen was kept in the same position throughout the acquisitions. In cases where this alignment is not present, all the template volumes must be pre-aligned before computing the eigenspace, and the test must be registered to the templates after its preliminary pilot reconstruction; this alignment can be refined further after every update of the reconstructed test volume. The ground truth consists of FDK reconstructions from the full set of measurements acquired over 900 projection views. The test volume was reconstructed using measurements from 45 projection views, i.e., 5% of the projection views from which the ground truth was reconstructed. The selected 3D ground truth of the template volumes, the test volume, and the 3D reconstructions can be viewed in the supplementary material [18]. As seen in Fig. 4, our method reconstructs the new structures while simultaneously reducing streak artefacts. The new regions detected by the different reconstruction methods are shown in Fig. 3; the lower intensities denote new regions, which were assigned lower weights. This ensures that the new regions are reconstructed using the projection measurements alone.
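As noted above, when the specimen cannot be held in the same position across scans, the templates (and the pilot test reconstruction) must first be brought into a common frame. The sketch below handles a purely translational misalignment using phase cross-correlation from scikit-image; real longitudinal data may additionally require rotational or deformable registration, which is not shown, and the helper name is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_to_reference(reference, volumes):
    """Translate each volume so it best matches `reference` (cross-correlation)."""
    aligned = []
    for vol in volumes:
        offset = phase_cross_correlation(reference, vol)[0]  # estimated shift
        aligned.append(nd_shift(vol, offset))
    return aligned
```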

Figure 5: Okra 3D dataset: one slice each from the templates (the first four from the left) and one from the test volume (extreme right). In the regions marked in red and green, all the template slices have deformities while the test has none.

Figure 6: 3D reconstruction of the okra from 45 projection views: (a) test (ground truth), (b) FDK shows strong streak artefacts, (c) CS is blurred, (d) the plain prior detects no new information (the prior dominates, and the deformity from the prior shows up as a false positive), and (e) our method detects the new information (no deformities corresponding to the red and green regions) while simultaneously reducing streak artefacts.

In order to test on data with intricate structures, a second (Okra) dataset consisting of five scans of an okra specimen was acquired (Fig. 5). The projections were obtained with circular cone beam geometry. Prior to the first scan, two holes were drilled in the surface of the specimen. This was followed by four scans, each after introducing one new cut. The ground truth consists of FDK reconstructed volumes from the full set of 450 projection views. The test volume was reconstructed from a partial set of 45 projection views, i.e., 10% of the projection views from which the ground truth was reconstructed. The selected 3D ground truth of the template volumes, the test volume, and the 3D reconstructions can be viewed in the supplementary material [18]. One slice of each reconstructed volume is shown in Fig. 6. The red and green 3D RoIs in the video and images show the regions where new changes are present.

Based on the potato and the okra experiments, we see that our method is able to discover both the presence of a new structure, as well as the absence of a structure.

Figure 7: Sprouts 3D dataset: one slice each from the templates (the first five from the left) and a slice from the test volume (extreme right).

The third (Sprouts) dataset consists of six scans of a sprouts specimen imaged at its various stages of growth (Fig. 7). In contrast to the scientific experiment performed for the case of the okra and the potato where we introduced man-made defects, the changes here are purely the work of nature.

Figure 8: 3D reconstruction of the sprouts from 45 projection views: (a) test (ground truth), (b) FDK and (c) CS have poor detail, (d) the plain prior detects no new information (the prior dominates, as can be seen in the blue and red regions), and (e) our method detects the new information in the regions of interest.

The ground truth consists of FDK reconstructed volumes from a set of 1800 projection views. The test volume was reconstructed from a partial set of 45 projection views, i.e., 2.5% of the projection views from which the ground truth was reconstructed. The selected 3D ground truth of the template volumes, the test volume, and the 3D reconstructions can be viewed in the supplementary material [18]. One slice of each reconstructed volume is shown in Fig. 8. For the sake of exposition, the red and blue regions of interest (RoI) have been culled out from 7 consecutive slices in the 3D volume to indicate new structures; other changes can be viewed in the video.

Dataset   FDK     CS      Plain prior   Our method
Potato    0.744   0.817   0.856         0.857
Okra      0.737   0.836   0.858         0.883
Sprouts   0.852   0.843   0.834         0.881
Table 1: Quantitative SSIM values (a value of 1 is the best possible). For a targeted and clear quantitative comparison, the intricate regions of interest in the okra and the sprouts have been used.

Table 1 shows the improvement in the Structural Similarity Index (SSIM) of the reconstructed new regions compared with the other methods.
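For reference, SSIM values of this kind can be computed on a region of interest with scikit-image; the RoI bounds and the data_range convention below are assumptions, since the paper does not specify them.

```python
from skimage.metrics import structural_similarity

def roi_ssim(ground_truth, reconstruction, roi):
    """SSIM between two volumes restricted to a region of interest.

    roi : tuple of slices, e.g. (slice(30, 60), slice(80, 120), slice(10, 17)),
          selecting the red/green/blue RoI voxels.
    """
    gt, rec = ground_truth[roi], reconstruction[roi]
    return structural_similarity(gt, rec, data_range=gt.max() - gt.min())
```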

4 Conclusions

In a longitudinal study, the reconstruction of localized new information in the data should not be adversely affected by the priors used, given that the new measurements are taken with substantially fewer views (approximately 2.5%–10% of the data). In this work, we have improved the state of the art with a weighted prior-based reconstruction that detects these regions of change and assigns low prior weights wherever necessary. The presence of a 'new region' is detected considerably more reliably by a novel combination of different reconstruction techniques. Our method is general, as shown in Schematic 1, but has been demonstrated on new, real, biological 3D datasets for longitudinal studies. The method is also largely robust to the number of templates used. We urge the reader to see the videos of the reconstructed volumes in the supplementary material [18]. In a medical setting, we note that detection of the new regions would enable irradiation of the patient only in the region of interest by keeping specific detector bins active, thereby reducing the radiation exposure.

References

  • [1] D.L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, April 2006.
  • [2] Guang-Hong Chen, Jie Tang, and Shuai Leng, “Prior image constrained compressed sensing (PICCS): A method to accurately reconstruct dynamic CT images from highly undersampled projection data sets,” Medical Physics, vol. 35, no. 2, pp. 660–663, 2008.
  • [3] G. Chen, P. Theriault-Lauzier, J. Tang, B. Nett, S. Leng, J. Zambelli, Z. Qi, N. Bevins, A. Raval, S. Reeder, and H. Rowley, “Time-resolved interventional cardiac C-arm cone-beam CT: An application of the PICCS algorithm,” IEEE Transactions on Medical Imaging, vol. 31, no. 4, pp. 907–923, April 2012.
  • [4] Meghan G. Lubner, Perry J. Pickhardt, Jie Tang, and Guang-Hong Chen, “Reduced image noise at low-dose multidetector CT of the abdomen with prior image constrained compressed sensing algorithm,” Radiology, vol. 260, pp. 248–256, July 2011.
  • [5] J Webster Stayman, Hao Dang, Yifu Ding, and Jeffrey H Siewerdsen, “PIRPLE: A penalized-likelihood framework for incorporation of prior images in CT reconstruction,” Physics in medicine and biology, vol. 58, no. 21, Nov 2013.
  • [6] J. F. C. Mota, N. Deligiannis, and M. R. D. Rodrigues, “Compressed sensing with prior information: Strategies, geometry, and bounds,” IEEE Transactions on Information Theory, vol. 63, no. 7, pp. 4472–4496, July 2017.
  • [7] Dominique Van de Sompel and Michael Brady, “Robust incorporation of anatomical priors into limited view tomography using multiple cluster modelling of the joint histogram,” 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 1279–1282, 2009.
  • [8] S. Somayajula, C. Panagiotou, A. Rangarajan, Q. Li, S. R. Arridge, and R. M. Leahy, “PET image reconstruction using information theoretic anatomical priors,” IEEE Transactions on Medical Imaging, vol. 30, no. 3, pp. 537–549, March 2011.
  • [9] J. Liu, Y. Hu, J. Yang, Y. Chen, H. Shu, L. Luo, Q. Feng, Z. Gui, and G. Coatrieux, “3D feature constrained reconstruction for low dose CT imaging,” IEEE Transactions on Circuits and Systems for Video Technology, vol. PP, no. 99, pp. 1–1, 2016.
  • [10] Q. Xu, H. Yu, X. Mou, L. Zhang, J. Hsieh, and G. Wang, “Low-dose X-ray CT reconstruction via dictionary learning,” IEEE Transactions on Medical Imaging, vol. 31, no. 9, pp. 1682–1697, Sept 2012.
  • [11] P. Gopal, R. Chaudhry, S. Chandran, I. Svalbe, and A. Rajwade, “Tomographic reconstruction using global statistical priors,” in Digital Image Computing: Techniques and Applications (DICTA), Sydney, Australia, Nov. 2017.
  • [12] K. Koh, S.-J. Kim, and S. Boyd, “l1-ls: Simple matlab solver for l1-regularized least squares problems,” https://stanford.edu/~boyd/l1_ls/, last viewed–July, 2016.
  • [13] L. A. Feldkamp, L. C. Davis, and J. W. Kress, “Practical cone-beam algorithm,” Journal of the Optical Society of America A, vol. 1, no. 6, pp. 612–619, 1984.
  • [14] Robert Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society. Series B (Methodological), vol. 58, no. 1, pp. 267–288, 1996.
  • [15] Richard Gordon, Robert Bender, and Gabor T. Herman, “Algebraic reconstruction techniques (ART) for three-dimensional electron microscopy and X-ray photography,” Journal of Theoretical Biology, vol. 29, no. 3, pp. 471–481, Dec 1970.
  • [16] A.H. Andersen and A.C. Kak, “Simultaneous algebraic reconstruction technique (SART): A superior implementation of the ART algorithm,” Ultrasonic Imaging, vol. 6, no. 1, pp. 81 – 94, 1984.
  • [17] Peter Gilbert, “Iterative methods for the three-dimensional reconstruction of an object from projections,” Journal of Theoretical Biology, vol. 36, no. 1, pp. 105 – 117, 1972.
  • [18] “Supplementary material,” https://www.dropbox.com/sh/mhc7grq9gcy2jws/AABNBpdswfLlz8UCNuxCuXKra?dl=0, last viewed–Oct, 2018.