X-ray computed tomography (CT) is one of the most powerful clinical imaging tools, delivering high-quality images in a fast and cost-effective manner. However, X-rays are harmful to the human body, so many studies have been conducted to develop methods that reduce the X-ray dose. Specifically, the X-ray dose can be reduced by reducing the number of photons, the number of projection views, or the size of the field-of-view of the X-rays. Among these, the CT technique that reduces the field-of-view of the X-rays is called interior tomography. Interior tomography is useful when the region-of-interest (ROI) within a patient's body is small (such as the heart), because it aims to obtain an ROI image by irradiating only the ROI with X-rays. Interior tomography not only dramatically reduces the X-ray dose, but also has cost benefits thanks to the use of a small-sized detector. However, analytic CT reconstruction algorithms generally produce images with severe artifacts due to the transverse directional projection truncation.
Sinogram extrapolation is a simple approximation method to reduce these artifacts. However, the sinogram extrapolation method still generates biased CT numbers in the reconstructed image. Recently, Katsevich et al
proved general uniqueness results for the interior problem and provided stability estimates. Using the total variation (TV) penalty, the authors in  showed that a unique reconstruction is possible if the images are piecewise smooth. In a series of papers [3, 4], our group has shown that a generalized L-spline along a collection of chord lines passing through the ROI can be uniquely recovered; furthermore, we substantiated that the high-frequency signal can be recovered analytically thanks to the Bedrosian identity, whereas the computationally expensive iterative reconstruction needs to be performed only to reconstruct the low-frequency part of the signal after downsampling. While this approach significantly reduces the computational complexity of the interior reconstruction, the computational complexity of existing iterative reconstruction algorithms still prohibits their routine clinical use.
In recent years, deep learning algorithms using convolutional neural networks (CNNs) have been successfully used for low-dose CT [5, 6], sparse-view CT [7, 8], etc. However, the more impressive empirical results we observe in CT problems, the more unanswered questions we encounter. In particular, one of the most critical questions for biomedical applications is whether deep learning-based CT creates artificial structures that may mislead radiologists in their clinical decisions. Fortunately, in a recent theory of deep convolutional framelets , we showed that the success of deep learning comes not from the magical power of a black box, but rather from the power of a novel signal representation using non-local bases combined with data-driven local bases. Thus, a deep network is a natural extension of classical signal representation theory such as wavelets, frames, etc.; rather than creating new information, it attempts to extract the most information out of the input data using the optimal signal representation.
Inspired by these findings, here we propose a deep learning framework for the interior tomography problem. Specifically, we demonstrate that the interior tomography problem can be formulated as an end-to-end reconstruction problem under constraints that remove the null space signal components of the truncated Radon transform. Numerical results confirm that the proposed deep learning architecture outperforms existing interior tomography methods in both image quality and reconstruction time.
II-A Problem Formulation
Here, we consider the 2-D interior tomography problem and follow the notation in . The variable $\theta \in S^{1}$ denotes a vector on the unit sphere. The collection of vectors that are orthogonal to $\theta$ is denoted as $\theta^{\perp} = \{ y \in \mathbb{R}^{2} : y \cdot \theta = 0 \}$. We refer to real-valued functions in the spatial domain as images and denote them as $f(x)$ for $x \in \mathbb{R}^{2}$. We denote the Radon transform of an image $f$ as
$$ Rf(\theta, s) = \int_{\theta^{\perp}} f(s\theta + y)\, dy, $$
where $\theta \in S^{1}$ and $s \in \mathbb{R}$. The local Radon transform for the truncated field-of-view is the restriction of $Rf$ to the region $|s| < \mu$, which is denoted as $T_{\mu} f$. Then, the interior reconstruction problem is to find the unknown image $f$ within the ROI $\{x : \|x\| < \mu\}$ from $T_{\mu} f$.
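As a toy numerical illustration (not from the original experiments), the truncated Radon transform can be simulated by forward projecting a phantom and keeping only the central detector bins. The phantom, grid size, and angle sampling below are arbitrary choices for the sketch:

```python
import numpy as np
from scipy.ndimage import rotate

def radon(image, angles):
    """Simple parallel-beam Radon transform: rotate, then sum along columns.
    Returns a sinogram of shape (len(angles), image_width)."""
    return np.stack([
        rotate(image, -ang, reshape=False, order=1).sum(axis=0)
        for ang in angles
    ])

def truncate(sinogram, n_keep):
    """Keep only the n_keep central detector bins, simulating a small FOV."""
    n = sinogram.shape[1]
    lo = (n - n_keep) // 2
    return sinogram[:, lo:lo + n_keep]

# Disk phantom lying inside the FOV.
n = 64
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
phantom = (x**2 + y**2 < 0.2).astype(float)

angles = np.linspace(0.0, 180.0, 90, endpoint=False)
full_sino = radon(phantom, angles)     # shape (90, 64)
trunc_sino = truncate(full_sino, 30)   # shape (90, 30): truncated projections
```

The truncated sinogram retains only the line integrals passing near the center of the object, which is precisely what makes the analytic inversion ill-posed.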
II-B Null Space of Truncated Radon Transform
The main technical difficulty of the interior reconstruction is the existence of a null space [3, 10]. To analyze the null space, we follow the mathematical analysis in . Specifically, the analytic inversion of the truncated Radon transform can be equivalently represented using differentiated backprojection followed by the truncated Hilbert transform along chord lines, so we analyze the interior reconstruction problem to take advantage of this. More specifically, if the unit vector $e_{1}$ along a chord line is set as a coordinate axis, then we can find the unit vector $e_{2}$ such that $\{e_{1}, e_{2}\}$ constitutes a basis for the local coordinate system, and $(u, v)$ denotes the corresponding coordinate values (see Fig. 1). Along each chord line indexed by $v$, the null space signals take the form
$$ f_{\mathrm{null}}(u, v) = \frac{1}{\pi} \int_{|s| \geq \mu} \frac{\psi(s, v)}{u - s}\, ds $$
for some functions $\psi(\cdot, v)$ supported outside the field-of-view. A typical example of a null space image is illustrated in Fig. 2. This is often called the cupping artifact. Cupping artifacts reduce contrast and interfere with clinical diagnosis.
Note that the null space signal is differentiable to any order, thanks to the removal of the origin in the integrand. Accordingly, an interior reconstruction algorithm needs an appropriate regularization term that suppresses the null space component by exploiting this smoothness. Specifically, one could find an analysis transform whose null space is composed of entire functions and use it in an analysis-based regularization term. For example, the TV  and L-spline [3, 4] regularizations correspond to this. The main result on perfect reconstruction in  is then stated as follows: if the null space component is equivalent to a signal within the ROI, then it is identically zero, due to the characterization of Hilbert transform pairs as boundary values of analytic functions on the upper half of the complex plane ; hence, TV or L-spline regularization provides the unique solution.
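The smoothness of null space signals can be illustrated with a small numerical sketch (our own toy example, with an arbitrary bump function $\psi$ and a FOV $|u| < 1$, not taken from the paper): a Hilbert-type integral of a function supported outside the FOV is bounded and slowly varying inside it, which is exactly the smooth cupping-type bias.

```python
import numpy as np

# FOV along a chord line: |u| < 1; psi supported outside it, on s in [1, 3].
u = np.linspace(-0.9, 0.9, 181)
s = np.linspace(1.0, 3.0, 2000)
ds = s[1] - s[0]
psi = np.exp(-((s - 2.0) ** 2))        # arbitrary smooth bump outside the FOV

# Null-space-type signal: the kernel 1/(u - s) is never singular here,
# since |u| < 1 <= s, so f_null is infinitely differentiable inside the FOV.
f_null = (psi[None, :] / (u[:, None] - s[None, :])).sum(axis=1) * ds / np.pi
```

Here `f_null` is finite, strictly negative (since $u - s < 0$ everywhere on the support), and slowly varying across the FOV: a smooth bias rather than a sharp structure.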
II-C CNN-based Null Space Removal
Instead of designing a linear operator whose common null space with the truncated Radon transform is zero, we can design a frame $W$ and its dual $\tilde{W}$ satisfying $\tilde{W}^{\top} W = I$ such that the resulting frame-based processing annihilates the null space component while preserving the ground-truth image. This frame-based regularization is also an active field of research for image denoising, inpainting, etc. .
One of the most important contributions of the deep convolutional framelet theory  is that $W$ and $\tilde{W}$ correspond to the encoder and decoder structure of a CNN, respectively, and that the shrinkage operator emerges from controlling the number of filter channels and the nonlinearities. Accordingly, a convolutional neural network, denoted by $Q$, can be designed such that
Then, our interior tomography algorithm is formulated as finding the solution of the following problem:
where $f^{*}$ denotes the ground-truth data available for training, and $Q$ denotes the CNN satisfying (2). Now, by defining $R^{\dagger}$ as a right-inverse of the truncated Radon transform $T_{\mu}$, i.e. $T_{\mu} R^{\dagger} g = g$, we have
and the data fidelity constraint is automatically satisfied due to the definition of the right inverse. Therefore, the neural network training problem to satisfy (4) can be equivalently represented by
where the training data set is composed of ground-truth images and their truncated projections. A typical example of a right inverse for the truncated Radon transform is the inverse Radon transform, which can be implemented by the filtered backprojection (FBP) algorithm. Thus, $R^{\dagger}$ in (5) can be implemented using the FBP.
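The role of the right inverse can be sketched with a toy linear analogue (a random fat matrix standing in for the truncated Radon transform; none of the sizes or operators below are the actual CT setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the truncated Radon transform: a fat matrix A with more
# unknowns than measurements, so A has a nontrivial null space.
m, n = 10, 16
A = rng.standard_normal((m, n))

# The Moore-Penrose pseudo-inverse is one right inverse: A @ A_pinv = I_m.
A_pinv = np.linalg.pinv(A)
assert np.allclose(A @ A_pinv, np.eye(m))

# For data g = A x, the estimate x_hat = A_pinv @ g reproduces the
# measurements exactly, and so does x_hat plus any null-space correction.
x_true = rng.standard_normal(n)
g = A @ x_true
x_hat = A_pinv @ g
assert np.allclose(A @ x_hat, g)
```

Because $A A^{\dagger} = I$, any reconstruction of the form $A^{\dagger} g$ plus a null-space correction reproduces the data exactly, which is why the data fidelity constraint holds by construction in the formulation above.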
After the neural network is trained, the inference is done simply by processing the FBP reconstruction from the truncated Radon data with the neural network $Q$, i.e. $\hat{f} = Q(R^{\dagger} g)$. The details of the network and the training procedure are discussed in the following section.
III-A Data Set
Ten subject data sets from the AAPM Low-Dose CT Grand Challenge were used in this paper. Out of the ten sets, eight were used for network training; the other two were used for validation and testing, respectively. The provided data sets were originally acquired with helical CT and were rebinned from helical CT to angular-scan fan-beam CT. Artifact-free CT images were reconstructed from the rebinned fan-beam CT data using the FBP algorithm. From each CT image, a sinogram is numerically obtained using a forward projection operator. The number of detectors in the numerical experiment is 736. Only the 350 detectors in the middle of the 736 detectors are used to simulate the truncated projection data. Using this, we reconstruct ROI images.
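The detector truncation described above amounts to simple index slicing of the sinogram array. A minimal sketch (the number of views, 720, is a hypothetical value not stated in the text):

```python
import numpy as np

n_det, n_keep = 736, 350
lo = (n_det - n_keep) // 2            # first kept detector index: 193
hi = lo + n_keep                      # one past the last kept index: 543

# Hypothetical fan-beam sinogram: (views x detectors).
sino = np.random.rand(720, n_det)
trunc = sino[:, lo:hi]                # central 350 detector channels only
```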
III-B Network Architecture
The proposed network is shown in Fig. 3. The first layer is the FBP layer, which reconstructs the cupping-artifact-corrupted images from the truncated projection data; it is followed by a modified U-Net architecture . A yellow arrow in Fig. 3 denotes the basic operator, and an average pooling operator is located between the stages. The average pooling operator doubles the number of channels and reduces the size of the layers by a factor of four. Conversely, a blue arrow denotes an average unpooling operator, which halves the number of channels and increases the size of the layer by a factor of four. A violet arrow denotes the skip and concatenation operator. A green arrow denotes the simple convolution operator generating the final reconstruction image.
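The pooling/unpooling shape bookkeeping can be sketched in a few lines (a shape-only illustration with hypothetical channel counts and image sizes; in the actual network, the channel doubling comes from the convolutions surrounding the pooling step):

```python
import numpy as np

def avg_pool2(x):
    """2x2 average pooling on a (C, H, W) array: halves H and W (size / 4)."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def avg_unpool2(x):
    """2x2 average unpooling: repeats each pixel, quadrupling the size."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

x = np.random.rand(64, 128, 128)      # hypothetical 64-channel stage
p = avg_pool2(x)
assert p.shape == (64, 64, 64)        # spatial size reduced by four
up = avg_unpool2(p)
assert up.shape == (64, 128, 128)     # size restored on the decoder side
```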
III-C Network Training
The proposed network was implemented using the MatConvNet toolbox in the MATLAB R2015a environment. The processing units used in this research were an Intel Core i7-7700 central processing unit and a GTX 1080-Ti graphics processing unit. Stochastic gradient descent was used to train the network. As shown in Fig. 3, the input of the network is the truncated projection data. The target data corresponds to the 256$\times$256 center ROI image cropped from the ground-truth data. The number of epochs was 300. The initial learning rate was , which gradually dropped to . The regularization parameter was . Training took about 24 hours.
We compared the proposed method with existing iterative methods: the TV-penalized reconstruction  and the L-spline-based multi-scale regularization method by Lee et al . Fig. 4 shows the ground-truth images and the reconstruction results of FBP, TV, the Lee method , and the proposed method. The graphs in the bottom row of Fig. 4 are the cross-section views along the white lines on each image. Fig. 5 shows the magnitude of the difference images between the ground-truth image and the reconstruction results of each method. The reconstructed images and the cut-view graphs in Fig. 4 show that the proposed method preserves more fine details than the other methods. The error images in Fig. 5 confirm that high-frequency components such as edges and textures are better restored by the proposed method than by the other methods.
We also calculated the average values of the peak signal-to-noise ratio (PSNR) and the normalized mean square error (NMSE) in Table I. The proposed method achieved the highest PSNR and the lowest NMSE, with about  dB improvement. The computational times for TV, the Lee method , and the proposed method were 1.8272 s, 0.3438 s, and 0.0532 s per slice reconstruction, respectively. The proposed method is thus about 34 times faster than the TV method and about 6 times faster than the Lee method .
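For reference, the two metrics can be computed as follows (a standard formulation; the paper does not spell out its PSNR peak convention, so taking the reference maximum as the peak is an assumption here):

```python
import numpy as np

def psnr(x, ref):
    """Peak signal-to-noise ratio in dB, peak taken from the reference image."""
    mse = np.mean((x - ref) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)

def nmse(x, ref):
    """Normalized mean square error: ||x - ref||^2 / ||ref||^2."""
    return np.sum((x - ref) ** 2) / np.sum(ref ** 2)

# A constant 0.01 offset on a unit-peak image gives MSE = 1e-4, i.e. 40 dB.
ref = np.linspace(0.0, 1.0, 256).reshape(16, 16)
noisy = ref + 0.01
print(psnr(noisy, ref))   # -> 40.0 (dB)
```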
TABLE I: Quantitative comparison of FBP, TV, the Lee method , and the proposed method.
In this paper, we proposed a deep learning network for the interior tomography problem. The reconstruction problem was formulated as a constrained optimization problem under data fidelity and null space constraints. Based on the theory of deep convolutional framelets, the null space constraint was implemented using a convolutional neural network with an encoder-decoder architecture. Numerical results showed that the proposed method achieves the highest PSNR, the lowest NMSE, and the fastest computational time among the compared methods.
The authors would like to thank Dr. Cynthia McCollough, the Mayo Clinic, the American Association of Physicists in Medicine (AAPM), and grants EB01705 and EB01785 from the National Institute of Biomedical Imaging and Bioengineering for providing the Low-Dose CT Grand Challenge data set. This work is supported by the National Research Foundation of Korea, grant number NRF-2016R1A2B3008104. Yoseob Han and Jawook Gu contributed equally to this work.
-  E. Katsevich, A. Katsevich, and G. Wang, “Stability of the interior problem with polynomial attenuation in the region of interest,” Inverse problems, vol. 28, no. 6, p. 065022, 2012.
-  H. Yu and G. Wang, “Compressed sensing based interior tomography,” Physics in Medicine and Biology, vol. 54, no. 9, p. 2791, 2009.
-  J. P. Ward, M. Lee, J. C. Ye, and M. Unser, “Interior tomography using 1D generalized total variation – part I: mathematical foundation,” SIAM Journal on Imaging Sciences, vol. 8, no. 1, pp. 226–247, 2015.
-  M. Lee, Y. Han, J. P. Ward, M. Unser, and J. C. Ye, “Interior tomography using 1D generalized total variation – part II: Multiscale implementation,” SIAM Journal on Imaging Sciences, vol. 8, no. 4, pp. 2452–2486, 2015.
-  E. Kang, J. Min, and J. C. Ye, “A deep convolutional neural network using directional wavelets for low-dose x-ray CT reconstruction,” Medical physics, vol. 44, no. 10, 2017.
-  H. Chen, Y. Zhang, M. K. Kalra, F. Lin, Y. Chen, P. Liao, J. Zhou, and G. Wang, “Low-dose CT with a residual encoder-decoder convolutional neural network,” IEEE transactions on medical imaging, vol. 36, no. 12, pp. 2524–2535, 2017.
-  Y. Han, J. Yoo, and J. C. Ye, “Deep residual learning for compressed sensing CT reconstruction via persistent homology analysis,” arXiv preprint arXiv:1611.06391, 2016.
-  K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Transactions on Image Processing, vol. 26, no. 9, pp. 4509–4522, 2017.
-  J. C. Ye, Y. S. Han, and E. Cha, “Deep convolutional framelets: A general deep learning framework for inverse problems,” arXiv preprint arXiv:1707.00372, 2017.
-  A. Katsevich and A. Tovbis, “Finite Hilbert transform with incomplete data: null-space and singular values,” Inverse Problems, vol. 28, no. 10, p. 105006, 2012.
-  J.-F. Cai, R. H. Chan, and Z. Shen, “A framelet-based image inpainting algorithm,” Applied and Computational Harmonic Analysis, vol. 24, no. 2, pp. 131–149, 2008.
-  O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015, pp. 234–241.