I Introduction
X-ray computed tomography (CT) is one of the most powerful clinical imaging tools, delivering high-quality images in a fast and cost-effective manner. However, the X-ray radiation from CT increases the potential risk of cancer to patients, so many studies have been conducted to reduce the X-ray dose. In particular, low-dose X-ray CT technology has been extensively developed by reducing the number of photons, the number of projection views, or the ROI size. Among these, interior tomography aims to obtain an ROI image by irradiating only within the ROI. Interior tomography is useful when the ROI within a patient's body is small (such as the heart). In some applications, interior tomography has the additional benefit of the cost saving from using a small-sized detector. However, the use of an analytic CT reconstruction algorithm generally produces images with severe cupping artifacts due to the transverse-directional projection truncation.
Sinogram extrapolation is a simple but inaccurate approximation method to reduce the artifacts Hsieh et al. (2004). Recently, it has been shown that an ROI image can be reconstructed uniquely from truncated projection data when the intensities of subregions inside the ROI are known a priori Courdurier et al. (2008). Assuming some prior knowledge of the functional space for images, Katsevich et al. Katsevich, Katsevich, and Wang (2012) proved general uniqueness results for the interior problem and provided stability estimates. In Jin et al. Jin et al. (2012), a continuous-domain singular value decomposition of the finite Hilbert transform operator, characterized in Katsevich et al. Katsevich and Tovbis (2012), was used to represent an interior image as a linear combination of the eigenfunctions, after which the null space compensation was performed by using a more general set of prior subregion knowledge. Using the total variation (TV) penalty, Yu et al. Yu and Wang (2009) showed that a unique reconstruction is possible if the images are piecewise smooth. In a series of papers Ward et al. (2015); Lee et al. (2015), our group has also shown that a generalized L-spline along a collection of chord lines passing through the ROI can be uniquely recovered Ward et al. (2015); we have further confirmed that the high-frequency signal can be analytically recovered thanks to the Bedrosian identity, whereas the computationally intensive iterative reconstruction only needs to be performed on the low-frequency part of the signal after downsampling Lee et al. (2015). While this approach significantly reduces the computational complexity of the interior reconstruction, the complexity of this technique, as well as that of most existing iterative reconstruction algorithms, still prohibits their routine clinical use.

In recent years, deep learning algorithms using convolutional neural networks (CNN) have made remarkable success in various applications
Krizhevsky, Sutskever, and Hinton (2012); Ronneberger, Fischer, and Brox (2015); Kang, Min, and Ye (2017); Chen et al. (2017); Han, Yoo, and Ye (2016); Jin et al. (2017). In particular, various deep learning architectures have been successfully used for low-dose CT Kang, Min, and Ye (2017); Kang et al. (2018); Chen et al. (2017), sparse-view CT Han, Yoo, and Ye (2016); Jin et al. (2017); Han and Ye (2018), etc. These deep learning applications surpass the previous iterative methods in image quality and reconstruction time. Moreover, in a recent theory of deep convolutional framelets Ye, Han, and Cha (2018), the authors showed that the success of deep learning comes from the power of a novel signal representation using a non-local basis combined with a data-driven local basis. Thus, a deep network is indeed a natural extension of classical signal representation theory such as wavelets, frames, etc., which is useful for inverse problems.

Inspired by these findings, here we propose deep learning frameworks for the interior tomography problem. One of the most important contributions of this paper is the observation that there are two ways of addressing interior tomography that can be directly translated into two distinct neural network architectures. More specifically, it is well-known that the technical difficulties of interior tomography arise from the existence of the null space of the finite Hilbert transform Katsevich and Tovbis (2012). One way to address this difficulty is a post-processing approach that removes the null space image from the analytic reconstruction. In fact, our preliminary work Han, Gu, and Ye (2018) is the realization of this idea in the neural network domain, where a network was trained to learn the cupping artifacts corresponding to the null space images. On the other hand, a direct inversion can be performed from the truncated DBP data using an inversion formula for the finite Hilbert transform King (2009).
While this approach has been investigated in several pioneering works on interior tomography problems Defrise et al. (2006), its main limitation is that the inversion formula is not unique due to the existence of the null space, and the selection of the optimal parameter for the null space image to ensure uniqueness is intractable. Another novel contribution of this work is therefore a second type of neural network that is designed to invert the finite Hilbert transform from the truncated DBP data by learning the null space parameters and the convolution kernel for the Hilbert transform from the training data.
Although the two neural network approaches appear similar except for their inputs, there are fundamental differences in their generalization capability. The first type of network learns the null space components from the artifact-corrupted input images. Although this approach provides near-perfect reconstruction with a substantial PSNR improvement over existing methods Han, Gu, and Ye (2018), the null space component of the analytic reconstruction contains singularities at the ROI boundary with strong intensity saturation, so a network trained for a particular ROI size does not generalize well to different ROI sizes. On the other hand, the input to the second type of network is the truncated DBP image, which corresponds to the full DBP image on an ROI mask. Therefore, there are no singularities in the DBP images, which allows the network to generalize to different ROI sizes. Numerical results show that while the second type of network outperforms the existing interior tomography techniques for all ROIs in terms of image quality and reconstruction time, the first type of network degrades rapidly if the ROI size differs from the training data.
This paper is structured as follows. In Section II, the basic theory of the differentiated backprojection (DBP) and the Hilbert transform is reviewed, and the interior tomography problem is formally defined, from which two types of neural network architectures are derived. Then, Section III describes the methods used to implement and validate the proposed approach, which is followed by experimental results in Section IV. Conclusions are provided in Section V.
II Theory
For simplicity, we consider the 2D interior tomography problem throughout the paper, but the extension to the 3D problem is straightforward.
II.1 Differentiated Backprojection and Hilbert Transform
The variable $\theta$ denotes a vector on the unit sphere $S^1 \subset \mathbb{R}^2$. The collection of vectors that are orthogonal to $\theta$ is denoted as $\theta^\perp = \{ y \in \mathbb{R}^2 : y \cdot \theta = 0 \}$. We refer to real-valued functions in the spatial domain as images and denote them as $f(x)$ for $x \in \mathbb{R}^2$. We denote the Radon transform of an image $f$ as

$$\mathcal{R}f(\theta, s) = \int_{\theta^\perp} f(s\theta + y)\, dy, \qquad (1)$$

where $s \in \mathbb{R}$ and $\theta \in S^1$. We further define the X-ray transform that maps a function on $\mathbb{R}^2$ into the set of its line integrals:

$$\mathcal{P}f(a, \theta) = \int_0^{\infty} f(a + t\theta)\, dt, \qquad (2)$$
where $a \in \mathbb{R}^2$ refers to the X-ray source location. For a given object function $f$ and a given source trajectory $a(s)$, the differentiated backprojection (DBP) is then computed by Pack and Noo (2005); Zou and Pan (2004a, b):

$$f_{DBP}(x) = \int_{s_1}^{s_2} \frac{1}{\|x - a(s)\|} \left. \frac{\partial}{\partial q} \mathcal{P}f\big(a(q), \theta(s, x)\big) \right|_{q=s} ds, \qquad (3)$$

where $\theta(s, x) = (x - a(s))/\|x - a(s)\|$, $[s_1, s_2]$ denotes the appropriate interval of the source segment between $a(s_1)$ and $a(s_2)$, and $1/\|x - a(s)\|$ denotes the distance weighting.
One of the most important aspects of the DBP formula in (3) is its relation to analytic reconstruction methods. More specifically, let the source trajectory have no discontinuities, and suppose, furthermore, that $x$ is on the line connecting the two source positions $a(s_1)$ and $a(s_2)$. Then, the differentiated backprojection data in (3) can be represented as Pack and Noo (2005); Zou and Pan (2004a, b):

$$f_{DBP}(x) = \int_{S^1} w(\theta) \left. \frac{\partial}{\partial s} \mathcal{R}f(\theta, s) \right|_{s = x \cdot \theta} d\theta, \qquad (4)$$

where

$$w(\theta) = \frac{1}{2}\, \mathrm{sgn}(\theta \cdot e), \qquad (5)$$

$e$ denotes the unit vector along the line connecting the two source positions, and $\mathrm{sgn}(\cdot)$ denotes the signum function.
The line connecting the two source positions is often called a chord line Pack and Noo (2005); Zou and Pan (2004a, b). If the unit vector $e_u$ along the chord line is set as a coordinate axis, then we can find an orthonormal basis $\{e_u, e_v\}$ that constitutes the local coordinate system at $x$ (see Fig. 1). Suppose $(u, v)$ denotes the coordinate value in the new coordinate system composed of $e_u$ and $e_v$. Then, the authors in Pack and Noo (2005); Zou and Pan (2004a, b) showed that (4) can be converted into the following form:

$$f_{DBP}(u, v) = 2\pi\, (\mathcal{H} f_v)(u), \qquad (6)$$

where $f_v(u)$ and $f_{DBP,v}(u) := f_{DBP}(u, v)$ denote the restrictions of $f$ and $f_{DBP}$ to the chord line indexed by $v$, respectively, and $\mathcal{H}$ denotes the Hilbert transform along the chord line,

$$(\mathcal{H} g)(u) = \frac{1}{\pi}\, \mathrm{p.v.}\!\int_{-\infty}^{\infty} \frac{g(s)}{u - s}\, ds.$$

Because $\mathcal{H}^{-1} = -\mathcal{H}$, we have

$$f_v(u) = -\frac{1}{2\pi}\, (\mathcal{H} f_{DBP,v})(u), \qquad (7)$$

which is known as the backprojection filtration (BPF) method that recovers the object on each chord line by taking the Hilbert transform of the DBP data Pack and Noo (2005); Zou and Pan (2004a, b).
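As a quick numerical sanity check of the chord-line relations (6)–(7), the BPF inversion can be reproduced in 1D with an FFT-based Hilbert transform; the grid, window, and test profile below are illustrative choices and not part of the original method:

```python
import numpy as np

def hilbert_fft(g):
    # Discrete Hilbert transform via the Fourier multiplier -i*sign(omega),
    # matching the convention (Hg)(u) = (1/pi) p.v. integral g(s)/(u-s) ds.
    omega = np.fft.fftfreq(g.size)
    return np.real(np.fft.ifft(-1j * np.sign(omega) * np.fft.fft(g)))

# Zero-mean test profile on a chord line (the multiplier discards the
# zero-frequency bin, so a zero-mean profile sidesteps that lost component).
u = np.linspace(-10, 10, 4096, endpoint=False)
f = u * np.exp(-u ** 2)

f_dbp = 2 * np.pi * hilbert_fft(f)           # DBP data on the chord, Eq. (6)
f_rec = -hilbert_fft(f_dbp) / (2 * np.pi)    # BPF inversion, Eq. (7)
err = np.max(np.abs(f_rec - f))
print(err)
```

Up to discretization and periodization, the recovered profile matches the original, reflecting $\mathcal{H}^{-1} = -\mathcal{H}$.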
II.2 Problem Formulation
The measurement of the interior tomography problem is the restriction of the Radon measurement $\mathcal{R}f(\theta, s)$ to the region $|s| < \mu$, where $\mu$ denotes the radius of the ROI. In the DBP domain, this is equivalent to finding the unknown $f_v$ on the chord line indexed by $v$ using the DBP measurement $f_{DBP,v}(u)$, $u \in I_v$, where $I_v := (-\mu_v, \mu_v)$ denotes the chord-line-dependent 1D restriction of the ROI (see Fig. 1).
More specifically, let $\chi_v$ be the indicator function of the interval $I_v = (-\mu_v, \mu_v)$:

$$\chi_v(u) = \begin{cases} 1, & u \in I_v, \\ 0, & \text{otherwise.} \end{cases}$$

We further define the truncated Hilbert transform:

$$(\mathcal{H}_T g)(u) = \chi_v(u)\, \mathcal{H}(\chi_v g)(u). \qquad (8)$$

Then, the resulting 1D interior tomography problem can be formally stated as

Find $f_v$ such that $\quad 2\pi\, \chi_v(u)\, (\mathcal{H} f_v)(u) = \chi_v(u)\, f_{DBP,v}(u). \qquad (9)$
In order to obtain the 2D image within the ROI, this problem should be solved for all chord line indices $v$.
In the following, with a slight abuse of notation, we denote $f := f_v$ and $f_{DBP} := f_{DBP,v}$ if there is no concern about potential confusion.
II.3 Inversion of the Finite Hilbert Transform using Neural Networks
The main technical difficulty of the interior reconstruction is the existence of the null space of the truncated Hilbert transform Katsevich and Tovbis (2012); Ward et al. (2015). More specifically, there exists a nonzero $f_{null}$ such that

$$\mathcal{H}_T f_{null} = 0.$$

Indeed, such an $f_{null}$ can be expressed as

$$f_{null}(u) = (\mathcal{H}\psi)(u), \quad u \in I_v, \qquad (10)$$
for any function $\psi$ supported outside of the ROI. A typical example of a 1D null space image for a given $v$ is illustrated in Fig. 2(a) for the case of $I_v = (-1, 1)$, where the null space signal contains singularities at the boundary $u = \pm 1$. An example of a 2D null space image is also shown in Fig. 2(b), in which the singularities again exist at the ROI boundary. These are often called cupping artifacts because they are shaped like a cup with a stronger bias of the CT number near the ROI boundary. The cupping artifacts reduce contrast and interfere with clinical diagnosis.
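The boundary behavior described above can be reproduced numerically: restricting the Hilbert transform of a hypothetical exterior function $\psi$ to the ROI yields a null space signal of the form (10) whose magnitude peaks at the ROI boundary (the grid and bump parameters below are arbitrary illustrative choices):

```python
import numpy as np

def hilbert_fft(g):
    # Hilbert transform via the Fourier multiplier -i*sign(omega).
    omega = np.fft.fftfreq(g.size)
    return np.real(np.fft.ifft(-1j * np.sign(omega) * np.fft.fft(g)))

u = np.linspace(-8, 8, 8192, endpoint=False)
roi = np.abs(u) < 1.0                      # ROI I = (-1, 1)

# psi lives strictly outside the ROI: two bumps just beyond the boundary.
psi = np.exp(-((np.abs(u) - 1.3) / 0.1) ** 2) * (np.abs(u) > 1.1)

f_null = hilbert_fft(psi) * roi            # null space image chi_I * H(psi)

# Its magnitude concentrates near the ROI boundary (cupping-like profile).
u_roi = u[roi]
peak_at = np.abs(u_roi[np.argmax(np.abs(f_null[roi]))])
print(peak_at)
```

The maximum of $|f_{null}|$ occurs near $|u| \approx 1$, consistent with the boundary bias of the cupping artifact.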
In the first type of neural network (type I neural network), which is a direct extension of our preliminary work Han, Gu, and Ye (2018), a neural network $\mathcal{Q}_\Theta$ is designed such that

$$\mathcal{Q}_\Theta(x + x_{null}) = x, \qquad (11)$$

where $x$ and $x_{null}$ denote the collection of 2D ground-truth signals and their null space images, respectively. For this network, the null-space-corrupted input images can be easily obtained by

$$x + x_{null} = \mathcal{A}(g_{tr}),$$

where $\mathcal{A}$ denotes an analytic inversion formula such as the filtered backprojection (FBP) algorithm, and $g_{tr}$ denotes the zero-padded truncated projection data. See Fig. 3(a) for this network. Then, the neural network training can be performed as

$$\min_\Theta \sum_{(x,\, g_{tr}) \in \mathcal{D}} \left\| x - \mathcal{Q}_\Theta\big(\mathcal{A}(g_{tr})\big) \right\|^2, \qquad (12)$$
where $\mathcal{D}$ denotes the training data set composed of ground-truth images and their truncated projections. This method is simple to implement and provides significant gain over the existing iterative methods Han, Gu, and Ye (2018).
However, one of the main technical issues of this network architecture is that the input images are corrupted by the singularities of the null space images at the ROI boundaries, as shown in Fig. 2. Due to the strong intensity at the ROI boundaries, the network training is strongly dependent on the ROI-size-dependent cupping artifacts, and the trained network does not generalize well, as will be shown in the experimental section. This would not be a problem if a specific ROI size were used for all interior tomography problems. However, in many practical applications such as interventional imaging, cardiac imaging, etc., the size of the ROI mainly depends on the subject size and the clinical procedure, so there is strong demand for flexible ROI sizes during imaging. In this case, separate neural network models for numerous ROI sizes would have to be stored, which is not practical.
To design a neural network that generalizes well for all ROI sizes, let us revisit the truncated Hilbert transform (8). For simplicity, we now assume $\mu_v = 1$ and $I_v = (-1, 1)$. Then, the following formula is well known as an inversion formula for the finite Hilbert transform King (2009): for $g = \mathcal{H}_T f$,

$$f(u) = -\frac{1}{\sqrt{1 - u^2}}\, \frac{1}{\pi}\, \mathrm{p.v.}\!\int_{-1}^{1} \frac{\sqrt{1 - s^2}\, g(s)}{u - s}\, ds + \frac{C}{\sqrt{1 - u^2}}, \qquad (13)$$

where the constant $C$ is given by

$$C = \frac{1}{\pi} \int_{-1}^{1} f(s)\, ds. \qquad (14)$$

This formula has been used in some of the existing interior tomography approaches Defrise et al. (2006). Although (13) with (14) appears to be the desired inversion formula for the finite Hilbert transform that can be directly used for interior tomography problems, its main weakness is that the expression is not unique. More specifically, the constant $C$ is in fact arbitrary, since $1/\sqrt{1 - u^2}$ lives in the null space of the truncated Hilbert transform King (2009):

$$\mathcal{H}_T\!\left[\frac{1}{\sqrt{1 - (\cdot)^2}}\right](u) = 0, \quad u \in (-1, 1). \qquad (15)$$
Thus, finding the optimal choice of $C$ is not possible by considering the 1D problem alone. In fact, the value must be chosen by considering adjacent chord lines to make the final 2D image realistic and free of line artifacts, which is, however, intractable and, to the best of our knowledge, has not been attempted.
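Equation (15) can be verified numerically. Substituting $s = \cos\theta$ turns the finite Hilbert transform of $1/\sqrt{1 - s^2}$ into the principal-value integral $\int_0^\pi d\theta/(u - \cos\theta)$, which vanishes for $|u| < 1$; a punctured trapezoid rule (with the pole placed on a grid node so that symmetric neighbors cancel) confirms this:

```python
import numpy as np

# Punctured trapezoid rule for p.v. int_0^pi dtheta / (u - cos(theta)).
# With the substitution s = cos(theta), this is pi times the finite Hilbert
# transform of 1/sqrt(1 - s^2) evaluated at u, which is 0 for |u| < 1.
n = 4000
j0 = 1300                                # place the pole exactly on a node
theta = np.linspace(0.0, np.pi, n + 1)
h = np.pi / n
u = np.cos(theta[j0])                    # |u| < 1

with np.errstate(divide="ignore"):
    integrand = 1.0 / (u - np.cos(theta))
integrand[j0] = 0.0                      # symmetric neighbors cancel the pole
val = h * (np.sum(integrand) - 0.5 * integrand[0] - 0.5 * integrand[-1])
print(abs(val))                          # close to 0
```

Since this null space function is invisible to the data, no choice of $C$ in (13) can be validated from the 1D measurement itself.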
To investigate how this problem can be addressed using the second type of neural network (type II network), note that the inversion formula can be converted to

$$(w_v \odot f_v)(u) = \big(h * (w_v \odot f_{DBP,v})\big)(u) + C_v, \quad u \in I_v, \qquad (16)$$

where $\mu_v$ denotes the window size for the chord line index $v$, $\odot$ denotes the element-wise product, and $w_v$ is the analytic form of the weighting given by

$$w_v(u) = \sqrt{\mu_v^2 - u^2},$$

$h$ is the convolution kernel for the Hilbert transform (absorbing the scaling factors in (7) and (13)), and $C_v$ is the unknown constant. Since the analytic weighting can be readily calculated once the ROI size is detected from the truncated DBP input, the required parameters for the reconstruction of $w_v \odot f_v$ are the convolution kernel $h$ and the constant $C_v$ for all chord line indices $v$. Then, after the reconstruction, the weight can be removed and the final image $f_v$ can be obtained for all $v$.
In fact, this algorithmic procedure can be readily learned using a deep neural network. Specifically, we construct a neural network $\mathcal{R}_\Theta$ such that

$$\mathcal{R}_\Theta(f_{DBP}^I) = x,$$

where $f_{DBP}^I$ denotes the truncated DBP data for all chord lines, and $x$ is the 2D ground-truth image. In fact, the roles of the neural network are to estimate the ROI size (and its restrictions $I_v$) from the truncated DBP input in order to calculate the weighting, and to learn the convolution kernel for the Hilbert transform as well as the constant $C_v$ for all $v$.
This neural network training can be performed as

$$\min_\Theta \sum_{(x,\, f_{DBP}^I) \in \mathcal{D}} \left\| x - \mathcal{R}_\Theta(f_{DBP}^I) \right\|^2, \qquad (17)$$
where $\mathcal{D}$ denotes the training data set composed of ground-truth images and their 2D DBP data. Here, it is important to note that the network could learn the inverse Hilbert transform only for the full DBP data if no truncated DBP data were used during training. Therefore, truncated DBP data and the corresponding truncated ground-truth images should be used as input and label data, along with the non-truncated DBP data, so that the network can learn to invert the finite Hilbert transform.
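A minimal data-generation sketch of this training strategy, assuming a hypothetical helper `make_training_pair` and a centered circular ROI whose radius scales with the simulated detector count (the geometry is illustrative, not the paper's exact rebinning):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_training_pair(dbp, image, n_detectors, full=1440):
    # Hypothetical helper: mask the full DBP data and the ground truth to a
    # centered circular ROI whose radius scales with the detector count.
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    r = (min(h, w) / 2.0) * (n_detectors / full)
    mask = (xx - w / 2) ** 2 + (yy - h / 2) ** 2 < r ** 2
    return dbp * mask, image * mask

# Mix truncated and non-truncated examples so the network is forced to learn
# the finite (not just the full) Hilbert transform inversion.
dbp = rng.standard_normal((512, 512))      # stand-in for 2D DBP data
image = rng.standard_normal((512, 512))    # stand-in for the ground truth
for n_det in (240, 380, 600, 1440):
    x, y = make_training_pair(dbp, image, n_det)
    print(n_det, x.shape)
```

Each epoch then draws input/label pairs over the whole range of ROI sizes, including the untruncated case.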
In contrast to the type I neural network, the type II neural network takes truncated DBP data as input, which are simply ROI-masked versions of the full DBP data. Hence, there are no singularities in the input data. Later, we will show that such a trained neural network has significant generalization power, so that it can be used for any ROI size.
III Method
III.1 Data Set
Ten subject data sets from the American Association of Physicists in Medicine (AAPM) Low-Dose CT Grand Challenge were used in this paper. The provided data sets were originally acquired with helical CT and were rebinned from helical CT to angular-scan fan-beam CT. Artifact-free CT images of size 512 × 512 were reconstructed from the rebinned fan-beam CT data using the filtered backprojection (FBP) algorithm. From these CT images, sinograms were numerically obtained using a fan-beam forward projection operator for our experiments. The detector consists of 1440 elements with a pitch of 1 mm. The number of views is 1200. The distance from the source to the rotation axis (DSO) is 800 mm, and the distance from the source to the detector (DSD) is 1400 mm. Out of the ten sets, eight were used for network training, one for validation, and the remaining one for testing. This corresponds to 3720, 254, and 486 slices of size 512 × 512 for the training, validation, and test data, respectively.
Fig. 3(a) shows a flowchart of the training scheme for the type I neural network, which learns the artifact patterns in the analytic reconstruction images from the truncated projection data. In this case, the input image is corrupted with the cupping artifact, whereas the clean data with the same ROI are used as the ground truth. In this experiment, we used 380 detectors, and the radius of the ROI was 107.59 mm, which is about 30% of the full field of view.
Figs. 3(b) and (c) show flowcharts of the training schemes for the type II neural networks, which learn the inverse of the finite Hilbert transform. The truncated DBP data have no singularities regardless of the truncation ratio. In fact, the truncated DBP data are exactly the same as the full DBP data within the ROI mask. We trained two networks. One network was trained with only 380-detector and full-detector data (see Fig. 3(b)), whereas the other network was trained with various ROIs generated by 240, 380, 600, and 1440 detectors (see Fig. 3(c)). These correspond to truncation ratios of 19, 29, 46, and 100%, respectively.
It is important to note that the type I network in Fig. 3(a) cannot be trained with the complete projection data in the way the type II network in Fig. 3(b) can. This is because, in that case, the input and label data are both artifact-free FBP data, so the neural network would become an identity mapping. This suggests another key benefit of the type II network, which can use both truncated and full DBP data as input, so that the network can be used not only for interior problems but also for standard CT reconstruction.
For quantitative evaluation, we use the peak signal-to-noise ratio (PSNR), defined by

$$PSNR = 10 \log_{10}\!\left( \frac{N_x N_y\, MAX_x^2}{\|\hat{x} - x\|_2^2} \right), \qquad (18)$$
where $\hat{x}$ and $x$ denote the reconstructed image and the ground truth, respectively; $MAX_x$ is the maximum intensity of $x$; and $N_x$ and $N_y$ are the numbers of pixels in the row and column directions. We also used the structural similarity (SSIM) index Wang et al. (2004), defined as

$$SSIM = \frac{(2\mu_x \mu_{\hat{x}} + c_1)(2\sigma_{x\hat{x}} + c_2)}{(\mu_x^2 + \mu_{\hat{x}}^2 + c_1)(\sigma_x^2 + \sigma_{\hat{x}}^2 + c_2)}, \qquad (19)$$

where $\mu_x$ is the average of $x$, $\sigma_x^2$ is the variance of $x$, and $\sigma_{x\hat{x}}$ is the covariance of $x$ and $\hat{x}$. The two variables $c_1 = (k_1 L)^2$ and $c_2 = (k_2 L)^2$ stabilize the division, where $L$ is the dynamic range of the pixel intensities, and $k_1 = 0.01$ and $k_2 = 0.03$ by default. We also use the normalized mean square error (NMSE).

III.2 Network Architecture
The same network architecture, shown in Fig. 4, is used for the type I and type II networks; the only difference is their input images. The type I network uses the FBP images as input, while the type II network uses the DBP data. The network backbone is a modified U-net architecture Ronneberger, Fischer, and Brox (2015). A yellow arrow in Fig. 4 is the basic operator and consists of convolutions followed by a rectified linear unit (ReLU) and batch normalization. The yellow arrows between the separate blocks at every stage are omitted. A red arrow is an average pooling operator located between the stages. The average pooling operator doubles the number of channels and reduces the size of the layers by a factor of four. In addition, a blue arrow is an average unpooling operator, which halves the number of channels and increases the size of the layer by a factor of four. A violet arrow is the skip-and-concatenation operator. A green arrow is a simple convolution operator generating the final reconstruction image. Finally, a gray arrow is the skip-and-addition operator for residual learning.

III.3 Network Training
The type I and type II networks were implemented using the MatConvNet toolbox (ver. 24) in the MATLAB R2015a environment Vedaldi and Lenc (2015). The processing units used in this research were an Intel Core i7-7700 (3.60 GHz) central processing unit (CPU) and a GTX 1080 Ti graphics processing unit (GPU). The stochastic gradient descent (SGD) method was used to train the networks for 300 epochs, with the learning rate gradually dropping at each epoch and a fixed regularization parameter. For data augmentation, the input data were flipped horizontally and vertically. In addition, mini-batch training was used with input patches of size 256 × 256. Since the trained convolution kernels have spatially invariant properties, the filters can be applied to the entire input data in the inference phase, where the input size is 512 × 512. Training took about 24 hours.

IV Experimental Results
Because the null space image has singularities at the ROI boundary, type I network training with the analytic reconstruction is quite dependent upon the input ROI size. Thus, we conjectured that a type I network trained with a specific ROI may not generalize well to other ROI sizes.
Table 1. Average PSNR and SSIM for varying numbers of detectors.

| PSNR [dB] | TV | Lee Lee et al. (2015) | Type I (Fig. 3(a)) | Type II (Fig. 3(b)) | Type II (Fig. 3(c)) |
| # of detectors: 240 | 19.0360 | 22.8487 | 20.9016 | – | 28.1598 |
| 380 | 24.2809 | 27.0543 | 31.2188 | – | 32.2786 |
| 600 | 25.7784 | 31.1304 | 27.7033 | – | 34.1692 |
| 1440 | – | – | 23.1952 | – | 35.6034 |

| SSIM | TV | Lee Lee et al. (2015) | Type I (Fig. 3(a)) | Type II (Fig. 3(b)) | Type II (Fig. 3(c)) |
| # of detectors: 240 | 0.8657 | 0.9599 | 0.9262 | – | 0.9613 |
| 380 | 0.9161 | 0.9701 | 0.9723 | – | 0.9758 |
| 600 | 0.9289 | 0.9777 | 0.9608 | – | 0.9761 |
| 1440 | – | – | 0.9162 | – | 0.9345 |
To confirm the performance degradation of the type I network with respect to varying ROI sizes, the trained network was applied to the test data with 240, 380, 600, and 1440 detectors. The average PSNR and SSIM values are reported in Table 1. Since the type I network was trained with 380-detector data, it performed better than the type II network for the 380-detector case, as shown in Fig. 5 and Table 1. The type II network shows the highest PSNR and SSIM values except for the 380-detector case. Even there, the PSNR of the type II network is more than 32 dB and shows about a 5 dB improvement over conventional iterative methods such as the TV and Lee methods. Moreover, in spite of the difference in PSNR, the SSIM values for both networks shown in Table 1 are beyond 0.97, implying that both can be used for clinical applications. On the other hand, the type II neural network trained with 380 detectors, as shown in Fig. 3(b), generalizes well for all ROI sizes, including the full projection data case. With training data augmentation using 240, 380, 600, and 1440 detectors, as shown in Fig. 3(c), there is a further consistent 1 dB improvement for all ROI sizes. Hence, in the following experiments, the enhanced version of the type II network shown in Fig. 3(c) will be used as the type II network.
Regarding the reconstruction results by the type I and type II networks for the 380-detector case, there are indistinguishable differences in their reconstruction profiles, as shown in Fig. 6(b)(v). On the other hand, when the type I network is used for smaller ROI cases, it tends to overestimate, as shown in Fig. 6(a)(v), whereas it tends to underestimate for larger ROI cases, as shown in Figs. 6(c,d)(v). In contrast to the type I network, the type II network provides accurate reconstruction profiles regardless of ROI size. This again confirms the generalization capability of the type II network.
We also compared our methods with existing iterative methods such as the total variation penalized reconstruction (TV) Yu and Wang (2009) and the L-spline based multiscale regularization method by Lee et al. Lee et al. (2015).
Fig. 7(iv) shows the reconstruction results of images truncated to 380 detectors. The graphs in Fig. 7(vi) are the profiles along the white line on each result. Fig. 7(a) shows that the type I and type II networks clearly remove the cupping artifact and preserve the detailed structures of the underlying images. The profiles in Fig. 7(a)(vi) confirm that the detailed structures are very well preserved by both networks. However, the TV method has residual artifacts at the ROI boundaries, and the Lee method shows a drop in intensity at the ROI boundary. Fig. 7(b) shows the reconstruction results in the sagittal direction. The type I network performs slightly better than the type II network, since the type I network was trained only with 380-detector data.
Fig. 8 shows the reconstructed images from data truncated to 600 detectors. The type II network outperforms the other methods, including the type I network. The type II network clearly preserves the small-scale lung nodule as well as the large-scale organs and provides the minimum NMSE values. However, the type I network shows global degradation, as indicated by the blue line in Fig. 8(vi). Similar to the 380-detector case, the TV and Lee methods have visible artifacts at the ROI boundaries.
Table 2 shows the computation time. The proposed networks took about 0.05 sec/slice on the GPU and 4 sec/slice on the CPU. In contrast, the TV approach on the GPU took about 12 sec/slice, and the Lee method on the CPU took about 3–9 sec/slice depending on the number of detectors. Because the Lee method is based on one-dimensional operations, it is faster than TV, even though the Lee approach runs on the CPU while TV runs on the GPU. The proposed method in the GPU environment is about 60 times faster than the other methods, and in the CPU environment it is about 1.5 times faster on average. This confirms that the proposed method, regardless of the ROI size, provides very fast reconstruction and remarkably improved image quality compared to the conventional methods.
Table 2. Computation time [sec/slice].

| # of detectors | TV (GPU) | Lee method Lee et al. (2015) (CPU) | Proposed (CPU) | Proposed (GPU) |
| 240 | 11.6 | 3.3 | 4.0 | 0.05 |
| 380 | 11.7 | 5.1 | 4.0 | 0.05 |
| 600 | 11.9 | 9.3 | 4.0 | 0.05 |
| 1440 | – | – | 4.0 | 0.05 |
V Conclusion
In this paper, we proposed and compared two types of deep learning networks for the interior tomography problem. The type I network architecture is designed to learn the cupping artifacts from the analytic reconstruction, whereas the type II network architecture is designed to learn the inverse of the finite Hilbert transform. Due to the singularities in the artifact-corrupted images, the type I network did not generalize well, although its performance was best at the specific ROI size used for the training data. On the other hand, the input images for the type II network are truncated DBP data, which are free of singularities. Therefore, this network was shown to generalize well for all ROI sizes. Numerical results showed that the proposed method significantly outperforms existing iterative methods in terms of quantitative and qualitative image quality as well as computation time.
Acknowledgment
The authors would like to thank Dr. Cynthia McCollough, the Mayo Clinic, the American Association of Physicists in Medicine (AAPM), and grants EB01705 and EB01785 from the National Institute of Biomedical Imaging and Bioengineering for providing the Low-Dose CT Grand Challenge data set. This work was supported by the National Research Foundation of Korea, grant number NRF-2016R1A2B3008104. This work was also supported by the R&D Convergence Program of NST (National Research Council of Science & Technology) of the Republic of Korea (Grant CAP133KERI).
References
 Hsieh et al. (2004) J. Hsieh, E. Chao, J. Thibault, B. Grekowicz, A. Horst, S. McOlash, and T. Myers, “Algorithm to extend reconstruction field-of-view,” in 2004 IEEE International Symposium on Biomedical Imaging: Nano to Macro (IEEE, 2004) pp. 1404–1407
 Courdurier et al. (2008) M. Courdurier, F. Noo, M. Defrise, and H. Kudo, “Solving the interior problem of computed tomography using a priori knowledge,” Inverse Problems 24, 065001 (2008)
 Katsevich, Katsevich, and Wang (2012) E. Katsevich, A. Katsevich, and G. Wang, “Stability of the interior problem with polynomial attenuation in the region of interest,” Inverse Problems 28, 065022 (2012)
 Jin et al. (2012) X. Jin, A. Katsevich, H. Yu, G. Wang, L. Li, and Z. Chen, “Interior tomography with continuous singular value decomposition,” IEEE transactions on medical imaging 31, 2108–2119 (2012)
 Katsevich and Tovbis (2012) A. Katsevich and A. Tovbis, “Finite Hilbert transform with incomplete data: null-space and singular values,” Inverse Problems 28, 105006 (2012)
 Yu and Wang (2009) H. Yu and G. Wang, “Compressed sensing based interior tomography,” Physics in Medicine and Biology 54, 2791 (2009)
 Ward et al. (2015) J. P. Ward, M. Lee, J. C. Ye, and M. Unser, “Interior tomography using 1D generalized total variation – part I: mathematical foundation,” SIAM Journal on Imaging Sciences 8, 226–247 (2015)
 Lee et al. (2015) M. Lee, Y. Han, J. P. Ward, M. Unser, and J. C. Ye, “Interior tomography using 1D generalized total variation – part II: multiscale implementation,” SIAM Journal on Imaging Sciences 8, 2452–2486 (2015)

 Krizhevsky, Sutskever, and Hinton (2012) A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (2012) pp. 1097–1105
 Ronneberger, Fischer, and Brox (2015) O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2015) pp. 234–241
 Kang, Min, and Ye (2017) E. Kang, J. Min, and J. C. Ye, “A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction,” Medical Physics 44 (2017)
 Chen et al. (2017) H. Chen, Y. Zhang, M. K. Kalra, F. Lin, Y. Chen, P. Liao, J. Zhou, and G. Wang, “Low-dose CT with a residual encoder-decoder convolutional neural network,” IEEE Transactions on Medical Imaging 36, 2524–2535 (2017)
 Han, Yoo, and Ye (2016) Y. Han, J. Yoo, and J. C. Ye, “Deep residual learning for compressed sensing CT reconstruction via persistent homology analysis,” arXiv preprint arXiv:1611.06391 (2016)
 Jin et al. (2017) K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Transactions on Image Processing 26, 4509–4522 (2017)
 Kang et al. (2018) E. Kang, W. Chang, J. Yoo, and J. C. Ye, “Deep convolutional framelet denoising for low-dose CT via wavelet residual network,” IEEE Transactions on Medical Imaging 37, 1358–1369 (2018)
 Han and Ye (2018) Y. Han and J. C. Ye, “Framing U-Net via deep convolutional framelets: Application to sparse-view CT,” IEEE Transactions on Medical Imaging 37, 1418–1429 (2018)
 Ye, Han, and Cha (2018) J. C. Ye, Y. Han, and E. Cha, “Deep convolutional framelets: A general deep learning framework for inverse problems,” SIAM Journal on Imaging Sciences 11, 991–1048 (2018)
 Han, Gu, and Ye (2018) Y. Han, J. Gu, and J. C. Ye, “Deep learning interior tomography for region-of-interest reconstruction,” in Proceedings of The Fifth International Conference on Image Formation in X-Ray Computed Tomography (2018)
 King (2009) F. W. King, Hilbert transforms, Vol. 2 (Cambridge University Press Cambridge, UK, 2009)
 Defrise et al. (2006) M. Defrise, F. Noo, R. Clackdoyle, and H. Kudo, “Truncated Hilbert transform and image reconstruction from limited tomographic data,” Inverse Problems 22, 1037 (2006)
 Pack and Noo (2005) J. D. Pack and F. Noo, “Cone-beam reconstruction using 1D filtering along the projection of M-lines,” Inverse Problems 21, 1105 (2005)
 Zou and Pan (2004a) Y. Zou and X. Pan, “Exact image reconstruction on PI-lines from minimum data in helical cone-beam CT,” Physics in Medicine and Biology 49, 941 (2004a)
 Zou and Pan (2004b) Y. Zou and X. Pan, “Image reconstruction on PI-lines by use of filtered backprojection in helical cone-beam CT,” Physics in Medicine and Biology 49, 2717 (2004b)
 Wang et al. (2004) Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE transactions on image processing 13, 600–612 (2004)
 Vedaldi and Lenc (2015) A. Vedaldi and K. Lenc, “Matconvnet: Convolutional neural networks for matlab,” in Proceedings of the 23rd ACM international conference on Multimedia (ACM, 2015) pp. 689–692