A deep network for sinogram and CT image reconstruction

01/20/2020 ∙ by Wei Wang, et al. ∙ Chongqing University ∙ University of Delaware ∙ Shenzhen University

A CT image can be well reconstructed when the sampling rate of the sinogram satisfies the Nyquist criterion and the sampled signal is noise-free. In practice, however, the sinogram is usually contaminated by noise, which degrades the quality of the reconstructed CT image. In this paper, we design a deep network for sinogram and CT image reconstruction. The network consists of two cascaded blocks linked by a filtered backprojection (FBP) layer: the former block denoises and completes the sinograms, while the latter removes the noise and artifacts from the CT images. Experimental results show that the CT images reconstructed by our method have the highest average PSNR and SSIM among the compared state-of-the-art methods.


I Introduction

X-ray computed tomography (CT) has been widely used in clinical, industrial and other applications owing to its ability to provide an inner vision of an object without destroying it. With the increased usage of CT in clinics, the potential risk of X-ray radiation inducing cancers has raised alarm [1]. Therefore, many techniques have been developed to decrease the radiation dose of CT, including lowering the X-ray exposure of each tube by decreasing the current and shortening the exposure time, and decreasing the number of scanning angles. Lowering the X-ray exposure results in a noisy sinogram, while decreasing the number of scanning angles makes the system ill-posed, so that the reconstructed CT image suffers from undesirable artifacts. To address these issues, many algorithms have been proposed to improve the quality of the reconstructed CT images; they can be generally divided into three categories: (a) sinogram domain processing, (b) iterative algorithms, and (c) image domain post-processing.

Sinogram domain processing techniques first upsample and denoise the sinograms before converting them to CT images. Balda et al. [2] introduced a structure adaptive sinogram filter to reduce the noise of sinograms. Cao et al. [3] proposed a dictionary learning based inpainting method to estimate the missing projection data. Lee et al. [4] proposed a deep U-network to interpolate the sinogram for sparse-view CT images.

The iterative algorithms reconstruct a CT image by solving the following model:

$$ \min_{x}\ \frac{1}{2}\|Ax-y\|_2^2+\lambda_1 R_1(x)+\lambda_2 R_2(Ax), $$

where $\frac{1}{2}\|Ax-y\|_2^2$ is a data-fidelity term that forces the solution to be consistent with the measured data $y$, $A$ is the Radon transform, $R_1$ and $R_2$ are two regularization terms that incorporate the prior knowledge of the data in the image domain and the sinogram domain, respectively, into the reconstructed image, and $\lambda_1$ and $\lambda_2$ are two trade-off parameters. In the literature, many different forms of $R_1$ and $R_2$ have been utilized, such as the total variation (TV) [5] and its improved versions [6][7][8], nonlocal means (NLM) [9], dictionary learning [10][11], low-rank models [12] and the $\ell_1$-norm of the wavelet coefficients [13].

The post-processing techniques improve the quality of CT images by removing the noise and artifacts in CT images already reconstructed by other methods (such as FBP). In principle, any method for removing noise and artifacts from ordinary optical images can be applied to CT image post-processing, such as NLM [14], dictionary learning [15][16], block-matching 3D (BM3D) [17][18], and so on.

Recently, inspired by the development of deep learning [19][20][21][22] in computer vision and natural language processing, many deep-learning (DL) based algorithms have been proposed for CT reconstruction. Most of them are utilized as a post-processing step to remove noise and artifacts from CT images reconstructed by other techniques. For example, in [23], Chen et al. proposed a residual encoder-decoder convolutional neural network (RED-CNN) to remove the artifacts from the FBP reconstruction. In [24], Jin et al. proposed a deep convolutional network (FBPConvNet) that combines FBP, U-net and residual learning to remove the artifacts while preserving the image structures. In [25], Zhang et al. proposed a DenseNet and deconvolution based network (DD-Net) to improve the quality of the CT images reconstructed by FBP. In [26], Jiang et al. proposed a symmetric residual convolutional neural network (SR-CNN) to enhance the sharpness of edges and detailed anatomical structures in under-sampled CT images reconstructed by the ASD-POCS TV algorithm [5]. In [27], a framelet-based deep residual network was proposed to denoise low-dose CT images. Other DL-based algorithms learn the mapping between the sinogram and CT image spaces, directly decoding sinograms into CT images. For instance, in [28], Zhu et al. proposed a deep neural network, AUTOMAP, to learn a mapping between sensors and images. In [29], Li et al. proposed a deep learning network, iCT-Net, to address difficult CT image reconstruction problems such as view angle truncation, view angle under-sampling, and interior problems. In [30], Chen et al. proposed a reconstruction network for sparse-data CT by unfolding an iterative reconstruction scheme up to a fixed number of iterations for data-driven training.

In this paper, we propose a deep network for simultaneous sinogram denoising and CT image reconstruction. Specifically, our network consists of two cascaded blocks linked by an FBP layer. The former block denoises and completes the sinograms, while the latter removes the noise and artifacts from the CT images. Different from [28] and [29], we utilize the FBP layer, instead of fully connected layers, to decode the sinograms into CT images, which reduces the number of parameters of the network and mitigates overfitting.

This paper is organized as follows. Section II describes the detailed structure of the proposed network and its training method. Section III presents the experimental results. Discussion and conclusion are given in Section IV.

II Method

Let us first introduce the proposed network.

II-A Overall network architecture

Assume that $x \in \mathbb{R}^{N\times N}$ is the image of a test object and $y$ is its corresponding sinogram. The relationship between them can be modeled as

$$ y = \mathcal{P}(x) \in \mathbb{R}^{N_d \times N_v}, $$

where $N_d$ and $N_v$ are the number of detectors and projection angles, respectively, and $\mathcal{P}$ is the measurement process involving a Radon transform and some noise factors. Our goal is to use deep learning (DL) techniques to learn a map $F$ such that $F(y_i)$ approximates $x_i$ for all training data pairs $(y_i, x_i)$. The pipeline of our proposed network has three cascaded steps: the first step denoises and completes the sinograms, the second converts the processed sinograms to CT images, and the third removes the noise and artifacts residing in the CT images. This general process can be expressed as $F = F_2 \circ F_{FBP} \circ F_1$, where $F_1: \mathbb{R}^{N_d\times N_v} \to \mathbb{R}^{N_d\times N_v}$ is the sinogram domain map (the number of angles of $F_1(y)$ is still $N_v$ since we interpolate $y$ along the angle direction before inputting it to $F_1$), $F_2: \mathbb{R}^{N\times N} \to \mathbb{R}^{N\times N}$ is the image domain map, and $F_{FBP}$ is the FBP transform layer that decodes sinograms into CT images. When designing layers that map elements from the sinogram space ($\mathbb{R}^{N_d\times N_v}$) to the CT image space ($\mathbb{R}^{N\times N}$), fully connected layers are usually utilized [28][29], which increases the number of parameters and makes the network prone to overfitting. Therefore, in our network we use the FBP layer to replace the fully connected layers. Thus, learning one single map $F$ is converted into learning two maps $F_1$ and $F_2$, which is easier since these two functions map elements within the same spaces. The overall architecture of our network can be seen in Figure 1.

Fig. 1: Overall architecture of our proposed network. It consists of the sinogram domain map $F_1$ and the CT image domain map $F_2$, which are linked by the FBP layer.
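As a concrete illustration, the following is a minimal TensorFlow/Keras sketch of the cascade $F = F_2 \circ F_{FBP} \circ F_1$. The function and argument names (build_pipeline, sino_net, fbp_layer, image_net) and the default shapes are our assumptions for illustration, not the authors' code.

```python
import tensorflow as tf

def build_pipeline(sino_net, fbp_layer, image_net, n_det=256, n_views=180):
    """Cascade F = F2 o F_FBP o F1; names and shapes are illustrative."""
    y = tf.keras.Input(shape=(n_det, n_views, 1))  # noisy (interpolated) sinogram
    s = sino_net(y)    # F1: denoise/complete the sinogram (Section II-B)
    z = fbp_layer(s)   # FBP layer: decode the sinogram into a CT image
    x = image_net(z)   # F2: remove residual noise and artifacts (Section II-C)
    # Expose both outputs so the two-term loss in (1) can supervise s and x.
    return tf.keras.Model(inputs=y, outputs=[s, x])
```

Returning the intermediate sinogram alongside the final image lets the two-term loss introduced in Section II-D supervise both blocks at once.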

II-B Network architecture for preprocessing sinograms

We construct our sinogram domain network $F_1$ based on the residual DnCNN [31], which was originally developed for blind Gaussian denoising. According to [32], the residual learning in DnCNN makes the residual mapping much easier to optimize since it is closer to an identity mapping. In our network, the architecture of $F_1$ also consists of three types of layers. For the first layer, a convolution unit with filters of size $3\times 3$ is used to generate 64 feature maps, and a ReLU activation unit is then used for nonlinearity. For Layers 2 to 5, convolution units, batch normalization (BN) units and ReLU activation units are used, where the convolution filters are of size $3\times 3\times 64$. For Layer 6, a single convolution unit with filters of size $3\times 3\times 64$ is used to reconstruct the sinograms. At last, a shortcut connects the input and the output.
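A minimal sketch of this 6-layer block follows, assuming the standard DnCNN configuration ($3\times 3$ filters, 64 feature maps) and the input-to-output shortcut described above; it is an illustration under those assumptions rather than the authors' implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_sino_net(n_det, n_views):
    y = layers.Input(shape=(n_det, n_views, 1))
    # Layer 1: convolution + ReLU, producing 64 feature maps
    h = layers.Conv2D(64, 3, padding='same', activation='relu')(y)
    # Layers 2-5: convolution + batch normalization + ReLU
    for _ in range(4):
        h = layers.Conv2D(64, 3, padding='same', use_bias=False)(h)
        h = layers.BatchNormalization()(h)
        h = layers.ReLU()(h)
    # Layer 6: single convolution reconstructing the residual sinogram
    r = layers.Conv2D(1, 3, padding='same')(h)
    # Shortcut connecting the input and the output (residual learning)
    s = layers.Add()([y, r])
    return tf.keras.Model(y, s)
```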

II-C Network architecture for post-processing CT images

In the literature, a large number of deep networks have been proposed to post-process CT images reconstructed by other methods. In our network, RED-CNN [23] is adopted to construct the CT image domain network $F_2$. The detailed structure of the network is as follows. For Layers 8 to 12 (Layer 7 is the FBP layer), convolution units are used to generate feature maps and ReLU activation units are used for nonlinearity, where the filter size of Layer 8 is $5\times 5$ and that of Layers 9 to 12 is $5\times 5\times 96$. For Layers 13 to 17, deconvolution units are used to decode the features and ReLU activation units are used for nonlinearity, where the filter size of Layers 13 to 16 is $5\times 5\times 96$ and that of Layer 17 is $5\times 5$, and the strides of the deconvolutions are all 1. Also, there are 3 shortcuts that connect the FBP layer and the deconvolution unit of Layer 17, the ReLU unit of Layer 9 and the deconvolution unit of Layer 15, and the ReLU unit of Layer 11 and the deconvolution unit of Layer 13, respectively.
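A sketch of this RED-CNN-style block (Layers 8 to 17) is given below, assuming the original RED-CNN choices of $5\times 5$ filters and 96 feature maps [23], with the three shortcuts placed as described above; treat it as an illustration under those assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_image_net(n):
    z = layers.Input(shape=(n, n, 1))                    # output of the FBP layer
    # Layers 8-12: convolutions + ReLU
    c8  = layers.Conv2D(96, 5, padding='same', activation='relu')(z)
    c9  = layers.Conv2D(96, 5, padding='same', activation='relu')(c8)
    c10 = layers.Conv2D(96, 5, padding='same', activation='relu')(c9)
    c11 = layers.Conv2D(96, 5, padding='same', activation='relu')(c10)
    c12 = layers.Conv2D(96, 5, padding='same', activation='relu')(c11)
    # Layers 13-17: deconvolutions + ReLU, stride 1, with three shortcuts
    d13 = layers.Conv2DTranspose(96, 5, padding='same')(c12)
    d13 = layers.ReLU()(layers.Add()([d13, c11]))        # shortcut: Layer 11 -> Layer 13
    d14 = layers.Conv2DTranspose(96, 5, padding='same', activation='relu')(d13)
    d15 = layers.Conv2DTranspose(96, 5, padding='same')(d14)
    d15 = layers.ReLU()(layers.Add()([d15, c9]))         # shortcut: Layer 9 -> Layer 15
    d16 = layers.Conv2DTranspose(96, 5, padding='same', activation='relu')(d15)
    d17 = layers.Conv2DTranspose(1, 5, padding='same')(d16)
    x   = layers.ReLU()(layers.Add()([d17, z]))          # shortcut: FBP layer -> Layer 17
    return tf.keras.Model(z, x)
```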

II-D Loss function and training

The loss function of our whole network is composed of two terms:

$$ L(\theta) = \sum_{i=1}^{M} \big\| F(y_i;\theta) - x_i \big\|_2^2 + \lambda \sum_{i=1}^{M} \big\| F_1(y_i;\theta_1) - s_i \big\|_2^2, \tag{1} $$

where $\theta = (\theta_1, \theta_2)$ are the parameters to be learned, $\{(y_i, s_i, x_i)\}_{i=1}^{M}$ is a set of associated data in which $y_i$ represents the input sinogram, $s_i$ is a sinogram label and $x_i$ is a CT image label, $F_1(y_i;\theta_1)$ is the output of the sinogram domain network, $F(y_i;\theta)$ is the output of the whole network, and $\lambda$ is a balance parameter. Our network is an end-to-end system mapping sinograms to CT images. Once the architecture of the network is configured, its parameters can be learned by optimizing the loss function (1) using the backpropagation (BP) algorithm [33]. In this study, the loss function is optimized by the Adam algorithm [34] with a fixed learning rate.
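The following is a minimal sketch of the loss (1) and one Adam training step; the learning-rate value and the value of lambda are placeholders we assume for illustration, since the paper fixes them per experiment.

```python
import tensorflow as tf

lam = 1.0  # balance parameter lambda in (1); the paper's values are experiment-specific
opt = tf.keras.optimizers.Adam(learning_rate=1e-4)  # assumed learning rate

def loss_fn(s_hat, x_hat, s_label, x_label):
    # image-domain term + lambda * sinogram-domain term, as in (1)
    return (tf.reduce_sum(tf.square(x_hat - x_label)) +
            lam * tf.reduce_sum(tf.square(s_hat - s_label)))

@tf.function
def train_step(model, y, s_label, x_label):
    with tf.GradientTape() as tape:
        s_hat, x_hat = model(y, training=True)  # F1 output and whole-network output
        loss = loss_fn(s_hat, x_hat, s_label, x_label)
    grads = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```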

II-E Gradients and backpropagation

In our experiments, we use TensorFlow to train the network and compute the gradients of the loss function with respect to its parameters. During the training process, two main gradients need to be calculated, $\partial L/\partial \theta_2$ and $\partial L/\partial \theta_1$, where $\partial L/\partial \theta_2$ can be computed by TensorFlow automatically while $\partial L/\partial \theta_1$ needs more effort. By the chain rule, we have

$$ \frac{\partial L}{\partial \theta_1} = \frac{\partial L}{\partial z}\,\frac{\partial z}{\partial s}\,\frac{\partial s}{\partial \theta_1}, $$

where $s = F_1(y;\theta_1)$ is the output of the sinogram domain network and $z = F_{FBP}(s)$ is the output of the FBP layer that decodes the sinograms into CT images. The gradients $\partial L/\partial z$ and $\partial s/\partial \theta_1$ can be calculated by TensorFlow automatically. The FBP layer itself can be constituted by wrapping the "iradon" function of the "scikit-image" package with "@tf.function" in TensorFlow. However, when TensorFlow attempts to compute the gradient of this version of the FBP layer automatically, an error is raised. To solve this issue, we implement the FBP layer using sparse matrix multiplication.

Let $\mathbf{s} \in \mathbb{R}^{N_d N_v}$ be the vectorization of a sinogram $s \in \mathbb{R}^{N_d \times N_v}$, i.e.

$$ \mathbf{s} = \mathrm{vec}(s); $$

then there exists a real sparse matrix $B$ such that the matrix multiplication $B\mathbf{s}$ equals the vectorization of the backprojection of $s$. Since the FBP method reconstructs CT images by convolving the sinogram with a filter followed by a backprojection, the output of our FBP layer for input $s$ can be written as

$$ z = F_{FBP}(s) = B\,\mathrm{vec}(h \circledast s), $$

where $h$ is the "ramp filter", i.e. $\hat{h}(\omega) = |\omega|$, $\hat{h}$ is the Fourier transform of $h$, and $h \circledast s$ means that each column of $s$ is circularly convolved with $h$. After constructing the FBP layer with the sparse matrix $B$ and filter $h$, we can compute the gradient $\partial z/\partial s$:

$$ \frac{\partial z}{\partial s_{ij}} = B\,\mathrm{vec}\big(h \circledast E_{ij}\big), $$

where $s_{ij}$ is the $(i,j)$ entry of $s$ and $E_{ij}$ is a matrix with its $(i,j)$ entry being 1 and all others 0.
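A sketch of such a sparse-matrix FBP layer is given below: the ramp filtering is done as a circular convolution along the detector axis (multiplication in the Fourier domain), and the backprojection is a sparse matrix product through which TensorFlow can differentiate, since tf.sparse.sparse_dense_matmul has a registered gradient. Building the sparse matrix B offline from the scan geometry, and matching its column ordering to the vectorization used here, are assumptions of this sketch.

```python
import tensorflow as tf

class FBPLayer(tf.keras.layers.Layer):
    """Differentiable FBP: ramp filtering + sparse-matrix backprojection."""
    def __init__(self, B, ramp_hat, n, **kwargs):
        super().__init__(**kwargs)
        self.B = B                                       # tf.SparseTensor, (n*n, n_det*n_views)
        self.ramp_hat = tf.cast(ramp_hat, tf.complex64)  # |omega|, FFT of ramp filter, (n_det,)
        self.n = n                                       # side length of the output image

    def call(self, s):
        s = tf.squeeze(s, axis=-1)                   # (batch, n_det, n_views)
        # circularly convolve each sinogram column with the ramp filter via the FFT
        s_f = tf.signal.fft(tf.cast(tf.transpose(s, [0, 2, 1]), tf.complex64))
        filt = tf.math.real(tf.signal.ifft(s_f * self.ramp_hat))
        filt = tf.transpose(filt, [0, 2, 1])         # back to (batch, n_det, n_views)
        vec = tf.reshape(filt, [tf.shape(s)[0], -1]) # vectorize each filtered sinogram
        # backproject: z = B vec(h * s); gradients flow through the sparse matmul
        z = tf.sparse.sparse_dense_matmul(self.B, vec, adjoint_b=True)  # (n*n, batch)
        return tf.reshape(tf.transpose(z), [-1, self.n, self.n, 1])
```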

III Experimental results

We now present some simulation results.

III-A Data preparation

  1. Training dataset. The Clinical Proteomic Tumor Analysis Consortium Lung Adenocarcinoma (CPTAC-LUAD) [35] dataset was downloaded from The Cancer Imaging Archive (TCIA). We randomly chose 500 CT images from CPTAC-LUAD, extracted their central patches and linearly rescaled the pixel values. The 500 extracted patches were then used as the CT image labels $x_i$, and the sinogram labels were generated by

    $$ s_i = A x_i, $$

    where $A$ is the Radon transform. The input sinograms $y_i$ were generated by adding noise to $s_i$ via the following equations [36] (a simulation sketch is given after this list):

    $$ \hat{y}_i = \mathrm{Poisson}\big(I_0\, e^{-s_i}\big) + \mathrm{Normal}\big(0, \sigma^2\big), \qquad y_i = -\ln\big(\hat{y}_i / I_0\big), \tag{2} $$

    where $I_0$ is the incident X-ray intensity and $\sigma^2$ is the variance of the electronic noise; both were fixed in our experiments.

  2. Test dataset. The Pancreas-CT [37] dataset was downloaded from TCIA. Pancreas-CT contains 82 abdominal contrast-enhanced 3D CT scans from 53 male and 27 female subjects. Since the CT images in Pancreas-CT are of size $512\times 512$, we downsampled them by a factor of 2 and randomly chose 500 of them as the test data. The input sinograms of the test set were generated in the same way as the training sinograms, via equation (2).
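The sketch below illustrates the simulation in (2), using scikit-image's radon function for the Radon transform. The values of $I_0$ and $\sigma$, and the assumption that the image has been rescaled so that its line integrals are moderate, are illustrative stand-ins rather than the paper's exact settings.

```python
import numpy as np
from skimage.transform import radon

rng = np.random.default_rng(0)
I0, sigma = 1e5, 5.0   # assumed incident photon count and electronic-noise std

def simulate_pair(image, angles=np.arange(180.0)):
    """Return (sinogram label s, noisy input sinogram y) for one image."""
    s = radon(image, theta=angles)                      # s = A x, the clean label
    counts = rng.poisson(I0 * np.exp(-s)) + rng.normal(0.0, sigma, s.shape)
    counts = np.clip(counts, 1.0, None)                 # keep the log well defined
    y = -np.log(counts / I0)                            # post-log noisy sinogram
    return s, y
```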

III-B Reconstructed results for 180 angles

In this subsection, we demonstrate that our network can be trained to reconstruct CT images from sparse-view sinograms. To this end, we sampled the sinograms at 180 angles ([0:1:180]) uniformly spaced in $[0^\circ, 180^\circ)$ to get the simulated data $s_i$ and $y_i$. Therefore, in this set of experiments, the numbers of angles of the input sinograms and of the sinogram labels are both 180. For comparison, reference CT images were reconstructed by the state-of-the-art deep learning methods RED-CNN [23], DD-Net [25] and FBP-Conv [24]. For these compared methods, we used the standard FBP algorithm with the "Ram-Lak" filter to reconstruct the CT images from the sinograms as their inputs. The balance parameter $\lambda$ in our loss function was fixed for this set of experiments. In Figure 2, the results reconstructed from the test data are compared, from which we can observe that the result of DD-Net still contains some artifacts, while those reconstructed by the other methods have a similar visual quality. In Figure 3, we present the absolute difference images between the reconstructed images and the original. We can see that the image reconstructed by our method loses the fewest details compared to the other methods.

(a) Original
(b) FBP
(c) RED-CNN
(d) DD-Net
(e) FBP-Conv
(f) Ours
Fig. 2: The reconstructed results of the compared methods for 180 angles.
(a) RED-CNN
(b) DD-Net
(c) FBP-Conv
(d) Ours
Fig. 3: The absolute difference images between the reconstructed and the original for 180 angles.

To evaluate the performance of these networks objectively, PSNR and SSIM are used to measure the similarity between the reconstructed images and the original. Table I lists the average PSNR and SSIM over the test dataset for the five methods (including FBP), from which we can observe that our network achieves the highest average PSNR and SSIM.

Method     PSNR    SSIM
FBP        32.21   0.788
RED-CNN    36.08   0.929
DD-Net     34.24   0.889
FBP-Conv   35.74   0.928
Ours       36.33   0.933
TABLE I: The average PSNR and SSIM of the compared methods for 180 angles.

III-C Reconstructed results for 90 angles

In this subsection, sparser sinogram data are used to examine the ability of our network to reconstruct CT images. We first sampled the sinograms at 90 angles ([0:2:180]) uniformly spaced in $[0^\circ, 180^\circ)$, then added noise to the samples via equation (2), and finally interpolated them along the angle direction to 180 angles to get the input sinograms $y_i$. The sinogram labels were obtained by sampling the sinograms at 180 angles ([0:1:180]). Thus, the actual number of angles of the input sinograms is 90 while that of the labels is 180. The balance parameter $\lambda$ in our loss function was again fixed for this set of experiments. We also compared our results to those of RED-CNN [23], DD-Net [25] and FBP-Conv [24]. Figure 4 shows the results of the compared methods reconstructed from the test dataset. We can see that the results of DD-Net and FBP-Conv contain some noise and artifacts, while those of RED-CNN and ours have the best visual quality. Similarly, we display the absolute difference images between the reconstructed results and the original in Figure 5, from which we can observe that the result of our network preserves more details.

(a) Original
(b) FBP
(c) RED-CNN
(d) DD-Net
(e) FBP-Conv
(f) Ours
Fig. 4: The reconstructed results of the compared methods for 90 angles.
(a) RED-CNN
(b) DD-Net
(c) FBP-Conv
(d) Ours
Fig. 5: The absolute difference images between the reconstructed and the original for 90 angles.

Quantitative analysis of the reconstruction results over the entire test dataset has also been carried out for these methods. The average PSNR and SSIM of the results are listed in Table II. We can observe that our network clearly outperforms the other methods, with the highest average PSNR and SSIM.

Method     PSNR    SSIM
FBP        28.33   0.593
RED-CNN    33.58   0.896
DD-Net     32.05   0.833
FBP-Conv   33.84   0.898
Ours       34.39   0.918
TABLE II: The average PSNR and SSIM of the compared methods for 90 angles.

III-D Effect of the parameter λ

In this subsection, we test the effect of the balance parameter $\lambda$ in our loss function. First, we test its effect on the sinograms of 180 angles. We trained the network with three increasing values of $\lambda$ (denoted $\lambda_1 < \lambda_2 < \lambda_3$) on the training data of 180 angles. The average PSNR and SSIM of the sinograms and CT images reconstructed from the test dataset are listed in Table III. From Table III, we can see that the network with the smallest value $\lambda_1$ reconstructs the best CT images, but its subnetwork $F_1$ has the least denoising effect. Conversely, the subnetwork with the largest value $\lambda_3$ denoises well, but the average PSNR of its final CT images is slightly lower than that obtained with $\lambda_1$.

Next, we trained our network using the training data of 90 angles with the same three values of $\lambda$. The average PSNR and SSIM of the reconstructed sinograms and CT images for the different $\lambda$ are listed in Table IV. We can observe that the subnetwork with a large $\lambda$ outputs sinograms of better quality, but the CT images reconstructed by the whole network with the intermediate value $\lambda_2$ have the highest average PSNR, which differs from the 180-angle case. This may be because, when training with sinograms of 90 angles, the corresponding labels have 180 angles; a moderately larger $\lambda$ then helps the subnetwork reconstruct better sinograms, and these better sinograms in turn have a positive effect on the final CT image reconstruction.

           Sinograms        CT images
λ          PSNR    SSIM     PSNR    SSIM
λ1         51.43   0.993    36.33   0.933
λ2         56.00   0.997    36.13   0.931
λ3         56.12   0.998    36.28   0.933
TABLE III: The effect of the parameter λ on sinograms of 180 angles.
           Sinograms        CT images
λ          PSNR    SSIM     PSNR    SSIM
λ1         52.96   0.996    34.15   0.918
λ2         53.27   0.996    34.39   0.918
λ3         56.12   0.996    34.32   0.917
TABLE IV: The effect of the parameter λ on sinograms of 90 angles.

IV Conclusion

In this paper, we proposed an end-to-end deep network for CT image reconstruction that takes sinograms as input and outputs the reconstructed CT images. The network consists of two blocks linked by an FBP layer: the former block pre-processes the sinograms (denoising and upsampling), the FBP layer decodes the sinograms into CT images, and the latter block post-processes the CT images (removing noise and artifacts). By using sparse matrix multiplication, we addressed the problem of computing the gradients of the FBP layer with respect to the parameters of the first block. Experimental results demonstrated that our method outperforms state-of-the-art deep learning methods in CT reconstruction. One reason why our network performs better than the others is that we train it with extra information, namely the sinogram labels.

References

  • [1] E. J. Hall and D. J. Brenner, “Cancer risks from diagnostic radiology: the impact of new epidemiological data,” BRITISH JOURNAL OF RADIOLOGY, vol. 85, no. 1020, pp. E1316–E1317, DEC 2012.
  • [2] M. Balda, J. Hornegger, and B. Heismann, “Ray Contribution Masks for Structure Adaptive Sinogram Filtering,” IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 31, no. 6, pp. 1228–1239, JUN 2012.
  • [3] S. Li, Q. Cao, Y. Chen, Y. Hu, L. Luo, and C. Toumoulin, “Dictionary learning based sinogram inpainting for CT sparse reconstruction,” OPTIK, vol. 125, no. 12, pp. 2862–2867, 2014.
  • [4] H. Lee, J. Lee, H. Kim, B. Cho, and S. Cho, “Deep-Neural-Network-Based Sinogram Synthesis for Sparse-View CT Image Reconstruction,” IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES, vol. 3, no. 2, SI, pp. 109–119, MAR 2019.
  • [5] E. Y. Sidky and X. Pan, “Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization,” PHYSICS IN MEDICINE AND BIOLOGY, vol. 53, no. 17, pp. 4777–4807, SEP 7 2008.
  • [6] S. Niu, Y. Gao, Z. Bian, J. Huang, W. Chen, G. Yu, Z. Liang, and J. Ma, “Sparse-view x-ray CT reconstruction via total generalized variation regularization,” PHYSICS IN MEDICINE AND BIOLOGY, vol. 59, no. 12, pp. 2997–3017, JUN 21 2014.
  • [7] Y. Zhang, Y. Wang, W. Zhang, F. Lin, Y. Pu, and J. Zhou, “Statistical iterative reconstruction using adaptive fractional order regularization,” BIOMEDICAL OPTICS EXPRESS, vol. 7, no. 3, pp. 1015–1029, MAR 1 2016.
  • [8] Y. Zhang, W.-H. Zhang, H. Chen, M.-L. Yang, T.-Y. Li, and J.-L. Zhou, “Few-view image reconstruction combining total variation and a high-order norm,” INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, vol. 23, no. 3, pp. 249–255, SEP 2013.
  • [9] Y. Chen, D. Gao, C. Nie, L. Luo, W. Chen, X. Yin, and Y. Lin, “Bayesian statistical reconstruction for low-dose X-ray computed tomography using an adaptive-weighting nonlocal prior,” COMPUTERIZED MEDICAL IMAGING AND GRAPHICS, vol. 33, no. 7, pp. 495–500, OCT 2009.
  • [10] Q. Xu, H. Yu, X. Mou, L. Zhang, J. Hsieh, and G. Wang, “Low-Dose X-ray CT Reconstruction via Dictionary Learning,” IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 31, no. 9, pp. 1682–1697, SEP 2012.
  • [11] P. Bao, W. Xia, K. Yang, W. Chen, M. Chen, Y. Xi, S. Niu, J. Zhou, H. Zhang, H. Sun, Z. Wang, and Y. Zhang, “Convolutional Sparse Coding for Compressed Sensing CT Reconstruction,” IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 38, no. 11, pp. 2607–2619, NOV 2019.
  • [12] J.-F. Cai, X. Jia, H. Gao, S. B. Jiang, Z. Shen, and H. Zhao, “Cine Cone Beam CT Reconstruction Using Low-Rank Matrix Factorization: Algorithm and a Proof-of-Principle Study,” IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 33, no. 8, pp. 1581–1591, AUG 2014.
  • [13] S. Ramani and J. A. Fessler, “Parallel MR Image Reconstruction Using Augmented Lagrangian Methods,” IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 30, no. 3, pp. 694–706, MAR 2011.
  • [14] Z. Li, L. Yu, J. D. Trzasko, D. S. Lake, D. J. Blezek, J. G. Fletcher, C. H. McCollough, and A. Manduca, “Adaptive nonlocal means filtering based on local noise level for CT denoising,” MEDICAL PHYSICS, vol. 41, no. 1, JAN 2014.
  • [15] M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation,” IEEE TRANSACTIONS ON SIGNAL PROCESSING, vol. 54, no. 11, pp. 4311–4322, NOV 2006.
  • [16] Y. Chen, X. Yin, L. Shi, H. Shu, L. Luo, J.-L. Coatrieux, and C. Toumoulin, “Improving abdomen tumor low-dose CT images using a fast dictionary learning based processing,” PHYSICS IN MEDICINE AND BIOLOGY, vol. 58, no. 16, pp. 5803–5820, AUG 21 2013.
  • [17] P. Fumene Feruglio, C. Vinegoni, J. Gros, A. Sbarbati, and R. Weissleder, “Block matching 3D random noise filtering for absorption optical projection tomography,” PHYSICS IN MEDICINE AND BIOLOGY, vol. 55, no. 18, pp. 5401–5415, SEP 21 2010.
  • [18] D. Kang, P. Slomka, R. Nakazato, J. Woo, D. S. Berman, C. C. J. Kuo, and D. Dey, “Image Denoising of Low-radiation Dose Coronary CT Angiography by an Adaptive Block-Matching 3D Algorithm,” in MEDICAL IMAGING 2013: IMAGE PROCESSING, ser. Proceedings of SPIE, vol. 8669, Lake Buena Vista, FL, FEB 2013.
  • [19] I. J. Goodfellow, Y. Bengio, and A. Courville, Deep Learning.   Cambridge, MA, USA: MIT Press, 2016, http://www.deeplearningbook.org.
  • [20] J. Schmidhuber, “Deep learning in neural networks: An overview,” NEURAL NETWORKS, vol. 61, pp. 85–117, JAN 2015.
  • [21] M. I. Jordan and T. M. Mitchell, “Machine learning: Trends, perspectives, and prospects,” SCIENCE, vol. 349, no. 6245, SI, pp. 255–260, JUL 17 2015.
  • [22] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” NATURE, vol. 521, no. 7553, pp. 436–444, MAY 28 2015.
  • [23] H. Chen, Y. Zhang, M. K. Kalra, F. Lin, Y. Chen, P. Liao, J. Zhou, and G. Wang, “Low-Dose CT With a Residual Encoder-Decoder Convolutional Neural Network,” IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 36, no. 12, pp. 2524–2535, DEC 2017.
  • [24] K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep Convolutional Neural Network for Inverse Problems in Imaging,” IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 26, no. 9, pp. 4509–4522, SEP 2017.
  • [25] Z. Zhang, X. Liang, X. Dong, Y. Xie, and G. Cao, “A Sparse-View CT Reconstruction Method Based on Combination of DenseNet and Deconvolution,” IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 37, no. 6, SI, pp. 1407–1417, JUN 2018.
  • [26] Z. Jiang, Y. Chen, Y. Zhang, Y. Ge, F.-F. Yin, and L. Ren, “Augmentation of CBCT Reconstructed From Under-Sampled Projections Using Deep Learning,” IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 38, no. 11, pp. 2705–2715, NOV 2019.
  • [27] E. Kang, W. Chang, J. Yoo, and J. C. Ye, “Deep Convolutional Framelet Denosing for Low-Dose CT via Wavelet Residual Network,” IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 37, no. 6, SI, pp. 1358–1369, JUN 2018.
  • [28] B. Zhu, J. Z. Liu, S. F. Cauley, B. R. Rosen, and M. S. Rosen, “Image reconstruction by domain-transform manifold learning,” NATURE, vol. 555, no. 7697, pp. 487–492, MAR 22 2018.
  • [29] Y. Li, K. Li, C. Zhang, J. Montoya, and G.-H. Chen, “Learning to Reconstruct Computed Tomography Images Directly From Sinogram Data Under A Variety of Data Acquisition Conditions,” IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 38, no. 10, pp. 2469–2481, OCT 2019.
  • [30] H. Chen, Y. Zhang, Y. Chen, J. Zhang, W. Zhang, H. Sun, Y. Lv, P. Liao, J. Zhou, and G. Wang, “LEARN: Learned Experts’ Assessment-Based Reconstruction Network for Sparse-Data CT,” IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 37, no. 6, SI, pp. 1333–1347, JUN 2018.
  • [31] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising,” IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 26, no. 7, pp. 3142–3155, JUL 2017.
  • [32] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in 2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, pp. 770–778.

  • [33] R. Hecht-Nielsen, “Theory of the backpropagation neural network,” in IJCNN: International Joint Conference on Neural Networks, Washington, DC, USA, JUN 1989, pp. 593–605 vol. 1.
  • [34] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [35] CPTAC-LUAD dataset, The Cancer Imaging Archive, https://wiki.cancerimagingarchive.net/display/Public/CPTAC-LUAD.
  • [36] X. Zheng, S. Ravishankar, Y. Long, and J. A. Fessler, “PWLS-ULTRA: An Efficient Clustering and Learning-Based Approach for Low-Dose 3D CT Image Reconstruction,” IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 37, no. 6, SI, pp. 1498–1510, JUN 2018.
  • [37] Pancreas-CT dataset, The Cancer Imaging Archive, https://wiki.cancerimagingarchive.net/display/Public/Pancreas-CT.