X-ray computed tomography (CT) has been widely used in clinics to visualize the internal organs of patients for diagnostic purposes. However, the radiation dose involved in X-ray CT scans poses a potential health concern, since it may induce genetic damage, cancer, and other diseases. Consequently, following the As Low As Reasonably Achievable (ALARA) principle is of utmost importance in clinics when determining the radiation dose level of a CT scan. One of the dominant ways to reduce the dose is to decrease the exposure level of each projection angle. However, a lower exposure level inevitably introduces stronger quantum noise into the CT image, possibly making accurate diagnosis and organ contouring impossible and decreasing the clinical value. Thus, denoising algorithms are essential for low dose CT (LDCT) image quality enhancement.
There has been a surge of research in this direction, which can roughly be divided into three categories: projection domain-based denoising[36, 24], image domain-based denoising[10, 9], and regularized iterative reconstruction[2, 41, 11, 34, 6, 29, 35, 40]. Because it is more convenient to access CT images directly from the picture archiving and communication system (PACS), image domain-based denoising algorithms are becoming increasingly popular. In this work, we also devote our efforts to this category to enhance the image quality of LDCT.
Inspired by the unprecedented success of convolutional neural network (CNN)-based deep learning (DL) techniques on various image processing[4, 17, 25, 33, 22] and recognition[1, 30, 12, 14, 13] tasks, the medical imaging community has widely adopted them to denoise CT images[7, 8, 27, 42, 43, 28, 3, 38], achieving state-of-the-art denoising performance. For instance, Chen et al. devised a deep CNN consisting of several repeated convolutional modules to learn a mapping from the LDCT image to the normal dose CT (NDCT) image. To facilitate more efficient information flow and compensate for the potential loss of spatial resolution, they further proposed a new architecture based on the well-known residual learning scheme, called the residual encoder-decoder (RED), attaining better denoising performance. This improvement benefited from the more advanced architecture design. Indeed, it is well recognized that the network architecture is one of the dominant factors affecting an algorithm's performance, since it directly determines the quality of the features, which are an abstract representation of the image. Moreover, the development of CNN-based deep learning over the past decade can be organized in terms of the evolution of the associated architectures: from the original AlexNet, which contains several convolutional and fully connected layers, to the VGG network, which substantially increased the network depth, to the Inception network, which considered multiscale structures, to the well-known ResNet, which proposed residual connections to facilitate information propagation, and to the feature pyramid network (FPN), which fuses low-level and high-level features. Many other architecture variants also exist, such as SENet, DenseNet, and ResNeXt.
As network architectures have evolved, algorithm performance has improved substantially; for example, the top-5 error rate on the well-known ImageNet 1K dataset was reduced from 20.91% (AlexNet) to 5.47% (ResNeXt). (These figures come from the model zoo of the official PyTorch implementations; see https://pytorch.org/docs/stable/torchvision/models.html for details.)
Regarding our LDCT image denoising task, the goal is to suppress the noise effectively while preserving the resolution as much as possible, since noise and resolution are the two most important, yet competing, metrics for CT image quality evaluation. High-level features are the key to effective noise suppression, as they aggregate information from a large receptive field, while low-level features are essential for resolution preservation. Therefore, extracting high-quality low- and high-level features and fusing them effectively are extremely important when designing CNN architectures for LDCT image denoising. Along this technical line, U-Net is perhaps the best-known architecture: it adds skip connections to facilitate the fusion of the low-level and high-level features. Despite its great success across various tasks and datasets, when applied to LDCT image denoising, the U-Net architecture has been found to still cause significant resolution loss, decreasing the clinical value of the denoised CT image. There is therefore still room to further enhance the quality of the denoised CT image by employing a more advanced network architecture that can extract and fuse both low-level and high-level features more effectively.
Recently, a high-resolution network (HRNet)-based representation learning method was proposed, attaining state-of-the-art performance on various computer vision tasks, such as human pose estimation, image classification, object detection, and segmentation. Specifically, HRNet uses multiple branches to extract multiscale features, which are then fused internally. This architecture design ensures a highly efficient combination of low- and high-level features during the forward propagation process, and thus produces high-quality features for the downstream task.
To the best of our knowledge, this architecture has not yet been applied to denoising tasks. We believe HRNet is particularly suitable for medical image denoising, where high resolution is highly desirable to preserve the fidelity of anatomical structures. Realizing this, in this work we introduce HRNet into the medical image processing field and verify its superior denoising performance on LDCT images. We hope this newly introduced architecture can also inspire other researchers in the medical imaging field to boost the performance of their own tasks.
2 Methods and Materials
Let us first mathematically formulate our problem. Given an LDCT image x ∈ R^N whose noise component is n, the LDCT denoising task is to restore the underlying clean image y, where both the noisy and clean images are vectorized and N denotes the number of pixels. Without loss of generality, we can assume the noise is additive:

x = y + n.    (1)
From the DL viewpoint, problem (1) can be regarded as an image-to-image translation problem that can be solved by learning a CNN f_W parameterized by W. Given a pre-collected training dataset {(x_i, y_i)}, i = 1, ..., S, of size S, where i indexes the training samples, the parameters W can be estimated by minimizing the following mean squared error (MSE)-based cost function:

L(W) = (1/S) Σ_{i=1}^{S} || f_W(x_i) − y_i ||².    (2)
It should be noted that, in practice, one can never access a truly clean image free of noise, since quantum noise is inevitable. Therefore, the clean image in the training dataset is usually replaced with an NDCT image that is itself contaminated by mild noise. In this work, we also adopt this relaxed but practical experimental setting to train our network.
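The minimization of the MSE cost above can be sketched in PyTorch as follows. This is a minimal illustrative sketch, not the authors' implementation; any image-to-image model (e.g., the HRNet described later) can be plugged in as `model`.

```python
import torch
import torch.nn as nn

def mse_training_step(model, optimizer, ldct_batch, ndct_batch):
    # One gradient step on the MSE cost of Eq. (2):
    #   L(W) = (1/S) * sum_i || f_W(x_i) - y_i ||^2,
    # with NDCT images standing in for the unavailable clean images.
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(ldct_batch), ndct_batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```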
For completeness, we also illustrate the architecture of HRNet here.
As shown in Figure (1), four branches are used to extract features at different scales. The starting feature of each scale is computed by applying a stride-two convolutional layer to the feature of the previous scale. Features from the different scales are processed stage by stage; each stage contains two convolutional layers, as indicated by the solid arrows in Figure (1). Before proceeding to the next stage, the features from the different scales are first fused together. In this work, this fusion is performed by feature summation. More specifically, if the previous feature is larger than the current feature, for example when fusing features from branch 1 into branch 3, one or multiple stride-two convolutional layers downsample the previous feature to the size of the current feature; fusing branch 1 into branch 3 spans two scales and therefore uses two consecutive stride-two convolutional layers. Conversely, if the previous feature is smaller than the current feature, bilinear interpolation is used to upsample it. After the last stage, the features from all scales are concatenated to generate the final feature; features of smaller size are first bilinearly interpolated to the original input image size. A predictor is attached to the last layer to output the denoised image.
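The sum-based multi-scale fusion described above can be sketched in PyTorch as follows. This is a simplified sketch: a single shared channel width and one reusable stride-two convolution are assumptions made for brevity, whereas the real network uses separate convolutions and per-branch channel counts.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SumFusion(nn.Module):
    """Sketch of HRNet-style sum-based fusion across branches.

    Assumes every branch shares `channels` feature maps, and
    features[k] has spatial size (H / 2**k, W / 2**k)."""

    def __init__(self, channels=32):
        super().__init__()
        # One stride-two conv reused for every halving step in this sketch.
        self.down = nn.Conv2d(channels, channels, 3, stride=2, padding=1)

    def forward(self, features, target_idx):
        target = features[target_idx]
        fused = target
        for k, f in enumerate(features):
            if k == target_idx:
                continue
            if k < target_idx:
                # Finer feature: repeated stride-two convs, one per scale gap.
                for _ in range(target_idx - k):
                    f = self.down(f)
            else:
                # Coarser feature: bilinear upsampling to the target size.
                f = F.interpolate(f, size=target.shape[-2:],
                                  mode="bilinear", align_corners=False)
            fused = fused + f
        return fused
```

For example, fusing three branches of sizes 64, 32, and 16 onto the middle branch downsamples the first feature once and upsamples the third, then sums all three.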
In this work, all convolutional layers except the predictor consist of three operators: a convolution operator, an instance normalization operator, and a rectified linear unit (ReLU). The predictor consists of a convolution operator and a ReLU. Features at the same scale share the same channel number, as detailed in Figure (1). Since X-ray CT images are gray-scale, the input and output channel numbers are one.
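The basic layer composition above can be written as small PyTorch building blocks. The 3x3 kernel size is an assumption for illustration; the paper does not restate the kernel sizes here.

```python
import torch
import torch.nn as nn

def conv_unit(in_ch, out_ch):
    # Basic convolutional layer used throughout the network:
    # convolution + instance normalization + ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

def predictor(in_ch):
    # Final predictor: convolution + ReLU only (no normalization);
    # one output channel since CT images are gray-scale.
    return nn.Sequential(
        nn.Conv2d(in_ch, 1, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )
```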
2.2.1 Training and validation datasets
In this work, we used the publicly released AAPM Low Dose CT Grand Challenge dataset, which consists of contrast-enhanced abdominal CT examinations, for model training (https://www.aapm.org/grandchallenge/lowdosect/). Specifically, NDCT projections were first acquired following routine clinical protocols at the Mayo Clinic using automated exposure control and automated tube potential selection, with a reference tube potential of 120 kV and a quality reference effective exposure of 200 mAs. Poisson noise was then inserted into the NDCT projection data to reach a noise level corresponding to 25% of the full dose, yielding the LDCT projection data. Both the NDCT and LDCT projection data were reconstructed into the image domain with filtered back projection (FBP), producing the NDCT and LDCT images. The Challenge host provided four different reconstructions for both dose levels, combining two slice thicknesses and two reconstruction kernels (sharp or not). In this work, we only used the sharp-kernel reconstructions to test our model's performance.
The official training dataset contains data from ten patients. For evaluation, we further randomly split them into eight patients for training and two for validation. All the data involved in this work are 2D slices. In total, there are 4800 and 1136 2D CT slice images in the training and validation datasets, respectively.
For showcase purposes, we chose four representative slices from the validation dataset to demonstrate and compare the denoising performance. The first slice checks the denoising performance for the abdominal site, which dominates the training dataset. The second slice corresponds to the liver, one of the most important abdominal organs. The third slice was selected from the lung region, which was only partially covered by the CT scan. The fourth slice contains a radiologist-confirmed low-contrast lesion.
2.2.2 Testing dataset
We used one patient's data from the testing dataset of the above Challenge to verify the model's performance. Specifically, we first rebinned the raw helical-scan projection data into fan-beam projection data, which was then reconstructed into the CT image domain with FBP. Note that the training dataset was reconstructed with a commercial implementation of the FBP algorithm, while our testing dataset was reconstructed with our in-house FBP algorithm. Thus, despite the same noise distribution in the projection domain, there is a certain domain gap in the image domain between the testing and training datasets due to different implementation details, such as different filter kernels.
2.2.3 Training details
The input and target images were the LDCT and NDCT images, respectively. Both images were first normalized by dividing by 2000, so that most image intensities fall into the range 0-1. Note that the pixel values of the original CT images are CT numbers shifted by 1000 HU.
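The normalization described above amounts to a simple linear mapping, sketched here for clarity:

```python
import numpy as np

def normalize(ct_stored):
    # Map stored CT values (Hounsfield units shifted by +1000)
    # into roughly [0, 1] by dividing by 2000, as done before
    # feeding images to the network.
    return np.asarray(ct_stored, dtype=np.float32) / 2000.0

def denormalize_to_hu(x):
    # Invert the normalization back to Hounsfield units.
    return x * 2000.0 - 1000.0
```

For example, water (0 HU, stored as 1000) maps to 0.5, and a stored value of 2000 maps to 1.0.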
To enlarge the dataset, random translation (maximum 128 pixels along each axis), cropping, and rotation were adopted to augment the training samples. Zero-padding was employed when necessary to ensure a consistent final image size.
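A paired augmentation of this kind can be sketched as follows. Several details are assumptions: the output size of 512, the restriction of rotation to multiples of 90 degrees (the paper's exact rotation range is not given), and the use of a cyclic shift to stand in for zero-filled translation.

```python
import numpy as np

def augment_pair(ldct, ndct, rng, max_shift=128, out_size=512):
    # Apply the SAME random transform to input and target so the
    # pair stays aligned. Illustrative sketch only.
    k = int(rng.integers(0, 4))                       # right-angle rotation
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)

    def transform(img):
        img = np.rot90(img, k)
        # Cyclic shift stands in for zero-filled translation in this sketch.
        img = np.roll(img, (dy, dx), axis=(0, 1))
        # Zero-pad (or crop) to the fixed final size.
        h, w = img.shape
        out = np.zeros((out_size, out_size), dtype=img.dtype)
        hh, ww = min(h, out_size), min(w, out_size)
        out[:hh, :ww] = img[:hh, :ww]
        return out

    return transform(ldct), transform(ndct)
```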
The Adam optimizer was used for training. The network was trained for 100K iterations; the learning rate was reduced twice, at iterations 50K and 75K. The batch size was set to 1. The PyTorch framework was used to train the network.
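The optimizer and learning-rate schedule described above can be sketched as follows. The base learning rate of 1e-4 and the decay factor of 0.1 are illustrative assumptions; only the milestones (50K and 75K iterations) come from the text.

```python
import torch

def make_optimizer(model, base_lr=1e-4):
    # Adam with step-wise learning-rate decay at iterations 50K and 75K.
    # base_lr and gamma=0.1 are illustrative values, not the paper's.
    opt = torch.optim.Adam(model.parameters(), lr=base_lr)
    sched = torch.optim.lr_scheduler.MultiStepLR(
        opt, milestones=[50_000, 75_000], gamma=0.1)
    return opt, sched
```

`sched.step()` is called once per training iteration so the milestones are counted in iterations rather than epochs.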
2.2.4 Comparison studies
For comparison, we also implemented a baseline network architecture: U-Net.
Specifically, in the encoder part, the output channel number of the input layer is 32. Successive stride-two convolutional layers extract high-level image features, and the channel number doubles after each downsampling until it reaches 512. The encoder contains nine layers, producing a bottleneck feature whose receptive field covers the whole input image. In the decoder part, concatenation is used to fuse the low-level and high-level features, and the output channel number of the last layer is one. As in HRNet, each convolutional layer contains three operators: convolution, instance normalization, and ReLU. All other training details for the U-Net were the same as for the HRNet introduced above.
2.2.5 Evaluation metrics
In this work, two quantitative metrics, the root-mean-square error (RMSE) and the structural similarity index (SSIM), are calculated to evaluate all the results. The RMSE is defined as:

RMSE = sqrt( (1/N) Σ_{i=1}^{N} (x̂_i − y_i)² ),

where x̂ and y represent the evaluated image and the associated NDCT image, when available.
The SSIM is defined as:

SSIM = ((2 μ_x̂ μ_y + c1)(2 σ_x̂y + c2)) / ((μ_x̂² + μ_y² + c1)(σ_x̂² + σ_y² + c2)),

where μ_x̂ and σ_x̂² represent the mean value and the variance of the denoised image, the same notation applies to the NDCT image, and σ_x̂y denotes their covariance. The constants c1 and c2 are selected to stabilize the calculation.
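Both metrics can be sketched in NumPy. Note that `global_ssim` below computes SSIM over the whole image as a single window, a simplification of the usual local-window SSIM; the c1 and c2 values are illustrative stabilizers, not the paper's.

```python
import numpy as np

def rmse_hu(denoised, ndct):
    # Root-mean-square error against the NDCT reference,
    # in the same units as the inputs (HU here).
    d = np.asarray(denoised, dtype=np.float64)
    r = np.asarray(ndct, dtype=np.float64)
    return float(np.sqrt(np.mean((d - r) ** 2)))

def global_ssim(x, y, c1=1e-4, c2=9e-4):
    # Single-window SSIM over the whole image (simplified sketch).
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```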
Moreover, we also use the contrast-to-noise ratio (CNR) to quantify lesion detectability. The CNR is defined as:

CNR = |μ_fg − μ_bg| / sqrt(σ_fg² + σ_bg²),

where μ_fg and σ_fg correspond to the mean value and the standard deviation of the foreground (fg) region of interest (ROI), respectively, and the same naming rule applies to the background (bg) ROI.
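The CNR computation over two ROI masks can be sketched as follows. The denominator here is one common form (the root sum of squared standard deviations); the paper's exact denominator may differ, as noted in the definition above.

```python
import numpy as np

def cnr(image, fg_mask, bg_mask):
    # Contrast-to-noise ratio of a foreground ROI against a background ROI.
    # Denominator form is an assumption: sqrt(sigma_fg^2 + sigma_bg^2).
    fg = np.asarray(image, dtype=np.float64)[fg_mask]
    bg = np.asarray(image, dtype=np.float64)[bg_mask]
    return float(abs(fg.mean() - bg.mean())
                 / np.sqrt(fg.std() ** 2 + bg.std() ** 2))
```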
2.2.6 Noise analysis
As mentioned above, the model is trained with paired LDCT-NDCT images, where both the input LDCT and the target NDCT images are contaminated by noise. More specifically, the target NDCT image contains realistic quantum noise (denoted as target noise hereafter), while the input LDCT image contains both the target noise inherited from the NDCT image as well as the extra added simulated noise (denoted as added noise hereafter). It will be interesting and valuable to analyze whether the trained denoiser can remove both the target noise and the added noise.
We first define the difference between the input LDCT image and the denoised output as the removed noise. We use the cosine correlations between the removed noise and the added or target noise to characterize the composition of the removed noise. Since the target noise is coupled with the underlying clean image, and a certain amount of structure may be present in the removed noise, the cosine correlations are computed on the high-frequency components, where noise dominates. In this work, the high-frequency components are obtained by masking out the central low-frequency region of the Fourier domain. Moreover, we also calculate the projection lengths of the removed noise onto the target noise and the added noise to quantify how much target noise and/or added noise is removed.
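The two analysis steps can be sketched in NumPy. The low-frequency cutoff fraction below is an assumption (the exact frequency band is not restated here), and the projection length is expressed as a fraction of the reference noise norm.

```python
import numpy as np

def highpass(img, cutoff=0.1):
    # Keep only high-frequency Fourier components by zeroing a central
    # low-frequency square of half-width cutoff * image size.
    spec = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    cy, cx = h // 2, w // 2
    ry, rx = int(cutoff * h), int(cutoff * w)
    spec[cy - ry:cy + ry, cx - rx:cx + rx] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec)))

def cosine_and_projection(removed, reference):
    # Cosine correlation between two noise images, and the projection
    # length of `removed` onto `reference` as a fraction of ||reference||.
    a, b = np.asarray(removed).ravel(), np.asarray(reference).ravel()
    cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    proj_frac = float((a @ b) / (np.linalg.norm(b) ** 2))
    return cos, proj_frac
```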
As a showcase, all the slices from one patient in the validation dataset are used to analyze the noise correlations.
Figure (2) presents the denoised results for a validation slice from the abdominal site. Both denoisers effectively suppress the noise, as shown in Figures (2)(b) and (2)(c). Compared to the NDCT image in Figure (2)(d), the UNet-based denoiser produces an image with much inferior resolution, while the introduced HRNet-based denoiser yields an image (Figure (2)(c)) with much better resolution than the UNet result, though slightly inferior to the NDCT image. These phenomena are clearly observed in the zoomed-in views displayed in the second row of Figure (2). The noise level of the HRNet result is also much lower than that of the NDCT image. Another interesting finding is that the UNet-based denoiser produces strong artifacts around the bone boundary, as indicated by the red arrows in the third row of Figure (2); by contrast, the HRNet-based denoiser faithfully restores the underlying structures in that region.
Figure (3) shows the denoised results for the liver. The strong noise in the LDCT image overwhelms the fine vessels in the liver, and either denoiser suppresses it effectively. The zoomed-in views in the second row of Figure (3) reveal that the HRNet-based denoiser produces an image with more details, suggesting higher resolution, than the UNet-based denoiser. Moreover, the noise in the HRNet result is weaker than in the UNet result, indicating the stronger denoising ability of the HRNet. Compared to the NDCT image, as indicated by the red arrows, some vessels are even easier to distinguish in the HRNet result than in the NDCT image, which itself suffers from quantum noise.
Figure (4) depicts the denoised results for the lung slice. In this case, both the LDCT and NDCT images clearly show the locations and shapes of the lung nodules. After processing by the UNet-based denoiser, the resolution decreases substantially even though the noise is effectively removed. The image generated by the HRNet-based denoiser exhibits much weaker noise with comparable, or only slightly inferior, resolution relative to the LDCT image. Indeed, as indicated by the red arrows in Figure (4), the small lung nodules have clearer structures than in the LDCT image, which is contaminated by strong quantum noise.
Figure (5) compares the different denoising results on the slice that contains a low-contrast lesion, as indicated by the solid green box. Unsurprisingly, it is not easy to delineate this lesion from the surrounding normal tissues in the noisy LDCT image. This challenge is greatly alleviated by either denoiser, as shown in Figures (5)(b) and (5)(c). Inspecting both denoised images further, we observe that the HRNet-based denoiser leads to an image with a much more natural noise texture. This is clearly visible in the zoomed-in views in the second row of Figure (5). For quantitative comparison, we also calculate the CNRs of this low-contrast lesion on the different images: 0.32 (LDCT), 2.45 (UNet), 2.70 (HRNet), and 0.90 (NDCT), further verifying the superior denoising performance of the introduced HRNet-based denoiser.
Table (1): RMSE and SSIM values for the showcases in Figures (2)-(5) and for the whole validation dataset.
To further quantify the denoising performance of both denoisers, we also calculate the RMSE and SSIM values for each showcase in Figures (2) to (5) that has a referenced NDCT image, as listed in Table (1). Both denoisers dramatically reduce the RMSE values, with the HRNet-based denoiser providing an extra improvement over the UNet-based denoiser. Similar findings can be observed from the SSIM metric. For a more comprehensive comparison, we calculate and plot the RMSE and SSIM values for all the slices in the validation dataset, as shown in Figure (6). Examination of this plot reveals that the introduced HRNet-based denoiser outperforms the UNet-based denoiser on almost all evaluated slices, although both denoisers substantially enhance the image quality. As an overall quantitative comparison, we also compute the average RMSE and SSIM values over the whole validation dataset. As tabulated in Table (1), the RMSE decreases from 113.80 HU (LDCT) to 59.87 HU with the UNet-based denoiser, and further to 55.24 HU with the HRNet-based denoiser. Again, the SSIM metric also validates the image quality enhancement, increasing from 0.550 (LDCT) to 0.712 (UNet) and 0.745 (HRNet).
The denoising results for a slice in the testing dataset are shown in Figure (7). The HRNet-based denoiser delivers an image with sharper edges and a comparable, if not lower, noise level relative to the UNet-based denoiser.
The noise analysis result is shown in Figure (8). The difference image (Figure (8)(b)) between the input and the denoised images contains almost only noise, suggesting that the introduced HRNet-based denoiser preserves the structural details very well. The corresponding Fourier spectrum in Figure (8)(c) suggests that the removed noise is dominated by low-frequency and high-frequency components, while the middle-frequency noise is effectively removed. After masking out the low-frequency part (Figure (8)(d)) and inverse transforming back to the image domain (Figure (8)(e)), the noise contains only the high-frequency component, which can be clearly observed by comparing Figures (8)(b) and (8)(e). Since the low-frequency component is excluded, the noise power in Figure (8)(e) is lower than that in Figure (8)(b). The cosine correlations among the different noise components are plotted in Figure (8)(d). The target noise and the extra added noise are almost orthogonal, with a cosine correlation of around -0.08, close to zero. The correlation between the removed noise and the extra added noise is around 0.9, while the correlation between the removed noise and the target noise is around 0.3. These two values indicate that most of the removed noise comes from the extra added noise, while the denoiser also removes some noise from the target image. The projection length from the removed noise onto the extra added noise is calculated to be 78.46%, while that onto the target noise is 61.31%. After subtracting both projections from the removed noise, the residual is computed to contain 11.37% of the energy of the removed noise.
4 Discussions and Conclusions
The goal of any CT denoiser is to suppress noise as much as possible while preserving anatomical details as well as possible. Deep learning-based denoisers have proved very effective for noise suppression by automatically extracting image features, leading to state-of-the-art denoising performance. However, the feature quality depends highly on the model architecture. For the denoising task, both low-level and high-level features are important: the former are key for detail preservation, while the latter are essential for effective noise suppression using context information from a large scale. The encoder-decoder architecture is efficient for high-level feature extraction but lacks low-level information. The UNet improves the low-level feature quality to some extent via skip connections, but still cannot provide low-level features of sufficient quality to faithfully restore fine details. This probably explains why the UNet-based denoiser produces oversmoothed structures even though it suppresses noise effectively. By contrast, the introduced HRNet generates both high-quality low-level and high-level features by using different branches to extract features at different levels and allowing the features from different levels to be fused together. The experimental results verified the superiority of this architecture design, showing that HRNet can effectively remove noise while also preserving fine anatomical structures very well.
We note that for some cases, the results from HRNet are even better than the NDCT image, such as in the low-contrast lesion detection task shown in Figure (5). This might be because the denoiser removes not only the added simulated noise but also the noise inherited from the target NDCT image. The noise analysis demonstrated above supports this hypothesis, with the observation that the HRNet-based denoiser removes 61.31% of the target noise from the NDCT image. It should be noted that we are not claiming that the HRNet-based denoiser delivers an image whose quality surpasses that of the NDCT image. In fact, when the task is to visualize high-contrast details that are robust to noise but sensitive to resolution, such as the lung nodules shown in Figure (4), the structures in the HRNet result are slightly oversmoothed compared to the NDCT image.
It is well known that data-driven deep learning models may suffer from the generalizability problem when there is a distribution gap between the training and testing environments. In this work, the models were tested only on simulated datasets whose data distribution is similar to that of the training dataset, apart from potential differences caused by different reconstruction parameters. More realistic testing datasets are required for further performance evaluation before the clinical translation of this model.
In summary, in this work we introduced an HRNet-based denoiser to enhance the image quality of LDCT images. Benefiting from the high-quality low- and high-level features extracted by HRNet, it delivers enhanced images with effectively suppressed noise and well-preserved details. Compared to the UNet-based denoiser, HRNet produces images with higher resolution. Quantitative experiments showed that the introduced HRNet-based denoiser improves the RMSE/SSIM values from 113.80 HU/0.550 (LDCT) to 55.24 HU/0.745, outperforming the UNet-based denoiser (59.87 HU/0.712).
The data used in this paper can be publicly accessed from the official website (https://www.aapm.org/grandchallenge/lowdosect/) of the AAPM Low Dose CT Challenge.
We would like to thank Varian Medical Systems Inc. for supporting this study and Dr. Jonathan Feinberg for editing the manuscript. We also would like to thank Dr. Ge Wang from Rensselaer Polytechnic Institute for his constructive discussions and comments.
-  (2012) Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105. Cited by: §1.
-  (2017) Z-index parameterization for volumetric ct image reconstruction via 3-d dictionary learning. IEEE Transactions on Medical Imaging 36 (12), pp. 2466–2478. External Links: Cited by: §1.
-  (2020) Probabilistic self-learning framework for low-dose ct denoising. ArXiv arXiv:2006.00327. Cited by: §1.
-  Real-time video super-resolution with spatio-temporal networks and motion compensation. pp. 2848–2857. Cited by: §1.
-  (2015) Development and validation of an open data format for ct projection data. Medical physics 42 (12), pp. 6964–6972. External Links: Cited by: §2.2.1.
-  (2008) Prior image constrained compressed sensing (piccs): a method to accurately reconstruct dynamic ct images from highly undersampled projection data sets. Medical physics 35 (2), pp. 660–663. External Links: Cited by: §1.
-  (2017) Low-dose ct with a residual encoder-decoder convolutional neural network. IEEE Transactions on Medical Imaging 36 (12), pp. 2524–2535. External Links: Cited by: §1.
-  (2017) Low-dose ct via convolutional neural network. Biomedical Optics Express 8 (2), pp. 679–694. External Links: Cited by: §1.
-  (2007) Image denoising by sparse 3-d transform-domain collaborative filtering. IEEE Transactions on image processing 16 (8), pp. 2080–2095. External Links: Cited by: §1.
-  Image denoising with block-matching and 3d filtering. In Image Processing: Algorithms and Systems, Neural Networks, and Machine Learning, Vol. 6064, pp. 606414. Cited by: §1.
-  (2002) Statistical image reconstruction for polyenergetic x-ray computed tomography. IEEE transactions on medical imaging 21 (2), pp. 89–99. External Links: Cited by: §1.
-  (2015) Fast r-cnn. Computer Science. Cited by: §1.
-  (2017) Mask r-cnn. IEEE Transactions on Pattern Analysis and Machine Intelligence PP (99), pp. 1–1. Cited by: §1.
-  (2015) Deep residual learning for image recognition. Computer Science. Cited by: §1.
-  In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7132–7141. Cited by: §1.
-  (2016) Densely connected convolutional networks. Cited by: §1.
-  (2016) Video super-resolution with convolutional neural networks. IEEE Transactions on Computational Imaging 2 (2), pp. 109–122. Cited by: §1.
-  (2014) Very deep convolutional networks for large-scale image recognition. ArXiv 1409.1556. Cited by: §1.
-  Deep high-resolution representation learning for human pose estimation. ArXiv preprint. Cited by: §1.
-  (2014) Adam: a method for stochastic optimization. Computer Science. Cited by: §2.2.3.
-  (2015) Deep learning. Nature 521 (7553), pp. 436–44. Cited by: §1.
-  (2018) Noise2noise: learning image restoration without clean data. arXiv preprint arXiv:1803.04189. Cited by: §1.
-  Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2117–2125. Cited by: §1.
-  (2009) Projection space denoising with bilateral filtering and ct noise modeling for dose reduction in ct. Medical physics 36 (11), pp. 4911–4919. External Links: Cited by: §1.
-  (2016) Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections. Cited by: §1.
-  U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241. Cited by: §1.
-  3-d convolutional encoder-decoder network for low-dose ct via transfer learning from a 2-d trained network. IEEE Transactions on Medical Imaging 37 (6), pp. 1522–1534. Cited by: §1.
-  (2019) Competitive performance of a modularized deep neural network compared to commercial algorithms for low-dose ct image reconstruction. Nature Machine Intelligence 1 (6), pp. 269–269. Cited by: §1.
-  (2008) Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization. Physics in Medicine and Biology 53 (17), pp. 4777. External Links: Cited by: §1.
-  (2014) Very deep convolutional networks for large-scale image recognition. Computer Science. Cited by: §1.
-  (2016) Inception-v4, inception-resnet and the impact of residual connections on learning. Cited by: §1.
-  (2016) Instance normalization: the missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022. Cited by: §2.1.
-  (2017) Deep image prior. Cited by: §1.
-  (2006) Penalized weighted least-squares approach to sinogram noise reduction and image reconstruction for low-dose x-ray computed tomography. IEEE transactions on medical imaging 25 (10), pp. 1272–1283. External Links: Cited by: §1.
-  (2009) Iterative image reconstruction for cbct using edge‐preserving prior. Medical physics 36 (1), pp. 252–260. External Links: Cited by: §1.
-  Sinogram noise reduction for low-dose ct by statistics-based nonlinear filters. In Medical Imaging 2005: Image Processing, Vol. 5747, pp. 2058–2066. Cited by: §1.
-  (2020) Deep high-resolution representation learning for visual recognition. IEEE transactions on pattern analysis and machine intelligence. External Links: Cited by: §1.
-  (2020) Low-dose ct image denoising using parallel-clone networks. ArXiv 2005.06724v1. Cited by: §1.
-  Aggregated residual transformations for deep neural networks. In IEEE conference on computer vision and pattern recognition. Cited by: §1.
-  (2012) Low-dose x-ray ct reconstruction via dictionary learning. IEEE transactions on medical imaging 31 (9), pp. 1682–1697. External Links: Cited by: §1.
-  (2014) Towards the clinical implementation of iterative low-dose cone-beam ct reconstruction in image-guided radiation therapy: cone/ring artifact correction and multiple gpu implementation. Med Phys 41 (11), pp. 111912. External Links: Cited by: §1.
-  Low-dose ct image denoising using a generative adversarial network with wasserstein distance and perceptual loss. IEEE Transactions on Medical Imaging 37 (6), pp. 1348–1357. Cited by: §1.
-  (2018) Structurally-sensitive multi-scale deep neural network for low-dose ct denoising. IEEE Access 6, pp. 41839–41855. External Links: Cited by: §1.
-  (2004) Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13 (4), pp. 600–612. External Links: Cited by: §2.2.5.