Deep Learning for Low-Dose CT Denoising

02/25/2019 · by Maryam Gholizadeh-Ansari, et al. · Ryerson University · Saskatoon Health Region

Low-dose CT denoising is a challenging task that has been studied by many researchers. Some studies have used deep neural networks to improve the quality of low-dose CT images and achieved fruitful results. In this paper, we propose a deep neural network that uses dilated convolutions with different dilation rates instead of standard convolution, which helps capture more contextual information in fewer layers. We also employ residual learning, creating shortcut connections to transmit image information from the early layers to later ones. To further improve the performance of the network, we introduce a non-trainable edge detection layer that extracts edges in the horizontal, vertical, and diagonal directions. Finally, we demonstrate that optimizing the network with a combination of mean-square error loss and perceptual loss preserves many structural details in the CT image. This objective function does not suffer from the over-smoothing and blurring effects caused by per-pixel loss, nor from the grid-like artifacts resulting from perceptual loss. The experiments show that each modification to the network improves the outcome while only minimally changing its complexity.




1 Introduction

Computed tomography (CT) is an accurate and non-invasive method to detect internal abnormalities of the body such as tumors, bone fractures, and vascular disease. It has been widely used by clinicians to diagnose and monitor conditions such as cancer, lung disease, and abnormalities of the internal organs.

As CT images are produced by transmitting X-ray beams through the body, there has been growing concern about the risk of CT radiation. The amount of exposure during one session of a CT scan is much higher than a conventional X-ray. For example, the radiation that a patient receives in a chest X-ray radiography is equal to about 10 days of background radiation radiation-dose . Background radiation is the amount of radiation that a person gets from cosmic and natural sources in daily life. During a chest CT scan, the radiation exposure is equal to two years of background radiation radiation-dose . Therefore, the radiation risk is much higher in computed tomography, especially for those who require multiple CT scans. While radiation affects all age groups, children are more vulnerable than adults because of their developing bodies and longer lifespans. Research has found that children who receive cumulative doses from multiple head scans have an increased risk (up to three times) of diseases such as leukemia and brain tumors radiation-risks .

Considering the advantages of CT scans for diagnosis, it is critical to find a solution that minimizes radiation. One approach to decreasing the radiation risk is to use a lower X-ray tube current; however, these CT images have increased noise and may not be as diagnostic.

In recent years, much research has been conducted to enhance the quality of reconstructed CT images. Researchers have followed three paths to remove noise from low-dose CT images: processing the raw data obtained from sinograms (projection space denoising), iterative reconstruction methods, and processing the reconstructed CT image (image space denoising) CT-denoising-methods .

In projection space denoising, the noise removal algorithm is applied to the CT sinogram data obtained from low-dose X-ray beams. Sinogram data, also called projection or raw data, is a 2-D signal that represents the sum of the attenuation coefficients along a beam passing through the body. The noise distribution of the low-dose CT image in the projection space can be well characterized poisson-noise1 ; poisson-noise2 , which makes the noise removal task simpler. Some researchers have applied traditional noise removal techniques to this data, including bilateral filtering before image reconstruction bilateral ; sinogram-denoising . These methods incorporate system physics and photon statistics to reduce both noise and artifacts. However, this makes the algorithm vendor dependent. These methods also need access to sinogram data, which is generally not available on many commercial CT scanners. Finally, these techniques must be implemented on the scanner reconstruction system, which increases the cost of denoising CT-denoising-methods .

Iterative reconstruction methods are another means to improve the quality of low-dose CT images MBIR ; SAFIRE1 . In these methods, the data is transformed between the image domain and the projection space multiple times to optimize the objective function. In the first step, a CT image is reconstructed using the projection data and then transformed back to the projection space. In each iteration, the projection data generated from the reconstructed CT image is compared with the actual data from the scanner, and the reconstruction is improved accordingly. The process stops when the convergence criteria are met. These methods may take into account the system model geometry, photon counting statistics, and the X-ray beam spectrum, and they usually outperform projection space denoising methods. Iterative techniques are capable of removing artifacts and providing good spatial resolution. However, similar to the previous group, they need access to the projection data, are vendor dependent, and must be implemented on the reconstruction system of the scanner. Moreover, the process is slow, and the computational cost of multiple iterations is high CT-denoising-methods .

Unlike the previous methods, image space denoising algorithms do not require the projection data. They work directly on the reconstructed CT images, are generally fast, independent of the scanner vendor, and can be easily integrated into the workflow. Many of the proposed algorithms in this category are adopted from natural image processing. KSVD ksvd is a dictionary learning algorithm based on sparse representation. It is used for tasks such as image denoising, feature extraction, and image compression, and it has been employed in some studies to improve the quality of low-dose CT scans abdomen-ldct-ksvd ; abhari-ldct-ksvd . Non-local means non-local-means is another algorithm initially proposed for image denoising that has also been used for low-dose CT image enhancement ldct-non-local-means . The method calculates a weighted mean of the pixels in the image based on their similarity to the target pixel. The state-of-the-art block matching 3D (BM3D) bm3d was also proposed for dealing with natural image noise. It is similar to non-local means but works in a transform domain such as the wavelet or discrete Fourier transform. The first step of BM3D is to group patches of the image that have a similar local structure, stack them, and form a 3-dimensional array. After transforming the data, a filter is applied to remove the noise. This method has been followed in some studies to perform low-dose CT noise removal ldct-bm3d-1 ; ldct-bm3d-2 .

Recently, many advances have been made in image processing using deep learning (DL). The high computational capacity of GPUs, in combination with techniques such as batch normalization batch-normalization and residual learning residual-learning , has made training deep networks possible. Some of the proposed networks have outperformed traditional methods in challenging tasks such as image segmentation, image recognition, and image enhancement. Medical imaging has also benefited from this advancement. One of the first networks to reduce the noise of low-dose CT images was proposed by Chen et al. chen . It was inspired by a network designed for image super-resolution with three convolutional layers sr-cnn . Convolutional auto-encoders have been used in ldct-encoder-1 and ldct-encoder-2 , while the latter also takes advantage of residual learning. All of the mentioned networks offer an end-to-end solution for low-dose noise removal: they receive a low-dose CT image as input and predict the normal-dose CT image as output. In contrast, Kang et al. first find the wavelet coefficients of the low-dose and normal-dose CT images ldct-wavelet . These wavelet coefficients are then given to a 24-layer convolutional network as data (input) and labels (output), and the inverse wavelet transform is performed on the output to obtain the normal-dose CT image.

Generative adversarial networks (GAN) are a group of deep neural networks first introduced by Goodfellow gan . A GAN has two sub-networks, a generative network (G) and a discriminative network (D), that are trained simultaneously. The discriminative network is responsible for differentiating real data from fake data, while the generative network tries to create fake data as close as possible to the real data and fool the discriminator. Generative adversarial networks have attracted much interest, and researchers have applied them to different fields such as text-to-image synthesis gan-text-to-image , image super-resolution gan-sr , and video generation gan-video . GANs have also been used to remove noise from low-dose CT images ldct-gan-1 ; ldct-gan-2 ; sharpness-aware , where the generative network receives the low-dose CT images and generates normal-dose-appearing images that the discriminative network cannot distinguish from real normal-dose images.

Figure 1:

Architecture of the proposed network. BN stands for batch normalization, i-Dilated Conv represents a convolution operator with dilation rate i (i = 2, 3, 4), and the activation function is the rectified linear unit (ReLU). The operator c⃝ performs concatenation.

In this paper, we propose a deep neural network to remove noise from low-dose CT images; Figure 1 displays this network. One approach to achieving higher performance in deep learning is to increase the number of layers, which became possible after the introduction of residual learning residual-learning and batch normalization batch-normalization . However, more layers essentially mean more parameters and higher computational cost. In this research, we have looked for methods that enhance the efficiency of the network without adding to its complexity. For this purpose, our network employs batch normalization, residual learning, and dilated convolution to perform denoising. We have also introduced an edge detection layer that improves the results with little increase in the number of training parameters; it extracts edges in four directions and helps to enhance the performance. Finally, we have shown that optimizing the mean-square error as the loss function does not capture all the texture details of the CT image. For this reason, we have used a combination of perceptual loss and mean-square error (MSE) as the objective function, which significantly improves the visual quality of the output and keeps the structural details. The perceptual loss is used in GANs to generate fake images that are visually close to the target image by comparing the feature maps of the two images. Yang et al. ct-perceptual have used the perceptual loss for CT image denoising, but they compared the predicted image and the ground truth with only one group of feature maps. In this study, feature maps are extracted from four blocks of a pre-trained VGG-16 vgg16 and used as a comparison tool in conjunction with the mean-square error.

2 Methods

2.1 Low-Dose CT Simulation

One of the challenges in applying machine learning techniques to the medical domain is the shortage of training samples. A neural network learns the probability distribution of the data from the samples it sees during training. If there are insufficient samples to train the network for all conditions, the predictions will not be accurate. To train a network for low-dose denoising, we generally need normal-dose and low-dose image pairs, and obtaining such a dataset is not easy. For this reason, we have generated a simulated low-dose dataset from normal-dose CT images to be used for training, in addition to two other datasets that we had.

According to the literature, the dominant noise of a low-dose CT image in the projection domain has a Poisson distribution poisson-noise1 ; poisson-noise2 . Therefore, to simulate a low-dose CT image, we have added Poisson noise to the sinogram data of the normal-dose image. The following steps show this procedure simulate-low-dose-matlab ; simulate-low-dose :

  1. Compute the Hounsfield unit (HU) values of the normal-dose CT image from its pixel values using equation 1 HU-convert (if the CT image has padding, it should be removed first),

    HU = pixel_value × slope + intercept,     (1)

    Figure 2: Simulation of a low-dose CT image from an upper abdominal CT image. a) Normal-dose CT image; b)–d) simulated low-dose images with different incident flux levels.
  2. Compute the linear attenuation coefficients μ based on the water attenuation coefficient μ_water,

    μ = μ_water × (1 + HU/1000),     (2)

  3. Obtain the projection data p for the normal-dose image by applying the Radon transform R to the linear attenuation coefficients, p = R{μ}. To eliminate the size factor, this should be multiplied by the voxel size,

  4. Compute the normal-dose transmission data, T = exp(−p),

  5. Generate the low-dose transmission T_ld by injecting Poisson noise simulate-low-dose ,

    T_ld = Poisson(N_0 × exp(−p)),     (3)

    here, N_0 is the simulated low-dose scan incident flux.

  6. Calculate the low-dose projection data, p_ld = −ln(T_ld / N_0),

  7. Find the projection of the added Poisson noise, p_noise = p_ld − p,

  8. Compute the linear attenuation of the low-dose CT image, μ_ld = R⁻¹{p_ld} / voxel size,

    where R⁻¹ represents the inverse Radon transform.

  9. Finally, apply the inverse of equation 2 to find the Hounsfield unit values for the low-dose CT image. Figure 2 demonstrates a normal-dose image and the simulated low-dose images with different incident flux levels N_0.
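The noise-injection core of steps 4–6 can be sketched in a few lines of NumPy. The Radon and inverse Radon transforms (steps 3 and 8) are omitted here and would come from a library such as scikit-image; the incident flux value below is illustrative, not the one used in the paper.

```python
import numpy as np

def inject_poisson_noise(p, n0, rng=None):
    """Simulate low-dose projection data from normal-dose projections p.

    p  : projection (sinogram) data of the normal-dose image
    n0 : incident photon flux of the simulated low-dose scan (assumed value)
    Returns the noisy low-dose projection data (step 6).
    """
    rng = rng or np.random.default_rng(0)
    counts = n0 * np.exp(-p)            # steps 4-5: expected photon counts
    noisy = rng.poisson(counts)         # step 5: Poisson noise injection
    noisy = np.maximum(noisy, 1)        # avoid log(0) for fully absorbed rays
    return -np.log(noisy / n0)          # step 6: back to the projection domain

# With a large flux, the noisy projections stay close to the originals.
p = np.linspace(0.1, 2.0, 100)
p_low = inject_poisson_noise(p, n0=1e5)
```

Lowering `n0` increases the relative Poisson noise, which is exactly how the different simulated dose levels in Figure 2 are produced.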

2.2 Dilated Convolution

Dilated convolution was introduced to deep learning in 2015 dilated ; atrous to increase the receptive field faster. The receptive field (RF) is the region of the input image that is used to calculate an output value; a larger receptive field means that more contextual information from the input image is captured. The classical methods to grow the receptive field are employing pooling layers, larger filters, and more layers in the network. A pooling layer performs downsampling and is a powerful technique to increase the receptive field. Although it is widely used in classification tasks, a pooling layer is not recommended in denoising or super-resolution tasks: downsampling may lead to the loss of useful details that cannot be recovered completely by upsampling methods such as transposed convolution DAE_skip_connection . Utilizing larger filters or more layers increases the number of parameters drastically, meaning larger memory resources are needed. Dilated convolution, also called atrous convolution, can increase the receptive field with just a fraction of the weights. One-dimensional dilated convolution is defined as


    y[i] = Σ_k w[k] · x[i + r·k],     (4)

where x and y are the input and the output of the dilated convolution, w represents the weight vector of the filter with length L, and r is the dilation rate.
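A minimal NumPy implementation of the one-dimensional dilated convolution defined above (valid positions only, no padding) makes the definition concrete:

```python
import numpy as np

def dilated_conv1d(x, w, r):
    """y[i] = sum_k w[k] * x[i + r*k]: 1-D dilated convolution,
    written as cross-correlation over the valid positions."""
    L = len(w)
    n_out = len(x) - r * (L - 1)      # the dilated filter spans r*(L-1)+1 samples
    return np.array([sum(w[k] * x[i + r * k] for k in range(L))
                     for i in range(n_out)])

x = np.arange(10.0)
y = dilated_conv1d(x, w=[1.0, 1.0, 1.0], r=2)   # taps 2 samples apart
# y = [6., 9., 12., 15., 18., 21.]
```

With r = 1 this reduces to ordinary convolution; increasing r widens the span of the filter without adding weights, which is the point of the technique.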

The receptive field of layer l (RF_l) with filter size k × k and dilation rate r can be computed from equation 5 dilated-residual ,

    RF_l = RF_(l−1) + (k − 1) · r,  with RF_0 = 1.     (5)


Equation 6 computes the number of parameters needed for an N-layer convolutional network with a filter size k × k,

    P = 2 · k² · c · f + (N − 2) · k² · f²,     (6)

here, f is the number of filters in each layer and c is the number of channels. For simplification, we assume all the layers have f filters and that the numbers of channels in the input and output images are the same. Table 1 compares the number of weights and layers needed to achieve a receptive field equal to 13 in different cases.
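Equation 6 can be checked numerically; the helper below reproduces the weight counts in Table 1 under the assumption of 64 filters per layer and single-channel input and output images:

```python
def conv_net_weights(k, n_layers, f=64, c=1):
    """Number of weights in an n_layers-deep convolutional network with
    k x k filters, f filters per layer, and c input/output channels
    (equation 6; biases are ignored)."""
    return 2 * k * k * c * f + (n_layers - 2) * k * k * f * f

# Six standard 3x3 layers reach RF = 13 with 148,608 weights,
# while four dilated 3x3 layers need only 74,880.
print(conv_net_weights(3, 6), conv_net_weights(3, 4))
```

The same formula also reproduces the 5 × 5 and 7 × 7 columns of Table 1, which supports the assumed values of f and c.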

Filter size                          3×3        5×5        7×7        3×3 dilated
Number of layers needed for RF=13    6          5          4          4
Number of weights                    148,608    310,400    407,680    74,880
Table 1: Number of training weights to obtain RF = 13 with different filter sizes. The number of filters in each layer is 64.

To better understand the capability of dilated convolution, Wang et al. replaced the standard convolutions in gaussian_noise with dilated convolutions and achieved comparable performance with only 10 layers instead of 17 dilated-residual .

In this research, we have used an 8-layer dilated convolutional network to remove noise from low-dose CT images. The proposed network was inspired by a study by Zhang et al. prior . The dilation rates used are 1, 2, 3, 4, 3, 2, 1, and 1 for layers 1 to 8.
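Applying the recurrence of equation 5 to these dilation rates gives the receptive field of the proposed network; the short sketch below assumes 3 × 3 filters, consistent with the dilated 3 × 3 column of Table 1:

```python
# Receptive field of the proposed 8-layer network under equation 5,
# assuming 3 x 3 filters (k = 3) and the dilation rates stated above.
rates = [1, 2, 3, 4, 3, 2, 1, 1]
rf = 1
for r in rates:
    rf += (3 - 1) * r      # RF_l = RF_(l-1) + (k - 1) * r
print(rf)                   # 35
```

So eight dilated layers see a 35 × 35 neighborhood of the input, far more than the 17 × 17 that eight standard 3 × 3 layers would cover.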

2.3 Residual Learning

One approach to improving the performance of a network is stacking more layers; nevertheless, researchers observed that networks with more layers do not always perform better. Contrary to expectations, the training loss can grow in a deeper network. This degradation problem implies that optimizing a deep network is not as easy as optimizing a shallow one. He et al. residual-learning proposed a residual learning framework to solve this problem by adding an identity mapping between the input and the output of a group of layers. Researchers have since investigated many combinations of adding shortcuts between different layers and achieved interesting results dense ; DAE_skip_connection .

In this study, we have exploited residual learning to improve the performance of the network. Our experiments showed that adding symmetric shortcuts between the bottom and top layers boosts the performance. As shown in Figure 1, the input image and the outputs of layers 2 and 3 are concatenated with the outputs of layers 7, 6, and 5, respectively. These connections pass the details of the image to higher layers, as the feature maps in the first layers contain more input information.

2.4 Edge Detection Layer

In image processing, edge detection refers to techniques that find the boundaries of objects in an image. Many of these methods search for discontinuities in image brightness, which are generally the result of an edge. Researchers have developed advanced algorithms to extract edges from images; in this study, we have adopted a simple edge detection technique to enhance the outcome of our network. The Sobel operator sobel computes the 2-D gradient of the image intensity and emphasizes regions of high spatial frequency by convolving the image with a small filter. The proposed edge detection layer is a convolutional layer that has four Sobel kernels as non-trainable filters. The output edge maps are concatenated with the input image and given to the network. Our experiments confirm that the edge detection layer improves the performance of the network.
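A minimal NumPy sketch of such a layer: four fixed 3 × 3 Sobel kernels (horizontal, vertical, and two diagonals; the diagonal variants shown are one common choice, not necessarily the paper's exact kernels) applied as non-trainable convolutions:

```python
import numpy as np

# Four fixed Sobel kernels: horizontal, vertical, and two diagonal gradients.
SOBEL_KERNELS = [
    np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),   # d/dx
    np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float),   # d/dy
    np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], float),   # diagonal
    np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]], float),   # anti-diagonal
]

def edge_detection_layer(img):
    """Return the four Sobel edge maps of img (cross-correlation, valid region).
    In the network, these maps are concatenated with the input image."""
    windows = np.lib.stride_tricks.sliding_window_view(img, (3, 3))
    return np.stack([np.tensordot(windows, k, axes=((2, 3), (0, 1)))
                     for k in SOBEL_KERNELS])

# A vertical step edge responds to the horizontal-gradient kernel only.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
maps = edge_detection_layer(img)   # shape (4, 6, 6)
```

Because the kernels are fixed, this layer adds edge information to the input at essentially no cost in trainable parameters.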

2.5 Objective Function

Mean-square error (MSE) is widely used as an objective function in low-level image processing tasks such as image denoising or image enhancement. MSE computes the difference in intensity between the pixels of the output and the ground-truth images, and it is used in many of the proposed algorithms for low-dose CT denoising. We started our research by optimizing MSE, but we noticed that the results do not express all the details of a CT image, despite the peak signal-to-noise ratio (PSNR) being relatively high. This problem has been seen in image super-resolution tasks too perceptual-loss ; however, it is more pronounced in DICOM CT images displayed with different grey-level mappings (windowing). Windowing helps to highlight the appearance of different structures and aid diagnosis. Our experiments showed that MSE loss generates blurred images that do not include all textural details.

Johnson et al. demonstrated that using a perceptual loss achieves visually appealing results perceptual-loss . To compute the perceptual loss, the ground-truth image and the predicted image are given to a pre-trained convolutional network, one at a time; the comparison is then made between the feature maps generated for the two images. VGG-16 vgg16 is a network pre-trained for classification on the ImageNet dataset imagenet and is commonly used to calculate the perceptual loss in generative adversarial networks.

In this study, we have incorporated both MSE and perceptual loss to optimize the network. Our experiments showed that using the perceptual loss alone results in a grid-like artifact in the output image; this effect has been observed by other researchers, too perceptual-loss . Therefore, we have combined both the per-pixel loss and the perceptual loss to enhance optimization.


    L = w_m · L_MSE + w_p · L_P,     (7)

where w_m and w_p are weighting scalars for the mean-square error loss and the perceptual loss, respectively. The mean-square error between the ground-truth image y and the denoised image ŷ from the proposed network is defined as

    L_MSE = (1 / (w · h)) · ‖y − ŷ‖²₂,

where w × h is the image size.

Similar to other studies, we have employed the VGG-16 network to measure the perceptual loss. In this study, we have extracted four groups of feature maps from the VGG-16 network at different layers and used them to calculate the perceptual loss. As Figure 3 demonstrates, we have used the output of the last convolutional layer (after ReLU activation and before the pooling layer) in blocks 1, 2, 3, and 4. The perceptual loss function is

    L_P = Σ_(b=1..4) (1 / (w_b · h_b · d_b)) · ‖φ_b(y) − φ_b(ŷ)‖²₂,

here, φ_b refers to the extracted feature maps from block b with size w_b × h_b × d_b.
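The combined objective can be sketched in a framework-agnostic way. Here `feature_blocks` stands for the four feature extractors φ_b (in the paper, block-1 to block-4 activations of a pre-trained VGG-16); the plain NumPy stand-ins below are only for illustration and are not VGG features:

```python
import numpy as np

def combined_loss(y, y_hat, feature_blocks, w_m=1.0, w_p=1.0):
    """w_m * MSE + w_p * perceptual loss (equation 7).

    feature_blocks: list of callables phi_b mapping an image to a feature map.
    np.mean implements the 1/(w_b * h_b * d_b) normalization per block.
    """
    mse = np.mean((y - y_hat) ** 2)
    perceptual = sum(np.mean((phi(y) - phi(y_hat)) ** 2)
                     for phi in feature_blocks)
    return w_m * mse + w_p * perceptual

# Illustrative stand-in "feature extractors" (NOT VGG-16): identity and gradient.
blocks = [lambda im: im, lambda im: np.diff(im, axis=0)]
y = np.ones((4, 4))
assert combined_loss(y, y, blocks) == 0.0   # identical images incur no loss
```

In training, the same expression would be written in Keras/TensorFlow so that gradients flow through the frozen VGG-16 feature extractors.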

Our experiments reveal that utilizing perceptual loss with the mean-square error greatly improves the visual characteristics of the output image.

Figure 3: Perceptual loss is computed by extracting the feature maps of blocks 1, 2, 3, and 4 from a pre-trained VGG-16 network.

3 Experiments Setup

In this study, we have used three datasets to evaluate the performance of the proposed network in removing noise from low-dose CT images: simulated dataset, real piglet dataset, and a thoracic-abdominal (Thoracic) dataset.

To create the simulated dataset, we downloaded lung CT scans lung-dataset of a patient from The Cancer Imaging Archive (TCIA) TCIA . Then, with the procedure explained in 2.1, we generated low-dose CT images from them. The incident flux N_0 of the simulated low-dose CT in equation 3 was set to a fixed value for all images.

The second dataset is a real dataset acquired from a deceased piglet and contains 900 images. The normal-dose and low-dose images were acquired at different X-ray tube currents, with the low-dose scan using a reduced current.

The Thoracic dataset thoracic-dataset includes pairs of CT images of an anthropomorphic thoracic phantom, with the normal-dose and low-dose images acquired at different tube currents but the same peak voltage and slice thickness.

In each dataset, most of the images are used for training the proposed network and the rest for testing. Contrary to other studies that built the test dataset randomly, our test dataset holds the last portion of images in the original dataset. The reason is that consecutive CT images are very similar to each other, and testing the network on a random subset does not clearly examine the effectiveness of the network on new images. Using the last portion of the CT images assures us that testing is performed on images that the network has not seen before. To prepare the data for training, the pixel values of the low-dose and normal-dose images are divided by a constant scale factor; this maps the data to a fixed range, which is suitable for training neural networks.

The CT images in all the mentioned datasets have the same original size. To boost the number of training samples, we have extracted overlapping patches from the images, with the patch size matched to the receptive field of the proposed network in each direction; this also helps to reduce the memory resources needed during training. Since the network is fully convolutional, the input size does not have to be fixed, so test images are fed to the network at their original size. To avoid boundary artifacts, zero padding is used in the convolutional operators prior . The activation function is the rectified linear unit, and all convolutional layers have the same number of filters except two layers, which have a single filter. To see how adding the edge detection layer and utilizing MSE and perceptual loss improve performance, we have trained three networks. Training of all the networks is performed with the Adam optimizer in two stages of 20 epochs each, the second with a reduced learning rate. Glorot normal initialization is used for the weights glorot . The implementation is based on Keras with the TensorFlow backend, on a system with an Intel Core CPU and a GeForce GTX graphics card.
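Overlapping patch extraction of this kind can be sketched with NumPy's sliding windows. The patch size and stride below are illustrative parameters, not the values used in the paper:

```python
import numpy as np

def extract_patches(img, patch, stride):
    """Extract overlapping patch x patch tiles from img with the given stride."""
    windows = np.lib.stride_tricks.sliding_window_view(img, (patch, patch))
    return windows[::stride, ::stride].reshape(-1, patch, patch)

# Illustrative numbers only: a 512 x 512 slice with 40 x 40 patches and
# stride 10 yields ((512 - 40) // 10 + 1)^2 = 48 * 48 = 2304 training patches.
slice_ = np.zeros((512, 512), dtype=np.float32)
patches = extract_patches(slice_, patch=40, stride=10)
print(patches.shape)   # (2304, 40, 40)
```

Because `sliding_window_view` returns a view, the stride slicing keeps memory use low until the final reshape copies the selected patches.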

4 Results

To evaluate the performance of the proposed network, we have compared the results with the state-of-the-art BM3D bm3d and the CNN200 network chen . As mentioned earlier, the initial idea of the proposed network was derived from prior , designed for image super-resolution. To investigate how each change in the network architecture affects the performance, we have made comparisons with three more networks.

The first network is designed to examine how residual learning enhances the outcome. This network is similar to the one in prior , but there are shortcuts between the outputs of layers 2 and 3 and the outputs of layers 6 and 5, respectively. We call this network DRL (dilated residual learning). The objective function for this network is mean-square error (MSE), and we demonstrate that adding shortcut connections improves the results.

In the second network, we have added the edge detection layer to the beginning of the network. This network is named DRL-E and is shown in Figure 1. We have optimized this network with three objective functions to investigate the effects of the choice of loss function on the results. First, the network is optimized by the MSE loss function, and we refer to it as DRL-E-M. Next, we optimized the network by perceptual loss; it is called DRL-E-P in this paper. Finally, the proposed network is trained with the objective function defined in Equation 7, which combines mean-square error and perceptual loss to learn the weights and achieve the best results. We refer to this combination as DRL-E-MP. To verify that the improvements are the result of the network alterations, all the networks are trained with equal learning rates and epochs, as explained in section 3.

For each algorithm, we have provided quantitative results proving that the proposed network DRL-E, with dilated convolutions, shortcut connections, and the edge detection layer, outperforms the other networks. Moreover, visual comparisons confirm that the proposed objective function further improves the perceptual aspects of the DRL-E network and conserves most of the details in the image.

4.1 Denoising results on simulated lung dataset


Method    Low-dose   BM3D     CNN200 chen   prior    DRL      DRL-E-M   DRL-E-P   DRL-E-MP
PSNR      14.59      24.76    33.19         33.74    34.17    36.64     33.47     35.57
SSIM      0.2008     0.6750   0.8768        0.8804   0.9281   0.9733    0.5880    0.6910
Table 2: The average PSNR and SSIM of the different algorithms for the Lung dataset.
(a) Low-Dose
(b) Normal-Dose
(c) BM3D
(d) CNN200 chen
(e) prior
(g) DRL-E-M
(h) DRL-E-P
(i) DRL-E-MP
Figure 4: Denoising results of the different algorithms on the Lung dataset in abdomen window with select regions magnified.
(a) Low-Dose
(b) Normal-Dose
(c) BM3D
(d) CNN200 chen
(e) prior
(f) DRL-E-MP
Figure 5: Denoising results of the different algorithms on Lung dataset in lung window.

Table 2 displays the average peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) of the state-of-the-art BM3D and six neural networks. Figures 4 and 5 give the visual results for the Lung dataset in two different windows. Windowing helps to visualize the details of CT images properly; here, we show the results in the lung and abdomen windows to distinguish the differences better. The abdomen window helps to distinguish small changes in density and displays more texture details. Since the lung is air-filled, it has very low density and appears black in the abdomen window. The lung window improves the visibility of the lung parenchyma and areas of consolidation.

Figures 4 and 5 demonstrate that the alterations of the network have enhanced the outcome step by step. This dataset demonstrates the effectiveness of perceptual loss. The results show that using MSE as an objective function generates smooth regions and affects the details in the texture. On the other hand, perceptual loss forces the output of the network to be perceptually similar to the ground-truth. However, training the network solely by perceptual loss generates grid-like artifacts in the output image. As the results of DRL-E-MP demonstrate, the combined objective function preserves most of the details in the textures and provides a better visual outcome.

As one can expect, exploiting perceptual loss does not improve PSNR. The reason is that high PSNR is the result of low MSE: a network trained to minimize MSE will always have a higher PSNR than a network trained to minimize the perceptual loss.
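This follows directly from the definition of PSNR, which is a monotone decreasing function of MSE; a short sketch:

```python
import numpy as np

def psnr(y, y_hat, data_range=1.0):
    """Peak signal-to-noise ratio: 10 * log10(MAX^2 / MSE).
    Lower MSE always means higher PSNR, so an MSE-trained network is
    favored by this metric by construction."""
    mse = np.mean((y - y_hat) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# A uniform error of 0.1 on a [0, 1] image gives MSE = 0.01, i.e. 20 dB.
y = np.zeros((16, 16))
print(round(psnr(y, y + 0.1), 6))   # 20.0
```

This is why the visual comparisons, rather than PSNR alone, are needed to judge the perceptual-loss variants fairly.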

4.2 Denoising results on real Piglet dataset

Table 3 displays the quantitative effects of performing the denoising on real low-dose CT images from the Piglet dataset, which confirms the results obtained from the simulated Lung dataset. Comparing the PSNR values of BM3D, CNN200, prior , DRL, and DRL-E demonstrates that when the objective function is MSE, the network with residual learning and the edge detection layer outperforms the other ones. Figure 6 provides a visual comparison between the outcomes. It reveals that joining perceptual loss and per-pixel loss further improves the images produced by the proposed network; DRL-E-MP better resembles the normal-dose CT image by reconstructing the fine details.


Method    Low-dose   BM3D     CNN200 chen   prior    DRL      DRL-E-M   DRL-E-P   DRL-E-MP
PSNR      39.93      41.46    44.18         44.83    44.96    45.10     44.01     44.12
SSIM      0.9705     0.9733   0.9804        0.9816   0.9881   0.9885    0.9782    0.9807
Table 3: The average PSNR and SSIM of the different algorithms for the Piglet dataset.
(a) Low-Dose
(b) Normal-Dose
(c) BM3D
(d) CNN200 chen
(e) prior
(f) DRL-E-M
(g) DRL-E-P
(h) DRL-E-MP
Figure 6: Denoising results of the different algorithms on an abdominal image from the Piglet dataset in abdomen window with select regions magnified.

4.3 Denoising results on phantom Thoracic dataset

Table 4 reports the PSNR and SSIM of denoising the Thoracic dataset by all the methods. The results obtained for this dataset are consistent with the other experiments. Figure 7 clearly exhibits the effects of each alteration. Comparing the results obtained by DRL and DRL-E-M confirms that the edge detection layer helps to deliver sharper and more precise edges; as explained before, the only difference between these two models is the edge detection layer.


Method    Low-dose   BM3D     CNN200 chen   prior    DRL      DRL-E-M   DRL-E-P   DRL-E-MP
PSNR      25.66      30.86    33.57         33.73    34.02    34.03     26.25     31.50
SSIM      0.4485     0.6552   0.8001        0.8018   0.8059   0.8049    0.4224    0.6381
Table 4: The average PSNR and SSIM of the different algorithms for the Thoracic dataset.
(a) Low-Dose
(b) Normal-Dose
(c) BM3D
(d) CNN200 chen
(e) prior
(g) DRL-E-M
(h) DRL-E-P
(i) DRL-E-MP
Figure 7: Denoising results of the different algorithms on Thoracic dataset in abdomen window.

5 Conclusion

In this paper, we have combined the benefits of dilated convolution, residual learning, an edge detection layer, and perceptual loss to design a noise removal deep network that produces a normal-dose CT image from a low-dose CT image. First, we designed a network by adopting dilated convolution instead of standard convolution and by using residual learning through symmetric shortcut connections. We have also implemented an edge detection layer that acts as a Sobel operator and helps to capture the boundaries in the image better. For the objective function, we have observed that optimizing a joint function of MSE loss and perceptual loss provides better visual results than either one alone. The obtained results do not suffer from the over-smoothing and loss of details that result from per-pixel optimization, nor from the grid-like artifacts occurring with perceptual loss optimization.

This work was supported in part by a research grant from the Natural Sciences and Engineering Research Council of Canada (NSERC). The authors would like to thank Dr. Paul Babyn and Troy Anderson for the acquisition of the piglet dataset. The results shown here are in whole or part based upon data generated by the TCGA Research Network:


  • (1) Bencardino, J.T.: Radiological society of north america (rsna) 2010 annual meeting. Skeletal Radiology 40, 1109–1112 (2011)
  • (2) Donya, M., Radford, M., ElGuindy, A., Firmin, D., Yacoub, M.H.: Radiation in medicine: Origins, risks and aspirations. Global Cardiology Science and Practice p. 57 (2015)
  • (3) Ehman, E.C., Yu, L., Manduca, A., Hara, A.K., Shiung, M.M., Jondal, D., Lake, D.S., Paden, R.G., Blezek, D.J., Bruesewitz, M.R., et al.: Methods for clinical evaluation of noise reduction techniques in abdominopelvic CT. Radiographics 34(4), 849–862 (2014)
  • (4) Wang, J., Lu, H., Liang, Z., Eremina, D., Zhang, G., Wang, S., Chen, J., Manzione, J.: An experimental study on the noise properties of x-ray CT sinogram data in radon space. Physics in Medicine & Biology 53(12), 3327 (2008)
  • (5) Macovski, A.: Medical imaging systems, vol. 20. Prentice-Hall Englewood Cliffs, NJ (1983)
  • (6) Manduca, A., Yu, L., Trzasko, J.D., Khaylova, N., Kofler, J.M., McCollough, C.M., Fletcher, J.G.: Projection space denoising with bilateral filtering and CT noise modeling for dose reduction in CT. Medical physics 36(11), 4911–4919 (2009)
  • (7) Wang, J., Li, T., Lu, H., Liang, Z.: Penalized weighted least-squares approach to sinogram noise reduction and image reconstruction for low-dose x-ray computed tomography. IEEE transactions on medical imaging 25(10), 1272–1283 (2006)
  • (8) Pickhardt, P.J., Lubner, M.G., Kim, D.H., Tang, J., Ruma, J.A., del Rio, A.M., Chen, G.H.: Abdominal CT with model-based iterative reconstruction (mbir): initial results of a prospective trial comparing ultralow-dose with standard-dose imaging. American journal of roentgenology 199(6), 1266–1274 (2012)
  • (9) Fletcher, J.G., Grant, K.L., Fidler, J.L., Shiung, M., Yu, L., Wang, J., Schmidt, B., Allmendinger, T., McCollough, C.H.: Validation of dual-source single-tube reconstruction as a method to obtain half-dose images to evaluate radiation dose and noise reduction: phantom and human assessment using CT colonography and sinogram-affirmed iterative reconstruction (safire). Journal of computer assisted tomography 36(5), 560–569 (2012)
  • (10) Aharon, M., Elad, M., Bruckstein, A., et al.: K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing 54(11), 4311 (2006)
  • (11) Chen, Y., Yin, X., Shi, L., Shu, H., Luo, L., Coatrieux, J.L., Toumoulin, C.: Improving abdomen tumor low-dose CT images using a fast dictionary learning based processing. Physics in Medicine & Biology 58(16), 5803 (2013)
  • (12) Abhari, K., Marsousi, M., Alirezaie, J., Babyn, P.: Computed tomography image denoising utilizing an efficient sparse coding algorithm. 2012 11th International Conference on Information Science, Signal Processing and their Applications (ISSPA) pp. 259–263 (2012)
  • (13) Buades, A., Coll, B., Morel, J.M.: A non-local algorithm for image denoising. In: Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, vol. 2, pp. 60–65. IEEE (2005)

  • (14) Chen, Y., Yang, Z., Hu, Y., Yang, G., Zhu, Y., Li, Y., Chen, W., Toumoulin, C., et al.: Thoracic low-dose CT image processing using an artifact suppressed large-scale nonlocal means. Physics in Medicine & Biology 57(9), 2667 (2012)
  • (15) Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K.: Image denoising by sparse 3-d transform-domain collaborative filtering. IEEE Transactions on image processing 16(8), 2080–2095 (2007)
  • (16) Hashemi, S., Paul, N.S., Beheshti, S., Cobbold, R.S.: Adaptively tuned iterative low dose CT image denoising. Computational and mathematical methods in medicine 2015 (2015)
  • (17) Kang, D., Slomka, P., Nakazato, R., Woo, J., Berman, D.S., Kuo, C.C.J., Dey, D.: Image denoising of low-radiation dose coronary CT angiography by an adaptive block-matching 3d algorithm. In: Medical Imaging 2013: Image Processing, vol. 8669, p. 86692G. International Society for Optics and Photonics (2013)
  • (18) Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: ICML (2015)
  • (19) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) pp. 770–778 (2016)
  • (20) Chen, H., Zhang, Y., Zhang, W., Liao, P., Li, K., Zhou, J., Wang, G.: Low-dose CT via convolutional neural network. Biomedical Optics Express 8(2), 679–694 (2017)
  • (21) Dong, C., Loy, C.C., He, K., Tang, X.: Image super-resolution using deep convolutional networks. IEEE transactions on pattern analysis and machine intelligence 38(2), 295–307 (2016)
  • (22) Nishio, M., Nagashima, C., Hirabayashi, S., Ohnishi, A., Sasaki, K., Sagawa, T., Hamada, M., Yamashita, T.: Convolutional auto-encoder for image denoising of ultra-low-dose CT. Heliyon 3(8), e00393 (2017)
  • (23) Chen, H., Zhang, Y., Kalra, M.K., Lin, F., Chen, Y., Liao, P., Zhou, J., Wang, G.: Low-dose CT with a residual encoder-decoder convolutional neural network. IEEE transactions on medical imaging 36(12), 2524–2535 (2017)
  • (24) Kang, E., Min, J., Ye, J.C.: A deep convolutional neural network using directional wavelets for low-dose x-ray CT reconstruction. Medical physics 44(10) (2017)
  • (25) Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A.C., Bengio, Y.: Generative adversarial networks. CoRR abs/1406.2661 (2014)
  • (26) Reed, S., Akata, Z., Yan, X., Logeswaran, L., Schiele, B., Lee, H.: Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396 (2016)
  • (27) Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A.P., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR, vol. 2, p. 4 (2017)
  • (28) Vondrick, C., Pirsiavash, H., Torralba, A.: Generating videos with scene dynamics. In: Advances In Neural Information Processing Systems, pp. 613–621 (2016)
  • (29) Wolterink, J.M., Leiner, T., Viergever, M.A., Išgum, I.: Generative adversarial networks for noise reduction in low-dose CT. IEEE transactions on medical imaging 36(12), 2536–2545 (2017)
  • (30) Yang, Q., Yan, P., Zhang, Y., Yu, H., Shi, Y., Mou, X., Kalra, M.K., Zhang, Y., Sun, L., Wang, G.: Low dose CT image denoising using a generative adversarial network with wasserstein distance and perceptual loss. IEEE transactions on medical imaging (2018)
  • (31) Yi, X., Babyn, P.: Sharpness-aware low-dose CT denoising using conditional generative adversarial network. Journal of digital imaging pp. 1–15 (2018)
  • (32) Yang, Q., Yan, P., Kalra, M.K., Wang, G.: CT image denoising with perceptive deep neural networks. arXiv preprint arXiv:1702.07019 (2017)
  • (33) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
  • (34) Bevins, N., Szczykutowicz, T., Supanich, M.: Tu-c-103-06: A simple method for simulating reduced-dose images for evaluation of clinical CT protocols. Medical Physics 40(6Part26), 437–437 (2013)
  • (35) Zeng, D., Huang, J., Bian, Z., Niu, S., Zhang, H., Feng, Q., Liang, Z., Ma, J.: A simple low-dose x-ray CT simulation from high-dose scan. IEEE transactions on nuclear science 62(5), 2226–2233 (2015)
  • (36) Innolitics, L.: DICOM standard browser @ONLINE (2018). URL [Accessed 7 Dec. 2018]
  • (37) Yu, F., Koltun, V.: Multi-scale context aggregation by dilated convolutions. CoRR abs/1511.07122 (2015)
  • (38) Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence 40(4), 834–848 (2018)
  • (39) Mao, X., Shen, C., Yang, Y.B.: Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections. In: Advances in Neural Information Processing Systems, pp. 2802–2810 (2016)
  • (40) Wang, T., Sun, M., Hu, K.: Dilated deep residual network for image denoising. In: Tools with Artificial Intelligence (ICTAI), 2017 IEEE 29th International Conference on, pp. 1272–1279. IEEE (2017)

  • (41) Zhang, K., Zuo, W., Chen, Y., Meng, D., Zhang, L.: Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Transactions on Image Processing (2017)
  • (42) Zhang, K., Zuo, W., Gu, S., Zhang, L.: Learning deep cnn denoiser prior for image restoration. In: IEEE Conference on Computer Vision and Pattern Recognition, vol. 2 (2017)
  • (43) Huang, G., Liu, Z., Weinberger, K.Q., van der Maaten, L.: Densely connected convolutional networks. arXiv preprint arXiv:1608.06993 (2016)
  • (44) Sobel, I.: An isotropic 3×3 image gradient operator. Machine Vision for Three-Dimensional Scenes pp. 376–379 (1990)
  • (45) Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. In: European Conference on Computer Vision, pp. 694–711. Springer (2016)
  • (46) Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A Large-Scale Hierarchical Image Database. In: CVPR09 (2009)
  • (47) Lingle, W., Erickson, B., Zuley, M., Jarosz, R., Bonaccio, E., Filippini, J., Gruszauskas, N.: Radiology data from the cancer genome atlas breast invasive carcinoma [tcga-brca] collection. The Cancer Imaging Archive (2016)
  • (48) Clark, K., Vendt, B., Smith, K., Freymann, J., Kirby, J., Koppel, P., Moore, S., Phillips, S., Maffitt, D., Pringle, M., et al.: The cancer imaging archive (tcia): maintaining and operating a public information repository. Journal of digital imaging 26(6), 1045–1057 (2013)
  • (49) Gavrielides, M.A., Kinnard, L.M., Myers, K.J., Peregoy, J., Pritchard, W.F., Zeng, R., Esparza, J., Karanian, J., Petrick, N.: A resource for the assessment of lung nodule size estimation methods: database of thoracic CT scans of an anthropomorphic phantom. Opt. Express 18(14), 15244–15255 (2010). DOI 10.1364/OE.18.015244
  • (50) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249–256 (2010)
  • (51) Gholizadeh-Ansari, M., Alirezaie, J., Babyn, P.: Low-dose CT denoising with dilated residual network. In: 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 5117–5120. IEEE (2018)