Artifact Disentanglement Network for Unsupervised Metal Artifact Reduction

06/05/2019 ∙ by Haofu Liao, et al. ∙ University of Rochester

Current deep neural network based approaches to computed tomography (CT) metal artifact reduction (MAR) are supervised methods which rely heavily on synthesized data for training. However, as synthesized data may not perfectly simulate the underlying physical mechanisms of CT imaging, the supervised methods often generalize poorly to clinical applications. To address this problem, we propose, to the best of our knowledge, the first unsupervised learning approach to MAR. Specifically, we introduce a novel artifact disentanglement network that enables different forms of generations and regularizations between the artifact-affected and artifact-free image domains to support unsupervised learning. Extensive experiments show that our method significantly outperforms the existing unsupervised models for image-to-image translation problems, and achieves comparable performance to existing supervised models on a synthesized dataset. When applied to clinical datasets, our method achieves considerable improvements over the supervised models.




1 Introduction

Metal artifacts are among the most commonly encountered artifacts in computed tomography (CT) images. They are introduced by metallic implants during the imaging and reconstruction process. The formation of metal artifacts involves several mechanisms such as beam hardening, scatter, noise, and the non-linear partial volume effect [1], which make them very challenging to model and remove with traditional methods. Therefore, recent approaches [12, 10, 4, 2] to metal artifact reduction (MAR) propose to use deep neural networks (DNNs) to inherently address the modeling of metal artifacts, and their experimental results show promising MAR performance.

All the existing DNN-based approaches are supervised methods requiring pairs of anatomically identical CT images, one with and the other without metal artifacts, for training. As it is clinically impractical to obtain such pairs of images, most of the supervised methods rely on synthesized images to train their models. However, due to the complexity of metal artifacts and the variations of CT devices, the synthesized images may not fully simulate the real clinical scenarios, and the performances of these supervised methods may degrade in clinical applications.

In this work, we aim to address the challenging yet more practical unsupervised setting where no paired CT images are available for training. To this end, we propose a novel artifact disentanglement network to separate the metal artifacts from clinical CT images in a latent space. The disentanglement enables manipulations between the artifact-affected and artifact-free image domains so that different forms of adversarial- and self-regularizations can be achieved to support unsupervised learning. To the best of our knowledge, this is the first unsupervised learning approach to MAR. Extensive experiments show that our method achieves comparable performance to the existing supervised methods on a synthesized dataset. When applied to clinical datasets, all the supervised methods demonstrate certain degrees of degradation, whereas our method outperforms the supervised methods with significantly better clinical MAR results.

2 Related work

Unsupervised image-to-image translation    Image artifact reduction can be regarded as a form of image-to-image translation. One of the earliest unsupervised works in this category is CycleGAN [13], where a cycle-consistency design is proposed for unsupervised learning. Later works [5, 6] improve CycleGAN for diverse and multimodal image generation. However, these unsupervised methods target image synthesis and do not have suitable components for artifact reduction. Another recent work that is specialized for artifact reduction is deep image prior (DIP) [9], which, however, only works for less structured artifacts such as noise and compression artifacts.

Deep metal artifact reduction    A number of studies have recently been proposed to address MAR with DNNs. RL-ARCNN [4] introduces residual learning to a deep convolutional neural network (CNN) and achieves better MAR performance than an ordinary CNN. DesteakNet [2] proposes a two-stream approach that takes a pair of NMAR [7] and detail images as the input to jointly reduce metal artifacts. CNNMAR [12] uses a CNN to generate prior images in the CT image domain to help the correction in the sinogram domain. Both DesteakNet and CNNMAR show significant improvements over existing non-DNN based methods on synthesized datasets. cGANMAR [10] leverages generative adversarial networks (GANs) [3] to further improve DNN-based MAR performance.

3 Methodology

Figure 1: Overview of the artifact disentanglement network.

Let I be the domain of all artifact-free CT images and I^a be the domain of all artifact-affected CT images. The proposed artifact disentanglement network (ADN) aims to learn a mapping from I^a to I without paired data. As illustrated in Figure 1, ADN contains a set of artifact-free image encoder, generator and discriminator {E_I, G_I, D_I}, a set of artifact-affected image encoder, generator and discriminator {E_{I^a}, G_{I^a}, D_{I^a}}, and an artifact-only encoder E_a. The architectures of these building components are inspired by the state-of-the-art studies for image-to-image translation [14, 5]. See the supplementary material for their detailed structures.

Components    Given two unpaired images x^a ∈ I^a and y ∈ I, the encoders E_{I^a} and E_I map the artifact-free content information from x^a and y, respectively, to a common content space C, and E_a maps the artifact-only information from x^a to an artifact space A,

c_{x^a} = E_{I^a}(x^a),    c_y = E_I(y),    a = E_a(x^a).
The generator G_{I^a} takes an artifact-free code, c_{x^a} or c_y, and an artifact-only code a as the input and outputs an artifact-affected image. G_I takes an artifact-free code, c_{x^a} or c_y, as the input and outputs an artifact-free image,

x̂ = G_I(c_{x^a}),    ŷ = G_I(c_y),    x̂^a = G_{I^a}(c_{x^a}, a),    ŷ^a = G_{I^a}(c_y, a).
During testing, only E_{I^a} and G_I are required to obtain an artifact-corrected output, i.e., x̂ = G_I(E_{I^a}(x^a)). The discriminator D_I decides whether an input is sampled from I or generated by G_I. Similarly, D_{I^a} decides whether an input is from I^a or generated by G_{I^a}.
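As a concrete sketch, the components and the test-time path x̂ = G_I(E_{I^a}(x^a)) might be wired up in PyTorch as follows. The `conv_encoder`/`conv_generator` stand-ins are hypothetical toy networks, far smaller than the actual architectures described in the supplementary material; only the overall wiring reflects the paper.

```python
import torch
import torch.nn as nn

# Hypothetical, heavily simplified stand-ins for ADN's encoders and
# generators; the real networks use residual blocks, reflection padding,
# etc. (see the supplementary material).
def conv_encoder(in_ch=1, feat=32):
    return nn.Sequential(
        nn.Conv2d(in_ch, feat, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU(),
    )

def conv_generator(feat=32, out_ch=1):
    return nn.Sequential(
        nn.ConvTranspose2d(feat, feat, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(feat, out_ch, 4, stride=2, padding=1),
    )

E_I_a = conv_encoder()    # content encoder for artifact-affected images (E_{I^a})
E_I   = conv_encoder()    # encoder for artifact-free images (E_I)
E_a   = conv_encoder()    # artifact-only encoder (E_a)
G_I   = conv_generator()  # artifact-free image generator (G_I)

def remove_artifact(x_a):
    """Test-time path: x_hat = G_I(E_{I^a}(x^a))."""
    return G_I(E_I_a(x_a))

x_a = torch.randn(1, 1, 64, 64)   # a fake artifact-affected CT slice
x_hat = remove_artifact(x_a)      # artifact-corrected output, same size as input
```

Note that only these two components are needed at inference; the remaining encoders, generators and discriminators exist solely to provide training signals.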

Loss functions    A good MAR model should (i) reduce the artifacts as much as possible and (ii) keep the anatomical content of the input CT images. To remove the artifacts, we train G_I and D_I adversarially to encourage the output x̂ to appear similar to an artifact-free image,

L_adv^I = E_y[log D_I(y)] + E_{x^a}[log(1 − D_I(x̂))].
To maintain the anatomical content, we apply self-reconstruction to force the encoders and decoders to preserve the content of the inputs,

L_recon = E_{x^a}[‖x̂^a − x^a‖_1] + E_y[‖ŷ − y‖_1].
Here, the first term encourages E_{I^a} to encode all the content information of x^a; the artifact information is not encoded due to the introduction of the separate artifact encoder E_a. With the second term, G_I learns how to fully reconstruct the encoded artifact-free content information. Combining these two terms, content preserving for x̂ can be achieved.

In addition, we also introduce a self-reduction design to further enforce the learning. This idea is carried out in two steps. In the first step, ADN synthesizes a “real” metal artifact from x^a and applies it to y. Specifically, this is achieved by decoding from c_y and a, i.e., ŷ^a = G_{I^a}(c_y, a), and we use another adversarial loss to guarantee that ŷ^a looks “real”,

L_adv^{I^a} = E_{x^a}[log D_{I^a}(x^a)] + E_{x^a, y}[log(1 − D_{I^a}(ŷ^a))].
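The artifact-transfer step above requires a generator that fuses a content code with an artifact code. A minimal toy sketch of such a two-input generator is below; the `GIa` class, its layer sizes, and the simple concatenation-based merge are all illustrative assumptions, not the paper's actual design (which merges hierarchical artifact features, see the supplementary material).

```python
import torch
import torch.nn as nn

class GIa(nn.Module):
    """Toy placeholder for G_{I^a}: decode an artifact-affected image from
    a content code and an artifact code (both assumed to be 32-channel
    16x16 feature maps here)."""
    def __init__(self, feat=32):
        super().__init__()
        # Fuse the two codes by channel concatenation followed by a 3x3 conv.
        self.merge = nn.Conv2d(2 * feat, feat, 3, padding=1)
        # Upsample back to image resolution (16 -> 32 -> 64).
        self.up = nn.Sequential(
            nn.ConvTranspose2d(feat, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat, 1, 4, stride=2, padding=1),
        )

    def forward(self, content, artifact):
        return self.up(self.merge(torch.cat([content, artifact], dim=1)))

g_ia = GIa()
# y_a_hat = G_{I^a}(c_y, a): apply the artifact code from x^a to y's content.
y_a_hat = g_ia(torch.randn(1, 32, 16, 16), torch.randn(1, 32, 16, 16))
```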
In the second step, ADN reduces the artifacts from the synthesized image ŷ^a to recover ỹ = G_I(E_{I^a}(ŷ^a)), which should match y. This is regularized by a cycle-consistency loss,

L_cycle = E_{x^a, y}[‖ỹ − y‖_1].
Finally, due to the use of the same metal artifact, the difference map between x^a and x̂ and that between ŷ^a and y should be close. Thus, we employ an artifact-consistency loss to constrain the artifact difference,

L_art = E_{x^a, y}[‖(x^a − x̂) − (ŷ^a − y)‖_1].
The full objective function is given by

L = λ_adv (L_adv^I + L_adv^{I^a}) + λ_recon L_recon + λ_cycle L_cycle + λ_art L_art,

where the λ's are hyper-parameters that control the importance of each term.
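The full objective can be sketched as a single function over precomputed network outputs. Everything below is a hedged sketch: the input images and discriminator logits would come from the encoders/generators above, and the λ default values are illustrative placeholders, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def adn_objective(x_a, y, x_hat, y_hat, x_a_hat, y_a_hat, y_tilde,
                  d_I_fake, d_Ia_fake,
                  lam_adv=1.0, lam_recon=20.0, lam_cycle=20.0, lam_art=20.0):
    """Generator-side ADN objective. The lambda defaults are assumptions.

    x_hat    = G_I(c_{x^a})        : artifact-corrected x^a
    y_hat    = G_I(c_y)            : reconstructed y
    x_a_hat  = G_{I^a}(c_{x^a}, a) : reconstructed x^a
    y_a_hat  = G_{I^a}(c_y, a)     : y with x^a's artifact applied
    y_tilde  = G_I(E_{I^a}(y_a_hat)): cycle-recovered y
    d_*_fake : discriminator logits on the generated images
    """
    # Adversarial terms (non-saturating form): fool D_I with x_hat and
    # D_{I^a} with y_a_hat.
    l_adv = (F.binary_cross_entropy_with_logits(d_I_fake, torch.ones_like(d_I_fake))
             + F.binary_cross_entropy_with_logits(d_Ia_fake, torch.ones_like(d_Ia_fake)))
    # Self-reconstruction: ||x^a_hat - x^a||_1 + ||y_hat - y||_1
    l_recon = F.l1_loss(x_a_hat, x_a) + F.l1_loss(y_hat, y)
    # Cycle consistency: ||y_tilde - y||_1
    l_cycle = F.l1_loss(y_tilde, y)
    # Artifact consistency: ||(x^a - x_hat) - (y^a_hat - y)||_1
    l_art = F.l1_loss(x_a - x_hat, y_a_hat - y)
    return (lam_adv * l_adv + lam_recon * l_recon
            + lam_cycle * l_cycle + lam_art * l_art)
```

In a training loop, the discriminators would be updated with the usual opposing real/fake targets; only the generator-side combination is shown here.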

4 Experiments

Datasets.    We evaluate the proposed method on one synthesized dataset and two clinical datasets. We refer to them as SYN, CL1 and CL2, respectively. For SYN, we randomly select artifact-free CT images from DeepLesion [11] and follow the method from CNNMAR [12] to synthesize metal artifacts. A subset of the synthesized pairs is used for training and validation, and the remaining pairs are used for testing.

For CL1, we choose a public vertebrae localization and identification dataset. We split the CT images from this dataset into two groups, one with artifacts and the other without. First, we identify regions whose HU values exceed a metal threshold as the metal regions. Then, CT images whose largest connected metal regions contain more than 400 pixels are selected as artifact-affected images, and CT images whose largest HU values fall below a second, lower threshold are selected as artifact-free images. A subset of the artifact-affected group is withheld for testing.
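The CL1 selection rule (threshold HU values, then keep images whose largest connected metal region exceeds 400 pixels) can be sketched as below. The 2500 HU metal threshold is an assumed placeholder, not necessarily the authors' value; only the 400-pixel criterion comes from the text.

```python
import numpy as np
from scipy import ndimage

HU_METAL = 2500        # assumed metal threshold (placeholder value)
MIN_METAL_PIXELS = 400 # largest-connected-region criterion from the paper

def is_artifact_affected(ct_hu: np.ndarray) -> bool:
    """Return True if the slice's largest connected metal region
    exceeds MIN_METAL_PIXELS."""
    metal = ct_hu > HU_METAL
    labels, n = ndimage.label(metal)  # 4-connected component labeling
    if n == 0:
        return False
    # Size of the largest component (skip label 0 = background).
    largest = np.bincount(labels.ravel())[1:].max()
    return largest > MIN_METAL_PIXELS
```

Images failing this test (and with a sufficiently low maximum HU) would fall into the artifact-free group.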

For CL2, we investigate the performance of the proposed method under a more challenging cross-modality setting. Specifically, the artifact-affected images of CL2 are from a cone-beam CT (CBCT) dataset collected during spinal interventions. Images from this dataset are very noisy and the majority of them contain metallic implants. From this dataset, 200 CBCT images are withheld for testing. For the artifact-free images, we reuse the artifact-free CT images from CL1.


             Supervised                              Unsupervised
          CNNMAR [12]  UNet [8]  cGANMAR [10]   Ours   CycleGAN [13]  DIP [9]  MUNIT [5]  DRIT [6]
PSNR (dB)    32.5        34.8       34.1        33.6      30.8          26.4     14.9       25.6
SSIM (%)     91.4        93.1       93.4        92.4      72.9          75.9      7.5       79.7

Table 1: Quantitative evaluation on the SYN dataset.
Figure 2: Qualitative evaluation on the SYN dataset. For better visualization, we obtain the metal region through thresholding and color it with red. See the supplementary material for more qualitative results.

Baselines.    We compare the proposed method with seven state-of-the-art methods that are closely related to our problem. Three of the compared methods are supervised: CNNMAR [12], UNet [8] and cGANMAR [10]. CNNMAR and cGANMAR are two recent approaches dedicated to MAR. UNet is a general DNN framework that is effective in many image-to-image problems. The other four compared methods are unsupervised: CycleGAN [13], DIP [9], MUNIT [5] and DRIT [6]. These methods are currently state-of-the-art approaches to unsupervised image-to-image translation problems. All the compared methods except UNet are trained with their officially released code; for UNet, a publicly available implementation is used.

Training and testing.   

We implement our method under the PyTorch deep learning framework and use the Adam optimizer to minimize the objective function. For the hyper-parameters, we use one set of λ weights for SYN and CL1 and a different set for CL2.

To simulate the unsupervised setting for SYN, we evenly divide the synthesized training pairs into two groups. For one group, only artifact-affected images are used and their corresponding artifact-free images are withheld. For the other group, only artifact-free images are used and their corresponding artifact-affected images are withheld. During training of the unsupervised methods, we randomly select one image from each of the two groups as the input. For the supervised methods, all the synthesized training pairs are used.
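The unsupervised input selection described above (one random image from each of the two groups per training step) can be sketched as a minimal sampler; the file lists and step count are hypothetical.

```python
import random

def unpaired_batches(artifact_files, clean_files, steps, seed=0):
    """Yield (artifact-affected, artifact-free) file pairs, drawing each
    element independently at random so no synthesized pairing survives."""
    rng = random.Random(seed)
    for _ in range(steps):
        yield rng.choice(artifact_files), rng.choice(clean_files)
```

Because the two draws are independent, the model never observes an anatomically matched pair, which is exactly the constraint the unsupervised setting imposes.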

To train the supervised methods with CL1, we first synthesize metal artifacts using the images from the artifact-free group of CL1. Then, we train the supervised methods with the synthesized pairs. During testing, the trained models are applied to the testing set containing only clinical metal artifact images. To train the unsupervised methods, we randomly select one image from the artifact-affected group and the other from the artifact-free group as the input.

For CL2, synthesizing metal artifacts is not possible due to the unavailability of artifact-free CBCT images. Therefore, for the supervised methods we directly use the models trained for CL1. In other words, the supervised methods are trained on synthesized CT images (from CL1) and tested on clinical CBCT images (from CL2). For the unsupervised models, each time we randomly select one artifact-affected CBCT image and one artifact-free CT image as the input for training.

Figure 3: Qualitative evaluation on the CL1 dataset. For better visualization, we obtain the metal region through thresholding and color it with red. See the supplementary material for more qualitative results.

Performance on synthesized data.    SYN contains paired data, allowing for both quantitative and qualitative evaluations. Following the convention in the literature, we use peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) as the metrics for the quantitative evaluation. For both metrics, the higher the better. Table 1 and Figure 8 show the quantitative and qualitative evaluation results, respectively.
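For reference, PSNR as reported in Table 1 reduces to a few lines of NumPy, assuming both images share a known dynamic range (`data_range` is a parameter of the sketch, not something the paper specifies); SSIM is more involved and is typically computed with `skimage.metrics.structural_similarity`.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(data_range^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)
```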

We observe that the proposed method performs significantly better than the other unsupervised methods. MUNIT focuses more on diverse and realistic outputs (Figure 8(i)) with less constraint on structural similarity. CycleGAN and DRIT perform better, as both models also require the artifact-corrected outputs to be transformable back to the original artifact-affected images. Although this helps preserve content information, it also encourages the models to keep the artifacts. Therefore, as shown in Figure 8(g) and 8(j), the artifacts cannot be greatly reduced. DIP does not reduce much of the metal artifact in the input image (Figure 8(h)) as it is not designed to handle the more structured metal artifacts.

We also find that the performance of our method is on a par with the supervised methods. The performance of UNet is close to that of cGANMAR, which at its backend uses a UNet-like architecture. However, owing to its use of a GAN, cGANMAR produces sharper outputs (Figure 8(e)) than UNet (Figure 8(f)). As for PSNR and SSIM, both methods only slightly outperform our method and, surprisingly, our method performs better than CNNMAR.

Performance on clinical data.    Next, we investigate the performance of the proposed method on clinical data. Since there are no ground truths available for the clinical images, only qualitative comparisons are performed. The qualitative evaluation results of CL1 are shown in Figure 9. Here, all the supervised methods are trained with paired images that are synthesized from the artifact-free group of CL1. We can see that UNet and cGANMAR generalize poorly when applied to clinical images (Figure 9(d) and 9(e)). CNNMAR is more robust as it corrects the artifacts in the sinogram domain. However, such a sinogram domain correction also introduces secondary artifacts (Figure 9(c)). For the more challenging cross-modality artifact reduction task with CL2 (Figure 10), all the supervised methods fail. This is not totally unexpected, as the supervised methods are trained using only CT images because of the lack of artifact-free CBCT images. Similar to the cases with SYN, the other unsupervised methods also show inferior performance when evaluated on both the CL1 and CL2 datasets. By contrast, our method consistently delivers high-quality artifact-reduced results on clinical images.

Figure 4: Qualitative evaluation on the CL2 dataset. See the supplementary material for more qualitative results.

5 Conclusion

We presented a novel unsupervised learning approach to MAR. Through the development of an artifact disentanglement network, we showed how to leverage different forms of regularization to eliminate the requirement of paired images for training. To understand the effectiveness of this approach, we performed extensive evaluations on one synthesized and two clinical datasets. The evaluation results demonstrated the feasibility of using an unsupervised learning method to achieve performance comparable to the supervised methods. More importantly, the results also showed that directly learning MAR from clinical CT images under an unsupervised setting is a more feasible and robust approach than transferring knowledge learned from synthesized data to clinical data. We believe the findings of this work will inspire more applicable research on medical image artifact reduction under the unsupervised setting.

Acknowledgement. This work was supported in part by NSF award #1722847 and the Morris K. Udall Center of Excellence in Parkinson’s Disease Research by NIH.


  • [1] Gjesteby, L., Man, B.D., Jin, Y., Paganetti, H., Verburg, J., Giantsoudi, D., Wang, G.: Metal artifact reduction in CT: where are we after four decades? IEEE Access 4, 5826–5849 (2016)
  • [2] Gjesteby, L., Shan, H., Yang, Q., Xi, Y., Claus, B., Jin, Y., De Man, B., Wang, G.: Deep neural network for CT metal artifact reduction with a perceptual loss function. In: Proceedings of the Fifth International Conference on Image Formation in X-ray Computed Tomography (2018)
  • [3] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances in neural information processing systems (2014)
  • [4] Huang, X., Wang, J., Tang, F., Zhong, T., Zhang, Y.: Metal artifact reduction on cervical CT images by deep residual learning. Biomedical Engineering Online 17(1), 175 (2018)
  • [5] Huang, X., Liu, M., Belongie, S.J., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Computer Vision - ECCV 2018 (2018)
  • [6] Lee, H., Tseng, H., Huang, J., Singh, M., Yang, M.: Diverse image-to-image translation via disentangled representations. In: Computer Vision - ECCV 2018 (2018)
  • [7] Meyer, E., Raupach, R., Lell, M., Schmidt, B., Kachelrieß, M.: Normalized metal artifact reduction (NMAR) in computed tomography. Medical Physics (2010)
  • [8] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical image computing and computer-assisted intervention (2015)
  • [9] Ulyanov, D., Vedaldi, A., Lempitsky, V.S.: Deep image prior. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (June 2018)
  • [10] Wang, J., Zhao, Y., Noble, J.H., Dawant, B.M.: Conditional generative adversarial networks for metal artifact reduction in ct images of the ear. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2018 (2018)
  • [11] Yan, K., Wang, X., Lu, L., Summers, R.M.: DeepLesion: automated mining of large-scale lesion annotations and universal lesion detection with deep learning. Journal of Medical Imaging (2018)
  • [12] Zhang, Y., Yu, H.: Convolutional neural network based metal artifact reduction in X-ray computed tomography. IEEE Trans. Med. Imaging 37(6), 1370–1381 (2018)
  • [13] Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
  • [14] Zhu, J., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. CoRR abs/1703.10593 (2017)

A Architecture Details

Figure 5:

Basic building blocks of the encoders and generators: (a) convolution block, (b) residual block, (c) merge block, and (d) final block. ReflectionPad2d stands for a reflection padding layer that we use to replace the zero padding of the conventional convolution layer.

Figure 6: Architecture of the discriminators D_I and D_{I^a}. We use ‘C#K#S#P#’ to denote the configuration of the convolution layers, where ‘C’, ‘K’, ‘S’ and ‘P’ stand for the output channel, kernel, stride and padding size, respectively.

Figure 7: Architecture of the encoders and generators: (a) E_I or E_{I^a}, (b) E_a, (c) G_I, (d) G_{I^a}. CB, RB, MB and FB are acronyms of the building blocks illustrated in Fig. 5. As in Fig. 6, ‘C#K#S#P#’ denotes the configurations of the convolution layers in the blocks. For CB, RB, and FB, P is the padding of the reflection padding layer and the padding of the convolutional layer is zero. Note that the artifact code inputs for G_{I^a} are the hierarchical features encoded by E_a, which are merged with the corresponding intermediate outputs of G_{I^a}.

B Qualitative Results

Figure 8: Qualitative evaluation results of SYN. For better visualization, we obtain the metal regions through thresholding and color them with red.
Figure 9: Qualitative evaluation results of CL1. For better visualization, we obtain the metal regions through thresholding and color them with red.
Figure 10: Qualitative evaluation results of CL2.
Figure 11: Metal artifact transferring. First row: the clinical images with metal artifacts. Middle row: the clinical images without metal artifacts. Last row: the metal artifacts in the first row transferred to the artifact-free images in the second row.