ADN: Artifact Disentanglement Network for Unsupervised Metal Artifact Reduction

08/03/2019 ∙ by Haofu Liao, et al. ∙ University of Rochester

Current deep neural network based approaches to computed tomography (CT) metal artifact reduction (MAR) are supervised methods that rely on synthesized metal artifacts for training. However, as synthesized data may not accurately simulate the underlying physical mechanisms of CT imaging, the supervised methods often generalize poorly to clinical applications. To address this problem, we propose, to the best of our knowledge, the first unsupervised learning approach to MAR. Specifically, we introduce a novel artifact disentanglement network that disentangles the metal artifacts from CT images in the latent space. It supports different forms of generation (artifact reduction, artifact transfer, self-reconstruction, etc.) with specialized loss functions to obviate the need for supervision with synthesized data. Extensive experiments show that when applied to a synthesized dataset, our method addresses metal artifacts significantly better than the existing unsupervised models designed for natural image-to-image translation problems, and achieves comparable performance to existing supervised models for MAR. When applied to clinical datasets, our method demonstrates better generalization ability than the supervised models. The source code of this paper is publicly available at




I Introduction

Metal artifacts are among the most commonly encountered problems in computed tomography (CT). They arise when a patient carries metallic implants, e.g., dental fillings or hip prostheses. Compared to body tissues, metallic materials attenuate X-rays significantly and non-uniformly over the spectrum, leading to inconsistent X-ray projections. The mismatched projections introduce severe streaking and shading artifacts in the reconstructed CT images, which significantly degrade the image quality and compromise medical image analysis as well as the subsequent healthcare delivery.

To reduce the metal artifacts, many efforts have been made over the past decades [1]. Conventional approaches [2, 3] address the metal artifacts by projection completion, where the metal traces in the X-ray projections are replaced by estimated values. For the projection completion, the estimated values need to be consistent with the imaging content and the underlying projection geometry. When the metallic implant is large, it is challenging to satisfy these requirements, and thus secondary artifacts are often introduced by an imperfect completion. Moreover, the X-ray projection data, as well as the associated reconstruction algorithms, are often held out by the manufacturers, which limits the applicability of the projection based approaches.
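The linear-interpolation flavor of projection completion can be sketched in a few lines. This is a toy NumPy illustration, not the cited methods' actual implementations; the function name and the 2D sinogram layout (rows as angles, columns as detector bins) are our assumptions:

```python
import numpy as np

def li_completion(sinogram, metal_trace):
    """Toy linear-interpolation (LI) projection completion.

    sinogram:    2D array, rows = projection angles, cols = detector bins.
    metal_trace: boolean mask of the same shape marking corrupted bins.
    """
    completed = sinogram.copy()
    bins = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        bad = metal_trace[i]
        if bad.any() and (~bad).any():
            # Replace corrupted bins with values linearly interpolated
            # from the nearest uncorrupted detector bins.
            completed[i, bad] = np.interp(bins[bad], bins[~bad],
                                          sinogram[i, ~bad])
    return completed
```

When the metal trace is wide, the interpolated values deviate from the true projections, which is exactly the source of the secondary artifacts discussed above.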

Fig. 1: Artifact disentanglement. The content and artifact components of an image from the artifact-affected domain I^a are mapped separately to the content space C and the artifact space A, i.e., artifact disentanglement. An image from the artifact-free domain I contains no artifact and thus is mapped only to the content space. Decoding without an artifact code removes the artifacts from an artifact-affected image (blue arrows), while decoding with the artifact code adds artifacts to an artifact-free image (red arrows).

A workaround to the limitations of the projection based approaches is to address the metal artifacts directly in the CT images. However, since the formation of metal artifacts involves complicated mechanisms such as beam hardening, scatter, noise, and the non-linear partial volume effect [1], it is very challenging to model and reduce metal artifacts in the CT images with traditional approaches. Therefore, recent approaches [4, 5, 6, 7] to metal artifact reduction (MAR) propose to use deep neural networks (DNNs) to inherently address the modeling of metal artifacts, and their experimental results show promising MAR performance. All the existing DNN-based approaches are supervised methods that require pairs of anatomically identical CT images, one with and the other without metal artifacts, for training. As it is clinically impractical to acquire such pairs of images, most of the supervised methods resort to synthesizing metal artifacts in CT images to simulate the pairs. However, due to the complexity of metal artifacts and the variations among CT devices, the synthesized artifacts may not accurately reproduce real clinical scenarios, and the performance of these supervised methods tends to degrade in clinical applications.

In this work, we aim to address the challenging yet more practical unsupervised setting where no paired CT images are available or required for training. To this end, we reformulate the artifact reduction problem as an artifact disentanglement problem. As illustrated in Fig. 1, we assume that any artifact-affected image consists of an artifact component (i.e., metal artifacts, noise, etc.) and a content component (i.e., the anatomical structure). Our goal is to disentangle these two components in the latent space; artifact reduction can then be readily achieved by reconstructing CT images without the artifact component. Fundamentally, this artifact disentanglement without paired images is made possible by grouping the CT images into two groups, one with metal artifacts and the other without. In this way, we introduce an inductive bias [8] that a model may inherently learn artifact disentanglement by comparing the two groups. More importantly, the artifact disentanglement assumption guides manipulations in the latent space. This can be leveraged to include additional inductive biases that apply self-supervision between the outputs of the model (see Sec. III-B) and thus obviate the need for paired images.

Specifically, we propose an artifact disentanglement network (ADN) with specialized encoders and decoders that handle the encoding and decoding of the artifact and content components separately for the unpaired inputs. Different combinations of the encoders and decoders support different forms of image translation (see Sec. III-A), e.g., artifact reduction, artifact synthesis, self-reconstruction, and so on. ADN exploits the relationships between these image translations for unsupervised learning. Extensive experiments show that our method achieves comparable performance to the existing supervised methods on a synthesized dataset. When applied to clinical datasets, none of the supervised methods generalizes well due to a significant domain shift, whereas ADN delivers consistent MAR performance and significantly outperforms the compared supervised methods.

II Related Work

Conventional Metal Artifact Reduction. Most conventional approaches address metal artifacts in X-ray projections. A straightforward way is to directly correct the X-ray measurements of the metallic implants by modeling the underlying physical effects such as beam hardening [9, 10], scatter [11], and so on. However, the metal traces in projections are often severely corrupted. Thus, instead of projection correction, a more common approach is to replace the corrupted region with estimated values. Early approaches [12, 2] fill the corrupted regions by linear interpolation, which often introduces new artifacts due to the inaccuracy of the interpolated values. To address this issue, a state-of-the-art approach [3] introduces a prior image to normalize the X-ray projections before the interpolation.

Deep Metal Artifact Reduction. A number of studies have recently been proposed to address MAR with DNNs. RL-ARCNN [6] introduces residual learning into a deep convolutional neural network (CNN) and achieves better MAR performance than the standard CNN. DestreakNet [7] proposes a two-stream approach that takes a pair of NMAR [3] and detail images as the input to jointly reduce metal artifacts. CNNMAR [4] uses a CNN to generate prior images in the CT image domain to help the correction in the projection domain. Both DestreakNet and CNNMAR show significant improvements over the existing non-DNN based methods on synthesized datasets. cGANMAR [5] leverages generative adversarial networks (GANs) [13] to further improve DNN-based MAR performance.

Unsupervised Image-to-Image Translation. Image artifact reduction can be regarded as a form of image-to-image translation. One of the earliest unsupervised methods in this category is CycleGAN [14] where a cycle-consistency design is proposed for unsupervised learning. MUNIT [15] and DRIT [16] improve CycleGAN for diverse and multimodal image generation. However, these unsupervised methods aim at image synthesis and do not have suitable components for artifact reduction. Another recent work that is specialized for artifact reduction is deep image prior (DIP) [17], which, however, only works for less structured artifacts such as additive noise or compression artifacts.

Preliminary work. A preliminary version [18] of this manuscript was previously published. This paper extends the preliminary version substantially with the following improvements.

  • We include more details (with illustrations) about the motivations and assumptions of artifact disentanglement to help readers better understand this work at a high level.

  • We include improved notations and a refined problem formulation to describe this work more precisely.

  • We redraw the diagram of the overall architecture and add new diagrams, as well as descriptions, of the detailed architectures of the subnetworks.

  • We discuss the reasoning behind the design choices of the loss functions and the network architectures to better inform the readers about our work.

  • We add several experiments to better demonstrate the effectiveness of the proposed approach. Specifically, we add comparisons with conventional approaches, comparisons with different variants of the proposed approach for an ablation study, and evaluations of the proposed approach on artifact transfer.

  • We include discussions about the significance and potential applications of this work.

III Methodology

Fig. 2: Overview of the proposed artifact disentanglement network (ADN). Taking any two unpaired images, one from I^a and the other from I, as the inputs, ADN supports four different forms of image translations: I^a → I, I^a → I^a, I → I^a, and I → I.

Let I^a be the domain of all artifact-affected CT images and I be the domain of all artifact-free CT images. We denote {(x^a, f(x^a)) | x^a ∈ I^a} as a set of paired images, where f is an MAR model that removes the metal artifacts from x^a. In this work, we assume no such paired dataset is available, and we propose to learn f with unpaired images.

As illustrated in Fig. 1, the proposed method disentangles the artifact and content components of an artifact-affected image x^a by encoding them separately into a content space C and an artifact space A. If the disentanglement is well addressed, the encoded content component c^a ∈ C should contain no information about the artifact while preserving all the content information. Thus, decoding from c^a should give an artifact-free image x̂ which is the artifact-removed counterpart of x^a. On the other hand, it is also possible to encode an artifact-free image y into the content space, which gives a content code c. If c is decoded together with an artifact code a ∈ A, we obtain an artifact-affected image ŷ^a. In the following sections, we introduce an artifact disentanglement network (ADN) that learns these encodings and decodings without paired data.

III-A Encoders and Decoders

The architecture of ADN is shown in Fig. 2. It contains a pair of artifact-free image encoder and decoder, {E, G}, and a pair of artifact-affected image encoder and decoder, {E^a, G^a}. The encoders map an image sample from the image domain to the latent space, and the decoders map a latent code from the latent space back to the image domain. Note that unlike a conventional encoder, E^a consists of a content encoder E^a_c and an artifact encoder E^a_a, which encode the content and artifacts separately to achieve artifact disentanglement.

Specifically, given two unpaired images x^a ∈ I^a and y ∈ I, E^a_c and E map the content components of x^a and y to the content space C, respectively. E^a_a maps the artifact component of x^a to the artifact space A. We denote the corresponding latent codes as

  c^a = E^a_c(x^a),   a = E^a_a(x^a),   c = E(y).   (1)
G^a takes a content code and an artifact code as the input and outputs an artifact-affected image. Decoding from c^a and a should reconstruct x^a, and decoding from c and a should add artifacts to y,

  x̂^a = G^a(c^a, a),   ŷ^a = G^a(c, a).   (2)
G takes a content code as the input and outputs an artifact-free image. Decoding from c^a should remove the artifacts from x^a, and decoding from c should reconstruct y,

  x̂ = G(c^a),   ŷ = G(c).   (3)
Note that ŷ^a can be regarded as a synthesized artifact-affected image whose artifacts come from x^a and whose content comes from y. Thus, by reapplying E^a_c and G, we should remove the synthesized artifacts and recover y,

  ỹ = G(E^a_c(ŷ^a)).   (4)
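The translation paths above can be traced end to end with toy, closed-form stand-ins for the learned encoders and decoders. The frequency-split "encoders" below are purely illustrative assumptions (ADN learns these mappings with CNNs); they only make the compositions concrete: content is the low-frequency part of a 1D signal and the "metal artifact" is an additive high-frequency pattern.

```python
import numpy as np

N = 64

def lowpass(x, k_max=8):
    # Toy "content extractor": keep only low spatial frequencies.
    X = np.fft.rfft(x)
    X[k_max:] = 0
    return np.fft.irfft(X, n=N)

def E_c(x):      return lowpass(x)      # stands in for E^a_c and E
def E_art(x):    return x - lowpass(x)  # stands in for E^a_a
def G_free(c):   return c               # stands in for G
def G_art(c, a): return c + a           # stands in for G^a

t = np.arange(N)
y    = np.cos(2 * np.pi * 3 * t / N)        # artifact-free image (low-freq)
art  = 0.5 * np.cos(2 * np.pi * 20 * t / N) # additive "metal artifact"
x_a  = y + art                              # artifact-affected image

c_a, a, c = E_c(x_a), E_art(x_a), E_c(y)    # latent codes
x_hat_a = G_art(c_a, a)                     # self-reconstruction of x^a
y_hat_a = G_art(c, a)                       # artifact transfer onto y
x_hat   = G_free(c_a)                       # artifact reduction
y_hat   = G_free(c)                         # self-reconstruction of y
y_tilde = G_free(E_c(y_hat_a))              # self-reduction path
```

For this toy decomposition the paths behave exactly as the text prescribes: x_hat recovers y, x_hat_a recovers x_a, and the self-reduction output y_tilde recovers y.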
III-B Learning

Fig. 3: An illustration of the relationships between the loss functions and ADN’s inputs and outputs.

For ADN, learning an MAR model means learning the two key components E^a_c and G: E^a_c encodes only the content of an artifact-affected image, and G generates an artifact-free image from the encoded content code. Thus, their composition readily results in an MAR model, f = G ∘ E^a_c. However, without paired data, it is challenging to directly address the learning of these two components. Therefore, we learn E^a_c and G together with the other encoders and decoders in ADN. In this way, different learning signals can be leveraged to regularize the training of E^a_c and G, which removes the requirement of paired data.

The learning aims at encouraging the outputs of the encoders and decoders to achieve the artifact disentanglement. That is, we design loss functions so that ADN outputs the intended images as denoted in Eqn. 2-4. An overview of the relationships between the loss functions and ADN's outputs is shown in Fig. 3. We can observe that ADN enables five forms of losses, namely two adversarial losses L_adv and L^a_adv, an artifact consistency loss L_art, a reconstruction loss L_rec, and a self-reduction loss L_self. The overall objective function is formulated as the weighted sum of these losses,

  L = λ_adv L_adv + λ^a_adv L^a_adv + λ_art L_art + λ_rec L_rec + λ_self L_self,   (5)

where the λ's are hyper-parameters that control the importance of each term.

Adversarial Loss. By manipulating the artifact component in the latent space, ADN outputs x̂ (Eqn. 3) and ŷ^a (Eqn. 2), where the former removes artifacts from x^a and the latter adds artifacts to y. Learning to generate these two outputs is crucial to the success of artifact disentanglement. However, since there are no paired images, it is impossible to simply apply regression losses, such as the L1 or L2 loss, to minimize the difference between ADN's outputs and the ground truths. To address this problem, we adopt the idea of adversarial learning [13] by introducing two discriminators, D and D^a, to regularize the plausibility of x̂ and ŷ^a. On the one hand, D / D^a learns to distinguish whether an image is generated by ADN or sampled from I / I^a. On the other hand, ADN learns to deceive D and D^a so that they cannot determine if the outputs from ADN are generated images or real images. In this way, D, D^a and ADN can be trained without paired images. Formally, the adversarial losses can be written as

  L_adv = E_{y~I}[log D(y)] + E_{x^a~I^a}[log(1 − D(x̂))],
  L^a_adv = E_{x^a~I^a}[log D^a(x^a)] + E_{x^a~I^a, y~I}[log(1 − D^a(ŷ^a))].   (6)
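A minimal NumPy sketch of this loss form, assuming discriminators that output probabilities in (0, 1); in practice, GAN variants such as least-squares or patch discriminators are often substituted for the vanilla formulation:

```python
import numpy as np

def adv_loss(d_real, d_fake, eps=1e-8):
    """Vanilla GAN value:  mean[log D(real) + log(1 - D(fake))].

    The discriminator maximizes this quantity; the generator minimizes
    the second term (or, in the common non-saturating variant,
    maximizes log D(fake) instead). `eps` guards against log(0).
    """
    return np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
```

Here `d_real` would hold discriminator scores on real samples (y or x^a) and `d_fake` scores on ADN's outputs (x̂ or ŷ^a).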
Reconstruction Loss. Despite the artifact disentanglement, no information should be lost, and no model-introduced artifacts should appear, during the encoding and decoding. For artifact reduction, the content information should be fully encoded and decoded by E^a_c and G. For artifact synthesis, the artifact and content components should be fully encoded and decoded by E^a_a, E and G^a. However, without paired data, the intactness of the encoding and decoding cannot be directly regularized. Therefore, we introduce two forms of reconstruction to inherently encourage the encoders and decoders to preserve the information. Specifically, ADN requires G^a ∘ E^a and G ∘ E to serve as autoencoders when encoding and decoding from the same image,

  L_rec = E_{x^a~I^a, y~I}[‖x̂^a − x^a‖_1 + ‖ŷ − y‖_1].   (7)

Here, the two outputs x̂^a (Eqn. 2) and ŷ (Eqn. 3) of ADN reconstruct the two inputs x^a and y, respectively. As a common practice in image-to-image translation problems [19], we use the L1 loss instead of the L2 loss to encourage sharper outputs.
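The reconstruction term and the L1-versus-L2 design choice can be made concrete with a small sketch (function names are ours):

```python
import numpy as np

def l1_loss(x, y):
    # Mean absolute error: penalizes all residuals proportionally.
    return np.mean(np.abs(x - y))

def l2_loss(x, y):
    # Mean squared error: down-weights small residuals, which in
    # practice tends to yield blurrier generated images.
    return np.mean((x - y) ** 2)

def recon_loss(x_a, x_hat_a, y, y_hat):
    # Sum of the two self-reconstruction errors, as in the text.
    return l1_loss(x_hat_a, x_a) + l1_loss(y_hat, y)
```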

Artifact Consistency Loss. The adversarial loss reduces metal artifacts by encouraging x̂ to resemble a sample from I. But the x̂ obtained in this way is only anatomically plausible, not anatomically precise, i.e., x̂ may not be anatomically correspondent to x^a. A naive solution to achieve the anatomical preciseness without paired data is to directly minimize the difference between x̂ and x^a with an L1 or L2 loss. However, this will induce x̂ to contain artifacts, and thus conflicts with the adversarial loss and compromises the overall learning. ADN addresses the anatomical preciseness by introducing an artifact consistency loss,

  L_art = E_{x^a~I^a, y~I}[‖(x^a − x̂) − (ŷ^a − y)‖_1].   (8)

This loss is based on the observation that the difference between x^a and x̂ and the difference between ŷ^a and y should be close, since both are attributable to the same artifact a. Unlike a direct minimization of the difference between x̂ and x^a, L_art only requires x̂ to be anatomically close to x^a, but not exactly equal, and vice versa for ŷ^a and y.
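A one-line sketch of the artifact consistency idea (variable names follow the text; this is illustrative, not the training code):

```python
import numpy as np

def artifact_consistency_loss(x_a, x_hat, y_hat_a, y):
    """The artifact removed from x^a (x_a - x_hat) should match the
    artifact added to y (y_hat_a - y), compared with an L1 norm."""
    return np.mean(np.abs((x_a - x_hat) - (y_hat_a - y)))
```

Note that the loss is zero whenever the same residual is removed and added, even if the images themselves differ, which is exactly the weaker, artifact-level constraint described above.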

Self-Reduction Loss. ADN also introduces a self-reduction mechanism. It first adds artifacts to y, which creates ŷ^a, and then removes the artifacts from ŷ^a, which results in ỹ. Thus, we can pair ỹ with y to regularize the artifact reduction in Eqn. 4 with regression,

  L_self = E_{x^a~I^a, y~I}[‖ỹ − y‖_1].   (9)
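A sketch of the self-reduction term and of assembling the weighted objective; the loss names and λ values below are placeholders, not the paper's settings:

```python
import numpy as np

def l1(a, b):
    return np.mean(np.abs(a - b))

def self_reduction_loss(y_tilde, y):
    # The doubly-translated image (add artifacts, then remove them)
    # should recover the original artifact-free image y.
    return l1(y_tilde, y)

def total_objective(losses, lambdas):
    # Weighted sum of ADN's losses; keys identify the individual terms.
    return sum(lambdas[k] * losses[k] for k in losses)
```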
Fig. 4: Basic building blocks of the encoders and decoders: (a) residual block, (b) downsampling block, (c) upsampling block, (d) final block and (e) merging block.
Fig. 5: Detailed architecture of the proposed artifact pyramid decoding (APD). The artifact-affected decoder G^a uses APD to effectively merge the artifact code from E^a_a.

III-C Network Architectures

We formulate the building components, i.e., the encoders, decoders, and discriminators, as convolutional neural networks (CNNs). Table I lists their detailed architectures. The building components consist of stacks of building blocks, some of whose structures are inspired by state-of-the-art approaches for image translation [20, 21].

As shown in Fig. 4, there are five different types of blocks. The residual, downsampling and upsampling blocks are the core blocks of the encoders and decoders. The downsampling block (Fig. 4b) uses strided convolution to reduce the dimensionality of the feature maps for better computational efficiency. Compared with max pooling layers, strided convolution adaptively selects the features for downsampling, which demonstrates better performance for generative models [22]. The residual block (Fig. 4a) includes residual connections to allow low-level features to be considered in the computation of high-level features. This design shows better performance for deep neural networks [23]. The upsampling block (Fig. 4c) converts feature maps back to their original dimension to generate the final outputs. We use an upsample layer (nearest-neighbor interpolation) followed by a convolutional layer for the upsampling. We choose this design instead of a deconvolutional layer to avoid the "checkerboard" effect [24]. All the convolutional layers in the blocks of the encoders and decoders use reflection padding, which provides better results along the edges of the generated images.
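A possible PyTorch sketch of such an upsampling block; the layer settings follow the up-block rows of Table I (5×5 kernel, stride 1, padding 2), while the activation choice and the omission of normalization layers are our assumptions:

```python
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    """Nearest-neighbor upsample followed by a convolution, instead of
    a deconvolution, to avoid checkerboard artifacts. Illustrative
    sketch, not the paper's exact block."""
    def __init__(self, in_ch, out_ch, kernel=5, pad=2):
        super().__init__()
        self.block = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.ReflectionPad2d(pad),      # reflection padding, as in the text
            nn.Conv2d(in_ch, out_ch, kernel),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)
```

Because the interpolation and the convolution are decoupled, every output pixel receives evenly overlapping contributions, which is what suppresses the checkerboard pattern of strided deconvolutions.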

It is worth noting that we propose a special way to merge the artifact code and the content code during the decoding of an artifact-affected image. We refer to this design as artifact pyramid decoding (APD), named after the feature pyramid network (FPN) [25]. For artifact encoding and decoding, we aim to effectively recover the details of the artifacts. A feature pyramid design, which includes high-definition features at relatively cheap cost, serves this purpose well. Fig. 5 shows the detailed architecture of APD. E^a_a consists of several downsampling blocks and outputs feature maps at different scales, i.e., a feature pyramid. G^a consists of a stack of residual, merging, upsampling and final blocks. It generates the artifact-affected images by merging the artifact code at different scales during the decoding. The merging blocks (Fig. 4e) in G^a first concatenate the content feature maps and artifact feature maps along the channel dimension, and then use a 1×1 convolution to locally merge the features.
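The concatenate-then-fuse merging step might look like this in PyTorch; the channel sizes are illustrative, and only the concatenation plus 1×1 convolution follow the description in the text:

```python
import torch
import torch.nn as nn

class MergeBlock(nn.Module):
    """Concatenate content and artifact feature maps along the channel
    dimension, then fuse them locally with a 1x1 convolution
    (cf. the merge rows of Table I: kernel 1, stride 1, padding 0)."""
    def __init__(self, content_ch, artifact_ch, out_ch):
        super().__init__()
        self.fuse = nn.Conv2d(content_ch + artifact_ch, out_ch, kernel_size=1)

    def forward(self, content, artifact):
        return self.fuse(torch.cat([content, artifact], dim=1))
```

In APD one such block would be applied at each scale of the artifact feature pyramid during decoding.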

Network     Block/Layer  Count  Ch.   Kernel  Stride  Pad.
E^a_c / E   down.        1      64    7       1       3
            down.        1      128   4       2       1
            down.        1      256   4       2       1
            residual     4      256   3       1       1
E^a_a       down.        1      64    7       1       3
            down.        1      128   4       2       1
            down.        1      256   4       2       1
            residual     4      256   3       1       1
G           up.          1      128   5       1       2
            up.          1      64    5       1       2
            final        1      1     7       1       3
G^a         residual     4      256   3       1       1
            merge        1      256   1       1       0
            up.          1      128   5       1       2
            merge        1      128   1       1       0
            up.          1      64    5       1       2
            merge        1      64    1       1       0
            final        1      1     7       1       3
D / D^a     conv         1      64    4       2       1
            relu         1      -     -       -       -
            down.        1      128   4       2       1
            down.        1      256   4       1       1
            conv         1      1     4       1       1
TABLE I: Architecture of the building components. “Channel (Ch.)”, “Kernel”, “Stride” and “Padding (Pad.)” denote the configurations of the convolution layers in the blocks.

IV Experiments

Fig. 6: Qualitative comparison with baseline methods on the SYN dataset. For better visualization, we segment out the metal regions through thresholding and color them in red.

IV-A Baselines

We compare our method with nine methods that are closely related to our problem. Two of the compared methods are conventional methods: LI [2] and NMAR [3]. They are widely used approaches to MAR. Three of the compared methods are supervised methods: CNNMAR [4], UNet [26] and cGANMAR [5]. CNNMAR and cGANMAR are two recent approaches that are dedicated to MAR. UNet is a general CNN framework that shows effectiveness in many image-to-image problems. The other four compared methods are unsupervised methods: CycleGAN [14], DIP [17], MUNIT [15] and DRIT [16]. They are currently state-of-the-art approaches to unsupervised image-to-image translation problems.

For the implementations of the compared methods, we use their officially released code whenever possible. For LI and NMAR, there is no official code, so we adopt the implementations used in CNNMAR. For UNet, we use a publicly available PyTorch implementation. For cGANMAR, we train the model with the official code of Pix2Pix [19], as cGANMAR is identical to Pix2Pix at the backend.

IV-B Datasets

We evaluate the proposed method on one synthesized dataset and two clinical datasets. We refer to them as SYN, CL1, and CL2, respectively. For SYN, we randomly select artifact-free CT images from DeepLesion [27] and follow the method from CNNMAR [4] to synthesize metal artifacts. CNNMAR is one of the state-of-the-art supervised approaches to MAR. To generate the paired data for training, it simulates the beam hardening effect and Poisson noise during the synthesis of metal-affected polychromatic projection data from artifact-free CT images. As beam hardening effect and Poisson noise are two major causes of metal artifacts, and for a fair comparison, we apply the metal artifact synthesis method from CNNMAR in our experiments. We use of the synthesized pairs for training and validation and the remaining pairs for testing.

For CL1, we choose the vertebrae localization and identification dataset from Spineweb [28]. This is a challenging CT dataset for localization problems with a significant portion of its images containing metallic implants. We split the CT images from this dataset into two groups, one with artifacts and the other without artifacts. First, we identify regions with HU values greater than as the metal regions. Then, CT images whose largest-connected metal regions have more than 400 pixels are selected as artifact-affected images. CT images with the largest HU values less than are selected as artifact-free images. After this selection, the artifact-affected group contains images and the artifact-free group contains images. We withhold images from the artifact-affected group for testing.
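The grouping rule for CL1 can be sketched with thresholding plus connected-component analysis. The thresholds and the minimum region size are left as parameters since the exact values are not reproduced here, and the function name is ours:

```python
import numpy as np
from scipy import ndimage

def classify_slice(ct_hu, metal_thresh, clean_thresh, min_pixels):
    """Assign a CT slice to the artifact-affected or artifact-free group.

    ct_hu:        2D array of HU values.
    metal_thresh: HU value above which a pixel counts as metal.
    clean_thresh: slices whose maximum HU stays below this are clean.
    min_pixels:   minimum size of the largest connected metal region.
    """
    metal = ct_hu > metal_thresh
    labels, n = ndimage.label(metal)  # connected components of the metal mask
    largest = max((np.sum(labels == i) for i in range(1, n + 1)), default=0)
    if largest >= min_pixels:
        return "artifact-affected"
    if ct_hu.max() < clean_thresh:
        return "artifact-free"
    return "excluded"  # ambiguous slices belong to neither group
```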

For CL2, we investigate the performance of the proposed method under a more challenging cross-modality setting. Specifically, the artifact-affected images of CL2 are from a cone-beam CT (CBCT) dataset collected during spinal interventions. Images from this dataset are very noisy. The majority of them contain metal artifacts while the metal implants are mostly not within the imaging field of view. There are in total CBCT images from this dataset, among which 200 images are withheld for testing. For the artifact-free images, we reuse the CT images collected from CL1.

Note that LI, NMAR, and CNNMAR require the availability of raw X-ray projections which however are not provided by SYN, CL1, and CL2. Therefore, we follow the literature [4] by synthesizing the X-ray projections via forward projection. For SYN, we first forward project the artifact-free CT images and then mask out the metal traces. For CL1 and CL2, there are no ground truth artifact-free CT images available. Therefore, the X-ray projections are obtained by forward projecting the artifact-affected CT images. The metal traces are also segmented and masked out for projection interpolation.
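A toy parallel-beam forward projector conveys the idea of synthesizing projections and locating metal traces; real pipelines use a proper Radon transform and the scanner's fan-beam geometry, so everything below is an illustrative stand-in:

```python
import numpy as np
from scipy import ndimage

def forward_project(image, angles_deg):
    """Minimal parallel-beam forward projection: rotate the image and
    sum along one axis to obtain one projection per angle."""
    return np.stack([
        ndimage.rotate(image, a, reshape=False, order=1).sum(axis=0)
        for a in angles_deg
    ])

def metal_trace(metal_mask, angles_deg):
    """Mark every detector bin touched by the forward-projected metal
    mask as corrupted, to be interpolated later."""
    return forward_project(metal_mask.astype(float), angles_deg) > 1e-6
```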

IV-C Training and Testing

We implement our method under the PyTorch deep learning framework and use the Adam optimizer with learning rate to minimize the objective function. For the hyper-parameters, we use , for SYN and CL1, and use , for CL2.

Due to the artifact synthesis, SYN contains paired images for supervised learning. To simulate the unsupervised setting for SYN, we evenly divide the training pairs into two groups. For one group, only artifact-affected images are used and their corresponding artifact-free images are withheld. For the other group, only artifact-free images are used and their corresponding artifact-affected images are withheld. During the training of the unsupervised methods, we randomly select one image from each of the two groups as the input.
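The even, unpaired sampling scheme used across these experiments is simple to sketch (pure Python; the generator name is ours):

```python
import random

def unpaired_batches(artifact_affected, artifact_free, n_iters, seed=0):
    """Each training step draws one image from each group independently,
    so the two groups are sampled evenly regardless of their relative
    sizes -- which also mitigates group imbalance."""
    rng = random.Random(seed)
    for _ in range(n_iters):
        yield rng.choice(artifact_affected), rng.choice(artifact_free)
```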

To train the supervised methods with CL1, we first synthesize metal artifacts using the images from the artifact-free group of CL1. Then, we train the supervised methods with the synthesized pairs. During testing, the trained models are applied to the testing set containing only clinical metal artifact images. To train the unsupervised methods, we randomly select one image from the artifact-affected group and the other from the artifact-free group as the input. In this way, the artifact-affected images and artifact-free images are sampled evenly during training which helps with the data imbalance between the artifact-affected and artifact-free groups.

For CL2, synthesizing metal artifacts is not possible due to the unavailability of artifact-free CBCT images. Therefore, for the supervised methods, we directly use the models trained for CL1. In other words, the supervised methods are trained on synthesized CT images (from CL1) and tested on clinical CBCT images (from CL2). For the unsupervised models, each time we randomly select one artifact-affected CBCT image and one artifact-free CT image as the input for training.

IV-D Performance on Synthesized Data

Method                          PSNR   SSIM
Conventional   LI [2]           32.0   91.0
               NMAR [3]         32.1   91.2
Supervised     CNNMAR [4]       32.5   91.4
               UNet [26]        34.8   93.1
               cGANMAR [5]      34.1   93.4
Unsupervised   CycleGAN [14]    30.8   72.9
               DIP [17]         26.4   75.9
               MUNIT [15]       14.9   7.5
               DRIT [16]        25.6   79.7
               Ours             33.6   92.4
TABLE II: Quantitative comparison with baseline methods on the SYN dataset.
Fig. 7: Qualitative comparison with baseline methods on the CL1 dataset. For better visualization, we obtain the metal regions through thresholding and color them with red.

SYN contains paired data, allowing for both quantitative and qualitative evaluations. Following the convention in the literature, we use peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) as the metrics for the quantitative evaluation. For both metrics, the higher values are better. Table II and Fig. 6 show the quantitative and qualitative evaluation results, respectively.
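For reference, PSNR and a simplified SSIM can be computed as follows. Standard SSIM evaluations use a sliding window (e.g., skimage.metrics.structural_similarity); the global single-window variant below only illustrates the formula:

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher is better."""
    mse = np.mean((np.asarray(ref, dtype=np.float64) - img) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM over the whole image (illustrative only)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```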

We observe that our proposed method performs significantly better than the other unsupervised methods. MUNIT focuses more on diverse and realistic outputs (Fig. 6j) with less constraint on structural similarity. CycleGAN and DRIT perform better, as both models require the artifact-corrected outputs to be able to transform back to the original artifact-affected images. Although this helps preserve content information, it also encourages the models to keep the artifacts. Therefore, as shown in Fig. 6h and 6k, the artifacts cannot be effectively reduced. DIP does not reduce much of the metal artifacts in the input image (Fig. 6i), as it is not designed to handle the more structured metal artifacts.

We also find that the performance of our method is on par with the conventional and supervised methods. The performance of UNet is close to that of cGANMAR, which at its backend uses a UNet-like architecture. However, due to the use of a GAN, cGANMAR produces sharper outputs (Fig. 6g) than UNet (Fig. 6f). In terms of PSNR and SSIM, both methods only slightly outperform our method. LI, NMAR, and CNNMAR are all projection interpolation based methods. NMAR is better than LI as it uses prior images to guide the projection interpolation. CNNMAR uses a CNN to learn the generation of the prior images and thus shows better performance than NMAR. As we can see, ADN performs better than these projection interpolation based approaches both quantitatively and qualitatively.

IV-E Performance on Clinical Data

Fig. 8: Qualitative comparison with baseline methods on the CL2 dataset.

Next, we investigate the performance of the proposed method on clinical data. Since there are no ground truths available for the clinical images, only qualitative comparisons are performed. The qualitative evaluation results of CL1 are shown in Fig. 7. Here, all the supervised methods are trained with paired images that are synthesized from the artifact-free group of CL1. We can see that UNet and cGANMAR do not generalize well when applied to clinical images (Fig. 7f and 7g). LI, NMAR, and CNNMAR are more robust as they correct the artifacts in the projection domain. However, the projection domain corrections also introduce secondary artifacts (Fig. 7c, 7d and 7e). For the more challenging CL2 dataset (Fig. 8), all the supervised methods fail. This is not totally unexpected as the supervised methods are trained using only CT images because of the lack of artifact-free CBCT images. As the metallic implants of CL2 are not within the imaging field of view, there are no metal traces available and the projection interpolation based methods do not work (Fig. 8c, 8d and 8e). Similar to the cases with SYN, the other unsupervised methods also show inferior performances when evaluated on both the CL1 and CL2 datasets. In contrast, our method removes the dark shadings and streaks significantly without introducing secondary artifacts.

IV-F Ablation Study

Fig. 9: Qualitative comparison of different variants of ADN. The compared models (M1-M4) are trained with different combinations of the loss functions discussed in Sec. III-B.

We perform an ablation study to understand the effectiveness of several designs of ADN. All the experiments are conducted with the SYN dataset so that both the quantitative and qualitative performances can be analyzed. Table III and Fig. 9 show the experimental results, where the performances of ADN (M4) and its three variants (M1-M3) are compared. M1 refers to the model trained with only the adversarial losses L_adv and L^a_adv. M2 refers to the model trained with both the adversarial losses and the reconstruction loss L_rec. M3 refers to the model trained with the adversarial losses, the reconstruction loss, and the artifact consistency loss L_art. M4 refers to the model trained with all the losses, including the self-reduction loss L_self, i.e., the full ADN. We use M4 and ADN interchangeably in the experiments.

From Fig. 9, we can observe that M1 generates artifact-free images that are structurally similar to the inputs. However, with only the adversarial loss, there is no constraint forcing the content of the generated images to exactly match the inputs. Thus, we can see that many details of the inputs are lost and some anatomical structures are mismatched. In contrast, the results from M2 maintain most of the anatomical details of the inputs. This demonstrates that learning to reconstruct the inputs helps guide the model to preserve the details of the inputs. However, as the reconstruction loss is applied in a self-reconstruction manner, there is no direct penalty for the anatomical reconstruction error during the artifact reduction. Thus, we can still observe some minor anatomical imperfections in the outputs of M2.

M3 improves on M2 by including the artifact consistency loss, which directly measures the pixel-wise anatomical differences between the inputs and the generated outputs. As shown in Fig. 9, the results of M3 precisely preserve the content of the inputs and suppress most of the metal artifacts. With M4, the outputs are improved further, showing that the self-reduction mechanism, which lets the model learn to reduce synthesized artifacts, is indeed helpful. The quantitative results in Table III are consistent with our qualitative observations in Fig. 9.
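The variants form a nested sequence of loss combinations. The sketch below summarizes how such an ablation objective could be assembled; the function names, the variant encoding, and the weights are illustrative assumptions, not values from the paper.

```python
import numpy as np

def l1(a, b):
    """Pixel-wise L1 distance, as used for reconstruction-style losses."""
    return float(np.mean(np.abs(a - b)))

def ablation_loss(variant, adv, rec=None, art=None, self_red=None,
                  w_rec=20.0, w_art=20.0, w_self=20.0):
    """Assemble the objective for ablation variants M1-M4.

    variant  : 1..4, selecting how many loss terms are active
    adv      : adversarial loss value (all variants)
    rec      : self-reconstruction loss (M2 and above)
    art      : artifact consistency loss (M3 and above)
    self_red : self-reduction loss (M4 only)
    Weights w_* are placeholders, not the paper's settings.
    """
    loss = adv
    if variant >= 2:
        loss += w_rec * rec
    if variant >= 3:
        loss += w_art * art
    if variant >= 4:
        loss += w_self * self_red
    return loss
```

The nesting makes each row of Table III differ from the previous one by exactly one added term, which is what isolates each term's contribution.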

Method                                    PSNR    SSIM
M1 (adversarial loss only)                21.7    61.5
M2 (M1 with reconstruction loss)          26.3    82.1
M3 (M2 with artifact consistency loss)    32.8    91.6
M4 (M3 with self-reduction loss)          33.6    92.4
TABLE III: Quantitative comparison of different variants of ADN. The compared models (M1-M4) are trained with different combinations of the loss functions discussed in Sec. III-B.
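Quantitative comparisons of this kind typically rely on fidelity metrics such as PSNR, computed against the artifact-free ground truth. For reference, a minimal PSNR implementation (the standard formula, not code from ADN):

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to reference.

    data_range is the maximum possible pixel value (e.g. 1.0 for
    normalized images, 255 for 8-bit images).
    """
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)
```

Note that PSNR measures pixel-wise fidelity only; structural metrics such as SSIM complement it by comparing local luminance, contrast, and structure.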

IV-G Artifact Synthesis

In addition to artifact reduction, ADN also supports unsupervised artifact synthesis. This capability arises from two designs. First, the adversarial loss encourages the synthesized output to look like a real sample from the artifact-affected image domain, i.e., the metal artifacts should look real. Second, the artifact consistency loss encourages the output to contain the metal artifacts of the artifact-affected input while suppressing the transfer of its content component. This section investigates the effectiveness of these two designs. The experiments are performed on the CL1 dataset, as learning to synthesize clinical artifacts is more practical and challenging than learning to synthesize the artifacts of SYN, whose artifacts are themselves synthesized. Fig. 10 shows the experimental results, where each row is an example of artifact synthesis: the left image is a clinical image with metal artifacts, the middle image is a clinical image without artifacts, and the right image is the result of transferring the artifacts from the left image onto the middle image. Except for the positioning of the metal implants, the synthesized artifacts look realistic: they merge naturally into the artifact-free images, making it difficult to tell that the artifacts are synthesized. More importantly, only the artifacts are transferred; almost no content leaks into the artifact-free images. Note that our model is data-driven: if an anatomical structure or lesion resembles metal artifacts, it might also be transferred.
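At inference time, artifact transfer amounts to decoding the content code of the clean image together with the artifact code of the artifact-affected image. The sketch below shows only this structure, with toy stand-in encoders (an additive artifact model used purely for illustration; ADN's encoders and decoders are learned networks):

```python
import numpy as np

def transfer_artifact(x_art, y_clean, enc_content, enc_artifact, dec_with_art):
    """Transfer the artifact component of x_art onto y_clean.

    enc_content / enc_artifact / dec_with_art stand in for the learned
    encoders and decoder; here they are arbitrary callables.
    """
    a = enc_artifact(x_art)   # artifact-only code from the artifact image
    c = enc_content(y_clean)  # anatomical content code from the clean image
    return dec_with_art(c, a)  # decode content together with the artifact code

# Toy stand-ins: the "artifact" is modeled as an additive residual,
# purely for illustration of the data flow.
enc_artifact = lambda x: x - np.round(x)
enc_content = lambda x: x
dec_with_art = lambda c, a: c + a
```

The key property demonstrated in Fig. 10 is that `enc_artifact` captures only the artifact component, so the clean image's anatomy passes through unchanged.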

Fig. 10: Metal artifact transfer. Left: clinical images with metal artifacts. Middle: clinical images without metal artifacts. Right: the metal artifacts from the left column transferred to the artifact-free images in the middle column.

V Discussions

Applications to Artifact Reduction. Given the flexibility of ADN, we expect many applications to artifact reduction in medicine, where obtaining paired data is often impractical. First, as we have demonstrated, ADN can be applied to address metal artifacts. It reduces metal artifacts directly in CT images, which is critical in scenarios where researchers or healthcare providers have no access to the raw projection data or the associated reconstruction algorithms. For manufacturers, ADN can be applied as a post-processing step to complement an in-house MAR algorithm that addresses metal artifacts in the projection data during CT reconstruction.

Second, although the problem under investigation is MAR, ADN should extend to other artifact reduction problems as well. The problem formulation makes no assumption about the nature of the artifacts, so ADN should also apply to problems such as deblurring, destreaking, and denoising. In fact, in our experiments the input images from CL1 (Fig. 7b) are slightly noisy while the outputs of ADN are smoother. Similarly, the input images from CL2 (Fig. 8b) contain several types of artifacts, such as noise and streaking artifacts, and ADN handles them well.

Applications to Artifact Synthesis. By combining the designs introduced above, ADN can be applied to synthesize artifacts in an artifact-free image. As shown in Fig. 10, the synthesized artifacts look natural and realistic, which may have practical applications in medical image analysis. For example, a CT image segmentation model may not work well in the presence of metal artifacts if there are not enough metal-affected images in its training set. Using ADN, we could significantly increase the number of metal-affected training images via realistic metal artifact synthesis, potentially improving the performance of the CT segmentation model.
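Such augmentation could be wired up as a simple preprocessing loop. The sketch below is hypothetical: `synthesize` stands in for ADN's artifact-transfer pass, and the sampling strategy is an assumption, not part of the paper.

```python
import random

def augment_with_artifacts(clean_images, artifact_images, synthesize, ratio=0.5):
    """Augment a segmentation training set by pasting synthesized metal
    artifacts onto a fraction of the clean images.

    synthesize(clean, donor) stands in for ADN's artifact-transfer pass;
    ratio is the probability that a given clean image is augmented.
    """
    augmented = []
    for img in clean_images:
        if random.random() < ratio:
            donor = random.choice(artifact_images)  # pick an artifact source
            augmented.append(synthesize(img, donor))
        else:
            augmented.append(img)  # keep the clean image as-is
    return augmented
```

Because the segmentation labels belong to the clean anatomy, they remain valid for the augmented images: only the artifact appearance changes.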

VI Conclusions

We present an unsupervised learning approach to MAR. Through the development of an artifact disentanglement network, we have shown how to leverage artifact disentanglement to achieve different forms of image translation as well as self-reconstruction, which eliminates the need for paired images during training. To understand the effectiveness of this approach, we performed extensive evaluations on one synthesized and two clinical datasets. The results demonstrate that an unsupervised method can achieve performance comparable to supervised methods on a synthesized dataset. More importantly, they also show that directly learning MAR from clinical CT images in an unsupervised setting is a more feasible and robust approach than transferring knowledge learned from synthesized data to clinical data. We believe these findings will stimulate further research on medical image artifact reduction in the unsupervised setting.


This work was supported in part by NSF award #1722847 and the Morris K. Udall Center of Excellence in Parkinson’s Disease Research by NIH.

