Unsupervised Deformable Image Registration Using Cycle-Consistent CNN

07/02/2019 ∙ by Boah Kim, et al. ∙ KAIST Department of Mathematical Sciences

Medical image registration is one of the key processing steps for biomedical image analysis such as cancer diagnosis. Recently, deep-learning-based supervised and unsupervised image registration methods have been extensively studied because they achieve excellent performance at ultra-fast computation times compared to classical approaches. In this paper, we present a novel unsupervised medical image registration method that trains a deep neural network for deformable registration of 3D volumes using cycle consistency. Thanks to the cycle consistency, the proposed deep neural network can take diverse pairs of images with severe deformation and register them accurately. Experimental results using multiphase liver CT images demonstrate that our method provides very precise 3D image registration within a few seconds, resulting in more accurate cancer size estimation.


1 Introduction

Radiologists often diagnose the progression of disease by comparing medical images at different temporal phases. In the case of a liver tumor such as hepatocellular carcinoma (HCC), the contrast between normal liver tissue and the tumor region in contrast-enhanced CT (CECT) varies distinctly before and after the injection of contrast agent. This provides radiologists an important clue to diagnose cancers and to plan surgery or radiation therapy [7]. However, liver images taken at different phases usually differ in shape due to disease progression, breathing, patient motion, etc., so image registration is important for improving the accuracy of dynamic studies.

Classical image registration methods [11, 4] are usually implemented in a variational framework that solves an energy minimization problem over the space of deformations. Since diffeomorphic image registration ensures topology preservation and a one-to-one mapping between the source and target images, algorithmic extensions to large deformations such as LDDMM [3] and SyN [1] have been applied in various image registration studies. However, these approaches usually require substantial time and extensive computation.

To address this issue, recent image registration techniques are often based on deep neural networks that generate deformation fields instantaneously. In supervised learning approaches [12, 14], ground-truth deformation fields, typically generated by a traditional registration method, are required for training the neural networks. However, the performance of these supervised methods depends on the quality of the ground-truth registration fields, and they do not explicitly enforce a consistency criterion to uniquely describe the correspondences between two images.

To overcome the aforementioned limitations and provide a topology-preservation guarantee, many unsupervised learning methods have recently been developed. Balakrishnan et al. [2] propose a 3D medical image registration algorithm using a spatial transformer network. Zhang [13] presents a CNN framework that enforces an inverse-consistent constraint on the deformation fields. However, for the registration of volumes with large deformations, such as livers, the existing unsupervised learning methods often produce inaccurate registration due to potential degeneracy of the mapping. Although Dalca et al. [5] tried to address this problem by incorporating a diffeomorphic integration layer, we found that its application to liver registration is still limited.

In this paper, we present a novel unsupervised registration method using convolutional neural networks (CNN) with cycle consistency [14]. We show that the cyclic constraint can be adopted naturally for image registration, and that this cycle consistency improves topology preservation by producing fewer folding artifacts. Moreover, our network is trained with diverse source and target images from multiphase CECT acquisitions, so that once trained, a single neural network provides deformable registration between every pair of phases. Experimental results demonstrate that the proposed method performs accurate 3D registration of any pair of images within a few seconds on the challenging problem of 3D liver registration in multiphase CECT.

2 Proposed Method

Figure 1: The overall framework of the proposed method for image registration. The input images in different phases are denoted as $A$ and $B$. The short-dashed line indicates the floating image and the long-dashed line denotes the fixed image.

The overall framework of our method is illustrated in Figure 1. For the input images $A$ and $B$ in different phases, we define two registration networks $G_{AB}: (A, B) \mapsto \phi_{AB}$ and $G_{BA}: (B, A) \mapsto \phi_{BA}$, where $\phi_{AB}$ (resp. $\phi_{BA}$) denotes the 3D deformation field from $A$ to $B$ (resp. from $B$ to $A$). We use a 3D spatial transformation layer in the networks to warp the moving image with the estimated deformation field, so that the registration networks are trained to minimize the dissimilarity between the deformed moving (source) image and the fixed (target) image. Accordingly, once a pair of images is given to the registration networks, the moving image is deformed into the fixed image.

To guarantee the topology preservation between the deformed and fixed images, we here adopt the cycle consistency constraint between the original moving image and its re-deformed image. That is, the deformed volumes are given as the inputs to the networks again by switching their order to impose the cycle consistency. This constraint ensures that the shape of deformed images successively returns to the original shape.
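To make the data flow of this double deformation concrete, it can be sketched as follows. This is a minimal illustration, not the authors' code: `reg_net` is a hypothetical stand-in for a registration network mapping a (moving, fixed) pair to a deformation field, and `warp` stands in for the 3D spatial transformation layer.

```python
def cycle_pass(A, B, reg_net, warp):
    """One cycle-consistent forward pass over an image pair (A, B).

    reg_net(moving, fixed) -> deformation field (hypothetical stand-in)
    warp(image, field)     -> deformed image   (hypothetical stand-in)
    """
    # First deformation: each image is warped toward the other phase.
    B_hat = warp(A, reg_net(A, B))   # A deformed to look like B
    A_hat = warp(B, reg_net(B, A))   # B deformed to look like A

    # Second deformation: feed the deformed volumes back in swapped
    # order, so each volume should return to its original shape.
    A_tilde = warp(B_hat, reg_net(B_hat, A))
    B_tilde = warp(A_hat, reg_net(A_hat, B))

    # The cycle loss penalizes |A_tilde - A| and |B_tilde - B|.
    return A_tilde, B_tilde
```

With well-trained networks, `A_tilde` and `B_tilde` should closely match `A` and `B`; the loss in Section 2.1 penalizes their difference.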

2.1 Loss Function

We train the networks by minimizing the following loss function:

$\mathcal{L} = \mathcal{L}_{regist} + \alpha\,\mathcal{L}_{cycle} + \beta\,\mathcal{L}_{identity}$  (1)

where $\mathcal{L}_{regist}$, $\mathcal{L}_{cycle}$, and $\mathcal{L}_{identity}$ are the registration loss, cycle loss, and identity loss, respectively (see Fig. 2), and $\alpha$ and $\beta$ are hyper-parameters. Based on this loss function, our method is trained in an unsupervised manner without ground-truth deformation fields.

Registration Loss. The registration loss function is based on the energy function of classical variational image registration. For example, the energy function for the registration of a floating image $A$ to the target volume $B$ is composed of two terms:

$E(A, B, \phi_{AB}) = E_{sim}\big(T(A, \phi_{AB}),\, B\big) + \lambda\, E_{reg}(\phi_{AB})$  (2)

where $A$ is the moving image and $B$ is the fixed image. $E_{sim}$ computes the dissimilarity between the image deformed by the estimated deformation field $\phi_{AB}$ and the fixed image, and $E_{reg}$ evaluates the smoothness of the deformation field. Here, $T$ denotes the 3D spatial transformation function. In particular, we employ the cross-correlation as the similarity function to deal with the contrast change during the CECT exam, and the $\ell_2$-loss as the regularization function. Accordingly, our registration loss function can be written as:

$\mathcal{L}_{regist} = -\,CC\big(B,\, T(A, \phi_{AB})\big) + \lambda \|\nabla\phi_{AB}\|^2 \,-\, CC\big(A,\, T(B, \phi_{BA})\big) + \lambda \|\nabla\phi_{BA}\|^2$  (3)

where $CC$ denotes the local cross-correlation defined by

$CC(B, \hat{A}) = \sum_{x} \dfrac{\Big(\sum_{x_i} \big(B(x_i) - \bar{B}(x)\big)\big(\hat{A}(x_i) - \bar{\hat{A}}(x)\big)\Big)^2}{\sum_{x_i} \big(B(x_i) - \bar{B}(x)\big)^2 \, \sum_{x_i} \big(\hat{A}(x_i) - \bar{\hat{A}}(x)\big)^2}$  (4)

where $\bar{B}(x)$ and $\bar{\hat{A}}(x)$ denote the local mean values of $B$ and $\hat{A}$ over a window around voxel $x$, and $x_i$ ranges over that window.
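For reference, the windowed cross-correlation of Eq. (4) can be computed directly. The NumPy sketch below (the paper itself uses PyTorch) loops over 3×3×3 windows; the `eps` stabilizer against constant windows is an assumption for the sketch, not a detail from the paper.

```python
import numpy as np

def local_cc(fixed, warped, win=3, eps=1e-5):
    """Sum of squared local cross-correlations (Eq. 4), computed over
    win^3 windows centered at interior voxels of a 3D volume."""
    r = win // 2
    D, H, W = fixed.shape
    total = 0.0
    for z in range(r, D - r):
        for y in range(r, H - r):
            for x in range(r, W - r):
                f = fixed[z-r:z+r+1, y-r:y+r+1, x-r:x+r+1]
                w = warped[z-r:z+r+1, y-r:y+r+1, x-r:x+r+1]
                fc, wc = f - f.mean(), w - w.mean()  # local de-meaning
                num = (fc * wc).sum() ** 2
                den = (fc * fc).sum() * (wc * wc).sum() + eps
                total += num / den
    return total
```

Each window contributes a value in [0, 1], so a larger total indicates better alignment; the loss in Eq. (3) negates it.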

Figure 2: The diagram of the loss function structure in our proposed method. The short- and long-dashed lines indicate the floating and fixed images, respectively.

Cycle Loss. The cycle consistency condition is implemented by minimizing the loss function shown in Fig. 2(a). Since an image $A$ is first deformed to an image $\hat{B}$, which is then deformed again by the other network to generate the image $\tilde{A}$, the cyclic consistency imposes $\tilde{A} \simeq A$. Similarly, an image $B$ should be successively deformed by the two networks to generate the image $\tilde{B}$, and the cyclic consistency imposes $\tilde{B} \simeq B$.

As shown in Fig. 2(a), since each network in our registration receives both the moving image and the fixed image, the cycle consistency condition takes the form:

$\tilde{A} = T\big(\hat{B},\, G_{BA}(\hat{B}, A)\big) \simeq A, \qquad \tilde{B} = T\big(\hat{A},\, G_{AB}(\hat{A}, B)\big) \simeq B$  (5)

where $\hat{B} = T(A, \phi_{AB})$ and $\hat{A} = T(B, \phi_{BA})$, and $G_{AB}$ and $G_{BA}$ denote the two registration networks. Thus, the cycle loss is computed by:

$\mathcal{L}_{cycle} = \|\tilde{A} - A\|_1 + \|\tilde{B} - B\|_1$  (6)

where $\|\cdot\|_1$ denotes the $\ell_1$-norm.
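Given the re-deformed volumes from the cycle, Eq. (6) reduces to two $\ell_1$ distances; a minimal NumPy sketch:

```python
import numpy as np

def cycle_loss(A, A_tilde, B, B_tilde):
    """Eq. (6): l1 distance between each original volume and its
    re-deformed counterpart after the full deformation cycle."""
    return np.abs(A_tilde - A).sum() + np.abs(B_tilde - B).sum()
```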

Identity Loss. Another important consideration in the design of the loss function is that the network should not change the stationary regions of the body, i.e., the stationary regions should be fixed points of the network. As shown in Fig. 2(b), this constraint can be implemented by imposing that the input image should not change when identical images are used as the floating and reference volumes. More specifically, we use the following identity loss:

$\mathcal{L}_{identity} = -\,CC\big(A,\, T(A, G_{AB}(A, A))\big) - CC\big(B,\, T(B, G_{BA}(B, B))\big)$  (7)

By minimizing this identity loss (7), the cross-correlation between the deformed image and the fixed image is maximized. Thus, the identity loss stabilizes the estimation of the deformation field in stationary regions.

2.2 Network Architecture and 3D Spatial Transformation Layer

To generate a displacement vector field in the width, height, and depth directions, we adopt VoxelMorph-1 [2] as our baseline network. Note that our model without both the cycle and identity losses is equivalent to VoxelMorph-1. This 3D network consists of an encoder, a decoder, and their skip connections, similar to U-Net [10].

The 3D spatial transformation layer [6] deforms the moving volume $A$ with the deformation field $\phi$. We use the spatial transformation function $T$ with tri-linear interpolation for warping the image by $\phi$, which can be written as:

$T(A, \phi)(p) = \sum_{q \in \mathcal{N}(\phi(p))} A(q) \prod_{d \in \{x, y, z\}} \big(1 - |\phi(p)_d - q_d|\big)$  (8)

where $p$ indicates the voxel index, $\mathcal{N}(\phi(p))$ denotes the 8-voxel cubic neighborhood around $\phi(p)$, and $d$ ranges over the three directions in 3D image space.
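Equation (8) can be implemented directly. The following NumPy sketch (the actual layer would be a differentiable PyTorch module) warps a volume by a dense displacement field `disp`, so the sampling location is $\phi(p) = p + u(p)$; out-of-bounds neighbors are assumed to contribute zero.

```python
import numpy as np

def warp_trilinear(vol, disp):
    """Tri-linear warping of Eq. (8): out(p) = sum over the 8 cubic
    neighbors q of phi(p) of vol(q) * prod_d (1 - |phi(p)_d - q_d|),
    with phi(p) = p + disp(p); disp has shape (3,) + vol.shape."""
    D, H, W = vol.shape
    zs, ys, xs = np.meshgrid(np.arange(D), np.arange(H), np.arange(W),
                             indexing="ij")
    pz, py, px = zs + disp[0], ys + disp[1], xs + disp[2]   # phi(p)
    z0 = np.floor(pz).astype(int)
    y0 = np.floor(py).astype(int)
    x0 = np.floor(px).astype(int)
    out = np.zeros(vol.shape, dtype=float)
    for dz in (0, 1):                 # 8-voxel cubic neighborhood
        for dy in (0, 1):
            for dx in (0, 1):
                qz, qy, qx = z0 + dz, y0 + dy, x0 + dx
                # product of per-axis tri-linear weights
                wgt = ((1.0 - np.abs(pz - qz)) *
                       (1.0 - np.abs(py - qy)) *
                       (1.0 - np.abs(px - qx)))
                # neighbors outside the volume contribute nothing
                valid = ((qz >= 0) & (qz < D) & (qy >= 0) & (qy < H) &
                         (qx >= 0) & (qx < W))
                out += wgt * valid * vol[np.clip(qz, 0, D - 1),
                                         np.clip(qy, 0, H - 1),
                                         np.clip(qx, 0, W - 1)]
    return out
```

A zero displacement field reproduces the input exactly, and an integer shift reduces to nearest-voxel sampling, which makes the layer easy to sanity-check.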

3 Experiments

To verify the performance of our method, we conducted liver registration on multiphase CT images. The dataset was collected from liver cancer (HCC) patients at Asan Medical Center, Seoul, South Korea. Each scan has pathologically proven hepatic nodules and four-phase liver CT (unenhanced, arterial, portal, and 180-s delayed phases). The slice thickness was 5 mm. We did not perform pre-processing such as affine transformation, except for matching the number of slices of the moving and fixed images. Here, we extracted only the slices containing the liver using a pre-trained liver segmentation network and zero-padded the volume above and below, based on the center of mass of the liver.

We used 555 scans for training and 50 scans for testing. For network training, we stacked two volumes with different phases as the input. We normalized the input intensity by the maximum value of each volume. Also, we randomly down-sampled the training data to fit in the GPU memory, while we evaluated the test data at the original size, which differs for each pair of inputs. For data augmentation, we adopted random horizontal/vertical flipping and 90-degree rotation for each pair of training volumes. Our proposed method was implemented with the PyTorch library. We trained the models using the Adam optimizer with a batch size of 1, for 50 epochs, on an NVIDIA GeForce GTX 1080 Ti GPU.

Figure 3: Results of the target registration errors (TRE) of all 20 anatomical points in the deformed arterial and delayed images of 50 test data. Mean graph represents the mean TRE of the points for all subjects. D# in the x-axis indicates the patient number.
Method         | Arterial → Portal                   | Delayed → Portal
               | tumor size      TRE     time        | tumor size      TRE     time
               | major   minor   (mm)    (min)       | major   minor   (mm)    (min)
Elastix [8]    | 0.98    0.61    3.26    19.64       | 0.91    0.58    2.96    19.64
VoxelMorph [2] | 0.79    1.64    6.67    0.18        | 0.61    0.87    5.35    0.20
Ours           | 0.89    1.16    4.91    0.22        | 0.59    0.43    3.76    0.20
Table 1: Tumor size differences and TRE values (mm) between the deformed arterial/delayed images and the fixed portal image, and the average time (min) to deform them, on the test set.

3.1 Registration Results

We evaluated the registration performance using the target registration error (TRE) computed on 20 anatomical points in the liver and adjacent organs at the portal phase, marked by radiologists. Also, we measured the tumor size, which verifies the registration performance from the viewpoint of tumor diagnosis. We compared our method to Elastix [8], which is known for state-of-the-art performance among classical approaches, and to VoxelMorph-1 [2]. Additionally, we performed ablation studies by excluding the cycle loss or the identity loss. Apart from the loss, all ablated networks were subjected to the same training procedure for a fair comparison.

Fig. 3 shows the registration performance. We visualize the TRE values of the deformed arterial and delayed images for each test case, and also show the average TRE values over all subjects for the deformed arterial and delayed images registered to the fixed portal image. We observe that the proposed method achieves a significant improvement over VoxelMorph-1, while its error is slightly higher than that of Elastix. Also, Table 1 confirms that the tumor size in the deformed images from our proposed method is the most accurate for delayed-to-portal registration, and comparable for arterial-to-portal registration.

To demonstrate the effect of the cycle consistency, we also computed the percentage of voxels with a non-positive Jacobian determinant of the deformation fields, and the normalized mean square error (NMSE) between the original moving image and the re-deformed image. As shown in Table 2, the proposed method is less prone to the folding problem and better preserves topology in liver registration.
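The folding measure reported in Table 2 can be computed from the displacement field's Jacobian. A NumPy sketch (finite differences via `np.gradient`; unit voxel spacing is assumed):

```python
import numpy as np

def folding_fraction(disp):
    """Fraction of voxels with non-positive Jacobian determinant for the
    deformation phi(p) = p + u(p), where disp has shape (3, D, H, W).
    The Jacobian is J = I + du/dp, estimated with finite differences."""
    J = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        gi = np.gradient(disp[i], axis=(0, 1, 2))   # du_i / d(z, y, x)
        for j in range(3):
            J[..., i, j] = gi[j] + (1.0 if i == j else 0.0)
    det = np.linalg.det(J)          # batched determinant per voxel
    return float((det <= 0).mean())
```

Lower is better: a deformation that folds space (det(J) ≤ 0) destroys the one-to-one mapping, which is exactly what the cycle constraint discourages.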

Fig. 4 illustrates an example of registration results that deform the multiphase 3D images across the four distinct phases. Moreover, we measured the time required for the proposed method to deform one image into the fixed image (see Table 1). Specifically, the conventional Elastix takes approximately 19.6 minutes per registration, while the proposed method takes only about 10 seconds.

Method                 | Arterial → Portal            | Delayed → Portal
                       | % of det(J) ≤ 0    NMSE      | % of det(J) ≤ 0    NMSE
VoxelMorph [2]         | 0.0327             0.0278    | 0.0311             0.0213
Ours w/o cycle loss    | 0.0270             0.0279    | 0.0284             0.0214
Ours w/o identity loss | 0.0218             0.0279    | 0.0205             0.0208
Ours                   | 0.0175             0.0277    | 0.0181             0.0199
Table 2: Percentage of voxels with a non-positive Jacobian determinant, and normalized mean square error (NMSE), on the test set.
Figure 4: Results of multiphase liver CT registration (left) and their deformation fields (right). The diagonal images with red boxes are the original images, which are deformed to the other phases as indicated by each row. Specifically, the $(i, j)$ element of the figure represents the image deformed to the $j$-th phase from the $i$-th phase original image.

4 Conclusion

We presented an unsupervised image registration method using a cycle-consistent convolutional neural network. Using two registration networks, our proposed method is trained to satisfy the cycle consistency, which imposes inverse consistency between a pair of images. Once the networks are trained, a single network can provide accurate 3D image registration for any pair of new data, so the computational complexity is the same as that of VoxelMorph-1. Our liver registration results demonstrated that the proposed method works well for image pairs with different contrast.

References

  • [1] Avants, B.B., Epstein, C.L., Grossman, M., Gee, J.C.: Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Medical image analysis 12(1), 26–41 (2008)
  • [2] Balakrishnan, G., Zhao, A., Sabuncu, M.R., Guttag, J., Dalca, A.V.: An unsupervised learning model for deformable medical image registration. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 9252–9260 (2018)

  • [3] Beg, M.F., Miller, M.I., Trouvé, A., Younes, L.: Computing large deformation metric mappings via geodesic flows of diffeomorphisms. International journal of computer vision 61(2), 139–157 (2005)
  • [4] Christensen, G.E., Johnson, H.J.: Consistent image registration. IEEE transactions on medical imaging 20(7), 568–582 (2001)
  • [5] Dalca, A.V., Balakrishnan, G., Guttag, J., Sabuncu, M.R.: Unsupervised learning for fast probabilistic diffeomorphic registration. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 729–738. Springer (2018)
  • [6] Jaderberg, M., Simonyan, K., Zisserman, A., et al.: Spatial transformer networks. In: Advances in neural information processing systems. pp. 2017–2025 (2015)
  • [7] Kim, K.W., Lee, J.M., Choi, B.I.: Assessment of the treatment response of hcc. Abdominal imaging 36(3), 300–314 (2011)
  • [8] Klein, S., Staring, M., Murphy, K., Viergever, M.A., Pluim, J.P.: Elastix: a toolbox for intensity-based medical image registration. IEEE transactions on medical imaging 29(1), 196–205 (2010)
  • [9] Mahapatra, D., Antony, B., Sedai, S., Garnavi, R.: Deformable medical image registration using generative adversarial networks. In: Biomedical Imaging (ISBI 2018), 2018 IEEE 15th International Symposium on. pp. 1449–1453. IEEE (2018)
  • [10] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical image computing and computer-assisted intervention. pp. 234–241. Springer (2015)
  • [11] Thirion, J.P.: Image matching as a diffusion process: an analogy with maxwell’s demons. Medical image analysis 2(3), 243–260 (1998)
  • [12] Yang, X., Kwitt, R., Styner, M., Niethammer, M.: Quicksilver: Fast predictive image registration–a deep learning approach. NeuroImage 158, 378–396 (2017)
  • [13] Zhang, J.: Inverse-consistent deep networks for unsupervised deformable image registration. arXiv preprint arXiv:1809.03443 (2018)
  • [14] Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 2223–2232 (2017)