Deformable multimodal image registration has become essential for many procedures in image-guided therapies, e.g., preoperative planning, intervention, and diagnosis. Owing to their substantial improvement in computational efficiency over traditional iterative registration approaches, learning-based registration approaches are becoming increasingly prominent in time-sensitive applications.
1.0.1 Related work.
Many learning-based registration approaches adopt fully supervised or semi-supervised strategies. Their networks are trained with ground-truth deformation fields or segmentation masks [5, 19, 13, 12, 16], and may struggle with limited or imperfect data labeling. A number of unsupervised registration approaches have been proposed to overcome this problem by training on unlabeled data to minimize traditional similarity metrics, e.g., mean squared intensity differences [4, 26, 15, 11, 21, 17]. However, the performance of these methods is inherently limited by the choice of similarity metric. Given the limited selection of multimodal similarity metrics, unsupervised registration approaches may have difficulty outperforming traditional multimodal registration methods, as both essentially optimize the same cost functions. A recent trend for multimodal image registration takes advantage of latent feature disentanglement and image-to-image translation [6, 20, 23]. Specifically, translation-based approaches use Generative Adversarial Networks (GANs) to translate images from one modality into the other, and are thus able to convert the difficult multimodal registration into a simpler unimodal task. However, being a challenging topic by itself, image translation may inevitably produce artificial anatomical features that can further interfere with the registration process.
In this work, we propose a novel translation-based, fully unsupervised multimodal image registration approach. In the context of Computed Tomography (CT) to Magnetic Resonance (MR) image registration, previous translation-based approaches would translate a CT image into an MR-like image (tMR), and use tMR-to-MR registration to estimate the final deformation field. In our approach, the network estimates two deformation fields, φ_s for tMR-to-MR and φ_o for CT-to-MR, in a dual-stream fashion. The addition of the original CT-to-MR stream enables the network to implicitly regularize φ_s, mitigating certain image translation problems, e.g., artificial features. The network further automatically learns how to fuse φ_o and φ_s towards achieving the best registration accuracy.
Contributions and advantages of our work can be summarized as follows:
Our method leverages the deformation fields estimated from the original multimodal stream and synthetic unimodal stream to overcome the shortcomings of translation-based registration;
We improve the fidelity of organ boundaries in the translated MR by adding two extra constraints in the image-to-image translation model Cycle-GAN.
We evaluate our method on two clinically acquired datasets. It outperforms state-of-the-art traditional, unsupervised and translation-based registration approaches.
In this work, we propose a general learning framework for robustly registering CT images to MR images in a fully unsupervised manner.
First, given a moving CT image and a fixed MR image, our improved Cycle-GAN module translates the CT image into an MR-like image (tMR). Then, our dual-stream subnetworks, UNet_o and UNet_s, estimate two deformation fields, φ_o and φ_s, respectively, and the final deformation field φ_f is obtained via a proposed fusion module. Finally, the moving CT image is warped with φ_f via a Spatial Transformation Network (STN), while the entire registration network aims to maximize the similarity between the moved and fixed images. The pipeline of our method is shown in Fig. 1.
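The pipeline above can be sketched in Python. Every name below (`translate_ct_to_mr`, `unet_o`, `unet_s`, `fuse`, `warp`) is a placeholder standing in for the corresponding trained component, not the authors' actual code:

```python
import numpy as np

def translate_ct_to_mr(ct):
    # Stand-in for the improved Cycle-GAN generator (CT -> MR-like).
    return ct  # identity placeholder

def unet_o(moving, fixed):
    # Multimodal-stream UNet: predicts a 3-channel deformation field.
    return np.zeros(moving.shape + (3,), dtype=np.float32)

def unet_s(moving, fixed):
    # Unimodal-stream UNet: same output shape as unet_o.
    return np.zeros(moving.shape + (3,), dtype=np.float32)

def fuse(phi_o, phi_s):
    # Fusion placeholder: a simple average; the paper instead learns
    # a 3D convolution over the stacked fields.
    return 0.5 * (phi_o + phi_s)

def warp(image, phi):
    # STN placeholder: a real implementation would resample `image` at
    # the displaced coordinates with trilinear interpolation.
    return image

def register(r_ct, r_mr):
    t_mr = translate_ct_to_mr(r_ct)   # CT -> MR-like
    phi_o = unet_o(r_ct, r_mr)        # multimodal stream
    phi_s = unet_s(t_mr, r_mr)        # unimodal stream
    phi_f = fuse(phi_o, phi_s)        # fused deformation field
    return warp(r_ct, phi_f), phi_f

moved, phi_f = register(np.zeros((8, 8, 8)), np.zeros((8, 8, 8)))
```

The sketch only fixes the data flow; each placeholder would be replaced by the corresponding trained network at inference time.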
2.1 Image-to-Image Translation with Unpaired Data
The CT-to-MR translation step consists of an improved Cycle-GAN with additional structure and identity constraints. As a state-of-the-art image-to-image translation model, Cycle-GAN can be trained without pairwise aligned CT and MR datasets of the same patient. Thus, Cycle-GAN is widely used in medical image translation [25, 1, 9].
Our Cycle-GAN model is illustrated in Fig. 2. The model consists of two generators, G_MR and G_CT, which provide CT-to-MR and MR-to-CT translation respectively. Besides, it has two discriminators, D_CT and D_MR: D_CT is used to distinguish between translated CT (tCT) and real CT (rCT), and D_MR between translated MR (tMR) and real MR (rMR). The training loss of the original Cycle-GAN adopts only two types of terms: the adversarial losses given by the two discriminators (D_CT and D_MR), and a cycle-consistency loss that prevents the generators from producing images unrelated to their inputs (see the original Cycle-GAN formulation for details).
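As a minimal illustration of the cycle-consistency term, with `g_mr` and `g_ct` as placeholder callables standing in for the trained CT-to-MR and MR-to-CT generators:

```python
import numpy as np

def l1(a, b):
    # Mean absolute error between two images.
    return float(np.mean(np.abs(a - b)))

def cycle_consistency_loss(ct, mr, g_mr, g_ct):
    # CT -> tMR -> reconstructed CT should match the input CT, and
    # MR -> tCT -> reconstructed MR should match the input MR.
    return l1(g_ct(g_mr(ct)), ct) + l1(g_mr(g_ct(mr)), mr)

# With identity "generators" the cycle is perfect and the loss is zero.
identity = lambda x: x
```

In training, this term is minimized jointly with the adversarial losses so that translations stay tied to their inputs.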
However, training a Cycle-GAN on medical images is difficult, since the cycle-consistency loss is not sufficient to enforce structural similarity between translated and real images (as shown in the red box in Fig. 3(b)). Therefore, we introduce two additional losses, a structure-consistency loss L_sc and an identity loss L_id, to constrain the training of Cycle-GAN.
MIND (Modality Independent Neighbourhood Descriptor) is a feature that describes the local structure around each voxel. Thus, we minimize the difference in MIND features between the translated images (tCT or tMR) and the corresponding real images (rMR or rCT) to enforce structural similarity. We define L_sc as follows:

L_sc = (1 / (N_MR |R|)) Σ_x ||F(tCT)(x) − F(rMR)(x)||_1 + (1 / (N_CT |R|)) Σ_x ||F(tMR)(x) − F(rCT)(x)||_1

where F(·) represents the MIND features, N_CT and N_MR denote the number of voxels in rCT and rMR, and R is a non-local region around voxel x.
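A simplified sketch of this structure-consistency loss, using single-voxel distances to the six face-neighbours as a stand-in for the full patch-based MIND descriptor:

```python
import numpy as np

def mind_features(vol, eps=1e-6):
    # Simplified MIND: self-similarity to the 6 face-neighbours, using
    # single-voxel squared differences as the patch distance and the
    # mean distance as the local variance estimate.
    shifts = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
              (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    dists = []
    for s in shifts:
        shifted = np.roll(vol, s, axis=(0, 1, 2))
        dists.append((vol - shifted) ** 2)
    dists = np.stack(dists, axis=-1)                    # (D, H, W, 6)
    var = dists.mean(axis=-1, keepdims=True) + eps
    feats = np.exp(-dists / var)
    return feats / feats.max(axis=-1, keepdims=True)    # normalise per voxel

def structure_loss(translated, real):
    # Mean absolute difference of MIND features between a translated
    # volume and the real volume it should structurally match.
    return np.mean(np.abs(mind_features(translated) - mind_features(real)))
```

In practice this would be evaluated for both translation directions (tCT against rMR and tMR against rCT) and summed, as in the definition above.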
The identity loss (as shown in Fig. 2(b)) is included to prevent images already in the expected domain from being incorrectly translated to the other domain. With G_MR and G_CT denoting the CT-to-MR and MR-to-CT generators, we define it as:

L_id = E[||G_MR(rMR) − rMR||_1] + E[||G_CT(rCT) − rCT||_1]
Finally, the total loss of our proposed Cycle-GAN is defined as:

L_total = L_adv + λ_cyc · L_cyc + λ_sc · L_sc + λ_id · L_id

where λ_cyc, λ_sc, and λ_id denote the relative importance of the cycle-consistency, structure-consistency, and identity terms.
2.2 Dual-stream Multimodal Image Registration Network
As shown in Fig. 3, although our improved Cycle-GAN can better translate CT images into MR-like images, CT-to-MR translation remains challenging because it maps “simple” CT images to “complex” MR images. Most image-to-image translation methods will inevitably generate unrealistic soft-tissue details, resulting in mismatch problems. Therefore, registration methods that simply convert multimodal registration to unimodal registration via an image translation algorithm are not reliable.
In order to address this problem, we propose a dual-stream network to fully use the information of the moving, fixed and translated images as shown in Fig. 1. In particular, we can use effective similarity metrics to train our multimodal registration model without any ground-truth deformation.
2.2.1 Network Details.
As shown in Fig. 1, our dual-stream network is comprised of four parts: multimodal stream subnetwork, unimodal stream subnetwork, deformation field fusion, and Spatial Transformation Network.
In the Multimodal Stream subnetwork, the original CT (rCT) and MR (rMR) serve as the moving and fixed images, which allows the model to propagate original image information to counteract mismatch problems in the translated MR (tMR).
Through image translation, we obtain the translated MR (tMR) with an appearance similar to the fixed MR (rMR). Then, in the Unimodal Stream, tMR and rMR are used as the moving and fixed images respectively. This stream can effectively propagate more texture information and constrain the final deformation field to suppress unrealistic voxel drifts from the multimodal stream.
During training, the two streams constrain each other while also cooperating to optimize the entire network. Thus, our novel dual-stream design allows us to benefit from both the original image information and the homogeneous structural information in the translated images.
Specifically, UNet_o and UNet_s adopt the same UNet architecture used in VoxelMorph (shown in Fig. 4). The only difference is that UNet_o takes multimodal inputs while UNet_s takes unimodal inputs. Each UNet takes a single 2-channel 3D image, formed by concatenating its moving and fixed images, as input, and outputs a deformation field volume with 3 channels.
After the Uni- and Multi-modal Stream networks, we obtain two deformation fields, φ_o (for rCT and rMR) and φ_s (for tMR and rMR). We stack φ_o and φ_s and apply a 3D convolution to estimate the final deformation field φ_f, which is a 3D volume with the same shape as φ_o and φ_s.
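The fusion step can be illustrated as follows. The 1×1×1 kernel and the averaging initialisation are illustrative assumptions, not the learned convolution's actual configuration:

```python
import numpy as np

def fuse_fields(phi_o, phi_s, weights, bias):
    # phi_o, phi_s: (D, H, W, 3) deformation fields.
    # weights: (6, 3) mixing matrix of a 1x1x1 convolution; bias: (3,).
    stacked = np.concatenate([phi_o, phi_s], axis=-1)   # (D, H, W, 6)
    return np.einsum('dhwc,co->dhwo', stacked, weights) + bias

rng = np.random.default_rng(0)
phi_o = rng.normal(size=(4, 4, 4, 3))
phi_s = rng.normal(size=(4, 4, 4, 3))
# Initialising the convolution to average the two streams reproduces a
# simple mean of the fields; training would adapt these weights.
w = 0.5 * np.vstack([np.eye(3), np.eye(3)])
phi_f = fuse_fields(phi_o, phi_s, w, np.zeros(3))
```

With a larger spatial kernel the fusion could additionally mix neighbouring voxels; the per-voxel form above is the simplest learnable combination.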
To evaluate the dissimilarity between the moved and fixed images, we integrate a spatial transformation network (STN) to warp the moving image with φ_f. The loss function consists of two components, as shown in Eq. (4):

L = L_sim(I_f, I_m ∘ φ_f) + λ L_smooth(φ_f)

where I_f and I_m denote the fixed and moving images and λ is a regularization weight. The first term, L_sim, is a similarity loss that penalizes differences in appearance between the fixed and moved images; here we adopt SSIM in our experiments. The deformation regularization L_smooth adopts an L2-norm of the gradients of the final deformation field φ_f, as suggested in prior work.
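A minimal sketch of this two-part loss, assuming a global-statistics SSIM rather than the windowed SSIM used in the experiments, and NumPy in place of the Keras implementation:

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Global-statistics SSIM over the whole volume (a simplification of
    # the usual windowed SSIM).
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def smoothness(phi):
    # L2-norm of the spatial gradients of the deformation field.
    grads = np.gradient(phi, axis=(0, 1, 2))
    return sum((g ** 2).mean() for g in grads)

def registration_loss(fixed, moved, phi, lam=1.0):
    # Similarity term (1 - SSIM) plus weighted smoothness regularizer.
    return (1.0 - ssim_global(fixed, moved)) + lam * smoothness(phi)
```

A perfectly aligned pair with a zero deformation field yields a loss of zero, and any non-smooth field raises the second term.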
3 Experiments and Results
3.0.1 Dataset and Preprocessing.
We focus on the application of abdominal CT-to-MR registration. We evaluated our method on two proprietary datasets, since there is no designated public repository for this task.
1) Pig Ex-vivo Kidney CT-MR Dataset. This dataset contains 18 pairs of CT and MR kidney scans from pigs. All kidneys were manually segmented by experts. After preprocessing the data, e.g., resampling and affine spatial normalization, we cropped the data to a fixed size with 1 mm isotropic voxels and arbitrarily divided it into two groups for training (15 cases) and testing (3 cases).
2) Abdomen (ABD) CT-MR Dataset. This 50-patient dataset of CT-MR scans was collected from a local hospital and annotated with anatomical landmarks. All data were preprocessed to a common size and resolution and randomly divided into two groups for training (45 cases) and testing (5 cases).
We trained our model using the following settings: (1) The Cycle-GAN CT-to-MR translation network is based on an existing implementation, with the changes discussed in Section 2.1. (2) The Uni- and Multi-modal stream registration networks were implemented using Keras with the TensorFlow backend and trained on an NVIDIA Titan X (Pascal) GPU.
3.1 Results for CT-to-MR Translation
We extracted 1792 and 5248 slices from the transverse planes of the Pig kidney and ABD datasets respectively to train the image translation network. The weights of the cycle-consistency, structure-consistency, and identity losses were set to 10, 5, and 5 for training.
Since our registration method operates on 3D volumes, we apply the pre-trained CT-to-MR generator to translate the moving CT images into MR-like images slice-by-slice and concatenate the 2D slices into 3D volumes. Qualitative results are visualized in Fig. 3. In addition, to quantitatively evaluate the translation performance, we apply our registration method to obtain aligned CT-MR pairs and use SSIM and PSNR to judge the quality of the translated MR (shown in Table 1). In our experiments, our method predicts better MR-like images on both datasets.
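The slice-by-slice translation and re-stacking can be sketched as below, with `translate_slice` a placeholder for the pre-trained 2D generator:

```python
import numpy as np

def translate_volume(ct_vol, translate_slice):
    # Apply a 2D CT->MR generator slice-by-slice along the transverse
    # axis and stack the results back into a 3D volume.
    return np.stack(
        [translate_slice(ct_vol[z]) for z in range(ct_vol.shape[0])],
        axis=0,
    )

# Dummy "generator" used purely for illustration.
fake_generator = lambda sl: sl * 2.0
vol = np.ones((5, 8, 8))
t_mr = translate_volume(vol, fake_generator)
```

The stacked volume keeps the input's transverse ordering, so downstream 3D registration sees a spatially consistent MR-like volume.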
3.2 Registration Results
Affine registration is used as the baseline method. For traditional methods, only mutual information (MI) based SyN is compared, since MI is the only metric available in ANTs for multimodal registration. In addition to SyN, we implemented the following learning-based methods: 1) VM_MIND and VM_SSIM, which extend VoxelMorph with the similarity metrics MIND and SSIM; 2) M2U, a typical translation-based registration method, which generates tMR from CT and converts the multimodal problem to tMR-to-MR registration. It’s noteworthy that the parameters of all methods were tuned to their best results on both datasets.
Two examples of the registration results are visualized in Fig. 5, where the red and yellow contours represent the ground truth and registered organ boundaries respectively. As shown in Fig. 5, the organ boundaries aligned by the traditional SyN method have a considerable amount of disagreement. Among all learning-based methods, our method has the most visually appealing boundary alignment for both cases. VM_SSIM performed significantly worse for the kidney. VM_MIND achieved accurate registration for the kidney, but its result for the ABD case is significantly worse. Meanwhile, M2U suffers from artificial features in the image translation, which leads to an inaccurate registration result.
The quantitative results are presented in Table 2. We compare the different methods using the Dice score and target registration error (TRE), and also report the average run-time for each method. As shown in Table 2, our method consistently outperformed the other methods and was able to register a pair of images in less than 2 seconds (when using a GPU).
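For reference, the two evaluation metrics can be computed as follows (a simplified sketch; `dice` and `tre` are illustrative helper names, not the authors' evaluation code):

```python
import numpy as np

def dice(seg_a, seg_b, eps=1e-8):
    # Dice overlap between two binary segmentation masks.
    inter = np.logical_and(seg_a, seg_b).sum()
    return 2.0 * inter / (seg_a.sum() + seg_b.sum() + eps)

def tre(landmarks_a, landmarks_b, spacing=(1.0, 1.0, 1.0)):
    # Mean Euclidean distance between corresponding landmarks, scaled
    # by the voxel spacing to give a physical (e.g. mm) distance.
    diff = (np.asarray(landmarks_a) - np.asarray(landmarks_b)) \
        * np.asarray(spacing, dtype=float)
    return np.linalg.norm(diff, axis=1).mean()
```

Dice is computed on the warped organ segmentations, while TRE is computed on the annotated anatomical landmark pairs.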
3.3 The effect of each deformation field
In order to validate the effectiveness of the deformation field fusion, we compare φ_o, φ_s and φ_f together with the corresponding warped images (shown in Fig. 6). The qualitative results show that φ_s from the unimodal stream alleviates the voxel-drift effect of the multimodal stream, while φ_o from the multimodal stream uses the original image textures to maintain fidelity and reduce the artificial features of the generated tMR image. The fused deformation field φ_f produces better alignment than either stream alone, which demonstrates the effectiveness of the joint learning step.
We proposed a fully unsupervised uni- and multi-modal stream network for CT-to-MR registration. Our method leverages both the CT-translated MR and the original CT images towards achieving the best registration result. Moreover, the registration network can be effectively trained with computationally efficient similarity metrics, without any ground-truth deformations. We evaluated the method on two clinical datasets, where it outperformed state-of-the-art methods in terms of accuracy and efficiency.
This project was supported by the National Institutes of Health (Grant No. R01EB025964, R01DK119269, and P41EB015898) and the Overseas Cooperation Research Fund of Tsinghua Shenzhen International Graduate School (Grant No. HW201808).
-  Armanious, K., Jiang, C., Abdulatif, S., Küstner, T., Gatidis, S., Yang, B.: Unsupervised medical image translation using cycle-medgan. In: 2019 27th European Signal Processing Conference (EUSIPCO). pp. 1–5. IEEE (2019)
-  Avants, B.B., Epstein, C.L., Grossman, M., Gee, J.C.: Symmetric diffeomorphic image registration with cross-correlation: Evaluating automated labeling of elderly and neurodegenerative brain. Medical image analysis 12(1), 26–41 (2008)
-  Avants, B.B., Tustison, N.J., Song, G., Cook, P.A., Klein, A., Gee, J.C.: A reproducible evaluation of ants similarity metric performance in brain image registration. NeuroImage 54, 2033–2044 (2011)
-  Balakrishnan, G., Zhao, A., Sabuncu, M.R., Guttag, J., Dalca, A.V.: An unsupervised learning model for deformable medical image registration. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 9252–9260 (2018)
-  Cao, X., Yang, J., Zhang, J., Nie, D., Kim, M., Wang, Q., Shen, D.: Deformable image registration based on similarity-steered cnn regression. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 300–308. Springer (2017)
-  Cao, X., Yang, J., Wang, L., Xue, Z., Wang, Q., Shen, D.: Deep learning based inter-modality image registration supervised by intra-modality similarity. In: International Workshop on Machine Learning in Medical Imaging. pp. 55–63. Springer (2018)
-  Dice, L.R.: Measures of the amount of ecologic association between species. Ecology 26(3), 297–302 (1945)
-  Heinrich, M.P., Jenkinson, M., Bhushan, M., Matin, T., Gleeson, F.V., Brady, M., Schnabel, J.A.: Mind: Modality independent neighbourhood descriptor for multi-modal deformable registration. Medical image analysis 16(7), 1423–1435 (2012)
-  Hiasa, Y., Otake, Y., Takao, M., Matsuoka, T., Takashima, K., Carass, A., Prince, J.L., Sugano, N., Sato, Y.: Cross-modality image synthesis from unpaired data using cyclegan. In: International workshop on simulation and synthesis in medical imaging. pp. 31–41. Springer (2018)
-  Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition. pp. 2366–2369. IEEE (2010)
-  Hu, X., Kang, M., Huang, W., Scott, M.R., Wiest, R., Reyes, M.: Dual-stream pyramid registration network. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 382–390. Springer (2019)
-  Hu, Y., Modat, M., Gibson, E., Ghavami, N., Bonmati, E., Moore, C.M., Emberton, M., Noble, J.A., Barratt, D.C., Vercauteren, T.: Label-driven weakly-supervised learning for multimodal deformable image registration. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). pp. 1070–1074. IEEE (2018)
-  Hu, Y., Modat, M., Gibson, E., Li, W., Ghavami, N., Bonmati, E., Wang, G., Bandula, S., Moore, C.M., Emberton, M., et al.: Weakly-supervised convolutional neural networks for multimodal image registration. Medical image analysis 49, 1–13 (2018)
-  Jaderberg, M., Simonyan, K., Zisserman, A., et al.: Spatial transformer networks. In: Advances in neural information processing systems. pp. 2017–2025 (2015)
-  Kuang, D., Schmah, T.: Faim–a convnet method for unsupervised 3d medical image registration. In: International Workshop on Machine Learning in Medical Imaging. pp. 646–654. Springer (2019)
-  Liu, C., Ma, L., Lu, Z., Jin, X., Xu, J.: Multimodal medical image registration via common representations learning and differentiable geometric constraints. Electronics Letters 55(6), 316–318 (2019)
-  Mahapatra, D., Ge, Z., Sedai, S., Chakravorty, R.: Joint registration and segmentation of xray images using generative adversarial networks. In: International Workshop on Machine Learning in Medical Imaging. pp. 73–80. Springer (2018)
-  Qin, C., Shi, B., Liao, R., Mansi, T., Rueckert, D., Kamen, A.: Unsupervised deformable registration for multi-modal images via disentangled representations. In: International Conference on Information Processing in Medical Imaging. pp. 249–261. Springer (2019)
-  Sedghi, A., Luo, J., Mehrtash, A., Pieper, S., Tempany, C.M., Kapur, T., Mousavi, P., Wells III, W.M.: Semi-supervised image registration using deep learning. In: Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling. vol. 10951, p. 109511G. International Society for Optics and Photonics (2019)
-  Tanner, C., Ozdemir, F., Profanter, R., Vishnevsky, V., Konukoglu, E., Goksel, O.: Generative adversarial networks for mr-ct deformable image registration. arXiv preprint arXiv:1807.07349 (2018)
-  de Vos, B.D., Berendsen, F.F., Viergever, M.A., Sokooti, H., Staring, M., Išgum, I.: A deep learning framework for unsupervised affine and deformable image registration. Medical image analysis 52, 128–143 (2019)
-  Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4), 600–612 (2004)
-  Wei, D., Ahmad, S., Huo, J., Peng, W., Ge, Y., Xue, Z., Yap, P.T., Li, W., Shen, D., Wang, Q.: Synthesis and inpainting-based mr-ct registration for image-guided thermal ablation of liver tumors. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 512–520. Springer (2019)
-  West, J.B., Fitzpatrick, J.M., Wang, M.Y., Dawant, B.M., Maurer Jr, C.R., Kessler, R.M., Maciunas, R.J., Barillot, C., Lemoine, D., Collignon, A.M., et al.: Comparison and evaluation of retrospective intermodality image registration techniques. In: Medical Imaging 1996: Image Processing. vol. 2710, pp. 332–347. International Society for Optics and Photonics (1996)
-  Yang, H., Sun, J., Carass, A., Zhao, C., Lee, J., Xu, Z., Prince, J.: Unpaired brain mr-to-ct synthesis using a structure-constrained cyclegan. In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pp. 174–182. Springer (2018)
-  Zhao, S., Lau, T., Luo, J., Chang, E.I.C., Xu, Y.: Unsupervised 3d end-to-end medical image registration with volume tweening network. IEEE journal of biomedical and health informatics (2019)
-  Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Cyclegan (2017), https://github.com/xhujoy/CycleGAN-tensorflow
-  Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE international conference on computer vision. pp. 2223–2232 (2017)