Deep Deformable Registration: Enhancing Accuracy by Fully Convolutional Neural Net

11/27/2016 ∙ by Sayan Ghosal, et al. ∙ University of Alberta

Deformable registration is ubiquitous in medical image analysis. Many deformable registration methods minimize sum of squared difference (SSD) as the registration cost with respect to deformable model parameters. In this work, we construct a tight upper bound of the SSD registration cost by using a fully convolutional neural network (FCNN) in the registration pipeline. The upper bound SSD (UB-SSD) enhances the original deformable model parameter space by adding a heatmap output from FCNN. Next, we minimize this UB-SSD by adjusting both the parameters of the FCNN and the parameters of the deformable model in coordinate descent. Our coordinate descent framework is end-to-end and can work with any deformable registration method that uses SSD. We demonstrate experimentally that our method enhances the accuracy of deformable registration algorithms significantly on two publicly available 3D brain MRI data sets.




1 Introduction

Image registration, or image alignment, is the process of overlaying two images taken at different time instants, from different viewpoints, or from different subjects in a common coordinate system. Image registration remains a significant tool in medical imaging applications [1]. The 3D medical image data to which registration is applied generally include Computed Tomography (CT), Cone-beam CT (CBCT), Magnetic Resonance Imaging (MRI), and Computer Aided Design (CAD) models of medical devices.

Among image registration methods, deformable registration is particularly important in neuroscience and clinical studies. The diffeomorphic demons [2] and log-domain diffeomorphic demons [3] algorithms are popular deformable registration methods. In these optimization-based methods, the deformable transformation parameters are iteratively optimized with respect to a scalar-valued cost (e.g., SSD) representing the quality of registration [4]. To impose smoothness on the solution, a regularization term is typically added to the registration cost function. These costs are non-convex in nature, so the optimization sometimes gets trapped in poor local minima. Different optimization algorithms are used depending on the cost function. In the Gauss-Newton method for minimizing SSD, a projective geometric deformation is used [5]; this method is sensitive to local minima. The Levenberg-Marquardt algorithm was used in [6] to minimize the difference in intensities of corresponding pixels; it interpolates between gradient descent and Gauss-Newton updates and accelerates convergence towards a local minimum. The combination of the Levenberg-Marquardt method and SSD is used in [7].

In order to improve the registration solution, that is, to find a better local minimum of the SSD cost, we propose a novel method that modifies the reference image by adding a heatmap (essentially another image) produced by a Fully Convolutional Neural Network (FCNN) with a skip architecture [8]. The modified reference image defines a tight upper bound on the SSD registration cost, which we refer to as UB-SSD. We then minimize the UB-SSD by adjusting both the parameters of the FCNN and the deformation parameters of the registration algorithm. We refer to the proposed method as deep deformable registration (DDR).

An FCNN is a type of deep learning architecture that has been successfully used for semantic segmentation [8]. In [9], a convolutional network was used to classify 1.2 million images into 1000 different classes. However, our proposed method (DDR) does not employ any learning; rather, it uses the FCNN to optimize the SSD registration cost. This follows a newer trend in computer vision and graphics, where deep learning tools are used purely for optimization rather than for learning. One prominent example is artistic style transfer [10].


Prior to our work, a convolutional network was used for image registration in [11], where the authors trained the parameters of a 2-layer convolutional network. The network was used to seek a hierarchical representation of each image, in which high-level features are inferred from the low-level network. The goal of that work was to learn feature vectors for better registration. In contrast, DDR focuses on finding a better solution during optimization, making use of end-to-end back-propagation through the deep learning architecture.

2 Proposed Method

In this section, we provide detailed descriptions of each component of our solution pipeline. We start with the overall registration framework. Then, we illustrate our FCNN architecture and define upper bound of the SSD cost. We end the section by explaining the registration module.

2.1 Deep Deformable Registration Framework

Throughout this paper the moving image is denoted by M and the fixed (or reference) image by F. The heatmap output of the fully convolutional neural network is denoted by H, and T denotes the deformation produced by a registration algorithm. The complete DDR framework is shown in Fig. 1.

Figure 1: Deep deformable registration framework.

In DDR, we have considered SSD as the registration cost between the moving and the fixed image. For simplicity, we omit any regularization term here. Hence, the SSD registration cost is as follows:

C_SSD(T) = Σ_x ( F(x) − M(T(x)) )²,     (1)

where the sum runs over all voxels x.
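As a concrete reference, the SSD cost in (1) takes only a few lines to compute. The following is an illustrative NumPy sketch; the function name `ssd_cost` is ours, not from the paper:

```python
import numpy as np

def ssd_cost(fixed, moving_warped):
    """SSD in (1): sum of squared voxel differences between the fixed
    image and the moving image warped by the current deformation."""
    diff = np.asarray(fixed, dtype=np.float64) - np.asarray(moving_warped, dtype=np.float64)
    return float(np.sum(diff ** 2))
```

The warping M(T(x)) is assumed to have been applied already, so the function compares two same-shaped arrays.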
In DDR, the FCNN is followed by a non-linear function g, as shown in Fig. 1. A suitable design of g ensures that the modified cost is an upper bound of the original cost (UB-SSD):

C_UB(T, H) = Σ_x ( F(x) + g(H(x)) − M(T(x)) )² ≥ C_SSD(T).     (2)
In order to minimize the UB-SSD given in (2), back-propagation is applied. Let δ_r denote the error that back-propagates from the registration module and δ_n the error of the non-linear module. The output of the FCNN is the heatmap H, which is modified by the non-linearity g to

H' = g(H).     (3)

This modified heatmap H' is added voxel-wise to the fixed image as a distortion.

In DDR, the registration module minimizes the UB-SSD, which in turn ensures minimization of the original SSD cost. The DDR framework works in an iterative coordinate descent manner, alternating between the following two steps until convergence: (a) fix H and optimize the deformable parameters T, and (b) fix T and optimize the heatmap H by back-propagation. Thus, the DDR framework works in an end-to-end fashion. The error signals δ_r and δ_n are as follows:

δ_r = 2 ( F + g(H) − M∘T ),     (4)

δ_n = δ_r · g'(H),     (5)

where M∘T denotes the moving image warped by the deformation and · is voxel-wise multiplication.
Thus, the DDR framework enlarges the space of optimization parameters from T alone to the joint space of T and H (or H'). So, when the registration optimizer is stuck at a local minimum of the SSD cost, the alternating coordinate descent finds a better minimum in the joint space, and the registration proceeds because of the upper bound property of the cost function. The decrease of UB-SSD and SSD over iterations of the coordinate descent is shown in Fig. 2 for a registration example.
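The alternating scheme above can be sketched as follows. Here `register` and `update_heatmap` are hypothetical callbacks standing in for the demons step (a) and the FCNN back-propagation step (b), which the paper implements with an actual network; this is a structural sketch, not the authors' implementation:

```python
import numpy as np

def ub_ssd(fixed, g_of_h, moving_warped):
    # UB-SSD in (2): SSD against the modified reference F + g(H)
    return float(np.sum((fixed + g_of_h - moving_warped) ** 2))

def coordinate_descent(fixed, moving, register, update_heatmap, n_iters=10):
    """Alternate step (a), registration with g(H) held fixed, and
    step (b), a heatmap update with the deformation held fixed."""
    g_of_h = np.zeros_like(fixed)  # g(H) starts at zero, so UB-SSD equals SSD
    warped = moving.copy()
    for _ in range(n_iters):
        warped = register(fixed + g_of_h, moving)       # step (a)
        g_of_h = update_heatmap(fixed, warped, g_of_h)  # step (b)
    return warped, g_of_h
```

In a real pipeline, `register` would run a demons iteration against the modified reference and `update_heatmap` would back-propagate δ_n through the FCNN.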

Figure 2: Original cost and UB-cost vs iterations.

2.2 Tight UB-SSD

In DDR, (2) serves as an upper bound to (1). Expanding both costs, this condition requires, at every voxel,

g(H) [ g(H) + 2 ( F − M∘T ) ] ≥ 0.     (6)
We ensure condition (6) by realizing g as a soft thresholding function with threshold t:

g(h) = sign(h) · max(|h| − t, 0).     (7)
Note that g is applied voxel-wise on the heatmap H. For a tight UB-SSD, condition (6) can be restated as follows:

0 ≤ C_UB(T, H) − C_SSD(T) ≤ ε,     (8)

where ε is a small positive number. The following simple algorithm ensures that, with the soft thresholding function g, condition (8) is met. Fig. 2 demonstrates that the UB-SSD is quite tight on the SSD cost.

1: Initialize the threshold t
2: Set stepsize to a very small number
3: loop: apply soft thresholding g with threshold t
4: if condition (8) is not met then
5: increase t by the stepsize
6: goto loop.
Algorithm 1 Soft thresholding algorithm
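One plausible reading of the soft thresholding construction (7) and of Algorithm 1, sketched in NumPy; the helper names and the exact search loop are our assumptions, not the paper's code:

```python
import numpy as np

def soft_threshold(h, t):
    """g in (7): voxel-wise soft thresholding with threshold t >= 0."""
    return np.sign(h) * np.maximum(np.abs(h) - t, 0.0)

def tighten_threshold(heatmap, fixed, moving_warped, eps, t=0.0, step=1e-3):
    """Grow t in small steps until UB-SSD - SSD <= eps, as in condition (8).
    A large enough t drives g(H) to zero, so the loop always terminates."""
    ssd = np.sum((fixed - moving_warped) ** 2)
    while True:
        g = soft_threshold(heatmap, t)
        ub = np.sum((fixed + g - moving_warped) ** 2)
        if ub - ssd <= eps:
            return t, g
        t += step
```

As t grows, the thresholded heatmap g(H) shrinks toward zero and the UB-SSD collapses onto the SSD, which is what makes the bound tight.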

2.3 FCNN Architecture

We have used VGG-net [12] to serve as the FCNN. This network contains both convolutional and deconvolutional layers. In the VGG-net, we decapitate the final classifier layer and convert all fully connected layers to convolutional layers. This is followed by multiple deconvolutional layers that bilinearly up-sample the coarse outputs to pixel-dense outputs. The convolutional part consists of multiple convolutional layers, ReLU layers, and max-pooling layers; the deconvolutional part consists of deconvolutional layers. We skip multiple layers and fuse them with the deconvolved layers to introduce local appearance information. A typical skip architecture used in our module is shown in detail in Fig. 3.


Figure 3: FCNN with skip architecture.

2.4 Registration module

In order to register the moving image with the modified reference image F + g(H), we have used the demons [2, 3] method. To find the optimum transformation we optimize the following cost, with H held fixed:

min_T Σ_x ( F(x) + g(H(x)) − M(T(x)) )².     (9)
Due to the large number of parameters in the transformation field T, we use the limited-memory BFGS (L-BFGS) algorithm to find the optimum transformation field. This algorithm is computationally less expensive than BFGS when the number of optimization parameters is large: instead of updating and storing a full Hessian approximation at each iteration, L-BFGS stores the gradient and parameter updates from only the last few iterations and uses them to compute the search direction and step length. After finding the optimum transformation field, the error is back-propagated through the pipeline, which helps the FCNN find the distortion required to reduce the energy further.
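To illustrate the memory-limited idea, here is a minimal two-loop-recursion L-BFGS sketch on a generic gradient. It uses no line search, and the memory size `m`, step `lr`, and tolerances are our illustrative choices; a production implementation, such as the one used in the paper's pipeline, would be more robust:

```python
import numpy as np

def lbfgs(grad, x0, m=5, lr=1.0, iters=50):
    """Minimal limited-memory BFGS: keep only the last m (s, y) update
    pairs instead of a full Hessian approximation."""
    x = np.asarray(x0, dtype=np.float64)
    s_hist, y_hist = [], []  # parameter and gradient differences
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < 1e-10:
            break
        # Two-loop recursion: multiply g by the implicit inverse-Hessian estimate.
        q = g.copy()
        alphas = []
        for s, y in zip(reversed(s_hist), reversed(y_hist)):
            rho = 1.0 / np.dot(y, s)
            a = rho * np.dot(s, q)
            q -= a * y
            alphas.append((a, rho))
        if s_hist:  # initial Hessian scaling from the newest pair
            q *= np.dot(s_hist[-1], y_hist[-1]) / np.dot(y_hist[-1], y_hist[-1])
        for (a, rho), s, y in zip(reversed(alphas), s_hist, y_hist):
            b = rho * np.dot(y, q)
            q += (a - b) * s
        x_new = x - lr * q
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if np.dot(s, y) > 1e-12:  # keep only curvature-positive pairs
            s_hist.append(s)
            y_hist.append(y)
            if len(s_hist) > m:
                s_hist.pop(0)
                y_hist.pop(0)
        x, g = x_new, g_new
    return x
```

The history lists `s_hist`/`y_hist` are the "last few iterations" the text refers to; their length m, not the parameter count, bounds the memory cost.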

3 Results

3.1 Registration Algorithms

To establish the usefulness of DDR, the following two deformable registration algorithms, each with and without DDR, are used:

  1. DDR + Diffeomorphic demon

  2. Diffeomorphic demon

  3. DDR + Log-demon

  4. Log-demon.

In our setup, to register images using DDR + diffeomorphic demons we have used the FCN-16s network [8], and for registration using DDR + log-demon we have used the FCN-32s architecture for the FCNN.

3.2 Registration Evaluation Metrics

For performance measures, we have used the structural similarity index (SSIM) [13], peak signal-to-noise ratio (PSNR), and the SSD error. SSIM can capture local differences between images, whereas SSD and PSNR capture global differences.
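SSD and PSNR are straightforward to compute; a small NumPy sketch follows (SSIM is usually taken from a library implementation and is omitted here, and the 8-bit `peak` default is our assumption):

```python
import numpy as np

def mean_ssd(ref, img):
    """Mean squared voxel difference: a global error measure."""
    diff = np.asarray(ref, dtype=np.float64) - np.asarray(img, dtype=np.float64)
    return float(np.mean(diff ** 2))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means better registration."""
    mse = mean_ssd(ref, img)
    return float("inf") if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)
```

A lower mean SSD and a higher PSNR both indicate the registered image is closer to the reference.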

3.3 Experiments with IXI Dataset

The IXI dataset consists of 30 subjects, all of which are 3D volumetric data. Among them, we have randomly chosen one as our reference volume and registered the others using the aforementioned four algorithms.

Figure 4: Results on IXI Dataset with diffeomorphic demons. Top : SSIM vs no. of Subjects, Middle: PSNR value vs no. of subjects, Bottom: Mean SSD value vs no. of subjects.

The improvements in SSIM, PSNR, and SSD error with diffeomorphic demons are provided in Fig. 4. The improvement in registration with log-demon is shown in Fig. 5. These results are summarized in Table 1, where we show average percentage improvements in 3D volume registration. From these results we observe significant improvements gained by using DDR, especially in reducing the SSD cost, since the optimization directly targets SSD; the other measures, SSIM and PSNR, have also improved significantly. Fig. 8 shows a residual image (difference image) for the log-demon method with and without DDR. A significant reduction in residual image intensity is observed when DDR is used.

Figure 5: Results on IXI Dataset using log-demon. Top : SSIM vs no. of Subjects, Middle: PSNR value vs no. of subjects, Bottom: Mean SSD value vs no. of subjects.
Table 1: Improvement in registration for IXI dataset: average percentage
DDR with diffeomorphic demons    DDR with log-demons
5.0 19.0 11.0 5.3 21.0
Figure 6: Results on ADNI Dataset. Top : SSIM vs no. of Subjects, Middle: PSNR value vs no. of subjects, Bottom: Mean SSD value vs no. of subjects.

3.4 Experiments with ADNI Dataset

In these experiments, we have randomly selected 20 MR 3D volumes from the ADNI dataset. Among them, one is randomly selected as the template, and the rest are registered to it using the diffeomorphic demons and log-domain diffeomorphic demons algorithms. The SSIM, PSNR, and SSD values are calculated and plotted in Fig. 7 and Fig. 6. These improvements are summarized in Table 2. Once again, we observe significant gains in registration metrics using the proposed DDR.

Figure 7: Results on ADNI Dataset. Top : SSIM index vs no. of Subjects, Middle: PSNR value vs no. of subjects, Bottom: Mean SSD value vs no. of subjects.
Table 2: Improvement in registration for ADNI dataset: average percentage
DDR with diffeomorphic demons    DDR with log-demons
6.2 19.9 4.3 3 12.6
Figure 7: Results on ADNI Dataset. Top: SSIM index vs no. of subjects, Middle: PSNR value vs no. of subjects, Bottom: Mean SSD value vs no. of subjects.
Figure 8: Top-left: Registered image using log-demon; top-right: DDR+log-demon result. Bottom-left: Residual image from log-demon. Bottom-right: Residual using DDR+log-demon.

4 Conclusions and Future Work

We have proposed a novel method for improving deformable registration using a fully convolutional neural network. While previous studies have focused on learning features, here we have utilized an FCNN to help the registration algorithm optimize better. On two publicly available datasets, we show significant improvements in registration metrics. In the future, we intend to work with other diffeomorphic registration algorithms, such as HAMMER [14].


The authors acknowledge support from MITACS Globalink and the Department of Computing Science, University of Alberta.


  • [1] R. Liao, L. Zhang, Y. Sun, S. Miao, and C. Chefd’Hotel, “A review of recent advances in registration techniques applied to minimally invasive therapy,” IEEE Transactions on Multimedia, vol. 15, no. 5, pp. 983–1000, Aug 2013.
  • [2] T. Vercauteren, X. Pennec, A. Perchant, and N. Ayache, “Diffeomorphic demons: Efficient non-parametric image registration,” NeuroImage, vol. 45, no. 1, Supplement 1, pp. S61–S72, Mar 2009.
  • [3] H. Lombaert, L. Grady, X. Pennec, N. Ayache, and F. Cheriet, “Spectral log-demons: Diffeomorphic image registration with very large deformations,” International Journal of Computer Vision, vol. 107, pp. 254–271, 2014.
  • [4] A. Sotiras, C. Davatzikos, and N. Paragios, “Deformable medical image registration: A survey,” IEEE Transactions on Medical Imaging, vol. 32, no. 7, pp. 1153–1190, 2013.
  • [5] R. K. Sharma and M. Pavel, “Multisensor image registration,” Proceedings of the Society for Information Display, vol. XXVIII, pp. 951–954, 1997.
  • [6] H. S. Sawhney and R. Kumar, “True multi-image alignment and its applications to mosaicing and lens distortion correction,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, pp. 235–243, 1999.
  • [7] P. Thévenaz, U. E. Ruttimann, and M. Unser, “Iterative multiscale registration without landmarks,” Proceedings of the IEEE International Conference on Image Processing, pp. 228–231, 1995.
  • [8] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” IEEE Conference on Computer Vision and Pattern Recognition, 2015.
  • [9] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
  • [10] Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge, “A neural algorithm of artistic style,” arXiv:1508.06576, 2015.
  • [11] G. Wu, M. Kim, Q. Wang, Y. Gao, S. Liao, and D. Shen, “Unsupervised deep feature learning for deformable registration of MR brain images,” in Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2013, pp. 649–656.
  • [12] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” ICLR, 2015.
  • [13] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
  • [14] D. Shen, “Image registration by local histogram matching,” Pattern Recognition, vol. 40, pp. 1161–1172, 2007.