1 Introduction
Image registration, or image alignment, is the process of overlaying two images taken at different time instants, from different viewpoints, or from different subjects in a common coordinate system. Image registration has remained a significant tool in medical imaging applications [1]. The 3D data in medical images to which registration is applied generally include Computed Tomography (CT), Cone-beam CT (CBCT), Magnetic Resonance Imaging (MRI) and Computer Aided Design (CAD) models of medical devices.
Among image registration methods, deformable image registration is important in neuroscience and clinical studies. The diffeomorphic demons [2] and log-domain diffeomorphic demons [3] algorithms are popular deformable registration methods. In these optimization-based methods, the deformable transformation parameters are iteratively optimized over a scalar-valued cost, e.g., the sum of squared differences (SSD), representing the quality of registration [4]. To impose smoothness on the solution, a regularization term is typically added to the registration cost function. These costs are non-convex in nature, hence the optimization sometimes gets trapped in local minima. Different optimization algorithms are used depending on the cost function. In the Gauss-Newton method for minimizing SSD, a projective geometric deformation is used [5]; the method is sensitive to local minima. The Levenberg-Marquardt algorithm was used in [6] to minimize the difference in intensities of corresponding pixels. This method interpolates its parameter updates between gradient descent and Gauss-Newton and accelerates toward a local minimum. The combination of the Levenberg-Marquardt method and SSD is used in [7].
In order to improve the solution of registration, in other words, to find a better local minimum of the SSD cost, we propose a novel method that modifies the reference image by adding a heatmap (essentially another image) produced by a Fully Convolutional Neural Network (FCNN) with a skip architecture [8]. This modified reference image creates a tight upper bound on the SSD registration cost, which we refer to as UBSSD. We then minimize the UBSSD by adjusting the parameters of the FCNN as well as the deformation parameters of the registration algorithm. We refer to our proposed method as deep deformable registration (DDR).
The FCNN is a type of deep learning architecture that has been successfully used for semantic segmentation [8]. In [9], a convolutional network was used to classify 1.2 million images into 1000 different classes. However, our proposed method (DDR) does not employ any learning; rather, it uses the FCNN to optimize the SSD registration cost. This follows a newer trend in computer vision and graphics, where deep learning tools are used purely for optimization rather than for learning; one prominent example is artistic style transfer [10]. Prior to our work, a convolutional network was used for image registration in [11], where the authors trained the parameters of a 2-layer convolutional network. The network was used to seek a hierarchical representation of each image, where high-level features are inferred from the low-level network layers. The goal of their work was to learn feature vectors for better registration. In contrast, DDR focuses on finding a better solution during optimization, making use of end-to-end backpropagation and the deep learning architecture.
2 Proposed Method
In this section, we provide detailed descriptions of each component of our solution pipeline. We start with the overall registration framework. Then we describe our FCNN architecture and define the upper bound of the SSD cost. We end the section by explaining the registration module.
2.1 Deep Deformable Registration Framework
Throughout this paper the moving image is denoted by $M$ and the fixed (or reference) image by $F$. The output heatmap of the fully convolutional neural network is denoted by $H$, and $T$ represents the deformation produced by a registration algorithm. The complete DDR framework is shown in Fig. 1.
In DDR, we have considered SSD as the registration cost between the moving and the fixed image. For simplicity, we omit any regularization term here. Hence, the SSD registration cost is as follows:

$$E_{\mathrm{SSD}}(T) = \sum_{x} \big(F(x) - M(T(x))\big)^2 \qquad (1)$$
In DDR, the FCNN is followed by a nonlinear function $f$, as shown in Fig. 1. A suitable design of $f$ ensures an upper bound on the original cost (UBSSD):

$$E_{\mathrm{UB}}(T, H) = \sum_{x} \big(F(x) + f(H(x)) - M(T(x))\big)^2 \;\ge\; E_{\mathrm{SSD}}(T) \qquad (2)$$
In order to minimize the UBSSD given in (2), backpropagation is applied. $\delta_r$ is the error that backpropagates from the registration module and $\delta_f$ is the error of the nonlinear module. The output of the FCNN is the heatmap $H$, which the nonlinearity $f$ modifies to $G$:

$$G(x) = f(H(x)) \qquad (3)$$
This modified heatmap is added pixelwise to the fixed image as a distortion.
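As a quick numeric sanity check, the SSD cost of Eq. (1) and the distorted-reference cost of Eq. (2) can be sketched in a few lines of NumPy. The function names `ssd` and `ubssd` and the toy 1-D arrays are illustrative choices, not part of the method itself:

```python
import numpy as np

def ssd(fixed, warped_moving):
    """SSD registration cost of Eq. (1): sum of squared intensity differences."""
    return float(np.sum((fixed - warped_moving) ** 2))

def ubssd(fixed, heatmap_mod, warped_moving):
    """UBSSD of Eq. (2): SSD against the reference distorted by the modified heatmap."""
    return float(np.sum((fixed + heatmap_mod - warped_moving) ** 2))

# Toy 1-D "images": the bound holds pixelwise when the modified heatmap
# shares the sign of the residual fixed - warped_moving.
fixed = np.array([1.0, 2.0, 3.0])
warped = np.array([0.5, 2.5, 3.0])
residual = fixed - warped                # [0.5, -0.5, 0.0]
g = 0.1 * np.sign(residual)             # same sign as the residual
assert ubssd(fixed, g, warped) >= ssd(fixed, warped)
```

The assertion illustrates the upper-bound property: distorting the reference in the direction of the residual can only increase the pixelwise squared error.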
In DDR, the registration module minimizes the UBSSD, which ensures minimization of the original SSD cost. The DDR framework works in an iterative coordinate descent manner by alternating between the following two steps until convergence: (a) fix the heatmap $G$ and optimize the deformable parameters $T$, and (b) fix $T$ and optimize the heatmap by backpropagation. Thus, the DDR framework works in an end-to-end fashion. The error signals $\delta_r$ and $\delta_f$ are as follows:

$$\delta_r(x) = 2\big(F(x) + G(x) - M(T(x))\big) \qquad (4)$$

and

$$\delta_f(x) = \delta_r(x)\, f'(H(x)) \qquad (5)$$
Thus, the DDR framework enlarges the space of optimization parameters from $T$ alone to the joint space of $T$ and $H$ (or $G$). So, when the registration optimizer is stuck at a local minimum of $E_{\mathrm{SSD}}$, the alternating coordinate descent finds a better minimum in the joint space, and the registration proceeds because of the upper bound property of the cost function. The decrease of UBSSD and SSD over iterations of the coordinate descent is shown in Fig. 2 for a registration example.
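The alternating scheme can be sketched on a toy 1-D problem. The shift-only "deformation", the line-search step for it, and the gradient step size on the heatmap are simplifying assumptions for illustration, not the demons optimizer or the FCNN of the actual pipeline:

```python
import numpy as np

# Toy 1-D coordinate descent in the spirit of DDR: alternate between a
# "deformation" (here a single shift) and a free per-pixel distortion g.
x = np.linspace(0, 2 * np.pi, 64)
moving = np.sin(x)
fixed = np.sin(x - 0.3)

def warp(img, shift):
    # Shift the image by linear interpolation (stand-in for a deformation).
    return np.interp(x - shift, x, img)

def ub_cost(shift, g):
    # UBSSD-style cost: distorted reference vs. warped moving image.
    return np.sum((fixed + g - warp(moving, shift)) ** 2)

shift, g = 0.0, np.zeros_like(fixed)
for it in range(50):
    # (a) fix g, refine the deformation by a crude 1-D line search
    candidates = shift + np.linspace(-0.05, 0.05, 21)
    shift = min(candidates, key=lambda s: ub_cost(s, g))
    # (b) fix the deformation, take a gradient step on the distortion
    grad_g = 2 * (fixed + g - warp(moving, shift))
    g -= 0.05 * grad_g

print(ub_cost(0.0, np.zeros_like(fixed)), "->", ub_cost(shift, g))
```

Both steps are monotone (the line search includes the current shift, and the gradient step shrinks the residual), so the joint cost decreases over iterations, mirroring the behavior reported in Fig. 2.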
2.2 Tight UBSSD
The upper bound in (2) holds if, at every pixel $x$,

$$\big(F(x) + G(x) - M(T(x))\big)^2 \;\ge\; \big(F(x) - M(T(x))\big)^2 \qquad (6)$$

We ensure condition (6) by realizing $f$ as a soft thresholding function with threshold $\tau$:

$$f(h) = \operatorname{sign}(h)\,\max(|h| - \tau,\, 0) \qquad (7)$$

Note that $f$ is applied pixelwise on the heatmap $H$. For a tight UBSSD, condition (6) can be restated, after expanding the squares, as:

$$G(x)\big(G(x) + 2\,(F(x) - M(T(x)))\big) \;\ge\; 0 \qquad (8)$$

The bound is tight when the left-hand side of (8) stays close to zero; the shrinkage in (7) keeps $|G(x)|$, and hence this gap, small.
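A minimal NumPy sketch of the soft thresholding nonlinearity (the function name `soft_threshold` and the sample values are ours):

```python
import numpy as np

def soft_threshold(h, tau):
    """Soft thresholding: shrink |h| by tau, zeroing entries smaller than tau."""
    return np.sign(h) * np.maximum(np.abs(h) - tau, 0.0)

h = np.array([-1.5, -0.2, 0.0, 0.3, 2.0])
g = soft_threshold(h, tau=0.5)
# Values shrink toward zero: [-1, 0, 0, 0, 1.5]
```

Because every output magnitude is reduced by $\tau$ (or set to zero), the distortion added to the reference image stays small, which is what keeps the upper bound tight.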
2.3 FCNN Architecture
We have used VGG-net [12] to serve as the FCNN. This network contains both convolutional and deconvolutional layers. In the VGG-net we decapitate the final classifier layer and convert all fully connected layers to convolutional layers. This is followed by multiple deconvolutional layers that bilinearly upsample the coarse outputs to pixel-dense outputs. The convolutional part consists of multiple convolutional layers, ReLU layers and max-pooling layers. The deconvolutional part consists of deconvolutional layers. We skip multiple layers and fuse them with the deconvolved layers to introduce local appearance information. A typical skip architecture used in our module is shown in detail in Fig. 3.
2.4 Registration Module
In order to register the moving image with the modified reference image, we have used the demons method [2, 3]. To find the optimum transformation we optimize the following cost:

$$T^{*} = \arg\min_{T} \sum_{x} \big(F(x) + G(x) - M(T(x))\big)^2 \qquad (9)$$
Due to the large number of parameters in the transformation field $T$, we use the limited-memory BFGS (L-BFGS) algorithm to find the optimum transformation field. This algorithm is computationally less expensive than BFGS when the number of optimization parameters is large: instead of building and storing the full Hessian approximation at every iteration, L-BFGS keeps the updates from only the last few iterations and uses them to compute the search direction and step length. After finding the optimum transformation field, the error $\delta_r$ is backpropagated through the pipeline, which helps the FCNN find the distortion needed to reduce the energy further.
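The idea can be sketched with SciPy's L-BFGS-B implementation on a toy 1-D displacement field. The Gaussian test images, the smoothness weight, and the interpolation-based warp are illustrative assumptions, not the paper's demons setup:

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1-D registration: align two shifted Gaussians via a free
# per-pixel displacement field u, optimized with L-BFGS-B.
x = np.arange(32, dtype=float)
moving = np.exp(-0.5 * ((x - 18) / 4.0) ** 2)
fixed = np.exp(-0.5 * ((x - 14) / 4.0) ** 2)

def cost(u):
    # SSD under displacement field u, plus a small smoothness penalty
    warped = np.interp(x + u, x, moving)
    return np.sum((fixed - warped) ** 2) + 0.1 * np.sum(np.diff(u) ** 2)

u0 = np.zeros_like(x)
res = minimize(cost, u0, method="L-BFGS-B")
print(cost(u0), "->", res.fun)
```

L-BFGS-B here keeps only a short history of gradient and position updates (its limited "memory") rather than a dense 32x32 Hessian, which is what makes the approach tractable when the field has millions of voxels.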
3 Results
3.1 Registration Algorithms
To establish the usefulness of DDR, the following two deformable registration algorithms, each with and without DDR, are used:
- DDR + diffeomorphic demons
- Diffeomorphic demons
- DDR + log-demons
- Log-demons
In our setup, to register images using DDR + diffeomorphic demons, we have used the FCN-16s network [8], and for registration using DDR + log-demons, we have used the FCN-32s architecture for the FCNN.
3.2 Registration Evaluation Metrics
For performance measures, we have used the structural similarity index (SSIM) [13], peak signal-to-noise ratio (PSNR) and the SSD error. SSIM captures local differences between the images, whereas SSD and PSNR capture global differences.
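PSNR is straightforward to compute from the mean squared error; a small NumPy sketch follows, where the function name, the `max_val` default and the toy images are our choices:

```python
import numpy as np

def psnr(ref, img, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer agreement."""
    mse = np.mean((ref - img) ** 2)
    return 20 * np.log10(max_val) - 10 * np.log10(mse)

ref = np.zeros((8, 8))
img = np.full((8, 8), 0.1)   # uniform 0.1 error -> MSE = 0.01
print(psnr(ref, img))        # approximately 20 dB
```

For SSIM, a standard off-the-shelf implementation such as scikit-image's `skimage.metrics.structural_similarity` can be used.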
3.3 Experiments with IXI Dataset
The IXI dataset (http://biomedic.doc.ic.ac.uk/brain-development/index.php?n=Main.Datasets) consists of 30 subjects, all 3D volumetric data. Among them, we have randomly chosen one as our reference volume and registered the others using the aforementioned four algorithms.
The improvements in SSIM, PSNR and SSD error with diffeomorphic demons are provided in Fig. 4. The improvement in registration with log-demons is shown in Fig. 5. These results are summarized in Table 1, where we show average percentage improvements in 3D volume registration. From these results we observe significant improvements gained by using DDR, especially in reducing the SSD cost, because the optimization directly targets SSD. However, the other measures, SSIM and PSNR, have also improved significantly. Fig. 8 shows a residual image (difference image) for the log-demons method with and without using DDR. A significant reduction in residual image intensity is observed when DDR is used.
Table 1: Average percentage improvement in 3D volume registration (IXI dataset).

                   DDR with diffeomorphic demons   DDR with log-demons
                   SSIM    PSNR    SSD             SSIM    PSNR    SSD
  Improvement (%)  9.2     5.0     19.0            11.0    5.3     21.0
3.4 Experiments with ADNI Dataset
In these experiments, we have randomly selected 20 MR 3D volumes from the ADNI dataset (http://adni.loni.ucla.edu/). Among them, one is randomly selected as the template, and the rest are registered with it using the diffeomorphic demons and log-domain diffeomorphic demons algorithms. The SSIM, PSNR and SSD values are calculated and plotted in Figs. 6 and 7. These improvements are summarized in Table 2. Once again, we observe significant gains in registration metrics using the proposed DDR.
Table 2: Average percentage improvement in 3D volume registration (ADNI dataset).

                   DDR with diffeomorphic demons   DDR with log-demons
                   SSIM    PSNR    SSD             SSIM    PSNR    SSD
  Improvement (%)  13.6    6.2     19.9            4.3     3.0     12.6
4 Conclusions and Future Work
We have proposed a novel method for improving deformable registration using a fully convolutional neural network. While previous studies have focused on learning features, here we have utilized the FCNN to help optimize a registration algorithm better. On two publicly available datasets, we show that the improvements in registration metrics are significant. In the future, we intend to work with other deformable registration algorithms, such as HAMMER [14].
Acknowledgments
The authors acknowledge support from MITACS Globalink and the Department of Computing Science, University of Alberta.
References
 [1] R. Liao, L. Zhang, Y. Sun, S. Miao, and C. Chefd'Hotel, "A review of recent advances in registration techniques applied to minimally invasive therapy," IEEE Transactions on Multimedia, vol. 15, no. 5, pp. 983–1000, Aug. 2013.
 [2] T. Vercauteren, X. Pennec, A. Perchant, and N. Ayache, "Diffeomorphic demons: Efficient non-parametric image registration," NeuroImage, vol. 45, no. 1, Supplement 1, pp. S61–S72, Mar. 2009.
 [3] H. Lombaert, L. Grady, X. Pennec, N. Ayache, and F. Cheriet, "Spectral log-demons: Diffeomorphic image registration with very large deformations," International Journal of Computer Vision, vol. 107, pp. 254–271, 2014.
 [4] A. Sotiras, C. Davatzikos, and N. Paragios, "Deformable medical image registration: A survey," IEEE Transactions on Medical Imaging, vol. 32, no. 7, pp. 1153–1190, 2013.
 [5] R. K. Sharma and M. Pavel, "Multisensor image registration," Proceedings of the Society for Information Display, vol. XXVIII, pp. 951–954, 1997.
 [6] H. S. Sawhney and R. Kumar, "True multi-image alignment and its applications to mosaicing and lens distortion correction," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, pp. 235–243, 1999.
 [7] P. Thévenaz, U. E. Ruttimann, and M. Unser, "Iterative multiscale registration without landmarks," Proceedings of the IEEE International Conference on Image Processing, pp. 228–231, 1995.
 [8] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," IEEE Conference on Computer Vision and Pattern Recognition, 2015.
 [9] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
 [10] L. A. Gatys, A. S. Ecker, and M. Bethge, "A neural algorithm of artistic style," arXiv:1508.06576, 2015.
 [11] G. Wu, M. Kim, Q. Wang, Y. Gao, S. Liao, and D. Shen, "Unsupervised deep feature learning for deformable registration of MR brain images," pp. 649–656, 2013.
 [12] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," ICLR, 2015.
 [13] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
 [14] D. Shen, "Image registration by local histogram matching," Pattern Recognition, vol. 40, pp. 1161–1172, 2007.