
An Auto-Context Deformable Registration Network for Infant Brain MRI

05/19/2020
by   Dongming Wei, et al.
Shanghai Jiao Tong University
University of North Carolina at Chapel Hill

Deformable image registration is fundamental to longitudinal and population analysis. Geometric alignment of the infant brain MR images is challenging, owing to rapid changes in image appearance in association with brain development. In this paper, we propose an infant-dedicated deep registration network that uses the auto-context strategy to gradually refine the deformation fields to obtain highly accurate correspondences. Instead of training multiple registration networks, our method estimates the deformation fields by invoking a single network multiple times for iterative deformation refinement. The final deformation field is obtained by the incremental composition of the deformation fields. Experimental results in comparison with state-of-the-art registration methods indicate that our method achieves higher accuracy while at the same time preserves the smoothness of the deformation fields. Our implementation is available online.



Code Repositories

ACTA-Reg-Net

An Auto-Context based Tissue-Aware Deformable Registration Network (ACTA-Reg-Net) for Infant Brain MR Images



1 Introduction

Deformable image registration [1, 2] establishes anatomical correspondences and is fundamental to longitudinal and population image analysis. Accurate registration of infant brain MRI is significantly more challenging than that of adult brains, owing to rapid shape and appearance changes associated with dynamic brain development. In the first year of life, the overall brain volume doubles to about 65% of the adult brain volume [3]. Over this span, gray matter (GM) develops more rapidly (108%–149%) than white matter (WM) (11%), exhibiting significant increases in cortical thickness and surface area.

Existing registration methods, such as SyN [4], diffeomorphic Demons [5], and NiftyReg [6, 7], are based on iterative optimization and typically take a long time. Recently, deep-learning-based image registration methods [8] have been shown to predict deformations in a short time with high accuracy. A registration network (Reg-Net) can be trained in an unsupervised manner with carefully designed metrics [9].

It is, however, not straightforward to balance image matching similarity against deformation smoothness. Balakrishnan et al. [9] proposed to train the network with a loss function comprising both image similarity and the L2-norm of the deformation field. This method can get trapped in local optima when coping with large deformations through a global regularizer. Moreover, manual fine-tuning is required in the training stage to decide the weight of the regularization term. To overcome these limitations, Dalca et al. [10] implemented scaling-and-squaring-based integration to estimate a velocity field instead of directly predicting the deformation field. Hu et al. [11] applied a discriminator network to gauge the smoothness of the output deformation; as the discriminator requires additional data for training, the applicability of the method can be limited. Several other works [12, 13] train cascaded Reg-Nets to improve learning capacity, but at the expense of much more GPU memory due to the large number of parameters.

In this work, we propose an auto-context deformable registration network (AC-Reg-Net) for infant brain MRI. Instead of using cascaded training, AC-Reg-Net estimates the deformation fields by invoking a single network multiple times for incremental refinement. The final deformation field is obtained by composing all the incremental deformations. AC-Reg-Net thus functions in the spirit of auto-context modeling [14].

2 Methods

Figure 1: (a) Auto-context registration network (AC-Reg-Net). Deformation fields are color coded with red, green, and blue, representing deformation in the left-right, anterior-posterior, and inferior-superior directions, respectively. Black indicates zero displacement. (b) The architecture of the registration network (Reg-Net).

AC-Reg-Net, illustrated in Fig. 1, aims to progressively refine the deformation fields based on the ‘context’ provided by prior estimates of the deformation. AC-Reg-Net consists of a basic deformable registration network (Reg-Net in the figure) and a spatial transformer [15]. The Reg-Net outputs smooth and incremental deformation fields. In our implementation, Reg-Net is trained following a tissue-aware topology-preserving metric based on tissue segmentation maps [16]. The spatial transformer, inspired by [15], resamples the moving tissue segmentation map based on the estimated deformation field. The Reg-Net and spatial transformer are invoked iteratively in a manner resembling auto-context modeling [14].

2.1 Auto-Context Framework

The auto-context strategy (Fig. 1(a)) consists of the following steps:

  1. The moving and fixed tissue segmentation map pair {S_m, S_f} is fed to the pre-trained Reg-Net to obtain the initial deformation field φ_1 and the warped moving tissue segmentation map S_m ∘ φ_1.

  2. The new segmentation map pair {S_m ∘ φ_1, S_f} is fed into the same Reg-Net to get the residual deformation field φ_2 and the warped moving tissue segmentation map S_m ∘ (φ_2 ∘ φ_1).

  3. The previous two steps are repeated K times to obtain the final warped moving tissue segmentation map S_m ∘ φ, with the final deformation field given by φ = φ_K ∘ ⋯ ∘ φ_1.

Note that, unlike [13], we avoid error accumulation of repeated segmentation map resampling by composing the deformation fields before warping the moving tissue segmentation map in each iteration.
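The compose-then-warp loop above can be sketched with dense displacement fields. The following is a minimal NumPy/SciPy illustration, not the paper's implementation: `reg_net` is a hypothetical stand-in for the pre-trained Reg-Net, and displacement fields of shape (3, D, H, W) are assumed.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, disp, order=1):
    """Warp a 3D image by a displacement field disp of shape (3, D, H, W).

    For label maps, order=0 (nearest neighbor) would typically be used.
    """
    grid = np.indices(image.shape).astype(np.float64)  # identity sampling grid
    return map_coordinates(image, grid + disp, order=order, mode='nearest')

def compose(disp_outer, disp_inner):
    """Compose displacement fields: u(x) = u_inner(x) + u_outer(x + u_inner(x))."""
    coords = np.indices(disp_inner.shape[1:]).astype(np.float64) + disp_inner
    warped_outer = np.stack([
        map_coordinates(disp_outer[c], coords, order=1, mode='nearest')
        for c in range(3)])
    return disp_inner + warped_outer

def auto_context_register(reg_net, moving_seg, fixed_seg, n_iters=5):
    """Refine the deformation iteratively with a single pre-trained network.

    reg_net(warped, fixed) -> incremental displacement field (hypothetical API).
    """
    total = np.zeros((3,) + moving_seg.shape)
    warped = moving_seg
    for _ in range(n_iters):
        inc = reg_net(warped, fixed_seg)   # residual deformation for this pass
        total = compose(inc, total)        # compose fields BEFORE warping ...
        warped = warp(moving_seg, total)   # ... then warp the ORIGINAL map once
    return warped, total
```

Warping the original moving map with the composed field in every iteration, rather than re-resampling the previously warped map, is what avoids the accumulation of interpolation error.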

2.2 Deformable Registration Network

The Reg-Net outputs a smooth deformation field that, when composed with prior deformation estimates, yields increasingly accurate yet smooth alignment. Only a single pre-trained Reg-Net is used for all iterations. Our Reg-Net consists of trainable layers and an integration layer, as shown in Fig. 1(b).

2.2.1 Architecture –

The trainable layers form a 3D U-Net akin to VoxelMorph [9], with the input and output layers adapted to the task: the input layer takes the concatenated moving and fixed tissue segmentation maps as two channels, and the output layer produces the three-channel deformation field. Rather than predicting the deformation directly, Reg-Net estimates a stationary velocity field that is integrated to yield a smooth deformation field, as inspired by LDDMM [17]. The integration layer adopts the scaling and squaring operations described in [10, 18]. For training Reg-Net in an unsupervised manner, the spatial transformer is applied to obtain the warped moving tissue segmentation map (Fig. 1(b)).
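Scaling and squaring integrates a stationary velocity field by first scaling it down by 2^N and then composing the resulting small deformation with itself N times. A hedged NumPy sketch (illustrative only; not the Keras integration layer used in the paper):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def integrate_velocity(vel, n_steps=7):
    """Integrate a stationary velocity field (3, D, H, W) by scaling and squaring."""
    disp = vel / (2.0 ** n_steps)                 # scale: phi_(1/2^N) ≈ id + v / 2^N
    grid = np.indices(vel.shape[1:]).astype(np.float64)
    for _ in range(n_steps):                      # square: phi_(2t) = phi_t ∘ phi_t
        coords = grid + disp                      # sampling locations x + u(x)
        disp = disp + np.stack([                  # u(x) + u(x + u(x))
            map_coordinates(disp[c], coords, order=1, mode='nearest')
            for c in range(3)])
    return disp
```

Because each squaring step composes a small, near-identity deformation with itself, the integrated field tends to stay diffeomorphic even when the accumulated displacement is large.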

2.2.2 Loss Function –

The loss function used to train Reg-Net consists of a dissimilarity term and a regularizer. Similar to LDDMM [17], the basic loss function is defined as

L(S_f, S_m, φ) = −Sim(S_f, S_m ∘ φ) + λ Reg(φ),    (1)

where Sim(·, ·) can be any mono-modal similarity metric, implemented here as the localized normalized cross-correlation, and Reg(φ) is the L2-norm of the gradient of φ, as defined in VoxelMorph. This regularization alone, however, is insufficient to avoid folding in infant MRI registration, as our experiments demonstrate. We therefore additionally regularize the deformation via a tissue-aware Jacobian-determinant term, constraining the Jacobian determinant to be positive via

(2)

This regularization adaptively constrains the Jacobian determinant according to tissue type: for GM and WM, the minimum of the Jacobian determinant should be positive; for the background and CSF, the average Jacobian determinant should be close to 1. The overall loss function is defined as

(3)
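A tissue-aware Jacobian constraint of this kind admits several concrete forms. The following NumPy sketch shows one illustrative computation: a finite-difference Jacobian determinant, a folding penalty on GM/WM (negative determinants), and a volume-drift penalty on background/CSF (mean determinant away from 1). The specific penalty form `fold + drift` is our assumption for illustration, not the paper's exact formulation.

```python
import numpy as np

def jacobian_determinant(disp):
    """det(J_phi) for phi(x) = x + u(x), u of shape (3, D, H, W), via finite differences."""
    grads = np.stack([np.stack(np.gradient(disp[c]), axis=0) for c in range(3)])
    J = grads + np.eye(3)[:, :, None, None, None]   # J[i, j] = d phi_i / d x_j
    return (J[0, 0] * (J[1, 1] * J[2, 2] - J[1, 2] * J[2, 1])
          - J[0, 1] * (J[1, 0] * J[2, 2] - J[1, 2] * J[2, 0])
          + J[0, 2] * (J[1, 0] * J[2, 1] - J[1, 1] * J[2, 0]))

def tissue_aware_penalty(disp, gm_wm_mask, bg_csf_mask):
    """Illustrative penalty: punish folding (negative det) inside GM/WM and
    volume drift (mean det far from 1) inside background/CSF."""
    det = jacobian_determinant(disp)
    fold = np.maximum(0.0, -det[gm_wm_mask]).mean() if gm_wm_mask.any() else 0.0
    drift = abs(det[bg_csf_mask].mean() - 1.0) if bg_csf_mask.any() else 0.0
    return fold + drift
```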

2.2.3 Implementation –

The proposed Reg-Net was implemented in Keras and trained on a single NVIDIA Titan X GPU with 12 GB of memory. We used the ADAM optimizer with a fixed learning rate. The network was trained for 1500 epochs of 100 iterations each, randomly selecting one tissue segmentation map pair from the training dataset in each iteration; the entire training process took around 145 hours.
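The per-iteration random pair sampling described above can be sketched as a simple generator (the function name and structure are ours, for illustration):

```python
import random

def training_pairs(scans, epochs=1500, iters_per_epoch=100, seed=0):
    """Yield one random (moving, fixed) segmentation-map pair per training iteration."""
    rng = random.Random(seed)
    for _ in range(epochs * iters_per_epoch):
        moving, fixed = rng.sample(scans, 2)   # two distinct scans per iteration
        yield moving, fixed
```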

3 Results and Discussion

3.0.1 Dataset and Preprocessing –

The dataset consisted of longitudinal T1w and T2w images (acquired at 2 weeks, 3 months, 6 months, and 12 months after birth) of 47 healthy infant subjects enrolled as part of the anonymous study. The imaging parameters for the T1w MR images were: TR = 1900 ms, TE = 4.38 ms, flip angle = 7°, 144 sagittal slices, and 1 mm isotropic voxel resolution. The imaging parameters for the T2w MR images were: TR = 7380 ms, TE = 119 ms, and 64 sagittal slices. The dataset was pre-processed by an infant-dedicated pre-processing pipeline [19] to obtain tissue segmentation maps.

The number of scans per subject varies due to missed acquisitions. The training dataset comprised 56 longitudinal scans of 29 subjects; the 57 scans of the remaining 18 subjects were used for testing. A 12-month-old tissue segmentation map from the testing dataset was selected as the fixed image in the testing stage. All tissue segmentation maps were rigidly aligned to the fixed tissue segmentation map using FLIRT [20]. All tissue segmentation maps and intensity images were then resampled to a common matrix size and voxel resolution.

Figure 2: Results obtained with various registration methods, including warped intensity images overlaid with deformation fields, and warped tissue segmentation maps.

3.0.2 Evaluation Metrics –

We computed the Dice similarity coefficient (DSC) over the segmented GM and WM. The smoothness of the deformation field was evaluated using the ratio of folding points (RFP), i.e., the ratio of voxels with negative Jacobian determinant to the total number of voxels. A higher DSC with a smaller RFP signifies better performance: higher similarity with a more regular deformation field.
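Both metrics are straightforward to compute from label maps and the Jacobian determinant of the deformation; a minimal sketch:

```python
import numpy as np

def dice(seg_a, seg_b, label):
    """Dice similarity coefficient of one label between two segmentation maps."""
    a, b = (seg_a == label), (seg_b == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def ratio_of_folding_points(det_jacobian):
    """Fraction of voxels whose Jacobian determinant is negative."""
    return (det_jacobian < 0).sum() / det_jacobian.size
```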

Figure 3: The mean DSC of GM and WM over the testing dataset (top), and RFP of the deformation field (bottom) with different numbers of iterations in the auto-context framework.

3.1 Comparison with Existing Methods

We performed inter-subject registration over the testing dataset, randomly selecting a 12-month-old scan as the fixed image. All other segmentation maps in the testing dataset were then registered to this fixed image. We compared our method with SyN in the ANTs toolkit [4], diffeomorphic Demons [5], NiftyReg [6], and VoxelMorph [10]. The parameter details for ANTs, Demons, and NiftyReg are as follows:

  1. ANTs:

    ANTS 3 -m PR[fixed_image.nii, moving_image.nii, 1, 2] -O output -i -t SyN[0.5] -r Gauss[2,0] --continue-affine false --use-NN -G

    WarpImageMultiTransform 3 moving_labels.nii output_labels.nii -R fixed_image.nii outputWarp.nii outputAffine.txt --use-NN

  2. Demons:

    DemonsRegistration -f fixed_image.nii -m moving_image.nii -O output.mha -e -s 2 -i 30x20x10

    DemonsWarp -m moving_labels.nii -b output.mha -o output_labels.nii -I

  3. NiftyReg:

    reg_f3d -flo moving_image.nii -ref fixed_image.nii -res warped.nii -cpp output.nii

    reg_resample -ref fixed_image.nii -flo moving_labels.nii -res output_labels.nii -trans output.nii -inter 0

We trained VoxelMorph on the training dataset using its default parameters [10]. The quantitative results for the registration of 2 weeks to 12 months, 3 months to 12 months, and 6 months to 12 months are given in Table 1. AC-Reg-Net obtained a significant DSC improvement over the compared methods at all three time points. The results can be visually inspected in Fig. 2, confirming that AC-Reg-Net achieves accurate alignment with smoother deformation fields than the other methods.

Method     | 2wk→12mo DSC (GM / WM)  | 3mo→12mo DSC (GM / WM)  | 6mo→12mo DSC (GM / WM)  | RFP (%)
FLIRT      | — / —                   | — / —                   | — / —                   | 0
ANTs       | — / —                   | — / —                   | — / —                   | 0.0453
Demons     | — / —                   | — / —                   | — / —                   | 0.0006
NiftyReg   | — / —                   | — / —                   | — / —                   | 0.0034
VoxelMorph | — / —                   | — / —                   | — / —                   | 0.6915
Reg-Net    | — / —                   | — / —                   | — / 77.62±0.96          | 0.0193
AC-Reg-Net | 84.96±0.49 / 82.58±0.56 | 85.12±0.36 / 82.79±0.18 | 85.17±0.31 / 83.19±0.10 | 0.0122

Table 1: DSC (%) and RFP (%) over GM and WM by FLIRT, diffeomorphic Demons, ANTs, NiftyReg, VoxelMorph, Reg-Net, and AC-Reg-Net (— indicates unavailable values).

3.2 Benefits of Auto-Context Registration

We evaluated the effect of the number of iterations in the auto-context framework by comparing the mean GM and WM DSC and the RFP over the testing dataset. Fig. 3 shows that the results vary with the iteration number: the mean DSC of GM and WM improves sharply from the 1st to the 2nd iteration and plateaus after the 5th iteration, while the RFP remains very low for all iterations. Since the deformation field generated by AC-Reg-Net is smooth in each iteration, the composition of these smooth deformation fields results in a smooth final deformation field. We therefore fixed the iteration number for AC-Reg-Net at the point where performance plateaus.

4 Conclusion

This paper presented a deep registration framework for infant brain MRI. To counter appearance changes, our method, AC-Reg-Net, uses tissue segmentation maps for training. AC-Reg-Net is applied in an auto-context manner, iteratively leveraging context information to improve registration accuracy. The Jacobian regularizer constrains the estimated deformation fields to be topology-preserving. Experimental results validate the efficacy of AC-Reg-Net in registering MRI scans of infants at 2 weeks, 3 months, and 6 months of age to a 12-month-old scan, in terms of both accuracy and deformation regularity.

References

  • [1] Aristeidis Sotiras, Christos Davatzikos, and Nikos Paragios. Deformable medical image registration: A survey. IEEE transactions on medical imaging, 32(7):1153–1190, 2013.
  • [2] Hava Lester and Simon R Arridge. A survey of hierarchical non-linear medical image registration. Pattern recognition, 32(1):129–149, 1999.
  • [3] Rebecca C. Knickmeyer, Sylvain Gouttard, Chaeryon Kang, Dianne Evans, Kathy Wilber, J. Keith Smith, Robert M. Hamer, Weili Lin, Guido Gerig, and John H. Gilmore. A structural MRI study of human brain development from birth to 2 years. The Journal of Neuroscience, 28(47):12176–12182, 2008.
  • [4] Brian B Avants, Charles L Epstein, Murray Grossman, and James C Gee. Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Medical image analysis, 12(1):26–41, 2008.
  • [5] Tom Vercauteren, Xavier Pennec, Aymeric Perchant, and Nicholas Ayache. Diffeomorphic demons: Efficient non-parametric image registration. NeuroImage, 45(1):S61–S72, 2009.
  • [6] D. Rueckert, L.I. Sonoda, C. Hayes, D.L.G. Hill, M.O. Leach, and D.J. Hawkes. Nonrigid registration using free-form deformations: application to breast MR images. IEEE Transactions on Medical Imaging, 18(8):712–721, 1999.
  • [7] Marc Modat, David M. Cash, Pankaj Daga, Gavin P. Winston, John S. Duncan, and Sébastien Ourselin. Global image registration using a symmetric block-matching approach. Journal of medical imaging, 1(2):24003–24003, 2014.
  • [8] Grant Haskins, Uwe Kruger, and Pingkun Yan. Deep learning in medical image registration: a survey. Machine Vision and Applications, 31(1):8, 2020.
  • [9] Guha Balakrishnan, Amy Zhao, Mert R. Sabuncu, John Guttag, and Adrian V. Dalca. Voxelmorph: A learning framework for deformable medical image registration. IEEE Transactions on Medical Imaging, 38(8):1788–1800, 2019.
  • [10] Adrian V Dalca, Guha Balakrishnan, John Guttag, and Mert R Sabuncu. Unsupervised learning of probabilistic diffeomorphic registration for images and surfaces. Medical image analysis, 57:226–236, 2019.
  • [11] Yipeng Hu, Eli Gibson, Nooshin Ghavami, Ester Bonmati, Caroline M. Moore, Mark Emberton, Tom Vercauteren, J. Alison Noble, and Dean C. Barratt. Adversarial deformation regularization for training image registration neural networks. In 2018 Medical Image Computing and Computer Assisted Intervention, pages 774–782, 2018.
  • [12] Zhengyang Shen, Xu Han, Zhenlin Xu, and Marc Niethammer. Networks for joint affine and non-parametric image registration. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4224–4233, 2019.
  • [13] Shengyu Zhao, Yue Dong, Eric I-Chao Chang, and Yan Xu. Recursive cascaded networks for unsupervised medical image registration. In Proceedings of the IEEE International Conference on Computer Vision, pages 10600–10610, 2019.
  • [14] Zhuowen Tu and Xiang Bai. Auto-context and its application to high-level vision tasks and 3d brain image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(10):1744–1757, 2010.
  • [15] Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial transformer networks. In NIPS’15 Proceedings of the 28th International Conference on Neural Information Processing Systems, pages 2017–2025, 2015.
  • [16] Li Wang, Dong Nie, Guannan Li, Elodie Puybareau, Jose Dolz, Qian Zhang, Fan Wang, Jing Xia, Zhengwang Wu, Jia-Wei Chen, Kim-Han Thung, Toan Duc Bui, Jitae Shin, Guodong Zeng, Guoyan Zheng, Vladimir S. Fonov, Andrew Doyle, Yongchao Xu, Pim Moeskops, Josien P. W. Pluim, Christian Desrosiers, Ismail Ben Ayed, Gerard Sanroma, Oualid M. Benkarim, Adria Casamitjana, Veronica Vilaplana, Weili Lin, Gang Li, and Dinggang Shen. Benchmark on automatic six-month-old infant brain segmentation algorithms: The iSeg-2017 challenge. IEEE Transactions on Medical Imaging, 38(9):2219–2230, 2019.
  • [17] M. Faisal Beg, Michael I. Miller, Alain Trouvé, and Laurent Younes. Computing large deformation metric mappings via geodesic flows of diffeomorphisms. International Journal of Computer Vision, 61(2):139–157, 2005.
  • [18] John Ashburner. A fast diffeomorphic image registration algorithm. NeuroImage, 38(1):95–113, 2007.
  • [19] J.G. Sled, A.P. Zijdenbos, and A.C. Evans. A nonparametric method for automatic correction of intensity nonuniformity in MRI data. IEEE Transactions on Medical Imaging, 17(1):87–97, 1998.
  • [20] Mark Jenkinson and Stephen M. Smith. A global optimisation method for robust affine registration of brain images. Medical Image Analysis, 5(2):143–156, 2001.