
Test-Time Training for Deformable Multi-Scale Image Registration

Registration is a fundamental task in medical robotics and is often a crucial step for many downstream tasks such as motion analysis, intra-operative tracking and image segmentation. Popular registration methods such as ANTs and NiftyReg optimize an objective function for each pair of images from scratch, which is time-consuming for 3D and sequential images with complex deformations. Recently, deep learning-based registration approaches such as VoxelMorph have emerged and achieve competitive performance. In this work, we construct a test-time training framework for deep deformable image registration to improve the generalization ability of conventional learning-based registration models. We design multi-scale deep networks to consecutively model the residual deformations, which is effective for highly variable deformations. Extensive experiments validate the effectiveness of multi-scale deep registration with test-time training, based on the Dice coefficient for image segmentation and on mean square error (MSE) and normalized local cross-correlation (NLCC) for tissue dense tracking tasks.





I Introduction

Image registration tries to establish the correspondence between organs, tissues, landmarks, edges, or surfaces in different images, and it is critical to many clinical tasks such as tumor growth monitoring and surgical robotics [18]. Manual image registration is time-consuming, laborious and lacks reproducibility, which can potentially cause clinical disadvantages. Thus, automated registration is desired in many clinical practices. Generally, registration can be necessary to analyze motion from videos, auto-segment organs given atlases, and align pairs of images from different modalities, acquired at different times, from different viewpoints, or even from different patients. Designing a robust image registration method is therefore challenging due to the high variability of scenarios.

Traditional registration methods estimate the registration field by optimizing certain objective functions. Such a registration field can be modeled in several ways, e.g., elastic-type models [5, 38], free-form deformation [36], Demons [43], and statistical parametric mapping [1]. Beyond the deformation model, diffeomorphic transformations preserve topology, and many methods adopt them, such as large deformation diffeomorphic metric mapping [8], symmetric image normalization (SyN) [3] and diffeomorphic anatomical registration using exponential Lie algebra [2]. One limitation of these methods is that the optimization can be computationally expensive, especially for 3D images.

Deep learning-based registration methods have recently been emerging as a practicable alternative to conventional registration [34, 41, 46, 45, 9, 19, 28, 29, 30, 26]. These methods employ sparse or weak labels of the registration field, or conduct supervised learning purely based on ground-truth registration fields [11, 34, 10, 41, 44, 21]. Facilitated by the spatial transformer network [22], unsupervised deep learning-based registrations [32, 33, 27, 37, 17, 16, 23, 14, 47], such as VoxelMorph [6, 15], have been explored because they require no annotations during training. VoxelMorph has been further extended to diffeomorphic transformations and a Bayesian framework [13]. Adversarial similarity networks add an extra discriminator to model the appearance loss between the warped image and the fixed image, and use adversarial training to improve unsupervised registration [50, 16, 39, 20]. However, most of these registration methods only achieve accuracy comparable to traditional registration, albeit with potential speed advantages.

In this work, we design a novel test-time training framework for deep deformable image registration with a multi-scale parameterization of complicated deformations, to handle the large deformations and noise widely present in medical images. The framework, called self-supervision optimized multi-scale registration and illustrated in Fig. 1, is motivated by the fact that purely learning-based registrations generally cannot generalize well to noisy images with large and complicated deformations because of the domain shift between training data and test data [42]. More specifically, we propose a registration optimized both at training and at test time to tackle the generalization gap between training images and test images. Different from unsupervised registration [6], registration with test-time training [42] further tunes the deep network on each test image pair, which is vital for the success of noisy image registration, as shown in Figs. 3 and 4. Inspired by the improvement brought by multi-scale schemes in conventional registration [12], we further employ a multi-scale scheme to estimate accurate deformations, where we conduct test-time training with a U-Net [35] to model the residual registration field at each scale.

Fig. 1: Framework of multi-scale deep registration networks with test-time training at each scale.

Our main contributions are as follows. 1) We design a novel test-time training scheme for deep deformable registration to improve the generalization ability of learning-based registration. Test-time training with self-supervised optimization of the deep registration network yields accurate registration field estimation even for noisy images with large deformations, by eliminating the accuracy gap between estimation on the training set and on the test set. 2) We design a deep multi-scale registration based on an unsupervised learning framework to model large deformations. The multi-scale strategy estimates the residual registration field consecutively, which enforces a coarse-to-fine consistency of the morphing and provides a sequential optimization pathway that yields a much more accurate registration field.

II Method

II-A Notations and Framework

Let M and F be two images defined over an n-dimensional spatial domain Ω ⊂ ℝⁿ. In this paper, we focus on either n = 2 for 2D images such as X-ray and ultrasound, or n = 3 for volumetric images such as computed tomography (CT) or magnetic resonance imaging (MRI). For simplicity, we assume both M and F are gray-scale, containing a single channel. Our goal is to align M to F through a deformable transformation φ so that the anatomical features in the transformed M are aligned to those in F. To distinguish them, we call M and F the moving and fixed images, respectively.

Let φ be the registration field that maps coordinates of F to coordinates of M. The image registration problem [4, 5, 7] can be generally formulated as minimizing the anatomical difference between M∘φ (M warped by φ) and F, regularized by a smoothness constraint on the registration field,

    min_φ  L_recon(M∘φ, F) + λ L_smooth(φ),        (1)

where L_recon is a reconstruction loss measuring the dissimilarity between two images, and L_smooth measures the smoothness of the registration field. In this work, we assume φ = Id + u, with Id denoting the identity mapping, and model the displacement field u instead. Let u_i denote the i-th coordinate of the displacement field. The smoothness term is taken to be

    L_smooth(φ) = Σ_{p∈Ω} Σ_{i=1}^{n} ||∇u_i(p)||²

throughout this paper, with the gradients approximated by displacement differences between neighboring grid points in the actual implementation.
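The discretized smoothness term can be approximated with forward differences between neighboring grid points. A minimal NumPy sketch (the paper's own implementation is not shown; the array layout, a displacement field of shape (n, H, W) on a 2D grid, is an assumption):

```python
import numpy as np

def smoothness_loss(u):
    """Approximate sum_p sum_i ||grad u_i(p)||^2 with forward
    differences between neighboring grid points.
    u: displacement field of shape (n, H, W) for a 2D grid."""
    loss = 0.0
    for ui in u:  # each coordinate of the displacement field
        dy = np.diff(ui, axis=0)  # differences between vertical neighbors
        dx = np.diff(ui, axis=1)  # differences between horizontal neighbors
        loss += np.sum(dy ** 2) + np.sum(dx ** 2)
    return loss

# A constant displacement field is perfectly smooth.
flat = np.ones((2, 8, 8)) * 3.0
print(smoothness_loss(flat))  # 0.0
```

A linear ramp in one coordinate contributes a unit squared difference per neighboring pair, so the loss grows with the local variation of the field, which is exactly what the regularizer penalizes.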

II-B Deep Learning-Based Registration

We model the displacement field through a deep neural network, u = g_θ(M, F), receiving M and F as input and generating u through a network with weight parameters θ. Finding the registration field is thus expressed as a learning problem: identify the optimal parameters θ that minimize the loss function shown in Eq. (1). We employ a U-Net [35] with skip connections to estimate the displacement field u. The input to the network is the concatenation of M and F across the channel dimension. The U-Net configuration is illustrated in Fig. 1.

Given a point p, the registration field aligns the image feature F(p) in the fixed image to the image feature at location p + u(p) of the moving image. Because image values are only defined over a grid of integer points, we linearly interpolate the values of M at p + u(p) from the neighboring grid points through a spatial transformer network [22], which generates a warped moving image M∘φ according to the registration field φ. This formulation allows gradient calculation and back-propagation of errors during learning.
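The warping step can be illustrated with `scipy.ndimage.map_coordinates` as a stand-in for the spatial transformer's bilinear sampling (the paper uses a differentiable STN layer; this NumPy sketch only shows the sampling semantics, and the (2, H, W) field layout is an assumption):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(moving, u):
    """Warp a 2D moving image by a displacement field u of shape (2, H, W):
    the warped image at p takes the moving image's value at p + u(p),
    linearly interpolated from neighboring grid points."""
    H, W = moving.shape
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    coords = np.stack([yy + u[0], xx + u[1]])
    return map_coordinates(moving, coords, order=1, mode="nearest")

# An integer displacement reproduces a plain shift.
img = np.arange(16.0).reshape(4, 4)
shift = np.stack([np.ones((4, 4)), np.zeros((4, 4))])  # sample one row down
out = warp(img, shift)
```

With fractional displacements the same call blends the four surrounding grid values, which is the linear interpolation the text describes.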

Many metrics have been proposed to measure the anatomical similarity between the fixed image F and the warped moving image M∘φ, including mean squared error, mutual information, and cross-correlation. In this work, we use the negative normalized local cross-correlation as the reconstruction loss, which tends to be more robust for noisy images, although other metrics can be equally applied:

    NLCC(F, M∘φ) = Σ_{p∈Ω} [Σ_q (F(q) − F̄(p)) (M∘φ(q) − M̄(p))]² / ( Σ_q (F(q) − F̄(p))² · Σ_q (M∘φ(q) − M̄(p))² ),        (2)

where q is a location within a local neighborhood around p, and F̄(p) and M̄(p) are the local means of the fixed and warped images, respectively.
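The local means and cross terms can be computed efficiently with box filters. A NumPy/SciPy sketch of this VoxelMorph-style local cross-correlation (the window radius and the averaging over positions are assumptions; the paper's exact normalization is not shown):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def nlcc(fixed, warped, radius=4, eps=1e-8):
    """Normalized local cross-correlation over (2*radius+1)^2 windows,
    averaged over all positions; values near 1 mean the images agree
    locally up to affine intensity changes."""
    size = 2 * radius + 1
    mu_f = uniform_filter(fixed, size)
    mu_w = uniform_filter(warped, size)
    cross = uniform_filter(fixed * warped, size) - mu_f * mu_w
    var_f = uniform_filter(fixed * fixed, size) - mu_f ** 2
    var_w = uniform_filter(warped * warped, size) - mu_w ** 2
    cc = cross ** 2 / (var_f * var_w + eps)
    return cc.mean()

rng = np.random.default_rng(0)
a = rng.random((32, 32))
print(nlcc(a, a))  # close to 1 for identical images
```

The training loss is then the negative of this quantity, so gradient descent increases the local correlation between the fixed and warped images.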

II-C Test-Time Training for Registration

Given training and test data, previous neural registration methods take the standard paradigm of learning the parameters of the neural network by minimizing the loss function in Eq. (1) on the training set, and then deriving the displacement field on the test set with the learned parameters. The benefit of this approach is fast inference, but this comes at the cost of a significant performance reduction. The main cause is that each pair of medical images has its own unique characteristics, and it is difficult for learned models to generalize well to new test images. For medical images with low signal-to-noise ratios, such as ultrasound images, the performance gap between the training and test sets can be substantial.

We propose to use self-supervised optimization to improve the generalization accuracy of learning-based registration. Under this paradigm, the network parameters θ are further optimized based on both the training and the test image pairs,

    θ* = argmin_θ Σ_{(M,F)∈D} L_recon(M∘φ_θ, F) + λ L_smooth(φ_θ),        (3)

where D contains both the image pairs in the training set and the test image pairs, and λ is a regularization parameter balancing the trade-off between the reconstruction and smoothness losses.

Our approach aims to seek a middle ground between traditional optimization-based approaches and purely learning-based approaches. The neural-network-parameterized registration trained with stochastic optimization alleviates some of the key challenges of traditional optimization-based approaches, such as the high cost of optimization, long running times, and poor performance due to local optima [25], while at the same time improving the performance of purely learning-based approaches through further fine-tuning on test images.

II-D Multi-Scale Neural Registration

The registration field of medical images often contains both large- and small-scale displacement vectors throughout the spatial domain. This is most apparent in echocardiograms, where we use image registration to align temporally nearby frames to detect tissue or blood flow movements. In order to capture the displacement field at various scales, we propose a multi-scale scheme to parameterize the residual registration field. At each scale, a self-supervision optimized registration network takes the concatenation of the reconstructed image from the previous scale and the fixed image of the current scale as input. A U-Net is used to parameterize the residual registration field at each scale. The final registration field is calculated by fusing the residual registration fields across the different scales.

We choose a sequence of spatial scales s_1 < s_2 < ⋯ < s_K = 1 from coarse to fine to parameterize the neural registration field, with each scale modeled by a U-Net. We resize the images to different sizes for the different spatial scales; for instance, g_{θ_{s_k}} models the neural registration field at scale s_k. At each scale, we use self-supervised optimization to obtain the parameters of the corresponding U-Net.

At each scale s_k, the input to the neural registration field is the concatenation of the reconstructed image R_{s_k} and the fixed image F_{s_k}, where R_{s_k} is the reconstructed moving image obtained from the previous scale downsampled to scale s_k, and F_{s_k} is the fixed image downsampled to scale s_k. At the coarsest scale s_1, R_{s_1} is taken to be the moving image downsampled to scale s_1. Let φ_{s_k} denote the registration field obtained at scale s_k. (Note that φ_{s_k} covers the original spatial domain of the moving image, different from the residual field u_{s_k}, which is a residual mapping between down-sampled domains, i.e., the domain of F_{s_k}.) Then the reconstructed image from scale s_k is M∘φ_{s_k}, and R_{s_{k+1}} is the down-sampled version of this image at scale s_{k+1}. By convention, we take φ_{s_0} to be the identity map because s_1 is the coarsest scale, and R_{s_1} is the downsampled moving image.

More specifically, at scale s_k, we first resize the moving image and the fixed image to the current scale for data preparation as

    R_{s_k} = D_{s_k}(M∘φ_{s_{k-1}}),   F_{s_k} = D_{s_k}(F),

where R_{s_k} is the downsampled reconstructed image, D_{s_k} is the down-sampling operator to scale s_k, and M is the moving image. We employ a U-Net to model the residual registration field at scale s_k, u_{s_k} = g_{θ_{s_k}}(R_{s_k}, F_{s_k}), where θ_{s_k} denotes the parameters of the U-Net.
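The per-scale data preparation amounts to building a two-channel input from resized images. A sketch with `scipy.ndimage.zoom` standing in for the down-sampling operator (the channel-stacking layout and the zoom-based resizing are assumptions about the implementation):

```python
import numpy as np
from scipy.ndimage import zoom

def scale_inputs(recon_prev, fixed, s):
    """Prepare the registration network input at scale s: the previous
    scale's reconstruction and the fixed image, both resized to scale s
    and concatenated along a channel axis."""
    R = zoom(recon_prev, s, order=1)   # downsampled reconstructed image
    F = zoom(fixed, s, order=1)        # downsampled fixed image
    return np.stack([R, F])            # shape (2, s*H, s*W)

fixed = np.random.default_rng(1).random((64, 64))
moving = np.roll(fixed, 2, axis=0)
# At the coarsest scale, the "reconstruction" is just the moving image.
x = scale_inputs(moving, fixed, 1 / 8)
print(x.shape)  # (2, 8, 8)
```

Each U-Net then maps such a two-channel tensor to a residual displacement field at its own resolution.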

II-E Multi-Scale Registration Field Aggregation

After the test-time training, we obtain the optimal parameters of the neural network and calculate the residual registration field u_{s_k} for the reconstructed image of the previous scale s_{k-1}. We calculate the registration field φ_{s_k} for the moving image by aggregating the registration field φ_{s_{k-1}} of the previous scale and the intermediate/residual registration field u_{s_k}. For each pixel position p in the moving image M, we obtain the final position and the combined registration field by

    φ_{s_k}(p) = φ_{s_{k-1}}(p) + (1/s_k) · U(u_{s_k})(φ_{s_{k-1}}(p)),

where U is the linear interpolation operator for up-sampling (with the displacement magnitudes rescaled by 1/s_k accordingly), and we use linear interpolation to evaluate the up-sampled residual registration field at the non-integer point φ_{s_{k-1}}(p). We do not need to calculate the combined registration for the coarsest scale because the displacement field before the coarsest scale is predefined to be zero.
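The aggregation step above can be sketched in NumPy: upsample and rescale the coarse residual, then evaluate it at the positions produced by the previous field. The field layout (2, H, W) holding absolute coordinates and the use of `zoom`/`map_coordinates` are assumptions about the implementation:

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def aggregate(phi_prev, u_s, s):
    """Combine the previous scale's field phi_prev (full resolution,
    absolute coordinates, shape (2, H, W)) with the residual displacement
    u_s estimated at scale s (shape (2, s*H, s*W))."""
    H, W = phi_prev.shape[1:]
    # Upsample the residual to full resolution, rescaling magnitudes by 1/s.
    u_full = np.stack([zoom(u_s[i] / s, (H / u_s.shape[1], W / u_s.shape[2]),
                            order=1) for i in range(2)])
    # Evaluate the residual at the (non-integer) positions phi_prev(p).
    res = np.stack([map_coordinates(u_full[i], phi_prev, order=1,
                                    mode="nearest") for i in range(2)])
    return phi_prev + res

H = W = 16
yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
identity = np.stack([yy, xx]).astype(float)
u_coarse = np.ones((2, 8, 8))  # constant 1-pixel shift estimated at scale 1/2
phi = aggregate(identity, u_coarse, 0.5)
```

A constant 1-pixel shift at scale 1/2 becomes a constant 2-pixel shift at full resolution, matching the 1/s_k rescaling in the formula.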

III Results

We validate test-time training for multi-scale deformable registration, including ablation studies, on three datasets.

III-A Data

We employ a 3D hippocampus MRI dataset from the medical segmentation decathlon [40] to validate the proposed method for registration-based segmentation. We randomly split the dataset into 208 training images and 52 test images. There are two foreground categories, hippocampus head and hippocampus body. We re-sample the MR images to a common isotropic spacing. To reduce the discrepancy between the intensity distributions of the MR images, we calculate the mean and standard deviation of each volume and clip each volume at six standard deviations. Finally, we linearly transform each 3D image into a fixed intensity range. The image sizes are small.
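The intensity normalization described above can be sketched as follows; the target range [0, 1] is an assumption (the original range specification was not preserved), as is the per-volume min-max rescaling:

```python
import numpy as np

def preprocess(vol, n_std=6.0):
    """Clip a volume at n_std standard deviations around its mean,
    then linearly rescale intensities to [0, 1] (assumed target range)."""
    mu, sd = vol.mean(), vol.std()
    v = np.clip(vol, mu - n_std * sd, mu + n_std * sd)
    return (v - v.min()) / (v.max() - v.min() + 1e-8)

vol = np.random.default_rng(2).normal(100.0, 15.0, (32, 32, 32))
out = preprocess(vol)
```

Clipping before rescaling keeps rare extreme voxels from compressing the useful intensity range of the rest of the volume.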

We collect echocardiograms from 19 patients with 3,052 frames in total for myocardial tracking, and contrast echocardiograms from 71 patients with 11,462 frames in total for cardiac blood tracking. Contrast-enhanced echocardiography-based vortex imaging has been used in patients with cardiomyopathy, LV hypertrophy, valvular diseases and heart failure. For testing, we randomly choose three patients' echocardiograms with 291 frames for myocardial tracking, and three patients' echocardiograms with 216 frames for cardiac blood tracking from the two datasets. The rest of the echocardiograms are used for training. We use a large training set so that the learning-based method, VoxelMorph [6], can perform well. The frame rate (frames per second, FPS) of the ultrasound for myocardial tracking is 75, and the FPS for cardiac blood tracking ranges from 72 to 93. Because the FPS is largely consistent, it has little impact on the results. None of the echocardiograms have registration field ground truth.

We only use the first channel of the echocardiography images (i.e., treated as gray-scale images), with the pixel values normalized to [0, 1] by dividing by 255. To remove the background and improve the robustness of registration, we conduct registration on the region whose pixel values fall within a threshold range, which is the default setting in ANTs [3, 4]. To obtain a smooth boundary for stable optimization, we further employ morphological dilation with a disk-shaped structuring element of radius 16 pixels to extract the myocardial region. For cardiac blood flow tracking, we extract the cardiac blood region by 1) creating masks of the left ventricular blood pool at the end of systole and the end of diastole, 2) using an active contour model to fit 100 uniformly sampled spline points along a circle to the boundary of the cardiac blood mask [24], 3) using linear interpolation to get 100 interpolated spline points for each frame, and 4) using radial basis function interpolation to obtain the final smooth cardiac blood boundary from the 100 spline points. Removing the myocardial region is crucial for cardiac blood tracking.

III-B Performance Evaluation and Implementation

For unsupervised learning, one of the main challenges is model evaluation. Manually labeling corresponding points for evaluation is time-consuming, laborious and inaccurate, because the image size is typically large, especially for ultrasound images with low signal-to-noise ratios. We can instead use segmentation accuracy to evaluate registration by conducting registration-based segmentation, which employs the most similar training image (based on NLCC) as the moving image and obtains the segmentation prediction by transforming the training label with the predicted registration field. For the segmentation, we compare our method with NiftyReg [31], ANTs [3, 4], VoxelMorph [6] and NeurReg [49], which uses simulated registration fields to conduct supervised learning. ANTs and NiftyReg are traditional optimization-based methods, and VoxelMorph is a deep learning-based unsupervised registration with the same network structure and loss function as our method for a fair comparison. We use the Dice coefficient (DSC) as the evaluation metric, defined as DSC = 2·TP / (2·TP + FN + FP), where TP, FN, and FP are the numbers of true positives, false negatives, and false positives, respectively. Because the image size is small, we use a small local window in the reconstruction loss in Eq. (2). We follow the same settings as NeurReg, and the performances of the compared methods are reported from [49].
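The Dice coefficient for binary masks is a few lines of NumPy (a sketch; the masks below are synthetic):

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient 2*TP / (2*TP + FN + FP) for binary masks."""
    tp = np.sum(pred & gt)
    fn = np.sum(~pred & gt)
    fp = np.sum(pred & ~gt)
    return 2 * tp / (2 * tp + fn + fp)

a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True  # 16 pixels
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True  # 16 pixels, 9 overlap
print(dice(a, b))  # 2*9 / (2*9 + 7 + 7) = 0.5625
```

In registration-based segmentation the predicted mask is the atlas label warped by the estimated field, so a higher Dice score directly reflects a more accurate registration.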

For the motion analysis, we use the last frame as the moving image and the current frame as the fixed image. Because there is no registration ground truth for tissue tracking based on echocardiograms, we use reconstruction-based metrics, i.e., the mean square error (MSE) and the normalized local cross-correlation (NLCC) with a radius of ten pixels, in place of pixel-position-based evaluation metrics. We calculate the average MSE and NLCC over all frame pairs with the pixel values linearly normalized to [0, 1]. For the MSE and NLCC of one pair of frames, we take the average of the squared error and the normalized local cross-correlation over the masked region obtained in Section III-A. A method with low MSE and high NLCC achieves good reconstruction and is preferred. We compare our approach to ANTs [3, 4] and VoxelMorph [6]. We use a fixed-size local window for the reconstruction loss. For a fair comparison, we use the same network structure and number of channels in each convolution as VoxelMorph.

For the purpose of ablation studies, we report the results of self-supervision optimized registration (SSR), self-supervision optimized multi-scale registration (SSMSR (1)) and the intermediate scales of self-supervision optimized multi-scale registration (SSMSR (s)). We use two scales, 1/2 and 1, on the hippocampus dataset, and four scales, 1/8, 1/4, 1/2 and 1, on the echocardiogram datasets, because the echocardiogram images are relatively large and the hippocampus images are relatively small. We use the Adam optimizer [25] to update the weights of the neural networks for both our method and VoxelMorph. The regularization parameter λ, which is the weight ratio between the smoothness loss and the reconstruction loss, is set based on the performance on the validation set. We set a fixed number of optimization steps per test pair or ultrasound sequence for the self-supervision optimized registration (at each scale), and set the number of iterations for VoxelMorph according to the number of training ultrasound sequences for a fair comparison. To improve the generalization performance of our method, we also first update the weights on the training set, in the same way as VoxelMorph. In our experiments, we find that, for self-supervision optimized multi-scale registration, consecutive optimization across scales, i.e., initializing the weights at each scale with the optimized weights from the previous coarser scale, yields better registration.

We conduct deformable registration and use the hyper-parameters recommended for ANTs and NiftyReg in [6], because the fields of view of the hippocampus and cardiac tissues are roughly aligned during image acquisition. For ANTs, we use three scales with 200 iterations each, a B-Spline SyN step size of 0.25, and an updated field mesh size of five. For NiftyReg, we use three scales, the same local negative cross-correlation objective function with a control point grid spacing of five voxels, and 500 iterations. We tuned the hyper-parameters of ANTs and VoxelMorph to the best of our ability.

III-C Registration-Based Segmentation

On the hippocampus dataset, the image size is small, so we only use two scales for the multi-scale registration. For the ablation study, we calculate the DSCs of test-time training based self-supervision optimized registration without the multi-scale scheme (SSR), self-supervision optimized multi-scale registration at scale 1/2 (SSMSR (1/2)), and the final self-supervision optimized multi-scale registration (SSMSR (1)) in Table I. We compare our method with previous registration approaches including ANTs, NiftyReg and the recently proposed NeurReg [49]. We follow the same experimental settings as NeurReg; the results of ANTs, NiftyReg, VoxelMorph and NeurReg are reported from [49]. We use only one atlas for each test image, selected by the NLCC similarity score from VoxelMorph. The segmentation comparisons are listed in Table I. With the same set of atlases, self-supervision optimized registration yields better segmentation than VoxelMorph, which means that alleviating the domain shift by self-supervised optimization improves the accuracy of registration-based segmentation. Test-time training by self-supervision optimized multi-scale registration further improves the registration-based segmentation accuracy on both classes.

Method  Hippocampus head/body  Average
ANTs  80.86±5.13 / 78.34±5.24  79.60
NiftyReg  80.53±4.86 / 77.92±5.47  79.23
SSR (w/o multi-scale)
SSMSR (1/2) (coarsest scale)

TABLE I: Segmentation (Dice scores, %) on the hippocampus dataset.

III-D Registration-Based Tissue Dense Tracking

To validate the effectiveness of our method on noisy ultrasound images with large deformations, we evaluate the registration fields from ANTs, VoxelMorph, test-time training by self-supervision optimized registration (SSR) and self-supervision optimized multi-scale registration at each scale (SSMSR (s)). Quantitative comparisons of these models on both myocardial and cardiac blood flow dense tracking are shown in Table II.

Methods  MSE (×10⁻³)  NLCC (×10⁻¹)
ANTs  15.5179±9.4637  3.1597±1.3913
VoxelMorph  1.2266±0.5457  4.7363±0.5457
SSR (w/o multi-scale)  1.2158±0.5473  4.7809±0.4952
SSMSR (1/8) (coarsest scale)  1.6937±0.5616  4.2084±0.5086
SSMSR (1/4)  1.3504±0.4885  4.4140±0.5005
SSMSR (1/2)  1.0929±0.3775  4.7155±0.4569
SSMSR (1)  0.9206±0.3236  5.0881±0.4107

ANTs  3.9344±1.4660  4.2062±1.0576
VoxelMorph  5.8654±1.6943  3.3462±0.5891
SSR (w/o multi-scale)  5.7780±1.6222  3.3942±0.5868
SSMSR (1/8) (coarsest scale)  6.8108±1.8189  2.3948±0.5495
SSMSR (1/4)  5.6106±1.2424  2.7331±0.6779
SSMSR (1/2)  4.5089±1.0272  3.3879±0.7524
SSMSR (1)  3.7944±0.9384  4.2519±0.7181

TABLE II: Comparisons on myocardial (upper) and cardiac blood flow (lower) dense tracking among ANTs, VoxelMorph and ours.

From Table II, we highlight the following observations. 1) SSMSR (1) achieves the best performance and outperforms ANTs in terms of both MSE and NLCC on both myocardial and cardiac blood tracking, likely due to the representation and optimization efficiency of deep neural networks: deep learning has a good capacity to model the registration field. 2) SSR yields consistently better results than VoxelMorph, demonstrating the efficacy of self-supervised optimization during the test phase in improving registration field estimation and reducing the estimation gap between training and testing. 3) SSMSR (1) performs better than SSR in all experiments, demonstrating the benefit of sequential multi-scale optimization in echocardiogram registration. The multi-scale scheme also alleviates the over-optimization of the reconstruction loss compared with ANTs, which can be noticed visually in Figs. 3 and 4 in Section IV-B. Overall, the multi-scale registration with consecutive test-time training, SSMSR (1), achieves the best performance on both tasks under both metrics.

IV Discussion

IV-A Understanding Test-Time Training

To further understand the importance of test-time training at each scale of the multi-scale registration, we visualize the reconstructions of cardiac blood flow from multi-scale registration (scale 1/8) with (middle column) and without (first column) test-time training in Fig. 2. From the visual comparison, multi-scale registration without test-time training reconstructs blood details worse than multi-scale registration with test-time training. In the multi-scale registration framework, the registration at the current scale relies on the previously reconstructed image, which makes the optimization more difficult if the reconstruction at the coarse level is poor.

Fig. 2: Reconstruction comparison for cardiac blood flow from multi-scale registration (scale 1/8) without test-time training, i.e., VoxelMorph [6] (first column), with test-time training (middle column), and the ground-truth image (last column).

IV-B Understanding Multi-Scale Registration

To further understand the self-supervision optimized multi-scale registration (SSMSR) at test time, we randomly choose two neighboring frames from the ultrasound sequences and visualize the myocardial tracking results from ANTs, VoxelMorph and the intermediate registration fields from SSMSR at four different scales in Fig. 3. More visualization results can be found in the supplemental materials.

Fig. 3: Visualization of myocardial tracking based on ANTs, VoxelMorph [6] and Ours.

From Fig. 3, we note that the registration field from ANTs is noisy, and the velocity direction from VoxelMorph for the right myocardium is incorrect because of contradictions in the estimated direction of myocardial motion. By contrast, SSMSR (1/8) produces the smoothest registration field, and SSMSR (1) generates more detailed velocity estimates that preserve both large- and small-scale velocity variations. The coarse-to-fine results illustrate that the multi-scale optimization scheme coupled with deep neural networks can be very effective in the highly challenging setting of echocardiogram registration.

To facilitate the understanding of the proposed SSMSR at test time for dense blood flow tracking based on echocardiograms, we also visualize the cardiac blood flow tracking results from ANTs, VoxelMorph and the intermediate registration fields from SSMSR at four different scales in Fig. 4. We randomly choose two neighboring frames from these ultrasound images.

Fig. 4: Visualization of cardiac blood flow tracking based on ANTs, VoxelMorph [6] and Ours.

From the registration fields generated by ANTs and VoxelMorph in Fig. 4, we cannot easily recognize the vortex in the cardiac blood flow. By contrast, the vortex flow pattern from SSMSR is readily recognizable. The general vortex pattern is apparent from the coarsest-level registration by SSMSR (1/8), followed by finer-scale registrations that introduce details of local velocity field variations. The final velocity field produced by SSMSR (1) includes both an easily recognizable vortex flow and details of local field variations.

IV-C Computational Cost

We compare the computational cost on the echocardiogram dataset. NiftyReg takes time on the same scale as ANTs, as validated in [49]. For ANTs, we could not find a GPU implementation, and the average computational time is 214.10±54.04 seconds for the registration of two consecutive frames on 12 processors of an Intel i7-6850K CPU @ 3.60GHz. For an ultrasound sequence of 50 frames, the computational time is about three hours for ANTs. Because the inference of VoxelMorph only requires one feed-forward pass of the deep neural network, its average computational time is 0.11±0.47 seconds for one pair of frames on one NVIDIA 1080 Ti GPU. The test-time training based multi-scale registration takes 279.97, 101.65, 68.79 and 66.09 seconds for self-supervision optimization at scales 1, 1/2, 1/4 and 1/8, respectively, on one ultrasound sequence of 49 frames with one NVIDIA 1080 Ti GPU. SSMSR takes less than nine minutes of test time in total for one ultrasound sequence, achieving a 20× speedup over ANTs even when using four scales.

V Conclusions

In this work, we propose a novel framework, test-time training based multi-scale registration, as a general framework for deformable image registration. To produce accurate registration field estimates from noisy medical images and reduce the estimation gap between training and testing, we incorporate test-time training into the registration framework. To handle large variations of the registration fields, a multi-scale scheme is integrated into the proposed framework, which reduces the over-optimization of similarity functions and provides a sequential residual optimization pathway that alleviates the optimization difficulties in registration. Our proposed method consistently outperforms previous approaches on both the registration-based segmentation task on 3D MR images and the myocardial and cardiac blood flow dense tracking tasks on echocardiograms.

In future work, the current framework can be extended by: 1) forcing the network to generate consistent displacement fields between moving and fixed images [17], 2) adopting an iterative module and improving efficiency by predicting the final registration field directly [21], and 3) integrating registration into segmentation, facilitated by smart strategies for handling large GPU memory consumption [48, 51, 49].


  • [1] J. Ashburner et al. (2000) Voxel-based morphometry—the methods. Neuroimage. Cited by: §I.
  • [2] J. Ashburner (2007) A fast diffeomorphic image registration algorithm. Neuroimage 38 (1), pp. 95–113. Cited by: §I.
  • [3] B. B. Avants, C. L. Epstein, M. Grossman, and J. C. Gee (2008) Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Medical image analysis 12 (1), pp. 26–41. Cited by: §I, §III-A, §III-B, §III-B.
  • [4] B. B. Avants, N. Tustison, and G. Song (2009) Advanced normalization tools (ants). Insight j 2, pp. 1–35. Cited by: §II-A, §III-A, §III-B, §III-B.
  • [5] R. Bajcsy and S. Kovačič (1989) Multiresolution elastic matching. Computer vision, graphics, and image processing 46 (1), pp. 1–21. Cited by: §I, §II-A.
  • [6] G. Balakrishnan, A. Zhao, M. R. Sabuncu, J. Guttag, and A. V. Dalca (2018) An unsupervised learning model for deformable medical image registration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9252–9260. Cited by: §I, §I, §III-A, §III-B, §III-B, §III-B, Fig. 2, Fig. 3, Fig. 4.
  • [7] G. Balakrishnan, A. Zhao, M. R. Sabuncu, J. Guttag, and A. V. Dalca (2019) VoxelMorph: a learning framework for deformable medical image registration. IEEE transactions on medical imaging. Cited by: §II-A.
  • [8] M. F. Beg, M. I. Miller, A. Trouvé, and L. Younes (2005) Computing large deformation metric mappings via geodesic flows of diffeomorphisms. International journal of computer vision 61 (2), pp. 139–157. Cited by: §I.
  • [9] M. Blendowski and M. P. Heinrich (2019) Combining mrf-based deformable registration and deep binary 3d-cnn descriptors for large lung motion estimation in copd patients. International journal of computer assisted radiology and surgery 14 (1), pp. 43–52. Cited by: §I.
  • [10] X. Cao, J. Yang, J. Zhang, D. Nie, M. Kim, Q. Wang, and D. Shen (2017) Deformable image registration based on similarity-steered cnn regression. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 300–308. Cited by: §I.
  • [11] E. Chee and J. Wu (2018) Airnet: self-supervised affine registration for 3d medical images using neural networks. arXiv preprint arXiv:1810.02583. Cited by: §I.
  • [12] A. H. Curiale, G. Vegas-Sánchez-Ferrero, and S. Aja-Fernández (2016) Influence of ultrasound speckle tracking strategies for motion and strain estimation. Medical image analysis 32, pp. 184–200. Cited by: §I.
  • [13] A. V. Dalca, G. Balakrishnan, J. Guttag, and M. R. Sabuncu (2018) Unsupervised learning for fast probabilistic diffeomorphic registration. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 729–738. Cited by: §I.
  • [14] B. D. de Vos, F. F. Berendsen, M. A. Viergever, H. Sokooti, M. Staring, and I. Išgum (2019) A deep learning framework for unsupervised affine and deformable image registration. Medical image analysis 52, pp. 128–143. Cited by: §I.
  • [15] B. D. de Vos, F. F. Berendsen, M. A. Viergever, M. Staring, and I. Išgum (2017) End-to-end unsupervised deformable image registration with a convolutional neural network. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pp. 204–212. Cited by: §I.
  • [16] J. Fan, X. Cao, Z. Xue, P. Yap, and D. Shen (2018) Adversarial similarity network for evaluating image alignment in deep learning based registration. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 739–746. Cited by: §I.
  • [17] C. Godard, O. Mac Aodha, and G. J. Brostow (2017) Unsupervised monocular depth estimation with left-right consistency. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 270–279. Cited by: §I, §V.
  • [18] G. Haskins, U. Kruger, and P. Yan (2019) Deep learning in medical image registration: a survey. arXiv preprint arXiv:1903.02026. Cited by: §I.
  • [19] Y. Hu, R. Song, and Y. Li (2016) Efficient coarse-to-fine patchmatch for large displacement optical flow. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5704–5712. Cited by: §I.
  • [20] Y. Huang, W. Zhu, D. Xiong, Y. Zhang, C. Hu, and F. Xu (2020) Cycle-consistent adversarial autoencoders for unsupervised text style transfer. In Proceedings of the 28th International Conference on Computational Linguistics, pp. 2213–2223. Cited by: §I.
  • [21] J. Hur and S. Roth (2019) Iterative residual refinement for joint optical flow and occlusion estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5754–5763. Cited by: §I, §V.
  • [22] M. Jaderberg, K. Simonyan, A. Zisserman, et al. (2015) Spatial transformer networks. In Advances in neural information processing systems, pp. 2017–2025. Cited by: §I, §II-B.
  • [23] P. Jiang and J. A. Shackleford (2018) CNN driven sparse multi-level b-spline image registration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9281–9289. Cited by: §I.
  • [24] M. Kass, A. Witkin, and D. Terzopoulos (1988) Snakes: active contour models. International journal of computer vision 1 (4), pp. 321–331. Cited by: §III-A.
  • [25] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §II-C, §III-B.
  • [26] J. Krebs, T. Mansi, H. Delingette, L. Zhang, F. C. Ghesu, S. Miao, A. K. Maier, N. Ayache, R. Liao, and A. Kamen (2017) Robust non-rigid registration through agent-based action learning. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 344–352. Cited by: §I.
  • [27] H. Li and Y. Fan (2018) Non-rigid image registration using self-supervised fully convolutional networks without training data. In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pp. 1075–1078. Cited by: §I.
  • [28] R. Liao, S. Miao, P. de Tournemire, S. Grbic, A. Kamen, T. Mansi, and D. Comaniciu (2017) An artificial agent for robust image registration. In Thirty-First AAAI Conference on Artificial Intelligence. Cited by: §I.
  • [29] K. Ma, J. Wang, V. Singh, B. Tamersoy, Y. Chang, A. Wimmer, and T. Chen (2017) Multimodal image registration with deep context reinforcement learning. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 240–248. Cited by: §I.
  • [30] S. Miao, S. Piat, P. Fischer, A. Tuysuzoglu, P. Mewes, T. Mansi, and R. Liao (2018) Dilated fcn for multi-agent 2d/3d medical image registration. In Thirty-Second AAAI Conference on Artificial Intelligence. Cited by: §I.
  • [31] M. Modat, G. R. Ridgway, Z. A. Taylor, M. Lehmann, J. Barnes, D. J. Hawkes, N. C. Fox, and S. Ourselin (2010) Fast free-form deformation using graphics processing units. Computer methods and programs in biomedicine 98 (3), pp. 278–284. Cited by: §III-B.
  • [32] T. C. Mok and A. C. Chung (2020) Large deformation diffeomorphic image registration with laplacian pyramid networks. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 211–221. Cited by: §I.
  • [33] J. Neylon, Y. Min, D. A. Low, and A. Santhanam (2017) A neural network approach for fast, automated quantification of dir performance. Medical physics 44 (8), pp. 4126–4138. Cited by: §I.
  • [34] M. Rohé, M. Datar, T. Heimann, M. Sermesant, and X. Pennec (2017) SVF-net: learning deformable image registration using shape matching. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 266–274. Cited by: §I.
  • [35] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pp. 234–241. Cited by: §I, §II-B.
  • [36] D. Rueckert, L. I. Sonoda, C. Hayes, D. L. Hill, M. O. Leach, and D. J. Hawkes (1999) Nonrigid registration using free-form deformations: application to breast mr images. IEEE transactions on medical imaging 18 (8), pp. 712–721. Cited by: §I.
  • [37] A. Sheikhjafari, M. Noga, K. Punithakumar, and N. Ray (2018) Unsupervised deformable image registration with fully connected generative neural network. In International Conference on Medical Imaging with Deep Learning. Cited by: §I.
  • [38] D. Shen and C. Davatzikos (2002) HAMMER: hierarchical attribute matching mechanism for elastic registration. IEEE transactions on medical imaging 21 (11), pp. 1421. Cited by: §I.
  • [39] L. Shen, W. Zhu, X. Wang, L. Xing, J. M. Pauly, B. Turkbey, S. A. Harmon, T. H. Sanford, S. Mehralivand, P. Choyke, et al. (2020) Multi-domain image completion for random missing input data. IEEE Transactions on Medical Imaging. Cited by: §I.
  • [40] A. L. Simpson, M. Antonelli, S. Bakas, M. Bilello, K. Farahani, B. van Ginneken, A. Kopp-Schneider, B. A. Landman, G. Litjens, B. Menze, et al. (2019) A large annotated medical image dataset for the development and evaluation of segmentation algorithms. arXiv preprint arXiv:1902.09063. Cited by: §III-A.
  • [41] H. Sokooti, B. de Vos, F. Berendsen, B. P. Lelieveldt, I. Išgum, and M. Staring (2017) Nonrigid image registration using multi-scale 3d convolutional neural networks. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 232–239. Cited by: §I.
  • [42] Y. Sun, X. Wang, Z. Liu, J. Miller, A. A. Efros, and M. Hardt (2020) Test-time training for out-of-distribution generalization. In Proceedings of the International Conference on Machine Learning. Cited by: §I.
  • [43] J. Thirion (1998) Image matching as a diffusion process: an analogy with maxwell’s demons. Medical image analysis 2 (3), pp. 243–260. Cited by: §I.
  • [44] H. Uzunova, M. Wilms, H. Handels, and J. Ehrhardt (2017) Training cnns for image registration from few samples with model-based data augmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 223–231. Cited by: §I.
  • [45] G. Wu, M. Kim, Q. Wang, Y. Gao, S. Liao, and D. Shen (2013) Unsupervised deep feature learning for deformable registration of mr brain images. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 649–656. Cited by: §I.
  • [46] X. Yang, R. Kwitt, M. Styner, and M. Niethammer (2017) Quicksilver: fast predictive image registration–a deep learning approach. NeuroImage 158, pp. 378–396. Cited by: §I.
  • [47] S. Zhao, Y. Dong, E. I. Chang, Y. Xu, et al. (2019) Recursive cascaded networks for unsupervised medical image registration. In Proceedings of the IEEE International Conference on Computer Vision, pp. 10600–10610. Cited by: §I.
  • [48] W. Zhu, Y. Huang, L. Zeng, X. Chen, Y. Liu, Z. Qian, N. Du, W. Fan, and X. Xie (2019) AnatomyNet: deep learning for fast and fully automated whole-volume segmentation of head and neck anatomy. Medical physics 46 (2), pp. 576–589. Cited by: §V.
  • [49] W. Zhu, A. Myronenko, Z. Xu, W. Li, H. Roth, Y. Huang, F. Milletari, and D. Xu (2020) NeurReg: neural registration and its application to image segmentation. In 2020 IEEE Winter Conference on Applications of Computer Vision (WACV). Cited by: §III-B, §III-C, §IV-C, §V.
  • [50] W. Zhu, X. Xiang, T. D. Tran, G. D. Hager, and X. Xie (2018) Adversarial deep structured nets for mass segmentation from mammograms. In 2018 IEEE 15th international symposium on biomedical imaging (ISBI 2018), pp. 847–850. Cited by: §I.
  • [51] W. Zhu, C. Zhao, W. Li, H. Roth, Z. Xu, and D. Xu (2020) LAMP: large deep nets with automated model parallelism for image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 374–384. Cited by: §V.