VoxelMorph: A Learning Framework for Deformable Medical Image Registration
We present VoxelMorph, a fast, unsupervised, learning-based algorithm for deformable pairwise medical image registration. Traditional registration methods optimize an objective function independently for each pair of images, which is time-consuming for large datasets. We define registration as a parametric function, implemented as a convolutional neural network (CNN). We optimize its global parameters given a set of images from a collection of interest. Given a new pair of scans, VoxelMorph rapidly computes a deformation field by directly evaluating the function. Our model is flexible, enabling the use of any differentiable objective function to optimize these parameters. In this work, we propose and extensively evaluate a standard image matching objective function as well as an objective function that can use auxiliary data such as anatomical segmentations available only at training time. We demonstrate that the unsupervised model's accuracy is comparable to state-of-the-art methods, while operating orders of magnitude faster. We also show that VoxelMorph trained with auxiliary data significantly improves registration accuracy at test time. Our method promises to significantly speed up medical image analysis and processing pipelines, while facilitating novel directions in learning-based registration and its applications. Our code is freely available at voxelmorph.csail.mit.edu.
Deformable registration is a fundamental task in a variety of medical imaging studies, and has been a topic of active research for decades. In deformable registration, a dense, non-linear correspondence is established between a pair of images, such as 3D MR brain scans. Traditional registration methods solve an optimization problem for each volume pair by aligning voxels with similar appearance while enforcing constraints on the registration mapping. Unfortunately, solving an optimization is computationally intensive, and therefore extremely slow in practice. For example, state-of-the-art algorithms can take over two hours to register a pair of scans.
We propose a novel registration method that learns a parametrized registration function from a collection of volumes. We implement the function using a convolutional neural network (CNN) that takes two n-D input volumes and outputs a mapping of all voxels of one volume to another volume. The parameters of the network, the convolutional kernel weights, are optimized using a training set of volumes from the dataset of interest. By sharing the same parameters for a collection of volumes, the procedure learns a common representation that enables alignment of a new pair of volumes from the same distribution. In essence, we replace the costly optimization of traditional registration algorithms for each test image pair with one global function optimization during a training phase. Registration of a new test scan pair is achieved by simply evaluating the learned function on the given volumes, resulting in rapid registration. We implement our method as a general purpose framework, VoxelMorph, available at voxelmorph.csail.mit.edu.
Our model is flexible, enabling the use of any differentiable objective function to optimize the parameters of the CNN. In this paper, we present and extensively evaluate two objective functions. The first uses only the input volume pair and the proposed registration field of our model. Similar to traditional image registration algorithms, this loss function drives our method to produce registration fields that match voxels with similar appearance, while varying smoothly in space. The second objective function is useful when anatomical segmentations are available at training time for a subset of the data.
Throughout this study, we use the example of registering 3D MR brain scans. However, our method is broadly applicable to other registration tasks, both within and beyond the medical imaging domain. We evaluate our work on a multi-study dataset of over 7,000 scans containing images of healthy and diseased brains from a variety of age groups. Our unsupervised model achieves comparable accuracy to state-of-the-art registration, while taking orders of magnitude less time. Registration with VoxelMorph requires less than a minute using a CPU and under a second with a GPU, in contrast to the state-of-the-art baseline which takes over 2 hours on the CPU. This is of significant practical importance for many medical image tasks, including population analyses.
This paper extends a preliminary version of the work presented at the 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) [1]. We build on that work by introducing an auxiliary learning model that uses anatomical segmentations during training, and demonstrate that training with segmentations can dramatically improve registration accuracy on new test image pairs where segmentation maps are not available. We discuss an interpretation of our model as amortized optimization, and present a rigorous analysis quantifying the effect of training set size on accuracy. We demonstrate that VoxelMorph can be used even with small datasets. We also perform parameter analyses for all of the experiments.

The paper is organized as follows. Section 2 introduces medical image registration and Section 3 describes related work. Section 4 presents our unsupervised and supervised methods. Section 5 presents experimental results on MRI data. We discuss insights of the results and conclude in Section 6.
In the traditional volume registration formulation, one (moving or source) volume is warped to align with a second (fixed or target) volume. Fig. 1
shows sample 2D coronal slices taken from 3D MRI volumes, with boundaries of several anatomical structures outlined. There is significant variability across subjects, caused by natural anatomical brain variations and differences in health state. Deformable registration enables comparison of structures between scans. Such analyses are useful for understanding variability across populations or the evolution of brain anatomy over time for individuals with disease. Deformable registration strategies often involve two steps: an initial affine transformation for global alignment, followed by a much slower deformable transformation with more degrees of freedom. We concentrate on the latter step, in which we compute a dense, nonlinear correspondence for all voxels.
Most existing deformable registration algorithms iteratively optimize a transformation based on an energy function [2]. Let f and m denote the fixed and moving images, respectively, and let φ be the registration field. The optimization problem can be written as:
φ̂ = argmin_φ L(f, m, φ)    (1)

L(f, m, φ) = L_sim(f, m ∘ φ) + λ L_smooth(φ)    (2)
where m ∘ φ represents m warped by φ, function L_sim measures image similarity between its two inputs, L_smooth imposes regularization, and λ is the regularization trade-off parameter.

There are several common formulations for φ, L_sim, and L_smooth. Often, φ is a displacement vector field, specifying the vector offset from f to m for each voxel [3]. Diffeomorphic transforms model φ as the integral of a velocity vector field, preserving topology and maintaining invertibility [4]. Common metrics used for L_sim include intensity mean squared error, mutual information [5], and cross-correlation [6]. The latter two are particularly useful when volumes have varying intensity distributions and contrasts. L_smooth enforces a spatially smooth deformation, often modeled as a linear operator on the spatial gradients of φ.

Traditional algorithms optimize (1) for each volume pair. This is prohibitively expensive when registering many volumes, for example as part of population-wide analyses. In contrast, we assume that a field can be computed by a parameterized function of the data. We optimize the function parameters by minimizing the expected energy of the form of (1) over a dataset of volume pairs. Essentially, we replace pair-specific optimization of the deformation field by global optimization of the shared parameters, which has been referred to as amortization in other domains [7, 8, 9, 10]. Once the global function is estimated, a field can be produced by evaluating the function on a given volume pair. In this paper, we use a displacement-based vector field, and focus on various aspects of the learning framework and its advantages. However, we recently demonstrated that velocity-based representations are also possible in a VoxelMorph-like framework
[11].

There is extensive work in 3D medical image registration [12, 6, 3, 4, 13, 14, 15, 16, 17]. Several studies optimize within the space of displacement vector fields. These include elastic-type models [3, 18, 19], statistical parametric mapping [20], free-form deformations with b-splines [21], discrete methods [13, 14] and Demons [22, 15]. Diffeomorphic transforms, which are topology-preserving, have shown remarkable success in various computational anatomy studies. Popular formulations include Large Diffeomorphic Distance Metric Mapping (LDDMM) [4, 23, 24, 25, 26, 27, 28, 17], DARTEL [12], diffeomorphic demons [29], and standard symmetric normalization (SyN) [6]. All of these non-learning-based approaches optimize an energy function for each image pair, resulting in slow registration.
There are several recent papers proposing neural networks to learn a function for medical image registration. Most of these rely on ground truth warp fields [30, 31, 32, 33, 34, 35]. A ground truth warp field is ill-defined and difficult to acquire, and using fields derived via conventional registration tools as ground truth can introduce a bias and require a long time to collect. In contrast, VoxelMorph is unsupervised, and is also capable of leveraging auxiliary information such as segmentations during training if those are available. Two recent works [36, 37] present unsupervised methods. Both propose a neural network consisting of a CNN and spatial transformation function [38] that warps images to one another. These methods are preliminary and are only demonstrated on limited subsets of volumes, such as 3D subregions or 2D slices, and support only small transformations. Others [36]
employ regularization only implicitly determined by interpolation methods. A recent method proposed a segmentation label-driven cost function for registering different imaging modalities (T2w MRI and 3D ultrasound) within the same subject
[31, 30]. In contrast to these methods, our generalizable method is demonstrated on entire 3D volumes between different subjects, handles large deformations, and supports any differentiable cost function. We extend the preliminary version of our method [1] to show how to leverage auxiliary information during training, and provide extensive analysis of the effect of different label subsets on registration quality.

Optical flow estimation is a related registration problem for 2D images. Optical flow algorithms return a dense displacement vector field depicting small displacements between a pair of 2D images. Traditional optical flow approaches typically solve an optimization problem similar to (1) using variational methods [39, 40, 41]. Extensions that better handle large displacements or dramatic changes in appearance include feature-based matching [42, 43] and nearest neighbor fields [44].
Several learning-based approaches to dense 2D image alignment have been proposed. These methods learn a low-dimensional basis for optical flow in natural images using PCA [45] or learn a parametric function using convolutional neural networks [46, 47]. These methods require ground truth registrations during training.
Let f and m be two image volumes defined over an n-D spatial domain Ω ⊂ ℝⁿ. For the rest of this paper, we focus on the case n = 3, but our method and implementation are dimension independent. For simplicity we assume that f and m contain single-channel, grayscale data. We also assume that f and m are affinely aligned as a preprocessing step, so that the only source of misalignment between the volumes is nonlinear. Many packages are available for rapid affine alignment.
We model a function g_θ(f, m) = u using a convolutional neural network (CNN), where u is a registration displacement field between f and m, and θ are the network parameters, the kernels of the convolutional layers. The deformation field at each voxel p is then φ(p) = p + u(p). That is, for each voxel p, u(p) is a displacement such that f(p) and [m ∘ φ](p) correspond to similar anatomical locations.
Fig. 2 presents an overview of our method. The network takes f and m as input, and computes u based on a set of parameters θ. We warp m to m ∘ φ using a spatial transformation function, enabling evaluation of the similarity of m ∘ φ and f.
We use stochastic gradient descent to find optimal parameters θ̂ by minimizing an expected loss function over a training dataset. We propose two loss functions in this work. The first captures image similarity and field smoothness, while the second also leverages anatomical segmentations. Given unseen images f and m during test time, we obtain a registration field by evaluating g_θ̂(f, m). We describe our CNN architecture and the two loss functions in detail in the next sections.

In this section we describe the particular architecture used in our experiments, but emphasize that a wide range of architectures may work similarly well and that the exact architecture is not our focus. The parametrization of g_θ is based on a convolutional neural network architecture similar to UNet [50, 51], which consists of encoder and decoder sections with skip connections.
Fig. 3 depicts our UNet-style network, which takes a single input formed by concatenating f and m into a 2-channel 3D image. In our experiments, the input is of size 160 × 192 × 224 × 2, but the framework is not limited to a particular size. We apply 3D convolutions (followed by nonlinear activations) in both the encoder and decoder stages. The convolutional layers capture hierarchical features of the input image pair, used to estimate u. In the encoder, we use strided convolutions to reduce the spatial dimensions by half at each layer. Successive layers of the encoder therefore operate over coarser representations of the input, similar to the image pyramid used in traditional image registration work.
In the decoding stage, we alternate between upsampling, convolutions, and concatenating skip connections that propagate features learned during the encoding stages directly to layers generating the registration. Successive layers of the decoder operate on finer spatial scales, enabling precise anatomical alignment. The receptive fields of the convolutional kernels of the smallest layer should be at least as large as the maximum expected displacement between corresponding voxels in f and m. In our architecture, the smallest layer applies convolutions over a volume (1/16)³ of the size of the input images. We provide further architecture details in the supplementary material.
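To make the halving and doubling of spatial dimensions concrete, the following sketch traces the spatial sizes through a hypothetical 4-level encoder-decoder. The input size and level count here are illustrative assumptions, not the exact published architecture:

```python
def unet_shapes(in_shape=(160, 192, 224)):
    """Trace spatial sizes through an assumed 4-level UNet-style network:
    stride-2 convolutions halve each dimension in the encoder, and
    upsampling doubles them back in the decoder."""
    enc = [in_shape]
    for _ in range(4):                              # encoder: 4 halvings
        enc.append(tuple(s // 2 for s in enc[-1]))
    dec = [enc[-1]]
    for _ in range(4):                              # decoder: 4 doublings
        dec.append(tuple(s * 2 for s in dec[-1]))
    return enc, dec

enc, dec = unet_shapes()
# coarsest encoder volume is 1/16 of the input in each dimension
print(enc[-1], dec[-1])
```

With four halvings, the coarsest feature volume is 1/16 of the input per dimension, which is where the large-receptive-field convolutions mentioned above operate.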
The proposed method learns optimal parameter values in part by minimizing differences between m ∘ φ and f. In order to use standard gradient-based methods, we construct a differentiable operation based on spatial transformer networks [38] to compute m ∘ φ. For each voxel p, we compute a (subpixel) voxel location p′ = p + u(p) in m. Because image values are only defined at integer locations, we linearly interpolate the values at the eight neighboring voxels:
m ∘ φ(p) = Σ_{q ∈ Z(p′)} m(q) ∏_{d ∈ {x,y,z}} (1 − |p′_d − q_d|)    (3)
where Z(p′) are the voxel neighbors of p′ and d iterates over the dimensions of Ω. Because these operations are differentiable (the absolute value is implemented to be differentiable at 0 via a set derivative value of 0), we can backpropagate errors during optimization.
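The interpolation of Eq. (3) can be sketched in plain NumPy (without the autodiff machinery of the actual spatial transformer layer). This is an illustrative, unoptimized implementation; the function name and border clamping are our assumptions:

```python
import numpy as np

def warp_linear(m, u):
    """Warp a 3D volume m by a displacement field u (shape (*m.shape, 3))
    using trilinear interpolation over the 8 neighboring voxels, as in
    Eq. (3). Out-of-bounds samples are clamped to the border."""
    D, H, W = m.shape
    grid = np.stack(np.meshgrid(np.arange(D), np.arange(H), np.arange(W),
                                indexing="ij"), axis=-1)
    p = grid + u                        # subpixel locations p' = p + u(p)
    p0 = np.floor(p).astype(int)        # lower corner of the 8-neighborhood
    frac = p - p0                       # fractional offsets per dimension
    out = np.zeros(m.shape, dtype=float)
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                q = p0 + np.array([dz, dy, dx])
                q = np.clip(q, 0, np.array([D - 1, H - 1, W - 1]))
                # trilinear weight: product over d of (1 - |p'_d - q_d|)
                wgt = np.prod(np.where([dz, dy, dx], frac, 1 - frac), axis=-1)
                out += wgt * m[q[..., 0], q[..., 1], q[..., 2]]
    return out
```

A zero displacement field reproduces the input volume exactly, and an integer displacement reduces to a pure shift, which makes the implementation easy to sanity-check.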
The proposed model can be trained with any differentiable loss function. In this section, we propose two loss functions: an unsupervised loss that evaluates the model using only the input volumes and generated registration field, and an auxiliary loss that also leverages anatomical segmentations at training time.
The unsupervised loss L_us consists of two components: L_sim, which penalizes differences in appearance, and L_smooth, which penalizes local spatial variations in φ:
L_us(f, m, φ) = L_sim(f, m ∘ φ) + λ L_smooth(φ)    (4)
where λ is a regularization parameter. We experimented with two often-used functions for L_sim. The first is the mean squared voxelwise difference, applicable when f and m have similar image intensity distributions and local contrast:
MSE(f, m ∘ φ) = (1/|Ω|) Σ_{p ∈ Ω} [f(p) − (m ∘ φ)(p)]²    (5)
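In NumPy, Eq. (5) is simply the mean of squared voxelwise differences; a minimal sketch (function name is ours):

```python
import numpy as np

def mse_loss(f, m_warped):
    """Mean squared voxelwise difference of Eq. (5): averages the
    squared intensity difference over all voxels in the volume."""
    return np.mean((f - m_warped) ** 2)
```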
The second is the local cross-correlation of f and m ∘ φ, which is more robust to intensity variations found across scans and datasets [6]. Let f̂ and m̂ ∘ φ denote images with local mean intensities subtracted out: f̂(p) = f(p) − (1/n³) Σ_{p_i} f(p_i), where p_i iterates over an n³ volume around p, with n = 9 in our experiments. The local cross-correlation of f and m ∘ φ is written as:
CC(f, m ∘ φ) = Σ_{p ∈ Ω} ( Σ_{p_i} f̂(p_i) (m̂ ∘ φ)(p_i) )² / ( ( Σ_{p_i} f̂(p_i)² ) ( Σ_{p_i} (m̂ ∘ φ)(p_i)² ) )    (6)
A higher CC indicates a better alignment, yielding the loss function L_sim(f, m, φ) = −CC(f, m ∘ φ).
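Eq. (6) can be sketched in pure NumPy using sliding windows. This toy version normalizes by the number of valid voxels and adds a small epsilon for stability (both our assumptions, not details from the paper), and skips border voxels whose windows fall outside the volume:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_cc(f, g, n=9, eps=1e-9):
    """Local squared cross-correlation of Eq. (6), averaged over the
    voxels whose n^3 window lies fully inside the volume. g plays the
    role of the warped moving image m ∘ φ."""
    Wf = sliding_window_view(f, (n, n, n))            # one n^3 window per voxel
    Wg = sliding_window_view(g, (n, n, n))
    fm = Wf - Wf.mean(axis=(-3, -2, -1), keepdims=True)   # local-mean subtracted
    gm = Wg - Wg.mean(axis=(-3, -2, -1), keepdims=True)
    num = (fm * gm).sum(axis=(-3, -2, -1)) ** 2           # squared covariance
    den = (fm ** 2).sum(axis=(-3, -2, -1)) * (gm ** 2).sum(axis=(-3, -2, -1))
    return (num / (den + eps)).mean()                     # in [0, 1]
```

By the Cauchy-Schwarz inequality each windowed term lies in [0, 1], reaching 1 when the two windows are perfectly (anti-)correlated, so the loss is −local_cc.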
Minimizing L_sim will encourage m ∘ φ to approximate f, but may generate a non-smooth φ that is not physically realistic. We encourage a smooth displacement field φ using a diffusion regularizer on its spatial gradients:
L_smooth(φ) = Σ_{p ∈ Ω} ‖∇u(p)‖²    (7)
where we approximate the spatial gradients using differences between neighboring voxels.
Here, we describe how VoxelMorph can leverage auxiliary information available during training but not during testing. Anatomical segmentation maps are sometimes available during training, and can be annotated by human experts or automated algorithms. A segmentation map assigns each voxel to an anatomical structure. If a registration field represents accurate anatomical correspondences, the regions in and corresponding to the same anatomical structure should overlap well.
Let s_f^k and s_m^k be the sets of voxels of structure k for f and m, respectively. We quantify the volume overlap for structure k using the Dice score [52]:
Dice(s_f^k, s_m^k ∘ φ) = 2 · |s_f^k ∩ (s_m^k ∘ φ)| / (|s_f^k| + |s_m^k ∘ φ|)    (8)
A Dice score of 1 indicates that the anatomy matches perfectly, and a score of 0 indicates that there is no overlap. We define the segmentation loss L_seg over all K structures as:
L_seg(s_f, s_m ∘ φ) = −(1/K) Σ_{k=1}^{K} Dice(s_f^k, s_m^k ∘ φ)    (9)
L_seg alone does not encourage smoothness and agreement of image appearance, which are essential to good registration. We therefore combine L_seg with (4) to obtain the objective:
L_a(f, m, s_f, s_m, φ) = L_us(f, m, φ) + γ L_seg(s_f, s_m ∘ φ)    (10)
where γ is a regularization parameter. We emphasize that although the segmentations are used during training, they are not inputs to the network g_θ, as shown in Fig. 2. We do not use them to register a new image pair during test time.
Since anatomical labels are categorical, a naive implementation of linear interpolation to compute s_m ∘ φ is inappropriate, and a direct implementation of (8) might not be amenable to auto-differentiation frameworks. We design s_f and s_m to be image volumes with K channels, where each channel is a binary mask specifying the spatial domain of a particular structure. We compute s_m ∘ φ by spatially transforming each channel of s_m. We then compute the numerator and denominator of (8) by multiplying and adding s_f and s_m ∘ φ, respectively.
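The multiply-and-add evaluation of Eq. (8) over one-hot channels can be sketched as a soft Dice in NumPy. The function name and epsilon smoothing are our assumptions; the warped moving segmentation may contain fractional values after linear interpolation of each channel:

```python
import numpy as np

def soft_dice(seg_f, seg_m_warped, eps=1e-6):
    """Soft Dice of Eq. (8) averaged over K structures. Inputs are
    K-channel volumes of shape (D, H, W, K), one binary (or, after
    warping, fractional) mask per structure. The loss of Eq. (9) is
    the negative of this value."""
    axes = (0, 1, 2)
    # numerator of (8): elementwise multiply, then sum over voxels
    inter = 2.0 * (seg_f * seg_m_warped).sum(axis=axes)
    # denominator of (8): sum of the two per-structure volumes
    sizes = seg_f.sum(axis=axes) + seg_m_warped.sum(axis=axes)
    return ((inter + eps) / (sizes + eps)).mean()
```

Identical one-hot segmentations give a score of 1, and structures that are swapped or disjoint give a score near 0.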
Our method substitutes the pair-specific optimization over the deformation field φ with a global optimization of the parameters θ of the function g_θ. This process is sometimes referred to as amortized optimization [53]. Because the function g_θ is tasked with estimating registration between any two images, the fact that its parameters are shared globally acts as a natural regularization. We explore this aspect in Section V-E2 (Regularization Analysis). In addition, the quality and generalizability of the deformations output by the function will depend on the data it is trained on. Indeed, the resulting deformation can be interpreted as simply an approximation or initialization to the optimal deformation, and the resulting difference is sometimes referred to as the amortization gap [10, 53]. If desired, this initial deformation field could be improved using any instance-specific optimization. In our experiments, we accomplish this by fine-tuning the network parameters for each particular scan pair independently. However, most often we find that the initial deformation, the VoxelMorph output, is already as accurate as state-of-the-art results. We explore these aspects in experiments presented in Section V-E3.
We demonstrate our method on the task of brain MRI registration. We focus on atlas-based registration, in which we compute a registration field between an atlas, or reference volume, and each volume in our dataset. Atlas-based registration is a common formulation in population analysis, where inter-subject registration is a core problem. The atlas represents a reference, or average volume, and is usually constructed by jointly and repeatedly aligning a dataset of brain MR volumes and averaging them together [54]. We use an atlas computed using an external dataset [55, 56]. Each input volume pair consists of the atlas (image ) and a volume from the dataset (image ). Fig. 4 shows example image pairs using the same fixed atlas for all examples. All figures that depict brains in this paper show 2D slices, but all registration is done in 3D.
We use a large-scale, multi-site, multi-study dataset of 7829 T1-weighted brain MRI scans from eight publicly available datasets: ADNI [57], OASIS [58], ABIDE [59], ADHD200 [60], MCIC [61], PPMI [62], HABS [63], and Harvard GSP [64]. Acquisition details, subject age ranges and health conditions differ for each dataset. All scans were resampled to a 256 × 256 × 256 grid with 1 mm isotropic voxels. We carry out standard pre-processing steps, including affine spatial normalization and brain extraction for each scan using FreeSurfer [55], and crop the resulting images to 160 × 192 × 224. All MRIs were anatomically segmented with FreeSurfer, and we applied quality control using visual inspection to catch gross errors in segmentation results. We include all anatomical structures that are at least 100 voxels in volume for all test subjects, resulting in 30 structures. We use the resulting segmentation maps in evaluating our registration as described below. We split our dataset into 7329, 250, and 250 volumes for train, validation, and test sets respectively, although we highlight that we do not use any supervised information at any stage.
Obtaining dense ground truth registration for these data is not well-defined since many registration fields can yield similar looking warped images. We evaluate our method using volume overlap of anatomical segmentations. If a registration field represents accurate correspondences, the regions in and corresponding to the same anatomical structure should overlap well (see Fig. 4 for examples). We quantify the volume overlap between structures using the Dice score (8).
We use Symmetric Normalization (SyN) [6], the top-performing registration algorithm in a comparative study [65] as a baseline. We use the SyN implementation in the publicly available ANTs software package [66], with a cross-correlation similarity measure. Throughout our work with medical images, we found the default ANTs smoothness parameters to be sub-optimal for applying ANTs to our data. We obtained improved parameters using a wide parameter sweep across multiple datasets, and use those in these experiments.
Method | Dice | GPU sec | CPU sec
--- | --- | --- | ---
Affine only | 0.567 (0.157) | 0 | 0
ANTs (SyN) | 0.750 (0.136) | - | 9059 (2023)
VoxelMorph (CC) | 0.750 (0.138) | 0.45 (0.01) | 57 (1)
VoxelMorph (MSE) | 0.753 (0.134) | 0.45 (0.01) | 57 (1)
Average Dice scores and runtime results for affine alignment, ANTs and VoxelMorph. Standard deviations across structures and subjects are in parentheses. The average Dice score is computed over all structures and subjects. Timing is computed after preprocessing. Our networks yield comparable results to ANTs in Dice score, while operating orders of magnitude faster during testing. To our knowledge, ANTs does not have a GPU implementation.
We implemented our method using Keras [67] with a TensorFlow backend [68]. We extended the 2D spatial transformer layer to 3D with linear interpolation. We use the ADAM optimizer [69] with a learning rate of 1e-4. To reduce memory usage, each training batch consists of one pair of volumes. We train separate networks with different values of the regularization parameter λ, and show the behavior of our method with respect to these parameters. We then select the network that optimizes Dice score on our validation set, and report results on our held-out test set. Our code and model parameters are available online at voxelmorph.csail.mit.edu.

Table I presents average Dice scores computed for all subjects and structures for: 1) a baseline of only global affine alignment, 2) ANTs, and 3) VoxelMorph with different losses. VoxelMorph variants perform comparably to ANTs in terms of Dice, and are significantly better than affine alignment. Example visual results of the warped images from our algorithms are shown in Fig. 4. Both VoxelMorph and ANTs are able to handle significant shape changes for various structures.
Fig. 5 presents the Dice scores for each structure as a boxplot. For ease of visualization, we average Dice scores of the same structures from the two hemispheres into one score, e.g., the left and right hippocampi scores are averaged. The VoxelMorph models achieve comparable Dice measures to ANTs for all structures, performing slightly better than ANTs on some structures such as cerebral white matter, and worse on others such as the hippocampi. Because MSE and CC perform similarly, we focus on MSE for the remaining experiments.
Table I presents runtime results using an Intel Xeon (E5-2680) CPU, and a NVIDIA TitanX GPU. We report the elapsed time for computations following the affine alignment preprocessing step, which all of the presented methods share, and requires just a few minutes even on a CPU. ANTs requires two or more hours on the CPU. ANTs runtimes vary widely, as its convergence depends on the difficulty of the alignment task. Registering two images with VoxelMorph is, on average, 60 times faster on the CPU. When using the GPU, VoxelMorph computes a registration in under a second. To our knowledge, there is no publicly available ANTs implementation for GPUs. It is likely that the SyN algorithm would benefit from a GPU implementation, but the main advantage of VoxelMorph comes from not requiring an optimization on each test pair, as can be seen in the CPU comparison.
Fig. 6 shows average Dice scores for the validation set for different values of the regularization parameter λ. Shown for comparison as a dashed line is the Dice score for alignment with ANTs with optimal parameter values. The results vary smoothly over a large range of λ values, illustrating that our model is robust to the choice of λ. Interestingly, even setting λ = 0, which enforces no explicit regularization on the registration field, results in a significant improvement over affine registration. This is likely because the optimal network parameters need to register all pairs in the training set well, yielding an implicit dataset regularization for the function g_θ.
We evaluate the effect of training set size on accuracy, and the relationship between amortized and instance-specific optimization. We train VoxelMorph on subsets of different sizes from our training dataset, and report Dice scores on: (1) the training subset, (2) the test set, and (3) the test set when each deformation is further individually optimized for each test image pair. We perform (3) by “fine-tuning” the VoxelMorph parameters for 1000 additional iterations on each test pair (which is possible since VoxelMorph is unsupervised).
Fig. 7 presents our results. A small training set size of 10 scans results in slightly lower train and test Dice scores compared to larger training set sizes. However, there is no significant difference in Dice scores when training with 100 scans or the full dataset of over 7000 scans. These results indicate that training with just 10 subjects already yields near-optimal test performance, and training with 100 subjects results in state-of-the-art results. Further optimizing the VoxelMorph parameters on each test image pair results in better test Dice scores regardless of training set size.
In this section, we evaluate VoxelMorph when using loss function (10), where we use segmentation maps during training through the auxiliary loss function. We present an evaluation of our model in two practical scenarios: (1) when subsets of anatomical structure labels are available during training, and (2) when coarse segmentation labels are available during training. We use the same train/validation/test split as the previous experiments.
In many practical settings, it may be infeasible to obtain training segmentations for all structures. We therefore first consider the case where segmentations are available for only a subset of the 30 structures. We refer to structures present in segmentations as observed, and the rest as unobserved. We considered three cases, when: 1, 15 (half), and 30 (all) structure segmentations are observed. The first two experiments simulate different amounts of partially observed segmentations. We train separate models on different subsets of observed structures. For single structure segmentations, we manually selected four important structures for four folds of the experiment: hippocampi, gray matter, white matter, and ventricles. For the second experiment type, we randomly selected 15 of the 30 structures, repeated over five folds. We use these segmentation maps at training, and show results on test pairs where segmentation maps are not used.
Fig. 8a-c shows Dice scores for both the observed and unobserved labels when sweeping γ in (10), the auxiliary regularization trade-off parameter. In general, VoxelMorph with auxiliary data significantly outperforms unsupervised VoxelMorph (equivalent to γ = 0) and ANTs on observed structures. The Dice score on observed labels generally increases with an increase in γ.
Interestingly, VoxelMorph (trained with auxiliary data) yields improved Dice scores for unobserved structures compared to the unsupervised variant for a range of γ values (see Fig. 8a-b), even though these segmentations were not explicitly observed during training. Registration accuracy for unobserved structures starts declining when γ is large, which is expected since the model starts over-fitting to the observed structures at the cost of image matching and spatial regularization.
We consider the scenario where only coarse labels are available, such as when all the white matter is segmented as one structure. This situation enables evaluation of how the auxiliary data affects anatomical registration at finer scales, within the coarsely delineated structures. To achieve this, we merge the 30 structures into four broad groups: white matter, gray matter, cerebrospinal fluid (CSF) and the brain stem, and evaluate the accuracy of the registration on the original structures.
Fig. 8d presents mean Dice scores over the original 30 structures for varying values of the regularization parameter γ. With the optimal γ of 0.01, we obtain roughly a one Dice point improvement over VoxelMorph without auxiliary information.
VoxelMorph with the unsupervised loss performs comparably to the state-of-the-art ANTs software in terms of Dice score, while reducing the computation time from hours to minutes on a CPU and to under a second on a GPU. VoxelMorph is flexible and handles both partially observed and coarsely delineated auxiliary information during training. When all structures are annotated during training, the Dice score of the VoxelMorph results improves beyond the state-of-the-art, while still preserving the runtime improvement.
VoxelMorph performs amortized optimization, learning function parameters that are optimal for an entire training dataset. As Fig. 7 shows, the dataset need not be large: with only 100 training images, VoxelMorph leads to state-of-the-art registration quality scores for both training and test sets. Instance-specific optimization further improves VoxelMorph performance, but only by a small margin, illustrating that amortized optimization already leads to nearly optimal registration.
We performed a thorough set of experiments demonstrating that, for a reasonable choice of , the availability of anatomical segmentations during training significantly improves test registration performance with VoxelMorph. The performance gain varies based on the quality and number of anatomical segmentations available. Given a single labelled anatomical structure during training, the accuracy of registration of test subjects for that label increases, without negatively impacting other anatomy. If half or all of the labels are observed, or even a coarse segmentation is provided at training, registration accuracy improves for all labels during test. While we experimented with one type of auxiliary data in this study, VoxelMorph can leverage other auxiliary data, such as different modalities or anatomical keypoints.
VoxelMorph is a general learning model, and is not limited to a particular image type or anatomy – it may be useful in other medical image registration applications such as cardiac MR scans or lung CT images. With an appropriate loss function such as mutual information, the model can also perform multimodal registration. VoxelMorph promises to significantly speed up medical image analysis and processing pipelines, while opening novel directions in learning-based registration.
B. Glocker, N. Komodakis, G. Tziritas, N. Navab, and N. Paragios, "Dense image registration through MRFs and efficient linear programming," Medical Image Analysis, vol. 12, no. 6, pp. 731–741, 2008.
C. Ceritoglu, K. Oishi, X. Li, M.-C. Chou, L. Younes, M. Albert, C. Lyketsos, P. C. van Zijl, M. I. Miller, and S. Mori, "Multi-contrast large deformation diffeomorphic metric mapping for diffusion tensor imaging," NeuroImage, vol. 47, no. 2, pp. 618–627, 2009.
Y. Hu, M. Modat, E. Gibson, N. Ghavami, E. Bonmati, C. M. Moore, M. Emberton, J. A. Noble, D. C. Barratt, and T. Vercauteren, "Label-driven weakly-supervised learning for multimodal deformable image registration," in Biomedical Imaging (ISBI 2018), 2018 IEEE 15th International Symposium on. IEEE, 2018, pp. 1070–1074.
X. Yang, R. Kwitt, M. Styner, and M. Niethammer, "Quicksilver: Fast predictive image registration - a deep learning approach," NeuroImage, vol. 158, pp. 378–396, 2017.
P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, "Image-to-image translation with conditional adversarial networks," arXiv preprint, 2017.