Traditional deformable registration techniques achieve impressive results and offer a rigorous theoretical treatment, but are computationally intensive since they solve an optimization problem for each image pair. Recently, learning-based methods have facilitated fast registration by learning spatial deformation functions. However, these approaches use restricted deformation models, require supervised labels, or do not guarantee a diffeomorphic (topology-preserving) registration. Furthermore, learning-based registration tools have not been derived from a probabilistic framework that can offer uncertainty estimates. In this paper, we present a probabilistic generative model and derive an unsupervised learning-based inference algorithm that makes use of recent developments in convolutional neural networks (CNNs). We demonstrate our method on a 3D brain registration task and provide an empirical analysis of the algorithm. Our approach results in state-of-the-art accuracy and very fast runtimes, while providing diffeomorphic guarantees and uncertainty estimates. Our implementation is available online at https://github.com/voxelmorph/voxelmorph.
Deformable registration computes a dense correspondence between two images, and is fundamental to many medical image analysis tasks. Traditional methods solve an optimization over the space of deformations, such as elastic-type models [5, 29], B-splines [28], dense vector fields [31], or discrete methods [10, 14]. Constraining the allowable transformations to diffeomorphisms ensures certain desirable properties, such as preservation of topology. Diffeomorphic transforms have seen extensive methodological development, yielding state-of-the-art tools such as LDDMM [7, 33], DARTEL [3], and SyN [4]. Unfortunately, these methods often demand substantial time and computational resources for a given image pair.

Recent methods have proposed to train neural networks that map a pair of input images to an output deformation. These approaches usually require ground truth registration fields, often derived via more conventional registration tools, which can introduce a bias and necessitate significant preprocessing [26, 30, 32]. Some preliminary papers [11, 21] explore unsupervised strategies that build on the spatial transformer network [17], but are only demonstrated with constrained deformation models such as affine or small displacement fields. Furthermore, they have only been validated on limited volumes, such as 3D patches or 2D slices. A recent paper avoids these pitfalls, but still does not provide topology-preserving guarantees or probabilistic uncertainty estimates, which yield meaningful information for downstream image analysis [6].

In this paper, we present a formulation of registration as variational inference on a probabilistic generative model. This framework naturally results in a learning algorithm that uses a convolutional neural network with an intuitive cost function. We introduce novel diffeomorphic integration layers combined with a transform layer to enable unsupervised end-to-end learning for diffeomorphic registration. We present extensive experiments demonstrating that our algorithm achieves state-of-the-art registration accuracy while providing diffeomorphic deformations, fast runtime, and estimates of registration uncertainty.
Although the method presented in this paper applies to a multitude of deformable representations, we choose to work with diffeomorphisms, and in particular with a stationary velocity field representation [3]. Diffeomorphic deformations are differentiable and invertible, and thus preserve topology. Let ϕ : R³ → R³ represent the deformation that maps the coordinates from one image to coordinates in another image. In our implementation, the deformation field is defined through the following ordinary differential equation (ODE):

    ∂ϕ^(t)/∂t = v(ϕ^(t)),   (1)

where ϕ^(0) = Id is the identity transformation and t is time. We integrate the stationary velocity field v over t = [0, 1] to obtain the final registration field ϕ^(1).
We compute the integration numerically using scaling and squaring [2], which we briefly review here. The integration of a stationary ODE represents a one-parameter subgroup of diffeomorphisms. In group theory, v is a member of the Lie algebra and is exponentiated to produce ϕ^(1), which is a member of the Lie group: ϕ^(1) = exp(v). From the properties of one-parameter subgroups, for any scalars t and t′, exp((t + t′)v) = exp(tv) ∘ exp(t′v), where ∘ is a composition map associated with the Lie group. Starting from ϕ^(1/2^T) = p + v(p)/2^T, where p is a map of spatial locations, we use the recurrence ϕ^(1/2^(t−1)) = ϕ^(1/2^t) ∘ ϕ^(1/2^t) to obtain ϕ^(1) = ϕ^(1/2) ∘ ϕ^(1/2). T is chosen so that v/2^T ≈ 0.
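The scaling-and-squaring recurrence above can be sketched in NumPy. This is an illustrative 2D version (the function names are our own, not from the VoxelMorph package) that represents each map by its displacement field u, so that ϕ(p) = p + u(p), and composes maps with linear interpolation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose(disp_a, disp_b):
    """Approximate (a o b)(p) = a(b(p)) for maps given as displacement
    fields of shape (2, H, W), so that a(p) = p + disp_a(p).
    disp_a is evaluated at the non-integer locations b(p) by linear
    interpolation."""
    grid = np.mgrid[0:disp_a.shape[1], 0:disp_a.shape[2]].astype(float)
    warped_locs = grid + disp_b  # b(p) for every voxel p
    a_at_b = np.stack([
        map_coordinates(disp_a[i], warped_locs, order=1, mode='nearest')
        for i in range(2)
    ])
    return disp_b + a_at_b  # displacement of the composed map

def scaling_and_squaring(velocity, steps=7):
    """Integrate a stationary velocity field: start from
    phi^(1/2^T) = p + v/2^T, then square T times via
    phi^(1/2^(t-1)) = phi^(1/2^t) o phi^(1/2^t)."""
    disp = velocity / (2 ** steps)
    for _ in range(steps):
        disp = compose(disp, disp)
    return disp  # displacement field of phi^(1)
```

In the paper, the same recurrence runs on 3D fields inside the network, with the interpolation implemented as a differentiable spatial transform layer.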
Let x and y be 3D images, such as MRI volumes, and let z be a latent variable that parametrizes a transformation function ϕ_z : R³ → R³. We use a generative model to describe the formation of x by warping y into x. We propose a variational inference method that uses a neural network of convolutions, diffeomorphic integration, and spatial transform layers. We learn the network parameters in an unsupervised fashion, i.e., without access to ground truth registrations. We describe how the network yields fast diffeomorphic registration of a new image pair x and y, while providing uncertainty estimates.
We model the prior probability of z as:

    p(z) = N(z; 0, Σ_z),   (2)

where N(·; μ, Σ) is the multivariate normal distribution with mean μ and covariance Σ. Our work applies to a wide range of representations z. For example, z could be a low-dimensional embedding of a dense displacement field, or even the displacement field itself. In this paper, we let z be a stationary velocity field that specifies a diffeomorphism through the ODE (1). We let L = D − A be the Laplacian of a neighborhood graph defined on the voxel grid, where D is the graph degree matrix and A is a voxel neighbourhood adjacency matrix. We encourage spatial smoothness of z by letting Σ_z⁻¹ = Λ_z = λL, where Λ_z is a precision matrix and λ denotes a parameter controlling the scale of the velocity field z.

We let x be a noisy observation of the warped image y ∘ ϕ_z:

    p(x|z; y) = N(x; y ∘ ϕ_z, σ²I),   (3)

where σ² reflects the variance of additive image noise.
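The graph Laplacian L = D − A that defines the prior precision can be built directly from the grid structure. A small NumPy sketch (dense matrices for readability; a real voxel grid would require sparse storage, and the smoothness weight λ here is a placeholder value):

```python
import numpy as np

def grid_laplacian(shape):
    """Graph Laplacian L = D - A of a voxel grid with axis-aligned
    neighborhood adjacency, returned as a dense (n, n) matrix."""
    n = int(np.prod(shape))
    idx = np.arange(n).reshape(shape)
    A = np.zeros((n, n))
    for axis in range(len(shape)):
        # connect each voxel to its successor along this axis
        front = np.take(idx, np.arange(0, shape[axis] - 1), axis=axis).ravel()
        back = np.take(idx, np.arange(1, shape[axis]), axis=axis).ravel()
        A[front, back] = A[back, front] = 1
    D = np.diag(A.sum(axis=1))  # degree matrix
    return D - A

lam = 20.0                      # hypothetical smoothness weight lambda
precision = lam * grid_laplacian((3, 3))  # Lambda_z = lambda * L
```

By construction every row of L sums to zero, which is why μᵀΛ_z μ penalizes only differences between neighboring voxels and not the overall magnitude of the mean.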
We aim to estimate the posterior registration probability p(z|x; y). Using this, we can obtain the most likely registration field ϕ_z for a new image pair (x, y) via MAP estimation, along with an estimate of uncertainty for this registration.
With our assumptions, computing the posterior probability p(z|x; y) is intractable. We use a variational approach, and introduce an approximate posterior probability q_ψ(z|x; y) parametrized by ψ. We minimize the KL divergence

    min_ψ KL[ q_ψ(z|x; y) || p(z|x; y) ] = min_ψ KL[ q_ψ(z|x; y) || p(z) ] − E_q[ log p(x|z; y) ] + log p(x; y),   (4)

which is the negative of the variational lower bound of the model evidence [19]. We model the approximate posterior as a multivariate normal:

    q_ψ(z|x; y) = N(z; μ_(z|x,y), Σ_(z|x,y)),   (5)

where Σ_(z|x,y) is diagonal.
We estimate μ_(z|x,y) and Σ_(z|x,y) using a convolutional neural network parameterized by ψ, as described below. We can therefore learn the parameters ψ by optimizing the variational lower bound (4) using stochastic gradient methods. Specifically, for each image pair {x, y} and samples z_k ∼ q_ψ(z|x; y), we can compute y ∘ ϕ_(z_k), with the resulting loss:

    L(ψ; x, y) = (1/(2σ²K)) Σ_k ||x − y ∘ ϕ_(z_k)||² + (1/2) [ tr(λD Σ_(z|x,y)) − log |Σ_(z|x,y)| + μ_(z|x,y)ᵀ Λ_z μ_(z|x,y) ] + const,   (6)

where K is the number of samples used. In our experiments, we use K = 1. The first term encourages the warped image y ∘ ϕ_(z_k) to be similar to x. The second term encourages the posterior to be close to the prior p(z). Although the variational covariance Σ_(z|x,y) is diagonal, the last term spatially smoothes the mean: μᵀΛ_z μ = (λ/2) Σ_i Σ_(j∈N(i)) (μ[i] − μ[j])², where N(i) are the neighbors of voxel i. We treat σ² and λ as fixed hyper-parameters.
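The KL term of the loss has a closed form because the posterior covariance is diagonal and the prior precision is λL. A sketch of that term for a flattened velocity field (illustrative NumPy, not the network loss itself, which also needs the reconstruction term and autodiff):

```python
import numpy as np

def kl_term(mu, sigma2, L, lam):
    """0.5 * [ tr(lam * D * Sigma) - log|Sigma| + mu^T Lambda_z mu ]
    for a diagonal posterior N(mu, diag(sigma2)) and prior precision
    Lambda_z = lam * L, with L = D - A a graph Laplacian."""
    deg = np.diag(L)                       # D_ii: diagonal of L = D - A
    trace_term = lam * np.sum(deg * sigma2)
    logdet_term = np.sum(np.log(sigma2))   # log|Sigma| for diagonal Sigma
    smooth_term = lam * mu @ (L @ mu)      # penalizes neighbor differences
    return 0.5 * (trace_term - logdet_term + smooth_term)
```

A constant mean incurs no smoothness penalty (L annihilates constants), which matches the interpretation of the last term as a spatial smoother of μ.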
We design the network that takes as input x and y and outputs μ_(z|x,y) and Σ_(z|x,y), based on a 3D UNet-style architecture [27]. The network includes a convolutional layer with 16 filters, four downsampling layers with 32 convolutional filters and a stride of two, and finally three upsampling convolutional layers with 32 filters. All convolutional layers use LeakyReLU activations and a 3×3 kernel.
To enable unsupervised learning of the parameters ψ using (6), we must form y ∘ ϕ_z to compute the data term. We first implement a layer that samples a new z_k ∼ N(μ_(z|x,y), Σ_(z|x,y)) using the “re-parameterization trick” [19].
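The re-parameterization trick expresses the sample as a deterministic function of the network outputs plus independent noise, so gradients flow through μ and Σ. A minimal sketch (NumPy for illustration; in the network this runs as a differentiable layer, and the log-variance parameterization is a common convention we assume here):

```python
import numpy as np

def sample_z(mu, log_sigma2, rng):
    """Re-parameterization trick: z = mu + sigma * eps with eps ~ N(0, I).
    The randomness is isolated in eps, so z is differentiable w.r.t.
    mu and log_sigma2 in an autodiff framework."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_sigma2) * eps
```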
We propose novel scaling and squaring network layers to compute ϕ_z. Specifically, these involve compositions within the neural network architecture using a differentiable spatial transformation operation. Given two 3D vector fields a and b, for each voxel p such a layer computes (a ∘ b)(p) = a(b(p)), a non-integer voxel location in a, using linear interpolation. Starting with ϕ^(1/2^T) = p + z/2^T, we compute ϕ^(1/2^(t−1)) = ϕ^(1/2^t) ∘ ϕ^(1/2^t) recursively using these layers, leading to ϕ^(1) = ϕ_z. In our experiments, we use T = 7.

Finally, we use a spatial transform layer to warp volume y according to the computed diffeomorphic field ϕ_z. This network results in three outputs, μ_(z|x,y), Σ_(z|x,y) and y ∘ ϕ_z, which are used in the model loss (6).
In summary, the neural network takes as input x and y, computes μ_(z|x,y) and Σ_(z|x,y), samples a new z_k, computes a diffeomorphic ϕ_(z_k) and applies it to y. Since all the steps are designed to be differentiable, we learn the network parameters using stochastic gradient descent based methods on the loss (6). The framework is summarized in Figure 1.

Our implementation uses Keras [8] with a Tensorflow backend [1] and the ADAM optimizer [18]. We implement our method as part of the VoxelMorph package, with both implementations available online at https://github.com/voxelmorph/voxelmorph.

Given learned parameters, we approximate registration of a new scan pair (x, y) using ẑ = μ_(z|x,y). We first obtain ẑ using

    ẑ = μ_(z|x,y),   (7)

by evaluating the neural network on the two input images. We then compute ϕ_ẑ using the scaling and squaring network layers. We also obtain Σ_(z|x,y), enabling an estimation of the uncertainty of the velocity field z at each voxel k:

    H(z[k]) ≈ (1/2) log(2πe Σ_(z|x,y)[k, k]).   (8)
We also estimate uncertainty in the deformation field ϕ_z empirically. We sample several representations z_k ∼ q_ψ(z|x; y), propagate them through the diffeomorphic layers to compute ϕ_(z_k), and compute the empirical diagonal covariance Σ̂_(ϕ_z)[k, k] across samples. The uncertainty is then H(ϕ_z[k]) ≈ (1/2) log(2πe Σ̂_(ϕ_z)[k, k]).
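Both uncertainty estimates reduce to the entropy of a univariate Gaussian per voxel. A short sketch (illustrative helper names; the deformation samples here stand in for ϕ_(z_k) fields produced by the network):

```python
import numpy as np

def voxelwise_entropy(sigma2):
    """Gaussian entropy per voxel: H = 0.5 * log(2 * pi * e * sigma^2)."""
    return 0.5 * np.log(2 * np.pi * np.e * sigma2)

def empirical_deformation_entropy(phi_samples):
    """Uncertainty of the deformation field: estimate a diagonal variance
    across K sampled deformations (axis 0), then apply the Gaussian
    entropy voxelwise."""
    var = phi_samples.var(axis=0)
    return voxelwise_entropy(var)
```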
We focus on 3D atlas-based registration, a common task in population analysis. Specifically, we register each scan to an atlas computed using external data [13].
Data and Preprocessing. We use a large-scale, multi-site dataset of 7829 T1-weighted brain MRI scans from eight publicly available datasets: ADNI [25], OASIS [22], ABIDE [12], ADHD200 [24], MCIC [15], PPMI [23], HABS [9], and Harvard GSP [16]. Acquisition details, subject age ranges, and health conditions differ for each dataset. We performed standard pre-processing steps on all scans, including resampling to 1 mm isotropic voxels, affine spatial normalization, and brain extraction using FreeSurfer [13]. We crop the final images to 160 × 192 × 224. Segmentation maps including 29 anatomical structures, obtained using FreeSurfer for each scan, are used in evaluating registration results. We split the dataset into 7329, 250, and 250 volumes for train, validation, and test sets respectively, although we underscore that the training is unsupervised.
Method | Avg. Dice | GPU sec | CPU sec | det(J_ϕ) ≤ 0 (# voxels) | Uncertainty
--- | --- | --- | --- | --- | ---
Affine only | 0.567 (0.157) | 0 | 0 | 0 | No
ANTs (SyN) | 0.750 (0.135) | - | 9059 (2023) | 6505 (3024) | No
VoxelMorph | 0.750 (0.137) | 0.554 (0.017) | 144 (1) | 18096 (4230) | No
Ours | 0.753 (0.137) | 0.451 (0.011) | 51 (0.2) | 0.7 (2.0) | Yes
Evaluation Metric. To evaluate a registration algorithm, we register each subject to an atlas, propagate the segmentation map using the resulting warp, and measure volume overlap using the Dice metric. We also evaluate the diffeomorphic property, a focus of our model. Specifically, the Jacobian matrix J_ϕ(p) = ∇ϕ(p) ∈ R^(3×3) captures the local properties of ϕ around voxel p. The local deformation is diffeomorphic, both invertible and orientation-preserving, only at locations for which det J_ϕ(p) > 0 [3]. We count all other voxels, where det J_ϕ(p) ≤ 0.
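Counting voxels with a non-positive Jacobian determinant can be done with finite differences. A sketch under our own conventions (deformation stored as absolute coordinates of shape (3, D, H, W); boundary gradients are one-sided, so this is an approximation):

```python
import numpy as np

def count_folding_voxels(phi):
    """Count voxels where det J_phi(p) <= 0, i.e. where the deformation
    locally folds or collapses. phi holds absolute target coordinates
    (identity + displacement) with shape (3, D, H, W)."""
    # J[i, j] = d phi_i / d x_j at every voxel, via finite differences
    J = np.stack([np.stack(np.gradient(phi[i]), axis=0) for i in range(3)])
    # move the two Jacobian axes last and take the determinant voxelwise
    det = np.linalg.det(np.moveaxis(J, (0, 1), (-2, -1)))
    return int(np.sum(det <= 0))
```

For the identity map the determinant is 1 everywhere, so the count is zero; a reflection flips orientation and is flagged at every voxel.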
Baseline Methods. We compare our approach with the popular ANTs software package using Symmetric Normalization (SyN) [4], a top-performing algorithm [20]. We found that the default ANTs settings were sub-optimal for our task, so we performed a wide parameter and similarity metric search across a multitude of datasets. We identified top performing parameter values on the Dice metric and used cross-correlation as the ANTs similarity measure. We also test our recent CNN-based method, VoxelMorph, which aims to produce fast registration but does not yield diffeomorphic results or uncertainty estimates [6]. We sweep the regularization parameter using our validation set, and use the optimal parameters in our results.
Results on Test Set: Figure 2 shows representative results. Figure 3 illustrates Dice results on a range of anatomical structures, and Table 1 gives a summary of the results. Not only does our algorithm achieve state-of-the-art Dice results and the fastest runtimes, but it also produces diffeomorphic registration fields (having nearly no voxels with non-positive Jacobian determinants per scan) and yields uncertainty estimates.
Specifically, all methods achieve comparable Dice results on each structure and overall. Our method and VoxelMorph require a fraction of the ANTs runtime to register two images: less than a second on a GPU, and less than a minute on a CPU (for our method). To the best of our knowledge, ANTs does not have a GPU implementation. Algorithm runtimes were computed for an NVIDIA TitanX GPU and an Intel Xeon (E5-2680) CPU, and exclude preprocessing common to all methods. Importantly, while our method achieves positive Jacobians at nearly all voxels, the flow fields resulting from the baseline methods contain a few thousand locations with non-positive Jacobians. This can be alleviated with increased spatial regularization, but that in turn leads to a drop in performance on the Dice metric.
Uncertainty. Figure 4 shows representative uncertainty maps, unique to our model. The velocity field is more certain near anatomical structure edges, and less confident in homogeneous scan regions, such as the white matter or ventricle interior.
Parameter Analysis. We perform a grid search for the two fixed hyper-parameters λ and σ². We train a model for each parameter pair and evaluate Dice on the validation set. We search 30 values within two orders of magnitude around meaningful initial values for both parameters: for σ², the variance of the intensity difference between an affinely aligned image and the atlas; and for λ, a value equivalent to a diagonal standard deviation of 1 voxel for ϕ_z. We select the best-performing values on the validation set and fix them for all experiments.

We propose a probabilistic model for diffeomorphic image registration and derive a learning algorithm that makes use of a convolutional neural network and an intuitive resulting loss function. To achieve unsupervised, end-to-end learning for diffeomorphic registrations, we introduce novel scaling and squaring differentiable layers. Our derivation is generalizable. For example, z can be a low-dimensional embedding representation of a deformation field, or the displacement field itself. Our algorithm can infer the registration of new image pairs in under a second. Compared to traditional methods, our method is significantly faster, and compared to recent learning-based methods, our method offers diffeomorphic guarantees and provides natural uncertainty estimates for resulting registrations.

This research was funded, in part, by NIH grants R01LM012719, R01AG053949, and 1R21AG050122, and NSF grant 1707312, the Cornell NeuroNex Hub.
Dense image registration through MRFs and efficient linear programming. Medical Image Analysis, 12(6):731–741, 2008.
Quicksilver: fast predictive image registration – a deep learning approach. NeuroImage, 158:378–396, 2017.