Joint registration and synthesis using a probabilistic model for alignment of MRI and histological sections

by Juan Eugenio Iglesias et al.

Nonlinear registration of 2D histological sections with corresponding slices of MRI data is a critical step of 3D histology reconstruction. This task is difficult due to the large differences in image contrast and resolution, as well as the complex nonrigid distortions produced when sectioning the sample and mounting it on the glass slide. It has been shown in brain MRI registration that better spatial alignment across modalities can be obtained by synthesizing one modality from the other and then using intra-modality registration metrics, rather than by using mutual information (MI) as metric. However, such an approach typically requires a database of aligned images from the two modalities, which is very difficult to obtain for histology/MRI. Here, we overcome this limitation with a probabilistic method that simultaneously solves for registration and synthesis directly on the target images, without any training data. In our model, the MRI slice is assumed to be a contrast-warped, spatially deformed version of the histological section. We use approximate Bayesian inference to iteratively refine the probabilistic estimate of the synthesis and the registration, while accounting for each other's uncertainty. Moreover, manually placed landmarks can be seamlessly integrated in the framework for increased performance. Experiments on a synthetic dataset show that, compared with MI, the proposed method makes it possible to use a much more flexible deformation model in the registration to improve its accuracy, without compromising robustness. Moreover, our framework also exploits information in manually placed landmarks more efficiently than MI, since landmarks inform both synthesis and registration - as opposed to registration alone. Finally, we show qualitative results on the public Allen atlas, in which the proposed method provides a clear improvement over MI based registration.






1 Introduction

1.1 Motivation: human brain atlases

Histology is the study of tissue microanatomy. Histological analysis involves cutting a wax-embedded or frozen block of tissue into very thin sections (in the order of 10 microns), which are subsequently stained, mounted on glass slides, and examined under the microscope. Using different types of stains, different microscopic structures can be enhanced and studied. Moreover, mounted sections can be digitised at high resolution – in the order of a micron. Digital histological sections not only enable digital pathology in a clinical setting, but also open the door to an array of image analysis applications.

A promising application of digital histology is the construction of high resolution computational atlases of the human brain. Such atlases have traditionally been built using MRI scans and/or associated manual segmentations, depending on whether they describe image intensities, neuroanatomical label probabilities, or both. Examples include: the MNI atlas

(Evans et al., 1993; Collins et al., 1994), the Colin 27 atlas (Holmes et al., 1998), the ICBM atlas (Mazziotta et al., 1995, 2001), and the LONI LPBA40 atlas (Shattuck et al., 2008).

Computational atlas building using MRI is limited by the resolution and contrast that can be achieved with this imaging technique. The resolution barrier can be partly overcome with ex vivo MRI, in which motion – and hence time constraints – are eliminated, enabling longer acquisitions at ultra-high resolution, which in turn enables manual segmentation at a higher level of detail (Augustinack et al., 2005; Yushkevich et al., 2009; Iglesias et al., 2015; Saygin et al., 2017). However, not even the highest resolution achievable with ex vivo MRI is sufficient to study microanatomy. Moreover, and despite recent advances in pulse sequences, MRI does not generate visible contrast at the boundaries of many neighbouring brain structures, the way that histological staining does.

For these reasons, recent studies building computational brain atlases are using stacks of digitised histological sections, which enable more accurate manual segmentations, to build atlases at a superior level of detail. Examples include the work by Chakravarty et al. (2006) on the thalamus and basal ganglia; by Krauth et al. (2010) on the thalamus; by Adler et al. (2014, 2016) on the hippocampus; our recent work on the thalamus (Iglesias et al., 2017); and the recently published atlas from the Allen Institute (Ding et al., 2016).

1.2 Related work on 3D histology reconstruction

The main drawback of building atlases with histology is the fact that the 3D structure of the tissue is lost in the processing. Sectioning and mounting introduce large nonlinear distortions in the tissue structure, including artefacts such as folds and tears. In order to recover the 3D shape, image registration algorithms can be used to estimate the spatial correspondences between the different sections. This problem is commonly known as “histology reconstruction”.

The simplest approach to histology reconstruction is to sequentially align sections in the stack to their neighbours using a linear registration method. There is a wide literature on the topic, not only for histological sections but also for autoradiographs. Most of these methods use robust registration algorithms, e.g., based on edges (Hibbard and Hawkins, 1988; Rangarajan et al., 1997), block matching (Ourselin et al., 2001) or point disparity (Zhao et al., 1993). There are also nonlinear versions of serial registration methods (e.g., Arganda-Carreras et al. 2010; Pitiot et al. 2006; Chakravarty et al. 2006; Schmitt et al. 2007), some of which introduce smoothness constraints to minimise the impact of sections that are heavily affected by artefacts and/or are poorly registered (Ju et al., 2006; Yushkevich et al., 2006; Cifor et al., 2011).

The problem with serial alignment of sections is that, without any information on the original shape, methods are prone to straightening curved structures, a problem known as “z-shift” or “banana effect” (since the reconstruction of a sliced banana would be a cylinder). One way of overcoming this problem is the use of fiducial markers such as needles or rods (e.g., Humm et al. 2003); however, this approach has two disadvantages: the tissue may be damaged by the needles, and additional bias can be introduced in the registration if the sectioning plane is not perpendicular to the needles.

Another way of combating the banana effect is to use an external reference volume without geometric distortion. In an early study, Kim et al. (1997) used video frames to construct such reference, in the context of autoradiograph alignment. More recent works have used MRI scans (e.g., Malandain et al. 2004; Dauguet et al. 2007; Yang et al. 2012; Ebner et al. 2017). The general idea is to iteratively update: 1. a rigid transform bringing the MRI to the space of the histological stack; and 2. a nonlinear transform per histological section, which registers it to the space of the corresponding (resampled) MRI plane. A potential advantage of using MRI as a reference frame for histology reconstruction is that one recovers in MRI space the manual delineations made on the histological sections, which can be desirable when building atlases (Adler et al., 2016).

Increased stability in histology reconstruction can be obtained by using a third, intermediate modality to assist the process. Such modality is typically a stack of blockface photographs, which are taken prior to sectioning and are thus spatially undistorted. Such photographs help bridge the spaces of the MRI (neither modality is distorted) and the histology (plane correspondences are known). An example of this approach is the BigBrain project (Amunts et al., 2013).

Assuming that a good estimate of the rigid alignment between the MRI and the histological stack is available, the main technical challenge of 3D histology reconstruction is the nonlinear 2D registration of a histological section with the corresponding (resampled) MRI plane. These images exhibit very different contrast properties, in addition to modality-specific artefacts, e.g., tears in histology, bias field in MRI. Therefore, generic information theory based registration metrics such as mutual information (Maes et al., 1997; Wells et al., 1996; Pluim et al., 2003) yield unsatisfactory results. This is partly due to the fact that such approaches only capture statistical relationships between image intensities at the voxel level, disregarding geometric information.

1.3 Related work on image synthesis for registration

An alternative to mutual information for inter-modality registration is to use image synthesis. The premise is simple: if we need to register a floating image of one modality to a reference image of another modality, and we have access to a dataset of spatially aligned pairs of images of the two modalities, then we can: estimate a synthetic version of the floating image that resembles the reference modality; register this synthetic image to the reference with an intra-modality registration algorithm; and apply the resulting deformation field to the original floating image. In the context of brain MRI, we have shown in Iglesias et al. (2013) that such an approach, even with a simple synthesis model (Hertzmann et al., 2001), clearly outperforms registration based on mutual information. This result has been replicated in other studies (e.g., Roy et al. 2014), and similar conclusions have been reached in the context of MRI segmentation (Roy et al., 2013) and classification (van Tulder and de Bruijne, 2015).

Medical image synthesis has gained popularity in the last few years due to the advent of hybrid PET-MR scanners, since synthesising a realistic CT scan from the corresponding MR enables accurate attenuation correction of the PET data (Burgos et al., 2014; Huynh et al., 2016). Another popular application of CT synthesis from MRI is dose calculation in radiation therapy (Kim et al., 2015; Siversson et al., 2015)

. Unfortunately, most of these synthesis algorithms are based on supervised machine learning techniques, which require aligned pairs of images from the two modalities – which are very hard to obtain for histology and MRI.

A possible alternative to supervised synthesis is a weakly supervised paradigm, best represented by the recent deep learning method CycleGAN

(Zhu et al., 2017)

. This algorithm uses two sets of (unpaired) images of the two modalities, to learn two mapping functions, from each modality to the other. CycleGAN enforces cycle consistency of the two mappings (i.e., that they approximately invert each other), while training two classifiers that discriminate between synthetic and real images of each modality in order to avoid overfitting. While this technique has been shown to produce realistic medical images

(Chartsias et al., 2017; Wolterink et al., 2017), it has an important limitation in the context of histology-MRI registration: it is unable to exploit the pairing between the (nonlinearly misaligned) histology and MRI images. Another disadvantage of CycleGAN is that, since a database of cases is necessary to train the model, it cannot be applied to a single image pair, i.e., it cannot be used as a generic inter-modality registration tool.

1.4 Contribution

In this study, we propose a novel probabilistic model that simultaneously solves for registration and synthesis directly on the target images, i.e., without any training data. The principle behind the method is that improved registration provides less noisy data for the synthesis, while more accurate synthesis leads to better registration. Our framework enables these two components to iteratively exploit the improvements in the estimates of the other, while considering the uncertainty in each other’s parameters. Taking uncertainty into account is crucial: if one simply tries to iteratively optimise synthesis and registration while keeping the other fixed to a point estimate, both components are greatly affected by the noise introduced by the other. More specifically, misregistration leads to bad synthesis due to noisy training data, whereas accurate registration to a poorly synthesised image yields incorrect alignment.

If multiple image pairs are available, the framework exploits the complete database, by jointly considering the probabilistic registrations between the pairs. In addition, the synthesis algorithm effectively takes advantage of the spatial structure in the data, as opposed to mutual information based registration. Moreover, the probabilistic nature of the model also enables the seamless integration of manually placed landmarks, which inform both the registration (directly) and the synthesis (indirectly, by creating areas of high certainty in the registration). We present a variational expectation maximisation algorithm (VEM, also known as variational Bayes) to solve the model with Bayesian inference, and illustrate the proposed approach through experiments on synthetic and real data.

The rest of this paper is organised as follows. In Section 2, we describe the probabilistic model on which our algorithm relies (Section 2.1), as well as an inference algorithm to compute the most likely solution within the proposed framework (Section 2.2). In Section 3, we describe the MRI and histological data (Section 3.1) that we used in our experiments (Section 3.2), as well as the results on real data and the Allen atlas (Section 3.3). Finally, Section 4 concludes the paper.

2 Methods

2.1 Probabilistic framework

The graphical model of our probabilistic framework and corresponding mathematical symbols are shown in Figure 1. For the sake of simplicity, we describe the framework from the perspective of the MRI to histology registration problem, though the method is general and can be applied to other inter-modality registration tasks – in any number of dimensions.

$\boldsymbol{x}$: spatial coordinates
$\Omega_n$: image domain of image pair $n$
$N$: number of image pairs
$M_n$: intensities of MRI image $n$
$R_n$: intensities of registered MRI image $n$
$H_n$: intensities of histological section $n$
$\boldsymbol{T}_n$: deformation field for image pair $n$
$P_n$: number of available landmarks for image pair $n$
$\boldsymbol{l}_{np}$: spatial coordinates of landmark $p$ on $H_n$
$\boldsymbol{m}_{np}$: spatial coordinates of landmark $p$ on $M_n$
$\boldsymbol{\theta}$: parameters of image intensity transform (contrast synthesis)
$\boldsymbol{\eta}$: hyperparameters of image intensity transform (contrast synthesis)
$\boldsymbol{\gamma}$: hyperparameters of deformation field
$\sigma^2_\ell$: variance of manual landmark placement
Figure 1:

(a) Graphical model of the proposed probabilistic framework. Circles represent random variables or parameters, arrows indicate dependencies between the variables, dots represent known (hyper)parameters, shaded variables are observed, and plates indicate replication. (b) Mathematical symbols corresponding to the model.

Let $M_n$ and $H_n$ represent the $N$ MRI slices and corresponding histological sections, respectively, with $n = 1, \ldots, N$. We assume that each pair of images has been coarsely aligned with a 2D linear registration algorithm (e.g., using mutual information), and the two images are hence defined over the same image domain $\Omega_n$. $M_n$ and $H_n$ are functions of the spatial coordinates $\boldsymbol{x}$, i.e., $M_n(\boldsymbol{x})$ and $H_n(\boldsymbol{x})$. In addition, let $\{\boldsymbol{m}_{np}\}$ and $\{\boldsymbol{l}_{np}\}$ represent two sets of $P_n$ corresponding landmarks, manually placed on the MRI image and histological section, respectively, where $\boldsymbol{m}_{np}$ and $\boldsymbol{l}_{np}$ are 2D vectors with the spatial coordinates of the $p^{\mathrm{th}}$ landmark on the $n^{\mathrm{th}}$ image pair; for reasons that will be apparent in Section 2.2 below, we will assume that every $\boldsymbol{l}_{np}$ coincides with an integer pixel coordinate. Finally, $R_n$ represents the MRI image after applying a nonlinear deformation field $\boldsymbol{T}_n$, which deterministically warps it to the space of the histological section $H_n$, i.e.,

$$R_n(\boldsymbol{x}) = M_n\big(\boldsymbol{x} + \boldsymbol{T}_n(\boldsymbol{x})\big), \qquad (1)$$

which in general requires interpolation of $M_n$.
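The warp in Equation 1 is a straightforward pull-back of the MRI slice through the deformation field. As a concrete illustration (our own sketch, not the authors' code; the function and variable names are ours), it can be implemented with SciPy's `map_coordinates`:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(M, T):
    """R(x) = M(x + T(x)) with linear interpolation (Equation 1).

    M : (H, W) MRI slice.
    T : (2, H, W) deformation field (row/column shifts, in pixels).
    """
    H, W = M.shape
    rows, cols = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    coords = np.stack([rows + T[0], cols + T[1]])  # sampling locations x + T(x)
    # order=1: bilinear interpolation; mode="nearest": clamp look-ups at the border
    return map_coordinates(M, coords, order=1, mode="nearest")

# An identity field leaves the image unchanged:
M = np.random.rand(8, 8)
R = warp_image(M, np.zeros((2, 8, 8)))
```

With a constant field of one pixel in each direction, each output pixel simply reads the value one pixel down and to the right (clamped at the border).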


Each deformation field is assumed to be an independent sample of a Markov Random Field (MRF) prior, with unary potentials penalising large shifts (their squared module), and binary potentials penalising the squared gradient magnitude:

$$p(\boldsymbol{T}_n \,|\, \boldsymbol{\gamma}) = \frac{1}{Z(\boldsymbol{\gamma})} \exp\Bigg[ -\beta \sum_{\boldsymbol{x} \in \Omega_n} \|\boldsymbol{T}_n(\boldsymbol{x})\|^2 \;-\; \frac{\alpha}{2} \sum_{\boldsymbol{x} \in \Omega_n} \sum_{\boldsymbol{x}' \in \mathcal{N}(\boldsymbol{x})} \|\boldsymbol{T}_n(\boldsymbol{x}) - \boldsymbol{T}_n(\boldsymbol{x}')\|^2 \Bigg], \qquad (2)$$

where $\alpha$ and $\beta$ are the parameters of the MRF (which we group in $\boldsymbol{\gamma} = (\alpha, \beta)$); $Z(\boldsymbol{\gamma})$ is the partition function; and $\mathcal{N}(\boldsymbol{x})$ is the neighbourhood of the pixel located at $\boldsymbol{x}$. We note that this prior encodes a regularisation similar to that of the popular demons registration algorithm (Vercauteren et al., 2007; Cachier et al., 2003). Moreover, we also discretise the deformation field to a set of $S$ values (shifts) $\{\boldsymbol{\delta}_1, \ldots, \boldsymbol{\delta}_S\}$, i.e., $\boldsymbol{T}_n(\boldsymbol{x}) \in \{\boldsymbol{\delta}_s\}_{s=1}^{S}$; we note that these shifts do not need to be integer (in pixels). While this choice of deformation model and regulariser does not guarantee the registration to be diffeomorphic (which might be desirable), it enables marginalisation over the deformation fields – and, as we will discuss in Section 2.2 below, a more sophisticated deformation model can be used to refine the final registration.
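As a sketch of this prior (our own illustrative code, assuming a 4-neighbourhood and counting each neighbour pair once with weight $\alpha$, which is equivalent to the double sum with weight $\alpha/2$), the negative log of the MRF prior, up to the constant $\log Z(\boldsymbol{\gamma})$, can be computed as:

```python
import numpy as np

def mrf_energy(T, alpha, beta):
    """Negative log of the MRF prior (up to the constant log Z).

    T : (2, H, W) deformation field; alpha, beta : the MRF parameters in gamma.
    """
    unary = beta * np.sum(T ** 2)                       # penalise large shifts
    pair_r = np.sum((T[:, 1:, :] - T[:, :-1, :]) ** 2)  # vertical neighbour pairs
    pair_c = np.sum((T[:, :, 1:] - T[:, :, :-1]) ** 2)  # horizontal neighbour pairs
    return unary + alpha * (pair_r + pair_c)            # each edge counted once

# A zero field has zero energy; a constant field only pays the unary cost.
e0 = mrf_energy(np.zeros((2, 4, 4)), alpha=1.0, beta=0.5)
e1 = mrf_energy(np.ones((2, 4, 4)), alpha=1.0, beta=0.5)
```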

Application of $\boldsymbol{T}_n$ to $M_n$ and $\{\boldsymbol{m}_{np}\}$ yields not only a registered MRI image $R_n$ (Equation 1), but also a set of warped landmarks. When modelling the landmarks, we need to account for the error made by the user when manually placing corresponding key-points in the MR images and the histological sections. We assume that these errors are independent and follow zero-mean, isotropic Gaussian distributions parametrised by their covariances $\sigma^2_\ell I_2$ (where $I_2$ is the 2$\times$2 identity matrix, and where $\sigma^2_\ell$ is expected to be quite small):

$$p(\boldsymbol{m}_{np} \,|\, \boldsymbol{T}_n, \boldsymbol{l}_{np}) = \mathcal{N}\big(\boldsymbol{m}_{np};\ \boldsymbol{l}_{np} + \boldsymbol{T}_n(\boldsymbol{l}_{np}),\ \sigma^2_\ell I_2\big). \qquad (3)$$
Finally, to model the connection between the intensities of the histological sections $\{H_n\}$ and the registered MRI images $\{R_n\}$, we follow Tu et al. (2008) and make the assumption that the generative likelihood can be replaced by the output of a discriminative regressor with parameters $\boldsymbol{\theta}$:

$$p(R_n \,|\, H_n, \boldsymbol{\theta}) = \prod_{\boldsymbol{x} \in \Omega_n} p\big(R_n(\boldsymbol{x}) \,\big|\, H_n, \boldsymbol{\theta}\big). \qquad (4)$$
This assumption is equivalent to adopting a discriminative approach to model the contrast synthesis. While this discriminative component breaks the generative nature of the framework, it also enables the modelling of much more complex relationships between the intensities of the two modalities, including spatial and geometric information about the pixels. Such spatial patterns cannot be captured by, e.g., mutual information, which only models statistical relationships between intensities (e.g., a random shuffling of pixels does not affect the metric). Here we use a regression forest (Breiman, 2001), though any other discriminative regressor could have been utilised. We assume conditional independence of the pixels in the prediction: the forest produces a Gaussian distribution for each pixel separately, parametrised by its mean $\mu$ and variance $\sigma^2$. Moreover, we place a (conjugate) Inverse Gamma prior on the variances $\sigma^2$, with hyperparameters $a$ and $b$:

$$p(\sigma^2) = \mathrm{IG}(\sigma^2; a, b) \propto (\sigma^2)^{-a-1} \exp(-b / \sigma^2).$$

We use $\boldsymbol{\theta}$ to represent the set of forest parameters, which groups the selected features, split values, tree structure and the prediction at each leaf node. The set of corresponding hyperparameters is grouped in $\boldsymbol{\eta}$, which includes the parameters of the Inverse Gamma prior ($a$, $b$), the number of trees, the maximum depth, and the minimum number of samples in leaf nodes. The intensity model is hence:

$$p\big(R_n(\boldsymbol{x}) \,\big|\, H_n, \boldsymbol{\theta}\big) = \mathcal{N}\big(R_n(\boldsymbol{x});\ \mu[H_n(\mathcal{W}(\boldsymbol{x})); \boldsymbol{\theta}],\ \sigma^2[H_n(\mathcal{W}(\boldsymbol{x})); \boldsymbol{\theta}]\big), \qquad (5)$$

where $\mathcal{W}(\boldsymbol{x})$ is a spatial window centred at $\boldsymbol{x}$, and $\mathcal{N}$ represents the Gaussian distribution. Given the deterministic deformation model (Equation 1), and the assumption in Equation 4, we finally obtain the likelihood term:

$$p\big(H_n, M_n \,\big|\, \boldsymbol{T}_n, \boldsymbol{\theta}\big) \propto \prod_{\boldsymbol{x} \in \Omega_n} \mathcal{N}\big(M_n(\boldsymbol{x} + \boldsymbol{T}_n(\boldsymbol{x}));\ \mu[H_n(\mathcal{W}(\boldsymbol{x})); \boldsymbol{\theta}],\ \sigma^2[H_n(\mathcal{W}(\boldsymbol{x})); \boldsymbol{\theta}]\big). \qquad (6)$$
We emphasise that, despite breaking the generative nature of the model, the assumption in Equation 4 still leads to a valid objective function when performing Bayesian inference. This cost function can be optimised with standard inference techniques, as explained in Section 2.2 below.
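To make the discriminative synthesis component concrete, the following sketch trains an off-the-shelf random forest (scikit-learn's `RandomForestRegressor`, used here as a stand-in for the paper's forest) to regress MRI intensity from 3×3 histology patches on a toy contrast-reversed image pair. `extract_patches` and all variable names are illustrative; the `sample_weight` argument is where the VEM weights from the M-step (Section 2.2.1) would enter:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def extract_patches(H_img, half=1):
    """Flattened (2*half+1)^2 patches of the histology image, one per interior pixel."""
    h, w = H_img.shape
    X = [H_img[r - half:r + half + 1, c - half:c + half + 1].ravel()
         for r in range(half, h - half) for c in range(half, w - half)]
    return np.array(X)

rng = np.random.default_rng(0)
H_img = rng.random((12, 12))          # toy "histology" slice
M_img = 1.0 - H_img                   # toy contrast-reversed "MRI" slice
X = extract_patches(H_img)            # patch features, one row per centre pixel
y = M_img[1:-1, 1:-1].ravel()         # target MRI intensity at the patch centre
w = np.ones_like(y)                   # per-sample weights (uniform in this toy)
forest = RandomForestRegressor(n_estimators=10, random_state=0)
forest.fit(X, y, sample_weight=w)
pred = forest.predict(X)
```

The forest easily recovers the (here trivial) intensity mapping; on real data the patch context lets it exploit spatial structure that a pure intensity-to-intensity mapping cannot.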

2.2 Inference

The final goal is to estimate the registrations $\{\boldsymbol{T}_n\}$. To do so, we use Bayesian inference to “invert” the probabilistic model described in Section 2.1 above. If we group all the observed variables into the set $\mathcal{O} = \{M_n, H_n, \{\boldsymbol{l}_{np}\}, \{\boldsymbol{m}_{np}\}\}_{n=1}^{N}$, the problem is to maximise the posterior probability of the registrations given the available information, i.e., $p(\{\boldsymbol{T}_n\} \,|\, \mathcal{O})$. Computing this probability distribution requires marginalising over the intensity model, which leads to an intractable integral:

$$p(\{\boldsymbol{T}_n\} \,|\, \mathcal{O}) = \int p(\{\boldsymbol{T}_n\} \,|\, \boldsymbol{\theta}, \mathcal{O})\, p(\boldsymbol{\theta} \,|\, \mathcal{O})\, d\boldsymbol{\theta}. \qquad (7)$$

To solve this problem, we make the standard approximation that the posterior distribution of the parameters given the observed data is strongly peaked around its mode $\hat{\boldsymbol{\theta}}$, i.e., we use its point estimate. The intractable integral in Equation 7 then becomes:

$$p(\{\boldsymbol{T}_n\} \,|\, \mathcal{O}) \approx p(\{\boldsymbol{T}_n\} \,|\, \hat{\boldsymbol{\theta}}, \mathcal{O}), \qquad (8)$$

where the point estimate is given by:

$$\hat{\boldsymbol{\theta}} = \operatorname*{argmax}_{\boldsymbol{\theta}}\ p(\boldsymbol{\theta} \,|\, \mathcal{O}). \qquad (9)$$
In this section, we first describe a VEM algorithm to obtain the point estimate $\hat{\boldsymbol{\theta}}$ using Equation 9 (Section 2.2.1), and then address the computation of the final registrations with Equation 8 (Section 2.2.2).

2.2.1 Computation of point estimate

Applying Bayes’s rule to Equation 9, and taking the logarithm, we obtain the following objective function:

$$\hat{\boldsymbol{\theta}} = \operatorname*{argmax}_{\boldsymbol{\theta}}\ \log p(\mathcal{O} \,|\, \boldsymbol{\theta}) + \log p(\boldsymbol{\theta}). \qquad (10)$$

Exact maximisation of Equation 10 would require marginalising over the deformation fields $\{\boldsymbol{T}_n\}$, which leads (once again) to an intractable integral due to the pairwise terms of the MRF prior (Equation 2). Instead, we use a variational technique (VEM) for approximate inference. Since the Kullback-Leibler (KL) divergence is by definition non-negative, the objective function in Equation 10 is bounded from below by:

$$\mathcal{B}(q, \boldsymbol{\theta}) = \log p(\mathcal{O} \,|\, \boldsymbol{\theta}) + \log p(\boldsymbol{\theta}) - \sum_{n=1}^{N} \mathrm{KL}\big[\, q_n(\boldsymbol{T}_n)\, \big\|\, p(\boldsymbol{T}_n \,|\, \mathcal{O}, \boldsymbol{\theta})\, \big] \qquad (11)$$

$$\phantom{\mathcal{B}(q, \boldsymbol{\theta})} = \log p(\boldsymbol{\theta}) + \sum_{n=1}^{N} \Big( H[q_n] + E_{q_n}\big[\log p(\boldsymbol{T}_n, \mathcal{O} \,|\, \boldsymbol{\theta})\big] \Big). \qquad (12)$$

The bound $\mathcal{B}$ is the negative of the so-called free energy: $H[\cdot]$ represents the entropy of a random variable; and $q_n$ is a distribution over $\boldsymbol{T}_n$ which approximates the posterior $p(\boldsymbol{T}_n \,|\, \mathcal{O}, \boldsymbol{\theta})$, while being restricted to have a simpler form. The standard mean field approximation (Parisi, 1988) assumes that $q_n$ factorises over voxels for each field $\boldsymbol{T}_n$:

$$q_n(\boldsymbol{T}_n) = \prod_{\boldsymbol{x} \in \Omega_n} q_{n\boldsymbol{x}}\big(\boldsymbol{T}_n(\boldsymbol{x})\big),$$

where $q_{n\boldsymbol{x}}$ is a discrete distribution over the shifts $\{\boldsymbol{\delta}_1, \ldots, \boldsymbol{\delta}_S\}$ at pixel $\boldsymbol{x}$ of image pair $n$, such that $q_{n\boldsymbol{x}}(\boldsymbol{\delta}_s) = q_{n\boldsymbol{x}s}$, $q_{n\boldsymbol{x}s} \geq 0$, $\sum_{s=1}^{S} q_{n\boldsymbol{x}s} = 1$.

Rather than the original objective function (Equation 10), VEM maximises the lower bound $\mathcal{B}(q, \boldsymbol{\theta})$, by alternately optimising it with respect to $q$ (E-step) and $\boldsymbol{\theta}$ (M-step) in a coordinate ascent scheme. We summarise these two steps below.


To optimise the lower bound with respect to $q$, it is convenient to work with Equation 11. Since the first two terms are independent of $q$, one can equivalently minimise the KL divergence between $q_n$ and the posterior distribution of $\boldsymbol{T}_n$:

$$\hat{q}_n = \operatorname*{argmin}_{q_n}\ \mathrm{KL}\big[\, q_n(\boldsymbol{T}_n)\, \big\|\, p(\boldsymbol{T}_n \,|\, \mathcal{O}, \boldsymbol{\theta})\, \big] = \operatorname*{argmax}_{q_n}\ H[q_n] + E_{q_n}\big[\log p(\boldsymbol{T}_n, \mathcal{O} \,|\, \boldsymbol{\theta})\big],$$

where $E_{q_n}[\cdot]$ is the expected value. Building the Lagrangian (to ensure that $q_{n\boldsymbol{x}}$ stays in the probability simplex) and setting derivatives to zero, we obtain:

$$q_{n\boldsymbol{x}s} \propto \exp\Bigg[ \log \mathcal{N}\big(M_n(\boldsymbol{x} + \boldsymbol{\delta}_s);\ \mu_n(\boldsymbol{x}), \sigma^2_n(\boldsymbol{x})\big) - \beta \|\boldsymbol{\delta}_s\|^2 - \alpha \sum_{\boldsymbol{x}' \in \mathcal{N}(\boldsymbol{x})} \sum_{s'=1}^{S} q_{n\boldsymbol{x}'s'} \|\boldsymbol{\delta}_s - \boldsymbol{\delta}_{s'}\|^2 - \sum_{p:\, \boldsymbol{l}_{np} = \boldsymbol{x}} \frac{\|\boldsymbol{m}_{np} - \boldsymbol{x} - \boldsymbol{\delta}_s\|^2}{2 \sigma^2_\ell} \Bigg], \qquad (13)$$

where $\mu_n(\boldsymbol{x})$ and $\sigma^2_n(\boldsymbol{x})$ are the mean and variance predicted by the regression forest at $\boldsymbol{x}$. This equation has no closed-form solution, but can be solved with fixed point iterations, one image pair at a time – since there is no interdependence across image pairs. We note that the effect of the landmarks is not local: in addition to creating a very sharp $q_{n\boldsymbol{x}}$ at the landmark pixel itself, the variational algorithm also creates a high confidence region around it, by encouraging neighbouring pixels to have similar shifts. This is illustrated in Figure 2(a,d). The spatial location marked by red dot number 1 is right below a manually placed landmark in the histological section, and the distribution $q_{n\boldsymbol{x}}$ is hence strongly peaked at a location right below the corresponding landmark in the MRI slice. Red dot number 2, on the contrary, is located in the middle of the cerebral white matter, where there is little contrast to guide the registration, so $q_{n\boldsymbol{x}}$ is much more spread and isotropic. Red dot number 3 lies in the white matter right under the cortex, so its distribution is elongated and parallel to the white matter surface.
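A minimal NumPy sketch of one fixed-point sweep of this mean-field update may help (our own illustration, not the authors' code: a tiny discrete shift set, a 4-neighbourhood, Jacobi-style updates from the previous $q$, border look-ups clipped to the image, and no landmark term):

```python
import numpy as np

def update_q(q, M, mu, var, shifts, alpha, beta):
    """One fixed-point sweep of the mean-field update for one image pair.

    q      : (H, W, S) approximate posterior over the S candidate shifts.
    M      : (H, W) MRI slice (look-ups clipped at the border).
    mu,var : (H, W) forest prediction (mean / variance) in histology space.
    shifts : (S, 2) integer candidate shifts (rows of [dr, dc]).
    """
    H, W, S = q.shape
    new_q = np.empty_like(q)
    for r in range(H):
        for c in range(W):
            logp = np.empty(S)
            for s, (dr, dc) in enumerate(shifts):
                rr = np.clip(r + dr, 0, H - 1)
                cc = np.clip(c + dc, 0, W - 1)
                # data term: Gaussian log-likelihood of the shifted MRI value
                data = -0.5 * (M[rr, cc] - mu[r, c]) ** 2 / var[r, c]
                unary = -beta * (dr ** 2 + dc ** 2)
                # pairwise term: expected disagreement with the neighbours' q
                pair = 0.0
                d2 = np.sum((shifts[s] - shifts) ** 2, axis=1)
                for r2, c2 in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if 0 <= r2 < H and 0 <= c2 < W:
                        pair -= alpha * np.dot(q[r2, c2], d2)
                logp[s] = data + unary + pair
            p = np.exp(logp - logp.max())   # normalise in a stable way
            new_q[r, c] = p / p.sum()
    return new_q

# If the MRI already matches the prediction, the zero shift wins everywhere.
rng = np.random.default_rng(1)
M = rng.random((5, 5))
shifts = np.array([[0, 0], [0, 1], [1, 0]])
q0 = np.full((5, 5, 3), 1.0 / 3.0)
q1 = update_q(q0, M, M.copy(), np.ones((5, 5)), shifts, alpha=1.0, beta=1.0)
```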

Figure 2: Illustration of the VEM algorithm: (a) Histological section from the Allen atlas. The green dots represent manually placed landmarks. (b) Mean of synthesised MRI slice, after 5 iterations of the VEM algorithm. (c) Variance of synthesised MRI slice, overlaid on the mean. (d) Corresponding real MRI slice. The green dots represent the manually placed landmarks, corresponding to the ones in (a). The heat maps represent the approximate posterior distributions of shifts () corresponding to the red dots in (a).

When optimising with respect to $\boldsymbol{\theta}$, it is more convenient to work with Equation 12 – since the entropy term $H[q_n]$ can be neglected. Applying the chain rule of probability, and leaving aside terms independent of $\boldsymbol{\theta}$, we obtain:

$$\hat{\boldsymbol{\theta}} = \operatorname*{argmax}_{\boldsymbol{\theta}}\ \log p(\boldsymbol{\theta}) + \sum_{n=1}^{N} \sum_{\boldsymbol{x} \in \Omega_n} \sum_{s=1}^{S} q_{n\boldsymbol{x}s}\, \log p\big(M_n(\boldsymbol{x} + \boldsymbol{\delta}_s) \,\big|\, H_n(\mathcal{W}(\boldsymbol{x})), \boldsymbol{\theta}\big). \qquad (14)$$

Maximisation of Equation 14 amounts to training the regressor such that each input image patch is considered $S$ times, each time with an output intensity corresponding to a differently shifted pixel location $\boldsymbol{x} + \boldsymbol{\delta}_s$, and with weight $q_{n\boldsymbol{x}s}$. In practice, and since injection of randomness is a crucial aspect of the training process of random forests, we found it beneficial to consider each patch only once in each tree, with a shift sampled from the corresponding distribution $q_{n\boldsymbol{x}}$ – fed to the tree with unit weight.

The injection of additional randomness through sampling of $q_{n\boldsymbol{x}}$ not only greatly increases the robustness of the regressor against misregistration, but also decreases the computational cost of training – since only a single shift is considered per pixel. We also note that this sampling strategy still yields a valid stochastic optimiser for Equation 14, since $q_{n\boldsymbol{x}}$ is a discrete probability distribution over shifts. Such a stochastic procedure (as well as other sources of randomness in the forest training algorithm) makes the maximisation of Equation 14 only approximate; this means that the coordinate ascent algorithm to maximise the lower bound of the objective function is no longer guaranteed to converge. In practice, however, the VEM algorithm typically converges after 5 iterations.

Combined with the conjugate prior on the variance $\sigma^2$, the joint prediction of the forest is finally given by:

$$\mu_n(\boldsymbol{x}) = \frac{1}{T} \sum_{t=1}^{T} r_t, \qquad \sigma^2_n(\boldsymbol{x}) = \frac{b + \frac{1}{2} \sum_{t=1}^{T} \big(r_t - \mu_n(\boldsymbol{x})\big)^2}{a + \frac{T}{2} + 1}, \qquad (15)$$

where $r_t$ is the guess made by tree $t$; $T$ is the total number of trees in the forest; and where we have dropped the dependency of $r_t$, $\mu_n$ and $\sigma^2_n$ on $H_n(\mathcal{W}(\boldsymbol{x}))$ and $\boldsymbol{\theta}$ for simplicity. A sample output of the forest is shown in Figure 2(b,c). Areas of higher uncertainty, which will be downweighted in the registration, include the horizontal crack on the histological image and cerebrospinal fluid regions; the latter may appear bright or dark, depending on whether they are filled with paraformaldehyde, air or Fomblin (further details on these data can be found in Section 3.1).
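As an illustration of how the conjugate prior keeps the predicted variance from collapsing when all trees agree, per-tree outputs can be combined via the mode of the Inverse Gamma posterior (a sketch in our notation, with illustrative names; the paper's exact expression may differ):

```python
import numpy as np

def forest_prediction(tree_preds, a, b):
    """Combine per-tree regression outputs into a Gaussian prediction.

    tree_preds : (T, ...) array, one output per tree.
    a, b       : hyperparameters of the Inverse Gamma prior on the variance.
    Returns (mean, posterior-mode variance); the variance stays > 0 even
    when every tree returns the same value.
    """
    T = tree_preds.shape[0]
    mu = tree_preds.mean(axis=0)
    var = (b + 0.5 * np.sum((tree_preds - mu) ** 2, axis=0)) / (a + T / 2 + 1)
    return mu, var

# Ten trees in perfect agreement: variance falls back to b / (a + T/2 + 1).
mu, var = forest_prediction(np.ones((10, 4)), a=1.0, b=2.0)
```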

2.2.2 Computation of optimal deformation fields

Once the point estimate $\hat{\boldsymbol{\theta}}$ (i.e., the optimal regression forest for synthesis) has been computed, one can obtain the optimal registrations by maximising $p(\{\boldsymbol{T}_n\} \,|\, \hat{\boldsymbol{\theta}}, \mathcal{O})$. Using Bayes’s rule and taking the logarithm, we obtain:

$$\{\hat{\boldsymbol{T}}_n\} = \operatorname*{argmax}_{\{\boldsymbol{T}_n\}} \sum_{n=1}^{N} \Big[ \log p(\boldsymbol{T}_n \,|\, \boldsymbol{\gamma}) + \log p\big(H_n, M_n \,\big|\, \boldsymbol{T}_n, \hat{\boldsymbol{\theta}}\big) + \sum_{p=1}^{P_n} \log p\big(\boldsymbol{m}_{np} \,\big|\, \boldsymbol{T}_n, \boldsymbol{l}_{np}\big) \Big], \qquad (16)$$

which can be solved one image pair at a time. Substituting the Gaussian likelihoods into Equation 16, switching signs and disregarding terms independent of $\boldsymbol{T}_n$, the following cost function for the registration is obtained for each image pair:

$$C(\boldsymbol{T}_n) = \sum_{\boldsymbol{x} \in \Omega_n} \frac{\big[M_n(\boldsymbol{x} + \boldsymbol{T}_n(\boldsymbol{x})) - \mu_n(\boldsymbol{x})\big]^2}{2 \sigma^2_n(\boldsymbol{x})} + \sum_{p=1}^{P_n} \frac{\big\|\boldsymbol{m}_{np} - \boldsymbol{l}_{np} - \boldsymbol{T}_n(\boldsymbol{l}_{np})\big\|^2}{2 \sigma^2_\ell} + \beta \sum_{\boldsymbol{x} \in \Omega_n} \|\boldsymbol{T}_n(\boldsymbol{x})\|^2 + \frac{\alpha}{2} \sum_{\boldsymbol{x} \in \Omega_n} \sum_{\boldsymbol{x}' \in \mathcal{N}(\boldsymbol{x})} \|\boldsymbol{T}_n(\boldsymbol{x}) - \boldsymbol{T}_n(\boldsymbol{x}')\|^2. \qquad (17)$$

Thanks to the discrete nature of $\boldsymbol{T}_n$, a local minimum of the cost function in Equation 17 can be efficiently found with algorithms based on graph cuts (Ahuja et al., 1993), such as Boykov et al. (2001).

We note that the result is not guaranteed to be diffeomorphic or invertible, which might be desirable features of the registration. This is due to the properties of the deformation model, which was chosen because it easily enables marginalisation over the deformation fields with variational techniques. In practice, we have found that, once the optimal (probabilistic) synthesis has been computed, we can obtain smoother and more accurate solutions by using more sophisticated deformation models and priors. More specifically, we implemented the image and landmark terms of Equation 17 in NiftyReg (Modat et al., 2010), which is a fast, powerful registration package, instantly gaining access to its advanced, efficiently implemented deformation models, regularisers and optimisers. NiftyReg parametrises the deformation field with a grid of control points combined with cubic B-splines (Rueckert et al., 1999). If $\boldsymbol{\phi}_n$ represents the vector of parameters of the spatial transform for image pair $n$, we optimise:

$$\hat{\boldsymbol{\phi}}_n = \operatorname*{argmin}_{\boldsymbol{\phi}_n}\ K \sum_{\boldsymbol{x} \in \Omega_n} \frac{\big[M_n(\boldsymbol{x} + \boldsymbol{T}_n(\boldsymbol{x}; \boldsymbol{\phi}_n)) - \mu_n(\boldsymbol{x})\big]^2}{2 \sigma^2_n(\boldsymbol{x})} + \sum_{p=1}^{P_n} \frac{\big\|\boldsymbol{m}_{np} - \boldsymbol{l}_{np} - \boldsymbol{T}_n(\boldsymbol{l}_{np}; \boldsymbol{\phi}_n)\big\|^2}{2 \sigma^2_\ell} + \lambda_{\mathrm{BE}}\, \mathrm{BE}(\boldsymbol{\phi}_n) + \lambda_{\mathrm{LE}}\, \mathrm{LE}(\boldsymbol{\phi}_n) + \lambda_{\mathrm{J}}\, \mathrm{J}(\boldsymbol{\phi}_n), \qquad (18)$$

where $\mathrm{BE}(\boldsymbol{\phi}_n)$ is the bending energy of the transform parametrised by $\boldsymbol{\phi}_n$; $\mathrm{LE}(\boldsymbol{\phi}_n)$ is the sum of squares of the symmetric part of the Jacobian after filtering out rotation (it penalises stretching and shearing); $\mathrm{J}(\boldsymbol{\phi}_n)$ is the Jacobian energy (given by its log-determinant); $\lambda_{\mathrm{BE}}$, $\lambda_{\mathrm{LE}}$, $\lambda_{\mathrm{J}}$ are the corresponding weights; and $K = 2 / (9 |\Omega_n|)$ is a constant that scales the contribution of the image term, such that it is approximately bounded by 1, i.e., a value of 1 is achieved if all pixels are three standard deviations away from the predicted mean.
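The scaling of the image term can be checked numerically (an illustrative sketch assuming $K = 2/(9|\Omega_n|)$, the value for which a residual of three standard deviations at every pixel gives a data term of exactly 1):

```python
import numpy as np

def scaled_data_term(M_warped, mu, var):
    """Image term of the NiftyReg objective, scaled by K = 2 / (9 |Omega|)."""
    K = 2.0 / (9.0 * mu.size)
    return K * np.sum((M_warped - mu) ** 2 / (2.0 * var))

mu = np.zeros((5, 5))
var = np.ones((5, 5))
perfect = scaled_data_term(mu, mu, var)        # zero residual -> 0
bound = scaled_data_term(mu + 3.0, mu, var)    # every residual = 3 sigma -> 1
```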

Note that this choice for the final model also enables comparison with mutual information as implemented in NiftyReg, which minimises:

$$\hat{\boldsymbol{\phi}}_n = \operatorname*{argmin}_{\boldsymbol{\phi}_n}\ -\mathrm{MI}\big(H_n,\ M_n \circ (\mathrm{Id} + \boldsymbol{T}_n(\cdot\,; \boldsymbol{\phi}_n))\big) + \lambda_{\mathrm{BE}}\, \mathrm{BE}(\boldsymbol{\phi}_n) + \lambda_{\mathrm{LE}}\, \mathrm{LE}(\boldsymbol{\phi}_n) + \lambda_{\mathrm{J}}\, \mathrm{J}(\boldsymbol{\phi}_n), \qquad (19)$$

where MI represents the mutual information. We note that finding the value of $K$ that matches the importance of the data terms in Equations 18 and 19 is a non-trivial task; however, our choice of $K$ defined above places the two data terms in approximately the same range of values.

2.3 Summary of the algorithm and implementation details

The presented method is summarised in Algorithm 1. The approximate posteriors are initialised to $q_{n\boldsymbol{x}s} = 1/S$, evenly spreading the probability mass across all possible shifts (i.e., maximum uncertainty in the registration). Given this initial $q$, Equation 14 is used to initialise the forest parameters $\boldsymbol{\theta}$. At that point, the VEM algorithm alternates between the E and M steps until convergence is reached. Convergence would ideally be assessed with $\boldsymbol{\theta}$ but, since these parameters vary greatly from one iteration to the next due to the randomness injected in training, we use the predicted means and variances $(\mu_n, \sigma^2_n)$ instead.

Require: $\{H_n\}$, $\{M_n\}$, $\{\boldsymbol{l}_{np}\}$, $\{\boldsymbol{m}_{np}\}$
Ensure: $\{\hat{\boldsymbol{T}}_n\}$, $\hat{\boldsymbol{\theta}}$
  Initialise $\boldsymbol{\theta}$ with Eq. 14 (random forest training)
  while $(\mu_n, \sigma^2_n)$ change do
     for $n \leftarrow 1$ to $N$ do
        Compute $(\mu_n, \sigma^2_n)$ with Eq. 15
        while $q_n$ changes do
           Fixed point iteration of $q_n$ (Eq. 13)
        end while
     end for
     Update $\boldsymbol{\theta}$ with Eq. 14 (random forest retraining)
  end while
  for $n \leftarrow 1$ to $N$ do
     Compute final $(\mu_n, \sigma^2_n)$ with Eq. 15
     Compute $\hat{\boldsymbol{T}}_n$ with Eq. 17 or Eq. 18
  end for
Algorithm 1: Simultaneous synthesis and registration

In the E-step, each image pair can be considered independently. First, the histological section is pushed through the forest to generate a prediction for the (registered) MR image, including a mean and a standard deviation for each pixel (Equation 15). Then, fixed point iterations of Equation 13 are run until convergence of $q_n$. In the M-step, the approximate posteriors of all images are used together to retrain the random forest with Equation 14. When the algorithm has converged, the final predictions (mean, variance) are generated for each pixel, and the final registrations can be computed with Equation 17, or with NiftyReg (see details below).

Injection of randomness is a crucial aspect of random forests, as it increases their generalization ability (Criminisi et al., 2011). Here we used bagging (Breiman, 1996) at both the image and pixel levels, and used random subsets of features when splitting data at the internal nodes of the trees. An additional random component in the stochastic optimization is the sampling of shifts to make the model robust against misregistration (see Section 2.2.1). While all these random elements have beneficial effects, these come at the expense of giving up the theoretical guarantees on the convergence of the VEM algorithm – though this was never found to be a problem in practice, as explained in Section 2.2.1 above.

For the final registration, we used the default regularisation scheme in NiftyReg, which is a weighted combination of the bending energy (second derivative) and the sum of squares of the symmetric part of the Jacobian. We note that NiftyReg sets $\lambda_{\mathrm{J}} = 0$ by default; while using $\lambda_{\mathrm{J}} > 0$ guarantees that the output is diffeomorphic, the other two regularisation terms ($\lambda_{\mathrm{BE}}$, $\lambda_{\mathrm{LE}}$) ensure in practice that the deformation field is well behaved.

3 Experiments and results

3.1 Data

We used two datasets to validate the proposed technique: one synthetic, and one real. The synthetic dataset, which consists only of MRI images, enables quantitative comparison of the estimated deformations with the ground truth fields that were used to generate them. The real dataset, on the other hand, enables qualitative evaluation in a real histology-MRI registration problem.

3.1.1 Synthetic MRI dataset

Since obtaining MRI and histological data with perfect spatial alignment is very difficult, we used a synthetic dataset based solely on MRI to quantitatively validate the proposed approach. These synthetic data were generated from 676 (real) pairs of T1- and T2-weighted scans from the publicly available ADNI dataset. The resolution of the T1 scans was approximately 1 mm isotropic; since the ADNI project spans multiple sites, different scanners were used to acquire the images, and further details on the acquisition can be found on the ADNI website. The T2 scans correspond to an acquisition designed to study the hippocampus, and consist of 25-30 coronal images at 0.4×0.4 mm in-plane resolution, with a slice thickness of 2 mm. These images cover a slab of tissue containing the hippocampi, which is manually oriented by the operator to be approximately orthogonal to the major axes of the hippocampi. Once more, further details on the acquisition at different sites can be found on the ADNI website.

The T1 scans were preprocessed with FreeSurfer (Fischl, 2012) in order to obtain skull-stripped, bias-field corrected images with a corresponding segmentation of brain structures (Fischl et al., 2002). We simplified this segmentation to three tissue types (gray matter, white matter, cerebrospinal fluid) and a generic background label. The processed T1 was rigidly registered to the corresponding T2 scan with mutual information, as implemented in NiftyReg (Modat et al., 2014). The registration was also used to propagate the brain mask and automated segmentation; the former was used to skull-strip the T2, and the latter for bias field correction using the technique described in Van Leemput et al. (1999). Note that we deform the T1 to the T2 – despite its lower resolution – because of its more isotropic voxel size.

From these pairs of preprocessed 3D scans, we generated a dataset of 1000 pairs of 2D images. To create each image pair, we followed these steps:

1. Randomly select one pair of 3D scans.
2. In the preprocessed T2 scan, randomly select a (coronal) slice, other than the first and the last, which sometimes display artefacts.
3. Downsample the T2 slice to 1×1 mm resolution, for consistency with the resolution of the T1 scans.
4. Reslice the (preprocessed) T1 scan to obtain the 2D image corresponding to the downsampled T2 slice.
5. Sample a random diffeomorphic deformation field (details below) in the space of the 2D slice.
6. Combine the deformation field with a random similarity transform, including rotation, scaling and translation.
7. Deform the T2 slice with the composed field (linear + nonlinear).
8. Rescale intensities to [0,255] and discretise with 8-bit precision.
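Steps 3-8 above can be sketched for one toy slice with NumPy/SciPy as follows; the image sizes are hypothetical, and the similarity transform is reduced to a small rotation for brevity.

```python
import numpy as np
from scipy.ndimage import zoom, map_coordinates

rng = np.random.default_rng(0)
t2_slice = rng.uniform(0, 1, size=(160, 160))      # toy 0.4 mm slice

# Step 3: downsample to ~1 mm resolution.
t2_lowres = zoom(t2_slice, 0.4, order=1)

# Steps 5-6: compose a smooth random displacement with a small rotation.
ny, nx = t2_lowres.shape
yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing='ij')
disp = zoom(rng.standard_normal((8, 8, 2)), (ny / 8, nx / 8, 1), order=3)
theta = rng.normal(0, np.deg2rad(2))
cy, cx = (ny - 1) / 2, (nx - 1) / 2
ys = cy + np.cos(theta) * (yy - cy) - np.sin(theta) * (xx - cx) + disp[..., 0]
xs = cx + np.sin(theta) * (yy - cy) + np.cos(theta) * (xx - cx) + disp[..., 1]

# Step 7: deform the slice with the composed field.
warped = map_coordinates(t2_lowres, [ys, xs], order=1, mode='nearest')

# Step 8: rescale intensities to [0, 255] and discretise to 8 bits.
out = np.round(255 * (warped - warped.min()) /
               (warped.max() - warped.min())).astype(np.uint8)
```

Note that the displacement here is not guaranteed to be diffeomorphic; the actual fields were generated by integrating velocity fields, as described next.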

To generate synthetic fields without biasing the evaluation, we used a deformation model different from that used by NiftyReg (i.e., a grid of control points and cubic B-splines). More specifically, we created diffeomorphic deformations as follows. First, we generated random velocity fields by independently sampling bivariate Gaussian noise at each spatial location (no x-y correlation) with different levels of variance; smoothing them with a Gaussian filter; and multiplying them by a window function of the distance to the boundary of the image (in mm), in order to prevent deformations close to the boundaries. Then, these velocity fields were integrated over unit time using a scaling and squaring approach (Moler and Van Loan, 2003; Arsigny et al., 2006) to generate the deformation fields. Sample velocity and deformation fields generated with different levels of noise are shown in Figure 3.
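A minimal sketch of this generation procedure is given below, assuming a simple linear ramp for the boundary window (the paper's exact window function is not reproduced here) and a first-order approximation of the exponential before squaring.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def random_diffeo(shape, noise_std=10.0, smooth_mm=5.0, steps=6, seed=0):
    """Sample i.i.d. Gaussian velocities, smooth, taper near the borders,
    and integrate over unit time by scaling and squaring."""
    rng = np.random.default_rng(seed)
    ny, nx = shape
    # Independent bivariate Gaussian noise, then Gaussian smoothing.
    v = np.stack([gaussian_filter(rng.normal(0, noise_std, shape), smooth_mm)
                  for _ in range(2)], axis=-1)
    # Taper the velocity near the image boundary (assumed linear ramp).
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing='ij')
    d = np.minimum.reduce([yy, xx, ny - 1 - yy, nx - 1 - xx])
    v *= np.clip(d / 10.0, 0, 1)[..., None]
    # Scaling and squaring: phi = exp(v) ~ (id + v / 2^K) composed K times.
    phi = v / (2 ** steps)
    for _ in range(steps):
        warped = np.stack(
            [map_coordinates(phi[..., c], [yy + phi[..., 0], xx + phi[..., 1]],
                             order=1, mode='nearest') for c in range(2)],
            axis=-1)
        phi = phi + warped            # phi <- phi o phi (composition)
    return phi                        # displacement field, shape (ny, nx, 2)

phi = random_diffeo((64, 64))
```

Because the velocity is tapered to zero at the borders, the integrated displacement vanishes there as well, which keeps the deformation inside the image domain.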

Figure 3: Synthetic velocity (top row) and corresponding deformation fields (bottom row) generated with three different levels of noise.

Finally, we synthetically generated spatially spread landmarks using the following procedure. First, we calculated the response R(x) to a Harris corner detector (Harris and Stephens, 1988). Then, we iteratively selected the pixel x* with the highest response, and multiplied the Harris response by a complementary Gaussian function centred at x*, i.e.,

R(x) ← R(x) [1 − exp(−‖x − x*‖² / (2σ²))],

with standard deviation σ (in each direction) equal to 1/10 of the corresponding image dimension. We then applied the deformation field to the landmarks, and corrupted the output locations with Gaussian noise, simulating the error of manual landmark placement.
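The landmark-spreading loop can be sketched as follows, taking a precomputed corner-response map as input (the Harris detector itself is omitted; any response image works, and `spread_landmarks` is a hypothetical name).

```python
import numpy as np

def spread_landmarks(response, n_landmarks, sigma_frac=0.1):
    """Iteratively pick the maximum of a response map, suppressing the
    neighbourhood of each pick with a complementary Gaussian."""
    r = response.astype(float).copy()
    ny, nx = r.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing='ij')
    sigma = sigma_frac * np.array([ny, nx])   # 1/10 of each dimension
    points = []
    for _ in range(n_landmarks):
        iy, ix = np.unravel_index(np.argmax(r), r.shape)
        points.append((iy, ix))
        # Complementary Gaussian centred on the selection: responses near
        # the new landmark are damped, so later picks spread out spatially.
        g = np.exp(-0.5 * (((yy - iy) / sigma[0]) ** 2 +
                           ((xx - ix) / sigma[1]) ** 2))
        r *= (1.0 - g)
    return np.array(points)

rng = np.random.default_rng(0)
resp = rng.uniform(0, 1, size=(50, 50))
pts = spread_landmarks(resp, 5)
```

Since the complementary Gaussian is exactly zero at the selected pixel, the same location can never be picked twice.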

The ADNI was launched in 2003 by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, the Food and Drug Administration, private pharmaceutical companies and non-profit organisations, as a $60 million, 5-year public-private partnership. The main goal of ADNI is to test whether MRI, positron emission tomography (PET), other biological markers, and clinical and neuropsychological assessment can be combined to analyse the progression of mild cognitive impairment (MCI) and early Alzheimer's disease (AD). Markers of early AD progression can aid researchers and clinicians to develop new treatments and monitor their effectiveness, as well as decrease the time and cost of clinical trials. The Principal Investigator of this initiative is Michael W. Weiner, MD, VA Medical Center and University of California - San Francisco. ADNI is a joint effort by co-investigators from industry and academia. Subjects have been recruited from over 50 sites across the U.S. and Canada. The initial goal of ADNI was to recruit 800 subjects, but ADNI has been followed by ADNI-GO and ADNI-2. These three protocols have recruited over 1,500 adults (ages 55-90) to participate in the study, consisting of cognitively normal older individuals, people with early or late MCI, and people with early AD. The follow-up duration of each group is specified in the corresponding protocols for ADNI-1, ADNI-2 and ADNI-GO. Subjects originally recruited for ADNI-1 and ADNI-GO had the option to be followed in ADNI-2.

3.1.2 Real data

To qualitatively evaluate our algorithm on real data, we used the Allen atlas, which is based on the left hemisphere of a 34-year-old donor. The histology of the atlas includes 106 Nissl-stained coronal sections of the whole hemisphere, with manual segmentations of 862 brain structures. Due to the challenges associated with sectioning and mounting thin sections from complete hemispheres, artefacts such as cracks are present in the sections, which make registration difficult (see for instance Figure 2a). The sections were digitised with a customised microscopy system, and subsequently downsampled to match the resolution of the MRI data (details below). We also downsampled the manual segmentations to the same resolution, and merged them into a whole-brain segmentation that, after dilation, we used to mask the histological sections. The histology and associated segmentations can be interactively visualised online, and further details can be found in Ding et al. (2016). No 3D reconstruction of the histology was performed in their study.

In addition to the histology, high-resolution MRI images of the whole brain were acquired on a 7 T Siemens scanner with a custom 30-channel receive-array coil. The specimen was scanned in a vacuum-sealed bag surrounded by Fomblin to avoid artefacts caused by air-tissue interfaces. The images were acquired with a multi-echo FLASH sequence (TR = 50 ms; flip angles 20°, 40°, 60° and 80°; echoes at 5.5, 12.8, 20.2, 27.6, 35.2, and 42.8 ms) at isotropic resolution. Once more, the details can be found in Ding et al. (2016). In this study, we used a single volume, obtained by averaging the echoes corresponding to flip angle 20°, which provided good contrast between gray and white matter, as well as a high signal-to-noise ratio. The combined image was bias field corrected with the method described in Van Leemput et al. (1999), using the probability maps from the LONI atlas (Shattuck et al., 2008), which was linearly registered with NiftyReg (Modat et al., 2014). A coarse mask for the left hemisphere was manually delineated by JEI, and used to mask out tissue from the right hemisphere, which is not included in the histological analysis. Sample coronal slices of this dataset are shown in Figures 2a (histology) and 2b (MRI).

3.2 Experimental setup

In the synthetic data, we considered three different levels of Gaussian noise (10, 20 and 30 mm) when generating the velocity fields, in order to model nonlinear deformations of different severity. The standard deviation of the Gaussian smoothing filter was set to 5 mm, in both the horizontal and vertical directions. The random rotations, translations and log-scalings of the similarity transform were sampled from zero-mean Gaussian distributions, with standard deviations of 2 degrees, 1 pixel, and 0.1, respectively. We then used NiftyReg with mutual information, as well as our method, to recover the deformations, using different spacings between control points (from 3 to 21 mm, in 3 mm steps) to evaluate different levels of model flexibility. Otherwise, we used the default parameters of the package: three resolution levels, 64 bins for mutual information, and the default regularisation weights. The standard deviation of the manual landmark placement was set to 0.5 mm (equivalent to 0.5 pixels).

The remaining model parameters were set as follows: a prior equivalent to a standard deviation of 5 mm; a variance prior equivalent to 4 pseudo-observations with sample standard deviation equal to 5; and a grid of candidate shifts covering a square of radius 10 mm, in increments of 0.5 mm. Finally, the random forest regressor consisted of 100 trees, using Gaussian derivatives and location as features. The Gaussian derivatives were of order up to three, computed at three different scales: 0, 2 and 4 mm. We grew each tree until a minimum size of 5 samples was reached at the leaf nodes, and tested 5 randomly sampled features at each node. Bagging was used at both the slice and pixel levels, using 66% of the available images, and as many pixels per image as necessary in order to have a total of 25,000 training pixels. We tested our algorithm in two different scenarios: running it on all image pairs simultaneously, or on each image pair independently; the latter represents the common case in which a user runs the algorithm on just one pair of images. In that case, we used 66% of the pixels to train each tree.
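The feature extraction could be sketched as follows; this is a sketch under the assumption that "scale 0" means the raw image and that location enters as normalised coordinates (the paper does not spell out either detail, and the function name is hypothetical).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_derivative_features(img, scales=(2.0, 4.0), max_order=3):
    """Stack Gaussian derivatives up to the given total order at each
    scale, plus the raw image and normalised (y, x) location features."""
    ny, nx = img.shape
    feats = [img]                              # "scale 0": raw intensities
    for s in scales:
        for oy in range(max_order + 1):
            for ox in range(max_order + 1 - oy):   # total order oy+ox <= 3
                # order=(oy, ox) takes the oy-th/ox-th Gaussian derivative
                # along each axis; (0, 0) is plain Gaussian smoothing.
                feats.append(gaussian_filter(img, s, order=(oy, ox)))
    # Location features: normalised (y, x) coordinates of each pixel.
    yy, xx = np.meshgrid(np.linspace(0, 1, ny), np.linspace(0, 1, nx),
                         indexing='ij')
    feats += [yy, xx]
    return np.stack([f.ravel() for f in feats], axis=1)   # (pixels, features)

img = np.random.default_rng(0).uniform(size=(32, 32))
F = gaussian_derivative_features(img)
```

With two nonzero scales, ten derivative combinations per scale (total order up to three), the raw image and two location channels, this yields 23 features per pixel.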

In the real data, we compared mutual information based registration with our approach, using all slices simultaneously in the synthesis. In order to put the MRI in linear alignment with the histological sections, we used an iterative approach very similar to that of Yang et al. (2012). Starting from a stack of histological sections, we first rigidly aligned the brain MRI to the stack using mutual information. Then, we resampled the registered MRI to the space of each histological section, and aligned them one by one using a similarity transform combined with mutual information. The registration of the MRI was then refined using the realigned sections, starting a new iteration. Upon convergence, we used the two competing methods to nonlinearly register the histological sections to the corresponding resampled MRI images. We used the same parameters as in the experiments with synthetic data, setting the control point spacing to the optimal values from those experiments (6 mm for the proposed approach, and 18 mm for mutual information; see Section 3.3.1 below); the 0.5 mm standard deviation of the manual landmarks was converted to its equivalent in pixels at the resolution of this dataset.

3.3 Results

3.3.1 Synthetic data

Figures 4, 5 and 6 show the mean and maximum registration error as a function of the control point separation and the number of landmarks, for three different levels of deformation noise: 10, 20 and 30 mm, which correspond to mild, medium and strong deformations, respectively. The mean error reflects the precision of the estimation, whereas the maximum is related to its robustness. When using mutual information, finer control point spacings in the deformation model yield transforms that are too flexible, leading to very poor results, even in the presence of landmarks; see the example in Figure 7. Both the mean and maximum error improve with larger spacings, flattening out at around 18-20 mm.

Figure 4: Mean and maximum registration error (in mm) for mild deformations (10 mm noise).
Figure 5: Mean and maximum registration error (in mm) for medium deformations (20 mm noise).
Figure 6: Mean and maximum registration error (in mm) for strong deformations (30 mm noise).
Figure 7: Example from the synthetic dataset: (a) Deformed T2 image, used as floating image in the registration. (b) Corresponding T1 scan, used as reference image. (c) Corresponding synthetic T2 image, after 5 iterations of our VEM algorithm. (d) Floating image registered with mutual information. (e) Floating image registered with our algorithm. In both (d) and (e), the control point spacing was set to 6 mm. We have overlaid on all five images a manual outline of the gray matter surface (in red) and of the ventricles (in green), which were drawn using the T1 scan (b) as a reference. Note the poor registration produced by mutual information in the ventricles and cortical regions – see for instance the areas indicated by the yellow arrows in (d).

The proposed method, on the other hand, provides higher precision with flexible models, thanks to the higher robustness of the intra-modality metric. The two versions of the method (estimating the regressor one image pair at a time, or from all images simultaneously) consistently outperform mutual information in every scenario. An important difference in the results is that the mean error reaches its minimum at a much smaller control point spacing (typically 6 mm), yielding a much more accurate registration (see example in Figure 7). Moreover, the maximum error has already flattened out at that point, in almost every tested setting.

In addition to supporting finer control point spacings, the proposed method can more effectively exploit the information provided by landmarks. In mutual information based registration, the landmarks guide the registration, especially in the earlier iterations, since their relative cost is high; any further influence on the registration (e.g., by improving the estimation of the joint histogram) is indirect and very limited. Our algorithm, on the other hand, explicitly exploits the landmark information not only in the registration, but also in the synthesis: in Equation 13, the landmarks create a very sharp distribution not only at pixels with landmarks, but also in their surroundings, thanks to the MRF (e.g., as in Figure 2d, Tag 1). Therefore, very similar shifted locations of these pixels are consistently selected when sampling for each tree of the forest, greatly informing the synthesis. This is reflected in the quantitative results: the gap in performance between the proposed method and mutual information widens as the number of landmarks increases.

When no landmarks are used and image pairs are processed independently, the proposed algorithm can be seen as a conventional inter-modality registration method. In that scenario, the results discussed above still hold: our method can be used at finer control point spacings, and provides average reductions of 11%, 22% and 15% in the mean error at the mild, medium and strong deformation levels, respectively. We also note that, as one would expect, our method and mutual information produce almost identical results at large control point spacings.

Finally, we note that only a modest improvement is observed when image pairs are considered simultaneously, rather than independently. Nevertheless, the joint estimation consistently yields higher robustness at the finest control point spacing (3 mm), and also produces smaller errors across the different settings when the deformations are mild (Figure 4). We hypothesise that, even though the simultaneous estimation has the advantage of access to more data (which is particularly useful with more flexible models, i.e., finer spacings), the independent version also benefits from a regressor that is tailored to the single image pair at hand.

3.3.2 Real data

Figure 8 shows a representative coronal section of the data, which covers multiple cortical and subcortical structures of interest (e.g., hippocampus, thalamus, putamen and pallidum). Comparing the segmentations propagated from the histology to the MRI with the proposed method (Figure 8d) and mutual information (Figure 8e), it is apparent that our algorithm produces a much more accurate registration. The contours of the white matter surface are rather inaccurate when using mutual information; see for instance the insular (Tag 1 in the figure), auditory (Tag 2), or polysensory temporal cortices (Tag 3); or area 36 (Tag 4). Using the proposed method, the registered contours follow the underlying MRI intensities much more accurately. The same applies to subcortical structures. In the thalamus (light purple), it can be seen that the segmentation of the reticular nucleus (Tag 5) is too medial when using mutual information. The same applies to the pallidum (Tag 6), putamen (Tag 7) and claustrum (Tag 8). The hippocampus (dark purple; Tag 9) is too inferior to the actual anatomy in the MRI. Once more, the proposed algorithm produces, qualitatively speaking, much improved boundaries.

Figure 8: (a) Coronal slice of the MRI scan. (b) Corresponding histological section, registered with the proposed method. (c) Corresponding manual segmentation, propagated to MRI space. (d) Close-up of the region inside the blue square, showing the boundaries of the segmentation; see main text (Section 3.3.2) for an explanation of the numerical tags. (e) Segmentation obtained when using mutual information in the registration. The color map is defined in Ding et al. (2016).
Figure 9: (a) Sagittal slice of the MRI scan, with the registered segmentation superimposed. The deformation fields used to propagate the manual segmentations from histology to MRI space were computed with mutual information. (b) Same as (a), but using our technique to register the data. (c) Axial slice, reconstructed with mutual information. (d) Same slice, reconstructed with our proposed method. The color map is defined in Ding et al. (2016).

To better assess the quality of the reconstruction as a whole (rather than on a single slice), Figure 9 shows the propagated segmentations in the orthogonal views: sagittal (Figures 9a, 9b) and axial (Figures 9c, 9d). The proposed method produces reconstructed segmentations that are smoother and that better follow the anatomy in the MRI scan. In sagittal view, this can be easily observed in subcortical regions such as the putamen (Tag 1 in Figure 9b), the hippocampus (Tag 2) or the lateral ventricle (Tag 3); and also in cortical regions such as the premotor (Tag 4), parahippocampal (Tag 5) or fusiform temporal (Tag 6) cortices. The improvement is also apparent from how much less frequently the segmentation leaks outside the brain when using our algorithm. Similar conclusions can be derived from the axial view; see for instance the putamen (Tag 1 in Figure 9d), thalamus (purple region, Tag 2), polysensory temporal cortex (Tag 3) or insular cortex (Tag 4).

4 Discussion and conclusion

In this article, we presented a novel method to simultaneously estimate the registration and synthesis between a pair of corresponding images from different modalities. The results on both synthetic (quantitative) and real data (qualitative) show that the proposed algorithm is superior to standard inter-modality registration based on mutual information, albeit slower due to the need to iterate between registration and synthesis – especially the former, since it requires nested iteration of Equation 13. Our Matlab implementation runs in 2-3 minutes on the image sizes considered here, but a parallelised implementation in C++ or on the GPU should greatly reduce the running time.

The quantitative experiments demonstrated that our algorithm supports much more flexible deformation models than mutual information (i.e., smaller control point spacings) without compromising robustness, which we attribute to the more stable intra-modality metric (our implementation has been made publicly available in NiftyReg). Moreover, the experiments on synthetic data also showed that our algorithm can more effectively take advantage of the information encoded in manually placed pairs of landmarks, since it can be exploited in both the registration and the synthesis, which inform each other during model fitting. The more landmarks we used, the larger the gap between our method and mutual information became – though we note that, in the limit of many landmarks, the performance of the two methods would be the same, since the registration error would go to zero in both cases.

We must note that, in the experiments with synthetic data, the relative contributions of the data terms in Equations 18 and 19 are slightly different, since computing the value of the trade-off parameter that makes these contributions exactly equal is very difficult. However, the minor differences that our heuristic choice might introduce do not undermine the results of the experiments, since the approximate effect of modifying this parameter is to mildly shift the curves in Figures 4-6 to the left or right – which does not change the conclusions.

Our method also outperformed mutual information when applied to the data from the Allen Institute, which is more challenging due to the more complex relationship between the two contrast mechanisms, and the presence of artefacts such as cracks and tears. Qualitatively speaking, the superiority of our approach is clearly apparent from Figure 9, in which it produces a much smoother segmentation in the orthogonal planes. We note that we did not introduce any smoothness constraints in the reconstruction, e.g., by forcing the registered histological sections to be similar to their neighbours through an explicit term in the cost function of the registration. Such a strategy would produce smoother reconstructions, but these would not necessarily be more accurate – particularly if one considers that the 2D deformation fields of the different sections are independent a priori, which makes the histological sections conditionally independent a posteriori, given the MRI data and the image intensity transform. Moreover, explicitly enforcing such smoothness in the registration would preclude qualitative evaluation through visual inspection of the segmentation in the orthogonal orientations.

The proposed algorithm is hybrid in the sense that, despite being formulated in a generative framework, it replaces the likelihood term of the synthesis by a discriminative element. We emphasise that such a change still yields a valid objective function (Equation 10) that we can approximately optimise with VEM – which maximises Equations 11 and 12 instead. The VEM algorithm alternately optimises the two blocks of variables in a coordinate descent scheme, and is in principle guaranteed to converge. In our method, we lose this property due to the approximate optimisation of the random forest parameters, since injecting randomness is one of the key elements of the success of random decision trees. However, in practice, our algorithm typically converges in 5-6 iterations, in terms of changes in the predicted synthetic image.

Our approach can also be used in an online manner, i.e., if data become progressively available at test time. For example, the random forest could be optimised on an (ideally) large set of images, considering them simultaneously in the framework. Then, when a new pair of images arrives, one can assume that the forest parameters are fixed, and proceed directly to the estimation of the synthetic image and deformation field. An alternative would be to fine-tune the forest to the new input, considering it in isolation or jointly with the other scans. But even if no previous data are available (i.e., a single image pair), the registration uncertainty encoded in the approximate posterior prevents the regression from overfitting, and enables our method to still outperform mutual information. This is in contrast with supervised synthesis algorithms, which cannot operate without training data.

Future work will follow four main directions. First, integrating deep learning techniques into the framework, which could be particularly useful when large numbers of image pairs are available, e.g., in a large histology reconstruction project. The main challenges to tackle are overfitting and keeping the algorithm from becoming impractically slow. A possible solution would be to use a pretrained network, and only update the connections in the last layer during the analysis of the image pair at hand (e.g., as in Wang et al. 2017). A second direction of future work is the extension of the algorithm to 3D. Although mathematically straightforward (no changes are required in the framework), such an extension poses practical problems, e.g., the memory requirements for storing the approximate posteriors grow very quickly. A third avenue of future work is the application to other target modalities, such as optical coherence tomography (OCT). Finally, we will also explore the possibility of synthesising histology from MRI. This is a more challenging task that might require multiple input MRI contrasts, depending on the target stain to synthesise. However, synthetic histology would not only provide an estimate of the microanatomy of tissue imaged with MRI, but would also enable the symmetrisation of the framework presented in this article; by computing two syntheses, the robustness of the algorithm would be expected to increase.

The algorithm presented in this paper represents a significant step towards solving the problem of aligning histological images and MRI, by exploiting the connection between registration and synthesis within a novel probabilistic framework. We will use this method to produce increasingly precise histological reconstructions of tissue, which in turn will enable us to build probabilistic atlases of the human brain at a superior level of detail.


This research was supported by the European Research Council (ERC) Starting Grant No. 677697 (project BUNGEE-TOOLS), and by the Wellcome Trust and EPSRC (203145Z/16/Z, WT101957, NS/A000027/1, NS/A000050/1). Support for this research was also provided in part by the National Institute for Biomedical Imaging and Bioengineering (P41EB015896, 1R01EB023281, R01EB006758, R21EB018907, R01EB019956), the National Institute on Aging (5R01AG008122, R01AG016495), the National Institute of Diabetes and Digestive and Kidney Diseases (1-R21-DK-108277-01), the National Institute for Neurological Disorders and Stroke (R01NS0525851, R21NS072652, R01NS070963, R01NS083534, 5U01NS086625), and was made possible by the resources provided by Shared Instrumentation Grants 1S10RR023401, 1S10RR019307, and 1S10RR023043. Additional support was provided by the NIH Blueprint for Neuroscience Research (5U01-MH093765), part of the multi-institutional Human Connectome Project. In addition, BF has a financial interest in CorticoMetrics, a company whose medical pursuits focus on brain imaging and measurement technologies. BF’s interests were reviewed and are managed by Massachusetts General Hospital and Partners HealthCare in accordance with their conflict of interest policies.

The collection and sharing of the MRI data used in the group study based on ADNI was funded by the Alzheimer’s Disease Neuroimaging Initiative (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: Alzheimer’s Association; Alzheimer’s Drug Discovery Foundation; BioClinica, Inc.; Biogen Idec Inc.; Bristol-Myers Squibb Company; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; F. Hoffmann-La Roche Ltd and its affiliated company Genentech, Inc.; GE Healthcare; Innogenetics, N.V.; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research & Development, LLC.; Johnson & Johnson Pharmaceutical Research & Development LLC.; Medpace, Inc.; Merck & Co., Inc.; Meso Scale Diagnostics, LLC.; NeuroRx Research; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Synarc Inc.; and Takeda Pharmaceutical Company. The Canadian Institutes of Health Research is providing funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health. The grantee organisation is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer’s Disease Cooperative Study at the University of California, San Diego. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California.


  • Adler et al. (2016) Adler, D.H., Ittyerah, R., Pluta, J., Pickup, S., Liu, W., Wolk, D.A., Yushkevich, P.A., 2016. Probabilistic atlas of the human hippocampus combining ex vivo mri and histology, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer. pp. 63–71.
  • Adler et al. (2014) Adler, D.H., Pluta, J., Kadivar, S., Craige, C., Gee, J.C., Avants, B.B., Yushkevich, P.A., 2014. Histology-derived volumetric annotation of the human hippocampal subfields in postmortem MRI. Neuroimage 84, 505–523.
  • Ahuja et al. (1993) Ahuja, R.K., Magnanti, T.L., Orlin, J.B., 1993. Network flows: theory, algorithms, and applications.
  • Amunts et al. (2013) Amunts, K., Lepage, C., Borgeat, L., Mohlberg, H., Dickscheid, T., Rousseau, M.É., Bludau, S., Bazin, P.L., Lewis, L.B., Oros-Peusquens, A.M., Shah, N.J., Lippert, T., Zilles, K., Evans, A.C., 2013. Bigbrain: an ultrahigh-resolution 3d human brain model. Science 340, 1472–1475.
  • Arganda-Carreras et al. (2010) Arganda-Carreras, I., Fernández-González, R., Muñoz-Barrutia, A., Ortiz-De-Solorzano, C., 2010. 3d reconstruction of histological sections: application to mammary gland tissue. Microscopy research and technique 73, 1019–1029.
  • Arsigny et al. (2006) Arsigny, V., Commowick, O., Pennec, X., Ayache, N., 2006. A log-euclidean framework for statistics on diffeomorphisms. Medical Image Computing and Computer-Assisted Intervention–MICCAI 2006 , 924–931.
  • Augustinack et al. (2005) Augustinack, J.C., van der Kouwe, A.J., Blackwell, M.L., Salat, D.H., Wiggins, C.J., Frosch, M.P., Wiggins, G.C., Potthast, A., Wald, L.L., Fischl, B.R., 2005. Detection of entorhinal layer ii using tesla magnetic resonance imaging. Annals of neurology 57, 489–494.
  • Boykov et al. (2001) Boykov, Y., Veksler, O., Zabih, R., 2001. Fast approximate energy minimization via graph cuts. IEEE Transactions on pattern analysis and machine intelligence 23, 1222–1239.
  • Breiman (1996) Breiman, L., 1996. Bagging predictors. Machine learning 24, 123–140.
  • Breiman (2001) Breiman, L., 2001. Random forests. Machine learning 45, 5–32.
  • Burgos et al. (2014) Burgos, N., Cardoso, M.J., Thielemans, K., Modat, M., Pedemonte, S., Dickson, J., Barnes, A., Ahmed, R., Mahoney, C.J., Schott, J.M., Duncan, J.S., Atkinson, D., Arridge, S.R., Hutton, B.F., Ourselin, S., 2014. Attenuation correction synthesis for hybrid PET-MR scanners: application to brain studies. IEEE transactions on medical imaging 33, 2332–2341.
  • Cachier et al. (2003) Cachier, P., Bardinet, E., Dormont, D., Pennec, X., Ayache, N., 2003. Iconic feature based nonrigid registration: the PASHA algorithm. Computer vision and image understanding 89, 272–298.
  • Chakravarty et al. (2006) Chakravarty, M.M., Bertrand, G., Hodge, C.P., Sadikot, A.F., Collins, D.L., 2006. The creation of a brain atlas for image guided neurosurgery using serial histological data. Neuroimage 30, 359–376.
  • Chartsias et al. (2017) Chartsias, A., Joyce, T., Dharmakumar, R., Tsaftaris, S.A., 2017. Adversarial image synthesis for unpaired multi-modal cardiac data, in: International Workshop on Simulation and Synthesis in Medical Imaging, Springer. pp. 3–13.
  • Cifor et al. (2011) Cifor, A., Bai, L., Pitiot, A., 2011. Smoothness-guided 3-D reconstruction of 2-D histological images. NeuroImage 56, 197–211.
  • Collins et al. (1994) Collins, D.L., Neelin, P., Peters, T.M., Evans, A.C., 1994. Automatic 3D intersubject registration of MR volumetric data in standardized talairach space. Journal of computer assisted tomography 18, 192–205.
  • Criminisi et al. (2011) Criminisi, A., Shotton, J., Konukoglu, E., 2011. Decision forests for classification, regression, density estimation, manifold learning and semi-supervised learning. Microsoft Research Cambridge, Tech. Rep. MSRTR-2011-114 5, 12.
  • Dauguet et al. (2007) Dauguet, J., Delzescaux, T., Condé, F., Mangin, J.F., Ayache, N., Hantraye, P., Frouin, V., 2007. Three-dimensional reconstruction of stained histological slices and 3D non-linear registration with in-vivo MRI for whole baboon brain. Journal of neuroscience methods 164, 191–204.
  • Ding et al. (2016) Ding, S.L., Royall, J.J., Sunkin, S.M., Ng, L., Facer, B.A., Lesnar, P., Guillozet-Bongaarts, A., McMurray, B., Szafer, A., Dolbeare, T.A., Stevens, A., Tirrell, L., Benner, T., Caldejon, S., Dalley, R.A., Dee, N., Lau, C., Nyhus, J., Reding, M., Riley, Z.L., Sandman, D., Shen, E., van der Kouwe, A., Varjabedian, A., Write, M., Zollei, L., Dang, C., Knowles, J.A., Koch, C., Phillips, J.W., Sestan, N., Wohnoutka, P., Zielke, H.R., Hohmann, J.G., Jones, A.R., Bernard, A., Hawrylycz, M.J., Hof, P.R., Fischl, B., Lein, E.S., 2016. Comprehensive cellular-resolution atlas of the adult human brain. Journal of Comparative Neurology 524, 3127–3481.
  • Ebner et al. (2017) Ebner, M., Chung, K.K., Prados, F., Cardoso, M.J., Chard, D.T., Vercauteren, T., Ourselin, S., 2017. Volumetric reconstruction from printed films: enabling 30 year longitudinal analysis in MR neuroimaging. NeuroImage 165, 238–250.
  • Evans et al. (1993) Evans, A.C., Collins, D.L., Mills, S., Brown, E., Kelly, R., Peters, T.M., 1993. 3D statistical neuroanatomical models from 305 MRI volumes, in: Nuclear Science Symposium and Medical Imaging Conference, 1993., 1993 IEEE Conference Record., IEEE. pp. 1813–1817.
  • Fischl (2012) Fischl, B., 2012. Freesurfer. Neuroimage 62, 774–781.
  • Fischl et al. (2002) Fischl, B., Salat, D.H., Busa, E., Albert, M., Dieterich, M., Haselgrove, C., Van Der Kouwe, A., Killiany, R., Kennedy, D., Klaveness, S., Montillo, A., Makris, N., Rosen, B., Dale, A.M., 2002. Whole brain segmentation: automated labeling of neuroanatomical structures in the human brain. Neuron 33, 341–355.
  • Harris and Stephens (1988) Harris, C., Stephens, M., 1988. A combined corner and edge detector., in: Alvey vision conference, Manchester, UK. pp. 10–5244.
  • Hertzmann et al. (2001) Hertzmann, A., Jacobs, C.E., Oliver, N., Curless, B., Salesin, D.H., 2001. Image analogies, in: Proceedings of the 28th annual conference on Computer graphics and interactive techniques, ACM. pp. 327–340.
  • Hibbard and Hawkins (1988) Hibbard, L.S., Hawkins, R.A., 1988. Objective image alignment for three-dimensional reconstruction of digital autoradiograms. Journal of neuroscience methods 26, 55–74.
  • Holmes et al. (1998) Holmes, C.J., Hoge, R., Collins, L., Woods, R., Toga, A.W., Evans, A.C., 1998. Enhancement of MR images using registration for signal averaging. Journal of computer assisted tomography 22, 324–333.
  • Humm et al. (2003) Humm, J., Ballon, D., Hu, Y., Ruan, S., Chui, C., Tulipano, P., Erdi, A., Koutcher, J., Zakian, K., Urano, M., Zanzonico, P., Mattis, C., Dyke, J., Chen, Y., Harrington, P., O’Donoghue, J., Ling, C., 2003. A stereotactic method for the three-dimensional registration of multi-modality biologic images in animals: NMR, PET, histology, and autoradiography. Medical physics 30, 2303–2314.
  • Huynh et al. (2016) Huynh, T., Gao, Y., Kang, J., Wang, L., Zhang, P., Lian, J., Shen, D., 2016. Estimating CT image from MRI data using structured random forest and auto-context model. IEEE transactions on medical imaging 35, 174–183.
  • Iglesias et al. (2015) Iglesias, J.E., Augustinack, J.C., Nguyen, K., Player, C.M., Player, A., Wright, M., Roy, N., Frosch, M.P., McKee, A.C., Wald, L.L., Fischl, B., Van Leemput, K., 2015. A computational atlas of the hippocampal formation using ex vivo, ultra-high resolution MRI: application to adaptive segmentation of in vivo MRI. NeuroImage 115, 117–137.
  • Iglesias et al. (2017) Iglesias, J.E., Insausti, R., Lerma-Usabiaga, G., Van Leemput, K., Ourselin, S., Fischl, B., Caballero-Gaudes, C., Paz-Alonso, P.M., 2017. A probabilistic atlas of the thalamic nuclei combining ex vivo MRI and histology. Organization for Human Brain Mapping (OHBM).
  • Iglesias et al. (2013) Iglesias, J.E., Konukoglu, E., Zikic, D., Glocker, B., Van Leemput, K., Fischl, B., 2013. Is synthesizing MRI contrast useful for inter-modality analysis?, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer. pp. 631–638.
  • Ju et al. (2006) Ju, T., Warren, J., Carson, J., Bello, M., Kakadiaris, I., Chiu, W., Thaller, C., Eichele, G., 2006. 3D volume reconstruction of a mouse brain from histological sections using warp filtering. Journal of Neuroscience Methods 156, 84–100.
  • Kim et al. (1997) Kim, B., Boes, J.L., Frey, K.A., Meyer, C.R., 1997. Mutual information for automated unwarping of rat brain autoradiographs. Neuroimage 5, 31–40.
  • Kim et al. (2015) Kim, J., Glide-Hurst, C., Doemer, A., Wen, N., Movsas, B., Chetty, I.J., 2015. Implementation of a novel algorithm for generating synthetic CT images from magnetic resonance imaging data sets for prostate cancer radiation therapy. International Journal of Radiation Oncology* Biology* Physics 91, 39–47.
  • Krauth et al. (2010) Krauth, A., Blanc, R., Poveda, A., Jeanmonod, D., Morel, A., Székely, G., 2010. A mean three-dimensional atlas of the human thalamus: generation from multiple histological data. Neuroimage 49, 2053–2062.
  • Maes et al. (1997) Maes, F., Collignon, A., Vandermeulen, D., Marchal, G., Suetens, P., 1997. Multimodality image registration by maximization of mutual information. IEEE transactions on medical imaging 16, 187–198.
  • Malandain et al. (2004) Malandain, G., Bardinet, E., Nelissen, K., Vanduffel, W., 2004. Fusion of autoradiographs with an MR volume using 2-D and 3-D linear transformations. NeuroImage 23, 111–127.
  • Mazziotta et al. (2001) Mazziotta, J., Toga, A., Evans, A., Fox, P., Lancaster, J., Zilles, K., Woods, R., Paus, T., Simpson, G., Pike, B., Holmes, C., Collins, L., Thompson, P., MacDonald, D., Iacoboni, M., Schormann, T., Amunts, K., Palomero-Gallagher, N., Geyer, S., Parsons, L., Narr, K., Kabani, N., Le Goualher, G., Boomsma, D., Cannon, T., Kawashima, R., Mazoyer, B., 2001. A probabilistic atlas and reference system for the human brain: International consortium for brain mapping (ICBM). Philosophical Transactions of the Royal Society of London B: Biological Sciences 356, 1293–1322.
  • Mazziotta et al. (1995) Mazziotta, J.C., Toga, A.W., Evans, A., Fox, P., Lancaster, J., 1995. A probabilistic atlas of the human brain: Theory and rationale for its development: The international consortium for brain mapping (ICBM). Neuroimage 2, 89–101.
  • Modat et al. (2014) Modat, M., Cash, D.M., Daga, P., Winston, G.P., Duncan, J.S., Ourselin, S., 2014. Global image registration using a symmetric block-matching approach. Journal of Medical Imaging 1, 024003.
  • Modat et al. (2010) Modat, M., Ridgway, G.R., Taylor, Z.A., Lehmann, M., Barnes, J., Hawkes, D.J., Fox, N.C., Ourselin, S., 2010. Fast free-form deformation using graphics processing units. Computer methods and programs in biomedicine 98, 278–284.
  • Moler and Van Loan (2003) Moler, C., Van Loan, C., 2003. Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later. SIAM review 45, 3–49.
  • Ourselin et al. (2001) Ourselin, S., Roche, A., Subsol, G., Pennec, X., Ayache, N., 2001. Reconstructing a 3d structure from serial histological sections. Image and vision computing 19, 25–31.
  • Parisi (1988) Parisi, G., 1988. Statistical field theory. Addison-Wesley.
  • Pitiot et al. (2006) Pitiot, A., Bardinet, E., Thompson, P.M., Malandain, G., 2006. Piecewise affine registration of biological images for volume reconstruction. Medical image analysis 10, 465–483.
  • Pluim et al. (2003) Pluim, J.P., Maintz, J.A., Viergever, M.A., 2003. Mutual-information-based registration of medical images: a survey. IEEE transactions on medical imaging 22, 986–1004.
  • Rangarajan et al. (1997) Rangarajan, A., Chui, H., Mjolsness, E., Pappu, S., Davachi, L., Goldman-Rakic, P., Duncan, J., 1997. A robust point-matching algorithm for autoradiograph alignment. Medical image analysis 1, 379–398.
  • Roy et al. (2014) Roy, S., Carass, A., Jog, A., Prince, J.L., Lee, J., 2014. MR to CT registration of brains using image synthesis, in: Proceedings of SPIE, NIH Public Access.
  • Roy et al. (2013) Roy, S., Carass, A., Prince, J.L., 2013. Magnetic resonance image example-based contrast synthesis. IEEE transactions on medical imaging 32, 2348–2363.
  • Rueckert et al. (1999) Rueckert, D., Sonoda, L.I., Hayes, C., Hill, D.L., Leach, M.O., Hawkes, D.J., 1999. Nonrigid registration using free-form deformations: application to breast MR images. IEEE transactions on medical imaging 18, 712–721.
  • Saygin et al. (2017) Saygin, Z., Kliemann, D., Iglesias, J., van der Kouwe, A., Boyd, E., Reuter, M., Stevens, A., Van Leemput, K., McKee, A., Frosch, M., Fischl, B., Augustinack, J.C., 2017. High-resolution magnetic resonance imaging reveals nuclei of the human amygdala: manual segmentation to automatic atlas. NeuroImage 155, 370–382.
  • Schmitt et al. (2007) Schmitt, O., Modersitzki, J., Heldmann, S., Wirtz, S., Fischer, B., 2007. Image registration of sectioned brains. International Journal of Computer Vision 73, 5–39.
  • Shattuck et al. (2008) Shattuck, D.W., Mirza, M., Adisetiyo, V., Hojatkashani, C., Salamon, G., Narr, K.L., Poldrack, R.A., Bilder, R.M., Toga, A.W., 2008. Construction of a 3D probabilistic atlas of human cortical structures. Neuroimage 39, 1064–1080.
  • Siversson et al. (2015) Siversson, C., Nordström, F., Nilsson, T., Nyholm, T., Jonsson, J., Gunnlaugsson, A., Olsson, L.E., 2015. MRI only prostate radiotherapy planning using the statistical decomposition algorithm. Medical physics 42, 6090–6097.
  • Tu et al. (2008) Tu, Z., Narr, K.L., Dollár, P., Dinov, I., Thompson, P.M., Toga, A.W., 2008. Brain anatomical structure segmentation by hybrid discriminative/generative models. IEEE transactions on medical imaging 27, 495–508.
  • van Tulder and de Bruijne (2015) van Tulder, G., de Bruijne, M., 2015. Why does synthesized data improve multi-sequence classification?, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer. pp. 531–538.
  • Van Leemput et al. (1999) Van Leemput, K., Maes, F., Vandermeulen, D., Suetens, P., 1999. Automated model-based bias field correction of MR images of the brain. IEEE transactions on medical imaging 18, 885–896.
  • Vercauteren et al. (2007) Vercauteren, T., Pennec, X., Perchant, A., Ayache, N., 2007. Non-parametric diffeomorphic image registration with the demons algorithm. Medical Image Computing and Computer-Assisted Intervention–MICCAI 2007 , 319–326.
  • Wang et al. (2017) Wang, G., Li, W., Zuluaga, M.A., Pratt, R., Patel, P.A., Aertsen, M., Doel, T., David, A.L., Deprest, J., Ourselin, S., Vercauteren, T., 2017. Interactive medical image segmentation using deep learning with image-specific fine-tuning. arXiv preprint arXiv:1710.04043 .
  • Wells et al. (1996) Wells, W.M., Viola, P., Atsumi, H., Nakajima, S., Kikinis, R., 1996. Multi-modal volume registration by maximization of mutual information. Medical image analysis 1, 35–51.
  • Wolterink et al. (2017) Wolterink, J.M., Dinkla, A.M., Savenije, M.H., Seevinck, P.R., van den Berg, C.A., Išgum, I., 2017. Deep MR to CT synthesis using unpaired data, in: International Workshop on Simulation and Synthesis in Medical Imaging, Springer. pp. 14–23.
  • Yang et al. (2012) Yang, Z., Richards, K., Kurniawan, N.D., Petrou, S., Reutens, D.C., 2012. MRI-guided volume reconstruction of mouse brain from histological sections. Journal of neuroscience methods 211, 210–217.
  • Yushkevich et al. (2006) Yushkevich, P.A., Avants, B.B., Ng, L., Hawrylycz, M., Burstein, P.D., Zhang, H., Gee, J.C., 2006. 3D mouse brain reconstruction from histology using a coarse-to-fine approach. Lecture Notes in Computer Science 4057, 230–237.
  • Yushkevich et al. (2009) Yushkevich, P.A., Avants, B.B., Pluta, J., Das, S., Minkoff, D., Mechanic-Hamilton, D., Glynn, S., Pickup, S., Liu, W., Gee, J.C., Grossman, M., Detre, J.A., 2009. A high-resolution computational atlas of the human hippocampus from postmortem magnetic resonance imaging at 9.4T. Neuroimage 44, 385–398.
  • Zhao et al. (1993) Zhao, W., Young, T.Y., Ginsberg, M.D., 1993. Registration and three-dimensional reconstruction of autoradiographic images by the disparity analysis method. IEEE Transactions on medical imaging 12, 782–791.
  • Zhu et al. (2017) Zhu, J.Y., Park, T., Isola, P., Efros, A.A., 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593 .