When analyzing collections of imaging data, a general goal is to quantify similarities and differences across images. In medical image analysis and computational anatomy, a common goal is to find patterns that can distinguish morphologies of healthy and diseased subjects, aiding the understanding of the epidemiology of the population. Such distinguishing patterns are typically investigated by comparing single observations to a representative member of the underlying population, and statistical analyses are performed relative to this representation. In the context of medical imaging, it has been customary to choose the template from the observed data as a common image of the population. However, such an approach has been shown to be highly dependent on the choice of the image. In more recent approaches, templates are estimated using statistical methods that make use of the additional information provided by the observed data.
In order to quantify the differences between images, the dominant modes of variation in the data must be identified. Two major types of variability in a collection of comparable images are intensity variation and variation in point-correspondences. Point-correspondence or warp variation can be viewed as shape variability of an individual observation with respect to the template. Intensity variation is the variation that is left when the observations are compensated for the true warp variation. This typically includes noise artifacts such as systematic errors and sensor noise, as well as anatomical variation such as tissue density or tissue texture. Typically, one would assume that the intensity variation consists of both independent noise and spatially correlated effects.
In this work, we introduce a flexible class of mixed-effects models that explicitly model the template as a fixed effect and intensity and warping variation as random effects; see Figure 1. This simultaneous approach enables separation of the random variation effects in a data-driven fashion using alternating maximum-likelihood estimation and prediction. The resulting model will therefore choose the separation of intensity and warping effects that is most likely given the patterns of variation found in the data. From the model specification and estimates, we are able to denoise observations through linear prediction in the model under the maximum likelihood estimates. Estimation in the model is performed with successive linearization around the warp parameters, enabling the use of linear mixed-effects predictors and avoiding the use of sampling techniques to account for nonlinear terms. We apply our method to datasets of face images and 2D brain MRIs to illustrate its ability to estimate templates for populations and predict warp and intensity effects.
1.1 Outline of the paper
The paper is structured as follows. In Section 2, we give an overview of previously introduced methods for analyzing image data with warp variation. Section 3 covers the mixed-effects model, including a description of the estimation procedure (Section 3.1) and how to predict from the model (Section 3.2). In Section 4, we give an example of how to model spatially correlated variations with a tied-down Brownian sheet. We consider two applications of the mixed-effects model to real-life datasets in Section 5, and Section 6 contains a simulation study that is used for comparing the precision of the model to more conventional approaches.
The model introduced in this paper focuses on separately modelling the intensity and warp variation. Image registration conventionally only focuses on identifying warp differences between pairs of images. The intensity variation is not included in the model, and possible removal of this effect is considered as a pre- or postprocessing step. The warp differences are often found by solving a variational problem of the form
see for example . Here measures the dissimilarity between the fixed image and the warped image , is a regularization on the warp , and is a weight that is often chosen by ad-hoc methods. After registration, either the warp, captured in , or the intensity differences between and can be analyzed . Several works have defined methods that incorporate registration as part of the defined models. The approach described in this paper will also regard registration as a part of the proposed model and address the following three problems that arise in image analysis: (a) being able to estimate model parameters such as in a data-driven fashion; (b) assuming a generative statistical model that gives explicit interpretation of the terms that correspond to the dissimilarity and penalization ; and (c) being simultaneous in the estimation of population-wide effects, such as the mean or template image, and individual per-image effects, such as the warp and intensity effects. These features are of fundamental importance in image registration, and many works have addressed combinations of them. The main difference between our approach and state-of-the-art statistical registration frameworks is that we propose a simultaneous random model for warp and intensity variation. As we will see, the combination of maximum likelihood estimation and the simultaneous random model for warp and intensity variation manifests itself in a trade-off where the uncertainty of both effects is taken into account simultaneously. As a result, when estimating fixed effects and predicting random effects in the model, the most likely separation of the effects, given the observed patterns of variation in the entire data material, is used.
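The displayed variational problem did not survive extraction; in commonly used notation (the symbols $E$, $D$, $R$, $\lambda$, $I_1$, $I_2$ and $\varphi$ are assumptions here, chosen to match the description in the text), it takes the form

```latex
E(\varphi) = D\bigl(I_1,\, I_2 \circ \varphi\bigr) + \lambda\, R(\varphi),
```

where $D$ measures the dissimilarity between the fixed image $I_1$ and the warped image $I_2 \circ \varphi$, $R$ regularizes the warp $\varphi$, and $\lambda > 0$ is the weight mentioned above.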
Methods for analyzing collections of image data, for example template estimation in medical imaging , with both intensity and warping effects can be divided into two categories, two-step methods and simultaneous methods. Two-step methods perform alignment as a preprocessing step before analyzing the aligned data. Such methods can be problematic because the data is modified and the uncertainty related to the modification is ignored in the subsequent analysis. This means that the effect of intensity variation is generally underestimated, which can introduce bias in the analysis, see  for the corresponding situation in 1D functional data analysis. Simultaneous methods, on the other hand, seek to analyze the images in a single step that includes the alignment procedure.
Conventional simultaneous methods typically use data terms to measure dissimilarity. Such dissimilarity measures are equivalent to the model assumption that the intensity variation in the image data consists solely of uncorrelated Gaussian noise. This approach is commonly used in image registration with the sum of squared differences (SSD) dissimilarity measure, and in atlas estimation . Since the data term is very sensitive to systematic deviations from the model assumption, for example contrast differences, the method can perform poorly. One way to make the data term more robust against systematic intensity variation, and against insufficient information in the data term in general, is to add a strong penalty on the variation of the warping functions. This is, however, an implicit solution to the problem, since the gained robustness is a side effect of regularizing another model component. As a consequence, the effect on the estimates is very hard to quantify, and it is very hard to specify a suitable regularization for a specific type of intensity variation. This approach is, for example, taken in the variational formulations of the template estimation problem in . An elegant instance of this strategy is the Bayesian model presented in , where the warping functions are modeled as latent Gaussian effects with an unknown covariance that is estimated in a data-driven fashion. Conversely, one can seek to remove systematic intensity variation prior to the analysis, in a reversed two-step method, for example by using bias-correction techniques for MRI data . The presence of warp variation can, however, influence the estimation of the intensity effects.
Analysis of images with systematic intensity differences can be improved using data dissimilarity measures that are robust or invariant to such systematic differences. However, robustness and invariance come at a cost in accuracy. By choosing a specific kind of invariance in the dissimilarity measure, the model is given a pre-specified recipe for separating intensity and warping effects; the warps should maximize the invariant part of the residual under the given model parameters. Examples of classical robust data terms include -norm data terms , Charbonnier data terms , and Lorentzian data terms . Robust data terms are often challenging to use, since they may not be differentiable (-norms) or may not be convex (Lorentzian data term). A wide variety of invariant data terms have been proposed, and they are useful when the invariances represent a dominant mode of variation in the data. Examples of classical data terms that are invariant to various linear and nonlinear photometric relationships are normalized cross-correlation, correlation ratio and mutual information [20, 13, 36, 27]. Another approach for achieving robust or invariant data terms is to transform the data that is used in the data term. A classical idea is to match discretely computed gradients or other discretized derivative quantities . A related idea is to construct invariant data terms based on discrete transformations. This type of approach has become increasingly popular in image matching in recent years. Examples include the rank transform and the census transform [47, 22, 10, 11], and more recently the complete rank transform . While both robust and invariant data terms have been shown to give very good results in a wide array of applications, they induce a fixed measure of variation that does not directly model the variation in the data. Thus, the general applicability of the method can come at the price of limited accuracy.
Several alternative approaches for analyzing warp and intensity simultaneously have been proposed [24, 15, 3, 45]. In , warps between images are considered as a combination of two transformation fields: one representing the image motion (warp effect) and one describing the change of image brightness (intensity effect). Based on this definition, warp and intensity variation can be modeled simultaneously. An alternative approach is considered in , where an invariant metric is used, which enables analysis of the dissimilarity in point correspondences between images while disregarding the intensity variation. These methods are not statistical in the sense that they do not seek to model the random structures of the variation of the image data. A statistical model is presented in , where parameters for texture, shape variation (warp) and rendering are estimated using maximum-a-posteriori estimation.
To overcome the mentioned limitations of conventional approaches, we propose to model the sources of variation in the data statistically. By using a statistical model where we assume parametric covariance structures for the different types of observed variation, the variance parameters can be estimated from the data. The contribution of different types of variation is thus weighted differently in the data term. By using, for example, maximum-likelihood estimation, the most likely form of the variation given the data is penalized the least. We emphasize that, in contrast to previous mixed-effects models incorporating warp effects [1, 48], the goal here is to simultaneously model warp and intensity effects. These effects impose randomness relative to a template, the fixed effect, that is estimated during the inference process.
Nonlinear mixed-effects models are a commonly used tool in statistics. These types of models can be computationally intensive to fit, and are rarely used for analyzing large data sizes such as image data. We formulate the proposed model as a nonlinear mixed-effects model and demonstrate how certain model choices can be used to make estimation in the model computationally feasible for large data sizes. The model incorporates random intensity and warping effects in a small-deformation setting: we do not require warping functions to produce diffeomorphisms. The geometric structure is therefore more straightforward than in, for example, the LDDMM model . From a statistical perspective, the small-deformation setting is much easier to handle than the large-deformation setting, where warping functions are restricted to produce diffeomorphisms.
Instead of requiring diffeomorphisms, we propose a class of models that will produce warping functions that in most cases do not fold. Another advantage of the small-deformation setting is that we can model the warping effects as latent Gaussian disparity vectors in the domain. Such direct modeling allows one to compute a high-quality approximation of the likelihood function by linearizing the model around the modes of the nonlinear latent random variables. The linearized model can be handled using conventional methods for linear mixed-effects models which are very efficient compared to sampling-based estimation procedures.
In the large-deformation setting, the metamorphosis model [41, 42] extends the LDDMM framework for image registration  to include intensity change in images. Warp and intensity differences are modeled separately in metamorphosis, with a Riemannian structure measuring infinitesimal variation in both warp and intensity. While this separation has similarities to the statistical model presented here, we are not aware of any work that has considered likelihood-based estimation of variables in metamorphosis models.
3 Statistical model
We consider spatial functional data defined on taking values in . Let be functional observations on a regular lattice with points , that is, for , . Consider the model in the image space
for , and . Here denotes the template and is a warping function matching a point in to a point in the template . Moreover, is the random spatially correlated intensity variation, for which we assume that , where the spatial correlation is determined by the covariance matrix . The term models independent noise. The template is a fixed effect, while , , and are random.
We will consider warping functions of the form
is the coordinate-wise bilinear spline interpolation of the displacement vectors on a lattice spanned by . In other words, models discrete spatial displacements at the lattice anchor points. Figure 2 shows an example of disparity vectors on a grid of anchor points and the corresponding warping function.
The displacements are modeled as random effects, where is a covariance matrix, and, as a result, the warping functions can be considered nonlinear functional random effects. As
is assumed to be normally distributed with mean zero, small displacements are favored and hence the warp effect will be less prone to fold. The model is a spatial extension of the phase and amplitude varying population pattern (pavpop) model for curves [34, 32].
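A sketch of such a warping function, with anchor-point displacements interpolated bilinearly, can be written as follows (the function and variable names are illustrative assumptions, not the paper's; boundary anchors with zero displacement make the warp the identity on the boundary):

```python
import numpy as np

def warp(points, anchors_x, anchors_y, disp):
    """Evaluate v(x) = x + (bilinear interpolation of anchor displacements).

    points:    (n, 2) query coordinates in [0, 1]^2
    anchors_x, anchors_y: 1D arrays of anchor coordinates (regular lattice)
    disp:      (len(anchors_y), len(anchors_x), 2) displacement vectors
    """
    out = np.empty_like(points, dtype=float)
    for k, (x, y) in enumerate(points):
        # locate the lattice cell containing (x, y)
        i = np.clip(np.searchsorted(anchors_x, x) - 1, 0, len(anchors_x) - 2)
        j = np.clip(np.searchsorted(anchors_y, y) - 1, 0, len(anchors_y) - 2)
        tx = (x - anchors_x[i]) / (anchors_x[i + 1] - anchors_x[i])
        ty = (y - anchors_y[j]) / (anchors_y[j + 1] - anchors_y[j])
        # bilinear weights on the four surrounding anchor displacements
        d = ((1 - tx) * (1 - ty) * disp[j, i]
             + tx * (1 - ty) * disp[j, i + 1]
             + (1 - tx) * ty * disp[j + 1, i]
             + tx * ty * disp[j + 1, i + 1])
        out[k] = (x, y) + d
    return out

anchors = np.array([0.0, 0.5, 1.0])
disp = np.zeros((3, 3, 2))
disp[1, 1] = (0.1, -0.05)              # displace only the center anchor
moved = warp(np.array([[0.5, 0.5], [0.0, 0.0]]), anchors, anchors, disp)
```

With only the center anchor displaced, the center point moves by exactly that displacement while boundary points stay fixed, mirroring the tied-down behaviour discussed later.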
First, we will consider estimation of the template from the functional observations, and we will estimate the contributions of the different sources of variation. In the proposed model, this is equivalent to estimating the covariance structure for the warping parameters, the covariance structure for the spatially correlated intensity variation, and the noise variance . The estimate of the template is found by considering model (2) in the back-warped template space
Because every back-warped image represents on the observation lattice, a computationally attractive parametrization is to model using one parameter per observation point, and evaluate non-observation points using bilinear interpolation. This parametrization is attractive because Henderson’s mixed-model equations [12, 35] suggest that the conditional estimate for given is the pointwise average
if we ignore the slight change in covariance resulting from the back-warping of the random intensity effects. As this estimator depends on the warping parameters, the estimation of and the variance parameters has to be performed simultaneously with the prediction of the warping parameters. We note that, as in any linear model, the estimate of the template is generally quite robust against slight misspecifications of the covariance structure. The idea of estimating the template conditional on the posterior warp is similar to the idea of using a hard EM algorithm for computing the maximum likelihood estimator for .
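A minimal sketch of this pointwise-average template update, assuming the per-image warps have already been applied to the template grid (the helper names are hypothetical, and nearest-neighbour sampling replaces bilinear interpolation for brevity):

```python
import numpy as np

def template_update(images, backwarped_coords):
    """Pointwise-average template update.

    images:            list of (m, m) arrays y_i
    backwarped_coords: list of (m, m, 2) arrays giving, for each template
                       pixel, the (row, col) position in image i that the
                       current warp maps it to.
    """
    m = images[0].shape[0]
    acc = np.zeros((m, m))
    for y, coords in zip(images, backwarped_coords):
        # sample each image at its back-warped grid (nearest-neighbour)
        r = np.clip(np.rint(coords[..., 0]).astype(int), 0, m - 1)
        c = np.clip(np.rint(coords[..., 1]).astype(int), 0, m - 1)
        acc += y[r, c]
    return acc / len(images)

imgs = [np.full((4, 4), 2.0), np.full((4, 4), 4.0)]
grid = np.stack(np.meshgrid(np.arange(4), np.arange(4),
                            indexing="ij"), axis=-1).astype(float)
tmpl = template_update(imgs, [grid, grid])    # identity warps -> plain mean
```

As in the text, this conditional update ignores the change in covariance caused by back-warping the intensity effects, and must be alternated with prediction of the warping parameters.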
We use maximum-likelihood estimation to estimate variance parameters, that is, we need to minimize the negative log-likelihood function of model (2). Note that (2) contains nonlinear random effects due to the term where is a nonlinear transformation of . We handle the nonlinearity and approximate the likelihood by linearizing the model (2) around the current predictions of the warping parameters :
where denotes the Jacobian matrix of with respect to and
Letting , the linearized model can be rewritten
We notice that in this manner, can be approximated as a linear combination of normally distributed variables, hence the negative log-likelihood for the linearized model is given by
where . The idea of linearizing nonlinear mixed-effects models in the nonlinear random effects is a solution that has been shown to be effective and which is implemented in standard software packages [18, 29, 30]. The proposed model is, however, both more general and computationally demanding than what can be handled by conventional software packages. Furthermore, we note that the linearization in a random effect as done in model (7) is fundamentally different from the conventional linearization of a nonlinear dissimilarity measure such as in the variational problem (1). As we see from the linearized model (7), the density of is approximated by the density of a linear combination, , of multivariate Gaussian variables. The likelihood function for the first-order Taylor expansion in of the model (2) is thus a Laplace approximation of the true likelihood, and the quality of this approximation is approximately second order .
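As a toy illustration of this linearization (a 1D analogue with a hypothetical Gaussian-bump template; none of the names or values are from the paper), the warped template is expanded to first order in the warp parameter, and the error of the expansion is second order in the perturbation:

```python
import numpy as np

# 1D analogue: linearize theta(x - w) around the current prediction w0,
#   theta(x - w) ~ theta(x - w0) - theta'(x - w0) * (w - w0),
# turning the nonlinear random effect w into a linear one.
theta = lambda s: np.exp(-8.0 * (s - 0.5) ** 2)     # smooth toy template
dtheta = lambda s: -16.0 * (s - 0.5) * theta(s)     # its derivative

x = np.linspace(0.0, 1.0, 101)
w0, dw = 0.05, 0.01                  # current prediction and a perturbation
exact = theta(x - (w0 + dw))
linear = theta(x - w0) - dtheta(x - w0) * dw
err = np.max(np.abs(exact - linear)) # O(dw^2) linearization error
```

The linearized model agrees with the nonlinear one up to a term quadratic in the perturbation, which is why the resulting Laplace-type approximation of the likelihood is accurate near the current warp predictions.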
3.1.1 Computing the likelihood function
As mentioned above, the proposed model is computationally demanding. Even the approximated likelihood function given in equation (8) is not directly computable because of the large data sizes. In particular, the computations related to determinants and inverses of the covariance matrix are infeasible unless we impose certain structures on these. In the following, we will assume that the covariance matrix for the spatially correlated intensity variation has full rank and sparse inverse. We stress that this assumption is merely made for computational convenience and that the proposed methodology is also valid for non-sparse precision matrices. The zeros in the precision matrix are equivalent to assuming conditional independence between the intensity variation in corresponding pixels given all other pixels . A variety of classical models have this structure; in particular, (higher-order) Gaussian Markov random field models have sparse precision matrices because of their Markov property.
To efficiently do computations with the covariances , we exploit the structure of the matrix. The first term is an update to the intensity covariance with a maximal rank of . Furthermore, the first term of the intensity covariance has a sparse inverse and the second term is of course sparse with a sparse inverse. Using the Woodbury matrix identity, we obtain
which can be computed if we can efficiently compute the inverse of the potentially huge intensity covariance matrix . We can rewrite the inverse intensity covariance as
Thus we can write in a way that only involves operations on sparse matrices. To compute the inner product , we first form the matrix and compute its Cholesky decomposition using the Ng-Peyton method  implemented in the spam R-package . By solving a low-rank linear system using the Cholesky decomposition, we can thus compute . The inner product is then efficiently computed as
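The Woodbury step can be sketched numerically as follows. All sizes, the tridiagonal precision and variable names are illustrative assumptions; scipy's sparse matrices stand in for the spam R-package used in the paper, and the dense reference computation is only for checking the identity:

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
n, q = 400, 9                     # toy sizes: pixels and warp parameters

# Intensity covariance S defined through a sparse SPD precision P (S = P^{-1})
P = sparse.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
Z = rng.standard_normal((n, q))   # low-rank factor (Jacobian-type term)
K = np.eye(q)                     # warp-parameter covariance (toy)
v = rng.standard_normal(n)

# Woodbury: (S + Z K Z^T)^{-1} v = S^{-1}v - S^{-1}Z (K^{-1} + Z^T S^{-1} Z)^{-1} Z^T S^{-1} v
Sinv_v = P @ v                    # S^{-1} v is a sparse matvec, since S^{-1} = P
Sinv_Z = P @ Z
small = np.linalg.inv(K) + Z.T @ Sinv_Z           # only a q x q system
woodbury = Sinv_v - Sinv_Z @ np.linalg.solve(small, Z.T @ Sinv_v)
quad = v @ woodbury               # the inner product v^T C^{-1} v

# dense reference (never needed in practice, only to verify the identity)
C = np.linalg.inv(P.toarray()) + Z @ K @ Z.T
ref = v @ np.linalg.solve(C, v)
```

The point of the rearrangement is visible in the shapes: apart from sparse matrix-vector products, only a q-by-q dense system is solved, with q equal to the (small) number of warp parameters.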
To compute the log determinant in the likelihood, one can use the matrix determinant lemma similarly to what was done above to split the computations into low-rank computations and computing the determinant of ,
For the models that we will consider, the latter computation is done by using the operator approximation proposed in  which, for image data with sufficiently high resolution (e.g. ), gives a high-quality approximation of the determinant of the intensity covariance that can be computed in constant time.
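The matrix determinant lemma step can be sketched as follows (toy sizes and names; a generic SPD matrix stands in for the structured intensity covariance, and the dense log determinant is only the reference check):

```python
import numpy as np

rng = np.random.default_rng(1)
n, q = 300, 6
A = rng.standard_normal((n, n)) / np.sqrt(n)
S = A @ A.T + np.eye(n)           # SPD stand-in for the intensity covariance
Z = rng.standard_normal((n, q))   # low-rank factor
K = np.eye(q) * 0.5               # warp-parameter covariance (toy)

# Matrix determinant lemma:
# log det(S + Z K Z^T) = log det(K^{-1} + Z^T S^{-1} Z) + log det(K) + log det(S)
Sinv_Z = np.linalg.solve(S, Z)
small = np.linalg.inv(K) + Z.T @ Sinv_Z
lemma = (np.linalg.slogdet(small)[1]
         + np.linalg.slogdet(K)[1]
         + np.linalg.slogdet(S)[1])
direct = np.linalg.slogdet(S + Z @ K @ Z.T)[1]
```

Only the last term, log det(S), involves the full-size matrix; in the paper this is precisely the term that is handled by the operator approximation.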
By taking the described strategy, we never need to form a dense matrix, and we can take advantage of the sparse and low-rank structures to reduce the computation time drastically. Furthermore, the fact that we assume equal-size images allows us to only do a single Cholesky factorization per likelihood computation, which is further accelerated by using the updating scheme described in .
After the maximum-likelihood estimation of the template and the variance parameters, we have an estimate for the distribution of the warping parameters. We are therefore able to predict the warping functions that are most likely to have occurred given the observed data. This prediction parallels the conventional estimation of deformation functions in image registration. Let be the density for the distribution of the warping functions given the data and define , in a similar manner. Then, by applying , we see that the warping functions that are most likely to occur are the minimizers of the posterior
Given the updated predictions of the warping parameters, we update the estimate of the template and then minimize the likelihood to obtain updated estimates of the variances. This procedure is then repeated until convergence is obtained. The estimation algorithm is given in Algorithm 1. The run time for the algorithm will be very different depending on the data in question. As an example, we ran the model for 10 MRI mid-sagittal slices (for more details, see Section 5.2) of size , with . We ran the algorithm on an Intel Xeon E5-2680 2.5GHz processor. The run time needed for full maximum likelihood estimation in this setup was 1 hour and 15 minutes using a single core. This run time is without parallelization; the algorithm can be parallelized to run faster.
The spatially correlated intensity variation can also be predicted, either as the best linear unbiased prediction from the linearized model (7) (see e.g. equation 5 in ), or, to avoid a linear correction step when predicting , as the best linear unbiased prediction given the maximum-a-posteriori warp variables
The prediction of the spatially correlated intensity variation can, for example, be used for bias field correction of the images.
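A minimal sketch of such a best linear unbiased prediction, in a 1D analogue where a Brownian-bridge covariance stands in for the spatial model (all names, sizes and parameter values are illustrative assumptions):

```python
import numpy as np

# BLUP of the spatially correlated intensity effect x from the residual
# r = y - warped template (under the MAP warp): x_hat = Sigma (Sigma + tau^2 I)^{-1} r
def bridge_cov(s):
    # c(s, t) = min(s, t) - s*t, the tied-down (bridge) covariance in 1D
    return np.minimum.outer(s, s) - np.outer(s, s)

rng = np.random.default_rng(2)
m, tau2 = 50, 0.05 ** 2
s = np.linspace(0, 1, m + 2)[1:-1]            # interior grid points
Sigma = bridge_cov(s)

# simulate a residual: correlated effect plus independent noise
L = np.linalg.cholesky(Sigma + 1e-12 * np.eye(m))
r = L @ rng.standard_normal(m) + np.sqrt(tau2) * rng.standard_normal(m)

# the BLUP shrinks the residual toward its spatially correlated part
x_hat = Sigma @ np.linalg.solve(Sigma + tau2 * np.eye(m), r)
```

The shrinkage factor per eigendirection is λ/(λ + τ²) < 1, so the predicted systematic component is always a damped version of the residual; the remainder is attributed to independent noise.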
4 Models for the spatially correlated variations
The main challenge of the presented methods is the computability of the likelihood function, in particular computations related to the covariance matrix of the spatially correlated intensity variation . The same issues are not associated with the covariance matrix for the warping parameters, as the dimensions of this matrix are considerably smaller than the dimensions of . At the end of this section, we will give a short description of how the displacement vectors can be modeled, but first we consider the covariance matrix .
As mentioned in the previous section, the path we will pursue to make likelihood computations efficient is to assume that the systematic random effect has a covariance matrix with sparse inverse. In particular, modeling as a Gaussian Markov random field will give sparse precision matrices . The Markov random field structure gives a versatile class of models that has been demonstrated to be able to approximate the properties of general Gaussian fields surprisingly well . Estimation of a sparse precision matrix is a fundamental problem and a vast literature exists on the subject. We mention in passing the fundamental works, [5, 8], which could be adapted to the present setup to estimate unstructured sparse precision matrices. We will however not pursue that extension in the present paper.
We here model as a tied-down Brownian sheet, which is the generalization of the Brownian bridge (which is Markov) to the unit square . The covariance function, , for the tied-down Brownian sheet is
The covariance is zero along the boundary of the unit square and reaches its maximal variance at the center of the image. These properties seem reasonable for many image analysis tasks, where one would expect the subject matter to be centered in the image with little or no variation along the image boundary.
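These two properties can be checked directly from the covariance function (the helper `c` below is an illustrative implementation, not from the paper):

```python
import numpy as np

# Tied-down Brownian sheet covariance on [0,1]^2:
# c((s,t),(s',t')) = (min(s,s') - s s') * (min(t,t') - t t')
def c(p, q):
    (s, t), (u, v) = p, q
    return (min(s, u) - s * u) * (min(t, v) - t * v)

var_center = c((0.5, 0.5), (0.5, 0.5))   # maximal variance, 1/16
var_edge = c((0.0, 0.7), (0.0, 0.7))     # zero variance on the boundary
```

The variance (0.25)² = 1/16 at the center is the maximum over the unit square, and any point with a coordinate equal to 0 or 1 has variance zero.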
Let be the covariance matrix for a Brownian sheet observed at the lattice spanned by and , , with row-major ordering. The precision matrix is sparse with the following structure for points corresponding to non-boundary elements:
For boundary elements, the elements outside the observation boundary vanish.
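Because the covariance factorizes over the two coordinates, the covariance matrix on a full product grid is the Kronecker product of two 1D Brownian-bridge covariance matrices, and the bridge precision is tridiagonal (the bridge is Markov); the sheet precision is therefore a Kronecker product of tridiagonal matrices, with at most nine nonzero entries per row. A small numerical sketch (sizes illustrative):

```python
import numpy as np

m = 6
s = np.arange(1, m + 1) / (m + 1)               # interior grid points
K1 = np.minimum.outer(s, s) - np.outer(s, s)    # 1D Brownian-bridge covariance
Q1 = np.linalg.inv(K1)                          # precision: tridiagonal

# off-tridiagonal entries of Q1 vanish (Markov property of the bridge)
off = Q1.copy()
for d in (-1, 0, 1):
    off -= np.diag(np.diag(Q1, d), d)

# Sheet covariance on the product grid (row-major) is K1 ⊗ K1, so the
# precision Q1 ⊗ Q1 has at most 9 nonzeros per row: the sparse
# neighbourhood structure described above.
Q2 = np.kron(Q1, Q1)
```

This Kronecker/tridiagonal structure is exactly what makes the sparse-precision computations of Section 3.1.1 applicable to the tied-down Brownian sheet.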
As explained in Section 3.1.1, the computational difficulties related to the computation of the log determinant in the negative log-likelihood function (8) come down to computing the log determinant of the intensity covariance . For the tied-down Brownian sheet, the log determinant can be approximated by means of the operator approximation given in [33, Example 3.4]. The approximation is given by
To compute the approximation we cut the sum off after 10,000 terms.
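We do not reproduce the operator approximation of [33] here, but the principle behind such spectral approximations can be illustrated in 1D: suitably scaled, the eigenvalues of the discretized Brownian-bridge covariance converge to the covariance-operator eigenvalues 1/(j²π²), so spectral sums over the operator eigenvalues approximate the matrix log determinant (sizes illustrative):

```python
import numpy as np

m = 400
s = np.arange(1, m + 1) / (m + 1)
K = np.minimum.outer(s, s) - np.outer(s, s)       # discretized bridge covariance
# rescaled matrix eigenvalues approximate the operator eigenvalues 1/(j^2 pi^2)
eigs = np.sort(np.linalg.eigvalsh(K))[::-1] / (m + 1)
```

For the tied-down Brownian sheet, the operator eigenvalues are products of two such 1D spectra, which is why the approximating sum is truncated after a fixed number of terms as described above.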
As a final remark, we note that the covariance function is the Green’s function for the differential operator on under homogeneous Dirichlet boundary conditions. Thus the conditional linear prediction of given by (10) is equivalent to estimating the systematic part of the residual as a generalized smoothing spline with roughness penalty
The tied-down Brownian sheet can also be used to model the covariance between the displacement vectors. Here the displacement vectors given by the warping variables are modeled as discretely observed tied-down Brownian sheets in each displacement coordinate. As was the case for the intensity covariance, this model is a good match to image data since it allows the largest deformations around the middle of the image. Furthermore, the fact that the model is tied down along the boundary means that we will predict the warping functions to be the identity along the boundary of the domain , and for the found variance parameters, the predicted warping functions will be homeomorphic maps of onto
with high probability.
In the applications in the next section, we will use the tied-down Brownian sheet to model the spatially correlated variations.
In this section, we will apply the developed methodology on two different real-life datasets. In the first example, we apply the model to a collection of face images that are difficult to compare due to varying expressions and lighting sources. We compare the results of the proposed model to conventional registration methods and demonstrate the effects of the simultaneous modeling of intensity and warp effects. In the second example, we apply the methodology to the problem of estimating a template from affinely aligned 2D MR images of brains.
5.1 Face registration
Consider the ten face images from the AT&T Laboratories Cambridge Face Database  in Figure 3. The images are all of the same person, but vary in head position, expression and lighting. The dataset contains two challenges from a registration perspective, namely the differences in expression that cause dis-occlusions or occlusions (e.g. showing teeth, closing eyes) resulting in large local deviations; and the difference in placement of the lighting source that causes strong systematic deviations throughout the face.
To estimate a template face from these images, the characteristic features of the face should be aligned, and the systematic local and global deviations should be accounted for. In the proposed model (2), these deviations are explicitly modeled through the random effect .
Using the maximum-likelihood estimation procedure, we fitted the model to the data using displacement vectors on an equidistant interior grid in . We used 5 outer and 3 inner iterations in Algorithm 1. The image value range was scaled to . The estimated variance scale for the random effect was ; for the warp variables, the variance scale was estimated to ; and for the residual variance, the estimated scale was .
To illustrate the effect of the simultaneous modeling of random intensity and warp effects, we estimated a face template using three more conventional variants of the proposed framework: a pointwise estimation that corresponds to model (2) with no warping effect; a Procrustes model that corresponds to model (2) with no intensity component and where the warp variables were modeled as unknown parameters and estimated using maximum-likelihood estimation; and a warp-regularized Procrustes method where the warp variables were penalized using a term where is the precision matrix for the 2D tied-down Brownian sheet with smoothing parameter (chosen to give good visual results).
The estimated templates for the proposed model and the alternative models described above can be found in Figure 4. Going from left to right, it is clear that the sharpness and representativeness of the estimates increase.
To validate the models, we can consider how well they predict the observed faces under the maximum-likelihood estimates and posterior warp predictions. These predictions are displayed in Figure 5. The rightmost column displays the five most deviating observed faces. From the left, the first three columns show the corresponding predictions from the Procrustes model, the warp-regularized Procrustes model and, for comparison, the predicted warped templates from the proposed model. It is clear that both the sharpness and the representativeness increase from left to right. The predictions in the third column show the warped template of model (2), which does not include the predicted intensity effect . The fourth column displays the full prediction from the proposed model, given as the best linear unbiased prediction conditional on the maximum-a-posteriori warp variables . The full predictions are very faithful to the observations, with only minor visible deviations around the eyes in the second and fifth row. This suggests that the chosen model for the spatially correlated intensity variation, the tied-down Brownian sheet, is sufficiently versatile to model the systematic part of the residuals.
Figure 4: Estimated templates using different models. From left to right: no alignment (a model assuming no warping effect and Gaussian white noise for the intensity); Procrustes free warp (the same model, but with a free warping function based on 16 displacement vectors); Procrustes regularized warp (the same model, but with penalized estimation of warping functions, using a 2D tied-down Brownian sheet with fixed scale); and the proposed full model (2).
Figure 5 columns, from left to right: Procrustes; regularized Procrustes; proposed warped template prediction; proposed full prediction; observation.
5.2 MRI slices
The data considered in this section are based on 3D MR images from the ADNI database . We have based the example on images with 18 normal controls (NC), 13 with Alzheimer’s disease (AD) and 19 with mild cognitive impairment (MCI). The 3D images were initially affinely aligned with 12 degrees of freedom and normalized mutual information (NMI) as a similarity measure. After the registration, the images were intensity normalized and the mid-sagittal planes were chosen as the final observations. The mid-sagittal planes are given as observations on an equidistant grid on . Six samples are displayed in Figure 6, where differences in both contrast, placement and shape of the brains are apparent.
For the given data, we used 25 displacement vectors on an equidistant interior grid in . The number of inner iterations in the algorithm was set to , while the number of outer iterations was set to as the variance parameters and likelihood value already stabilized after a couple of iterations. The estimated variance scales are given by for the spatially correlated intensity variation, for the warp variation and for the residual variance. The estimated template can be found in the rightmost column in Figure 7.
For comparison, we have estimated a template without any additional warping (i.e. only using the rigidly aligned slices), and a template estimated using a Procrustes model with fixed warping effects and no systematic intensity variation, but otherwise comparable to the proposed model. These templates can be found in the leftmost and middle columns of Figure 7. Comparing the three, we see a clear increase in detail and sharpness from left to right. The reason for the superiority of the proposed method is both that the regularization of warps is based on maximum-likelihood estimation of the variance parameters and that the prediction of warps takes the systematic intensity deviations into account. Indeed, we can rewrite the data term in the posterior (9) as
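The elided identity can be reconstructed in hedged form. Under assumed notation (with $y$ the observed image stacked as a vector, $\theta\circ v$ the warped template, $S$ the scaled covariance matrix of the spatially correlated intensity effect, $I_m$ the identity, and $\hat{x} = S(S+I_m)^{-1}(y-\theta\circ v)$ the predicted intensity effect), the rewrite reads

```latex
% Hedged reconstruction; symbols as described in the lead-in.
(y - \theta\circ v)^\top (S + I_m)^{-1} (y - \theta\circ v)
  = \| y - \theta\circ v - \hat{x} \|_2^2 + \hat{x}^\top S^{-1} \hat{x},
```

which splits the data term into the squared residual after removing the predicted intensity effect and a roughness penalty on that prediction.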
Thus, in the prediction of warps, there is a trade-off between the regularity of the displacement vectors (one term in eq. 9) and the regularity of the predicted spatially correlated intensity variation given the displacement vectors (the other term).
The difference in regularization of the warps is shown in Figure 8, where the estimated warps using the Procrustes model are compared to the predicted warps from the proposed model. We see that the proposed model predicts much smaller warps than the Procrustes model.
[Figure 7 panels, left to right: rigid registration with scaling; Procrustes free warp; proposed.]
One of the advantages of the mixed-effects model is that we are able to predict the systematic part of the intensity variation of each image, which in turn also gives a prediction of the residual intensity variation—the variation that cannot be explained by systematic effects. In Figure 9, we have predicted the individual observed slices using the Procrustes model and the proposed model. As we also saw in Figure 8, the proposed model predicts less deformation of the template compared to the Procrustes model, and we see that the Brownian sheet model is able to account for the majority of the personal structure in the sulci of the brain. Moreover, the predicted intensity variation seems to model intensity differences introduced by the different MRI scanners well.
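The linear prediction behind these intensity decompositions can be illustrated in a simplified, warp-free setting. The sketch below is an assumption-laden illustration, not the paper's implementation: it uses a 1D tied-down Brownian-motion covariance (the 1D analogue of the tied-down Brownian sheet) and arbitrary variance values, and it predicts the correlated intensity effect by its conditional expectation (the best linear unbiased prediction) given the residual between observation and template.

```python
import numpy as np

# Hedged sketch (not the paper's implementation): best linear unbiased
# prediction of the spatially correlated intensity effect x in the model
#   y = theta + x + eps,  x ~ N(0, s2 * S),  eps ~ N(0, sigma2 * I),
# where theta plays the role of the (warped) template. All names, the
# covariance choice, and the variance values below are assumptions.

def blup_intensity(y, theta, S, s2, sigma2):
    """Predict x by E[x | y] = s2*S (s2*S + sigma2*I)^{-1} (y - theta)."""
    n = y.shape[0]
    C = s2 * S + sigma2 * np.eye(n)   # marginal covariance of y - theta
    return s2 * S @ np.linalg.solve(C, y - theta)

# Toy covariance: tied-down Brownian motion on an interior 1D grid,
# Cov(t_i, t_j) = min(t_i, t_j) - t_i * t_j.
n = 50
t = np.linspace(0, 1, n + 2)[1:-1]
S = np.minimum.outer(t, t) - np.outer(t, t)

rng = np.random.default_rng(0)
theta = np.sin(2 * np.pi * t)                      # stand-in template
x = rng.multivariate_normal(np.zeros(n), 0.1 * S)  # correlated intensity
y = theta + x + 0.05 * rng.standard_normal(n)      # observation

x_hat = blup_intensity(y, theta, S, 0.1, 0.05**2)  # predicted intensity
resid = y - theta - x_hat                          # predicted residual noise
```

The predicted residual `resid` then plays the role of the variation that cannot be explained by systematic effects.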
[Figure 9 panels, left to right: Procrustes warped template prediction; warped template prediction from the proposed model; predicted spatially correlated intensity variation; full prediction; observation.]
6 Simulation study
In this section, we present a simulation study investigating the precision of the proposed model. The results are compared to those of the previously introduced models: the Procrustes free warp model and the regularized Procrustes model. Data are generated from model (2), in which the template is taken to be one of the MRI slices considered in Section 5.2. The warp, intensity, and random noise effects are all drawn from the previously described multivariate normal distributions with their respective variance parameters and applied to the chosen template image. To obtain more realistic brain simulations, the systematic part of the intensity effect was only added to the brain area of the template and not to the background. Since this choice makes the proposed model slightly misspecified, it will be hard to obtain precise estimates of the variance parameters. In practice, one would expect any model with a limited number of parameters to be somewhat misspecified in the presented setting. The simulations thus present a realistic setup, and our main interest will be in estimating the template and predicting the warp and intensity effects. Figure 10 displays five examples of the simulated observations as well as the chosen template.
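The data-generating scheme can be sketched as follows. This is a deliberately crude, self-contained illustration under stated assumptions: the Gaussian-blob template, grid size, mask threshold, variance values, the cumulative-sum surrogate for the tied-down Brownian sheet, and the integer-shift warp are all stand-ins for the actual simulation design.

```python
import numpy as np

# Illustrative sketch of the simulation design: a warped template, plus
# spatially correlated intensity variation restricted to a "brain" mask,
# plus i.i.d. Gaussian noise. All settings are assumptions.
rng = np.random.default_rng(1)
m = 32                                       # grid side length (illustrative)
yy, xx = np.mgrid[0:m, 0:m] / (m - 1)

theta = np.exp(-((xx - 0.5)**2 + (yy - 0.5)**2) / 0.05)  # stand-in template
mask = theta > 0.2                           # stand-in "brain area"

def brownian_sheet_tied(m, rng, scale=0.05):
    """Crude surrogate for a tied-down Brownian sheet: cumulative sums of
    white noise, tied down by subtracting the bilinear boundary trend."""
    z = rng.standard_normal((m, m)) / m
    b = np.cumsum(np.cumsum(z, axis=0), axis=1)
    u = np.linspace(0, 1, m)
    # tie down: B(s,t) - s B(1,t) - t B(s,1) + s t B(1,1)
    b -= (np.outer(u, b[-1, :]) + np.outer(b[:, -1], u)
          - np.outer(u, u) * b[-1, -1])
    return scale * b

x = brownian_sheet_tied(m, rng) * mask       # intensity effect on brain only
eps = 0.02 * rng.standard_normal((m, m))     # residual noise

# small random displacement of the template (nearest-neighbour "warp")
dx, dy = rng.integers(-1, 2, size=2)
warped = np.roll(np.roll(theta, dx, axis=0), dy, axis=1)

y = warped + x + eps                         # one simulated observation
```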
The study is based on 100 datasets of 100 simulated brains each. To each simulated dataset we applied the proposed model, the Procrustes free warp model, and the regularized Procrustes model. The regularization parameter in the regularized Procrustes model was set to the true parameter used for generating the data.
The variance estimates based on the simulations are shown in Figure 11, with the true variance parameters plotted for comparison. We see some bias in the variance parameters. While bias is to be expected, the observed biases for the noise variance and the warp variance scale are bigger than one would expect. The reason for the underestimation of the noise variance seems to be the misspecification of the model: since the model assumes spatially correlated noise outside of the brain area, where there is none, the likelihood assigns the majority of the variation in this area to the systematic intensity effect. The positive bias of the warp variance scale seems to be a compensating effect for the underestimated noise variance.
The left panel of Figure 12 shows the mean squared difference for the templates estimated with the three types of models. We see that the proposed model produces considerably more accurate estimates than the alternative frameworks.
To give an example of the difference between the template estimates of the three models, one set of template estimates for each model is shown in Figure 13. From this example we see that the template from the proposed model is slightly sharper than those from the Procrustes models and more similar to the true template, which was also the conclusion obtained from the density of the mean squared differences for the template estimates (Figure 12).
The right panel of Figure 12 shows the mean squared prediction/estimation error of the warp effects. The error is calculated using only the warp effects in the brain area, since the background is completely untextured and any warp effect there is determined entirely by the prediction/estimation in the brain area. We find that the proposed model estimates warp effects that are closest to the true warps. It is worth noticing that the proposed model is considerably better at predicting the warp effects than the regularized Procrustes model, despite the fact that the warp regularization parameter in the latter was chosen equal to the true parameter. Examples of the true warping functions in the simulated data and the predicted/estimated effects from the different models are shown in Figure 14. None of the considered models are able to make sensible predictions on the background of the brain, which is to be expected. In the brain region, the predicted warps for the proposed model are very similar to the true warp effect, which Figure 12 shows to be a general tendency.
[Figure 14 row labels include: true warp effect; Procrustes free warp.]
7 Conclusion and outlook
We generalized the likelihood-based mixed-effects model for template estimation and separation of phase and intensity variation to 2D images. This type of model was originally proposed for curve data . As the model is computationally demanding for high-dimensional data, we presented an approach for efficient likelihood calculations. We proposed an algorithm for maximum-likelihood-based inference in the model and applied it to two real-life datasets.
Based on the data examples, we showed how the estimated template had desirable properties and how the model was able to simultaneously separate sources of variation in a meaningful way. This feature eliminates the bias from conventional sequential methods that process data in several independent steps, and we demonstrated how this separation resulted in well-balanced trade-offs between the regularization of warping functions and intensity variation.
We conducted a simulation study to investigate the precision of the template estimates and warp-effect predictions of the proposed model, comparing it with a Procrustes free warp model and a regularized Procrustes model. Since the noise model was misspecified, the proposed methodology could not recover precise maximum likelihood estimates of the variance parameters. However, the maximum likelihood estimate of the template was considerably sharper and closer to the true template than those of the alternative Procrustes models. Furthermore, we demonstrated that the proposed model was better at predicting the warping effects than the alternative models.
The main restriction of the proposed model is the computability of the likelihood function. We resolved this by modeling intensity variation as a Gaussian Markov random field. An alternative approach would be to use the computationally efficient operator approximations of the likelihood function for image data suggested in . This approach would, however, still require a specific choice of parametric family of covariance functions, or equivalently, a family of positive definite differential operators. An interesting and useful extension would be to allow a free low-rank spatial covariance structure and estimate it from the data. This could, for example, be done by extending the proposed model (2) to a factor analysis model where both the mean function and the intensity variation are modeled in a common functional basis, requiring a specific rank of the covariance of the intensity effect. Such a model could be fitted by means of an EM algorithm similar to the one for the reduced-rank model for functional principal component analysis proposed in , and it would allow simulation of realistic observations by sampling from the model.
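The computational point about the Gaussian Markov random field can be made concrete: with a sparse precision matrix Q, both the quadratic form and the log-determinant in the Gaussian log-likelihood are cheap via a sparse factorization. The snippet below is a hedged sketch: the lattice-Laplacian precision is an illustrative stand-in, not the paper's tied-down Brownian sheet model, and all names are assumptions.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

def lattice_precision(m, kappa=0.1):
    """Sparse precision on an m x m grid: Q = L + kappa*I, with L the 2D
    Dirichlet Laplacian built as a Kronecker sum of 1D second differences.
    Illustrative stand-in for a GMRF intensity model."""
    e = np.ones(m)
    L1 = sparse.diags([-e[:-1], 2 * e, -e[:-1]], offsets=[-1, 0, 1])
    I = sparse.identity(m)
    return (sparse.kron(L1, I) + sparse.kron(I, L1)
            + kappa * sparse.identity(m * m)).tocsc()

def gmrf_loglik(x, Q):
    """Log-density of x ~ N(0, Q^{-1}) using a sparse LU factorization:
    log det Q is the sum of log |U_ii| (Q is symmetric positive definite)."""
    lu = splu(Q)
    logdet = np.sum(np.log(np.abs(lu.U.diagonal())))
    n = Q.shape[0]
    return 0.5 * (logdet - x @ (Q @ x) - n * np.log(2 * np.pi))

m = 20
Q = lattice_precision(m)
ll0 = gmrf_loglik(np.zeros(m * m), Q)  # log-density at the mode
```

The factorization cost grows with the fill-in of the sparse factors rather than with the cube of the grid size, which is what makes the likelihood tractable for images.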
For the computation of the likelihood function of the nonlinear model, we relied on local linearization, which is a simple, well-proven, and effective approach. In recent years, alternative frameworks for maximum likelihood estimation in nonlinear mixed-effects models have emerged, see  and the references therein. An interesting path for future work would be to formulate the proposed model in such a framework, which promises better accuracy than the local linear approximation. This would allow one to investigate how much the linear approximation of the likelihood affects the estimated parameters. In this respect, it would also be interesting to compare computing times across methods to identify a suitable trade-off between accuracy and computing time.
The model proposed in this paper is a tool for analyzing 2D images. As it stands, the model could in principle be applied to higher-dimensional images as well, but the analysis would be computationally infeasible with the current implementation. Extending the proposed model to 3D images therefore requires new computational methods for calculating the likelihood function.
-  Stéphanie Allassonniere, S Durrleman, and E Kuhn, Bayesian mixed effect atlas estimation with a diffeomorphic deformation model, SIAM Journal on Imaging Sciences, 8 (2015), pp. 1367–1395.
-  M. J. Black and P. Anandan, The robust estimation of multiple motions: Parametric and piecewise-smooth flow fields, Computer Vision and Image Understanding, 63 (1996), pp. 75–104.
-  Volker Blanz and Thomas Vetter, A morphable model for the synthesis of 3D faces, in Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’99, New York, NY, USA, 1999, ACM Press/Addison-Wesley Publishing Co., pp. 187–194.
-  Andrés Bruhn, Joachim Weickert, and Christoph Schnörr, Lucas/Kanade meets Horn/Schunck: Combining local and global optic flow methods, International Journal of Computer Vision, 61 (2005), pp. 211–231.
-  Tony Cai, Weidong Liu, and Xi Luo, A constrained minimization approach to sparse precision matrix estimation, Journal of the American Statistical Association, 106 (2011), pp. 594–607.
-  Bob Carpenter, Andrew Gelman, Matt Hoffman, Daniel Lee, Ben Goodrich, Michael Betancourt, Michael A Brubaker, Jiqiang Guo, Peter Li, and Allen Riddell, Stan: A probabilistic programming language, Journal of Statistical Software, (2016).
-  Oliver Demetz, David Hafner, and Joachim Weickert, The complete rank transform: A tool for accurate and morphologically invariant matching of structures, in Proc. 2013 British Machine Vision Conference, Bristol, UK, 2013.
-  Jerome Friedman, Trevor Hastie, and Robert Tibshirani, Sparse inverse covariance estimation with the graphical lasso, Biostatistics, 9 (2008), pp. 432–441.
-  Reinhard Furrer, Stephan R Sain, et al., spam: A sparse matrix R package with emphasis on MCMC methods for Gaussian Markov random fields, Journal of Statistical Software, 36 (2010), pp. 1–25.
-  David Hafner, Oliver Demetz, and Joachim Weickert, Why is the census transform good for robust optic flow computation?, in International Conference on Scale Space and Variational Methods in Computer Vision, Springer, 2013, pp. 210–221.
-  David Hafner, Oliver Demetz, Joachim Weickert, and Martin Reißel, Mathematical foundations and generalisations of the census transform for robust optic flow computation, Journal of Mathematical Imaging and Vision, 52 (2015), pp. 71–86.
-  Charles R Henderson, Estimation of genetic parameters, Biometrics, 6 (1950), pp. 186–187.
-  Gerardo Hermosillo, Christophe Chefd’Hotel, and Olivier Faugeras, Variational methods for multimodal image matching, International Journal of Computer Vision, 50 (2002), pp. 329–343.
-  Gareth M James, Trevor J Hastie, and Catherine A Sugar, Principal component models for sparse functional data, Biometrika, 87 (2000), pp. 587–602.
-  Anne Jorstad, David Jacobs, and Alain Trouvé, A deformation and lighting insensitive metric for face recognition based on dense correspondences, IEEE, June 2011, pp. 2353–2360.
-  S. Joshi, Brad Davis, Matthieu Jomier, and Guido Gerig, Unbiased diffeomorphic atlas construction for computational anatomy, NeuroImage, 23 (2004), pp. 151–160.
-  Steffen L Lauritzen, Graphical models, Oxford University Press, 1996.
-  Mary J. Lindstrom and Douglas M. Bates, Nonlinear mixed effects models for repeated measures data, Biometrics, 46 (1990), pp. 673–687.
-  Jun Ma, Michael I. Miller, Alain Trouvé, and Laurent Younes, Bayesian template estimation in computational anatomy, NeuroImage, 42 (2008), pp. 252–261.
-  Frederik Maes, Andre Collignon, Dirk Vandermeulen, Guy Marchal, and Paul Suetens, Multimodality image registration by maximization of mutual information, Medical Imaging, IEEE Transactions on, 16 (1997), pp. 187–198.
-  Bo Markussen, Functional data analysis in an operator-based mixed-model framework, Bernoulli, 19 (2013), pp. 1–17.
-  Mahmoud A. Mohamed and Baerbel Mertsching, TV-L1 optical flow estimation with image details recovering based on modified census transform, in Advances in Visual Computing, Springer, 2012, pp. 482–491.
-  Radford M. Neal and Geoffrey E. Hinton, A view of the EM algorithm that justifies incremental, sparse, and other variants, in Learning in graphical models, Springer, 1998, pp. 355–368.
-  S. Negahdaripour, Revised definition of optical flow: integration of radiometric and geometric cues for dynamic scene analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, 20 (1998), pp. 961–979.
-  Esmond G Ng and Barry W Peyton, Block sparse Cholesky algorithms on advanced uniprocessor computers, SIAM Journal on Scientific Computing, 14 (1993), pp. 1034–1056.
-  A. Pai, S. Sommer, L. Sorensen, S. Darkner, J. Sporring, and M. Nielsen, Kernel bundle diffeomorphic image registration using stationary velocity fields and Wendland basis functions, IEEE Transactions on Medical Imaging, PP (2015), pp. 1–1.
-  Giorgio Panin, Mutual information for multi-modal, discontinuity-preserving image registration, in Advances in Visual Computing, Springer, 2012, pp. 70–81.
-  N. Papenberg, A. Bruhn, T. Brox, S. Didas, and J. Weickert, Highly accurate optical flow computations with theoretically justified warping, International Journal of Computer Vision, 67 (2006), pp. 141–158.
-  Jose Pinheiro and Douglas Bates, Mixed-effects models in S and S-PLUS, Springer Science & Business Media, 2006.
-  José Pinheiro, Douglas Bates, Saikat DebRoy, and Deepayan Sarkar, Linear and nonlinear mixed effects models, R package version, 3 (2007), p. 57.
-  Thomas Pock, Martin Urschler, Christopher Zach, Reinhard Beichel, and Horst Bischof, A duality based algorithm for TV-L1 optical flow image registration, in Medical Image Computing and Computer-Assisted Intervention–MICCAI 2007, Springer, 2007, pp. 511–518.
-  Lars Lau Raket, pavpop version 0.10, 2016. https://github.com/larslau/pavpop/.
-  Lars Lau Raket and Bo Markussen, Approximate inference for spatial functional data on massively parallel processors, Computational Statistics & Data Analysis, 72 (2014), pp. 227 – 240.
-  Lars Lau Raket, Stefan Sommer, and Bo Markussen, A nonlinear mixed-effects model for simultaneous smoothing and registration of functional data, Pattern Recognition Letters, 38 (2014), pp. 1–7.
-  G. K. Robinson, That BLUP is a good thing: The estimation of random effects, Statistical Science, 6 (1991), pp. 15–32.
-  Alexis Roche, Grégoire Malandain, Xavier Pennec, and Nicholas Ayache, The correlation ratio as a new similarity measure for multimodal image registration, in Medical Image Computing and Computer-Assisted Interventation—MICCAI’98, Springer, 1998, pp. 1115–1124.
-  Håvard Rue and Håkon Tjelmeland, Fitting Gaussian Markov random fields to Gaussian fields, Scandinavian journal of Statistics, 29 (2002), pp. 31–49.
-  Ferdinando Silvestro Samaria, The Database of Faces, AT&T. http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html.
-  A. Sotiras, C. Davatzikos, and N. Paragios, Deformable medical image registration: A survey, IEEE Transactions on Medical Imaging, 32 (2013), pp. 1153–1190.
-  Li Su, Andrew M. Blamire, Rosie Watson, Jiabao He, Benjamin Aribisala, and John T. O’Brien, Cortical and subcortical changes in Alzheimer’s disease: A longitudinal and quantitative MRI study, Current Alzheimer Research, 13 (2016), pp. 534–544.
-  A. Trouvé and L. Younes, Local geometry of deformable templates, SIAM Journal on Mathematical Analysis, 37 (2005), pp. 17–59.
-  Alain Trouvé and Laurent Younes, Metamorphoses through Lie group action, Foundations of Computational Mathematics, 5 (2005), pp. 173–198.
-  N.J. Tustison, B.B. Avants, P.A. Cook, Yuanjie Zheng, A. Egan, P.A. Yushkevich, and J.C. Gee, N4ITK: Improved N3 bias correction, IEEE Transactions on Medical Imaging, 29 (2010), pp. 1310–1320.
-  Russ Wolfinger, Laplace’s approximation for nonlinear mixed models, Biometrika, 80 (1993), pp. 791–795.
-  Xudong Xie and Kin-Man Lam, Face recognition using elastic local reconstruction based on a single face image, Pattern Recognition, 41 (2008), pp. 406–417.
-  Laurent Younes, Shapes and Diffeomorphisms, Springer, 2010.
-  Ramin Zabih and John Woodfill, Non-parametric local transforms for computing visual correspondence, in Computer Vision—ECCV’94, Springer, 1994, pp. 151–158.
-  Miaomiao Zhang, Nikhil Singh, and P. Thomas Fletcher, Bayesian estimation of regularization and atlas building in diffeomorphic image registration, in Information Processing for Medical Imaging (IPMI), Lecture Notes in Computer Science, Springer, 2013, pp. 37–48.