Do Public Datasets Assure Unbiased Comparisons for Registration Evaluation?

03/20/2020 ∙ by Jie Luo, et al. ∙ Harvard University

With the increasing availability of new image registration approaches, unbiased evaluation is increasingly needed so that clinicians can choose the most suitable approach for their application. Current evaluations typically rely on landmarks in manually annotated datasets. As a result, the quality of the annotations is crucial for unbiased comparisons. Even though most data providers claim to have quality control over their datasets, an objective third-party screening can be reassuring for intended users. In this study, we use the variogram to screen the manually annotated landmarks in two datasets used to benchmark registration in image-guided neurosurgery. The variogram provides an intuitive 2D representation of the spatial characteristics of annotated landmarks. Using variograms, we identified potentially problematic cases and had them examined by experienced radiologists. We found that (1) a small number of annotations may have fiducial localization errors; and (2) the landmark distribution for some cases is not ideal for offering fair comparisons. If unresolved, both findings could incur bias in registration evaluation.




1 Introduction

The evaluation of non-rigid registration is challenging for two reasons. First, quantitative validation between aligned images is sometimes difficult due to the lack of ground truth. Second, the location where accurate registration is needed may vary by application, e.g., in brain atlas building, good alignment of the ventricle region is sought, while in image-guided neurosurgery, surgeons are interested in accurate registration of the preoperative (p-) tumor boundary to the intraoperative (i-) coordinate space [1, 2, 3, 4]. Because of these issues, it is difficult to set up a unified standard for characterizing registration error [5, 6].

1.0.1 Related work

In early work, image similarity measures, such as the sum of squared differences or mutual information, were used as evaluation criteria for registration [5, 6, 7]. The “Retrospective Image Registration Evaluation” (RIRE) project [8, 9, 10] introduced Target Registration Error (TRE) and Fiducial Registration Error (FRE) to evaluate registration. TRE is the true error of registered target points in physical space, while FRE represents the error of registered annotated fiducial markers in image space. Even though annotated fiducial markers may not be truly accurate due to operator error, FRE is often used as a surrogate for TRE for convenience [11, 12, 13]. The Vista [14] and NIREP [15] projects included additional registration error measures, e.g., region of interest (ROI) overlap, average volume difference and the Jacobian of the deformation field. Recently, multiple registration approaches were compared based on Computed Tomography of the abdomen [16, 17, 18] and Magnetic Resonance (MR) images of the brain [19, 20, 21, 22].

Due to its simplicity and the reliability issues of other criteria [23], FRE has become the most widely used registration error measure. However, using FRE has certain limitations:

  1. Because fiducial markers (landmarks) are annotated by localization algorithms (manual, automatic or semi-automatic methods), they may contain Fiducial Localization Error (FLE) [9], which measures the discrepancy between an annotated landmark and its true location. FLEs cause false registration errors and should be avoided.

  2. The FRE only estimates the error at specific landmark locations, thus a dense population of landmarks is preferred. If landmarks are sparse or are not distributed evenly across the entire ROI, a bias that favors regions with landmarks in the registration evaluation may be introduced.

Recently, more annotated datasets have become publicly available, and these sets are being used to compare existing algorithms and evaluate new methods. Newly proposed registration algorithms are then often validated solely by demonstrating superior performance on these datasets [24, 25].

To provide an unbiased evaluation of registration, the quality of the annotations is crucial. However, to the best of our knowledge, objective quality control over annotation in public datasets has been overlooked by the image registration community. Even though many providers claim to have mitigated FLEs and other problems by having multiple observers localize the landmarks (and averaging their results) [7], an objective third-party screening can be reassuring for intended users.

In this study, we perform a third-party screening of the annotations of two benchmark datasets for image-guided neurosurgery, RESECT [27] and BITE [28]. Both datasets have corresponding landmarks on p-MR and i-Ultrasound (US) images, and the FRE of these landmarks is the standard evaluation metric in related registration challenges [25]. The tool we choose for the screening is the variogram, which has been used extensively in geostatistics to describe the spatial dependence of minerals. The variogram was brought to the medical imaging field as a means of identifying vector outliers for landmark-based image registration [26]. In this screening, we aim to (1) detect any obvious FLEs; and (2) examine the distribution of annotated landmarks. We also provide a constructive discussion of the impact of our findings.

2 Method

For each pair of annotated images, we compute displacements between pairs of corresponding landmarks to generate a 3D vector field V. By analyzing V, we can assess the quality of the annotations. The variogram characterizes the spatial dependence of V and provides an intuitive 2D representation for visual inspection [26].

In this section, we review how to construct the variogram for a vector field and explain how to use it to flag potential FLEs and problematic landmark distributions.

2.1 Constructing the variogram

In an image registration example, let I_f and I_m be the fixed and moving images. (p_i, q_i) represents a pair of manually labeled corresponding landmarks on I_f and I_m, and v_i is the displacement vector from p_i to q_i. For N pairs of landmarks, we have a set of displacement vectors V = {v_1, …, v_N}.

Given a landmark location p_i, let d represent the distance to p_i. The theoretical variogram 2γ(d) is the expected value of the squared differences between v_i and the other v_j's whose starting points are a distance d away from p_i:

2γ(d) = E[ ||v_i − v_j||² ],  where ||p_i − p_j|| = d.

Here 2γ(d) describes the spatial dependency of displacement vectors as a function of the distance.

In this study, we are interested in the pairwise spatial dependence of all displacement vectors. Therefore we use the empirical variogram cloud V_c instead of 2γ(d). Given a vector field V, V_c can be constructed as follows:

  1. For each pair of vectors (v_i, v_j), compute the landmark distance d_ij = ||p_i − p_j||;

  2. Compute the dissimilarity γ_ij = ½||v_i − v_j||²;

  3. Plot all (d_ij, γ_ij) points to obtain V_c.
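The three steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: the array layout (landmark locations p_i and displacements v_i as N×3 arrays) and the ½ factor in γ_ij (the semivariogram convention) are assumptions.

```python
import numpy as np

def variogram_cloud(points, vectors):
    """Empirical variogram cloud V_c of a landmark displacement field.

    points  : (N, 3) array of landmark locations p_i on the fixed image.
    vectors : (N, 3) array of displacement vectors v_i = q_i - p_i.
    Returns (d, gamma): for every unordered pair (i, j), the landmark
    distance d_ij = ||p_i - p_j|| and the dissimilarity
    gamma_ij = 0.5 * ||v_i - v_j||^2, i.e. N*(N-1)/2 value points.
    """
    points = np.asarray(points, dtype=float)
    vectors = np.asarray(vectors, dtype=float)
    i, j = np.triu_indices(len(points), k=1)   # all unordered pairs (i, j)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    gamma = 0.5 * np.sum((vectors[i] - vectors[j]) ** 2, axis=1)
    return d, gamma
```

Plotting `gamma` against `d` as a scatter plot then yields the 2D representation used for visual inspection.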

Figure 1: (a) An illustration of how to compute d_ij and γ_ij. Here v_i and v_j are the displacement vectors for landmarks p_i and p_j respectively; d_ij is the distance between p_i and p_j, while γ_ij measures the displacement difference. (b) A hypothetical V_c generated from the vector field in Fig.1(a). Since the vector field has 5 displacement vectors, V_c has 10 value points. Two of these value points demonstrate the typical increasing trend of V_c: the pair with the smaller d_ij also has the smaller γ_ij.

Fig.1(a) illustrates computing d_ij and γ_ij for two vectors. Fig.1(b) shows a hypothetical variogram cloud generated from the vector field in Fig.1(a). The horizontal and vertical axes represent d_ij and γ_ij respectively. Given N displacement vectors, each v_i forms N−1 pairs of corresponding (d_ij, γ_ij), thus V_c contains N(N−1)/2 value points.

A common dependency assumption is that displacement vectors which are close to each other tend to be more similar than those far apart [29]. In other words, for a point in V_c, a smaller d_ij usually corresponds to a smaller γ_ij. As a result, a typical V_c tends to exhibit an increasing trend. For conciseness, in the rest of this article we refer to V_c as the variogram.

2.2 Potential FLEs

A pair of annotated landmarks forms a displacement vector v_i, which should indicate the deformation at p_i. If v_i exhibits a spatial dependency that differs from the other vectors, it could indicate an FLE for (p_i, q_i). We call these abnormally behaved vectors outliers. In general, there are global outliers v_g and local outliers v_l:

  • Global outliers v_g have large differences with the majority of displacement vectors in V.

  • Local outliers v_l do not have extreme values, but are considerably different from their neighbors.

Vector outliers tend to have a spatial dependence different from the other landmarks, which is captured by the γ_ij values in V_c; hence we can use V_c to distinguish v_g and v_l. In the example of Fig.2(a), we deliberately added two problematic landmarks, one with a global error (blue) and one with a local error (green), to a vector field. In Fig.2(b), all blue points in V_c arise from adding v_g, which can be easily identified because all of its corresponding points have distinctively higher γ_ij values. In Fig.2(c), all green points in V_c arise from adding v_l. We can also distinguish v_l at the bottom-left corner of V_c, because some of its points yield small d_ij while having unusually large γ_ij.

Figure 2: (a) Manually added atypically behaved displacement vectors: the blue point is v_g, the green point is v_l. (b, c) Colorized global and local outliers identified in V_c. Each point represents the difference between a pair of displacement vectors; e.g., blue points are vector pairs that involve the global outlier.

It is noteworthy that some critical displacement vectors, which indicate large tissue deformation, may share the same features as outliers. Therefore, V_c is only used to flag suspicious v_g and v_l, which are then further examined by experienced radiologists.
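The flagging described above can be sketched as a simple heuristic. The paper flags outliers by visual inspection of V_c, so the scores below (median dissimilarity against all vectors for global outliers; mean dissimilarity against the k nearest landmarks for local ones) and the z-score threshold are illustrative assumptions, not the authors' procedure:

```python
import numpy as np

def flag_outliers(points, vectors, k=3, z=2.5):
    """Heuristic variogram-based screening of displacement vectors.

    Returns (global_idx, local_idx): indices of landmarks whose vectors
    behave like global outliers v_g (extreme gamma against *all* other
    vectors) or local outliers v_l (extreme gamma against their nearest
    neighbours only).  k and z are assumed, illustrative parameters.
    """
    points = np.asarray(points, dtype=float)
    vectors = np.asarray(vectors, dtype=float)
    n = len(points)
    # Pairwise matrices of d_ij and gamma_ij.
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    g = 0.5 * np.sum((vectors[:, None] - vectors[None, :]) ** 2, axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-pairs

    others = ~np.eye(n, dtype=bool)
    # Global score: median dissimilarity against all other vectors.
    global_score = np.array([np.median(g[i, others[i]]) for i in range(n)])
    # Local score: mean dissimilarity against the k nearest landmarks.
    nn = np.argsort(d, axis=1)[:, :k]
    local_score = np.array([g[i, nn[i]].mean() for i in range(n)])

    def extreme(s):
        return s > s.mean() + z * s.std()

    is_global = extreme(global_score)
    is_local = extreme(local_score) & ~is_global
    return np.where(is_global)[0], np.where(is_local)[0]
```

As the section notes, such flags are only a screening aid; genuinely large deformations can look identical, so flagged vectors still need expert review.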

2.3 Atypical variogram patterns

Since d_ij represents the distance between a pair of vectors, the distribution of annotated landmarks is also reflected in the pattern of V_c. Compared to a 3D visualization of V, where atypical patterns may be hidden by the viewpoint, the variogram’s 2D representation provides a clearer picture of the landmark distribution. Fig.3(a) shows the typical smooth and steadily increasing pattern of an evenly distributed vector field. Other variogram patterns may indicate undesirable distributions of the vector field. Two undesirable patterns are clustered landmarks and isolated landmarks:

  1. If landmarks are clustered into two (or more) distinct groups, the clustering is evident in V_c, as illustrated in Fig. 3(b).

  2. If a landmark is isolated from the other landmarks, its points in V_c only exist in areas where d_ij is large. Fig.3(c) shows an isolated landmark and its values in V_c.

Figure 3: (a) V_c of an evenly distributed vector field; (b) V_c of a vector field that has clusters; (c) V_c of a vector field that has an isolated point.

We construct V_c for all data and manually flag cases with the above atypical patterns for further examination.
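The isolated-landmark pattern above has a simple computational proxy: a landmark whose points in V_c appear only at large d_ij is one whose nearest-neighbour distance far exceeds the typical spacing. The sketch below uses that proxy; the factor-of-3 threshold is an assumption (the paper identifies these patterns by visual inspection of V_c):

```python
import numpy as np

def flag_isolated(points, factor=3.0):
    """Flag landmarks whose nearest-neighbour distance greatly exceeds
    the typical spacing -- a proxy for the 'isolated landmark' pattern
    in V_c.  The threshold factor is an illustrative assumption.
    """
    points = np.asarray(points, dtype=float)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # ignore self-distances
    nearest = d.min(axis=1)                     # nearest-neighbour distance
    return np.where(nearest > factor * np.median(nearest))[0]
```

Clustered landmarks could be detected analogously, e.g. by looking for a gap in the histogram of pairwise distances, though visual inspection of V_c remains the method used in this study.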

3 Experiments

RESECT [27] and BITE [28] are two high-quality clinical datasets that contain p-MR and i-US images of patients with brain tumors. They are widely used to evaluate registration algorithms for image-guided neurosurgery. RESECT includes 23 patients, each with p-MR (pMR), before-resection US (bUS), during-resection US (dUS) and after-resection US (aUS) images. Four image pairs, i.e., pMR-aUS, pMR-bUS, bUS-aUS and bUS-dUS, have been annotated with corresponding landmarks. For BITE, pre- and post-operative MR and i-US images were acquired from 14 patients. These images were further annotated in three groups: (1) Group 1: bUS and aUS; (2) Group 2: pMR and bUS; (3) Group 4: pMR and post-operative MR.

In order to provide an objective, third-party screening of the annotations in these two datasets, we generated V_c for all annotated image pairs (700+ landmark pairs in total) and flagged landmark pairs with potential FLE issues. Two operators visually inspected the flagged landmark pairs and together assigned them to three categories: (1) they are certain that the landmark pair is problematic; (2) V_c looks atypical, but they are unsure whether the landmark pair is problematic; (3) V_c looks normal. In addition, they also flagged cases with clustered or isolated landmarks.

3.0.1 Findings

After the objective screening, we found that the vast majority of landmarks have a normal-looking V_c, which indicates that both datasets have high-quality annotations. In total, 29 pairs of landmarks potentially have FLEs. In addition, we also identified 4 cluster cases and 11 isolated landmarks. All flagged data are summarized in Table 1. Fig.4 and Fig.5 show the V_c's of some landmark pairs that were flagged as potentially having FLEs. Fig.6 gives examples of V_c's of flagged clusters and isolated landmarks.

Figure 4: Examples of category one atypical landmark pairs (red). The first row shows their V_c's, while the second row displays their 3D displacement vectors.
            Certain                      Unsure                               Cluster   Isolated
pMR-aUS     1(9), 2(10), 19(11)          3(5), 4(3), 7(1, 4)                  n/a       18(14)
pMR-bUS     1(9), 16(6)                  1(13), 2(14), 3(1), 15(13), 25(15)   19        2(14), 19(1)
bUS-aUS     1(7), 7(8), 12(11), 15(3)    n/a                                  25        1(11), 18(12)
bUS-dUS     21(3), 27(11)                6(10), 7(22)                         19        n/a
BITE G1     3(4), 10(1)                  n/a                                  12        2(3), 4(4)
BITE G2     9(5)                         12(1)                                n/a       3(21)
BITE G4     n/a                          1(6)                                 n/a       3(16)

Table 1: Indices of problematic landmark pairs; e.g., 1(9) means patient 1, landmark pair No. 9.

Since the brain may undergo deformation during surgery, atypical behavior of V_c may indicate actual deformation of the brain. In order to investigate whether the “problematic” landmarks contain localization errors, we sent our findings (mixed with good landmarks) to 3 experienced neuro-radiologists for validation and rating. They carefully examined the landmark coordinates in physical space using 3D Slicer [30] and assigned each a score of 1 (poor), 2 (questionable), 3 (acceptable) or 4 (good). Landmarks in category one and category two received average scores of 1.5 and 2.4, respectively. Fig.7(a) shows the user interface for the validation procedure.

Figure 5: Examples of category two landmark pairs. These landmarks have atypical V_c's, but their displacement vectors could be reasonable.
Figure 6: Examples of (a) two flagged clusters and (b) two isolated landmarks.

3.0.2 Potential evaluation bias

FLEs and unevenly distributed landmarks can incur bias in registration evaluation:

  1. Since most FRE metrics take all landmarks into account equally (unweighted), landmarks with FLEs produce false registration errors and can bias the evaluation toward algorithms that align those inaccurately located markers.

  2. Clustered and isolated landmarks are both unevenly distributed, and thus incur an evaluation bias that prioritizes regions containing landmarks. In the p-US to i-US registration example shown in Fig.7(b), landmarks only exist in the sulcus region (as two clusters). Here, two p-US images, R1 and R2, have been registered by two different registration methods. In the landmark-based evaluation, R1 has a better FRE score than R2 because it perfectly aligns the sulcus region (while ignoring the rest of the image). However, in surgeons’ eyes, R2 is the more reasonable (useful) result, since it provides accurate tumor boundary alignment.
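The unweighted FRE discussed above is just the mean landmark error, which is why it cannot see regions without landmarks. A minimal sketch, with hypothetical numbers illustrating the clustered-landmark bias:

```python
import numpy as np

def fre(registered, targets):
    """Unweighted FRE: mean Euclidean distance between registered
    landmarks and their annotated targets (every landmark counts equally)."""
    registered = np.asarray(registered, dtype=float)
    targets = np.asarray(targets, dtype=float)
    return np.linalg.norm(registered - targets, axis=1).mean()

# Hypothetical scenario: all landmarks sit in one small region (e.g. a
# sulcus).  A method that is exact there but arbitrary elsewhere gets a
# perfect score; a method with a modest 1 mm residual everywhere scores
# worse, even if it aligns the (landmark-free) tumor region better.
targets = np.array([[10.0, 10.0, 0.0], [11.0, 10.0, 0.0], [10.0, 11.0, 0.0]])
exact_at_landmarks = targets.copy()
uniform_residual = targets + np.array([1.0, 0.0, 0.0])
print(fre(exact_at_landmarks, targets), fre(uniform_residual, targets))  # 0.0 1.0
```

This is exactly the failure mode in Fig.7(b): the metric rewards R1's perfect sulcus alignment and is blind to everything outside the landmark clusters.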

Figure 7: (a) The user interface for the landmark validation; (b) Evaluation bias caused by unevenly distributed landmarks. On the left is an i-US image; R1 and R2 are p-US images registered by two different registration methods.

4 Discussion

Manual landmark annotation is (mostly) a subjective task, so public datasets may inevitably contain FLEs. To mitigate the evaluation bias caused by FLEs, one strategy is to apply a landmark weighting (selection) scheme [31, 32, 33]. However, at the current stage, these methods are not yet thoroughly validated and should be used with caution. RESECT and BITE were both purposefully created for image-guided neurosurgery. We understand that anatomical features in the brain, mainly corner points and small holes, will most likely not appear uniformly in image space. In order to achieve an unbiased comparison between registration algorithms, dataset providers could add notes describing the distribution (and limitations) of the landmarks, so that users can, if necessary, incorporate additional criteria, such as surfaces [34], in the evaluation.

Whether or not public datasets assure unbiased comparisons for registration evaluation is a crucial question that deserves more attention from the image registration community. From this objective third-party screening, we conclude that RESECT and BITE are both reliable datasets, with only a small number of problematic landmarks and some landmark distributions that may bias unweighted FRE evaluation. Besides the aforementioned use of advanced evaluation criteria, some data providers offer to update their repositories based on user feedback, e.g., adding or correcting annotations, which is another way to help assure unbiased registration evaluation.

As a tool for objective screening, the variogram may have limitations. Nevertheless, we believe that this paper will serve as a foundation and draw more attention to this topic.


  • [1] Gerard, I.J., et al.: Brain shift in neuronavigation of brain tumors: A review. Med. Image Anal. 35, 403-420 (2017)
  • [2] Morin, F., et al.: Brain-shift compensation using intraoperative ultrasound and constraint-based biomechanical simulation. Med. Image Anal. 40: 133-153 (2017)
  • [3] Luo, J., et al: A feature-driven active framework for ultrasound-based brain shift compensation. In: MICCAI’18. LNCS, vol. 11073, pp. 30–38. (2018)
  • [4] Luo M., Larson P.S., Martin A.J., Konrad P.E., Miga M.I.: An Integrated Multi-physics Finite Element Modeling Framework for Deep Brain Stimulation: Preliminary Study on Impact of Brain Shift on Neuronal Pathways. In: Shen D. et al. (eds). MICCAI 2019. LNCS, vol. 11768, pp. 682-690. Springer, Cham (2019)

  • [5] Maintz, J.B.A., Viergever, M.A.: A Survey of Medical Image Registration. Med. Image Anal. 2(1): 1-36 (1998)
  • [6] Sotiras, A., Davatzikos, C.: Deformable Medical Image Registration: A Survey. IEEE Trans. Med. Imaging 32(7), 1153-1190 (2013)
  • [7] Song, J.: Methods for evaluating image registration. PhD thesis. (2017)
  • [8] West, J., Fitzpatrick, J.M. et al.: Comparison and evaluation of retrospective intermodality brain image registration techniques. J. Comp. Asst. Tomog., 21(4):554–566, 1997.
  • [9] Fitzpatrick J.M., Hill D.L.G. and Maurer C.R. Image registration. In Sonka M. and Fitzpatrick J.M., editors, Handbook of Medical Imaging, Vol.2, Chapter.8, pp447-513. SPIE, 2000
  • [10] Fitzpatrick J.M.: The retrospective image registration evaluation project, March 2007.
  • [11] Fitzpatrick, J.M.: Fiducial registration error and target registration error are uncorrelated. In Proc. SPIE 7261, Medical Imaging’09; doi: 10.1117/12.813601 (2009)
  • [12] Datteri R.D., Dawant B.M.: Estimation and Reduction of Target Registration Error. in MICCAI 2012. LNCS, vol.7512, pp. 139-146 (2012)
  • [13] Min, Z. and Meng, M.Q.: TRE and FRE are uncorrelated in a paired-point rigid registration. In: 32nd Computer Assisted Radiology and Surgery International Congress, pp. 243-244 (2018)
  • [14] Hellier, P., Barillot, C., Corouge, L., Gibaud, B., Le Goualher, G., Collins, D.L., Evans, A., Malandain, G., Ayache, N., Christensen, G.E. and Johnson, H.J.: Retrospective evaluation of inter-subject brain registration. IEEE Transactions on Medical Imaging, 22(9):1120–1130, 2003.
  • [15] Christensen, G., Geng, X., Kuhl, J., Bruss, J., Grabowski, T., Pirwani, I., Vannier, M., Allen, J. and Damasio, H.: Introduction to the non-rigid image registration evaluation project (NIREP), in Proc. Int. Workshop Biomed. Image Registrat’06, pp. 128–135. (2006)
  • [16] Xu Z., et al.: Evaluation of Six Registration Methods for the Human Abdomen on Clinically Acquired CT. IEEE Trans Biomed. Eng. 63(8): 1563-1572 (2016)
  • [17] Kabus, S., Klinder, T., Murphy, K., van Ginneken, B., Lorenz, C. and Pluim. J.P.W.: Evaluation of 4D-CT lung registration. In MICCAI, pages 747–754, 2009
  • [18] Murphy, K., Van Ginneken, B., Reinhardt, J.M., Kabus, S., Ding, K., Deng, X., Cao, K., Du, K., Christensen, G.E., Garcia, V., et al: Evaluation of registration methods on thoracic ct: the EMPIRE10 challenge. IEEE TMI, 30(11):1901–1920, 2011.
  • [19] Yassa, M.A. and Stark C.E.L.: A quantitative evaluation of cross-participant registration techniques for mri studies of the medial temporal lobe. NeuroImage, 44:319–327, 2009.
  • [20] Klein, A., Andersson, J., Ardekani, B.A., Ashburner, J., Avants, B., Chiang, M., Christensen, G.E., Collins, D.L., Gee, J. Hellier, P., Song, J.H., Jenkinson, M., Lepage, C., Rueckert, D., Thompson, P., Vercauteren, T., Woods, R.P., Mann, J.J., and Parsey. R.V.: Evaluation of 14 nonlinear deformation algorithms applied to human brain MRI registration. NeuroImage, 46(3):786–802, July 2009.
  • [21] Klein, A., Ghosh, S.S, Avants, B., Yeo, B.T.T., Fischl, B., Ardekani, B., Gee, J.C., Mann, J.J., and Parsey. R.V.: Evaluation of volume-based and surface-based brain image registration methods. NeuroImage, 51(1):214–220, 2010.
  • [22] Ou Y., et al.: Comparative evaluation of registration algorithms in different brain databases with varying difficulty. IEEE TMI 33(10), 2039-2065 (2014)
  • [23] Rohlfing T.: Image Similarity and Tissue Overlaps as Surrogates for Image Registration Accuracy: Widely Used but Unreliable. IEEE TMI 31(2), 153–163 (2012)
  • [24] Machado, I., et al.: Deformable MRI-Ultrasound registration using correlation-based attribute matching for brain shift correction: Accuracy and generality in multi-site data. NeuroImage, 202:116094. (2019)
  • [25] Xiao. Y., et al.: Evaluation of MRI to ultrasound registration methods for brain shift correction: The CuRIOUS2018 Challenge, in IEEE Transactions on Medical Imaging. (2020)
  • [26] Luo, J., Frisken, S., Machado, I. et al. Using the variogram for vector outlier screening: application to feature-based image registration. Int J CARS 13(12), pp.1871–1880 (2018).
  • [27] Xiao, Y., Fortin, M., Unsgard, G., Rivaz, H. and Reinertsen, I.: Retrospective evaluation of cerebral tumors (RESECT): A clinical database of pre-operative mri and intra-operative ultrasound in low-grade glioma surgeries. Medical physics, 44(7):3875–3882 (2017)
  • [28] Mercier, L., et al.: On-line database of clinical MR and ultrasound images of brain tumors. Med Phys. 39(6):3253–61. (2012)
  • [29] Cressie NAC, Statistics for spatial data, p900. Wiley, USA (1991)
  • [30] Kikinis R, Pieper SD, Vosburgh K (2014) 3D Slicer: a platform for subject- 24 specific image analysis, visualization, and clinical support. Intraoperative Imaging Image-Guided Therapy, Jolesz FA, Editor Vol.3(19), p277–289
  • [31] Danilchenko, A. and Fitzpatrick J.M.: General Approach to First-Order Error Prediction in Rigid Point Registration. IEEE TMI 30(3), pp. 679-693 (2011)
  • [32] Shamir R.R., Joskowicz, L. and Shoshan Y.: Fiducial Optimization for Minimal Target Registration Error in Image-Guided Neurosurgery, in IEEE Transactions on Medical Imaging, 31(3), pp. 725-737 (2012)
  • [33] Thompson, S., Penney, G., Dasgupta, P. and Hawkes, D.: Improved Modelling of Tool Tracking Errors by Modelling Dependent Marker Errors, in IEEE Transactions on Medical Imaging, 32(2), pp. 165-177 (2013).
  • [34] dos Santos, T.R. Pose-independent surface matching for intra-operative soft-tissue marker-less registration. in Medical Image Analysis, 18(7), pp. 1101-1114 (2014)