Robust, fast and accurate: a 3-step method for automatic histological image registration

03/28/2019, by Johannes Lotz et al.

We present a 3-step registration pipeline for differently stained histological serial sections that consists of 1) a robust pre-alignment, 2) a parametric registration computed on coarse resolution images, and 3) an accurate nonlinear registration. In all three steps the NGF distance measure is minimized with respect to an increasingly flexible transformation. We apply the method in the ANHIR image registration challenge and evaluate its performance on the training data. The presented method is robust (error reduction in 99.6 % of the cases), fast (runtime < 4 seconds) and accurate (median relative target registration error 0.0019).







1 Introduction

In cancer diagnostics and histology-related basic research, much insight into molecular and cellular interactions, tissue growth, and tissue organization is gained by analyzing consecutive but differently stained histological sections. For this procedure, a fixed tissue is embedded in a paraffin block and cut into thin slices. The slices are then stained, e.g. by immunohistochemistry, and finally examined by a scientist or physician using conventional or virtual microscopy.

In order to correlate the staining intensities, staining patterns, and even subcellular localizations of various proteins or antigens, multiple stainings are frequently required. To recombine the information from the separate stains, a precise, multi-modal image registration is essential.

We present a 3-step registration pipeline that consists of 1) a robust pre-alignment, 2) a parametric registration computed on coarse resolution images, and 3) an accurate nonlinear registration.

2 Related Work

The underlying variational image registration framework of this work has been described in [1, 2] and its application to histological images was first described in [3] in 2006. A general issue has been the handling of large images and the associated computational complexity and runtimes. At that time, the elastic registration of two images from slices of a human brain took about 100 minutes on a workstation and 3 minutes on a cluster computer. Later, a faster implementation for regular workstations, reducing memory read and write operations, was proposed in [4] in 2013. The authors report a registration time of 104 seconds for a pair of images from the DIR-Lab 4DCT dataset. Additional optimizations, including the exploitation of special instruction sets of modern CPUs, have recently been described in [5], reducing the registration time for two images to 19 seconds. The present work builds on top of these implementations.

3 Three Steps for Robust, Fast and Accurate Registration

Following the framework established by Fischer & Modersitzki [2], we formulate image registration as the minimization of a suitable objective function $\mathcal{J}(y)$, where $\mathcal{R}$ and $\mathcal{T}$ are the reference and template images and $y: \mathbb{R}^2 \to \mathbb{R}^2$ is the sought transformation. Central to any suitable objective function is a distance or image similarity measure that quantifies alignment quality. Here we use the Normalized Gradient Fields (NGF) distance measure [6]. In the discrete setting, we assume 2D images composed of pixels with uniform size $h$ in each dimension and pixel centers $x_i$. Thus, we use the NGF distance measure given by

$$\mathrm{NGF}(\mathcal{R}, \mathcal{T}, y) = \frac{h^2}{2} \sum_i 1 - \left( \frac{\langle \nabla\mathcal{T}(y(x_i)),\, \nabla\mathcal{R}(x_i) \rangle_\eta}{\|\nabla\mathcal{T}(y(x_i))\|_\eta \, \|\nabla\mathcal{R}(x_i)\|_\eta} \right)^2$$

with $\langle a, b \rangle_\eta := a^\top b + \eta^2$, $\|a\|_\eta := \sqrt{\langle a, a \rangle_\eta}$, and the edge parameter $\eta$ steering the sensitivity with respect to the strength of edges in the images as well as to noise. The NGF distance measure forces the alignment of edges and is therefore based on morphological structures, which makes it robust with respect to staining differences [7]. In general, NGF is suitable for multi-modal image registration.
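As an illustration, the discrete NGF distance can be sketched in a few lines of NumPy. This is a simplified sketch under assumed conventions (gradients via `np.gradient`, both images already resampled onto a common grid, i.e. the template already warped); it is not the paper's optimized implementation:

```python
import numpy as np

def ngf_distance(R, T, eta=0.1, h=1.0):
    """Sketch of the discrete NGF distance between two 2D images.

    The measure is small where image gradients are (anti-)parallel.
    eta is the edge parameter, h the uniform pixel size. Assumes R and T
    live on the same grid, i.e. T has already been warped.
    """
    gR0, gR1 = np.gradient(R, h)  # derivatives along axis 0 and axis 1
    gT0, gT1 = np.gradient(T, h)
    inner = gR0 * gT0 + gR1 * gT1 + eta**2      # <a, b>_eta
    normR = np.sqrt(gR0**2 + gR1**2 + eta**2)   # ||a||_eta
    normT = np.sqrt(gT0**2 + gT1**2 + eta**2)
    # sum of 1 - cos^2 of the angle between the regularized gradients
    return float(np.sum(1.0 - (inner / (normR * normT))**2) * h**2 / 2.0)
```

For identical images the distance vanishes, since the regularized gradients are exactly parallel at every pixel.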

Minimizing the NGF distance measure is part of all three steps that build up our registration pipeline. To solve the optimization problem we use a multilevel approach, starting with a low image resolution and iteratively refining the transformation on higher image resolutions. This reduces the risk of getting trapped in local minima and speeds up the optimization process [8].
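The coarse-to-fine strategy can be sketched as follows; `register_on_level` is a hypothetical stand-in for any single-level solver (not part of the paper's code), and simple average pooling stands in for the image pyramid:

```python
import numpy as np

def downsample(img, factor):
    """Average-pool by an integer factor (one simple image pyramid level)."""
    h, w = (s - s % factor for s in img.shape)
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def multilevel_register(R, T, register_on_level, num_levels=3):
    """Coarse-to-fine optimization: solve on the coarsest level first and use
    each result as the starting guess on the next finer level.
    register_on_level(R, T, init) is a placeholder for one single-level solve."""
    params = None
    for level in range(num_levels - 1, -1, -1):
        f = 2 ** level
        params = register_on_level(downsample(R, f), downsample(T, f), params)
    return params
```

Each level warm-starts the next, so the expensive fine-resolution solve only has to correct small residual misalignments.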

All three registration steps depend on the NGF edge parameter $\eta$, the number of levels, and the maximum image dimension in x and y on the finest level; these parameters are set independently for each step.

3.1 Step 1: Pre-Alignment (Automatic Rotation Alignment)

After manual tissue processing in the lab, neighboring tissue slices can end up in arbitrary positions on the object plate (such as upside down or turned in various ways). Therefore, we do not make any assumptions on initial tissue positioning and aim to find a rough rigid transformation in a first step, correcting for translation and rotation. The result is then used as an initial guess for a more flexible registration in the second step.

The Automatic Rotation Alignment (ARA) starts by determining the center of mass [9]

$$c = \frac{\sum_i x_i \, \mathcal{I}(x_i)}{\sum_i \mathcal{I}(x_i)}$$

of both images, where each pixel's gray value $\mathcal{I}(x_i)$ is used as its mass. The vector pointing from the center of mass of the reference image to the center of mass of the template image is then used as the initial translation. Several candidate transformations are computed by starting $k$ rigid registrations with different initial rotations, equidistantly sampled from $[0, 2\pi)$. From all rigid registrations, the result with the minimal distance measure value is selected as the intermediate result.
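A sketch of ARA, with `rigid_register` as a hypothetical stand-in for one rigid NGF registration that returns a (distance, transform) pair:

```python
import numpy as np

def center_of_mass(img):
    """Gray-value-weighted centroid of an image (pixel indices as coordinates)."""
    ys, xs = np.indices(img.shape)
    return np.array([(ys * img).sum(), (xs * img).sum()]) / img.sum()

def rotation_alignment(R, T, rigid_register, k=32):
    """Sketch of ARA: translate T's centroid onto R's, then try k equidistant
    initial rotations in [0, 2*pi) and keep the rigid result with the smallest
    distance value. rigid_register(angle, shift) is a placeholder for one
    rigid registration returning (distance, transform)."""
    shift = center_of_mass(R) - center_of_mass(T)
    results = [rigid_register(2 * np.pi * i / k, shift) for i in range(k)]
    return min(results, key=lambda r: r[0])[1]
```

Trying many initial rotations makes the step robust to slices placed upside down or at arbitrary angles, at the cost of k cheap coarse registrations.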

3.2 Step 2: Parametric Registration

The second step of the registration pipeline is a parametric registration with an affine deformation model. In 2D, an affine deformation

$$y(x) = Ax + b, \qquad A \in \mathbb{R}^{2 \times 2}, \; b \in \mathbb{R}^2,$$

has 6 degrees of freedom, collected in the parameter vector $w = (A_{11}, A_{12}, b_1, A_{21}, A_{22}, b_2)$. Then, we minimize the objective function

$$\mathcal{J}(w) = \mathrm{NGF}(\mathcal{R}, \mathcal{T}, y_w)$$

with respect to the parameters $w$. The resulting transformation is then used as the initial guess for the subsequent non-parametric registration in the last step.

We employ an iterative Gauss-Newton optimization and use the parameters from the pre-alignment (translation $t$, rotation angle $\varphi$) as the initial guess. That is, we set up the initial affine parameters such that

$$A = \begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix}, \qquad b = t.$$
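Constructing the initial affine parameters from the pre-alignment result amounts to combining the rotation matrix for the angle φ with the translation t; a minimal sketch:

```python
import numpy as np

def affine_from_rigid(phi, t):
    """Initial affine parameters from the pre-alignment result:
    A is the rotation by angle phi, b the translation t, so y(x) = A x + b."""
    c, s = np.cos(phi), np.sin(phi)
    A = np.array([[c, -s], [s, c]])
    b = np.asarray(t, dtype=float)
    return A, b
```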

3.3 Step 3: Non-parametric Registration

The final step is a non-parametric image registration. Here, the transformation is given by

$$y(x) = x + u(x)$$

with the displacement field $u: \mathbb{R}^2 \to \mathbb{R}^2$.

In contrast to parametric registration, the non-parametric deformation is controlled by an additional term in the objective function, the regularizer. We use curvature regularization [10] with

$$S(u) = \frac{1}{2} \sum_{\ell=1}^{2} \int \left( \Delta u_\ell(x) \right)^2 \, dx.$$

We then minimize the following objective function with respect to the deformation $y$ and the displacement $u$, respectively:

$$\mathcal{J}(u) = \mathrm{NGF}(\mathcal{R}, \mathcal{T}, y) + \alpha \, S(u),$$

where $\alpha > 0$ is a regularization parameter that controls the smoothness of the computed deformation. The regularization parameter is chosen manually to provide a smooth deformation and to avoid topological changes (grid foldings), while remaining flexible enough to correct local differences and improve image similarity.
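A discrete version of the curvature regularizer can be sketched with a 5-point Laplacian stencil; the zero padding at the boundary is an assumption of this sketch, not necessarily the paper's boundary condition:

```python
import numpy as np

def curvature_energy(u, h=1.0):
    """Discrete curvature regularizer S(u): half the summed squared Laplacian
    of each displacement component (5-point stencil; zero padding at the
    boundary is an assumption of this sketch)."""
    total = 0.0
    for comp in u:  # u has shape (2, m, n): one scalar grid per component
        p = np.pad(comp, 1)
        lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
               - 4.0 * p[1:-1, 1:-1]) / h**2
        total += 0.5 * float(np.sum(lap**2)) * h**2
    return total
```

Affine displacements have zero Laplacian in the interior, so the curvature term penalizes only local bending, not the global pose already fixed in step 2.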

For our numerical implementation, the displacement field $u$ is discretized with first-order B-splines defined on a uniform control point grid with $257 \times 257$ points (cf. Table 1). We then optimize the non-parametric objective function with respect to the control points. To this end, we use an L-BFGS quasi-Newton optimization together with multilevel continuation to avoid local minima and to speed up computation.
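Since first-order B-splines are piecewise linear, evaluating the control-point grid at every pixel reduces to bilinear interpolation; a sketch for one displacement component:

```python
import numpy as np

def interpolate_displacement(control, shape):
    """Evaluate one displacement component given on a coarse control grid at
    every pixel of an image of size `shape`, using bilinear interpolation
    (first-order B-splines are exactly piecewise linear)."""
    m, n = control.shape
    H, W = shape
    ys = np.linspace(0, m - 1, H)          # pixel positions in control-grid units
    xs = np.linspace(0, n - 1, W)
    y0 = np.minimum(ys.astype(int), m - 2)  # lower-left control point per pixel
    x0 = np.minimum(xs.astype(int), n - 2)
    fy = (ys - y0)[:, None]                 # fractional offsets
    fx = (xs - x0)[None, :]
    c00 = control[np.ix_(y0, x0)]
    c01 = control[np.ix_(y0, x0 + 1)]
    c10 = control[np.ix_(y0 + 1, x0)]
    c11 = control[np.ix_(y0 + 1, x0 + 1)]
    return ((1 - fy) * (1 - fx) * c00 + (1 - fy) * fx * c01
            + fy * (1 - fx) * c10 + fy * fx * c11)
```

Optimizing over the coarse control grid instead of a per-pixel field keeps the number of unknowns fixed regardless of image resolution.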

3.4 Registration Parameters

The final set of registration parameters is shown in Table 1.

Step 1: Pre-Alignment
  Number of rotation angles       32
  Maximum image dimension         200 pixels
  Number of levels                4
  NGF edge parameter η            0.1

Step 2: Parametric Registration
  Maximum image dimension         1000 pixels
  Number of levels                5
  NGF edge parameter η            0.1

Step 3: Non-Parametric Registration
  Maximum image dimension         8000 pixels
  Number of levels                7
  NGF edge parameter η            1.0
  Regularization parameter α      0.1
  Number of grid points           257 × 257

Table 1: Parameters used in the registration pipeline.

4 Data Preprocessing

Before registration, all images are converted into an in-house multilevel image format based on SQLite. Without this conversion, the image loading time increases by about five seconds per registration.

In addition, all images are converted to grayscale and inverted while loading.
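The grayscale conversion and inversion can be sketched as follows; the luminance weights and the 8-bit value range are assumptions of this sketch, not taken from the paper:

```python
import numpy as np

def to_inverted_gray(rgb):
    """Grayscale conversion followed by inversion, so stained tissue (dark on
    a bright background) becomes bright signal on a dark background.
    The luminance weights and the [0, 255] range are assumptions."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])  # assumed luminance weights
    return 255.0 - gray
```

Inverting makes the tissue the high-intensity foreground, which is convenient for the mass-based pre-alignment, since the center of mass is then dominated by tissue rather than background.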

5 Application to ANHIR Challenge Data

Figure 1: Spy-view of an image pair after pre-alignment, parametric and non-parametric registration (left to right).

The algorithm has been applied to the data from the ANHIR registration challenge [11, 12, 13]. Registration performance is measured by evaluating the average (AMrTRE) and the median (MMrTRE) of the median relative target registration error (MrTRE) over all image pairs, following [11]. The MrTRE over all landmarks of one image pair is computed as

$$\mathrm{MrTRE} = \operatorname*{median}_i \frac{\| \hat{x}_i - x_i \|_2}{d},$$

where the $x_i$ are the template landmarks, the $\hat{x}_i$ are the warped reference landmarks, and $d$ is the image diagonal. AMrTRE and MMrTRE are computed as the average and the median, respectively, of the MrTREs over all image pairs. After registration, an MMrTRE of 0.0019 is reached on the training data (cf. the abstract); landmark errors are additionally evaluated on the subset of pairs where the registration is robust.
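The per-pair evaluation can be sketched directly from the definition; normalization by the image diagonal follows the ANHIR protocol:

```python
import numpy as np

def rtre(warped, landmarks, image_size):
    """Relative TRE per landmark: Euclidean distance between corresponding
    landmarks, normalized by the image diagonal."""
    d = np.sqrt(float(image_size[0])**2 + float(image_size[1])**2)
    return np.linalg.norm(np.asarray(warped) - np.asarray(landmarks), axis=1) / d

def mrtre(warped, landmarks, image_size):
    """Median rTRE of one image pair; the challenge then averages (AMrTRE) or
    takes the median (MMrTRE) of this value over all image pairs."""
    return float(np.median(rtre(warped, landmarks, image_size)))
```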

The reduction of the registration error in the training data after each step in the pipeline is shown in the box plots in Figure 2. While the median error is reduced after each step, those cases that fail in the pre-alignment cannot be recovered at a later stage. Figure 1 shows one of the image pairs after pre-alignment, parametric registration and non-parametric registration.

The resulting deformations do not contain foldings. The average maximum area change in one grid cell was 1.35 %.

The algorithm is robust in 99.6 % of the training cases. In the ANHIR challenge, robustness is defined as the percentage of cases in which the landmark error is reduced compared to the initial configuration.

Multiple parametrizations were tested on the training data, and the parameter set with the lowest median of the median rTREs (MMrTRE) was selected for submission.

The whole registration process including image loading, pre-alignment, parametric and non-parametric registration takes on average 4.0 seconds on an Intel(R) Core(TM) i7-7700K CPU (4.20GHz, four cores) with 32 GB of RAM.

Figure 2: Histogram of the Median-rTRE measured on the training data before alignment and after each registration step.


  • [1] Bernd Fischer and Jan Modersitzki, “Fast Image Registration - A Variational Approach,” in Numerical Analysis and Computational Mathematics, G Psihoyios, Ed., 2003, pp. 69–74.
  • [2] Jan Modersitzki, FAIR: Flexible Algorithms for Image Registration, SIAM, 2009.
  • [3] Oliver Schmitt, Jan Modersitzki, Stefan Heldmann, Stefan Wirtz, and Bernd Fischer, “Image Registration of Sectioned Brains,” International Journal of Computer Vision, vol. 73, no. 1, pp. 5–39, Sept. 2006.
  • [4] Jan Rühaak, Stefan Heldmann, Till Kipshagen, and Bernd Fischer, “Highly Accurate Fast Lung CT Registration,” Proceedings of SPIE 8669, Medical Imaging 2013: Image Processing, Mar. 2013.
  • [5] Lars König, Jan Rühaak, Alexander Derksen, and Jan Lellmann, “A Matrix-Free Approach to Parallel and Memory-Efficient Deformable Image Registration,” SIAM Journal on Scientific Computing, vol. 40, no. 3, pp. B858–B888, Jan. 2018.
  • [6] E. Haber and J. Modersitzki, “Intensity Gradient Based Registration and Fusion of Multi-modal Images,” Methods of Information in Medicine, vol. 46, no. 03, pp. 292–299, 2007.
  • [7] Wouter Bulten, “Epithelium segmentation using deep learning in H&E-stained prostate specimens with immunohistochemistry as reference standard,” Scientific Reports, 2019.
  • [8] Eldad Haber and Jan Modersitzki, “A Multilevel Method for Image Registration,” SIAM Journal on Scientific Computing, vol. 27, no. 5, pp. 1594–1607, 2006.
  • [9] Millard F. Beatty, Principles of Engineering Mechanics, Number 32-33 in Mathematical Concepts and Methods in Science and Engineering. Plenum Press, New York, 1986.
  • [10] Bernd Fischer and Jan Modersitzki, “Curvature based image registration,” Journal of Mathematical Imaging and Vision, pp. 81–85, 2003.
  • [11] Jiri Borovec, Arrate Munoz-Barrutia, and Jan Kybic, “Benchmarking of Image Registration Methods for Differently Stained Histological Slides,” in 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Oct. 2018, pp. 3368–3372, IEEE.
  • [12] Rodrigo Fernandez-Gonzalez, Arthur Jones, Enrique Garcia-Rodriguez, Ping Yuan Chen, Adam Idica, Stephen J. Lockett, Mary Helen Barcellos-Hoff, and Carlos Ortiz-De-Solorzano, “System for combined three-dimensional morphological and molecular analysis of thick tissue specimens,” Microscopy Research and Technique, vol. 59, no. 6, pp. 522–530, Dec. 2002.
  • [13] Laxmi Gupta, Barbara Mara Klinkhammer, Peter Boor, Dorit Merhof, and Michael Gadermayr, “Stain independent segmentation of whole slide images: A case study in renal histology,” in 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, Apr. 2018, pp. 1360–1364, IEEE.