Probabilistic Inference for Camera Calibration in Light Microscopy under Circular Motion

10/30/2019 · by Yuanhao Guo, et al.

Robust and accurate camera calibration is essential for 3D reconstruction in light microscopy under circular motion. Conventional methods require either accurate key point matching or precise segmentation of the axial-view images. Both remain challenging because specimens often exhibit transparency/translucency in a light microscope. To address these issues, we propose a probabilistic inference based method for camera calibration that does not require sophisticated image pre-processing. Based on 3D projective geometry, our method assigns a probability to each of a range of voxels that cover the whole object. The probability indicates the likelihood of a voxel belonging to the object to be reconstructed. Our method maximizes a joint probability that distinguishes the object from the background. Experimental results show that the proposed method accurately recovers camera configurations in both light microscopy and natural scene imaging. Furthermore, the method can be used to produce high-fidelity 3D reconstructions and accurate 3D measurements.




1 Introduction

Circular motion is commonly used in light microscopy because it simplifies the imaging protocol [1]. A specimen in circular motion presents its longitudinal views over a full revolution, which makes most of its features observable. The acquired images can be regarded as the readout of multiview imaging, so image-based 3D reconstruction approaches can produce a 3D volumetric representation of a specimen, from which 3D visualization and quantitative measurements can be performed. Because these methods require an accurate estimation of the projection from the 3D world frame to each of the images, robust and accurate camera calibration becomes necessary [2, 3], even for the popular deep learning based reconstruction methods [4, 5, 6].

Conventional calibration methods capture images of a regular pattern, such as a checkerboard, to estimate the camera intrinsic configurations, including the focal length, pixel size, image center, etc. [7, 8]. Bundle adjustment is then applied to obtain the camera extrinsic configurations, i.e., the camera motion (transformation) relative to the subject [9]. Alternatively, silhouette-based methods achieve this goal by taking multiview masks of the subject as input. Three-dimensional (3D) projective geometry generates an accurate projection from the subject to each of the masks. The area-coherence method maximizes the overlap area between the projected images and the ground-truth masks [10]; the silhouette-coherence method maximizes the intersection of the contours of the projected images and the ground-truth masks [11]; the shape-coherence method defines the voxel residual as the intersection of the cone-shaped projections, thus avoiding extracting contours or computing the surface area of the projected images [12, 13]. It should be noted that the methods mentioned above depend on either accurate key point matching or precise image segmentation. However, both are often difficult to achieve in light microscopy due to the common transparency/translucency of specimens.
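To make the area-coherence criterion of [10] concrete, the overlap between a projected mask and a ground-truth mask can be scored as below; this is a minimal illustrative sketch using an intersection-over-union form, not the exact measure of the cited work:

```python
import numpy as np

def area_coherence(proj_mask, gt_mask):
    """Overlap ratio between a projected mask and a ground-truth mask.
    Illustrative intersection-over-union form of the area-coherence score."""
    p = np.asarray(proj_mask, dtype=bool)
    g = np.asarray(gt_mask, dtype=bool)
    inter = np.logical_and(p, g).sum()
    union = np.logical_or(p, g).sum()
    return inter / union if union else 1.0

# Two toy 4x4 binary masks: the projection covers a 2x2 square,
# the ground truth a slightly wider 2x3 region.
a = np.zeros((4, 4)); a[1:3, 1:3] = 1   # projected mask
b = np.zeros((4, 4)); b[1:3, 1:4] = 1   # ground-truth mask
score = area_coherence(a, b)            # intersection 4 cells, union 6 cells
```

Calibration methods of this family adjust the camera parameters to drive such a score toward 1 across all views.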

To address the above issues, we propose a probabilistic inference based method for camera calibration. In contrast to previous methods, our model is advantageous because it does not require accurate key point detection or precise image segmentation. Our method first transforms the images acquired under circular motion into a probabilistic representation without the need for image segmentation. We then initialize a range of voxels that cover the subject of interest (SoI). We project those voxels onto the probabilistically represented images according to 3D projective geometry. In this manner, we assign a joint probability to each voxel by integrating the information from all the axial-view images. Finally, we formulate the camera calibration problem as maximizing the separation of the SoI from the background, expressed as a probabilistic inference. Through this probabilistic inference, we do not need to assign a hard-coded value to each voxel, which is usually determined by its total visibility from all the segmentations (silhouettes). We illustrate the proposed method in the graphical scheme shown in Fig. 1.

We organize the remainder of this work as follows. In Section 2 we elaborate on the proposed method. In Section 3 we show the experimental results and the performance of our method. In Section 4 we summarize our work and suggest directions for future improvement.

2 Methodology

In this section, we elaborate on the proposed probabilistic inference for camera calibration in light microscopy under circular motion. Zebrafish are commonly used in life-science studies [14]. The images used in this study were collected from a group of zebrafish larvae using a light microscope under circular motion based on the VAST-BioImager [1]. The VAST-BioImager loads a specimen into the view of a high-throughput camera (Allied Vision Prosilica GE1050 CCD). A stepper motor rotates the specimen along its longitudinal axis and the equipped camera captures an image for each axial view. Some examples of the acquired images are shown in Fig. 1 (A).

2.1 Camera Parameterization

Let X denote the (homogeneous) center of a voxel in the 3D world frame. The classical pinhole model finds its pixel location x on an image via the projection matrix P, denoted as x = PX. P can be decomposed as P = K[R | t], where K is the camera intrinsic matrix of the following form:

    K = [ f·s_x   0       c_x ]
        [ 0       f·s_y   c_y ]      (1)
        [ 0       0       1   ]

where f denotes the focal length; s_x and s_y denote the scaling factors; and (c_x, c_y) defines the image center. Because these configurations are provided by the vendor, we focus on the camera extrinsic parameters, i.e., the camera motion, encoded in R and t. These two terms are the 3D rotation and translation, respectively. As the subject in our setting rotates along a fixed axis, the yaw and pitch remain identical during motion, while the roll and translation vary across views. We therefore organize the camera extrinsic parameters to be optimized into a vector θ = (α, β, t, γ_1, …, γ_N), where α and β are the yaw and pitch, t is the reparameterized translation, and γ_n is the roll of view n (the starting view is defined as 0 degrees). Given a fixed camera configuration θ, we can derive the projection matrix P_n for each image.
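The pinhole mapping x = PX can be sketched as follows; this is a minimal NumPy illustration with made-up intrinsic values, not the actual VAST-BioImager configuration, and the yaw-pitch-roll composition order is an assumption:

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Compose a 3D rotation from yaw, pitch and roll angles (radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # roll
    return Ry @ Rx @ Rz

def projection_matrix(K, yaw, pitch, roll, t):
    """P = K [R | t]: maps a homogeneous world point to a homogeneous pixel."""
    R = rotation_matrix(yaw, pitch, roll)
    return K @ np.hstack([R, np.reshape(t, (3, 1)).astype(float)])

# Illustrative intrinsics: focal length times scaling on the diagonal,
# image center in the last column.
K = np.array([[800.0, 0.0, 512.0],
              [0.0, 800.0, 512.0],
              [0.0, 0.0, 1.0]])

P = projection_matrix(K, 0.0, 0.0, np.deg2rad(30.0), [0.0, 0.0, 50.0])
X = np.array([1.0, 2.0, 3.0, 1.0])   # homogeneous voxel center
x = P @ X
pixel = x[:2] / x[2]                 # perspective division to pixel coordinates
```

With yaw and pitch at zero, only the roll and the translation along the optical axis move the projected pixel, which mirrors the circular-motion setting described above.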

2.2 Probabilistic Representation

According to the color distribution, we construct two probabilistic models, one for the foreground (F) and one for the background (B). We randomly select one image from the axial-view images. Then we either scribble on the image or roughly mask it to collect pixels separating the foreground and background. Assuming a normal distribution, we use the RGB values of the collected pixels to construct the probabilistic models as follows:

    p(c | F) = N(c; μ_F, Σ_F),    p(c | B) = N(c; μ_B, Σ_B),      (2)

where μ and Σ denote the mean and covariance of the color distribution, respectively, and c is an RGB value. Given the axial-view images I_n, n = 1, …, N, we obtain the probabilistic representation for each image based on Eq. (2).
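Fitting the two models of Eq. (2) amounts to estimating one multivariate normal per region from the scribbled pixels. A minimal sketch, using synthetic samples in place of real scribbles:

```python
import numpy as np

def fit_gaussian(pixels):
    """Fit a multivariate normal to RGB samples; returns (mean, covariance)."""
    pixels = np.asarray(pixels, dtype=float)
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(3)  # small ridge for stability
    return mu, cov

def gaussian_pdf(c, mu, cov):
    """Density of an RGB value c under N(mu, cov)."""
    d = np.asarray(c, dtype=float) - mu
    inv = np.linalg.inv(cov)
    norm = np.sqrt((2.0 * np.pi) ** 3 * np.linalg.det(cov))
    return float(np.exp(-0.5 * d @ inv @ d) / norm)

# Synthetic scribble samples: a dark foreground and a bright background.
rng = np.random.default_rng(0)
fg_pixels = rng.normal(60.0, 10.0, size=(200, 3))
bg_pixels = rng.normal(200.0, 10.0, size=(200, 3))

mu_f, cov_f = fit_gaussian(fg_pixels)
mu_b, cov_b = fit_gaussian(bg_pixels)

# A dark pixel scores much higher under the foreground model.
c = [62.0, 58.0, 61.0]
p_f = gaussian_pdf(c, mu_f, cov_f)
p_b = gaussian_pdf(c, mu_b, cov_b)
```

Evaluating both densities at every pixel of every view yields the probabilistic representation used in the next subsection.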

2.3 Probabilistic Inference for Camera Calibration

Given a voxel v_m, m = 1, …, M, and a projection matrix P_n for an image I_n, n = 1, …, N, we can find the pixel location of v_m on that image through x_{m,n} = P_n v_m. In this way we can match the pixel location of the voxel on each of the N images. By integrating the probabilities of the voxel from all N images, we obtain its total probabilities, which denote its likelihood of belonging to the foreground and to the background. This may be implemented in the form of a joint probability (with F the foreground model and B the background model):

    p_m = ∏_{n=1}^{N} p(I_n(x_{m,n}) | F) + ∏_{n=1}^{N} p(I_n(x_{m,n}) | B).      (3)

Taking all the voxels into account, we formulate the cost function as the joint probability distribution of all the voxels, which indicates the total likelihood of the separation of the foreground from the background in the constrained 3D world frame:

    C(θ) = ∏_{m=1}^{M} p_m.      (4)

For simplicity, we use the logarithmic form of the cost function:

    log C(θ) = Σ_{m=1}^{M} log p_m.      (5)
We use the evolution strategy [16], an unconstrained optimization method, to maximize the cost function. We have found that the evolution strategy offers stable and robust performance given a reasonable initialization. Specifically, we initialize the camera parameters as θ_0 = (0, 0, 0, 0, δ, 2δ, …, (N-1)δ), where δ is the initial circular motion step manipulated by the stepper motor. We terminate the optimization when the improvement of the cost function falls below a preset tolerance. In Fig. 2, we compare the reconstruction effect of the camera calibration. A well-calibrated camera configuration produces a high-fidelity 3D model, and the projected masks can well distinguish the subject.
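The interaction of the cost in Eqs. (3)-(5) with an evolution-strategy loop can be illustrated on a toy 1-D problem. Everything here is an illustrative assumption, not the paper's implementation: the 1-D "views" and profiles, the simple shift standing in for the 3D projection, and a (1+1) evolution strategy standing in for CMA-ES [16]:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy setting: each "view" is a 1-D probability profile; a hypothetical object
# sits near pixel 50, shifted by an unknown per-view offset (the roll analogue).
W, N = 100, 8

def prob_maps(shift):
    """Foreground/background probability profiles for one shifted view."""
    x = np.arange(W)
    pf = np.exp(-0.5 * ((x - (50.0 + shift)) / 8.0) ** 2)
    pb = 0.5 * (1.0 - pf)            # weaker, complementary background model
    return pf, pb

true_shifts = rng.uniform(-10.0, 10.0, N)   # unknown ground-truth offsets
views = [prob_maps(s) for s in true_shifts]
voxels = np.linspace(45.0, 55.0, 21)        # voxel range covering the object

def log_cost(shifts):
    """Eq. (5)-style score: sum over voxels of log(prod_n P_F + prod_n P_B)."""
    total = 0.0
    for v in voxels:
        lpf = lpb = 0.0
        for (pf, pb), s in zip(views, shifts):
            px = int(round(v + s)) % W       # toy stand-in for the 3D projection
            lpf += np.log(pf[px] + 1e-12)
            lpb += np.log(pb[px] + 1e-12)
        total += np.logaddexp(lpf, lpb)      # stable log(e^lpf + e^lpb)
    return total

# (1+1) evolution strategy: keep a candidate whenever it improves the cost.
theta = np.zeros(N)                          # reasonable initialization (no offsets)
init = best = log_cost(theta)
for _ in range(2000):
    cand = theta + 2.0 * rng.standard_normal(N)
    c = log_cost(cand)
    if c > best:
        theta, best = cand, c
```

Maximizing the summed log-probability pulls the per-view offsets toward the values that make every voxel consistently foreground (or consistently background) across all views, which is the same mechanism that recovers the roll angles in the real 3D setting.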

Figure 2: An illustration of the effect of camera calibration. (A) A noisy camera configuration generates a poor 3D model, and the projections depart from the region of the subject. (B) A well-calibrated camera configuration produces a vivid 3D model whose projected masks tightly enclose the subject.
Figure 3: Reconstructed 3D models for the zebrafish dataset with different calibration methods.
Figure 4: Reconstructed 3D shapes for the dinosaur statue with different calibration methods.

3 Experiments

In this section, we use the voxel residual (VR) maximization method [12] as a baseline. We separately evaluate the VR method and the proposed probabilistic inference (PI) model on a dataset acquired from our light microscope imaging system under circular motion. To evaluate the generalizability of our method, we also compare the performance of the two methods on a publicly available dataset of a dinosaur statue. The Python implementation of our method is publicly available; we adapt some functions from [17].

3.1 Performance on Light Microscopy Data

We acquired images of three zebrafish specimens (ZF1, ZF2, ZF3) at 3, 4 and 5 days post fertilization (DPF), respectively. For each specimen, we evenly sample 21 views over the full revolution. Mechanical drift makes the rotation step of the specimen unstable, and the rotation axis is not aligned with the specimen center. Thus, we need to calibrate the camera configurations to obtain accurate 3D models of the specimens.

As we do not have the ground-truth (GT) camera parameters for this dataset, we propose two strategies to compare the performance of the VR and PI methods. Based on the calibrated camera data obtained by these two methods, we first apply the shape-based 3D reconstruction method [12] to obtain 3D models. We then (1) inspect the visual quality of the 3D shapes, and (2) use two 3D metrics, volume (V) and surface area (SA), to measure the 3D shapes.
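Both metrics can be computed directly from a binary voxel grid; the sketch below is a minimal illustration in which the face-counting surface estimate is an assumption of this example, not necessarily the measure used in [12]:

```python
import numpy as np

def volume_and_surface(grid, h):
    """Volume and surface area of a binary occupancy grid with voxel spacing h.
    Volume counts occupied voxels; surface area counts exposed voxel faces."""
    g = np.asarray(grid, dtype=bool)
    vol = g.sum() * h ** 3
    # Pad with empty space so faces on the grid boundary are counted too.
    padded = np.pad(g, 1, constant_values=False)
    faces = 0
    for axis in range(3):
        # Each 0->1 or 1->0 transition along an axis is one exposed face.
        diff = np.diff(padded.astype(np.int8), axis=axis)
        faces += np.count_nonzero(diff)
    return vol, faces * h ** 2

# A solid 2x2x2 block of voxels with spacing 1: volume 8, surface area 24.
cube = np.ones((2, 2, 2))
v, sa = volume_and_surface(cube, 1.0)
```

Scaling by the physical voxel spacing h converts the counts into physical units, which is how voxel-based reconstructions yield quantitative 3D measurements.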

In Figure 3 we visualize one selected view of the 3D reconstructed zebrafish models. We find that the proposed PI model provides performance similar to the baseline method, but without the need for accurate image segmentation. In Table 1 we show the 3D metrics. The VR method has been validated for obtaining accurate 3D measurements in light microscopy imaging under circular motion [12]. We see that the 3D metrics of the PI model are very close to those of the VR method. In fact, our method applies to diverse types of light microscopes, e.g., bright-field and fluorescence, and diverse biological models, e.g., zebrafish and mice (data not shown).

          ZF1     ZF2     ZF3
  V   VR  0.251   0.281   0.326
      PI  0.249   0.278   0.325
  SA  VR  3.25    3.57    3.88
      PI  3.24    3.57    3.87

Table 1: 3D Metrics Comparison on Zebrafish Data

3.2 Performance on Public Dataset

For the public dataset, i.e., the dinosaur statue, we have the ground-truth (GT) calibration data. Since its camera parameterization differs slightly from the settings of our zebrafish dataset, we design a strategy to enable the evaluation of the VR and PI methods on this dataset. (1) We decompose the camera parameters from the calibration data. (2) We keep the camera intrinsic parameters fixed and add a certain level of noise to the camera extrinsic parameters, aligning the evaluation with our circular-motion imaging protocol. Specifically, we choose 10 roll angles out of the total 36 and perturb the settings by adding 10 degrees to each of the chosen angles. (3) We apply the VR and the proposed PI methods to optimize the noisy camera parameters. (4) We compare the optimized camera parameters with the GT data.
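The perturbation step of this protocol can be sketched as follows; the evenly spaced GT rolls and the error metric are illustrative assumptions, and only the "add 10 degrees to 10 of the 36 angles" part comes from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

gt_rolls = np.arange(0.0, 360.0, 10.0)            # 36 roll angles (illustrative spacing)
noisy_rolls = gt_rolls.copy()
picked = rng.choice(gt_rolls.size, size=10, replace=False)
noisy_rolls[picked] += 10.0                       # perturb 10 of the 36 angles

def mean_roll_error(estimated, gt):
    """Mean absolute roll error (degrees) against the ground truth."""
    return float(np.mean(np.abs(np.asarray(estimated) - np.asarray(gt))))

err_before = mean_roll_error(noisy_rolls, gt_rolls)   # 10 angles each off by 10 deg
```

Running a calibration method on the perturbed rolls and recomputing the same error against the GT then quantifies how much of the injected noise it removes.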

In Table 2 we compare the calibrated camera parameters from the VR and PI methods with the GT. Our method obtains accurate camera calibration results under natural scene imaging conditions. For a perceptual evaluation, we again apply the shape-based method to reconstruct the 3D model of the dinosaur statue. In Figure 4 (A), we show one original input image; we then show the 3D models reconstructed from: (B) the noisy camera parameters; (C) the ground-truth camera parameters; (D) the calibrated camera parameters of the VR method; and (E) the calibrated camera parameters of the proposed PI model. The results show that our method can be used for generic multiview stereo.

  Roll  GT   87.25   79.00   69.15   59.22   49.21
        VR   87.25   78.95   69.01   59.11   48.77
        PI   87.29   78.88   69.18   59.22   48.91
  Roll  GT   39.21   29.23   19.28    9.28   -1.36
        VR   38.40   28.28   18.25    8.55   -1.26
        PI   38.50   28.66   18.59    8.76   -1.24

Table 2: Calibration Results Comparison on Public Data

4 Conclusions

We proposed a new camera calibration method for light microscopy under circular motion. Our method uses probabilistic inference to maximize the likelihood of the separation between foreground and background. The method is free of complicated image pre-processing, such as key point detection and image segmentation, yet still recovers accurate camera parameters in light microscopy imaging. The potential of the method lies in its successful application to camera calibration in generic imaging situations. Here we point out two issues. (1) Because the cost function optimization in our method is unconstrained, it requires a relatively good initialization. In most imaging setups the mechanical parameters are usually known, e.g., the rotation angle of the control motors, so this problem can be addressed by providing a reasonable estimate of the camera extrinsic parameters. (2) Due to the intensive 3D projections from the voxels to each of the images, the efficiency of our method still needs to improve. We plan to apply a parallel scheduler to accelerate the proposed method.


This study is supported in part by a grant from the Chinese Academy of Sciences (No. Y9S9MS01/292019000056) and a grant from the University of Chinese Academy of Sciences (No. 115200M001). This study is also partially supported by the National Natural Science Foundation of China (No. 31971289). We thank Dr. Wolfgang Niem at the University of Hannover and the VGG group at the University of Oxford for providing the dinosaur data.


  • [1] Carlos Pardo-Martin, Tsung-Yao Chang, Bryan Kyo Koo, Cody L Gilleland, Steven C Wasserman, and Mehmet Fatih Yanik, “High-throughput in vivo vertebrate screening,” Nature Methods, vol. 7, no. 8, p. 634, 2010.
  • [2] Richard Hartley and Andrew Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2003.
  • [3] Gil Ben-Artzi, Yoni Kasten, Shmuel Peleg, and Michael Werman, “Camera calibration from dynamic silhouettes using motion barcodes,” in CVPR, 2016, pp. 4095–4103.
  • [4] Abhishek Kar, Christian Häne, and Jitendra Malik, “Learning a multi-view stereo machine,” in NeurIPS, 2017, pp. 365–376.
  • [5] Po-Han Huang, Kevin Matzen, Johannes Kopf, Narendra Ahuja, and Jia-Bin Huang, “Deepmvs: Learning multi-view stereopsis,” in CVPR, 2018, pp. 2821–2830.
  • [6] Xiuming Zhang, Zhoutong Zhang, Chengkai Zhang, Josh Tenenbaum, Bill Freeman, and Jiajun Wu, “Learning to reconstruct shapes from unseen classes,” in NeurIPS, 2018, pp. 2257–2268.
  • [7] Janne Heikkila, Olli Silven, et al., “A four-step camera calibration procedure with implicit image correction,” in CVPR, 1997, vol. 97, p. 1106.
  • [8] Zhengyou Zhang, “A flexible new technique for camera calibration,” TPAMI, vol. 22, 2000.
  • [9] Changchang Wu, Sameer Agarwal, Brian Curless, and Steven M Seitz, “Multicore bundle adjustment,” in CVPR, 2011, pp. 3057–3064.
  • [10] Hendrik PA Lensch, Wolfgang Heidrich, and Hans-Peter Seidel, “A silhouette-based algorithm for texture registration and stitching,” Graphical Models, vol. 63, no. 4, pp. 245–262, 2001.
  • [11] Carlos Hernández, Francis Schmitt, and Roberto Cipolla, “Silhouette coherence for camera calibration under circular motion,” TPAMI, vol. 29, no. 2, pp. 343–349, 2007.
  • [12] Yuanhao Guo, Wouter J Veneman, Herman P Spaink, and Fons J Verbeek, “Three-dimensional reconstruction and measurements of zebrafish larvae from high-throughput axial-view in vivo imaging,” Biomedical optics express, vol. 8, no. 5, pp. 2611–2634, 2017.
  • [13] Yuanhao Guo, Yunpeng Zhang, and Fons J Verbeek, “A two-phase 3-d reconstruction approach for light microscopy axial-view imaging,” IEEE Journal of Selected Topics in Signal Processing, vol. 11, no. 7, pp. 1034–1046, 2017.
  • [14] Kerstin Howe, Matthew D Clark, Carlos F Torroja, James Torrance, Camille Berthelot, Matthieu Muffato, John E Collins, Sean Humphray, Karen McLaren, Lucy Matthews, et al., “The zebrafish reference genome sequence and its relationship to the human genome,” Nature, vol. 496, no. 7446, p. 498, 2013.
  • [15] Kalin Kolev, Thomas Brox, and Daniel Cremers, “Fast joint estimation of silhouettes and dense 3d geometry from multiple images,” TPAMI, vol. 34, no. 3, pp. 493–505, 2012.
  • [16] Nikolaus Hansen and Stefan Kern, “Evaluating the cma evolution strategy on multimodal test functions,” in International Conference on Parallel Problem Solving from Nature, 2004, pp. 282–291.
  • [17] Ben Tordoff, “Carving a dinosaur,” MATLAB Central File Exchange.