
Multiview point cloud registration with anisotropic and space-varying localization noise

01/03/2022
by   Denis Fortun, et al.

In this paper, we address the problem of registering multiple point clouds corrupted with highly anisotropic localization noise. Our approach follows the widely used framework of Gaussian mixture model (GMM) reconstruction with an expectation-maximization (EM) algorithm. Existing methods are based on an implicit assumption of space-invariant isotropic Gaussian noise. However, this assumption is violated in practice in applications such as single molecule localization microscopy (SMLM). To address this issue, we propose to introduce an explicit localization noise model that decouples shape modeling with the GMM from noise handling. We design a stochastic EM algorithm that considers the noise-free data as a latent variable, with closed-form solutions at each EM step. The first advantage of our approach is to handle space-variant and anisotropic Gaussian noise with arbitrary covariances. The second advantage is to leverage the explicit noise model to impose prior knowledge about the noise that may be available from physical sensors. We show on various simulated data that our noise handling strategy significantly improves the robustness to high levels of anisotropic noise. We also demonstrate the performance of our method on real SMLM data.


1 Introduction

The acquisition of point cloud data has become ubiquitous for the perception of the 3D environment with the advent of sensors such as LiDAR and time-of-flight cameras, used for many applications in robotics, autonomous driving or augmented reality. The registration of point clouds is a fundamental step to achieve various tasks, including 3D scene reconstruction, pose estimation or localization. Besides these classical application fields, we particularly focus in this work on point cloud data acquired in single molecule localization microscopy (SMLM)

[Sauer13], which is one of the reference techniques in fluorescence microscopy. Each point in SMLM represents the localization of a fluorophore attached to specific proteins, with a localization accuracy of a few nanometers. Registration plays an important role in SMLM in the single particle reconstruction paradigm [Fortun18], where multiple randomly oriented copies of a biological particle are registered in order to reconstruct a single 3D model with improved resolution.

Most registration methods are based on an iterative procedure that alternates between the computation of distances between points, and the estimation of a rigid transformation that aligns the point clouds. The iterative closest point (ICP) method is the oldest and most popular representative of this approach [Besl92]. It retains only the smallest point-to-point distances to establish binary correspondences that enable a fast estimation of the optimal transformation parameters. However, the choice of binary correspondences limits the robustness of ICP to noise, missing data and initialization.
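As a concrete illustration of this alternation, the following is a minimal ICP sketch in Python (assuming NumPy and SciPy); the function names, the fixed iteration count, and the hard nearest-neighbor matching are illustrative choices, not part of any cited method.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Closed-form least-squares rigid transform (Kabsch/SVD) mapping src onto dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(source, target, n_iter=50):
    """Basic ICP: binary nearest-neighbor correspondences + closed-form transform update."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    for _ in range(n_iter):
        moved = source @ R.T + t
        _, idx = tree.query(moved)                  # hard (binary) correspondences
        R, t = best_rigid_transform(source, target[idx])
    return R, t
```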

To circumvent this issue, the binary matches can be replaced by fuzzy correspondences. This generalization of ICP can be formalized by representing the target point cloud as a Gaussian mixture model (GMM) and solving a maximum likelihood clustering problem with an expectation-maximization (EM) algorithm [Gold98, Chui00, Granger02]. This strategy can be generalized further by considering a generative approach [Horaud10, Eckart18], where the GMM represents the distribution that generated both point clouds. In this framework, the GMM is estimated jointly with the rigid transformation. The generative approach extends naturally to the registration of multiple point sets, and it can considerably reduce the computational complexity when a low number of GMM components is used. A large proportion of multiview registration methods can be cast in this generative framework, which we will call EM-GMM in the remainder of the paper.

A major limitation of EM-GMM methods is the absence of an explicit localization noise model. The localization noise comes from the measurement imperfection inherent to any physical acquisition device. This is particularly true in SMLM, where the points are estimated by a localization algorithm that usually provides a known Gaussian uncertainty associated to each point, with a much higher variance in the axial direction of the microscope. To illustrate the amount of anisotropic noise in SMLM, we show in Figure 1 two acquisitions of a macromolecular assembly called the centriole, which is known to have a barrel-like structure. The symmetry axis of the centriole is aligned with the axial direction z in the first acquisition (Fig. 1 a,c), and it is orthogonal to z in the second acquisition (Fig. 1 b,d). Fig. 1.a and Fig. 1.b both show a view of the centriole in the direction of its symmetry axis, but the ring shape that corresponds to the circumference of the barrel is clearly retrieved in Fig. 1.a, whereas the axial uncertainty completely blurs the shape in the z direction in Fig. 1.b. This example shows how anisotropic noise can drastically modify the geometry of an object. Consequently, in the absence of a noise model, two point clouds such as those of Fig. 1 cannot be represented by a common GMM, as is done in the standard EM-GMM approach. The origin of the problem is that the GMM has to play two different roles: it has to fit both the shape of the object and the noise. Moreover, the absence of an explicit noise model also prevents us from incorporating prior knowledge about the localization uncertainty available in SMLM and potentially in other acquisition devices.

In this paper, we propose a generalization of EM-GMM registration methods that takes into account space-varying Gaussian localization noise with arbitrary covariances. We explicitly model the observed data as noisy measurements of unknown clean data. Thus, the GMM is fitted on the clean data to model the shape of the object, and it is decoupled from the noise model that generated the observed data. Since the clean data is unknown, our strategy is to consider it as a latent variable in the EM algorithm, which induces an additional marginalization in the expected log-likelihood maximized in the EM algorithm. We use a stochastic EM approach to approximate the integral in the estimation of the rigid transformation parameters, and we derive analytical solutions at each step of the algorithm. We apply our strategy in the framework of the joint registration of multiple point clouds (JRMPC) algorithm [Evangelidis17], which is a generic EM-GMM method that can be applied to multiple point clouds. Our approach is generic and could also be integrated in the numerous EM-GMM variants of [Evangelidis17]. We show experimentally on simulated data that our method significantly improves registration and GMM reconstruction compared to the baseline method in the presence of highly anisotropic and space-varying noise. We also demonstrate the performance and interest of our approach on real SMLM data.

Figure 1: Real data of a centriole acquired with dStorm [Heilemann08]. The two columns show two examples of point clouds, with a 90 degree tilt of the centriole between the two views. The first row represents a view in the direction of the axis of symmetry of the centriole (top view), and the second row represents a view parallel to the axis of symmetry (side view).

2 Related works

In this section, we focus on reviewing the main EM-GMM strategies and their relation with noise handling.

The earliest works on the EM-GMM framework were dedicated to pairwise registration [Chui00, Granger02]. The GMM was designed to fit the target point cloud by defining each of its points as the center of a Gaussian component. In this setting, the E-step amounts to the computation of the weights of soft assignments, and the M-step is the estimation of the rigid transformation. The variances can be embedded in a coarse-to-fine scheme [Granger02], or they can be estimated in the M-step [Myronenko10, Horaud10]. Several variants have been proposed to handle outliers [Hermans11], topological constraints [Myronenko10], symmetrisation of the registration [Combes20], or non-rigid deformations [Chui00]. This approach has recently been extended to the registration of multiple point clouds in [Zhu20], where a point is modeled as a mixture of Gaussians centered at its nearest-neighbor points in the other point clouds.

From the point of view of noise modeling, this formulation is based on an assumption of Gaussian localization uncertainty of each point [Granger02]. When the Gaussians have fixed isotropic covariances, the M-step has an analytical solution and it boils down to standard ICP with fuzzy correspondences. Some methods have extended this approach to handle anisotropic noise [Estepar04, Maier11]. The estimation of the transformation (M-step) becomes a generalized total least square problem that is solved with an iterative algorithm [Ohta98, Balachandran09]. Leveraging anisotropic covariances has also been proposed for the matching (E-step) in [Maier11].

One of the main limitations of these methods is to rely on the assumption of perfect correspondences between noise-free points in the two point clouds. In practice, physical measurement devices are likely to produce different sampling schemes for each new acquisition. Consequently, the perfect correspondence assumption is violated in most cases. A proper noise modeling should decouple the localization uncertainty from the sampling uncertainty.

Moreover, this category of methods has two major limitations already mentioned in Section 1: it is restricted to pairwise registration (excluding [Zhu20]), and since there are as many Gaussian components as points, the computational complexity can become prohibitive. The generative EM-GMM methods [Eckart15, Evangelidis17] have been developed to address these issues. They estimate the GMM as the common distribution of all the point clouds registered in the same pose. This is done by extending the M-step with the estimation of the means and covariances of the GMM in addition to the transformation parameters. It can be formalized as a conditional EM procedure [Evangelidis17].

In order to improve the robustness of the baseline approach [Evangelidis17], some works extended it to handle data composed of both point localization and orientation [Billings15, Ravikumar17, Min20] with a hybrid mixture model. Modeling outlier points and missing data is also a recurrent issue that is usually addressed with an additional uniformly distributed class [Chui00, Myronenko10, Evangelidis17], or by replacing the GMM distribution with a Student's t-mixture model [Ravikumar18].

The generative methods implicitly assume that the localization noise in the data is Gaussian and spatially invariant. The vast majority of methods consider isotropic covariances for the Gaussian components. An anisotropic variant can be found in [Eckart15], but the critical step of estimating the transformation parameters in the M-step is performed with an isotropic approximation of the covariance, so the anisotropy is not properly handled throughout the algorithm. A solution for the M-step is proposed in [Eckart18] with a linear approximation valid only for small angles. Moreover, as mentioned in Section 1, prior knowledge about the noise covariance cannot be included in the model since it is implicitly contained in the covariances of the estimated Gaussian components.

Note that recent methods based on deep neural networks [Aoki19, Wang19, Yew20] also do not rely on explicit noise modeling. These methods are based on the computation of either local [Wang19, Yew20] or global (PointNet) [Aoki19] features. However, the anisotropic noise in SMLM can drastically modify the geometry of an object, both locally and globally (see Fig. 1). Consequently, we found experimentally that these registration algorithms fail to register SMLM data. In [Fan21], we proposed a neural network approach dedicated to the special case of cylindrically symmetric shapes. It yields a coarse alignment and cannot be applied to non-symmetric structures.

Multiview registration in the context of SMLM data has been investigated in a few works. However, the problem is addressed in a simplified setting. In [Broeken15], a template is assumed to be available and is used as a target for pairwise registration. In [Salas17], the point cloud data is transformed to produce 2D image projections, which are used to apply standard reconstruction methods developed for tomography with unknown angles in cryo-electron microscopy. Finally, [Heydarian21] uses the motion averaging framework, based on the computation of all pairwise registrations between input point clouds, followed by a global fusion of the transformation parameters. The pairwise registration step is performed with the method described in [Jian10], which assumes isotropic noise and is therefore incompatible with the highly anisotropic localization uncertainty of SMLM data.

We also mention that the derivation of a GMM clustering method in the presence of uncertainty in the data has been introduced in [Ozerov13] in the context of acoustic data. Our work is inspired by the solution of [Ozerov13] for the GMM fitting part of our algorithm.

3 Registration with anisotropic uncertainty modelling

3.1 Generative formulation with noise modelling

We have point clouds denoted by , where is the number of points of the th point cloud. The total number of points is denoted . The goal is to estimate, for each point cloud, a rigid transformation composed of a rotation matrix and a translation vector , such that the transformed point clouds with are in the same pose.

In the generative formulation of [Evangelidis17], the aligned points are assumed to be drawn from the same distribution, modeled as a GMM composed of Gaussian components. Thus, the GMM acts as a clustering of the union of all registered point clouds. Our purpose is to introduce an explicit noise prior into this model. To this end, we model the observed point clouds as noisy versions of unknown clean point clouds . The distribution of each point is an independent Gaussian perturbation from

(1)

where is the known covariance matrix of the measurement noise and

(2)

with .

The distribution of the unknown clean data is represented by the GMM

(3)

where , and are the mean, covariance matrix and weight of the th GMM component, respectively, is the rigid transformation of the th point cloud, and regroups the parameters of the rigid transformation and the GMM. Note that the probability is well defined as a function of because is an invertible one-to-one mapping with a Jacobian determinant equal to one.
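To make the generative model concrete, here is a small simulation sketch in Python. Since the paper's symbols did not survive extraction, all names (GMM means, isotropic variances, weights, the rigid transform (R, t), and per-point noise covariances) are our own illustrative notation, and isotropic GMM covariances are assumed as in Section 4.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_noisy_view(means, sigma2, weights, R, t, noise_covs):
    """Generate one observed point cloud under the generative model sketched above:
    clean points are drawn from a GMM in the common frame, mapped back to the
    view's own frame by the inverse rigid transform, then perturbed by a
    per-point Gaussian localization noise with known covariance.
    means: (K, 3), sigma2: (K,) isotropic variances, weights: (K,) summing to 1,
    R: (3, 3), t: (3,), noise_covs: list of (3, 3) per-point covariances."""
    n = len(noise_covs)
    comp = rng.choice(len(weights), size=n, p=weights)
    clean_common = means[comp] + rng.normal(
        scale=np.sqrt(sigma2[comp])[:, None], size=(n, 3))
    clean_view = (clean_common - t) @ R          # inverse of x -> R x + t
    noise = np.stack([rng.multivariate_normal(np.zeros(3), C) for C in noise_covs])
    return clean_view + noise
```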

3.2 Maximum likelihood estimation with partially stochastic EM algorithm

3.2.1 Maximum likelihood estimation

Under the independence assumption of each point, the likelihood of the point clouds is defined as

(4)

To find the expression of , we have to marginalize over the unknown clean data :

(5)

We show in Appendix A that it can be written

(6)

where is the noise covariance rotated by . Thus, the likelihood function can be written

(7)

In the standard case [Evangelidis17], the likelihood has the same general form, but the covariances of the Gaussian components are , instead of in (7). Thus, the GMM covariance implicitly contains the noise covariance. If is isotropic (which is required to ensure a reasonable computational cost), this standard model is restricted to an isotropic and shift-invariant noise covariance. In our formulation (7), the noise covariance of each point is decoupled from the Gaussian cluster covariance . This opens the way to anisotropic and spatially varying noise, and it allows us to use prior knowledge of the noise covariance .
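The decoupling can be illustrated with a short sketch: after alignment, each Gaussian component seen by an observed point is effectively broadened by that point's rotated noise covariance. The code below assumes NumPy/SciPy and generic variable names; it mirrors the structure of (7) rather than reproducing the paper's exact expression.

```python
import numpy as np
from scipy.stats import multivariate_normal

def point_log_likelihood(y, R, t, means, covs, weights, noise_cov):
    """Log-likelihood of one observed point y of a given view: after alignment,
    each GMM component k is effectively convolved with the rotated localization
    noise, giving covariance covs[k] + R @ noise_cov @ R.T."""
    aligned = R @ y + t
    rotated_noise = R @ noise_cov @ R.T
    comps = [w * multivariate_normal.pdf(aligned, mean=m, cov=c + rotated_noise)
             for w, m, c in zip(weights, means, covs)]
    return np.log(np.sum(comps))
```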

The optimization of the likelihood (7) is cumbersome and is achieved with the EM algorithm [Dempster77], which is a widely used procedure for estimating Gaussian mixture densities [Bilmes98]. The EM algorithm introduces an auxiliary function , such that if , then . Based on this property, the EM algorithm proceeds by defining a series of models such that maximizes and thereby .

3.2.2 Derivation of the auxiliary function

To derive the auxiliary function, the problem is reformulated with the introduction of latent variables. In the conventional GMM case [Bilmes98, Evangelidis17], the latent variables are defined such that the value of indicates the Gaussian component assigned to the point . The auxiliary function is then defined as the expected log-likelihood , which involves a marginalization of over .

In this work, in addition to the Gaussian component assignment , we formalize the role of by also considering it as latent data. Since there are two types of latent variables, the auxiliary function is a marginalization over and :

(8)

The term of (8) is called the complete-data likelihood, as opposed to the incomplete-data likelihood (7). In our application, we have

(9)

Using (9) and following the same strategy as for the standard GMM case [Bilmes98], we show in Appendix B that (8) can be written

(10)

where we use the notation , and where

(11)

are the conditional distributions of the latent data. Note that the auxiliary function in [Evangelidis17] can be obtained by replacing in (10) the term by , where is the Dirac function, which removes the integral over the clean data . The major difficulty in our case is to deal with this integral in the maximization of Q in (10).

To lighten the notations in what follows, the parameters of and will be both denoted , , , and , without reference to the iteration number .

Using (6), the expression of is simply:

(12)

where

(13)

We detail in Appendix C the derivation of the expression of :

(14)

where

Based on (14), the point can be seen as a denoised version of under the hypothesis that the point is assigned to the Gaussian component .
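For a Gaussian noise model and a Gaussian component, this conditional distribution is the standard conjugate posterior, i.e. a precision-weighted combination of the (aligned) observation and the component mean. The sketch below states that textbook update in our own notation; the paper's exact expressions are those of (14) and Appendix C.

```python
import numpy as np

def clean_point_posterior(y_aligned, mu_k, S_k, noise_cov_rot):
    """Posterior of the (aligned) clean point given the noisy aligned observation
    y_aligned and assignment to component k: a Gaussian conjugate update, i.e. a
    precision-weighted combination of the measurement and the component mean.
    S_k: component covariance, noise_cov_rot: rotated localization noise covariance."""
    S_inv = np.linalg.inv(S_k)
    N_inv = np.linalg.inv(noise_cov_rot)
    post_cov = np.linalg.inv(S_inv + N_inv)
    post_mean = post_cov @ (N_inv @ y_aligned + S_inv @ mu_k)
    return post_mean, post_cov
```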

3.2.3 Space-alternating generalized EM

In a conventional EM algorithm, the E-step would consist in computing the conditional distributions and , and the M-step would be the actual maximization of defined in (10). However, maximizing with respect to all the parameters in is here an intractable task. This issue can be circumvented by searching for that only guarantees an increase . This approach is called Generalized Expectation Maximization (GEM) and ensures that the likelihood increases along the iterations. Several GEM strategies can be considered.

In [Evangelidis17], an Expectation Conditional Maximization (ECM) algorithm [Meng93] is used. The parameters are split into two parts: the GMM parameters , and the transformation parameters . Accordingly, the M-step is split into subproblems related to each variable:

(15)
(16)

In this work, we use instead a space-alternating generalized expectation-maximization (SAGE) [SAGE] algorithm. The parameters are, as in ECM, sequentially updated, but the conditional distributions are also updated at each step. The SAGE algorithm can be written

(17)
(18)

The advantage of SAGE is that it benefits from the updated value to derive new conditional distributions when computing in (18). The counterpart is that the SAGE algorithm requires two E-steps per iteration, whereas the ECM algorithm requires only one.

The next two sections detail how we solve the two subproblems (17) and (18). Without loss of generality and in order not to overload the notation, all parameters related to the second operand of will be denoted .
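As a structural sketch (not the paper's code), one SAGE iteration can be organized as follows; the E-step, rigid update, and GMM update are passed in as callables, which makes the extra E-step of SAGE explicit compared to ECM.

```python
def sage_iteration(params, e_step, m_rigid, m_gmm):
    """One SAGE iteration as described above: unlike ECM (a single E-step followed
    by both M-substeps), SAGE re-runs the E-step after the rigid update so that
    the GMM update sees conditional distributions computed with the new transforms.
    `e_step`, `m_rigid`, `m_gmm` are user-supplied callables (placeholders here)."""
    posteriors = e_step(params)              # first E-step
    params = m_rigid(params, posteriors)     # update rotations and translations
    posteriors = e_step(params)              # second E-step with updated transforms
    params = m_gmm(params, posteriors)       # update GMM means/covariances/weights
    return params
```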

3.3 Update of the transformation parameters

The maximization with respect to (17) is not tractable because of the integral over in (10). To tackle this problem, we resort to a stochastic version of the EM algorithm [Celeux85]. The principle is to approximate the integral in the auxiliary function by stochastic simulation. This is achieved here by drawing a single sample from for each . Since the distribution is defined in the reference frame of the aligned point clouds, we have to bring the sample back to the original reference frame by applying the inverse transform. We obtain (where we recall that and are the parameters of the previous iteration ). The auxiliary function (10) is then approximated by:

(19)

Thus, the estimation of the rigid transforms amounts to solve

(20)

Under the hypothesis , this is a classical weighted Procrustes problem that admits a closed-form solution based on the SVD [Sorkine09, Evangelidis17]. In the general case, the problem (20) can be solved using convex optimization tools [Horaud10].

Note that the auxiliary function could be approximated by a larger number of samples, which would lead to the Monte-Carlo EM approach [Wei90], but we found experimentally that one sample was sufficient. We emphasize that our approach is partially stochastic since the simulation of samples is only applied to , while the other latent variables are handled as in deterministic EM. Algorithm 1 summarizes the different steps to update the transformation parameters.

Algorithm 1 (update of the transformation parameters):
E-step: compute the conditional expectations and generate, for each point, one sample of the clean data from its conditional distribution.
M-rigid step: update the rigid transformation parameters by solving (20).
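For reference, under the isotropic-covariance hypothesis of Section 4, problem (20) reduces to a weighted least-squares alignment of the sampled clean points against per-point targets built from the posteriors and the component means [Sorkine09]. The generic closed-form solver for that weighted problem can be sketched as follows; the weights and target points are assumed to be provided by the E-step, and the names are illustrative.

```python
import numpy as np

def weighted_procrustes(x, targets, w):
    """Closed-form solution of min_{R,t} sum_i w_i ||R x_i + t - targets_i||^2
    via the weighted Kabsch/SVD construction (the building block of the M-rigid
    step when the GMM covariances are isotropic).
    x, targets: (N, 3) arrays, w: (N,) nonnegative weights."""
    w = w / w.sum()
    mx = (w[:, None] * x).sum(axis=0)                 # weighted centroids
    mt = (w[:, None] * targets).sum(axis=0)
    H = (x - mx).T @ (w[:, None] * (targets - mt))    # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mt - R @ mx
    return R, t
```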

3.4 Update of the parameters of the GMM

When the transformation parameters are fixed, the optimization problem (18) amounts to fitting a GMM on the union of the aligned point clouds . In the noise-free case, the update equations are the same as in the standard GMM case [Bilmes98, Evangelidis17]. In our model, we have to deal with the integral over the clean data in (10), which makes the problem more difficult. We show in Appendix D that the following update equations can be derived from the expression of in (10):

(21)
(22)
(23)

with the notation . These updates have the same form as in the standard noise-free case. The only difference is that the terms that involve the unknown clean data are replaced here by their conditional expectation. We show in Appendix D that these conditional expectations have simple closed forms. This leads to the update steps summarized in Algorithm 2.

Algorithm 2 (update of the parameters of the GMM):
E-step: compute the conditional expectations of the latent assignments and of the clean data.
M-GMM step: update the GMM parameters with (21)-(23).
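The flavor of updates (21)-(23) can be sketched as follows in our own notation: the usual weighted GMM updates, with the unknown clean points replaced by their conditional means, and the isotropic variance augmented by the trace of their conditional covariances. This is a sketch under the isotropic-covariance setting of Section 4, not the paper's exact code.

```python
import numpy as np

def m_gmm_step(post_w, post_means, post_covs, eps=1e-9):
    """Noise-aware GMM M-step sketch.
    post_w:     (N, K) responsibilities
    post_means: (N, K, 3) conditional means of the clean (aligned) points
    post_covs:  (N, K, 3, 3) conditional covariances of the clean (aligned) points
    Returns updated weights, means, and isotropic variances."""
    Nk = post_w.sum(axis=0) + eps                               # effective counts
    weights = Nk / Nk.sum()
    means = np.einsum('nk,nkd->kd', post_w, post_means) / Nk[:, None]
    diff = post_means - means[None, :, :]
    # E||x - mu_k||^2 = ||E[x] - mu_k||^2 + trace(Cov[x])
    second_moment = (diff ** 2).sum(-1) + np.einsum('nkii->nk', post_covs)
    sigma2 = (post_w * second_moment).sum(axis=0) / (3.0 * Nk)  # isotropic variance
    return weights, means, sigma2
```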

The same algorithm has previously been derived in [Ozerov13] for GMM fitting with uncertainties on acoustic data. In [Ozerov13], the update equations are directly given after showing that the complete data belongs to the exponential family. We follow here an alternative derivation from the auxiliary function , which provides details that are omitted in [Ozerov13].

4 Implementation details

In practice, we do not use the update equation of the priors (21), in accordance with [Evangelidis17]. We observed that keeping them constant does not alter the convergence and the quality of the registration.

The methods described in Section 3 are not robust to outlier points that do not fit the reconstructed GMM distribution. To overcome this issue, we use the standard approach that consists in adding a class dedicated to outliers, modeled by a uniform distribution. We did not include this outlier handling in the description of the method for the sake of clarity. When the outlier class is included, Eq. (3) becomes

(24)

where is the uniform distribution parameterized by the volume of the 3D convex hull encompassing the data. Since the priors are not updated, we can show that this modification only modifies the normalization in the computation of the posteriors in the E-step, such that (12) becomes (for ):

(25)

The outlier handling role of the normalization in (25) can be understood intuitively. Suppose that is an outlier. Without this additional class, since we ensure , all points have a similar influence on the update of the parameters. With this additional class, however, as is an outlier, we expect to be much smaller than , so that the posterior is close to for ( is close to 1 since ). This means that this point has very little impact on the update of the parameters.
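A sketch of this normalization, assuming the per-component Gaussian likelihoods have already been evaluated and that the uniform class has density 1/V over a convex hull of volume V (names are illustrative):

```python
import numpy as np

def responsibilities_with_outlier_class(gauss_lik, weights, outlier_prior, hull_volume):
    """Responsibilities when a uniform outlier class is appended to the mixture:
    an outlier point has small Gaussian likelihoods, so most of its posterior mass
    goes to the uniform class and its influence on the M-step shrinks.
    gauss_lik: (N, K) Gaussian likelihoods of each point under each component."""
    unnorm = weights[None, :] * gauss_lik                 # (N, K) weighted likelihoods
    outlier = outlier_prior / hull_volume                 # uniform-class term
    norm = unnorm.sum(axis=1, keepdims=True) + outlier
    return unnorm / norm                                  # posteriors for the K components
```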

As in [Evangelidis17], we consider a GMM with isotropic covariances, i.e., . In this case, the update of the transformation parameters in (20) has a simple closed-form solution. Concerning the update of the covariances of the GMM in Alg. 2, we can easily show that the variance parameter can be obtained by .

5 Experimental Results

In this section we provide an experimental analysis of our method on synthetic and real data. Since our algorithm is an extension of the baseline method JRMPC [Evangelidis17], we focus our evaluation on the comparison with this method.

5.1 Results on synthetic data

5.1.1 Dataset

Figure 2: Simulated data of the centriole (top) and triplets (bottom).

Figure 3: Examples of simulated point clouds with different levels of anisotropic noise from the bunny (top) and centriole (bottom) models ( in all the examples).

Since our primary goal is to apply our algorithm to microscopy data, we simulated the tubulin structure of the centriole, which is a fundamental macromolecular assembly involved in most cellular processes. To this end, we used the structural analysis of [LeGuennec20], where detailed information about the geometry of the centriole is provided. Our simulated structure is shown in Figure 2. It is composed of 9 triplets of overlapping tubes, organized in a ninefold cylindrical symmetry (C9) around the z-axis. This creates a barrel-like structure, with a smaller radius at the top of the structure than at the bottom. We will call this object centriole. We also simulated an object composed of 54 source points, also organized in triplets with a C9 symmetry (Figure 2). It is reminiscent of the structure of the centriole and of other biological particles such as the nuclear pore. We will call this object triplets. Finally, we also used the bunny model from the traditional Stanford dataset (https://graphics.stanford.edu/data/3Dscanrep/). We fix the number of points of the ground truth model to 2000 for centriole and bunny.

A point cloud is created by first rotating the ground truth model, then adding outliers drawn from a uniform law, and finally drawing points from the Gaussian uncertainty distribution . We consider uncertainties of the form

(26)

where is the noise level and imposes anisotropy in one direction. It mimics SMLM data, where the uncertainty in the axial direction is larger than in the lateral plane. To simulate space-varying noise, the noise level at each point is defined as a sample of a Gaussian distribution centered on . In Figure 3, we show examples of simulated point clouds with several values of . We fix the proportion of outlier points to in all our experiments. Finally, the number of point clouds has been set to .
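A sketch of this noise simulation, in illustrative notation (the jitter controlling the spatial variation of the noise level is our own choice, not a value from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_noise_covariances(n_points, sigma, anisotropy, sigma_jitter=0.2):
    """Per-point covariances of the form (26): isotropic in the lateral (x, y)
    plane and stretched by the anisotropy factor along z, with a noise level
    drawn per point from a Gaussian centred on `sigma` to make it space-varying."""
    levels = rng.normal(loc=sigma, scale=sigma_jitter * sigma, size=n_points)
    levels = np.clip(levels, 1e-3, None)          # keep the levels strictly positive
    covs = np.zeros((n_points, 3, 3))
    covs[:, 0, 0] = levels ** 2
    covs[:, 1, 1] = levels ** 2
    covs[:, 2, 2] = (anisotropy * levels) ** 2    # larger axial uncertainty
    return covs
```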

5.1.2 Error metric

Since the pose of the object of interest is arbitrary in the common space, we do not expect the estimated transformation to be close to the inverse of the ground truth transformation , but we expect to be close to , for . The translation parameters are easy to estimate (when the rotation parameters are well-estimated), so we only consider the rotation parameters to evaluate the accuracy of the registration. Thus, our error metric is defined as the distance between the matrices and , or equivalently between the matrices and . Several functions have been proposed to measure the distance between 3D rotation matrices [Rotation]. The distance between and can be computed from the angle of the rotation matrix (that should be close to the identity matrix). This angle can be computed using the quaternion representation (as in [Rotation]), or simply by:

(27)

The distance is then

(28)

(it is possible to compute directly from (27) by using an absolute value for the term inside the function).

The centriole and triplets models have a ninefold cylindrical symmetry with respect to the z-axis. This means that a rotation by an angle of () does not change the object. In this case, the error metric has to be adapted, since there are nine correct rotation parameters. We compute the distance defined in (28) between and , where denotes the rotation matrix around the z-axis by an angle of . We define the error metric by computing the nine possible cases and selecting the minimum value: .
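A sketch of this symmetry-aware error, assuming the global pose ambiguity has already been removed by forming relative rotations; the side on which the symmetry rotation is composed depends on the frame convention, so the composition below is illustrative.

```python
import numpy as np

def rotation_angle_deg(R):
    """Angle of a 3D rotation matrix, using trace(R) = 1 + 2 cos(theta) as in (27)."""
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(c))

def rot_z(angle):
    """Rotation matrix around the z-axis by `angle` radians."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def c9_rotation_error(R_est, R_gt):
    """Distance (28) between an estimated and a ground-truth (relative) rotation,
    minimised over the nine z-rotations that leave a C9-symmetric model unchanged."""
    return min(rotation_angle_deg(R_est.T @ R_gt @ rot_z(2.0 * np.pi * m / 9.0))
               for m in range(9))
```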

Figure 4: Impact of different noise parameters on the registration error. Top row: influence of with ; bottom row: influence of with . The vertical bar represents the standard deviation of the error.

Figure 5: Visual results of the registration of 5 point clouds generated from the triplets and bunny models, for different levels of noise. We show the ground truth models that were used to generate the point clouds (each point cloud has a different color), registered with our method (top row) and JRMPC (bottom row). We consider two combinations of noise parameters: , on the first two columns and , on the last two columns. The quantitative error is given for each case (our method: 1.63, 0.81, 2.92, 3.03; JRMPC: 12.9, 4.3, 20.1, 50.2).

Figure 6: Comparison of the centers of the estimated GMM obtained with our method and JRMPC for the triplets (top row), centriole (middle row), and Bunny (bottom row). Two different views are shown in each case: the first view is on the first three columns and the second view is on the last three columns. The noise parameters have been set to and .

5.1.3 Hyperparameter setting and initialization

We observed that the number of Gaussian components does not have a substantial impact on the registration results as long as is of the same order of magnitude as the number of points. We fixed for the triplets, for the centriole and for the bunny.

As explained in the previous section and as in [Evangelidis17], the priors are not updated and are all set to the same value, except the one that represents the outlier probability. The proportion of outliers can be defined as

(29)

We observed that variations of the value of had very little impact on the registration results. In the following, the priors have been set so that .

Initialization plays a crucial role in EM procedures. As in [Evangelidis17], we suppose that we have a rough initialization of the transformation parameters. The rotation parameters are initialized by drawing the Euler angles from a Gaussian law centered on the ground truth angles, with a standard deviation of degrees. For the initialization of the GMM, we use these transformation parameters to transform each point set to the common space, and we randomly select different points among the transformed points to initialize . The variances are initialized with high values.

We use the same hyperparameters and initialization for JRMPC and our method. To account for the random nature of the initialization, we run both algorithms five times with different GMM initializations and retain for each one the result with the highest likelihood.

5.1.4 Analysis of the results

We are particularly interested in the robustness to noise, which is determined by the noise level and the anisotropy parameter defined in (26). In Figure 4, we show the influence of and on the registration error for our method and JRMPC. As expected, we first observe that the two methods provide comparable results when the anisotropy parameter is equal to 1. Indeed, the implicit assumption of JRMPC is isotropic Gaussian noise. The differences for can be explained by the fact that we apply Gaussian noise with spatially varying covariance, which violates the model of JRMPC. We also observe that when the anisotropy parameter grows (with a fixed noise level ), the error increases much more slowly with our method than with JRMPC (top row of Fig. 4). This highlights the benefit of the proposed modeling. Finally, the same behavior occurs when the noise level increases with a fixed (bottom row of Fig. 4). For low noise levels, the original data is only slightly corrupted by noise and the influence of the anisotropy parameter is negligible, whereas for higher noise levels the influence of the anisotropy parameter becomes dominant, which explains the better performance of our method.

In Figure 5, we illustrate the visual quality of the registration results associated with the quantitative errors of Fig. 4. Since the input point clouds are corrupted with a high level of noise, their mere superimposition after registration is not informative enough about the quality of the registration. Therefore, we show instead the underlying ground truth models that were used to generate the data. We consider two combinations of noise parameters ((, ) and (, )), and we give the associated quantitative error. In the case (, ), the registration with our method is almost perfect, while the results of JRMPC are less accurate. In the higher noise regime (, ), our results remain visually satisfying, but JRMPC completely fails to estimate correct registration parameters.

It is also interesting to look at the GMM density estimated jointly with the registration parameters. We show in Figure 6 the centers of the Gaussian components of the GMM estimated with our method and JRMPC, for a medium noise level defined by and . We can see in each case that the GMM estimated with our method recovers sharper details of the original shapes.

These results show how the integration of our noise model in the EM estimation improves the registration results both numerically and visually. This is explained by the fact that with JRMPC, each Gaussian component of the GMM has to account for both the shape of the object and the noise. Therefore, the GMM reconstructed by JRMPC in Figure 6 tends to fit the noise. Our method decouples noise handling from shape modeling, so that the GMM fits only the shape of the objects in Figure 6. Since the GMM acts as a target for the registration of the input point clouds in the EM algorithm, the quality of the registration is directly intertwined with the quality of the GMM reconstruction.

5.2 Real data

We applied our method on real microscopy data. We used direct stochastic optical reconstruction microscopy (dStorm), which is one of the most popular SMLM techniques, combined with expansion microscopy [Zwettler20]. It produces point clouds with a known anisotropic localization uncertainty of the form (26). We imaged the tubulin structure of the centriole, which is a macromolecular assembly that plays a crucial role in several cellular mechanisms and has motivated a lot of recent research to decipher its structural properties [LeGuennec21]. It is known to have a barrel-like shape and a C9 symmetry structure, from which our simulated centriole structure is inspired (see Fig. 3). The acquired point cloud contained several centrioles, and we manually cropped ten of them, shown in Fig. 1.e-f, to perform registration. As already discussed in Section 1 and Figure 1, we can see that the noise in the axial direction z of the microscope is much higher than in the lateral plane. The localization noise of each point is known and follows a Gaussian law with a covariance of the form (26). The experimental distribution of the anisotropy parameter can be approximated by a Gaussian law with mean 5 and standard deviation 1.5. Therefore, it is crucial to combine the information of each view to reconstruct high resolution particles. Fig. 7 shows all the input point clouds in their original pose. We initialize the registration parameters with the coarse registration method dedicated to centrioles described in [Fan21].

The reconstruction of the GMM is of particular interest in our application since our goal is to reconstruct a model of the particle with a higher resolution than in the original data. The estimated centers of the GMM can serve as such a reconstruction. In Fig. 8, we show the centers estimated with our method and JRMPC. Similarly to the simulated case, we observe that our method captures more precisely the shape of the centriole, and the reconstruction of JRMPC is more influenced by the presence of noise. To obtain Fig. 8.c and 8.f, we created volumes by summing Gaussians of fixed variance centered on each of the estimated centers, and we projected the volumes in the direction of the symmetry axis of the centriole. With our method we can recover the ninefold symmetry of the centriole in Fig. 8.c, whereas the JRMPC result is much more blurred. This shows that our noise handling approach can reveal biologically relevant information that is lost with the baseline method.

In Fig. 9.a,b,e,f we show the registered point clouds obtained with the two methods. It is difficult to visually assess the quality of the registration because of the high amount of noise. Therefore, we show in Fig. 9.c,d,g,h cleaned versions of the registered point clouds. They are obtained by applying the post-processing described in [Evangelidis17], which removes points classified as outliers and points assigned to Gaussian components with high variance (greater than 2.5 times the median variance). A Gaussian component with high variance does not cluster the shape of the object accurately and is likely to also capture noise. We see that the cleaned point clouds remain very noisy with JRMPC, while our cleaned data recovers the shape of the centriole. This means that the variances of our GMM clusters do not capture noise and are concentrated on the particle.
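A sketch of this cleaning step, assuming the posteriors include the outlier class as a last column and that the GMM covariances are isotropic (one variance per component), as in Section 4; names are illustrative.

```python
import numpy as np

def clean_registered_points(points, posteriors, sigma2, var_factor=2.5):
    """Post-processing sketch: keep a point only if its most likely class is a
    Gaussian component (not the outlier class) whose variance does not exceed
    `var_factor` times the median component variance.
    points: (N, 3), posteriors: (N, K+1) with the outlier class in the last column,
    sigma2: (K,) isotropic component variances."""
    labels = posteriors.argmax(axis=1)                      # hard class assignment
    k_outlier = posteriors.shape[1] - 1
    good_comp = sigma2 <= var_factor * np.median(sigma2)    # (K,) low-variance components
    keep = np.array([lab != k_outlier and good_comp[lab] for lab in labels])
    return points[keep]
```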

Figure 7: Superimposition of the ten real dStorm acquisitions of centriole in their original pose (the red and blue point clouds are presented individually in Figure 1). The first column represents a view orthogonal to the direction, and the second column represents a view parallel to the direction.
Figure 8: Centers of the estimated GMM on real data, obtained with our method (a)-(c) and JRMPC (d)-(f). The first two columns are two orthogonal views. The last column shows the projection in the direction of the centriole axis of the volume obtained by evaluating the GMM on a 3D grid.
Figure 9: Registered point clouds from the real data of Fig. 1, obtained with our method (a)-(d) and JRMPC (e)-(h). In the first two columns, we show the original point clouds, and in the last two columns, we show the cleaned point clouds obtained after post-processing.

6 Conclusion

We presented a novel method to handle anisotropic and space-varying noise in the EM-GMM framework for point cloud registration. Unlike previous methods, we define an explicit Gaussian prior on the localization noise. We derived closed-form updates for arbitrary anisotropic covariances at each step of our stochastic EM algorithm. Experimental results showed that our method outperforms the baseline approach [Evangelidis17] when the point clouds are corrupted with highly anisotropic noise, both in terms of registration accuracy and GMM reconstruction. We also emphasized the practical interest of our method on real SMLM data, for which there is an urgent need for registration methods that take into account high localization uncertainty.

Appendix A Computation of

The term can be written

(30)

Using , where , and with the change of variables , we get: