Semidefinite Programming (SDP) and Sums-of-Squares (SOS) relaxations have led to certifiably optimal non-minimal solvers for several robotics and computer vision problems. However, most non-minimal solvers rely on least squares formulations, and, as a result, are brittle against outliers. While a standard approach to regain robustness against outliers is to use robust cost functions, the latter typically introduce other non-convexities, preventing the use of existing non-minimal solvers. In this paper, we enable the simultaneous use of non-minimal solvers and robust estimation by providing a general-purpose approach for robust global estimation, which can be applied to any problem where a non-minimal solver is available for the outlier-free case. To this end, we leverage the Black-Rangarajan duality between robust estimation and outlier processes (which has been traditionally applied to early vision problems), and show that graduated non-convexity (GNC) can be used in conjunction with non-minimal solvers to compute robust solutions, without requiring an initial guess. We demonstrate the resulting robust non-minimal solvers in applications, including point cloud and mesh registration, pose graph optimization, and image-based object pose estimation (also called shape alignment). Our solvers are robust to 70-80% outliers, outperform RANSAC, are more accurate than specialized local solvers, and faster than specialized global solvers. We also extend the literature on non-minimal solvers by proposing a certifiably optimal SOS solver for shape alignment.
Robust estimation is a crucial tool for robotics and computer vision, being concerned with the estimation of unknown quantities (e.g., the state of a robot, or of variables describing the external world) from noisy and potentially corrupted measurements. Corrupted measurements (i.e., outliers) can be caused by sensor malfunction, but are more commonly associated with incorrect data association and model misspecification [1, 2].
In the outlier-free case, common estimation problems are formulated as a least squares optimization:
$$\min_{x \in \mathcal{X}} \; \sum_{i=1}^{N} r^2(y_i, x) \qquad (1)$$
where $x$ is the variable we want to estimate (e.g., the pose of an unknown object); $\mathcal{X}$ is the domain of $x$ (e.g., the set of 3D poses); $y_i$ ($i = 1, \dots, N$) are given measurements (e.g., pixel observations of points belonging to the object); and the function $r(y_i, x)$ is the residual error for the $i$-th measurement, quantifying the mismatch between the expected measurement at an estimate $x$ and the actual measurement $y_i$. In typical robotics and computer vision applications, problem (1) is difficult to solve globally, due to the nonlinearity of the residual errors and the nonconvexity of the domain $\mathcal{X}$. Despite these challenges, the research community has developed closed-form solutions and global solvers for many such problems. Specifically, while closed-form solutions are rare [3, 4], Semidefinite Programming (SDP) and Sums-of-Squares (SOS) [5, 6] relaxations have recently been shown to be a powerful tool for obtaining certifiably optimal solutions to relevant instances of problem (1), ranging from pose graph optimization [7, 8] and rotation averaging [9] to anisotropic registration [10], two-view geometry [11], and PnP [12]. The resulting techniques are commonly referred to as non-minimal solvers, to contrast them against methods that solve eq. (1) using only a small (minimal) subset of measurements (see related work in Section II).
Unfortunately, in the presence of outliers, eq. (1)’s solution provides a poor estimate for $x$. This limits the applicability of existing non-minimal solvers, since they can be applied only after the outliers have been removed. Yet, the theory of robust estimation suggests regaining robustness by substituting the quadratic cost in eq. (1) with a robust cost $\rho$:
$$\min_{x \in \mathcal{X}} \; \sum_{i=1}^{N} \rho\big(r(y_i, x)\big) \qquad (2)$$
For instance, $\rho$ can be a Huber loss, a truncated least squares cost, or a Geman-McClure cost [13]. To date, the application of non-minimal solvers to eq. (2) has been limited. Some of the non-minimal solvers designed for (1) can be extended to include a convex robust cost function, such as the Huber loss [14]; however, it is known that convex losses have a low breakdown point and remain sensitive to gross outliers [15]. In rare cases, the literature provides robust non-minimal solvers that achieve global solutions to specific instances of problem (2), such as robust registration [2, 16]. However, these solvers cannot be easily extended to other estimation problems, and they rely on solving large SDPs, which is currently impractical for large-scale problems.
Contributions. In this paper, we aim to reconcile non-minimal solvers and robust estimation by providing a general-purpose algorithm to solve problem (2) without requiring an initial guess (for a given problem (2), we only assume the existence of a non-minimal solver for the corresponding outlier-free problem (1)). We achieve this by combining non-minimal solvers with an approach known as graduated non-convexity (GNC). In contrast, standard algorithms for problem (2) rely on iterative optimization to refine a given initial guess, which makes the result brittle when the quality of the guess is poor. In particular, we propose three contributions.
First, we revisit the Black-Rangarajan duality [13] between robust estimation and outlier processes. We also revisit the use of graduated non-convexity (GNC) as a general tool to solve non-convex optimization problems without an initial guess. While Black-Rangarajan duality and GNC have been used in early vision problems, such as stereo reconstruction, image restoration and segmentation, and optical flow, we show that combining them with non-minimal solvers allows solving spatial perception problems, ranging from mesh registration and pose graph optimization to image-based object pose estimation.
Our second contribution is to tailor Black-Rangarajan duality and GNC to the Geman-McClure and truncated least squares costs. We show how to optimize these functions by alternating two steps: a variable update, which solves a weighted least squares problem using non-minimal solvers; and a weight update, which updates the outlier process in closed form.
Our approach requires a non-minimal solver, but currently there is no such solver for image-based object pose estimation (also known as shape alignment). Our third contribution is to present a novel non-minimal solver for shape alignment. While related techniques propose approximate relaxations [17], we provide a certifiably optimal solution using SOS relaxation.
We demonstrate our robust non-minimal solvers on point cloud registration (P-REG), mesh registration (also known as generalized registration, G-REG), pose graph optimization (PGO), and shape alignment (SA). Our solvers are robust to 70-80% outliers, outperform RANSAC, are more accurate than specialized local solvers, and are faster than global solvers.
Minimal Solvers. Minimal solvers use the smallest number of measurements necessary to estimate $x$. Examples include the 2-point solver for the Wahba problem (a rotation-only variant of P-REG) [18], the 3-point solver for P-REG [3], and the 12-point solver [19] for G-REG with point-to-plane correspondences. Notably, the approach [20] enables minimal solvers for a growing number of estimation problems.
Non-minimal Solvers. Minimal solvers do not leverage data redundancy, and, consequently, can be sensitive to measurement noise. Therefore, non-minimal solvers have been developed. Typically, non-minimal solvers assume Gaussian measurement noise, which results in a least squares optimization framework. In some cases, the resulting optimization problems can be solved in closed form, e.g., Horn’s [3] and Arun’s [4] methods for P-REG, or PLICP [21] for G-REG. Generally, however, the resulting optimization problems are hard, and researchers have developed exponential-time methods for their solution, such as Branch and Bound (BnB). Hartley and Kahl [22] introduce a BnB search over the rotation space to globally solve several vision problems. Olsson et al. [23] develop optimal solutions for G-REG. Recently, Semidefinite Programming (SDP) and Sums of Squares (SOS) relaxations [5, 6] have been used to develop polynomial-time algorithms with certifiable optimality guarantees. Briales and Gonzalez-Jimenez [10] solve G-REG using SDP. Carlone et al. [7, 24] use Lagrangian duality and SDP relaxations for PGO. Rosen et al. [8] develop SE-Sync, a fast solver for the relaxation in [7, 24]. Mangelson et al. [25] apply the sparse bounded-degree variant of SOS relaxation, also for PGO. Currently, there are no certifiably optimal non-minimal solvers for shape alignment; Zhou et al. [17] propose a convex relaxation to obtain an approximate solution for SA.
Global Methods. We refer to a robust method as global if it does not require an initial guess. Several global methods adopt the framework of consensus maximization [26, 27, 28], which looks for an estimate that maximizes the number of measurements that are explained within a prescribed estimation error. Consensus maximization is NP-hard [27, 29], and related work investigates both approximations and exact algorithms. RANSAC [30] is a widely used heuristic [31], applicable when a minimal solver exists. But RANSAC provides no optimality guarantees, and its running time grows exponentially with the outlier ratio [32]. Tzoumas et al. [29] develop the general-purpose Adaptive Trimming (ADAPT) algorithm, which has linear running time and, instead, is applicable when a non-minimal solver exists. Mangelson et al. [33] propose a graph-theoretic method to prune outliers in PGO. Exact solutions for consensus maximization are based on BnB [28]: see [34] for the Wahba problem, and [32, 35] for P-REG.
Another framework for global methods is M-estimation, which resorts to robust cost functions. Enqvist et al. [36] use a truncated least squares (TLS) cost, and propose an approach that scales, however, exponentially with the dimension of the parameter space. More recently, SDP relaxations have also been used to optimize robust costs. Carlone and Calafiore [14] develop convex relaxations for PGO with $\ell_1$-norm and Huber loss functions. Lajoie et al. [15] adopt a TLS cost for PGO. Yang and Carlone [2, 16] develop an SDP relaxation for the Wahba and P-REG problems, also adopting a TLS cost. Currently, the poor scalability of state-of-the-art SDP solvers limits these algorithms to small problems.
Finally, graduated non-convexity (GNC) methods have also been employed to optimize robust costs [37]. Rather than directly optimizing a non-convex robust cost, these methods sequentially optimize a sequence of surrogate functions, which start from a convex approximation of the original cost and gradually become non-convex, eventually converging to the original cost. Despite GNC’s success in early computer vision applications [13, 38], its broad applicability has remained limited by the lack of non-minimal solvers. Indeed, only a few specialized methods for spatial perception have used GNC. Particularly, Zhou et al. [39] develop a method for P-REG, using Horn’s or Arun’s methods [3, 4] (see also [40]).
Local Methods. In contrast to global methods, local methods require an initial guess. In the context of M-estimation, these methods iteratively optimize a robust cost function until they converge to a local minimum [41]. Zach et al. [42, 43] include auxiliary variables and propose an iterative optimization approach that alternates updates on the estimates and the auxiliary variables; the approach still requires an initial guess. Bouaziz et al. [44] propose robust variants of the iterative closest point algorithm for P-REG. Sünderhauf and Protzel [45], Olson and Agarwal [46], Agarwal et al. [47], and Pfingsthorn and Birk [48] propose local methods for PGO. Wang et al. [49] investigate local methods for shape alignment.
We review the Black-Rangarajan duality [13], and a tool for global optimization known as graduated non-convexity [37].
This section revisits the Black-Rangarajan duality between robust estimation and outlier processes [13]. This theory is less known in robotics, and its applications have mostly targeted early vision problems, with few notable exceptions.
Given a robust cost function $\rho(r)$, define $\varphi(z) \doteq \rho(\sqrt{z})$. If $\varphi$ satisfies $\lim_{z \to 0^+} \varphi'(z) = 1$, $\lim_{z \to \infty} \varphi'(z) = 0$, and $\varphi''(z) < 0$, then the robust estimation problem (2) is equivalent to
$$\min_{x \in \mathcal{X},\; w_i \in [0,1]} \; \sum_{i=1}^{N} \left[ w_i\, r^2(y_i, x) + \Phi_\rho(w_i) \right] \qquad (3)$$
where $w_i \in [0,1]$ ($i = 1, \dots, N$) are slack variables (or weights) associated with each measurement $y_i$, and the function $\Phi_\rho(w_i)$ (the so-called outlier process) defines a penalty on the weight $w_i$. The expression of $\Phi_\rho$ depends on the choice of robust cost function $\rho$.
The conditions on $\varphi$ are satisfied by all common choices of robust costs [13]. Besides presenting this fundamental result, asserting the equivalence between the outlier process formulation (3) and the robust estimation problem (2), Black and Rangarajan provide a procedure (see Fig. 10 in [13]) to compute $\Phi_\rho$ and show that common robust cost functions admit a simple analytical expression for $\Phi_\rho$. Interestingly, formulations like (3) have often been used in robotics and SLAM [47, 45] without acknowledging the connection with robust estimation, and with a heuristic design of the penalty terms $\Phi_\rho(w_i)$.
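To make the duality concrete, here is a small numeric check (our own sketch, not from the paper), assuming the Geman-McClure cost with $\bar{c} = 1$ and the penalty $\Phi_\rho(w) = \bar{c}^2(\sqrt{w}-1)^2$: minimizing the inner problem over the weight recovers the robust cost, and the minimizing weight matches the closed form that eq. (12) gives for $\mu = 1$.

```python
import numpy as np

def rho_gm(r, c=1.0):
    """Geman-McClure robust cost: rho(r) = c^2 r^2 / (c^2 + r^2)."""
    return (c**2 * r**2) / (c**2 + r**2)

def phi_gm(w, c=1.0):
    """Black-Rangarajan outlier-process penalty for Geman-McClure."""
    return c**2 * (np.sqrt(w) - 1.0)**2

# min over w in (0, 1] of  w*r^2 + phi(w)  equals rho(r):
w = np.linspace(1e-9, 1.0, 200001)
for r in [0.1, 0.5, 1.0, 3.0]:
    inner = w * r**2 + phi_gm(w)
    assert abs(inner.min() - rho_gm(r)) < 1e-6
    # the minimizing weight is (c^2 / (c^2 + r^2))^2:
    assert abs(w[inner.argmin()] - 1.0 / (1.0 + r**2)**2) < 1e-3
```

Large residuals thus receive weights close to zero, which is exactly how the outlier process discounts outliers.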
Graduated non-convexity (GNC) is a popular approach for the optimization of a generic non-convex cost function $\rho$, and has been used in several endeavors, including vision [37] and machine learning [50] (see [51] for more applications). The basic idea of GNC is to introduce a surrogate cost $\rho_\mu$, governed by a control parameter $\mu$, such that (i) for a certain value of $\mu$, the function $\rho_\mu$ is convex, and (ii) in the limit (typically for $\mu$ going to 1 or infinity) one recovers the original (non-convex) $\rho$. GNC then computes a solution to the non-convex problem by starting from its convex surrogate and gradually changing $\mu$ (i.e., gradually increasing the amount of non-convexity) until the original non-convex function is recovered. The solution obtained at each iteration is used as the initial guess for the subsequent iteration, while no initial guess is needed in the first iteration, since GNC starts with a convex cost which can be optimized globally from any initial guess. Let us shed some light on GNC with two examples.
The Geman-McClure (GM) function is a popular (non-convex) robust cost. The following equation shows the GM function $\rho$ (left) and the surrogate function $\rho_\mu$ including a control parameter $\mu$ (right):
$$\rho(r) = \frac{\bar{c}^2 r^2}{\bar{c}^2 + r^2}, \qquad \rho_\mu(r) = \frac{\mu \bar{c}^2 r^2}{\mu \bar{c}^2 + r^2} \qquad (4)$$
where $\bar{c}$ is a given parameter that determines the shape of the Geman-McClure function $\rho$.
The surrogate function $\rho_\mu$ (shown in Fig. 1(a)) is such that: (i) $\rho_\mu$ becomes convex for large $\mu$ (in the limit of $\mu \to \infty$, $\rho_\mu$ becomes quadratic), and (ii) $\rho_\mu$ recovers $\rho$ when $\mu = 1$. GNC minimizes the function $\rho$ by repeatedly minimizing the function $\rho_\mu$ for decreasing values of $\mu$.

The truncated least squares (TLS) cost is another popular non-convex robust cost, defined as $\rho(r) = r^2$ if $r^2 \in [0, \bar{c}^2]$ and $\rho(r) = \bar{c}^2$ otherwise. A GNC surrogate with control parameter $\mu$ is

$$\rho_\mu(r) = \begin{cases} r^2 & \text{if } r^2 \in \left[0, \frac{\mu}{\mu+1}\bar{c}^2\right] \\ 2\bar{c}|r|\sqrt{\mu(\mu+1)} - \mu(\bar{c}^2 + r^2) & \text{if } r^2 \in \left[\frac{\mu}{\mu+1}\bar{c}^2, \frac{\mu+1}{\mu}\bar{c}^2\right] \\ \bar{c}^2 & \text{if } r^2 \in \left[\frac{\mu+1}{\mu}\bar{c}^2, +\infty\right) \end{cases}$$

which is convex for $\mu \to 0$ and recovers the TLS cost for $\mu \to \infty$.
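The two limiting properties of the GM surrogate in (4) can be verified numerically; a minimal sketch (our own, assuming $\bar{c} = 1$ and residuals restricted to $[-5, 5]$), which also checks convexity of the surrogate via second differences once $\mu \bar{c}^2 \geq 3 r_{\max}^2$ (the GM surrogate's inflection points sit at $r^2 = \mu \bar{c}^2 / 3$):

```python
import numpy as np

def rho_gm_mu(r, mu, c=1.0):
    """GNC surrogate of the Geman-McClure cost: mu c^2 r^2 / (mu c^2 + r^2)."""
    return (mu * c**2 * r**2) / (mu * c**2 + r**2)

r = np.linspace(-5.0, 5.0, 1001)
# (ii) mu = 1 recovers the original Geman-McClure cost c^2 r^2 / (c^2 + r^2):
assert np.allclose(rho_gm_mu(r, 1.0), r**2 / (1.0 + r**2))
# (i) large mu approaches the (convex) quadratic r^2 on this range:
assert np.max(np.abs(rho_gm_mu(r, 1e6) - r**2)) < 1e-3
# convexity on [-5, 5] once mu c^2 >= 3 r_max^2 = 75: second differences >= 0
f = rho_gm_mu(r, 75.0)
assert np.all(np.diff(f, 2) >= -1e-12)
```

This is only a sanity check of the surrogate family, not the paper's implementation.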
We present an algorithm that combines GNC, Black-Rangarajan duality, and non-minimal solvers to solve problem (2) without requiring an initial guess.
We start by providing an overview of the proposed algorithm, then we delve into the technical details and tailor the approach to two specific robust cost functions (Section IV-B). Instead of optimizing (2) directly, we use GNC and, at each outer iteration, we fix $\mu$ and optimize:
$$\min_{x \in \mathcal{X}} \; \sum_{i=1}^{N} \rho_\mu\big(r(y_i, x)\big) \qquad (7)$$
Since non-minimal solvers cannot be used directly to solve (7), we use the Black-Rangarajan duality and rewrite (7) using the corresponding outlier process:
$$\min_{x \in \mathcal{X},\; w_i \in [0,1]} \; \sum_{i=1}^{N} \left[ w_i\, r^2(y_i, x) + \Phi_{\rho_\mu}(w_i) \right] \qquad (8)$$
As discussed in Section IV-B and [13], it is easy to compute the penalty terms $\Phi_{\rho_\mu}$ even for the surrogate function $\rho_\mu$.
Finally, we solve (8) by alternating optimization, where at each inner iteration we first optimize over $x$ (with fixed weights $w$), and then we optimize over $w$ (with fixed $x$). In particular, at inner iteration $t$, we perform the following:

1. Variable update: minimize (8) with respect to $x$ with fixed weights $w^{(t-1)}$; this is a weighted least squares problem that can be solved globally using a non-minimal solver:
$$x^{(t)} = \arg\min_{x \in \mathcal{X}} \; \sum_{i=1}^{N} w_i^{(t-1)}\, r^2(y_i, x) \qquad (9)$$

2. Weight update: minimize (8) with respect to $w$ with fixed $x^{(t)}$; calling $\hat{r}_i \doteq r(y_i, x^{(t)})$, this step can be solved in closed form:
$$w^{(t)} = \arg\min_{w_i \in [0,1]} \; \sum_{i=1}^{N} \left[ w_i\, \hat{r}_i^2 + \Phi_{\rho_\mu}(w_i) \right] \qquad (10)$$
The process is then repeated for changing values of $\mu$, where each change of $\mu$ increases the amount of non-convexity.
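To illustrate the structure of the algorithm on a toy problem, the sketch below (illustrative code, not the paper's implementation) runs GNC with the Geman-McClure surrogate for scalar location estimation; the "non-minimal solver" in the variable update is simply the weighted mean, the global minimizer of the weighted least squares step, and the $\mu$ schedule follows the recipe discussed in Section IV-B:

```python
import numpy as np

def gnc_gm_scalar(y, c=0.1, mu_factor=1.4):
    """GNC with the Geman-McClure surrogate for scalar location estimation.
    The variable update uses the weighted mean (the globally optimal
    weighted least squares solver); swap in any non-minimal solver for
    other problems. (Illustrative sketch.)"""
    w = np.ones_like(y)
    x = np.average(y, weights=w)              # first variable update
    r2 = (y - x) ** 2
    mu = max(1.0, 2.0 * r2.max() / c**2)      # start from a quasi-convex surrogate
    while True:
        w = (mu * c**2 / (r2 + mu * c**2)) ** 2   # weight update (closed form)
        x = np.average(y, weights=w)              # variable update
        r2 = (y - x) ** 2
        if mu <= 1.0:                             # original GM cost reached
            return x, w
        mu = max(1.0, mu / mu_factor)             # increase non-convexity

rng = np.random.default_rng(0)
y = np.concatenate([5.0 + 0.01 * rng.standard_normal(70),   # inliers near 5
                    rng.uniform(-50.0, 50.0, 30)])          # 30% outliers
x_hat, w_hat = gnc_gm_scalar(y)   # x_hat is close to 5; outliers get tiny weights
```

The data, inlier noise level, and schedule factor here are hypothetical; the point is the alternation of closed-form weight updates with a globally optimal weighted least squares solver, with no initial guess required.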
While the combination of GNC and Black-Rangarajan duality has been investigated in related works [13, 39], its applicability has been limited by the lack of global solvers for the variable update in (9). For instance, [39] focuses on a specific problem (point cloud registration) where (9) can be solved in closed form [4, 3], while [13] focuses on a Markov Random Field formulation for which global solvers and heuristics exist [52]. One of the main insights behind our approach is that modern non-minimal solvers (developed over the last 5 years) allow solving (9) globally for a broader class of problems, including spatial perception problems such as SLAM, mesh registration, and object localization from images.
Here we tailor the GNC algorithm to two cost functions, the Geman-McClure and the truncated least squares costs, provide expressions for the penalty term $\Phi_{\rho_\mu}$, and discuss how to solve the weight update step. The proofs of the following propositions are given in the supplementary material.
Consider the Geman-McClure function $\rho$ and its GNC surrogate $\rho_\mu$ with control parameter $\mu$, as introduced in Example 2. Then, the minimization of the surrogate function is equivalent to the outlier process (8) with penalty term chosen as:
$$\Phi_{\rho_\mu}(w_i) = \mu \bar{c}^2 \left( \sqrt{w_i} - 1 \right)^2 \qquad (11)$$
Moreover, defining the residual $\hat{r}_i \doteq r(y_i, x^{(t)})$, the weight update at iteration $t$ can be solved in closed form as:
$$w_i^{(t)} = \left( \frac{\mu \bar{c}^2}{\hat{r}_i^2 + \mu \bar{c}^2} \right)^2 \qquad (12)$$
Consider the truncated least squares function $\rho$ and its GNC surrogate $\rho_\mu$ with control parameter $\mu$, as introduced in Example 3. Then, the minimization of the surrogate function is equivalent to the outlier process (8) with penalty term:
$$\Phi_{\rho_\mu}(w_i) = \frac{\mu (1 - w_i)}{\mu + w_i}\, \bar{c}^2 \qquad (13)$$
Moreover, defining the residual $\hat{r}_i \doteq r(y_i, x^{(t)})$, the weight update at iteration $t$ can be solved in closed form as:
$$w_i^{(t)} = \begin{cases} 0 & \text{if } \hat{r}_i^2 \in \left[\frac{\mu+1}{\mu}\bar{c}^2, +\infty\right) \\ \frac{\bar{c}\sqrt{\mu(\mu+1)}}{\hat{r}_i} - \mu & \text{if } \hat{r}_i^2 \in \left[\frac{\mu}{\mu+1}\bar{c}^2, \frac{\mu+1}{\mu}\bar{c}^2\right] \\ 1 & \text{if } \hat{r}_i^2 \in \left[0, \frac{\mu}{\mu+1}\bar{c}^2\right] \end{cases} \qquad (14)$$
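In code, the TLS weight update (14) is a three-branch rule; a short sketch (hypothetical helper, with $\bar{c} = 1$) that keeps certain inliers at weight 1, rejects certain outliers at weight 0, and interpolates continuously in between:

```python
import numpy as np

def tls_weight_update(r2, mu, c=1.0):
    """Closed-form TLS weight update: squared residuals below
    mu/(mu+1) c^2 are kept (w = 1), those above (mu+1)/mu c^2 are
    rejected (w = 0), and in between the weight interpolates
    continuously between 1 and 0."""
    w = np.zeros_like(r2)
    lo = mu / (mu + 1.0) * c**2
    hi = (mu + 1.0) / mu * c**2
    w[r2 <= lo] = 1.0
    mid = (r2 > lo) & (r2 < hi)
    w[mid] = c * np.sqrt(mu * (mu + 1.0)) / np.sqrt(r2[mid]) - mu
    return w

# with mu = 1 and c = 1, the two thresholds are r^2 = 0.5 and r^2 = 2.0:
w = tls_weight_update(np.array([0.0, 0.4, 1.0, 1.8, 100.0]), mu=1.0)
```

One can check that the rule is monotonically non-increasing in the residual and continuous at both thresholds, which is what makes the alternation numerically well behaved.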
For GNC-GM, we start with a convex surrogate ($\mu$ large) and decrease $\mu$ until we recover the original cost ($\mu = 1$). In practice, calling $r_{\max}$ the maximum residual after the first variable update, we initialize $\mu = 2 r_{\max}^2 / \bar{c}^2$, update $\mu \leftarrow \mu / 1.4$ at each outer iteration, and stop when $\mu$ decreases below 1. For GNC-TLS, we start with a convex surrogate ($\mu$ close to 0) and increase $\mu$ until we recover the original cost ($\mu \to \infty$). In practice, we initialize $\mu = \bar{c}^2 / (2 r_{\max}^2 - \bar{c}^2)$, update $\mu \leftarrow 1.4\,\mu$ at each outer iteration, and stop when the sum of the weighted residuals converges. At each outer iteration we perform a single variable and weight update. For both robust functions, we set the parameter $\bar{c}$ to the maximum error expected for the inliers; see Remarks 1-2 in [16].
We showcase our robust non-minimal solvers in three spatial perception applications: point cloud and mesh registration (Section V-A), pose graph optimization (Section V-B), and shape alignment (Section V-C).
Setup. In generalized 3D registration, given a set of 3D points $p_i \in \mathbb{R}^3$ ($i = 1, \dots, N$) and a set of primitives $P_i$ ($i = 1, \dots, N$, being points, lines, and/or planes) with putative correspondences (potentially including outliers), the goal is to find the best rotation $R$ and translation $t$ that align the point cloud to the 3D primitives. The residual error is $r_i = \mathrm{dist}(P_i, R\,p_i + t)$, where $\mathrm{dist}$ denotes the distance between a primitive and a point after the transformation is applied. The formulation can also accommodate weighted distances to account for heterogeneous and anisotropic measurement noise. In the outlier-free case, Horn’s method [3] gives a closed-form solution when all the 3D primitives are points and the noise is isotropic, and [10] develops a certifiably optimal relaxation when the 3D primitives include points, lines, and planes and the noise is anisotropic. We now show that, using the GNC-GM and GNC-TLS solvers, we can efficiently robustify these non-minimal solvers. We benchmark our algorithms against state-of-the-art techniques in point cloud registration and mesh registration.
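For reference, when all primitives are points, the weighted variable update (9) admits the classical SVD-based closed-form solution in the style of Horn [3] and Arun [4]; below is our own illustrative implementation of that weighted solver, not the authors' code:

```python
import numpy as np

def weighted_procrustes(P, Q, w):
    """Globally optimal weighted point registration: find R, t
    minimizing sum_i w_i ||Q_i - R P_i - t||^2 via the SVD of the
    weighted cross-covariance. P, Q: (N, 3); w: (N,) nonnegative."""
    w = w / w.sum()
    p_bar, q_bar = w @ P, w @ Q               # weighted centroids
    Pc, Qc = P - p_bar, Q - q_bar
    H = (Pc * w[:, None]).T @ Qc              # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = q_bar - R @ p_bar
    return R, t

# sanity check on noiseless synthetic data:
rng = np.random.default_rng(1)
P = rng.standard_normal((50, 3))
th = 0.7
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R, t = weighted_procrustes(P, Q, np.ones(50))   # recovers R_true, t_true
```

Plugging this solver into the GNC alternation yields a robust point cloud registration method without any initial guess.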
Point Cloud Registration Results. We use the Bunny dataset from the Stanford 3D Scanning Repository [53]. We first scale the Bunny point cloud to be inside a unit cube, and then apply a random rotation and translation to get a transformed copy of the Bunny.
Correspondences are randomly chosen, and we add zero-mean Gaussian noise to the inliers, while the outliers are corrupted with randomly generated points as in [2]. We benchmark the performance of GNC-GM and GNC-TLS against (i) RANSAC with 200 maximum iterations using Horn’s 3-point minimal solver [3], (ii) ADAPT [29], and (iii) TEASER [2].

Fig. 2 reports the statistics for increasing outlier rates (all methods work well at low outlier rates). Fig. 2(a)-(b) show the rotation and translation errors for increasing outliers. RANSAC starts failing at 80%. GNC-GM, GNC-TLS, and ADAPT break at 90% outliers. TEASER is a specialized robust global solver and outperforms all other techniques; unfortunately, it currently does not scale to large problem instances and does not extend to other registration problems (e.g., mesh registration); TEASER does not have outer iterations, hence we omit it in Fig. 2(c). While GNC-GM, GNC-TLS, and ADAPT achieve similar errors, GNC-GM and GNC-TLS require a roughly constant number of iterations, while ADAPT’s iterations increase with the number of outliers, as shown in Fig. 2(c). Since these three techniques have a similar per-iteration complexity, GNC-GM and GNC-TLS are remarkably faster. We omit time comparisons for this application, since RANSAC is implemented in C++ and the other techniques in Matlab, making a fair comparison challenging. For point cloud registration, GNC-GM is essentially the same as Fast Global Registration [39], while below we show that the use of non-minimal solvers allows extending GNC to other spatial perception applications.
Mesh Registration Results. We apply GNC-GM and GNC-TLS to register a point cloud to a mesh, using the non-minimal solver [10]. We use the “car-2” mesh model from the PASCAL3D+ dataset [54]. We generate a point cloud from the mesh by randomly sampling points lying on the vertices, edges, and faces of the mesh model, then apply a random transformation and add zero-mean Gaussian noise. We establish 40 point-to-point, 80 point-to-line, and 80 point-to-plane correspondences, and create outliers by adding incorrect point-to-point/line/plane correspondences (Fig. 3(d)). We benchmark GNC-GM and GNC-TLS against (i) RANSAC with 10,000 maximum iterations using the 12-point minimal solver in [19] and (ii) ADAPT [29].
Fig. 3(a)-(c) show the errors and iterations for each technique. GNC-GM, GNC-TLS, and ADAPT are robust against 80% outliers, while RANSAC breaks at 50% outliers. The number of iterations of ADAPT grows linearly with the number of outliers, while GNC-GM’s and GNC-TLS’s iterations remain constant. Fig. 3(e) shows a successful registration with GNC-TLS and Fig. 3(f) shows an incorrect registration from RANSAC, both obtained in a test with 70% outliers.
Setup. PGO is one of the most common estimation engines for SLAM [1]. PGO estimates a set of poses $x_1, \dots, x_n \in \mathrm{SE}(3)$ (typically sampled along the robot trajectory) from pairwise relative pose measurements (potentially corrupted with outliers). The residual error is the distance between the expected relative pose and the measured one, weighted by known parameters describing the rotation and translation measurement noise distributions, where the rotation distance is measured with the Frobenius norm $\|\cdot\|_F$. SE-Sync [8] is a fast non-minimal solver for PGO.
PGO Results. We test GNC-GM and GNC-TLS on two standard benchmarking datasets, INTEL and CSAIL, described in [8]. In each dataset, we preserve the odometry measurements, but we spoil the loop closures with outliers. To create outliers, we sample random pairs of poses and add a random measurement between them. We benchmark GNC-GM and GNC-TLS against (i) g2o [47], (ii) dynamic covariance scaling (DCS) [47], (iii) pairwise consistent measurement set maximization (PCM) [33], and (iv) ADAPT [29]. DCS and PCM are fairly sensitive to the choice of parameters: we tested several parameter values for DCS and several thresholds for PCM, and, for the sake of clarity, we only report the choice of parameters ensuring the best performance.
Fig. 4(a) shows the average trajectory errors for the INTEL dataset. g2o is not a robust solver and performs poorly across the spectrum. DCS and PCM are specialized robust local solvers, but their errors gradually increase with the percentage of outliers. GNC-GM, GNC-TLS, and ADAPT are insensitive to outliers over a wide range of outlier rates and preserve an acceptable performance at even higher rates; GNC-TLS dominates the others. Fig. 4(b) reports the results on the CSAIL dataset. Also in this case, GNC-TLS dominates the other techniques and is robust to 90% outliers. Fig. 4(c) reports the CPU times required by the techniques to produce a solution on CSAIL; in this case, all techniques are implemented in C++.
Setup. In shape alignment, given 2D features $z_i \in \mathbb{R}^2$ ($i = 1, \dots, N$) on a single image and 3D points $S_i \in \mathbb{R}^3$ ($i = 1, \dots, N$) of an object with putative correspondences (potentially including outliers), the goal is to find the best scale $s > 0$, rotation $R \in \mathrm{SO}(3)$, and translation $t \in \mathbb{R}^2$ of the object that project the 3D shape onto the 2D image under weak perspective projection. The residual function is $r_i = \| z_i - s \Pi R S_i - t \|$, where $\Pi$ is the weak perspective projection matrix (equal to the first two rows of a $3 \times 3$ identity matrix). Note that $t$ is a 2D translation, but under weak perspective projection one can extrapolate a 3D translation (i.e., recover the distance of the camera to the object) using the scale $s$.
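As a concrete reading of this residual, a short sketch (our own, with hypothetical variable names) that evaluates $r_i = \|z_i - s\,\Pi R S_i - t\|$ and confirms it vanishes on exact weak-perspective projections:

```python
import numpy as np

def sa_residuals(z, S, s, R, t):
    """Shape-alignment residuals r_i = ||z_i - s*Pi*R*S_i - t|| under
    weak perspective projection; Pi is the first two rows of eye(3).
    z: (N, 2) 2D features, S: (N, 3) 3D model points, t: 2D translation."""
    Pi = np.eye(3)[:2]
    return np.linalg.norm(z - (s * (S @ R.T) @ Pi.T + t), axis=1)

# residuals vanish when the 2D features are exact projections:
S = np.random.default_rng(2).standard_normal((20, 3))
s, R, t = 0.5, np.eye(3), np.array([0.1, -0.2])
z = s * S[:, :2] + t           # with R = I, Pi*R*S_i is just S_i's first two coords
r = sa_residuals(z, S, s, R, t)   # all (near) zero
```

In the robust formulation, these residuals feed directly into the weight updates (12) or (14).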
A Non-minimal Solver for Shape Alignment. The literature is missing a global solver for shape alignment, even in the outlier-free case. Therefore, we start by proposing a non-minimal solver for (weighted) outlier-free shape alignment:
$$\min_{s > 0,\; R \in \mathrm{SO}(3),\; t \in \mathbb{R}^2} \; \sum_{i=1}^{N} w_i \left\| z_i - s \Pi R S_i - t \right\|^2 \qquad (15)$$
where $w_i \geq 0$ are constant weights. The next proposition states that the global minimizer of problem (15) can be obtained by solving a quaternion-based unconstrained optimization.
Define the non-unit quaternion $q \doteq \sqrt{s}\,\hat{q}$, where $\hat{q}$ is the unit quaternion corresponding to $R$ and $s$ is the unknown scale. Moreover, define $\tilde{q}$ as the vector of degree-2 monomials in $q$. Then the globally optimal solutions $(s^\star, R^\star, t^\star)$ of problem (15) can be obtained from the solution of the following optimization:

$$\min_{q \in \mathbb{R}^4} \; \tilde{q}^\mathsf{T} A\, \tilde{q} + b^\mathsf{T} \tilde{q} + c \qquad (16)$$
where the matrix $A$, vector $b$, and scalar $c$ are known quantities, whose expressions are given in the supplementary material.
Problem (16) requires minimizing a degree-4 polynomial in 4 variables; to this end, we apply SOS relaxation and relax (16) to the following SOS optimization:
$$\max_{\gamma \in \mathbb{R}} \; \gamma \quad \text{subject to} \quad f(q) - \gamma \;\; \text{is a sum of squares} \qquad (17)$$

where $f(q)$ denotes the quartic objective of (16),
which can be readily converted to an SDP and solved with certifiable optimality [5, 6]. We use the GloptiPoly 3 [56] package in Matlab to solve problem (17) and found that empirically the relaxation is always exact. Solving the SDP takes about 80 ms on a desktop computer.
Shape Alignment Results. We test the performance of GNC-GM and GNC-TLS, together with our SOS solver, on the FG3DCar dataset [55], where we use the ground-truth 3D shape model as the points $S_i$ and the ground-truth 2D landmarks as the features $z_i$. To generate outliers, we replace correspondences between 3D points and 2D features with incorrect ones. We benchmark GNC-GM and GNC-TLS against (i) Zhou’s method [17], (ii) RANSAC with 100 maximum iterations using a 4-point minimal solver (we use our SOS solver as the minimal solver), and (iii) ADAPT [29].
Fig. 5(a)-(c) show translation errors, rotation errors, and the number of iterations for all compared techniques. Statistics are computed over all 600 images in the FG3DCar dataset. The performance of Zhou’s method degrades quickly with increasing outliers. RANSAC breaks at 60% outliers. GNC-GM, GNC-TLS, and ADAPT are robust against 70% outliers, while GNC-GM and GNC-TLS require a roughly constant number of iterations. Qualitative results for GNC-GM, RANSAC, and Zhou’s approach are given in Fig. 5(d)-(f), respectively.
We proposed a general-purpose approach for robust estimation that leverages modern non-minimal solvers. The approach allows extending the applicability of Black-Rangarajan duality and Graduated Non-convexity (GNC) to several spatial perception problems, ranging from mesh registration and shape alignment to pose graph optimization. We believe the proposed approach can be a valid replacement for RANSAC. While RANSAC requires a minimal solver, our GNC approach requires a non-minimal solver. Our approach is deterministic, is resilient to a large number of outliers, and is significantly faster than many specialized solvers. As a further contribution, we presented a novel non-minimal solver for shape alignment.
Supplementary Material
The outlier process (11) is derived by following the Black-Rangarajan procedure in Fig. 10 of [13]. Therefore, here we prove the weight update rule in eq. (12). To this end, we derive the gradient of the objective function with outlier process in eq. (10) with respect to $w_i$:
$$\frac{\partial}{\partial w_i} \left[ w_i\, \hat{r}_i^2 + \mu \bar{c}^2 \left( \sqrt{w_i} - 1 \right)^2 \right] = \hat{r}_i^2 + \mu \bar{c}^2 \left( 1 - \frac{1}{\sqrt{w_i}} \right) \qquad (A18)$$
From eq. (A18) we observe that the gradient tends to $-\infty$ as $w_i \to 0^+$, and equals $\hat{r}_i^2 \geq 0$ at $w_i = 1$. These facts, combined with the monotonicity of the gradient w.r.t. $w_i$, ensure that there exists a unique $w_i \in (0, 1]$ such that the gradient vanishes:
$$\hat{r}_i^2 + \mu \bar{c}^2 \left( 1 - \frac{1}{\sqrt{w_i}} \right) = 0 \;\; \Longleftrightarrow \;\; w_i = \left( \frac{\mu \bar{c}^2}{\hat{r}_i^2 + \mu \bar{c}^2} \right)^2 \qquad (A19)$$
This vanishing point is the global minimizer of (10).
The outlier process (13) is derived by following the Black-Rangarajan procedure in Fig. 10 of [13]. Therefore, here we prove the weight update rule in eq. (14). To this end, we derive the gradient of the objective function with outlier process in eq. (10) with respect to $w_i$:
$$\frac{\partial}{\partial w_i} \left[ w_i\, \hat{r}_i^2 + \frac{\mu (1 - w_i)}{\mu + w_i}\, \bar{c}^2 \right] = \hat{r}_i^2 - \frac{\mu (\mu + 1)\, \bar{c}^2}{(\mu + w_i)^2} \qquad (A20)$$
From eq. (A20) we observe that the gradient is nonnegative over $[0,1]$ (hence the minimizer is $w_i = 0$) if $\hat{r}_i^2 \geq \frac{\mu+1}{\mu}\bar{c}^2$, and nonpositive over $[0,1]$ (hence the minimizer is $w_i = 1$) if $\hat{r}_i^2 \leq \frac{\mu}{\mu+1}\bar{c}^2$. In the remaining case, the global minimizer is obtained by setting the gradient to zero, leading to:
$$\hat{r}_i^2 = \frac{\mu (\mu + 1)\, \bar{c}^2}{(\mu + w_i)^2} \;\; \Longleftrightarrow \;\; w_i = \frac{\bar{c}\sqrt{\mu(\mu+1)}}{\hat{r}_i} - \mu \qquad (A21)$$
In the main document, we claim that the optimal solutions $(s^\star, R^\star, t^\star)$ of the shape alignment problem
$$\min_{s > 0,\; R \in \mathrm{SO}(3),\; t \in \mathbb{R}^2} \; \sum_{i=1}^{N} w_i \left\| z_i - s \Pi R S_i - t \right\|^2 \qquad (A22)$$
can be obtained from the optimal solution of the quaternion-based unconstrained optimization:
$$\min_{q \in \mathbb{R}^4} \; \tilde{q}^\mathsf{T} A\, \tilde{q} + b^\mathsf{T} \tilde{q} + c \qquad (A23)$$
Here we prove this proposition, provide formulas to compute $A$, $b$, and $c$, and show how to construct $(s^\star, R^\star, t^\star)$ from $q^\star$. Particularly, we complete the proof in two steps: (i) we marginalize out the translation $t$ and convert problem (A22) into an equivalent translation-free problem; (ii) we reparametrize the rotation matrix $R$ using a unit quaternion and arrive at the unconstrained optimization in (A23). The two steps follow in more detail below:
(i) Translation-free Shape Alignment. To this end, we develop the objective function of (A22):
$$\sum_{i=1}^{N} w_i \left\| z_i - s \Pi R S_i - t \right\|^2 = \sum_{i=1}^{N} w_i \left( \| z_i - s \Pi R S_i \|^2 - 2\, t^\mathsf{T} (z_i - s \Pi R S_i) + \| t \|^2 \right) \qquad (A24)$$
whose partial derivative w.r.t. $t$ is:
$$\frac{\partial}{\partial t} \left[ \sum_{i=1}^{N} w_i \left\| z_i - s \Pi R S_i - t \right\|^2 \right] = -2 \sum_{i=1}^{N} w_i \left( z_i - s \Pi R S_i - t \right) \qquad (A25)$$
By setting the derivative (A25) to zero, we derive $t$ in closed form as:
$$t = \bar{z} - s \Pi R \bar{S} \qquad (A26)$$
where

$$\bar{z} \doteq \frac{\sum_{i=1}^{N} w_i z_i}{\sum_{i=1}^{N} w_i}, \qquad \bar{S} \doteq \frac{\sum_{i=1}^{N} w_i S_i}{\sum_{i=1}^{N} w_i} \qquad (A27)$$
By inserting eq. (A26) back into the original problem (A22), we obtain an optimization involving only the scale and rotation:
$$\min_{s > 0,\; R \in \mathrm{SO}(3)} \; \sum_{i=1}^{N} w_i \left\| \tilde{z}_i - s \Pi R \tilde{S}_i \right\|^2 \qquad (A28)$$
where:

$$\tilde{z}_i \doteq z_i - \bar{z}, \qquad \tilde{S}_i \doteq S_i - \bar{S} \qquad (A29)$$
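The closed-form translation (A26) can be sanity-checked numerically: for fixed $s$ and $R$, the weighted least squares cost in $t$ is quadratic, and its minimizer is the weighted-centroid expression. A small check (our own sketch, on random data):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 15
S = rng.standard_normal((N, 3))      # 3D model points
z = rng.standard_normal((N, 2))      # 2D features
w = rng.uniform(0.1, 1.0, N)         # positive weights
s, R = 0.8, np.eye(3)
Pi = np.eye(3)[:2]                   # weak perspective projection

# closed form: t* = z_bar - s * Pi R S_bar with weighted centroids
z_bar = w @ z / w.sum()
S_bar = w @ S / w.sum()
t_star = z_bar - s * Pi @ R @ S_bar

# the weighted LS cost in t is quadratic, so t_star should beat any
# small perturbation in every direction:
proj = s * (S @ R.T) @ Pi.T
cost = lambda t: (w * np.linalg.norm(z - proj - t, axis=1) ** 2).sum()
```

Since the cost is strictly convex in $t$, perturbing $t^\star$ in any direction can only increase it.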
(ii) Quaternion-based Translation-free Problem. We now show that the translation-free shape alignment problem (A28) can be converted to the quaternion-based unconstrained shape alignment problem (A23). To this end, we denote by $\hat{q}$ the unit quaternion corresponding to the rotation matrix $R$. It can be verified by inspection that $R$ is a linear function of $\tilde{q}(\hat{q})$, the vector of all degree-2 monomials of $\hat{q}$:
$$R = \mathrm{mat}\!\left( M\, \tilde{q}(\hat{q}) \right) \qquad (A30)$$
where $M$ is a constant $9 \times 10$ matrix, and $\mathrm{mat}(\cdot)$ converts a vector to a matrix of proper dimension (in this case a $3 \times 3$ matrix). The entries of $M$ can be read off, column by column, from the standard expression of $R$ in terms of a unit quaternion $\hat{q} = (q_1, q_2, q_3, q_4)$ (scalar part last):

$$R = \begin{bmatrix} q_1^2 - q_2^2 - q_3^2 + q_4^2 & 2(q_1 q_2 - q_3 q_4) & 2(q_1 q_3 + q_2 q_4) \\ 2(q_1 q_2 + q_3 q_4) & -q_1^2 + q_2^2 - q_3^2 + q_4^2 & 2(q_2 q_3 - q_1 q_4) \\ 2(q_1 q_3 - q_2 q_4) & 2(q_2 q_3 + q_1 q_4) & -q_1^2 - q_2^2 + q_3^2 + q_4^2 \end{bmatrix} \qquad (A31)$$
Using the notation in (A30), noting that $\Pi\, \mathrm{mat}(M \tilde{q})\, \tilde{S}_i = (\tilde{S}_i^\mathsf{T} \otimes \Pi) M \tilde{q} \doteq G_i \tilde{q}$ (by the vectorization identity), and developing the squares in the objective function of problem (A28):

$$\sum_{i=1}^{N} w_i \left\| \tilde{z}_i - s \Pi R \tilde{S}_i \right\|^2 = s^2\, \tilde{q}(\hat{q})^\mathsf{T} A\, \tilde{q}(\hat{q}) + s\, b^\mathsf{T} \tilde{q}(\hat{q}) + c \qquad (A32)$$

where $A$, $b$, and $c$ can be computed by:

$$A = \sum_{i=1}^{N} w_i\, G_i^\mathsf{T} G_i, \qquad b = -2 \sum_{i=1}^{N} w_i\, G_i^\mathsf{T} \tilde{z}_i, \qquad c = \sum_{i=1}^{N} w_i\, \tilde{z}_i^\mathsf{T} \tilde{z}_i \qquad (A33)$$
Now, since $\|\hat{q}\| = 1$, we define $q \doteq \sqrt{s}\,\hat{q}$, and it is straightforward to verify that $\tilde{q}(q) = s\, \tilde{q}(\hat{q})$ (each degree-2 monomial scales by $s$); therefore, the objective function in (A32) is equal to the objective function of problem (A23). In addition, because $\|q\|^2 = s$ can take any positive value, there is no constraint on $q$: the unit-norm constraint of the quaternion disappears.
After we solve problem (A23), we can recover the optimal solution of the original constrained optimization (A22) from the optimal solution $q^\star$ of problem (A23) using the following formulas:
$$s^\star = \| q^\star \|^2, \qquad R^\star = \mathrm{mat}\!\left( M\, \tilde{q}\!\left( q^\star / \| q^\star \| \right) \right), \qquad t^\star = \bar{z} - s^\star \Pi R^\star \bar{S} \qquad (A34)$$
In summary, to solve the outlier-free (weighted) shape alignment problem (A22), we first calculate $A$, $b$, and $c$ using eq. (A33), solve the unconstrained optimization (A23) (using the SOS relaxation discussed in the main document), and then recover the optimal scale, rotation, and translation using eq. (A34).
J. B. Lasserre, “Global optimization with polynomials and the problem of moments,” SIAM J. Optim., vol. 11, no. 3, pp. 796–817, 2001.