1. Introduction
The problem of point-cloud registration comes up in computer vision and graphics [52, 59, 65], and in distributed approaches to molecular conformation [20, 17] and sensor network localization [16, 9]. The registration problem in question is one of determining the coordinates of a point cloud $\mathcal{P}$ from the knowledge of the (possibly noisy) coordinates of smaller point-cloud subsets $\mathcal{P}_1, \ldots, \mathcal{P}_M$ (called patches) that are derived from $\mathcal{P}$ through some general transformation. In certain applications [45, 59, 40], one is often interested in finding the optimal transforms (one for each patch) that consistently align $\mathcal{P}_1, \ldots, \mathcal{P}_M$. This can be seen as a subproblem in the determination of the coordinates of $\mathcal{P}$ [16, 51].
In this paper, we consider the problem of rigid registration in which the points within a given patch $\mathcal{P}_i$ are (ideally) obtained from $\mathcal{P}$ through an unknown rigid transform. Moreover, we assume that the correspondence between the local patches and the original point cloud is known, that is, we know beforehand which points from $\mathcal{P}$ are contained in a given $\mathcal{P}_i$. In fact, one has control on the correspondence in distributed approaches to molecular conformation [17] and sensor network localization [9, 69, 16]. While this correspondence is not directly available for certain graphics and vision problems, such as multiview registration [49], it is in principle possible to estimate the correspondence by aligning pairs of patches, e.g., using the ICP (Iterative Closest Point) algorithm [6, 51, 36].
1.1. Two-Patch Registration
The particular problem of two-patch registration has been well-studied [21, 34, 2]. In the noiseless setting, we are given two point clouds $x_1, \ldots, x_n$ and $y_1, \ldots, y_n$ in $\mathbb{R}^d$, where the latter is obtained through some rigid transform of the former. Namely,
(1) $y_i = O x_i + t \qquad (1 \le i \le n),$
where $O$ is some unknown $d \times d$ orthogonal matrix (that satisfies $O^T O = O O^T = I_d$) and $t \in \mathbb{R}^d$ is some unknown translation.
The problem is to infer $O$ and $t$ from the above equations. To uniquely determine $O$ and $t$, one must have at least $d+1$ nondegenerate points (by nondegenerate, we mean that the affine span of the points is $d$-dimensional). In this case, $O$ can be determined simply by fixing the first equation in (1) and subtracting (to eliminate $t$) any of the remaining equations from it. Say, we subtract the next $d$ equations:
$[\, y_2 - y_1 \;\cdots\; y_{d+1} - y_1 \,] = O \,[\, x_2 - x_1 \;\cdots\; x_{d+1} - x_1 \,].$
By the nondegeneracy assumption, the matrix on the right of $O$ is invertible, and this gives us $O$. Plugging $O$ into any of the equations in (1), we get $t$.
In practical settings, (1) would hold only approximately, say, due to noise or model imperfections. A particular approach then would be to determine the optimal $O$ and $t$ by considering the following least-squares program:
(2) $\min_{O \in \mathbb{O}(d),\, t \in \mathbb{R}^d} \; \sum_{i=1}^{n} \| y_i - O x_i - t \|^2.$
Note that the problem looks difficult a priori since the domain of optimization is $\mathbb{O}(d) \times \mathbb{R}^d$, which is nonconvex. Remarkably, the global minimizer of this nonconvex problem can be found exactly, and has a simple closed-form expression [19, 39, 32, 21, 34, 2]. More precisely, the optimal $O$ is given by $O^\star = U V^T$, where $U \Sigma V^T$ is the singular value decomposition (SVD) of
$\sum_{i=1}^{n} (y_i - \bar{y})(x_i - \bar{x})^T,$
in which $\bar{x}$ and $\bar{y}$ are the centroids of the respective point clouds. The optimal translation is $t^\star = \bar{y} - O^\star \bar{x}$.
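The closed-form solution above can be made concrete with a short numerical sketch (NumPy; the function name and interface are ours, purely for illustration):

```python
import numpy as np

def register_two_patches(x, y):
    """Closed-form least-squares rigid registration (sketch).

    x, y : (d, n) arrays of corresponding points, with y ~ O x + t.
    Returns the optimal orthogonal O and translation t for (2).
    """
    x_bar = x.mean(axis=1, keepdims=True)   # centroid of x
    y_bar = y.mean(axis=1, keepdims=True)   # centroid of y
    # Cross-covariance of the centered point clouds.
    C = (y - y_bar) @ (x - x_bar).T
    U, _, Vt = np.linalg.svd(C)
    O = U @ Vt                              # optimal orthogonal transform
    t = y_bar - O @ x_bar                   # optimal translation
    return O, t.ravel()
```

On noiseless, nondegenerate data the recovered pair coincides with the generating transform; with noise, it is the least-squares optimum of (2).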
The fact that two-patch registration has a closed-form solution is used in the so-called incremental (sequential) approaches for registering multiple patches [6]. The most well-known method is the ICP algorithm [51] (note that ICP uses other heuristics and refinements besides registering corresponding points). Roughly, the idea in sequential registration is to register two overlapping patches at a time, and then integrate the estimated pairwise transforms by some means. The integration can be achieved either locally (on a patch-by-patch basis), or using global cycle-based methods such as synchronization [52, 35, 53, 59, 63]. More recently, it was demonstrated that, by locally registering overlapping patches and then integrating the pairwise transforms using synchronization, one can design efficient and robust methods for distributed sensor network localization [16] and molecular conformation [17]. Note that, while the registration phase is local, the synchronization method integrates the local transforms in a globally consistent manner. This makes it robust to the error propagation that often plagues local integration methods [35, 63].
1.2. Multi-Patch Registration
To describe the multi-patch registration problem, we first introduce some notation. Suppose $x_1, \ldots, x_N$ are the unknown global coordinates of a point cloud in $\mathbb{R}^d$. The point cloud is divided into patches $\mathcal{P}_1, \ldots, \mathcal{P}_M$, where each $\mathcal{P}_i$ is a subset of $\{x_1, \ldots, x_N\}$. The patches are in general overlapping, whereby a given point can belong to multiple patches. We represent this membership using an undirected bipartite graph $\Gamma = (\mathcal{V}, \mathcal{E})$. The set of vertices $\{1, \ldots, N\}$ represents the point cloud, while $\{1, \ldots, M\}$ represents the patches. The edge set $\mathcal{E}$ connects point vertices and patch vertices, and is given by the requirement that $(k, i) \in \mathcal{E}$ if and only if $x_k \in \mathcal{P}_i$. We will henceforth refer to $\Gamma$ as the membership graph.
In this paper, we assume that the local coordinates of a given patch can (ideally) be related to the global coordinates through a single rigid transform, that is, through some rotation, reflection, and translation. More precisely, with every patch $\mathcal{P}_i$ we associate some (unknown) orthogonal transform $O_i$ and translation $t_i$. If point $x_k$ belongs to patch $\mathcal{P}_i$, then its representation $x_{k,i}$ in $\mathcal{P}_i$ is given by (cf. (1) and Figure 1)
(3) $x_{k,i} = O_i^{T} (x_k - t_i) \qquad (k,i) \in \mathcal{E}.$
Alternatively, if we fix a particular patch $\mathcal{P}_i$, then for every point $x_k$ belonging to that patch,
(4) $x_k = O_i x_{k,i} + t_i.$
In particular, a given point can belong to multiple patches, and will have a different representation in the coordinate system of each patch.
The premise of this paper is that we are given the membership graph $\Gamma$ and the local coordinates (referred to as measurements), namely
(5) $\{\, x_{k,i} : (k,i) \in \mathcal{E} \,\},$
and the goal is to recover the coordinates $x_1, \ldots, x_N$, and in the process the unknown rigid transforms $(O_i, t_i)$, from (5). Note that the global coordinates are determined up to a global rotation, reflection, and translation. We say that two point clouds (also referred to as configurations) are congruent if one is obtained through a rigid transformation of the other. We will always identify two congruent configurations as being a single configuration.
Under appropriate nondegeneracy assumptions on the measurements, one task would be to specify appropriate conditions on $\Gamma$ under which the global coordinates can be uniquely determined. Intuitively, it is clear that the patches must have enough points in common for the registration problem to have a unique solution. For example, it is clear that the global coordinates cannot be uniquely recovered if $\Gamma$ is disconnected.
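As a minimal illustration of this connectivity requirement, the membership graph can be built and checked with a few lines of plain Python (the data layout, a list of patches given as lists of point indices, is our own choice for illustration):

```python
from collections import defaultdict, deque

def membership_graph_connected(patches, N):
    """Check connectivity of the bipartite membership graph (sketch).

    Point vertices are 0..N-1 and patch vertices are N..N+M-1;
    an edge joins point k and patch i iff k is in patches[i].
    """
    M = len(patches)
    adj = defaultdict(list)
    for i, patch in enumerate(patches):
        for k in patch:
            adj[k].append(N + i)       # point k -- patch i
            adj[N + i].append(k)
    # Breadth-first search from vertex 0.
    seen, queue = {0}, deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == N + M
```

If this check fails, no algorithm can pin down the relative placement of the disconnected components.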
In practical applications, we are confronted with noisy settings where (4) holds only approximately. In such cases, we would like to determine the global coordinates and the rigid transforms such that the discrepancy in (4) is minimal. In particular, we consider the following quadratic loss:
(6) $\phi = \sum_{(k,i) \in \mathcal{E}} \| x_k - O_i x_{k,i} - t_i \|^2,$
where $\| \cdot \|$ is the Euclidean norm on $\mathbb{R}^d$. The optimization problem is to minimize $\phi$ with respect to the following variables: the global coordinates $x_1, \ldots, x_N \in \mathbb{R}^d$, the orthogonal transforms $O_1, \ldots, O_M \in \mathbb{O}(d)$, and the translations $t_1, \ldots, t_M \in \mathbb{R}^d$.
The input to the problem is the set of measurements in (5). Note that our ultimate goal is to determine $x_1, \ldots, x_N$; the rigid transforms can be seen as latent variables.
The problem of multi-patch registration is intrinsically nonconvex since one is required to optimize over the nonconvex domain of orthogonal transforms. Different ideas from the optimization literature have been deployed to attack this problem, including Lagrangian optimization and projection methods. In the Lagrangian setup, the orthogonality constraints are incorporated into the objective; in the projection method, the constraints are enforced after every step of the optimization [49]. Following the observation that the registration problem can be viewed as an optimization on the Grassmannian and Stiefel manifolds, researchers have proposed algorithms using ideas from the theory and practice of manifold optimization [40]. A detailed review of these methods is beyond the scope of this paper, and instead we refer the interested reader to these excellent reviews [18, 1]. Manifold-based methods are, however, local in nature, and are not guaranteed to find the global minimizer. Moreover, it is rather difficult to certify the noise stability of such methods.
1.3. Contributions
The main contributions of the paper can be organized into the following categories.

Algorithm: We demonstrate how the translations can be factored out of (6), whereby the least-squares problem can be reduced to the following optimization:
(7) $\min_{O_1, \ldots, O_M \in \mathbb{O}(d)} \; \sum_{i,j=1}^{M} \mathrm{Tr}\big( O_i C_{ij} O_j^T \big),$
where $C_{ij}$ are the $(i,j)$-th $d \times d$ subblocks of some positive semidefinite block matrix $C$ of size $dM \times dM$. Given the solution of (7), the desired global coordinates can simply be obtained by solving a linear system. It is virtually impossible to find the global optimum of (7) for large-scale problems, since this involves the optimization of a quadratic cost on a huge nonconvex parameter space. In fact, the simplest case $d = 1$ with $C$ as the Laplacian matrix of a graph corresponds to the MAXCUT problem, which is known to be NP-hard. The main observation of this paper is that (7) can instead be relaxed into a convex program, namely a semidefinite program, whose global optimum can be approximated to any finite precision in polynomial time using standard off-the-shelf solvers. This yields a tractable method for global registration described in Algorithm 2. The corresponding algorithm derived from the spectral relaxation of (7) that was already considered in [40] is described in Algorithm 1 for reference.

Exact Recovery: We present conditions on the coefficient matrix $C$ in (7) for exact recovery using Algorithm 2. In particular, we show that the exact recovery questions about Algorithm 2 can be mapped into rigidity-theoretic questions that have already been investigated earlier (the authors thank the anonymous referees for pointing this out) in [68, 25]. The contribution of this section is the connection made between the matrix $C$ in (7) and various notions of rigidity considered in these papers. We also present an efficient randomized rank test for $C$ that can be used to certify exact recovery (motivated by the work in [31, 26, 54]).

Stability Analysis: We study the stability of Algorithms 1 and 2 for the noise model in which the patch coordinates are perturbed using noise of bounded size (note that the stability of the spectral relaxation was not investigated in [40]). Our main result here is Theorem 13, which states that, if $C$ satisfies a particular rank condition, then the registration error for Algorithm 2 is within a constant factor of the noise level. To the best of our knowledge, there is no existing algorithm for multi-patch registration that comes with a similar stability guarantee.

Empirical Results: We present numerical results on simulated data to numerically verify the exact recovery and noise stability properties of Algorithms 1 and 2. Our main empirical findings are the following:
(1) The semidefinite relaxation performs significantly better than spectral and manifoldbased optimization (say, with the spectral solution as initialization) in terms of the reconstruction quality (cf. first plot in Figure 7).
(2) Up to a certain noise level, we are actually able to solve the original nonconvex problem using the semidefinite relaxation (cf. second plot in Figure 7).
1.4. Broader Context and Related Work
The objective (6) is a straightforward extension of the two-patch objective [19, 21, 34, 2]. In fact, this objective was earlier considered by Zhang et al. for distributed sensor localization [69]. The present work is also closely tied to the work of Cucuringu et al. on distributed localization [16, 17], where a similar objective is implicitly optimized. The common theme in these works is that some form of optimization is used to globally register the patches, once their local coordinates have been determined by some means. There are, however, some fundamental differences between the various algorithms used to actually perform the optimization. Zhang et al. [69] use alternating least-squares to iteratively optimize over the global coordinates and the transforms, which to the best of our knowledge has no convergence guarantee. On the other hand, Cucuringu et al. [16, 17] first optimize over the orthogonal transforms (using synchronization [53]), and then solve for the translations (in effect, the global coordinates) using least-squares fitting. In this work, we combine these different ideas into a single framework. While our objective is similar to the one used in [69], we jointly optimize the rigid transforms and positions. In particular, the algorithms considered in Section 2 avoid the convergence issues associated with alternating least-squares in [69], and are able to register patch systems that cannot be registered using the approach in [16, 17].
Another closely related work is the paper by Krishnan et al. on global registration [40], where the optimal transforms (rotations to be specific) are computed by extending the objective in (2) to the multi-patch case. The subsequent mathematical formulation bears a strong resemblance to ours and, in fact, leads to a subproblem similar to (7). Krishnan et al. [40] propose the use of manifold optimization to solve (7), where the manifold is the product manifold of rotations. However, as mentioned earlier, manifold methods generally do not offer guarantees on convergence (to the global minimum) and stability. Moreover, the manifold in (7) is not connected. Therefore, any local method will fail to attain the global optimum of (7) if the initial guess is on the wrong component of the manifold.
It is exactly at this point that we depart from [40], namely, we propose to relax (7) into a tractable semidefinite program (SDP). This was motivated by a long line of work on the use of SDP relaxations for nonconvex (particularly NP-hard) problems. See, for example, [43, 23, 66, 47, 12, 41], and these reviews [60, 48, 70]. Note that for $d = 1$, (7) is a quadratic Boolean optimization, similar to the MAXCUT problem. An SDP-based algorithm with randomized rounding for solving MAXCUT was proposed in the seminal work of Goemans and Williamson [23]. The semidefinite relaxation that we consider in Section 2 is motivated by this work. In connection with the present work, we note that provably stable SDP algorithms have been considered for low-rank matrix completion [12], phase retrieval [13, 62], and graph localization [37].
We note that a special case of the registration problem considered here is the so-called generalized Procrustes problem [28]. Within the point-patch framework just introduced, the goal in Procrustes analysis is to find $O_1, \ldots, O_M \in \mathbb{O}(d)$ that minimize
(8) $\sum_{i < j} \; \sum_{k \,:\, (k,i),\,(k,j) \in \mathcal{E}} \| O_i x_{k,i} - O_j x_{k,j} \|^2.$
In other words, the goal is to achieve the best possible alignment of the patches through orthogonal transforms. This can be seen as an instance of the global registration problem without the translations ($t_i = 0$), and in which $\Gamma$ is complete. It is not difficult to see that (8) can be reduced to (7). On the other hand, using the analysis in Section 2, it can be shown that (6) is equivalent to (8) in this case. While the Procrustes problem is known to be NP-hard, several polynomial-time approximations with guarantees have been proposed. In particular, SDP relaxations of (8) have been considered in [47, 55, 46], and more recently in [3]. We use the relaxation of (7) considered in [3] for reasons to be made precise in Section 2.
1.5. Notations
We use upper case letters such as $A$ to denote matrices, and lower case letters such as $x$ for vectors. We use $I_d$ to denote the identity matrix of size $d$. We denote the diagonal matrix of size $n$ with diagonal elements $\lambda_1, \ldots, \lambda_n$ as $\mathrm{diag}(\lambda_1, \ldots, \lambda_n)$. We will frequently use block matrices built from smaller matrices, typically of size $d \times d$, where $d$ is the dimension of the ambient space. For some block matrix $A$, we will use $A_{ij}$ to denote its $(i,j)$-th block, and $A(k,l)$ to denote its $(k,l)$-th entry. In particular, if each block has size $d \times d$, then $A_{ij}(k,l) = A((i-1)d + k, (j-1)d + l)$. We use $A \succeq 0$ to mean that $A$ is positive semidefinite, that is, $x^T A x \ge 0$ for all $x$. We use $\mathbb{O}(d)$ to denote the group of orthogonal transforms (matrices) acting on $\mathbb{R}^d$, and $\mathbb{O}(d)^M$ to denote the $M$-fold product of $\mathbb{O}(d)$ with itself. We will also conveniently identify the $d \times dM$ matrix $O = [\, O_1 \;\cdots\; O_M \,]$ with an element of $\mathbb{O}(d)^M$ where each $O_i \in \mathbb{O}(d)$. We use $\| x \|$ to denote the Euclidean norm of $x$ (the ambient dimension will usually be clear from the context, and will be pointed out if this is not so). We denote the trace of a square matrix $A$ by $\mathrm{Tr}(A)$. The Frobenius and spectral norms are defined as
$\| A \|_F = \big( \mathrm{Tr}(A^T A) \big)^{1/2}, \qquad \| A \|_2 = \max_{\| x \| = 1} \| A x \|.$
The Kronecker product between matrices $A$ and $B$ is denoted by $A \otimes B$ [24]. The all-ones vector is denoted by $\mathbf{1}$ (the dimension will be obvious from the context), and $e_i$ denotes the vector of all zeros except for a $1$ at the $i$-th position.
1.6. Organization
In the next section, we present the semidefinite relaxation of the leastsquares registration problem described in the introduction. For reference, we also present the closely related spectral relaxation that was already considered in [40, 68, 25]. Exact recovery questions are addressed in section 3, followed by a randomized test in section 4. Stability analysis for the spectral and semidefinite relaxations are presented in section 5. Numerical simulations can be found in section 6, and a discussion of certain open questions in section 7.
2. Spectral and Semidefinite Relaxations
The minimization of (6) involves unconstrained variables (global coordinates and patch translations) and constrained variables (the orthogonal transformations). We first solve for the unconstrained variables in terms of the unknown orthogonal transformations, representing the former as linear combinations of the latter. This reduces (6) to a quadratic optimization problem over the orthogonal transforms of the form (7).
In particular, we combine the global coordinates and the translations into a single $d \times (N+M)$ matrix:
(9) $Z = [\, x_1 \;\cdots\; x_N \;\; t_1 \;\cdots\; t_M \,].$
Similarly, we combine the orthogonal transforms into a single $d \times dM$ matrix,
(10) $O = [\, O_1 \;\cdots\; O_M \,].$
Recall that we will conveniently identify $O$ with an element of $\mathbb{O}(d)^M$.
Using (9), (10), and properties of the trace, we obtain
(11) $\phi = \mathrm{Tr}(Z L Z^T) - 2\, \mathrm{Tr}(Z B O^T) + \mathrm{Tr}(O D O^T),$
where
(12) $B_{k,i} = x_{k,i}^T, \qquad B_{N+i,\,i} = -\sum_{k : (k,i) \in \mathcal{E}} x_{k,i}^T, \qquad D = \mathrm{diag}(D_1, \ldots, D_M), \quad D_i = \sum_{k : (k,i) \in \mathcal{E}} x_{k,i} x_{k,i}^T.$
Here $B_{k,i}$ denotes the $1 \times d$ block of $B$ in row $k$ and block-column $i$ (blocks not listed are zero). The matrix $L$ is the combinatorial graph Laplacian of $\Gamma$ [14], and is of size $(N+M) \times (N+M)$. The matrix $B$ is of size $(N+M) \times dM$, and the size of the block diagonal matrix $D$ is $dM \times dM$.
The optimization program now reads
$\min_{Z,\, O} \; \phi(Z, O) \qquad \text{subject to} \quad Z \in \mathbb{R}^{d \times (N+M)}, \;\; O \in \mathbb{O}(d)^M.$
The fact that $\mathbb{O}(d)^M$ is nonconvex makes this program nonconvex. In the next few subsections, we will show how this nonconvex program can be approximated by tractable spectral and convex programs.
2.1. Optimization over translations
Note that we can write the minimization as
$\min_{O \in \mathbb{O}(d)^M} \; \Big( \min_{Z} \; \phi(Z, O) \Big).$
That is, we first minimize over the free variable $Z$ for some fixed $O$, and then we minimize with respect to $O$.
Fix some arbitrary $O$, and set $\psi(Z) = \phi(Z, O)$. It is clear from (11) that $\psi$ is quadratic in $Z$. In particular, the stationary points of $\psi$ satisfy
(13) $Z L = O B^T.$
The Hessian of $\psi$ equals $2L$, and it is clear from (12) that $L \succeq 0$. Therefore, every solution of (13) is a minimizer of $\psi$.
If $\Gamma$ is connected, then $\mathbf{1}$ is (up to scaling) the only vector in the null space of $L$ [14]. Let $L^\dagger$ be the Moore-Penrose pseudoinverse of $L$, which is again positive semidefinite. It can be verified that
(14) $L L^\dagger = L^\dagger L = I_{N+M} - \tfrac{1}{N+M} \mathbf{1} \mathbf{1}^T.$
If we right-multiply (13) by $L^\dagger$, we get
(15) $Z = O B^T L^\dagger + c \mathbf{1}^T,$
where $c \in \mathbb{R}^d$ is some global translation. Conversely, if we right-multiply (15) by $L$ and use the facts that $L \mathbf{1} = 0$ and $B^T \mathbf{1} = 0$, we get (13). Thus, every solution of (13) is of the form (15).
Substituting (15) into (11), we get
(16) $\phi = \mathrm{Tr}(O C O^T),$
where
(17) $C = D - B^T L^\dagger B.$
Note that (16) has the global translation $c$ taken out. This is not a surprise since $\phi$ is invariant to global translations. Moreover, note that we have not forced the orthogonality constraints on $O$ as yet. Since $\phi \ge 0$ for any $Z$ and $O$, it necessarily follows from (16) that $C \succeq 0$. We will see in the sequel how the spectrum of $C$ dictates the performance of the convex relaxation of (16).
In analogy with the notion of stress in rigidity theory [26], we can consider (6) as a sum of the “stress” between pairs of patches when we try to register them using rigid transforms. In particular, the $(i,j)$-th term in (16) can be regarded as the stress between the (centered) $i$-th and $j$-th patches generated by the orthogonal transforms. Keeping this analogy in mind, we will henceforth refer to $C$ as the patch-stress matrix.
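To make the translation-elimination step of this subsection concrete, the following sketch recovers the global coordinates and translations for fixed orthogonal transforms by solving the (rank-deficient) linear least-squares problem directly. The stacked-system layout and names are ours, and we use the convention $x_k \approx O_i x_{k,i} + t_i$ from (4):

```python
import numpy as np

def solve_positions_given_rotations(edges, local, O, N, M, d):
    """Least-squares recovery of global coordinates x_k and patch
    translations t_i for FIXED transforms O_i (illustrative sketch).

    edges : list of (k, i) pairs (point k belongs to patch i)
    local : dict mapping (k, i) -> local coordinate x_{k,i} in R^d
    O     : (M, d, d) array of orthogonal transforms
    """
    rows, rhs = [], []
    for (k, i) in edges:
        # One vector equation x_k - t_i = O_i x_{k,i} per edge,
        # written as d scalar rows of the stacked system.
        row = np.zeros((d, (N + M) * d))
        row[:, k * d:(k + 1) * d] = np.eye(d)            # +x_k
        row[:, (N + i) * d:(N + i + 1) * d] = -np.eye(d)  # -t_i
        rows.append(row)
        rhs.append(O[i] @ local[(k, i)])
    A = np.vstack(rows)
    b = np.concatenate(rhs)
    # The system is rank-deficient (global-translation gauge, cf. (15));
    # lstsq returns the minimum-norm solution, fixing the gauge.
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    x = z[:N * d].reshape(N, d)
    t = z[N * d:].reshape(M, d)
    return x, t
```

On clean data from a connected membership graph, the recovered positions agree with the truth up to a single global translation.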
2.2. Optimization over orthogonal transforms
The goal now is to optimize (16) with respect to the orthogonal transforms, that is, we have reduced the registration problem to the following:
$\min_{O \in \mathbb{O}(d)^M} \; \mathrm{Tr}(O C O^T).$
This is a nonconvex problem since $O$ lives on a nonconvex (disconnected) manifold [1]. We will generally refer to any method which uses manifold optimization to solve this problem and then computes the coordinates using (15) as “Global Registration over Euclidean Transforms using Manifold Optimization” (GRET-MANOPT).
2.3. Spectral relaxation and rounding
Following the quadratic nature of the objective, it is possible to relax the problem into a spectral one. More precisely, consider the domain
$\mathcal{W} = \{\, W \in \mathbb{R}^{d \times dM} : W W^T = M I_d \,\}.$
That is, we do not require the $d \times d$ blocks of $W$ to be orthogonal. Instead, we only require the rows of $W$ to form an orthogonal system, and each row to have the same norm. It is clear that $\mathcal{W}$ is a larger domain than that determined by the orthogonality constraints. In particular, we consider the following relaxation:
$\min_{W \in \mathcal{W}} \; \mathrm{Tr}(W C W^T).$
This is precisely a spectral problem in that the global minimizers are determined from the spectral decomposition of $C$. More precisely, let $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_{dM}$ be the eigenvalues of $C$, and let $q_1, \ldots, q_{dM}$ be the corresponding eigenvectors. Define
(18) $W^\star = \sqrt{M}\, [\, q_1 \;\cdots\; q_d \,]^T.$
Then
(19) $\min_{W \in \mathcal{W}} \; \mathrm{Tr}(W C W^T) = M (\lambda_1 + \cdots + \lambda_d).$
Due to the relaxation, the blocks of $W^\star$ are not guaranteed to be in $\mathbb{O}(d)$. We round each block of $W^\star$ to its “closest” orthogonal matrix. More precisely, write $W^\star = [\, W_1^\star \;\cdots\; W_M^\star \,]$. For every $i$, we find the $\hat{O}_i \in \mathbb{O}(d)$ such that
$\hat{O}_i = \arg\min_{Q \in \mathbb{O}(d)} \; \| Q - W_i^\star \|_F.$
As noted earlier, this has a closed-form solution, namely $\hat{O}_i = U V^T$ where $U \Sigma V^T$ is the SVD of $W_i^\star$. We now put the rounded blocks back into place and define
(20) $\hat{O} = [\, \hat{O}_1 \;\cdots\; \hat{O}_M \,].$
In the final step, following (15), we define
(21) $\hat{Z} = \hat{O} B^T L^\dagger.$
The first $N$ columns of $\hat{Z}$ are taken to be the reconstructed global coordinates.
We will refer to this spectral method as “Global Registration over Euclidean Transforms using Spectral Relaxation” (GRET-SPEC). The main steps of GRET-SPEC are summarized in Algorithm 1. We note that a similar spectral algorithm was proposed for angular synchronization by Bandeira et al. [4], and by Krishnan et al. [40] for initializing the manifold optimization.
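The spectral steps just summarized (bottom eigenvectors of $C$, then per-block SVD rounding) can be sketched as follows (NumPy; the interface and the $\sqrt{M}$ scaling convention follow (18), the function name is ours):

```python
import numpy as np

def spectral_relaxation_rounding(C, M, d):
    """Sketch of the spectral relaxation and rounding (GRET-SPEC-style).

    C : (M*d, M*d) symmetric PSD patch-stress matrix.
    Returns O : (M, d, d) array of orthogonal blocks.
    """
    # Eigenvectors for the d smallest eigenvalues of C minimize the
    # relaxed quadratic objective over W with W W^T = M I_d.
    vals, vecs = np.linalg.eigh(C)          # ascending eigenvalues
    W = vecs[:, :d].T * np.sqrt(M)          # (d, M*d), cf. (18)
    O = np.empty((M, d, d))
    for i in range(M):
        Wi = W[:, i * d:(i + 1) * d]
        U, _, Vt = np.linalg.svd(Wi)
        O[i] = U @ Vt                       # closest orthogonal matrix
    return O
```

When the relaxation is tight (the bottom eigenspace is spanned by a stack of orthogonal blocks), the rounded blocks recover the true transforms up to a common left orthogonal factor.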
The question at this point is how the quantities $\hat{Z}$ and $\hat{O}$ obtained from GRET-SPEC are related to the original problem. Since the spectral program is obtained by relaxing the block-orthogonality constraint, it is clear that if the blocks of $W^\star$ are orthogonal, then $\hat{Z}$ and $\hat{O}$ are solutions of the original problem. We have actually found the global minimizer of the original nonconvex problem in this case.
Observation 1 (Tight relaxation using GRET-SPEC).
If the blocks of the solution $W^\star$ are orthogonal, then the coordinates and transforms computed by GRET-SPEC are the global minimizers of the registration problem.
If some of the blocks are not orthogonal, the rounded quantities $\hat{Z}$ and $\hat{O}$ are only an approximation of the solution.
2.4. Semidefinite relaxation and rounding
We now explain how we can obtain a tighter relaxation using a semidefinite program, for which the global minimizer can be computed efficiently. Our semidefinite program was motivated by the line of works on the semidefinite relaxation of nonconvex problems [43, 23, 60, 12].
Consider the domain
$\mathcal{O}_r = \{\, O = [\, O_1 \;\cdots\; O_M \,] \in \mathbb{R}^{r \times dM} : O_i^T O_i = I_d, \; 1 \le i \le M \,\} \qquad (r \ge d).$
That is, while we require the columns of each block of $O$ to be orthonormal, we do not force the nonconvex rank constraint $\mathrm{rank}(O) = d$. This gives us the following relaxation
(22) $\min_{O \in \mathcal{O}_r} \; \mathrm{Tr}(O C O^T).$
Introducing the variable $G = O^T O$, (22) is equivalent to
$\min_{G} \; \mathrm{Tr}(C G) \qquad \text{subject to} \quad G \succeq 0, \;\; G_{ii} = I_d \;\; (1 \le i \le M).$
This is a standard semidefinite program [60] which can be solved using software packages such as SDPT3 [58] and CVX [29]. We provide details about SDP solvers and their computational complexity later in Section 2.5.
Let us denote the solution of the semidefinite program by $G^\star$, that is,
(23) $G^\star = \arg\min \big\{\, \mathrm{Tr}(C G) : G \succeq 0, \; G_{ii} = I_d \,\big\}.$
By the linear constraints in the program, it follows that $\mathrm{rank}(G^\star) \ge d$. If $\mathrm{rank}(G^\star) > d$, we need to round (approximate) it by a rank-$d$ matrix. That is, we need to project it onto the domain of the original problem. One possibility would be to use random rounding that comes with approximation guarantees; for example, see [23, 3]. In this work, we use deterministic rounding, namely the eigenvector rounding which retains the top $d$ eigenvalues and discards the rest. In particular, let $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_{dM}$ be the eigenvalues of $G^\star$, and $q_1, \ldots, q_{dM}$ be the corresponding eigenvectors. Let
(24) $W^\star = [\, \sqrt{\lambda_1}\, q_1 \;\cdots\; \sqrt{\lambda_d}\, q_d \,]^T.$
We now proceed as in GRET-SPEC, namely, we define $\hat{O}$ and $\hat{Z}$ from $W^\star$ as in (20) and (21). We refer to the complete algorithm as “Global Registration over Euclidean Transforms using SDP” (GRET-SDP). The main steps of GRET-SDP are summarized in Algorithm 2.
Similar to Observation 1, we note the following for GRETSDP.
Observation 2 (Tight relaxation using GRET-SDP).
If the rank of the solution $G^\star$ is exactly $d$, then the coordinates and transforms computed by GRET-SDP are the global minimizers of the registration problem.
If $\mathrm{rank}(G^\star) > d$, the output of GRET-SDP can only be considered as an approximation of the solution. The quality of the approximation can be quantified using, for example, the randomized rounding in [3]. More precisely, note that since $D$ is block-diagonal (so that $\mathrm{Tr}(O D O^T) = \mathrm{Tr}(D)$ for block-orthogonal $O$), (22) is equivalent (up to a constant term) to
$\max_{O \in \mathbb{O}(d)^M} \; \mathrm{Tr}(O R O^T),$
where $R = B^T L^\dagger B \succeq 0$. Bandeira et al. [3] show that the orthogonal transforms (which we continue to denote by $\hat{O}$) obtained by a certain random rounding of $G^\star$ satisfy
$\mathbb{E}\big[ \mathrm{Tr}(\hat{O} R \hat{O}^T) \big] \ge \alpha_d^2 \, \mathrm{OPT},$
where $\mathrm{OPT}$ is the optimum of the unrelaxed problem (7) in this maximization form, and $\alpha_d$ is the expected average of the singular values of a $d \times d$ random matrix with entries iid $\mathcal{N}(0, 1/d)$. It was conjectured in [3] that $\alpha_d$ is monotonically increasing, and the boundary values were computed to be $\alpha_1 = \sqrt{2/\pi}$ (this was also reported here [48]) and $\alpha_\infty = 8/(3\pi)$. We refer the reader to [3] for further details on the rounding procedure, and its relation to previous work in terms of the approximation ratio. Empirical results, however, suggest that the difference between deterministic and randomized rounding is small as far as the final reconstruction is concerned. We will therefore simply use the deterministic rounding.
2.5. Computational complexity
The main computations in GRET-SPEC are the Laplacian inversion, the eigenvector computation, and the orthogonal rounding. The cost of inverting $L$ when $\Gamma$ is dense is $O((N+M)^3)$. However, for most practical applications, we expect $L$ to be sparse since every point would typically be contained in a small number of patches. In this case, it is known that a linear system in $L$ can be solved in time almost linear in the number of edges in $\Gamma$ [57, 61]. Applied to (14), this means that we can compute $B^T L^\dagger$ in near-linear time (up to logarithmic factors). Note that, even if $L$ is dense, it is still possible to speed up the inversion (say, compared to a direct Gaussian elimination) using the formula given in [33, 50]. The speedup in this case is however in terms of the absolute run time. The overall complexity is still cubic, but with smaller constants. We note that it is also possible to speed up the inversion by exploiting the bipartite nature of $\Gamma$ [33], although we have not used this in our implementation.
The complexity of the eigenvector computation is at most $O((dM)^3)$ (and much less when only the $d$ smallest eigenvectors of a sparse $C$ are computed iteratively), while that of the orthogonal rounding ($M$ SVDs of $d \times d$ matrices) is $O(M d^3)$. The total complexity of GRET-SPEC, say, using a linear-time Laplacian inversion, is thus dominated by the eigenvector computation (up to logarithmic factors).
The main computational blocks in GRET-SDP are identical to those in GRET-SPEC, plus the SDP computation. The SDP solution can be computed in polynomial time using interior-point programming [67]. In particular, the complexity of computing an accurate solution using interior-point solvers such as SDPT3 [58] is polynomial in $dM$. It is possible to lower this complexity by exploiting the particular structure of the program. For example, notice that the constraint matrices have at most one nonzero coefficient. Using the algorithm in [30], one can then bring down the complexity of the SDP. By considering a penalized version of the SDP, we can use first-order solvers such as TFOCS [5] to further cut down the dependence on the problem size, but at the cost of a stronger dependence on the accuracy. The quest for efficient SDP solvers is currently an active area of research. Fast SDP solvers have been proposed that exploit either the low-rank structure of the SDP solution [11, 38] or the simple form of the linearity constraints [64]. More recently, a sublinear-time approximation algorithm for SDP was proposed in [22]. The complexity of GRET-SDP using a linear-time Laplacian inversion and an interior-point SDP solver is thus dominated by the SDP computation.
For problems of small-to-moderate size, we can solve the SDP in reasonable time on a standard PC using SDPT3 [58] or CVX [29]. We use CVX for the numerical experiments in Section 6 that involve small-to-moderate sized SDP variables. For larger SDP variables, one can use the low-rank structure of $G^\star$ to speed up the computation. In particular, we were able to solve for substantially larger SDP variables using SDPLR [11], which exploits this low-rank structure.
3. Exact Recovery
We now examine conditions on the membership graph under which the proposed spectral and convex relaxations can recover the global coordinates from the knowledge of the clean local coordinates (and the membership graph). More precisely, let $\bar{x}_1, \ldots, \bar{x}_N$ be the true coordinates of a point cloud in $\mathbb{R}^d$. Suppose that the point cloud is divided into patches whose membership graph is $\Gamma$, and that we are provided the measurements
(25) $x_{k,i} = \bar{O}_i^T (\bar{x}_k - \bar{t}_i) \qquad (k,i) \in \mathcal{E},$
for some $\bar{O}_i \in \mathbb{O}(d)$ and $\bar{t}_i \in \mathbb{R}^d$. The patch-stress matrix $C$ is constructed from $\Gamma$ and the clean measurements (25). The question is under what conditions on $\Gamma$ can $\bar{x}_1, \ldots, \bar{x}_N$ be recovered by our algorithm? We will refer to this as exact recovery.
To express exact recovery in the matrix notation introduced earlier, define
$\bar{Z} = [\, \bar{x}_1 \;\cdots\; \bar{x}_N \;\; \bar{t}_1 \;\cdots\; \bar{t}_M \,]$
and
$\bar{O} = [\, \bar{O}_1 \;\cdots\; \bar{O}_M \,].$
Then, exact recovery means that for some $Q \in \mathbb{O}(d)$ and $c \in \mathbb{R}^d$,
(26) $\hat{Z} = Q \bar{Z} + c \mathbf{1}^T.$
Henceforth, we will always assume that $\Gamma$ is connected (clearly one cannot have exact recovery otherwise).
Conditions for exact recovery have previously been examined by Zha and Zhang [68] in the context of tangent-space alignment in manifold learning, and later by Gortler et al. [25] from the perspective of rigidity theory. In particular, they show that the so-called notion of affine rigidity is sufficient for exact recovery using the spectral method. Moreover, the authors in [68, 25] relate this notion of rigidity to other standard notions of rigidity, and provide conditions on a certain hypergraph constructed from the patch system that can guarantee affine rigidity. The purpose of this section is to briefly introduce the rigidity results in [68, 25] and relate these to the properties of the membership graph (and the patch-stress matrix $C$). We note that the authors in [68, 25] directly examine the uniqueness of the global coordinates, while we are concerned with the uniqueness of the patch transforms obtained by solving the spectral and semidefinite relaxations. The uniqueness of the global coordinates is then immediate:
Proposition 3 (Uniqueness and Exact Recovery).
If the spectral and semidefinite relaxations have unique solutions, then (26) holds for both GRET-SPEC and GRET-SDP.
At this point, we note that if a patch has fewer than $d+1$ points, then even when $\bar{x}_1, \ldots, \bar{x}_N$ are the unique set of coordinates that satisfy (25), we cannot guarantee $\bar{O}_i$ and $\bar{t}_i$ to be unique. Therefore, we will work under the mild assumption that each patch has at least $d+1$ nondegenerate points, so that the patch transforms are uniquely determined from the global coordinates.
We now formally define the notion of affine rigidity. Although phrased differently, it is in fact identical to the definitions in [68, 25]. Henceforth, by affine transform, we will mean an element of the group of nonsingular affine maps on $\mathbb{R}^d$. Affine rigidity is a property of the patch graph $\Gamma$ and the local coordinates $\{ x_{k,i} \}$. In keeping with [25], we will together call these the patch framework and denote it by $(\Gamma, \{ x_{k,i} \})$.
Definition 4 (Affine Rigidity).
Let $y_1, \ldots, y_N$ be such that, for affine transforms $A_1, \ldots, A_M$,
$y_k = A_i(x_{k,i}) \qquad (k,i) \in \mathcal{E}.$
The patch framework is affinely rigid if every such $y_1, \ldots, y_N$ is identical to $\bar{x}_1, \ldots, \bar{x}_N$ up to a global affine transform.
Since each patch has $d+1$ nondegenerate points, we now give a characterization of affine rigidity that will be useful later on.
Proposition 5.
A patch framework is affinely rigid if and only if for any $W = [\, W_1 \;\cdots\; W_M \,]$ such that $\mathrm{Tr}(W C W^T) = 0$ we must have $W_i = A \bar{O}_i$ for some nonsingular $A$.
Before proceeding to the proof, note that $\bar{Z}$ and $\bar{O}$ are solutions of the registration problem and its relaxations (this was the basis of Proposition 3), and the objective in either case is zero. Indeed, from (25), we can write $\bar{Z} L = \bar{O} B^T$. Since $\Gamma$ is connected,
(27) $\bar{Z} = \bar{O} B^T L^\dagger + \bar{c} \mathbf{1}^T.$
Using (27), it is not difficult to verify that $\phi(\bar{Z}, \bar{O}) = \mathrm{Tr}(\bar{O} C \bar{O}^T)$. Moreover, it follows from (25) that $\phi(\bar{Z}, \bar{O}) = 0$. Therefore,
(28) $\mathrm{Tr}(\bar{O} C \bar{O}^T) = 0.$
Using an identical line of reasoning, we also record another fact. Let $W = [\, W_1 \;\cdots\; W_M \,]$ where each $W_i \in \mathbb{R}^{d \times d}$. Suppose there exist $y_1, \ldots, y_N$ and $s_1, \ldots, s_M$ such that
(29) $y_k = W_i x_{k,i} + s_i \qquad (k,i) \in \mathcal{E}.$
Then $Y = [\, y_1 \;\cdots\; y_N \;\; s_1 \;\cdots\; s_M \,]$ satisfies
(30) $Y L = W B^T,$
and $\mathrm{Tr}(W C W^T) = 0$.
Proof of Proposition 5.
For any $W = [\, W_1 \;\cdots\; W_M \,]$ such that $\mathrm{Tr}(W C W^T) = 0$, letting
$Y = [\, y_1 \;\cdots\; y_N \;\; s_1 \;\cdots\; s_M \,] = W B^T L^\dagger,$
we have (29). By the affine rigidity assumption, we must then have $y_k = A \bar{x}_k + b$ for some nonsingular $A$ and $b \in \mathbb{R}^d$. Since each patch contains $d+1$ nondegenerate points, it follows that $W_i = A \bar{O}_i$.
Note that $\mathrm{Tr}(W C W^T) = 0$ implies that the rows of $W$ are in the null space of $C$. Therefore, the combined facts that $\mathrm{Tr}(W C W^T) = 0$ and $W_i = A \bar{O}_i$ for some nonsingular $A$ are equivalent to saying that the null space of $C$ is within the row span of $\bar{O}$. The following result then follows as a consequence of Proposition 5.
Corollary 6.
A patch framework is affinely rigid if and only if the rank of is .
The corollary gives an easy way to check for affine rigidity. However, it is not clear what construction of will ensure such a property. In [68], the notion of graph lateration was introduced, which guarantees affine rigidity. Namely, is said to be a graph lateration (or simply laterated) if there exists a reordering of the patch indices such that, for every , and have at least nondegenerate nodes in common. An example of a graph lateration is shown in Figure 2.
Theorem 7 ([68]).
If is laterated and the local coordinates are nondegenerate then the framework is affinely rigid.
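A lateration ordering, if one exists, can be found greedily: the covered node set only grows as patches are placed, so a patch that once has enough overlap remains eligible, and it suffices to try each patch as the seed. The sketch below (with the hypothetical helper name `is_laterated`) checks only overlap *counts* against a threshold of d + 1 shared points; the definition above additionally requires those shared points to be nondegenerate, which would need the actual coordinates to verify.

```python
def is_laterated(patches, d):
    """Greedy check for graph lateration (simplified sketch).

    patches: list of sets of global point indices, one set per patch.
    d: ambient dimension; each newly placed patch must share at least
       d + 1 points with the union of previously placed patches.

    Caveat: the text requires the shared points to be nondegenerate
    (affine span of full dimension); counting overlaps alone, as done
    here, is a weaker necessary test.
    """
    n = len(patches)
    for seed in range(n):
        # Seed the ordering with one patch; since the covered set only
        # grows, greedy placement from a valid seed cannot get stuck.
        covered = set(patches[seed])
        remaining = [i for i in range(n) if i != seed]
        while remaining:
            pick = next((i for i in remaining
                         if len(patches[i] & covered) >= d + 1), None)
            if pick is None:
                break  # no patch overlaps enough for this seed
            covered |= patches[pick]
            remaining.remove(pick)
        else:
            return True  # every patch was placed: a lateration exists
    return False
```

For example, in d = 2 three patches chained with three shared points each are laterated, while patches sharing at most one point are not.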
Next, we turn to the exact recovery conditions for . The appropriate notion of rigidity in this case is that of universal rigidity [27]. Just as we defined affine rigidity earlier, we can phrase universal rigidity as follows.
Definition 8 (Universal Rigidity).
Suppose that (25) holds. Let be such that, for some orthogonal and ,
We say that the patch framework is universally rigid in if for any such , we have for some orthogonal .
By orthogonal , we mean that the columns of are orthogonal and of unit norm (i.e., can be seen as an orthogonal transform in by identifying as a subspace of ).
Following exactly the same arguments used to establish Proposition 5, one can derive the following.
Proposition 9.
The following statements are equivalent:
A patch framework is universally rigid in .
Let be such that for all . Then
The question, then, is: under what conditions is the patch framework universally rigid? This was also addressed in [25] using a graph construction derived from , called the body graph. This is given by , where and if and only if and belong to the same patch (cf. Figure 3). Next, the following distances are associated with :
(31) 
where , say. Note that the above assignment is independent of the choice of patch. A set of points in is said to be a realization of in if for .
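The body-graph construction can be sketched as follows: join two points by an edge whenever they belong to a common patch, and assign the edge the distance computed from the local coordinates of any shared patch (the assignment is patch-independent in the noiseless case, as noted above). The function name `body_graph` and the data layout are assumptions for illustration.

```python
import itertools
import numpy as np

def body_graph(patches, local_coords):
    """Build body-graph edges and distances (sketch following (31)).

    patches: list of tuples of global point indices, one per patch.
    local_coords: list of arrays; local_coords[i][k] is the local (patch)
        coordinate of the k-th point of patch i.
    Returns a dict mapping edges (k, l) with k < l to distances. In the
    noiseless setting every common patch yields the same distance, so the
    first patch encountered is as good as any.
    """
    dist = {}
    for coords, pts in zip(local_coords, patches):
        for (a, k), (b, l) in itertools.combinations(enumerate(pts), 2):
            edge = (min(k, l), max(k, l))
            # Distances are invariant to rigid transforms, so the local
            # coordinates determine them without knowing the patch pose.
            dist.setdefault(edge, np.linalg.norm(coords[a] - coords[b]))
    return dist
```

A single patch containing points (0, 0), (3, 0), (0, 4) yields the three edges of a 3-4-5 triangle.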
It was shown in [25] that is universally rigid if and only if with distances has a unique realization in for all . Moreover, in such a situation, using the distances as constraints, an SDP relaxation was proposed in [56, 71] for finding the unique realization. We note that although the SDP in [56] has the same condition for exact recovery as , it is computationally more demanding than , since the number of variables is for this SDP, instead of as in (for most applications, ). Moreover, as we will see shortly in Section 6, also enjoys some stability properties which have not been established for the SDP in [56].
Finally, we note that universal rigidity is a weaker condition on than affine rigidity.
Theorem 10 ([56], Theorem 2).
If a patch framework is affinely rigid, then it is universally rigid.
In [25], it was also shown that the reverse implication is not true, using a counterexample for which the patch framework fails to be affinely rigid, but for which the body graph (a Cauchy polygon) has a unique realization in any dimension [15]. This means that GRET-SDP can solve a bigger class of problems than GRET-SPEC, which is perhaps not surprising.
4. Randomized Rank Test
Corollary 6 tells us that, by checking the rank of the patch-stress matrix , we can tell whether a patch framework is affinely rigid. In this regard, the patch-stress matrix serves the same purpose as the so-called alignment matrix in [68] and the affinity matrix in [25]. The only difference is that the kernel of represents the degrees of freedom of the affine transform, whereas the kernel of the alignment or affinity matrix directly tells us the degrees of freedom of the point coordinates. As suggested in [25], an efficient randomized test for affine rigidity using the affinity matrix can easily be derived. In this section, we describe a randomized test based on the patch-stress matrix, which parallels the proposal in [25]. This procedure is also similar in spirit to the randomized tests for generic local rigidity by Hendrickson [31], for generic global rigidity by Gortler et al. [26], and for matrix completion by Singer and Cucuringu [54].

Let us continue to denote the patch-stress matrix obtained from and the measurements (25) by . We will use to denote the patch-stress matrix obtained from the same graph , but using the (unknown) original coordinates as measurements, namely,
(32) 
The advantage of working with over is that the former can be computed using just the global coordinates, while the latter requires the knowledge of the global coordinates as well as the clean transforms. In particular, this only requires us to simulate the global coordinates. Since the coordinates of points in a given patch are determined up to a rigid transform, we claim the following (cf. Section 8.1 for a proof).
Proposition 11 (Rank equivalence).
For a fixed , and have the same rank.
In other words, the rank of can be used to certify exact recovery. The proposed test is based on Proposition 11, and on the fact that if two different generic configurations are used as input in (32) (for the same ), then the patch-stress matrices they produce have the same rank. By generic, we mean that the coordinates of the configuration do not satisfy any nontrivial algebraic equation with rational coefficients [26].
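The resulting test pattern can be sketched as follows. Since the precise assembly of the patch-stress matrix in (25) is given earlier in the text and not reproduced here, the sketch takes the assembly routine as a user-supplied callable, draws random Gaussian global coordinates (generic with probability one), and compares the numerical rank against the target value from Corollary 6, which is passed in as a parameter. All names (`randomized_rank_test`, `build_stress`, `expected_rank`) are illustrative assumptions.

```python
import numpy as np

def randomized_rank_test(build_stress, n_points, d, expected_rank,
                         trials=3, tol=1e-9, rng=None):
    """Randomized affine-rigidity test (sketch paralleling Section 4).

    build_stress: callable mapping an (n_points, d) array of simulated
        global coordinates to the patch-stress matrix of (32); its exact
        form, given by (25) in the text, is assumed here.
    expected_rank: the rank that certifies affine rigidity (Corollary 6).
    A Gaussian configuration is generic with probability one, and by rank
    equivalence (Proposition 11) its patch-stress matrix has the same
    rank as the one built from the true (unknown) coordinates.
    """
    rng = np.random.default_rng(rng)
    for _ in range(trials):
        coords = rng.standard_normal((n_points, d))
        # Numerical rank: count singular values above a relative threshold.
        s = np.linalg.svd(build_stress(coords), compute_uv=False)
        rank = int(np.sum(s > tol * s[0]))
        if rank != expected_rank:
            return False
    return True
```

Repeating over a few independent draws guards against the (measure-zero, but numerically possible) event of an unlucky configuration.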