With the proliferation of three-dimensional (3D) scanning devices, it is easy to acquire large volumes of 3D point clouds. Point clouds provide a direct and convenient way to describe outdoor and indoor scenes. However, modeling such massive data remains demanding. Automatic modeling methods typically produce excessively complex meshes, which are cumbersome for later applications. Moreover, many applications (e.g., simultaneous localization and mapping and level-of-detail generation) do not require overly fine models. Considering that man-made environments contain many planar structures, plane reconstruction remains a suitable choice for 3D-scene description. Furthermore, plane reconstruction or extraction is typically used as a prior step in various tasks, such as architecture modeling, place recognition, object recognition, and registration.
In man-made environments, planar structures generally conform to one of the following relationships: parallelism, orthogonality, coplanarity, and angular equality. Making full use of these relationships can considerably improve the robustness of plane reconstruction algorithms for complex scenes. However, traditional plane extraction techniques (e.g., region growing, the Hough transform [40, 4], and random-sample consensus (RANSAC) [16, 34]) do not take advantage of these geometric constraints and suffer from a lack of robustness under high percentages of noise and outliers.
In the following, we refer to the task of reconstructing planes from 3D unstructured point clouds while maintaining geometric relationships among the reconstructed planes as regularity-constrained plane reconstruction (RCPR), which is increasingly used in industrial manufacturing and architectural modeling.
A straightforward approach to RCPR is the detection-then-regularization strategy, which has two steps. In the first step, planes are extracted without regard to regularity constraints. In the second step, regularity is enforced among the extracted planes. This strategy relies heavily on the quality of the planes extracted in the first step and easily becomes trapped in local solutions. More sophisticated algorithms, on the other hand, are typically time-consuming, which hampers their practicality.
In this paper, we formulate RCPR as a global ℓ0 gradient minimization problem and propose a simple but efficient algorithm to solve it. The proposed algorithm performs plane reconstruction and regularization simultaneously without losing efficiency. In fact, our algorithm is 3 to 1,000 times faster than extant RCPR algorithms.
2 Related work
In this section, we review works immediately related to our problem. We cover the following three main aspects: plane extraction, multimodel fitting, and regularity-constrained fitting.
Plane extraction is an old but recurrent problem. It involves many different techniques and has many variants, depending on the task and input data. A very common technique is region growing, often employed by more sophisticated algorithms to extract initial planes [10, 27, 29, 22]. Another popular approach is RANSAC and its variants [34, 33, 18, 43]. RANSAC is a robust estimator in the presence of a high percentage of outliers. However, it was originally designed to cope with a single model and was not built to maintain regularity. The randomized Hough transform naturally handles multiple structures; however, it cannot cope with a high percentage of outliers.
Recent planar-structure extraction techniques (e.g., supervoxel segmentation [31, 23] and structural-scale planar-shape detection) constrain the scale of planar structures but do not constrain their geometric relationships.
In summary, traditional plane extraction methods do not aim to maintain regularity. However, they often serve as the initial step of RCPR methods.
The goal of multimodel fitting is to improve robustness when fitting multiple models to data contaminated by noise and outliers. Multimodel fitting is typically formulated as an optimization problem. For example, Delong et al. and Amayo et al. converted the problem into a multilabeling problem and solved it via α-expansion and a primal-dual algorithm, respectively. Magri and Fusiello cast the multimodel fitting problem as a set-coverage problem and solved it with integer linear programming. Moreover, clustering-based methods (J-linkage and T-linkage) and hypergraph-based methods have gained popularity.
In our application, because the input point clouds are huge, we favor methods with low time complexity. Unfortunately, most multimodel fitting methods do not meet this requirement.
Regularity-constrained fitting has gained popularity in recent years. Li et al. presented the GlobFit framework, which adopted an iterative detection-then-regularization strategy to fit primitives. It first used RANSAC to locally fit primitives and to detect the geometric relationships among them. Then, it refitted these primitives using the detected constraints. Similarly, Zhou and Neumann adopted the same strategy to model buildings. Departing from GlobFit, Oesau et al. sought to progressively detect regularities rather than iterating between complete detection and regularization. Monszpart et al. focused on constraining the angles between pairs of planes. Their method first generated candidate planes using fitting distances. Then, it searched for the optimal planes among the candidates using mixed-integer linear programming. Although the method achieved good performance on cluttered scenes, it was time-consuming because of the need to solve large-scale mixed-integer programming problems. More recently, Joo et al. presented a fast branch-and-bound framework to estimate the optimal Manhattan frame.
3 Motivation and constraint models
RCPR’s regularity depends on the assumptions made about the real world. Here, we refer to this group of assumptions as a constraint model, and we define a constraint model as a collection of plane sets satisfying a particular constraint. Thus, the task of RCPR can be regarded as finding the element of the constraint model that best fits the given data. A typical constraint model is the Manhattan model.
A plane set, P, is a Manhattan model if and only if it satisfies

|N(P)| ≤ 3 and n_i ⊥ n_j for any two distinct n_i, n_j ∈ N(P),   (1)

where N(P) is the set of unit normal vectors of P. Because a plane has two orientations, we specify that the angle between each normal vector in N(P) and the positive z-axis is no more than 90°.
In practice, the Manhattan model is a very strong constraint. It can simplify a few tough problems, such as line clustering and vanishing-point estimation in two-dimensional (2D) images. However, the real world usually does not conform to the Manhattan model. Therefore, Straub et al. proposed a mixture of Manhattan frames, whose idea is to utilize multiple Manhattan frames (MMF) to represent complex scenes.
Multiple Manhattan frames model
A plane set, P, is an m-MMF model if it can be divided into m non-overlapping subsets, with each subset being a Manhattan model.
The limitation of the Manhattan and MMF models is obvious: they only consider parallelism and orthogonality. RAPTER adopted a constraint model that considers more angular constraints. Because RAPTER's constraint model is difficult to formulate, we only consider a variant of it, which we call the generalized Manhattan model.
Generalized Manhattan model
A plane set, P, is a generalized Manhattan model if and only if it satisfies

∠(n_i, n_j) ∈ A for any two n_i, n_j ∈ N(P),   (2)

where A is a user-defined set of constraint angles. It is easy to see that, if A = {0°, 90°}, the generalized Manhattan model is equivalent to the Manhattan model.
A comparison of these constraint models on a line-fitting task is illustrated in Fig. 1. We notice that the Manhattan model cannot fit these simple data well. The MMF model obtains a better result but still misses one line. The generalized Manhattan model fits the data well; however, it must introduce more angle constraints. For example, in Fig. 1, the set of constraint angles A must be enlarged accordingly. Usually, it is difficult to know the exact constraint angles in advance. Furthermore, more constraint angles considerably increase the complexity of regularization.
We consider, therefore, using implicit angle constraints instead of explicit angle constraints. Specifically, we introduce a parameter, K, to constrain the number of different normal vectors of the reconstructed planes. Our motivation is based on a simple observation: in man-made scenes, the number of planes is always much larger than the number of different normal vectors. For example, in Fig. 1, we have eight lines but only four different directions (i.e., normal vectors). We refer to our constraint model as the directional constraint (DC) model, defined as follows.
Directional constraint model
Given a plane set, P, we denote N(P) as the set of different normal vectors of P. P is a K-DC model if and only if it satisfies

|N(P)| ≤ K.   (3)
In contrast to the other three models, the DC model cannot maintain orthogonality. However, it has the advantage of requiring fewer a priori parameters. In fact, the DC model needs only one a priori parameter, K (i.e., the number of different normals). An example is given in Fig. 1(d). There are eight lines but only four different directions. Therefore, we can set K = 4 to obtain the correct fitting result. Moreover, by constraining the number of normal vectors, we implicitly establish connections between planes, which helps obtain better fitting results under noise and outliers (see Fig. 3).
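As a toy illustration of the DC model, the sketch below (our own, not part of the paper's implementation; the function names and the angular-tolerance parameter are our assumptions) counts the distinct directions of a set of unit normals up to a tolerance and checks the K-DC condition:

```python
import numpy as np

def distinct_directions(normals, tol_deg=5.0):
    # Greedily cluster unit normals: two normals whose orientation-free
    # angle is below tol_deg count as the same direction.
    reps = []
    for n in normals:
        n = np.asarray(n, dtype=float)
        n = n / np.linalg.norm(n)
        is_new = True
        for r in reps:
            cos = min(1.0, abs(float(np.dot(n, r))))
            if np.degrees(np.arccos(cos)) < tol_deg:
                is_new = False
                break
        if is_new:
            reps.append(n)
    return len(reps)

def satisfies_dc(normals, K, tol_deg=5.0):
    # A plane (or line) set is a K-DC model if it has at most K directions.
    return distinct_directions(normals, tol_deg) <= K
```

For the eight-line example with four directions, `satisfies_dc(normals, 4)` holds while `satisfies_dc(normals, 3)` does not.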
In the next section, we discuss how to reconstruct a set of planes satisfying the DC model.
4.1 Problem formulation
Given a point cloud, P = {p_1, …, p_M}, equipped with normal vectors, N = {n_1, …, n_M}, our task is to find a plane set that best fits the input points while satisfying the DC model. If the input point cloud has no normal-vector information, we use the principal component analysis (PCA) algorithm to estimate the normal vectors.
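PCA-based normal estimation can be sketched as follows. This is a generic brute-force version (the function name and k-nearest-neighbor scheme are ours, not the paper's implementation); the normal of each point is the direction of least variance in its neighborhood, oriented to make an angle of at most 90° with the positive z-axis, as in the orientation convention above:

```python
import numpy as np

def estimate_normals(points, k=8):
    # For each point, fit a plane to its k nearest neighbors via PCA:
    # the normal is the right-singular vector of least variance.
    P = np.asarray(points, dtype=float)
    d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)   # brute-force k-NN
    normals = np.empty_like(P)
    for i in range(len(P)):
        nb = P[np.argsort(d2[i])[:k]]
        nb = nb - nb.mean(axis=0)                # center the neighborhood
        _, _, vt = np.linalg.svd(nb, full_matrices=False)
        n = vt[-1]                               # direction of least variance
        if n[-1] < 0:                            # orient toward the +z half-space
            n = -n
        normals[i] = n
    return normals
```

For large clouds one would replace the brute-force neighbor search with a k-d tree; the PCA step itself is unchanged.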
Our method comprises two steps. First, we reconstruct the normal vector for each point while maintaining the constraints of the DC model. Second, the desired planes are obtained by grouping nearby points having the same reconstructed normal. In the following, we focus on the first step, because the second step is easily implemented. More specifically, we consider the following problem:

min_{N*} E(N*, N)  s.t. |D(N*)| ≤ K,   (4)

where N* = {n*_1, …, n*_M} is the set of reconstructed normal vectors; D(N*) is the set of different normal vectors in N*; and E(N*, N) is the energy that measures how well N* fits N. Here, we adopt an ℓ0-regularized energy for E, owing to its performance under high noise. Moreover, the constraint term, |D(N*)| ≤ K, can be relaxed by introducing a regularization parameter, β. Therefore, Eq. 4 is rewritten as

min_{N*} Σ_{i=1}^{M} ||n*_i − n_i||² + λ Σ_{i=1}^{M} Σ_{j∈N_i} ||n*_i − n*_j||_0 + β |D(N*)|,   (5)

where M is the number of points and N_i is the neighboring set of the i-th point; writing D(N*) = {d_1, d_2, …}, l_i denotes the index such that n*_i = d_{l_i} for the i-th point. Note that Eq. 5 can also satisfy the DC model if we choose an appropriate β value.

In Eq. 5, the first term, Σ_i ||n*_i − n_i||², guides the output normal vectors to remain as close as possible to the input. The second term, Σ_i Σ_{j∈N_i} ||n*_i − n*_j||_0, constrains the sparsity in the local sense: it pulls the normal vector of each point toward its neighboring normals. The third term, |D(N*)|, constrains the global sparsity. The two parameters, λ and β, balance the influence of the two constraint terms. Eq. 5 can be further divided into the following two subproblems:

min_{N*} Σ_{i=1}^{M} ||n*_i − n_i||² + λ Σ_{i=1}^{M} Σ_{j∈N_i} ||n*_i − n*_j||_0,   (6)

min_{S⊆D(N*)} Σ_{i=1}^{M} min_{d∈S} ||d − n*_i||² + β |S|.   (7)
If we limit the reconstructed normals to the candidate set D(N*), then Eq. 7 can be rewritten as

max_{S⊆D(N*)} f(S),  f(S) = −Σ_{i=1}^{M} min_{d∈S} ||d − n*_i||² − β |S|.   (8)

We notice that Eq. 6 is an ℓ0 gradient minimization problem and that Eq. 8 is a subset selection problem. This suggests a simple idea: we can optimize Eq. 5 by alternately solving the two subproblems. By reviewing state-of-the-art ℓ0 minimization and subset selection techniques, we provide a simple but efficient algorithm, called "Global-ℓ0", to solve the problem.
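To give a feel for the ℓ0 gradient minimization subproblem (Eq. 6), the toy sketch below applies a greatly simplified region-fusion scheme to a 1D signal. It follows the spirit of Nguyen and Brown's region fusion but omits their gradual growth of the regularization weight; the function name and this single-weight variant are our assumptions:

```python
import numpy as np

def l0_region_fusion_1d(x, lam):
    # Toy solver for: min_y sum_i (y_i - x_i)^2 + lam * #{i : y_i != y_{i+1}}.
    # Start with one region per sample; greedily merge adjacent regions
    # whenever removing the jump penalty (lam) outweighs the data cost.
    means = [float(v) for v in x]
    sizes = [1] * len(means)
    merged = True
    while merged:
        merged = False
        i = 0
        while i < len(means) - 1:
            wa, wb = sizes[i], sizes[i + 1]
            # merging neighbors saves lam but adds this quadratic data cost
            cost = wa * wb / (wa + wb) * (means[i] - means[i + 1]) ** 2
            if lam >= cost:
                means[i] = (wa * means[i] + wb * means[i + 1]) / (wa + wb)
                sizes[i] = wa + wb
                del means[i + 1], sizes[i + 1]
                merged = True
            else:
                i += 1
    # expand region means back to per-sample values
    return np.repeat(means, sizes)
```

On a noisy two-level signal, the output collapses to two constant regions, which is the piecewise-constant behavior the second term of Eq. 5 induces on normal vectors.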
4.2 Global-ℓ0 algorithm
The framework of the Global-ℓ0 algorithm is given in Algorithm 1; it comprises the following two steps.
To solve subproblem 8, we adopt greedy unconstrained submodular optimization. Let f(S) be the objective of Eq. 8. f is a submodular function, satisfying f(A ∪ {d}) − f(A) ≥ f(B ∪ {d}) − f(B) for any A ⊆ B ⊆ D(N*) and d ∉ B. Therefore, maximizing f(S) is an unconstrained submodular maximization (USM) problem. We can use the linear-time algorithm proposed by Buchbinder et al. to solve it.
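A minimal sketch of the deterministic double-greedy USM algorithm of Buchbinder et al. is shown below (the function name is ours, and the toy coverage objective in the test is purely illustrative, not the paper's objective):

```python
def double_greedy_usm(f, ground):
    # Deterministic double greedy of Buchbinder et al.: a 1/3-approximation
    # for unconstrained maximization of a nonnegative submodular f
    # (the randomized variant reaches 1/2).
    X, Y = set(), set(ground)
    for u in ground:
        gain_add = f(X | {u}) - f(X)      # marginal gain of adding u to X
        gain_rem = f(Y - {u}) - f(Y)      # marginal gain of removing u from Y
        if gain_add >= gain_rem:
            X.add(u)
        else:
            Y.discard(u)
    return X                              # X == Y after the sweep
```

Each element is examined once, so the sweep is linear in the ground-set size, which is what makes the subset-selection step cheap.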
However, the complexity of applying this algorithm directly over all points is unacceptable. We notice that, after performing region-fusion-based ℓ0 minimization, the normal vectors, N*, are divided into several disjoint regions, with each region sharing the same normal vector. Therefore, f can be simplified to

f(S) = −Σ_{r∈R} m_r min_{d∈S} ||d − n_r||² − β |S|,   (9)

where R is the set of regions; m_r and n_r are the number of points and the normal vector of the r-th region, respectively; and D(N*) is the set of normal vectors consisting of n_r, r ∈ R.
Additionally, we can ignore regions whose number of points is less than a threshold, τ. τ can be interpreted as the minimal number of support points for a plane (i.e., the filtered regions are treated as outliers). Thus, the time complexity of our subset-selection step depends only on the number of retained regions, which in practice is far smaller than the number of points.
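The simplified region-level objective with minimum-support filtering might look as follows. This is our own sketch: the function name, the (count, normal) region representation, and in particular the weighting of each region's snapping cost by its point count are assumptions about the exact form:

```python
import numpy as np

def region_subset_objective(S, regions, beta, tau):
    # Region-level surrogate of the subset-selection objective: after region
    # fusion, each region is a pair (m_r, n_r) = (point count, shared normal).
    # Regions with fewer than tau points are discarded as outliers, and each
    # kept region snaps to its closest candidate normal in S.
    kept = [(m, np.asarray(n, float)) for m, n in regions if m >= tau]
    cost = sum(m * min(float(((n - np.asarray(d, float)) ** 2).sum()) for d in S)
               for m, n in kept)
    return -cost - beta * len(S)
```

Evaluating this over candidate subsets touches only the surviving regions, which is why the filtering step pays off.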
More details are described in Algorithm 2.
The randomized version of the algorithm has a better approximation ratio of 1/2. However, we choose the deterministic algorithm, because it is more stable. Furthermore, with a heuristic processing sequence (i.e., descending order by region cardinality), our solution is usually better than the theoretical guarantee.
The Global-ℓ0 algorithm can be seen as an extension of ℓ0 gradient minimization algorithms. Traditional ℓ0 gradient minimization methods only consider regularity between local neighbors. Our algorithm introduces global regularity by limiting the number of different normal vectors, thereby establishing implicit relationships between the disjoint regions. In Section 5, we demonstrate that these global relationships are very useful, especially in man-made environments.
As a post-processing step, the constrained planes are constructed from the disjoint regions, and the normal vector of each plane is set to the reconstructed normal vector of the corresponding region.
4.3 Difference to the multilabeling problem
Eq. 5 resembles the multilabeling problem of [12, 1]. However, they are quite different. First, the energy formulation in [12, 1] only constrains the number of models (planes in our context). In contrast, Eq. 4 constrains the number of plane normals, because it constrains the size of the label set. Second, in multilabeling problems, the label set (D(N*) in our formulation) is usually known. For example, Pearl must first compute an initial set of models (labels) via sampling. Our algorithm is designed to solve the problem without an initial label set. In other words, the size of the initial label set in our problem is much larger than that in multilabeling problems. Therefore, an efficient algorithm is required.
Our method was implemented in C++ and run on Ubuntu 16.04 Linux with one core of an Intel i7 CPU (2.5 GHz) and 8 GB of memory.
5.1 Parameter setting
Four parameters are adopted in our method: τ, the minimal number of points that support a plane; k, the number of neighbors of each point; λ, the local regularization weight for normal vectors in ℓ0 minimization; and β, the global regularization weight for normal vectors. τ and k are common parameters, and their settings have been fully studied; we keep them fixed in the following experiments.
Fig. 2 demonstrates the effects of λ and β. We use the same color to draw points with the same normal vector. In Fig. 2(b-d), we first fix β and increase the value of λ. One can observe that, as λ increases, the number of regions decreases. However, the normal vectors of the regions remain distinct (i.e., |D(N*)| equals the number of regions). In Fig. 2(e-h), we fix λ and increase the value of β. The number of regions and the number of different normal vectors then decrease simultaneously. When β reaches 10,000, only the three most frequent normals are preserved, satisfying the Manhattan model.
In practice, for data having high noise, we choose higher values for λ and β. Additionally, if the user has prior knowledge of the scene (i.e., the value of K), an appropriate β value can be chosen accordingly.
5.2 2D line fitting
We first study the robustness of the proposed method on the 2D line-fitting problem with respect to sequential RANSAC (seq-RANSAC), J-linkage, T-linkage, and Pearl. We implemented seq-RANSAC, J-linkage, and T-linkage ourselves, whereas the implementation of Pearl was obtained from its authors.
As shown in Fig. 3, we generate eight lines, four of which are parallel and four of which form a square. Each line comprises 100 inliers contaminated by Gaussian noise, plus outliers at different percentages. When we say 50% outliers, we mean that the number of outliers is half of the entire point set. Note that the ground-truth lines have only three different directions; our method can benefit from this constraint.
The results are collected in Fig. 3, where E_angle is the root-mean-square (RMS) error of the angles between the reconstructed normal vectors and the ground-truth normals. It is defined as

E_angle = √( (1/|I|) Σ_{p_i∈I} ∠(n*_i, n̄_i)² ),   (10)

where I represents the ground-truth inliers and n̄_i is the ground-truth normal of point p_i.
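The RMS angle error can be computed as sketched below (our own generic version; the function name and the use of the absolute dot product to ignore normal orientation are our assumptions):

```python
import numpy as np

def rms_angle_error(n_rec, n_gt):
    # RMS of the angles (in degrees) between corresponding reconstructed and
    # ground-truth unit normals over the ground-truth inliers. |dot| treats
    # the two possible orientations of a normal as equal.
    cos = np.clip(np.abs(np.sum(np.asarray(n_rec) * np.asarray(n_gt), axis=1)),
                  0.0, 1.0)
    ang = np.degrees(np.arccos(cos))
    return float(np.sqrt(np.mean(ang ** 2)))
```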
As noise and outliers increase, seq-RANSAC, J-linkage, T-linkage, and Pearl miss ground-truth line segments and generate more false positives. J-linkage and T-linkage aggregate the preference sets without exploiting the regularity of the data. Moreover, J-linkage and T-linkage rely on the selection of the initial set; as the percentage of inliers decreases, their performance degrades. Pearl solves the line-fitting problem within an optimization framework. However, it only constrains the number of fitted lines, not the number of directions, and it also relies on the selection of the initial set.
Our method achieves the best results in all situations. The reason can be ascribed to the constraint on the number of directions and the independence from the initial set. Note that the number of directions is a non-local constraint, similar in spirit to non-local denoising methods. When outliers reach 80% (20% inliers), our method generates two false positives and misses one line segment. However, the directions of the fitted lines are correct, and the number of different directions (including false positives) remains three. Moreover, our method has the best time performance, running significantly faster than the other methods.
5.3 Plane reconstruction on 3D manual point sets
We further study the robustness of the proposed method on 3D data with respect to RAPTER, piecewise planar surface reconstruction (PPSR), and Pearl. We obtained the implementation of RAPTER from its authors. Here, we use a region-growing technique to extract the initial planes for Pearl, which leads to better reconstruction results.
We first demonstrate the performance of these methods on a noisy 3D point set. As shown in Fig. 4, we sampled 100,000 points from a dodecahedron and perturbed each point with Gaussian noise whose standard deviation is proportional to the diagonal length of the axis-aligned bounding box of the original point cloud.
RAPTER is susceptible to Gaussian noise: because it relies on the initial planes obtained from region growing, its results are unsatisfactory when region growing fails. PPSR improves traditional region growing by carefully selecting seed points; however, it still generates too many redundant planes. Pearl obtains a much better result but loses accuracy at the boundaries. Benefiting from the robustness of ℓ0 optimization and the parallel-face constraints, our method estimates the correct model. Furthermore, our method is much faster than the others.
Empire State Building with outliers
We next compare our method with RAPTER on the Empire State Building with different levels of outliers. The ground-truth model was downloaded from a public benchmark. We sampled 1,000,000 points from the ground-truth model and added 250,000 and 1,000,000 outliers, respectively.
In addition to E_angle, we evaluate the accuracy of the reconstructed planes by computing the RMS of the distances from the ground-truth points, p_i, to the reconstructed planes, i.e.,

E_dist = √( (1/M) Σ_{i=1}^{M} dist(p_i, π_{c_i})² ),   (11)

where c_i denotes the index of the reconstructed plane π closest to p_i.
E_dist in Eq. 11 penalizes the insufficient reconstruction of planes. However, it does not penalize redundant planes, which is critical for point clouds with outliers. Therefore, for each inlier point, p_i, we calculate its projection point, q_i, on the reconstructed planes. Then, we evaluate the distance from q_i to the nearest ground-truth point, i.e.,

E_proj = √( (1/M) Σ_{i=1}^{M} min_{p∈P} ||q_i − p||² ),   (12)

where P denotes the set of ground-truth points. Note that E_proj penalizes inaccurate and redundant reconstructed planes simultaneously.
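Both error measures can be sketched as follows, assuming planes in implicit form n · p + offset = 0 with unit normals; the function names and this plane representation are our assumptions, not the paper's code:

```python
import numpy as np

def rms_plane_distance(points, planes):
    # Eq. 11-style error: RMS distance from each ground-truth point to the
    # closest reconstructed plane; planes are (unit_normal, offset) pairs.
    P = np.asarray(points, float)
    d = np.stack([np.abs(P @ np.asarray(n, float) + off) for n, off in planes],
                 axis=1)
    return float(np.sqrt(np.mean(d.min(axis=1) ** 2)))

def rms_projection_distance(inliers, planes, gt_points):
    # Projection-based error: project each inlier onto its closest
    # reconstructed plane, then measure the distance from the projection to
    # the nearest ground-truth point; this also penalizes redundant planes.
    P, G = np.asarray(inliers, float), np.asarray(gt_points, float)
    proj = []
    for p in P:
        n, off = min(planes, key=lambda pl: abs(p @ np.asarray(pl[0], float) + pl[1]))
        n = np.asarray(n, float)
        proj.append(p - (p @ n + off) * n)       # orthogonal projection
    Q = np.asarray(proj)
    d = np.sqrt(((Q[:, None, :] - G[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return float(np.sqrt(np.mean(d ** 2)))
```

A spurious plane far from the data leaves the first metric unchanged but inflates the second, which is exactly why the projection-based measure is needed.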
As shown in the black boxes in Fig. 5, the normal-vector distribution of the original point clouds is very noisy. Nevertheless, both methods achieve visually good plane reconstruction results. Neither method, however, produces a complete plane reconstruction at the top of the building (see the close-up views in the yellow boxes): small planes are more susceptible to outliers, resulting in incorrect reconstructed planes. Despite that, we observe that RAPTER produces several planes with inaccurate directions, whereas our method produces relatively few but more accurate planes. In the close-up views in the red boxes, we also observe that our method outperforms RAPTER in the details.
Quantitative comparison results are summarized in Table 1. Compared with RAPTER, our method achieves better scores on two of the three error metrics. In particular, the normal vectors reconstructed by our method are significantly more accurate than those of RAPTER, as also indicated in the black boxes of Fig. 5. Furthermore, RAPTER is very time-consuming, because it must solve a large-scale mixed-integer programming system. In contrast, our method is approximately 2,000 times faster.
5.4 Plane reconstruction on real-world point clouds
To quantitatively evaluate the performance of the proposed method in practice, two real-world datasets are considered. In the following experiments, the parameters of our method are kept fixed, whereas the parameters of sequential RANSAC and Pearl are fine-tuned.
We first test our method on the NYU-Depth dataset. This dataset was originally designed for object detection; we use it to quantitatively evaluate the performance of plane segmentation. A subset of 218 images, mainly containing planar objects, is selected from the dataset. The subset includes kitchen, bedroom, and bathroom scenes. Scenes containing too many non-planar objects, such as offices and stores, are excluded, because our method is not designed for non-planar scenes. Two standard segmentation metrics, the global consistency error (GCE) and the local consistency error (LCE), are adopted for evaluation.
The quantitative results are summarized in Table 2, and two typical scenes are shown in Fig. 6. It can be observed that our method is significantly superior to sequential RANSAC and Pearl in terms of both visual and quantitative results.
We further test our method on indoor laser-scanning point clouds (https://www.ifi.uzh.ch/en/vmml/research/datasets.html), comprising five collections: UZH IfI (12 scans), UZH Irchel (10 scans), ETH (18 scans), Rooms detection (9 scans), and Full 3D (8 scans). Owing to the memory limitations of Pearl, the input data were down-sampled by Poisson-disk sampling at a fixed resolution.
The quantitative comparison results are shown in Fig. 7, and some typical reconstruction results are shown in Fig. 8. Owing to the lack of ground truth, only metrics that do not require ground truth are evaluated here. Nevertheless, we can easily observe that our method is superior to the other two methods. Pearl has a high computational complexity and has difficulty maintaining small-scale planar structures. RANSAC-based methods tend to merge unconnected planes, making it difficult to maintain the edges of local structures. The proposed method not only maintains the local structure but also guarantees the global constraints. Therefore, we achieve the best results.
In this paper, we presented a fast algorithm for RCPR problems. Our algorithm is based on a constraint model requiring less prior knowledge than traditional RCPR algorithms. We showed that, by considering these constraints, plane reconstruction results can be significantly improved, especially for data with high levels of outliers and noise. Furthermore, our algorithm is efficient, running between 3 and 1,000 times faster than existing RCPR algorithms.
-  P. Amayo, P. Piniés, L. M. Paz, and P. Newman. Geometric multi-model fitting with a convex relaxation algorithm. In Computer Vision and Pattern Recognition. IEEE, 2018.
-  M. Arikan, M. Wimmer, and S. Maierhofer. O-snap:optimization-based snapping for modeling architecture. ACM Transactions on Graphics, 32(1):1–15, 2013.
-  J. C. Bazin, Y. Seo, C. Demonceaux, and P. Vasseur. Globally optimal line clustering and vanishing point estimation in Manhattan world. In Computer Vision and Pattern Recognition, pages 638–645. IEEE, 2012.
-  D. Borrmann, J. Elseberg, L. Kai, and A. Nüchter. The 3D Hough transform for plane detection in point clouds: A review and a new accumulator design. 3D Research, 2(2):32:1–32:13, 2011.
-  Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. In International Conference on Computer Vision, pages 377–384. IEEE, 2002.
-  R. Bridson. Fast Poisson disk sampling in arbitrary dimensions. In ACM SIGGRAPH 2007 Sketches, SIGGRAPH ’07, New York, NY, USA, 2007. ACM.
-  A. Buades, B. Coll, and J. M. Morel. A non-local algorithm for image denoising. In Computer Vision and Pattern Recognition, pages 60–65. IEEE, 2005.
-  N. Buchbinder, M. Feldman, J. Naor, and R. Schwartz. A tight linear time (1/2)-approximation for unconstrained submodular maximization. In Foundations of Computer Science, pages 649–658, 2012.
-  A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging & Vision, 40(1):120–145, 2011.
-  A. L. Chauve, P. Labatut, and J. P. Pons. Robust piecewise-planar 3d reconstruction and completion from large-scale unstructured point data. In Computer Vision and Pattern Recognition, pages 1261–1268. IEEE, 2010.
-  J. M. Coughlan and A. L. Yuille. Manhattan world: compass direction from a single image by Bayesian inference. In International Conference on Computer Vision, volume 2, pages 941–947. IEEE, 1999.
-  A. Delong, A. Osokin, H. N. Isack, and Y. Boykov. Fast approximate energy minimization with label costs. International Journal of Computer Vision, 96(1):1–27, 2012.
-  E. Elhamifar and M. C. D. P. Kaluza. Online summarization via submodular and convex optimization. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1818–1826, 2017.
-  H. Fang, F. Lafarge, and M. Desbrun. Planar shape detection at structural scales. In Computer Vision and Pattern Recognition, Salt Lake City, United States, 2018. IEEE.
-  E. Fernandez-Moral, W. Mayol-Cuevas, V. Arevalo, and J. Gonzalez-Jimenez. Fast place recognition with plane-based maps. In International Conference on Robotics and Automation, pages 2719–2724. IEEE, 2013.
-  M. A. Fischler and R. C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):726–740, 1981.
-  K. Joo, T. Oh, J. Kim, and I. S. Kweon. Robust and globally optimal Manhattan frame estimation in near real time. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1–1, 2018.
-  S. Korman and R. Litman. Latent RANSAC. In Computer Vision and Pattern Recognition, pages 6693–6702. IEEE, 2018.
-  F. Lafarge and P. Alliez. Surface reconstruction through point set structuring. Computer Graphics Forum, 32(2):225–234, 2013.
-  F. Lafarge and P. Alliez. Surface reconstruction through point set structuring. http://www-sop.inria.fr/members/Florent.Lafarge/benchmark/surface_reconstruction/reconstruction.html, 2013.
-  Y. Li, X. Wu, Y. Chrysathou, A. Sharf, D. Cohen-Or, and N. J. Mitra. Globfit: Consistently fitting primitives by discovering global relations. ACM Transaction on Graphics, 30(4):52:1–52:12, 2011.
-  Y. Lin, C. Wang, B. Chen, D. Zai, and J. Li. Facet segmentation-based line segment extraction for large-scale point clouds. IEEE Transactions on Geoscience & Remote Sensing, 55(9):4839–4854, 2017.
-  Y. Lin, C. Wang, D. Zhai, W. Li, and J. Li. Toward better boundary preserved supervoxel segmentation for 3D point clouds. ISPRS Journal of Photogrammetry and Remote Sensing, 143:39–47, 2018.
-  L. Magri and A. Fusiello. T-linkage: A continuous relaxation of J-Linkage for multi-model fitting. In Computer Vision and Pattern Recognition, pages 3954–3961. IEEE, 2014.
-  L. Magri and A. Fusiello. Multiple models fitting as a set coverage problem. In Computer Vision and Pattern Recognition, pages 3318–3326. IEEE, 2016.
-  A. Monszpart, N. Mellado, G. J. Brostow, and N. J. Mitra. RAPter implementation. http://geometry.cs.ucl.ac.uk/projects/2015/regular-arrangements-of-planes/, 2015.
-  A. Monszpart, N. Mellado, G. J. Brostow, and N. J. Mitra. RAPter: rebuilding man-made scenes with regular arrangements of planes. Acm Transactions on Graphics, 34(4):103, 2015.
-  R. M. H. Nguyen and M. S. Brown. Fast and effective ℓ0 gradient minimization by region fusion. In IEEE International Conference on Computer Vision, pages 208–216, 2015.
-  S. Oesau, F. Lafarge, and P. Alliez. Planar shape detection and regularization in tandem. Computer Graphics Forum, page 14, 2015.
-  S. Oesau, F. Lafarge, and P. Alliez. Object classification via planar abstraction. ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, III-3:225–231, 2016.
-  J. Papon, A. Abramov, M. Schoeler, and F. Wörgötter. Voxel cloud connectivity segmentation - supervoxels for point clouds. In Computer Vision and Pattern Recognition, pages 2027–2034. IEEE, 2013.
-  M. Polak, H. Zhang, and M. Pi. An evaluation metric for image segmentation of multiple objects. Image and Vision Computing, 27(8):1223–1227, 2009.
-  R. Raguram, O. Chum, M. Pollefeys, J. Matas, and J. M. Frahm. USAC: a universal framework for random sample consensus. IEEE Transactions on Pattern Analysis & Machine Intelligence, 35(8):2022–2038, 2013.
-  R. Schnabel, R. Wahl, and R. Klein. Efficient RANSAC for point-cloud shape detection. Computer Graphics Forum, 26(2):214–226, 2010.
-  N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. Indoor segmentation and support inference from RGBD images. In European Conference on Computer Vision, pages 746–760, 2012.
-  J. Straub, G. Rosman, O. Freifeld, J. J. Leonard, and J. W. Fisher. A mixture of Manhattan frames: Beyond the Manhattan world. In Computer Vision and Pattern Recognition, pages 3770–3777. IEEE, 2014.
-  R. Toldo and A. Fusiello. Robust multiple structures estimation with J-Linkage. In European Conference on Computer Vision, pages 537–547. IEEE, 2008.
-  O. Veksler and A. Delong. Pearl implementation. http://vision.csd.uwo.ca/code/, 2012.
-  H. Wang, G. Xiao, Y. Yan, and D. Suter. Searching for representative modes on hypergraphs for robust geometric model fitting. IEEE Transactions on Pattern Analysis & Machine Intelligence, PP(99):1–1, 2018.
-  L. Xu, E. Oja, and P. Kultanen. A new curve detection method: Randomized Hough transform (RHT). Pattern Recognition Letters, 11(5):331–338, 1990.
-  Q. Y. Zhou and U. Neumann. 2.5D building modeling by discovering global regularities. In Computer Vision and Pattern Recognition, pages 326–333. IEEE, 2012.
-  Z. Zhou, H. Jin, and Y. Ma. Robust plane-based structure from motion. In Computer Vision and Pattern Recognition, pages 1482–1489. IEEE, 2012.
-  M. Zuliani, C. S. Kenney, and B. S. Manjunath. The multiRANSAC algorithm and its application to detect planar homographies. In International Conference on Image Processing, volume 3, pages III–153. IEEE, 2005.