have already provided ingenious designs in feature extraction [31, 55], overlapping image detection [1, 16, 25, 37], feature matching and verification, and bundle adjustment [13, 35, 57]. However, the problem of large-scale, accurate, and consistent camera registration has not been completely solved, let alone in a parallel fashion.
To fit a whole camera registration problem into a single computer, previous works [1, 16, 25, 43, 46] generally discard a drastic share of the connectivities among cameras and tracks by first building a skeletal geometry of iconic images and then registering the remaining cameras with respect to the skeletal reconstruction. Other approaches [23, 34, 40, 49, 51] generate exclusive camera clusters for partial reconstruction and finally merge them together. Such losses of camera-to-camera connectivities remarkably decrease the accuracy and consistency of the final reconstruction. In contrast, this work preserves the camera-to-camera connectivities and their corresponding tracks for a highly accurate and consistent reconstruction. We propose an iterative camera clustering algorithm that splits the original SfM problem into several smaller sub-problems in terms of overlapping clusters of cameras. We then exploit this scalable framework to solve, in a parallel scheme, the whole SfM problem, including track generation, local SfM, 3D point triangulation, and bundle adjustment, far exceeding the memory of a single computer.
To obtain global camera poses from partial sparse reconstructions, the hybrid SfM methods [3, 49] directly use similarity transformations to roughly merge clusters of cameras together, possibly leading to inconsistent camera poses across clusters. Others [14, 23, 29, 40, 51] hierarchically merge camera pairs and triplets and are sensitive to the order of the merging process. Given that our clustering algorithm preserves the camera-to-camera connectivities as far as possible, we instead feed the accurate and robust relative poses from incremental SfM [1, 39, 42, 45, 56] into the global motion averaging framework [2, 5, 7, 8, 9, 10, 17, 18, 19, 22, 32, 38, 44] to obtain the global camera poses.
The contributions of our approach are three-fold. First, we introduce a highly scalable framework to handle SfM problems exceeding the memory of a single computer. Second, we propose a camera clustering algorithm that guarantees that sufficient camera-to-camera connectivities and corresponding tracks are preserved in camera registration. Finally, we present a hybrid SfM method that uses relative motions from incremental SfM to globally average the camera poses and achieves state-of-the-art accuracy on benchmark data-sets. To the best of our knowledge, ours is the first pipeline able to reconstruct highly accurate and consistent camera poses from more than one million high-resolution images in a parallel manner.
2 Related Works
Based on an initial camera pair, the well-known incremental SfM method and its derivations [1, 39, 42, 56] progressively recover the pose of the "next-best-view" by carrying out perspective-three-point (P3P) combined with RANSAC, followed by non-linear bundle adjustment, to effectively remove outlier epipolar geometries and feature correspondences. However, frequent intermediate bundle adjustment leads to considerable time consumption and drifted optimization convergence, especially on large-scale data-sets. In contrast, global SfM methods [2, 5, 7, 8, 9, 10, 17, 18, 19, 22, 32, 38, 44] solve all the camera poses simultaneously from the available relative poses, the computation of which is highly parallel, and can effectively avoid drifting errors. Compared with incremental SfM methods, however, global SfM methods are more sensitive to possibly erroneous epipolar geometry, despite the various delicate designs of epipolar geometry filters [10, 20, 24, 26, 34, 41, 53, 54, 58, 59].
In this paper, we embrace the advantages of both incremental and global SfM methods and exploit a hybrid SfM formulation. Previous hybrid methods [14, 23, 29, 40, 51] are limited to small-scale or sequential data-sets. Havlena et al. form the final 3D model by merging atomic 3D models from camera triplets, but the merging process is not robust since it depends solely on common 3D points. Bhowmick et al. directly estimate similarity transformations to combine camera clusters but may produce inconsistent camera poses across clusters. The work in  incrementally merges multiple cameras but suffers from severe drifting errors. In contrast, we apply the robust relative poses obtained from partial reconstructions by local incremental SfM to the global motion averaging framework and provide highly consistent and accurate camera poses. The work in  optimizes the relative poses by solving a single global optimization problem rather than multiple local problems, and its scalability suffers on very large-scale data-sets.
To tackle the scalability problem of large-scale SfM, previous works generally exploit a skeletal  or simplified graph [1, 16, 25, 43] of iconic images. Although millions of densely sampled Internet images can be roughly registered, numerous geometry connectivities are discarded. Therefore, such approaches can hardly guarantee a highly accurate and consistent reconstruction in our scenario consisting of uniformly captured high-resolution images. The hybrid SfM pipelines [3, 23] employing exclusive clusters of cameras likewise lose a large number of connectivities among cameras and tracks during cluster partition. Instead, our proposed camera clustering algorithm produces overlapping clusters of cameras, guaranteeing that sufficient camera-to-camera connectivities and corresponding tracks are validated and preserved in camera registration, and consequently achieves superior reconstruction accuracy and consistency.
3 Scalable Formulation
We start with a given set of images $\mathcal{I} = \{I_i\}$, their corresponding SIFT features, and matching correspondences $\{M_{ij}\}$, where $M_{ij}$ is the set of inlier feature correspondences verified by epipolar geometry between two images $I_i$ and $I_j$. Each image $I_i$ is associated with a camera $C_i$. The target of this paper is then to compute the global poses of all the cameras, denoted by their projection matrices $\{P_i\}$.
3.2 Camera Clustering
As the problem of SfM, in particular camera registration, scales up, two problems emerge. First, the problem size gradually exceeds the memory of a single computer. Second, the high degree of parallelism of our distributed computing system can hardly be fully utilized. We therefore introduce a camera clustering algorithm to split the original SfM problem into several smaller, manageable sub-problems in terms of clusters of cameras and associated images. Specifically, our goal of camera clustering is to find camera clusters such that all the SfM operations of each cluster fit into a single computer for efficient processing (size constraint), and such that all the clusters have sufficient overlapping cameras with adjacent clusters to guarantee a complete reconstruction when their corresponding partial reconstructions are merged together in motion averaging (completeness constraint).
3.2.1 Clustering Formulation
In order to encode the relationships between all the cameras and associated tracks, we introduce a camera graph $G = (V, E)$, in which each node $v_i \in V$ represents a camera $C_i$, and each edge $e_{ij} \in E$ with weight $w_{ij}$ connects two different cameras $C_i$ and $C_j$. In the subsequent scalable SfM, both local incremental SfM and bundle adjustment encourage cameras sharing great numbers of common features to be grouped together for robust geometry estimation. We therefore define the edge weight as the number of feature correspondences, namely $w_{ij} = |M_{ij}|$, the number of verified correspondences between images $I_i$ and $I_j$. Our target is then to partition all the cameras denoted by the graph $G$ into a set of camera clusters $\{G_k = (V_k, E_k)\}$ while satisfying the following size and completeness constraints.
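To make the construction concrete, a camera graph with correspondence-count edge weights can be sketched as follows; the `matches` dictionary layout and all names are illustrative assumptions, not the paper's implementation:

```python
from collections import defaultdict

def build_camera_graph(matches):
    """Build the camera graph: one node per camera, one undirected edge per
    verified image pair, weighted by the number of inlier correspondences."""
    graph = defaultdict(dict)
    for (i, j), correspondences in matches.items():
        w = len(correspondences)  # edge weight = number of feature matches
        graph[i][j] = w
        graph[j][i] = w
    return graph

# toy input: pair (0, 1) shares far more verified features than pair (1, 2)
matches = {(0, 1): [("f0", "f1")] * 120, (1, 2): [("f2", "f3")] * 40}
g = build_camera_graph(matches)
```

Pairs sharing many verified features receive heavy edges and are therefore unlikely to be separated by the subsequent normalized cut.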
We encourage the camera clusters to be small and of similar size. First, each camera cluster should be small enough to fit into a single computer for efficient local SfM operations. In particular, for local incremental SfM, a comparatively small-scale problem can effectively avoid redundant, time-consuming intermediate bundle adjustment and possible drifting. Second, a balanced problem partition enables full utilization of the distributed computing system. The size constraint is therefore defined as

$$|V_k| \le \Delta_{up}, \quad \forall k, \tag{1}$$

where $\Delta_{up}$ is the upper bound on the number of cameras in a cluster. We can see from Figure 3 that both the average relative rotation and translation errors computed from local incremental SfM in a cluster first decrease remarkably and then stabilize as the number of cameras in a cluster increases. The acceptable number of cameras in a cluster therefore lies in a large range, and we choose $\Delta_{up}$ within it as a trade-off between accuracy and efficiency.
The completeness constraint is introduced to preserve camera-to-camera connectivities, which provide the relative poses used by motion averaging to generate global camera poses. However, preserving all camera-to-camera connectivities would introduce many repeated cameras in different clusters, and the size constraint could hardly be satisfied. We therefore define the completeness ratio $\delta_c(G_k)$ of a camera cluster as the proportion of its cameras that are also covered by other camera clusters. It limits the number of repeated cameras and guarantees that all the clusters have sufficient overlapping cameras with adjacent clusters for a complete reconstruction. Then, we require every cluster's completeness ratio to reach a chosen threshold $\delta$:

$$\delta_c(G_k) \ge \delta, \quad \forall k. \tag{2}$$
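A minimal sketch of measuring this ratio, assuming each cluster is represented as a plain set of camera indices (representation and names are ours):

```python
def completeness_ratio(cluster, other_clusters):
    """Fraction of this cluster's cameras that also appear in other clusters."""
    covered_elsewhere = set()
    for other in other_clusters:
        covered_elsewhere |= cluster & other
    return len(covered_elsewhere) / len(cluster)

clusters = [{0, 1, 2, 3}, {3, 4, 5}, {5, 6, 0}]
# cameras 0 and 3 of the first cluster are duplicated elsewhere -> ratio 0.5
r = completeness_ratio(clusters[0], clusters[1:])
```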
As shown in Figure 4, a large completeness ratio encourages less loss of camera-to-camera connectivities but results in more duplicated cameras in different clusters. Balancing the trade-off between accuracy and efficiency, we choose a moderate completeness ratio. With this setting, only a small fraction of the camera-to-camera connectivities is discarded, and approximately 1.8 times the original number of cameras are reconstructed in local SfM. In contrast, exclusive camera clustering (a completeness ratio of zero) leads to a considerably larger loss of camera-to-camera connectivities.
3.2.2 Clustering Algorithm
We propose a two-step algorithm to solve the camera clustering problem. A sample output of this algorithm is illustrated in Figure 5.
1. Graph division
We guarantee the size constraint by recursively splitting any camera cluster violating it into smaller components. Starting with the camera graph $G$, we iteratively apply the normalized-cut algorithm, which guarantees an unbiased vertex partition, to divide any sub-graph not satisfying the size constraint into two balanced sub-graphs, until no sub-graph violates the size constraint. Intuitively, camera pairs with great numbers of common features have high edge weights and are less likely to be cut.
2. Graph expansion
We enforce the completeness constraint by introducing sufficient overlapping cameras between adjacent camera clusters. More specifically, we first sort the edges discarded in graph division by edge weight in descending order, and iteratively add each edge and its associated vertices, randomly, to one of its connected sub-graphs if the completeness ratio of that sub-graph is smaller than the chosen threshold. This process is repeated until no additional edges can be added to any of the sub-graphs. It is noteworthy that the completeness constraint is not difficult to satisfy after adding a small subset of the discarded edges and associated vertices.
The size constraint may be violated after graph expansion, and we iterate between graph division and graph expansion until both constraints are satisfied.
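The division-expansion loop can be sketched as follows. This is a deliberately simplified toy: a balanced bisection of the vertex set stands in for the normalized cut on the weighted camera graph (which the paper solves with a dedicated tool), and the paper's random cluster choice is replaced by a deterministic scan; all names are ours:

```python
def split_in_half(vertices):
    # Stand-in for the normalized cut: a balanced bisection of the vertex set.
    # The real pipeline cuts the weighted camera graph (e.g., with Graclus).
    ordered = sorted(vertices)
    mid = len(ordered) // 2
    return set(ordered[:mid]), set(ordered[mid:])

def divide(vertices, size_limit):
    """Step 1 (graph division): recursively bisect any cluster that violates
    the size constraint until every cluster holds at most size_limit cameras."""
    if len(vertices) <= size_limit:
        return [set(vertices)]
    left, right = split_in_half(vertices)
    return divide(left, size_limit) + divide(right, size_limit)

def expand(clusters, cut_edges, delta):
    """Step 2 (graph expansion): revisit the edges discarded during division,
    heaviest first, and copy the missing endpoint into a cluster containing
    the other endpoint while that cluster's completeness ratio is below delta."""
    for (u, v), _w in sorted(cut_edges.items(), key=lambda e: -e[1]):
        for cluster in clusters:
            if u in cluster and v not in cluster:
                duplicated = sum(
                    1 for c in cluster
                    if any(c in other for other in clusters if other is not cluster))
                if duplicated / len(cluster) < delta:
                    cluster.add(v)
    return clusters

clusters = divide(set(range(10)), size_limit=4)
clusters = expand(clusters, {(4, 5): 50, (1, 2): 30}, delta=0.5)
```

Iterating `divide` and `expand` until both constraints hold mirrors the loop described above.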
3.3 Camera Cluster Categorization
The camera clusters produced by the clustering algorithm fall into two categories, namely independent and interdependent camera clusters. We define the final camera clusters from our clustering algorithm as interdependent camera clusters, since they share overlapping cameras with adjacent clusters. These interdependent clusters are used in the subsequent parallel local incremental SfM. Accordingly, we define the fully exclusive camera clusters obtained before graph expansion as independent camera clusters, which are used in the following parallel 3D point triangulation and parallel bundle adjustment. We also leverage the independent camera clusters to build a hierarchical camera cluster tree, in which each leaf node corresponds to an independent camera cluster and each non-leaf node is associated with an intermediate camera cluster produced during the recursive binary graph division. The hierarchical camera cluster tree is an important structure in the subsequent parallel track generation. Next, we build a scalable SfM pipeline on top of the camera clusters from our clustering algorithm.
4 Scalable Implementation
4.1 Track Generation
The first step of scalable SfM is to use the pair-wise feature correspondences to generate globally consistent tracks across all the images, a problem solved by a standard Union-Find algorithm. However, as the number of input images scales up, it gradually becomes impossible to concurrently load all the feature and associated match files into the memory of a single computer for track generation. We therefore rely on the hierarchical camera cluster tree to perform track generation and avoid caching all the features and correspondences in memory at once. In detail, for the track generation sub-problem associated with two sibling leaf nodes, we load all their features and correspondences into memory, generate the tracks corresponding to their parent node, release the memory of features and correspondences, and save the tracks associated with the parent node to storage. As for two sibling non-leaf nodes, we only load the correspondences and tracks associated with both nodes, merge them, and save the tracks corresponding to their parent node to storage. These processes are performed iteratively from the bottom up until the globally consistent tracks with respect to the root node are obtained. All the track generation processes associated with each level of the tree are handled in parallel under a standard MapReduce framework.
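The leaf-level track generation can be sketched with a standard Union-Find; in the full pipeline this runs per node of the camera cluster tree and partial tracks are merged bottom-up, which is omitted here (all names are ours):

```python
class UnionFind:
    """Standard Union-Find over (image, feature) observations."""
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:          # path halving
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def tracks_from_matches(matches):
    """Union all matched observations; every resulting connected set of
    (image, feature) observations is one track."""
    uf = UnionFind()
    for (img_a, img_b), pairs in matches.items():
        for fa, fb in pairs:
            uf.union((img_a, fa), (img_b, fb))
    tracks = {}
    for obs in list(uf.parent):
        tracks.setdefault(uf.find(obs), set()).add(obs)
    return list(tracks.values())

# feature 0 in A matches feature 3 in B, which matches feature 7 in C:
# the three observations chain into a single track
tracks = tracks_from_matches({("A", "B"): [(0, 3)], ("B", "C"): [(3, 7)]})
```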
4.2 Local Incremental SfM
For the cameras and corresponding tracks of every interdependent camera cluster, we perform local incremental SfM in parallel. Local incremental SfM is vital to the subsequent motion averaging in two aspects. First, RANSAC-based filters and repeated partial bundle adjustment can remove erroneous epipolar geometries and feature correspondences. Second, incremental SfM performs robust multi-view pose estimation [28, 36] and produces more accurate and robust relative rotations and translations than the generally adopted essential matrix based [2, 5, 18, 38] and trifocal tensor based methods [26, 34], even for camera pairs with weak association, large differences in viewing angle, and great scale variation. Figure 6 and the statistics of the benchmark data-sets in Table 2 confirm this statement.
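For concreteness, the relative motion handed to motion averaging can be derived from two absolute poses of a local reconstruction as below; the world-to-camera convention is our assumption, since the text does not fix one:

```python
import numpy as np

def relative_motion(R_i, c_i, R_j, c_j):
    """Relative pose of camera j w.r.t. camera i from two absolute poses
    recovered by a local incremental SfM.  Assumed convention (ours, not
    the paper's): world-to-camera rotation R and camera center c,
    i.e. x_cam = R @ (X - c)."""
    R_ij = R_j @ R_i.T              # relative rotation
    t_ij = R_j @ (c_i - c_j)        # relative translation of [R_ij | t_ij]
    return R_ij, t_ij
```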
4.3 Motion Averaging
Now, all the relative motions of camera pairs with feature correspondences from local incremental SfM are used to compute the global camera poses. The work in  is first adopted for efficient and robust global rotation averaging.
4.3.1 Translation Averaging
Translation averaging is challenging for two reasons. First, it is difficult to discard erroneous epipolar geometries resulting from noisy feature correspondences. Second, an essential matrix can only encode the direction of a relative translation. Thanks to local incremental SfM, the majority of erroneous epipolar geometries are filtered out, and the only remaining problem is to resolve the scale ambiguity.
The work in  first globally averages the scales of all the relative translations and then performs a convex optimization to solve scale-aware translation averaging. Özyesil and Singer  obtain a convex "least unsquared deviations" formulation by introducing a complicated quadratic constraint. Given that all the relative translations from one camera cluster are up to the same scale factor, we instead formulate our translation averaging as a convex problem by solving for the camera positions and cluster scales simultaneously. Obviously, the scale factors computed in terms of clusters are more robust than the pair-wise scales [10, 38] computed in terms of relative poses, especially for camera pairs with weak association.
With the global rotations computed from  fixed, a linear equation in the camera positions can be obtained as:

$$\mathbf{c}_i - \mathbf{c}_j - s_k R_j^{\top}\, \mathbf{t}_{ij}^{k} = \mathbf{0}, \tag{3}$$

where $\mathbf{t}_{ij}^{k}$ is the relative translation between two cameras $C_i$ and $C_j$ estimated in the $k$-th cluster and associated with the scale factor $s_k$, and $R_j$ is the global rotation of camera $C_j$. Stacking all the cluster scales and camera positions as $\mathbf{s} = [s_1, \dots, s_K]^{\top}$ and $\mathbf{c} = [\mathbf{c}_1^{\top}, \dots, \mathbf{c}_N^{\top}]^{\top}$ respectively, Equation 3 can be rewritten as:

$$A_{ij}\,\mathbf{s} + B_{ij}\,\mathbf{c} = \mathbf{0}. \tag{4}$$

Here, $A_{ij}$ is a $3 \times K$ matrix whose $k$-th column is $-R_j^{\top}\mathbf{t}_{ij}^{k}$ and whose remaining entries are zero, and $B_{ij}$ is a $3 \times 3N$ matrix whose $3 \times 3$ blocks at cameras $i$ and $j$ are $I_3$ and $-I_3$ respectively, and zero otherwise. Then, we can collect all such linear equations from the available camera-to-camera connectivities into the following single linear equation system:

$$A\,\mathbf{s} + B\,\mathbf{c} = \mathbf{0}, \tag{5}$$

where $A$ and $B$ are sparse matrices made by stacking all the associated matrices $A_{ij}$ and $B_{ij}$ respectively.
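A dense toy assembly of this stacked system can be sketched as follows (a real implementation would use sparse matrices, e.g. scipy.sparse; the observation tuple layout is our own assumption):

```python
import numpy as np

def stack_equations(observations, num_cameras, num_clusters):
    """Assemble the linear system A s + B c = 0.  Each observation is a tuple
    (i, j, k, R_j, t_ij): a relative translation t_ij between cameras i and j
    estimated in cluster k, together with the global rotation R_j of camera j.
    Column k of each 3-row block of A holds -R_j^T t_ij; the 3x3 blocks of B
    at cameras i and j hold I and -I."""
    A = np.zeros((3 * len(observations), num_clusters))
    B = np.zeros((3 * len(observations), 3 * num_cameras))
    for r, (i, j, k, R_j, t_ij) in enumerate(observations):
        A[3 * r:3 * r + 3, k] = -(R_j.T @ t_ij)
        B[3 * r:3 * r + 3, 3 * i:3 * i + 3] = np.eye(3)
        B[3 * r:3 * r + 3, 3 * j:3 * j + 3] = -np.eye(3)
    return A, B

# one observation: cameras 0 and 1, cluster 0, identity rotation
obs = [(0, 1, 0, np.eye(3), np.array([-1.0, 0.0, 0.0]))]
A, B = stack_equations(obs, num_cameras=2, num_clusters=1)
```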
Table 1. Columns: Data-set, # images, average epipolar error [pixels], number of connected camera pairs, and number of 3D points.
After removing the gauge freedom by setting $\mathbf{c}_1 = \mathbf{0}$ and $s_1 = 1$, we obtain the positions of all the cameras by solving the following robust convex optimization problem, which is more robust to outliers than $L_2$ methods and converges rapidly to a global optimum:

$$\min_{\mathbf{s},\, \mathbf{c}} \; \| A\,\mathbf{s} + B\,\mathbf{c} \|_{1}. \tag{6}$$
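A minimal sketch of the gauge fixing and a robust solve; iteratively reweighted least squares stands in for the convex L1 solver, and all names are ours:

```python
import numpy as np

def solve_positions(A, B, iters=30, eps=1e-6):
    """Fix the gauge (first cluster scale = 1, first camera at the origin)
    and minimize ||A s + B c||_1 by iteratively reweighted least squares,
    a simple stand-in for a convex L1 solver."""
    M = np.hstack([A, B])
    K = A.shape[1]
    rhs = -M[:, 0]                                  # move s_0 = 1 to the rhs
    M = np.delete(M, [0, K, K + 1, K + 2], axis=1)  # drop s_0 and c_0 columns
    x = np.linalg.lstsq(M, rhs, rcond=None)[0]
    for _ in range(iters):
        w = 1.0 / np.sqrt(np.abs(M @ x - rhs) + eps)   # L1 reweighting
        x = np.linalg.lstsq(w[:, None] * M, w * rhs, rcond=None)[0]
    return x  # remaining scales s_1.., then camera positions c_1..
```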
Since the baseline length is encoded by the changes of cluster scales, our translation averaging algorithm can effectively handle the scale ambiguity, especially for collinear camera motion, and is much better-posed than the essential matrix based approaches [5, 18, 38, 54], which only consider the directions of relative translations and are limited to the parallel rigid graph.
4.4 Bundle Adjustment
For each independent camera cluster, we triangulate their corresponding 3D points with sufficient visible cameras from their feature correspondences validated by local incremental SfM, based on the averaged global camera geometry. Then, we follow the state-of-the-art algorithm proposed by Eriksson et al.  for distributed bundle adjustment. Since this work declares no restriction on the partitions of cameras, we regard the independent camera clusters with their associated cameras, tracks, and projections as the sub-problems of the bundle adjustment objective function.
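The per-point triangulation step can be sketched with the standard linear (DLT) method applied to the averaged global projection matrices; this is a generic sketch, not the paper's exact solver:

```python
import numpy as np

def triangulate(projections, points_2d):
    """Linear (DLT) triangulation of one 3D point from two or more views:
    each observation (x, y) under projection P contributes the rows
    x * P[2] - P[0] and y * P[2] - P[1]; the point is the null-space
    direction of the stacked system, taken from the SVD."""
    A = []
    for P, (x, y) in zip(projections, points_2d):
        A.append(x * P[2] - P[0])
        A.append(y * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(A))
    X = vt[-1]                   # homogeneous solution
    return X[:3] / X[3]
```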
Given the same global camera rotations from  and relative translations from local SfM, Figure 7 verifies that our translation averaging algorithm recovers more accurate camera positions than the state-of-the-art translation averaging methods [10, 34, 38, 48, 54]. Although our clustering algorithm cannot guarantee an optimal solution that loses no relative motions compared with the original camera graph, the statistical comparison in Table 2 still demonstrates the superior accuracy of the camera poses from our pipeline over the state-of-the-art SfM approaches [10, 34, 48, 56] on the benchmark data-set.
Figure 8 shows the comparison with the hybrid SfM methods [3, 49] using exclusive camera clusters on data-sets consisting of sequential images with closed loops. We regard our independent camera clusters as the clusters adopted in [3, 49]. We can see that our global method with interdependent camera clusters successfully closes the loops while the methods [3, 49] with exclusive camera clusters fail.
The statistical comparison with the hybrid SfM methods [3, 49] is shown in Table 1. To measure the consistency of camera poses, we use the epipolar error, namely the median distance between features and the corresponding epipolar lines computed from the feature correspondences of all camera pairs, together with the number of camera pairs connected by 3D points and the number of final 3D points. Since our clustering algorithm introduces sufficient camera connectivities for a fully constrained global motion averaging rather than directly merging exclusive camera clusters [3, 49], the epipolar error of our approach is only a fraction of that of [3, 49], while we connect several times more camera pairs and generate several times more 3D points than [3, 49]. Table 1 also provides the results of our approach with different completeness ratios. We can see that a larger completeness ratio, namely more camera-to-camera connectivities, guarantees a more accurate and complete sparse reconstruction.
Table 2. Columns: Data-set, Wu , Cui , Moulon , Sweeney , and ours.
Table 3. Accuracy [meters] and running time [seconds] on the Internet data-sets, comparing 1DSfM, Colmap, Cui, Sweeney, Theia, and ours. Each accuracy cell lists the number of registered cameras / median position error.

| Data-set | # images | 1DSfM | Colmap | Cui | Sweeney | Theia | Ours | Time [seconds] |
|---|---|---|---|---|---|---|---|---|
| Piazza del Popolo | 350 | 308 / 2.2 | 332 / 1.2 | 336 / 1.4 | 302 / 1.8 | 326 / 1.0 | 334 / 0.5 | 191, 249, 246, 336, 126, 191, 72, 101, 46, 61, 72, 16, 93 |
| Tower of London | 572 | 414 / 1.0 | 450 / 0.7 | 458 / 1.2 | 409 / 0.9 | 456 / 1.4 | 458 / 1.0 | 606, 648, 542, 665, 488, 558, 92, 246, 130, 154, 320, 75, 410 |
We implement our approach in C++ and perform all the experiments on a distributed computing system consisting of 10 computers, each of which has a 6-core (12-thread) Intel 3.40 GHz processor and 128 GB memory. All the computers are deployed on a scalable network file system similar to the Hadoop File System. We implement a multi-core bundle adjustment solver similar to PBA  to solve all the non-linear optimization problems, and a solver like  to solve Equation 6. We also utilize Graclus  to handle the normalized-cut problem.
The statistics of the comparisons on the benchmark data-sets  with absolute measurements of camera poses between the state-of-the-art methods [10, 34, 48, 56] and our proposed method are shown in Table 2. Since the largest benchmark data-set, CastleP30, contains only 30 cameras, we use a smaller cluster size upper bound than the one adopted in the rest of our pipeline so that valid camera clusters can still be generated. Specifically, we can see that the average errors of the relative rotations, relative translations, and corresponding camera positions from our algorithm are all obviously smaller than those of [10, 34, 48, 56].
Table 3 shows the statistical comparisons with the state-of-the-art SfM pipelines [10, 42, 50, 48, 54] on the Internet data-sets. We can see that our approach achieves the best accuracy, measured by the median camera position error (in meters) after bundle adjustment, in 8 out of 13 data-sets. Moreover, we register the most cameras in 4 out of 13 data-sets. Among these methods [10, 42, 50, 48, 54], Theia SfM  is the most efficient. We can therefore conclude that our SfM pipeline achieves slightly better accuracy with efficiency comparable to the state-of-the-art methods [10, 42, 50, 48, 54] on data-sets captured in the wild.
Table 4. Statistics of the input city-scale data-sets: data-set, # images, resolution, average # features per image, followed by the clustering time [minutes], pipeline time [hours], and peak memory [GB] figures discussed in the text.

| Data-set | # images | Resolution | Avg. # features | Timing and peak-memory values |
|---|---|---|---|---|
| City A | 1210106 | 50 Mpixel | 164.8k | 23867, 25.24, 18.84, 46.88, 59.02, 34.62, 75.26, 56.04, 275.74, 2933.76, 39.81, 10159.62, 34.62, 39.81, 0.53 |
| City B | 138200 | 24 Mpixel | 73.0k | 2721, 6.62, 4.61, 11.71, 5.73, 3.62, 7.34, 6.24, 23.43, 207.76, 4.59, 666.92, 16.47, 4.59, 0.63 |
| City C | 91732 | 50 Mpixel | 170.1k | 1723, 5.12, 3.17, 8.62, 2.64, 2.30, 4.27, 7.76, 18.10, 162.50, 3.04, 492.39, 12.33, 3.04, 0.62 |
| City D | 36480 | 36 Mpixel | 96.4k | 635, 2.01, 1.25, 3.57, 1.11, 1.21, 1.71, 3.31, 7.64, 55.70, 1.21, 176.57, 4.87, 1.21, 0.67 |
Table 5. Columns: Data-set, resolution [Mpixels], # registered cameras, # tracks, average track length, and average reprojection error [pixels].
The statistics of the input city-scale data-sets are shown in Table 4. The image resolution ranges from 24 to 50 megapixels, and the average number of detected features per image ranges from 73.0K to 170.1K. We can see that the estimated peak memory for the largest City-A data-set would be 2.9 TB, 39.81 GB, and 10.2 TB in track generation, motion averaging, and bundle adjustment respectively if handled by the standard SfM pipeline  on a single computer, which obviously exceeds the memory of our servers with 128 GB memory. The same goes for the other standard SfM pipelines [42, 48, 56]. In contrast, our pipeline recovers 1.21 million accurate and consistent camera poses and 1.68 billion sparse 3D points for the largest City-A data-set, with the corresponding peak memory dropping dramatically to 34.62 GB and 0.53 GB in track generation and bundle adjustment respectively. In Figure 10, we further provide the visual results of the city-scale data-sets, containing both mesh and textured models with delicate details, to qualitatively demonstrate the high accuracy of the finally recovered camera poses. As shown in Table 5, we fit the whole City-D data-set into the standard SfM pipelines [34, 42, 48, 56] by resizing the images. We can see that down-sampling the images leads to an obviously smaller number of registered cameras.
We test the Internet data-sets on a single computer to make a fair comparison of running time, and Table 3 shows that our efficiency is comparable to the works in [26, 38, 48, 54]. As for the city-scale data-sets, we note in Table 4 that the running time of track generation and local incremental SfM grows linearly as the number of images increases, while the running time of bundle adjustment, whose complexity is super-linear in the number of cameras and 3D points even in a distributed manner, and of motion averaging, which can only be handled on a single computer, gradually dominates as the number of images drastically increases. Even for the City-B data-set, our parallel computing system composed of 10 computers can successfully reconstruct 138 thousand cameras and 100 million sparse 3D points within one day. Notably, because of the concise design of our clustering algorithm, its running time on the city-scale data-sets ranges from 3.57 to 11.71 minutes, which is extremely efficient compared with the time cost of the whole SfM pipeline.
Thanks to the fully scalable formulation of our SfM pipeline in terms of camera clusters, the peak memory of track generation in our pipeline is only 2.1%-8.7% of that of the standard pipelines [10, 34, 45, 48, 56], and the peak memory of bundle adjustment in our approach is merely 0.1-3.8‰ of that of the standard pipelines. However, since our motion averaging formulation (Section 4.3) still solves all the camera poses from the available relative motions at once, it is limited by the memory of a single computer. We are therefore interested in exploiting our scalable formulation to solve large-scale motion averaging problems in a scalable and parallel manner, and leave this for future study.
In this paper, we propose a parallel pipeline able to handle accurate and consistent SfM problems far exceeding the memory of a single computer. A graph-based camera clustering algorithm is first introduced to divide the original problem into sub-problems while preserving sufficient connectivities among cameras for a highly accurate and consistent reconstruction. A hybrid SfM method embracing the advantages of both incremental and global SfM is subsequently proposed to merge the partial reconstructions into a globally consistent reconstruction. Our pipeline handles city-scale SfM problems, including one data-set with 1.21 million high-resolution images that exhausts the memory of existing approaches, in a highly scalable and parallel manner with superior accuracy and consistency over the state-of-the-art methods.
-  S. Agarwal, Y. Furukawa, N. Snavely, I. Simon, B. Curless, S. M. Seitz, and R. Szeliski. Building rome in a day. Commun. ACM, 54(10):105–112, 2011.
-  M. Arie-Nachimson, S. Z. Kovalsky, I. Kemelmacher-Shlizerman, A. Singer, and R. Basri. Global motion estimation from point matches. In 3DIMPVT, 2012.
-  B. Bhowmick, S. Patra, and A. Chatterjee. Divide and conquer: Efficient large-scale structure from motion using graph partitioning. In ACCV, 2014.
-  F. Bourse, M. Lelarge, and M. Vojnovic. Balanced graph edge partition. In SIGKDD, 2014.
-  M. Brand, M. Antone, and S. Teller. Spectral solution of large-scale extrinsic camera calibration as a graph embedding problem. In ECCV, 2004.
-  E. Candès and J. Romberg. ℓ1-magic: Recovery of sparse signals via convex programming, 2005.
-  L. Carlone, R. Tron, K. Daniilidis, and F. Dellaert. Initialization techniques for 3d slam: a survey on rotation estimation and its use in pose graph optimization. In ICRA, 2015.
-  A. Chatterjee and V. M. Govindu. Efficient and robust large-scale rotation averaging. In ICCV, 2013.
-  Z. Cui, N. Jiang, C. Tang, and P. Tan. Linear global translation estimation with feature tracks. In BMVC, 2015.
-  Z. Cui and P. Tan. Global structure-from-motion by similarity averaging. In ICCV, 2015.
-  J. Dean and S. Ghemawat. Mapreduce: Simplified data processing on large clusters. Commun. ACM, 51(1):107–113, Jan. 2008.
-  I. S. Dhillon, Y. Guan, and B. Kulis. Weighted graph cuts without eigenvectors: a multilevel approach. PAMI, 29(11):1944–1957, 2007.
-  A. Eriksson, J. Bastian, T.-J. Chin, and M. Isaksson. A consensus-based framework for distributed bundle adjustment. In CVPR, 2016.
-  M. Farenzena, A. Fusiello, and R. Gherardi. Structure-and-motion pipeline on a hierarchical cluster tree. In ICCV Workshops, 2009.
-  M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM, 24(6):381–395, June 1981.
-  J.-M. Frahm, P. Fite-Georgel, D. Gallup, T. Johnson, R. Raguram, C. Wu, Y.-H. Jen, E. Dunn, B. Clipp, S. Lazebnik, and M. Pollefeys. Building rome on a cloudless day. In ECCV, 2010.
-  T. Goldstein, P. Hand, C. Lee, V. Voroninski, and S. Soatto. Shapefit and shapekick for robust, scalable structure from motion. In ECCV, 2016.
-  V. M. Govindu. Combining two-view constraints for motion estimation. In CVPR, 2001.
-  V. M. Govindu. Lie-algebraic averaging for globally consistent motion estimation. In CVPR, 2004.
-  V. M. Govindu. Robustness in motion averaging. In ACCV, 2006.
-  R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003.
-  R. I. Hartley, J. Trumpf, Y. Dai, and H. Li. Rotation averaging. IJCV, 103(3):267–305, 2013.
-  M. Havlena, A. Torii, and T. Pajdla. Efficient structure from motion by graph optimization. In ECCV, 2010.
-  J. Heinly, E. Dunn, and J.-M. Frahm. Correcting for Duplicate Scene Structure in Sparse 3D Reconstruction. In ECCV, 2014.
-  J. Heinly, J. L. Schönberger, E. Dunn, and J. M. Frahm. Reconstructing the world* in six days. In CVPR, 2015.
-  N. Jiang, Z. Cui, and P. Tan. A global linear method for camera pose registration. In ICCV, 2013.
-  B. Klingner, D. Martin, and J. Roseborough. Street view motion-from-structure-from-motion. In ICCV, 2013.
-  L. Kneip, D. Scaramuzza, and R. Siegwart. A novel parametrization of the perspective-three-point problem for a direct computation of absolute camera position and orientation. In CVPR, 2011.
-  M. Lhuillier and L. Quan. A quasi-dense approach to surface reconstruction from uncalibrated images. PAMI, 27(3):418–433, 2005.
-  X. Li, C. Wu, C. Zach, S. Lazebnik, and J.-M. Frahm. Modeling and recognition of landmark image collections using iconic scene graphs. In ECCV, 2008.
-  D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91–110, 2004.
-  D. Martinec and T. Pajdla. Robust rotation and translation estimation in multiview reconstruction. In ICPR, 2007.
-  P. Moulon and P. Monasse. Unordered feature tracking made fast and easy. In CVMP, 2012.
-  P. Moulon, P. Monasse, and R. Marlet. Global fusion of relative motions for robust, accurate and scalable structure from motion. In ICCV, 2013.
-  K. Ni, D. Steedly, and F. Dellaert. Out-of-core bundle adjustment for large-scale 3d reconstruction. In ICCV, 2007.
-  D. Nistér. An efficient solution to the five-point relative pose problem. PAMI, pages 756–770, 2004.
-  D. Nistér and H. Stewenius. Scalable recognition with a vocabulary tree. In CVPR, 2006.
-  O. Özyesil and A. Singer. Robust camera location estimation by convex programming. In CVPR, 2015.
-  M. Pollefeys, L. Van Gool, M. Vergauwen, F. Verbiest, K. Cornelis, J. Tops, and R. Koch. Visual modeling with a hand-held camera. IJCV, 59(3):207–232, 2004.
-  B. Resch, H. P. Lensch, O. Wang, M. Pollefeys, and A. S. Hornung. Scalable structure from motion for densely sampled videos. In CVPR, 2015.
-  R. Roberts, S. N. Sinha, R. Szeliski, and D. Steedly. Structure from motion for scenes with large duplicate structures. In CVPR, 2011.
-  J. L. Schönberger and J.-M. Frahm. Structure-from-motion revisited. In CVPR, 2016.
-  J. L. Schönberger, F. Radenović, O. Chum, and J. M. Frahm. From single image query to detailed 3d reconstruction. In CVPR, 2015.
-  S. N. Sinha, D. Steedly, and R. Szeliski. A multi-stage linear approach to structure from motion. In ECCV-workshop RMLE, 2010.
-  N. Snavely, S. M. Seitz, and R. Szeliski. Photo tourism: exploring image collections in 3d. SIGGRAPH, 2006.
-  N. Snavely, S. M. Seitz, and R. Szeliski. Skeletal graphs for efficient structure from motion. In CVPR, 2008.
-  C. Strecha, W. von Hansen, L. V. Gool, P. Fua, and U. Thoennessen. On benchmarking camera calibration and multi-view stereo for high resolution imagery. In CVPR, 2008.
-  C. Sweeney. Theia multiview geometry library: Tutorial & reference. http://theia-sfm.org.
-  C. Sweeney, V. Fragoso, T. Höllerer, and M. Turk. Large scale sfm with the distributed camera model. In 3DV, 2016.
-  C. Sweeney, T. Sattler, T. Hollerer, M. Turk, and M. Pollefeys. Optimizing the viewing graph for structure-from-motion. In ICCV, 2015.
-  R. Toldo, R. Gherardi, M. Farenzena, and A. Fusiello. Hierarchical structure-and-motion recovery from uncalibrated images. CoRR, abs/1506.00395, 2015.
-  B. Triggs, P. F. McLauchlan, R. I. Hartley, and A. W. Fitzgibbon. Bundle adjustment - a modern synthesis. In LNCS, 2000.
-  K. Wilson and N. Snavely. Network principles for sfm: Disambiguating repeated structures with local context. In ICCV, 2013.
-  K. Wilson and N. Snavely. Robust global translations with 1dsfm. In ECCV, 2014.
-  C. Wu. SiftGPU: A GPU implementation of scale invariant feature transform (SIFT). http://cs.unc.edu/~ccwu/siftgpu, 2007.
-  C. Wu. Towards linear-time incremental structure from motion. In 3DV, 2013.
-  C. Wu, S. Agarwal, B. Curless, and S. M. Seitz. Multicore bundle adjustment. In CVPR, 2011.
-  C. Zach, A. Irschara, and H. Bischof. What can missing correspondences tell us about 3d structure and motion? In CVPR, 2008.
-  C. Zach, M. Klopschitz, and M. Pollefeys. Disambiguating visual relations using loop constraints. In CVPR, 2010.