Complete Scene Reconstruction by Merging Images and Laser Scans

04/21/2019
by   Xiang Gao, et al.

Image based modeling and laser scanning are two commonly used approaches in large-scale architectural scene reconstruction nowadays. In order to generate a complete scene reconstruction, an effective way is to completely cover the scene using ground and aerial images, supplemented by laser scanning on certain regions with low texture and complicated structure. Thus, the key issue is to accurately calibrate cameras and register laser scans in a unified framework. To this end, we propose a three-step pipeline for complete scene reconstruction by merging images and laser scans. First, images are captured around the architecture in a multi-view and multi-scale way and are fed into a structure-from-motion (SfM) pipeline to generate SfM points. Then, based on the SfM result, the laser scanning locations are automatically planned by considering the textural richness, the structural complexity of the scene, and the spatial layout of the laser scans. Finally, the images and laser scans are accurately merged in a coarse-to-fine manner. Experimental evaluations on two ancient Chinese architecture datasets demonstrate the effectiveness of our proposed complete scene reconstruction pipeline.

I Introduction

There are two key issues in the 3D reconstruction of large-scale architectural scenes: accuracy and completeness. Though many scene reconstruction pipelines focus on accuracy, they pay less attention to completeness. Common pipelines can achieve good reconstruction completeness in scenes with relatively simple structures. However, when the architectural scene is complicated, e.g. ancient Chinese architecture, completeness can hardly be guaranteed. In order to reconstruct an accurate and complete 3D model (point cloud or surface mesh) of a large-scale and complicated architectural scene, both the global structures and the local details of the scene need to be surveyed. Currently, there are two frequently used surveying approaches for scene reconstruction, image based [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] and laser scanning based reconstruction [12, 13, 14, 15]. The two approaches are complementary in flexibility and accuracy.

The image based reconstruction method is convenient and flexible. Up-to-date image collection equipment is portable and offers high resolution, which makes it appropriate for completely covering large-scale scenes. However, the results of existing image based methods heavily depend on several external factors, e.g. illumination variation, textural richness, and structural complexity. As a result, there are inevitable errors in image based reconstruction results, especially in regions with low texture, poor lighting, or complicated structure.

The laser scanning based reconstruction method possesses high accuracy and is robust to adverse conditions. However, in order to completely cover a large-scale scene, multi-viewpoint scanning and registration are required. Usually, the laser scans are coarsely registered with the help of man-made targets, which are manually placed in the scene, and are further finely registered by the iterative closest point (ICP) [16] method. Thus, to achieve a complete scene reconstruction, plenty of laser scans are required, which is time-consuming and inefficient with the currently cumbersome scanning equipment.

In order to generate a complete scene reconstruction by merging images and laser scans, a straightforward way is to treat images and laser scans equally. Specifically, architectural scene models are first obtained from the two kinds of data respectively and are then merged together by ground control points (GCPs) [17] or the ICP method [18, 19]. However, this is non-trivial because the point clouds generated from images and laser scans have significant differences in density, accuracy, completeness, etc., which results in inevitable registration errors. In addition, the laser scanning locations need to be carefully selected to guarantee the scanning overlap required for their self-registration.

In this paper, a more effective data collection and scene reconstruction pipeline is proposed, which takes the data collection efficiency as well as the reconstruction accuracy and completeness into consideration. Our pipeline uses images as the primary data source to completely cover the scene, and uses laser scans as a supplement to deal with regions with low texture, poor lighting, or complicated structure. It mainly contains three steps: 1) image capturing, 2) laser scanning, and 3) image and laser scan merging. The images are captured to completely cover the scene and to generate structure-from-motion (SfM) points. Based on the SfM result, the laser scanning locations are automatically planned. Finally, the images and laser scans are merged in a coarse-to-fine manner to generate an accurate and complete scene reconstruction. The advantages of this framework are: 1) Neither overlaps between laser scans nor man-made targets for registration are mandatory, as the laser scans serve only as supplements to the images; 2) By integrating laser scans into the image based reconstruction framework, the reconstruction accuracy and completeness are increased in turn. To our knowledge, we are the first to merge ground and aerial images and terrestrial laser scans for reconstructing accurate and complete outdoor and indoor scenes.

The main contributions of this paper are threefold: 1) A novel reconstruction pipeline using images as the primary data source and laser scans as a supplement, which takes both the data collection efficiency and the reconstruction accuracy and completeness into account; 2) A fully automatic laser scanning location planning algorithm considering the textural richness, the structural complexity of the scene, and the spatial layout of the laser scans; and 3) A coarse-to-fine image and laser scan merging method, by which an accurate and complete scene reconstruction is generated.

II Related Work

There are three main categories of works related to ours: 1) image based reconstruction, 2) laser scanning based reconstruction, and 3) scene reconstruction using both images and laser scans.

II-A Image Based Reconstruction

Reconstructing scenes from images is the most intuitive way, as it is closest to how humans perceive the real world. Image based reconstruction has many advantages: images are easy to obtain, store, and distribute, and the approach is low-cost and flexible.

The pipeline of image based reconstruction goes as follows. First, feature detection is performed for each image and feature matching is performed for each image pair [20]. When performing feature matching, a vocabulary tree [21, 22] is usually used to index target images with high similarity, and the fast library for approximate nearest neighbors (FLANN) [23] is employed to search for approximate nearest feature neighbors. In this way, the efficiency of the image matching procedure is largely improved. Then, an SfM procedure [2, 3, 4, 5] is performed on the pair-wise point matches to estimate the camera poses and triangulate the sparse scene points. Next, multi-view stereo (MVS) [7, 8, 9] is performed based on the registered cameras to obtain a dense point cloud. Finally, image based surface reconstruction [10, 11] is performed on the point cloud to obtain a detailed surface mesh. Though it has many advantages, the image based method is vulnerable to illumination variation, low textures, and complicated structures. What is more, inevitable mismatching and error accumulation usually lead to scene drift.

In addition, several methods have been proposed for planning the camera network in either an off-line [24, 25] or an on-line [26] scheme, mainly for aerial image capturing. These methods focus on how to completely cover the scene with minimum image overlap and flight time. However, in this paper, we do not seek the optimal image capturing locations but only try to properly cover the scene with ground and aerial images.

II-B Laser Scanning Based Reconstruction

Compared with image based reconstruction methods, laser scanning based ones acquire the scene structure through active vision. As a result, they possess several advantages, e.g. higher accuracy and less dependency on external circumstances.

However, due to limitations in scanning viewpoints and the inconvenience of data collection, the completeness of scene coverage for laser scanning based methods is hard to guarantee. As a result, several methods have been proposed to achieve a complete scene reconstruction from laser point clouds. Self-similar structures [12] or simple building blocks [13] are exploited to reconstruct complete scenes (buildings or facades) from incomplete laser scans. Other methods [14, 15] reconstruct scenes from laser scans based on the Manhattan-world assumption. Though quite impressive reconstructions can be achieved by the methods above, they either require user interaction [12, 13] or rely on strong assumptions [14, 15], which limits their scalability.

There are also several LiDAR based simultaneous localization and mapping (SLAM) techniques [27, 28, 29] which obtain laser points of the scene. By taking advantage of SLAM techniques and mobile laser scanners, e.g. the Velodyne LiDAR in [27, 29], they possess higher efficiency and lower cost compared with methods based on large-scale laser scanners. However, these methods only generate laser points with rather low spatial resolution, so they are not suitable for reconstructing large-scale architectural scenes with complicated structures, especially the ancient Chinese architecture considered here.

In addition, in order to completely cover the scene with as few laser scans as possible, several methods have been proposed to deal with the optimal design of terrestrial laser scanner networks [30, 31, 32, 33]. These methods are based on an existing 2D building map [30, 32, 33] or a 3D object model [31]. During the optimization, several factors are considered, for example range and incidence angle constraints [30], sufficient overlap and surface topography between laser scans [31], or multi-scale and hierarchical viewpoint planning [33]. In this paper, by contrast, laser scans serve as supplements to the images and their locations are planned based on the SfM result. As a result, the textural richness and structural complexity of the scene are considered when performing laser scanning location planning. In this way, an accurate and complete reconstruction can be achieved.

II-C Reconstruction Using Images and Laser Scans

There are several methods reconstructing scenes using both images and laser scans. However, the purposes of involving these two kinds of data are different for different systems.

                      | Ground images | Aerial images
Capturing device      | A Canon EOS 5D Mark III on a GigaPan Epic Pro | A Sony NEX-5R on a Microdrones MD4-1000
Capturing mode        | several images per capturing location, with pitch and yaw swept in fixed steps | several flight paths, one for nadir images and the rest for oblique images
Focal length (mm)     | | 
Image resolution (px) | | 
TABLE I: Details of Image Capturing.

Some works propose registering 2D images with 3D laser scans by utilizing low level (point or line) [34, 35, 36] or high level (plane) [37] features, by which the 3D laser points can be textured from the registered 2D images. Based on registered 2D images and 3D laser scans, Li et al. [38] propose fusing images and laser points by leveraging their respective advantages to get a complete, textured, and regularized urban facade reconstruction. In addition, in the communities of photogrammetry [39], computer vision [40, 41], and computer graphics [42], several benchmarks containing both images and laser scans have been proposed for reconstruction method evaluation. However, the laser scans there mostly serve as ground truths which are relatively independent of the images. There are several methods [17, 18, 19] with a similar motivation to ours, i.e. integrating images and laser scans for complete scene reconstruction. These methods are based on 3D-3D registration, performed using either GCPs [17] or the ICP algorithm [18, 19]. In comparison, our approach is based on image synthesis and matching, so the large 3D-level dissimilarity in density, accuracy, and completeness is avoided and a more accurate merging is achieved.

III Proposed Method

Fig. 1: Schematic diagram of our proposed complete scene reconstruction pipeline. It mainly contains three steps: 1) image capturing, 2) laser scanning, and 3) coarse-to-fine image and laser scan merging. See text for more details.

The pipeline of our proposed complete scene reconstruction method is illustrated in Fig. 1. Our method mainly contains three steps: 1) Image capturing. To completely cover large-scale scenes, multi-view and multi-scale image capturing is performed, i.e. images are captured from the air, the ground, outdoors, and indoors. Then, the captured images are matched and fed into an SfM pipeline to generate SfM points. 2) Laser scanning. Based on the SfM result, laser scanning locations are automatically planned by considering the following three factors: the textural richness, the structural complexity of the scene, and the spatial layout of the laser scans. Subsequently, in order to merge images and laser scans, ground-view and aerial-view images are synthesized from the laser points and are matched with the captured images. 3) Image and laser scan merging. Images and laser scans are merged in a coarse-to-fine manner. The laser scans are first coarsely aligned to the SfM points individually. Then, images and laser scans are finely merged via a generalized bundle adjustment (BA) with the help of the cross-domain point matches. These three steps are detailed in the following sections.

III-A Image Capturing

Fig. 2: Data collection equipment in our experiments. First column: a Canon EOS 5D Mark III mounted on a GigaPan Epic Pro for ground image capturing; second column: a Sony NEX-5R mounted on a Microdrones MD4-1000 for aerial image capturing; third column: a Leica ScanStation P30 scanner for terrestrial laser scanning.

Multi-view and Multi-scale Image Capturing To ensure the complete coverage of the architectural scene, images are captured in two ways in this paper: 1) close-range ground images with fine connectivity for outdoor and indoor scene coverage; 2) large-scale aerial images for capturing the entire scene and the architectural roofs. Some image capturing details for the large-scale ancient Chinese architectural scenes in our experiments are given in Table I and Fig. 2. The ground (outdoor and indoor) images are captured station by station, which makes it convenient to plan the image capturing locations and efficient to perform the image capturing process. In addition, in order to properly cover the outdoor and indoor scenes from the ground viewpoint, the ground image capturing stations are equally spaced in the scene. Specifically, we coarsely grid the ground plane of the scenes of interest and use the grid centers as image capturing locations; the grid side length is fixed in this paper. These locations are marked on the ground during image capturing and are reused in the laser scanning step.
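
As a concrete illustration of this gridding step, the following sketch lays a square grid over a rectangular region of interest and returns the grid centers as capturing stations; the function name and the 5 m grid side are illustrative assumptions rather than the exact setting used in our experiments.

```python
import numpy as np

def ground_capture_stations(x_range, y_range, grid_side=5.0):
    """Centers of a square grid laid over a rectangular ground region of interest.

    x_range, y_range : (min, max) extents of the region in meters.
    grid_side        : grid side length in meters (an illustrative value, not
                       the exact setting used in our experiments).
    """
    xs = np.arange(x_range[0] + grid_side / 2.0, x_range[1], grid_side)
    ys = np.arange(y_range[0] + grid_side / 2.0, y_range[1], grid_side)
    gx, gy = np.meshgrid(xs, ys)
    return np.stack([gx.ravel(), gy.ravel()], axis=1)  # (N, 2) station coordinates

# Example: capturing stations for a 40 m x 25 m courtyard.
stations = ground_capture_stations((0.0, 40.0), (0.0, 25.0))
```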

Structure-from-Motion and Surface Meshing After image capturing, the collected images are fed into an SfM pipeline [5] to calibrate the camera poses and generate spatial points of SIFT features [20]. In order to merge all captured (outdoor, indoor, and aerial) images into a unified SfM process, ground-to-aerial and outdoor-to-indoor point matches should be generated. However, obtaining these two kinds of point matches is non-trivial, due to 1) the large viewpoint and scale differences between ground and aerial images (cf. Fig. 3), and 2) the limited view overlap between outdoor and indoor images. In this paper, the scenes captured by aerial images, outdoor images, and indoor images are first reconstructed respectively and merged afterwards.

Fig. 3: An example of ground-to-aerial image feature matching result. First row: point matches between the cropped aerial image (left) and synthetic aerial-view image (right), where the blue segments link the point matches; second row: original aerial and ground image pair with large viewpoint and scale differences.

In recent years, several methods have been proposed to integrate ground and aerial data for localization and reconstruction [43]. In this paper, we follow the pipeline proposed in [44] to merge the ground and aerial SfM points. Specifically, for a pair of ground and aerial images, an aerial-view image is synthesized from the captured ground image and is matched with the captured aerial image (cf. Fig. 3). The image synthesis is performed by leveraging the co-visible mesh, which is generated from the ground SfM points using the method of [45]. Then, the ground and aerial SfM points are merged via a cross-view bundle adjustment. In addition, outdoor and indoor scene merging is also a difficult problem. A recent approach [46] tackles it by leveraging windows, which is not suitable for all building types, e.g. ancient Chinese architecture. Here, the outdoor and indoor SfM points are merged with the help of point matches between outdoor and indoor images taken near the doors. The feature matching result for a pair of outdoor and indoor images near a door is shown in Fig. 4, which indicates that enough outdoor-to-indoor point matches are generated for outdoor and indoor scene merging. As image based reconstruction suffers from scale ambiguity, the real scale of the merged (outdoor-indoor-aerial) SfM points should be recovered in order to plan the laser scanning locations and to merge the SfM and laser points. Here, it is roughly recovered via the built-in GPS of the cameras, by which the SfM points and cameras are geo-referenced. Then, surface reconstruction [45] is performed on the merged SfM points to obtain a surface mesh of the scene, which is used for laser scanning location planning in the following section.

Fig. 4: An example of outdoor-to-indoor image feature matching result, where the blue segments link the point matches.

III-B Laser Scanning

Laser Scanning Location Planning Given the surface mesh reconstructed from the merged SfM points, we plan the laser scanning locations fully automatically. As the purpose of involving laser scans is to obtain a more accurate and complete scene reconstruction, the following three factors should be considered during scanning location planning: 1) the textural richness, 2) the structural complexity of the scene, and 3) the spatial distribution of the laser scans. The first two factors mean that scene parts with low textures and complicated structures should be complemented by laser scanning with priority. The third factor means that the laser scanning locations should be evenly distributed and should not overlap much with each other, in order to save time and cost. Following these three factors, we propose a method to automatically plan the laser scanning locations.

In order to plan the laser scanning locations, we first obtain several potential laser scanning locations. Then, scanning location planning becomes a 0-1 integer linear programming problem: selecting some potential locations as the actual scanning locations (labeled as 1) and discarding the others (labeled as 0). The potential laser scanning locations could be determined simply as follows: the ground plane is detected and divided into grids at first, and then the grid centers are used as the potential scanning locations [30, 32, 33]. However, in this paper, the potential scanning locations are selected as the capturing locations of the ground images, which has two advantages: 1) The image capturing locations are carefully selected to properly cover the scene, so a subset of them is appropriate for performing laser scanning as well. 2) During the merging of images and laser scans in the following section, point matches between the captured ground images and the ground-view images synthesized from the laser scans are required, so scanning at the image capturing locations benefits the image synthesis and matching procedure.

Fig. 5: (a). A ground outdoor image. (b)-(c). SfM points and surface mesh with the similar viewpoint of (a). (d)-(f). Regions corresponding to the rectangles in (c). They are representative regions with relatively (d) rich and (e) low textures but simple structures (flat walls) and regions with relatively (f) complicated structures (bracket sets). The colored triangles in (d)-(f) are the facet examples for computation.

After obtaining the potential laser scanning locations, the actual scanning locations are selected among the potential ones with the help of the surface mesh generated from the SfM points. Specifically, we evenly cast a fixed number of rays at each potential scanning location [47]. The rays cast from the $i$-th location intersect the surface mesh in a set of facets, which is computed using CGAL (https://www.cgal.org/) and denoted as:

$\mathcal{F}_i = \{f_i^1, f_i^2, \ldots\}, \quad i = 1, \ldots, N,$   (1)

where $N$ is the number of potential scanning locations and $f_i^j$ is the $j$-th intersected facet at the $i$-th location. These facets are used to indicate the textural richness and structural complexity of the scene around each potential scanning location. Specifically, for each facet $f_i^j$, we obtain the facets whose distances to $f_i^j$ are less than a fixed threshold, where the distance between two facets is defined as the Euclidean distance between their centers. Then, the areas of the obtained facets, including that of $f_i^j$ itself, are summed up. This summed area is used to indicate the textural richness and structural complexity of the scene near $f_i^j$: the larger the summed area, the lower the texture and the more complicated the structure of the scene near $f_i^j$ (cf. Fig. 5). From Fig. 5 we can see that: 1) in regions with lower textures, the SfM points are sparser and the mesh facets are larger; 2) in regions with more complicated structures, more facets are generated to cover the structural complexity. Accordingly, the summed areas for the facets in Fig. 5e and Fig. 5f are larger than that in Fig. 5d: the area of the facet itself is much larger in Fig. 5e, and more facet areas are summed up in Fig. 5f.

Then, we use

(2)

to indicate the textural richness and structural complexity of the scene around the $i$-th potential scanning location.
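
To make the two building blocks of this indicator concrete, the following sketch generates roughly evenly distributed ray directions and computes the summed nearby facet area. The Fibonacci-sphere directions stand in for the equidistributed directions of [47], the ray-mesh intersections (computed with CGAL in our pipeline) are assumed to be given as facet indices, and the radius value and the final per-location aggregation are illustrative assumptions rather than the exact form of Eq. 2.

```python
import numpy as np

def even_ray_directions(n=1000):
    """Roughly evenly distributed unit directions on the sphere (Fibonacci
    spiral), standing in for the equidistributed directions of [47]."""
    k = np.arange(n) + 0.5
    phi = np.arccos(1.0 - 2.0 * k / n)            # polar angle
    theta = np.pi * (1.0 + 5.0 ** 0.5) * k        # golden-angle azimuth
    return np.stack([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)], axis=1)

def summed_nearby_area(facet_centers, facet_areas, hit_ids, radius=1.0):
    """For each intersected facet (given by its index in `hit_ids`), sum the
    areas of all facets whose centers lie within `radius` of its center,
    the facet itself included. `radius` is an illustrative threshold."""
    scores = np.empty(len(hit_ids))
    for k, fid in enumerate(hit_ids):
        d = np.linalg.norm(facet_centers - facet_centers[fid], axis=1)
        scores[k] = facet_areas[d < radius].sum()
    return scores

# A per-location indicator could then aggregate these values, e.g. by summing
# them over the facets hit from that location (one possible stand-in for Eq. 2).
```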

In addition, in order to indicate the overlap between the $i$-th and $j$-th potential scanning locations, the intersection over union (IoU) of their intersected facet sets is used:

$o_{ij} = |\mathcal{F}_i \cap \mathcal{F}_j| \, / \, |\mathcal{F}_i \cup \mathcal{F}_j|.$   (3)

As planned laser scanning locations with a more even distribution are preferred, the potential locations with smaller IoUs between each other should be selected with priority. As a result, we formulate the problem of laser scanning location planning as follows:

(4)

where the optimization variable $x_i \in \{0, 1\}$ indicates whether the $i$-th potential scanning location is selected ($x_i = 1$) or discarded ($x_i = 0$), and a coverage threshold bounds the coverage of the laser scanning.

However, the problem defined in Eq. 4 is a 0-1 integer linear programming problem, which is NP-hard. In this paper, we approximately solve the optimization problem by a greedy algorithm that selects one potential scanning location at a time [30, 33]. The algorithm is detailed in the following.

0:  Input: the per-location indicators defined in Eq. 2 and the IoUs defined in Eq. 3
0:  Output: selected indices of the potential scanning locations
    Initialization:
1:  Select the first scanning location by Eq. 5.
    Iteration:
2:  while the condition defined in Eq. 8 is satisfied do
3:     Select one more scanning location by Eq. 7.
4:  end while
Algorithm 1 Laser Scanning Location Planning

The first scanning location is selected as the one with the largest value of the indicator in Eq. 2:

(5)

Suppose that after a number of selections, the indices of the already selected potential scanning locations form one index set and the indices of the remaining ones form its complement, i.e.

(6)

Then, during the next selection, one index is chosen from the remaining set and moved to the selected set by the following optimization:

(7)

As the selection goes on, more and more potential scanning locations are selected to cover the scene. The selection is stopped by the truncation condition derived from the coverage constraint in Eq. 4:

(8)

Our proposed automatic laser scanning location planning algorithm is summarized in Algorithm 1. Note that laser scanning is performed at the planned locations. As a result, the generated laser points are in the same coordinate system as the geo-referenced SfM points, which makes the following image synthesis and matching procedure straightforward.
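
The following sketch illustrates one plausible instantiation of Algorithm 1. Since Eqs. 5-8 are not reproduced here, the concrete selection rule (largest indicator among locations whose IoU with every already selected location stays below a bound) and the parameter values are assumptions, not our exact formulation.

```python
import numpy as np

def plan_scan_locations(indicators, facet_sets, coverage_thresh=0.8, iou_bound=0.3):
    """Greedy stand-in for Algorithm 1.

    indicators : per-location richness/complexity indicator (cf. Eq. 2).
    facet_sets : list of sets of mesh facet indices seen from each location (cf. Eq. 1).
    coverage_thresh, iou_bound : illustrative parameters; the exact rule of
    Eqs. 5-8 and the paper's threshold values are not reproduced here.
    """
    def iou(a, b):
        return len(a & b) / float(len(a | b)) if (a or b) else 0.0

    all_facets = set().union(*facet_sets)
    selected = [int(np.argmax(indicators))]        # start with the largest indicator (cf. Eq. 5)
    covered = set(facet_sets[selected[0]])

    while len(covered) < coverage_thresh * len(all_facets):   # stopping rule (cf. Eq. 8)
        best, best_val = None, -np.inf
        for i in range(len(indicators)):
            if i in selected:
                continue
            # keep the layout spread out: reject candidates that overlap too much (cf. Eq. 3)
            if any(iou(facet_sets[i], facet_sets[j]) > iou_bound for j in selected):
                continue
            if indicators[i] > best_val:
                best, best_val = i, indicators[i]
        if best is None:                           # no admissible candidate left
            break
        selected.append(best)
        covered |= facet_sets[best]
    return selected
```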

Fig. 6: (a). Schematic diagram of the virtual cube for ground-view image synthesis, where the blue pyramid denotes one of the virtual cameras. (b). An example of ground-view synthetic image. (c). Depth map of (b).
Fig. 7: An example of synthetic-to-ground image feature matching result, where the blue segments link the point matches.

Image Synthesis and Matching After laser scanning location planning, terrestrial laser scanning is performed at the selected scanning locations. In our experiments, a Leica ScanStation P30 scanner is used (cf. Fig. 2). Like most up-to-date laser scanners, the P30 acquires an extremely large number of accurate spatial points with RGB information. In order to merge images and laser scans, we synthesize images from the laser points and match them with the captured ones. In this paper, we synthesize not only ground-view images, similar to [41], but also aerial-view images. By matching the synthetic and aerial images, more constraints can be obtained for the following image and laser scan merging procedure.

Fig. 8: An example of synthetic-to-aerial image feature matching result. First row: enlarged synthetic-to-aerial image pair of the green rectangles in the second row to illustrate the feature point matches, which are denoted by the blue segments; second row: original synthetic and aerial image pair.

Ground-view Image Synthesis For the points generated from each laser scan, images are synthesized by projecting them onto the faces of a virtual cube whose center coincides with the scanning origin (cf. Fig. 6a). The RGB values of the synthetic image pixels are those of the laser points projected onto them (cf. Fig. 6b). The cube faces together with the cube center constitute virtual cameras with mutually orthogonal orientations, which can be seen as a generalized camera model [48, 49]. Both the width and the height of the ground-view synthetic image are set to the height of the captured ground image.
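
A minimal sketch of this projection for a single cube face is given below; the face choice (+X), the image resolution, and the z-buffering details are illustrative assumptions, and the other five faces follow by permuting the axes.

```python
import numpy as np

def synthesize_cube_face(points, colors, res=2000):
    """Project laser points onto the +X face of a virtual cube centered at the
    scan origin, producing an RGB image and a depth map. `res` is an
    illustrative image size; the paper ties it to the captured image height."""
    img = np.zeros((res, res, 3), dtype=np.uint8)
    depth = np.full((res, res), np.inf)

    front = points[:, 0] > 1e-6                       # points in front of the +X face
    p, c = points[front], colors[front]
    u = p[:, 1] / p[:, 0]                             # perspective division onto the face
    v = p[:, 2] / p[:, 0]
    inside = (np.abs(u) <= 1.0) & (np.abs(v) <= 1.0)  # 90-degree field of view per face
    p, c, u, v = p[inside], c[inside], u[inside], v[inside]

    cols = ((u + 1.0) * 0.5 * (res - 1)).astype(int)
    rows = ((1.0 - (v + 1.0) * 0.5) * (res - 1)).astype(int)
    rng = np.linalg.norm(p, axis=1)

    # keep the nearest point per pixel (a simple z-buffer): far points first,
    # nearer points overwrite them
    order = np.argsort(-rng)
    img[rows[order], cols[order]] = c[order]
    depth[rows[order], cols[order]] = rng[order]
    return img, depth
```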

Aerial-view Image Synthesis We follow the method proposed in [50] to select proper aerial images and to synthesize images of the selected aerial viewpoints from the laser points. For each laser scan, several aerial images are selected to properly cover it with a relatively even distribution. Then, the visible laser points are projected into the selected aerial views using their (intrinsic and extrinsic) camera calibration parameters to synthesize the aerial-view images.

As the (ground-view and aerial-view) images are synthesized by point projection, nearest neighbor interpolation is performed to deal with the inevitable missing pixels. In addition, as the 2D-3D correspondences between the synthetic image pixels and the laser points are required in the next steps, the depth maps of the synthetic images are generated as well (cf. Fig. 6c).

Subsequently, SIFT feature matching between the synthetic and captured images is performed. The ground-view synthetic images are matched with the captured ground images that have nearby locations and similar view directions (within fixed distance and view-angle thresholds), while the aerial-view synthetic images are matched with the aerial images they are synthesized for. In addition, as the depth near edges in the synthetic images is unreliable, feature points near the Canny edges [51] of the synthetic images are discarded before matching. Examples of synthetic-to-ground and synthetic-to-aerial image matching results are given in Fig. 7 and Fig. 8, respectively.
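
The sketch below illustrates this gated matching for one ground-view synthetic image using OpenCV. The distance and angle thresholds, the ratio-test value, and the assumption that camera positions and viewing directions are available are all illustrative, and the Canny-edge filtering step is omitted.

```python
import numpy as np
import cv2

def match_synthetic_to_ground(synt_gray, synt_pos, synt_dir,
                              cand_grays, cand_pos, cand_dirs,
                              max_dist=10.0, max_angle_deg=45.0, ratio=0.8):
    """Match one ground-view synthetic image against nearby captured ground images.

    Camera/scan positions and viewing directions are assumed to be known;
    max_dist (m), max_angle_deg, and ratio are illustrative values."""
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    kp_s, des_s = sift.detectAndCompute(synt_gray, None)
    if des_s is None:
        return {}

    results = {}
    for i, (img, pos, vdir) in enumerate(zip(cand_grays, cand_pos, cand_dirs)):
        if np.linalg.norm(np.asarray(pos) - np.asarray(synt_pos)) > max_dist:
            continue                                      # too far away
        cosang = np.dot(vdir, synt_dir) / (np.linalg.norm(vdir) * np.linalg.norm(synt_dir))
        if np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) > max_angle_deg:
            continue                                      # view directions too different
        kp_c, des_c = sift.detectAndCompute(img, None)
        if des_c is None or len(des_c) < 2:
            continue
        good = []
        for pair in matcher.knnMatch(des_s, des_c, k=2):  # Lowe's ratio test
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                good.append((kp_s[pair[0].queryIdx].pt, kp_c[pair[0].trainIdx].pt))
        results[i] = good
    return results
```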

III-C Image and Laser Scan Coarse-to-Fine Merging

Coarse Registration The laser scans obtained in Sec. III-B are coarsely registered to the SfM points one by one as follows. For the $i$-th laser scan, a similarity transformation between its scanning points and the SfM points is estimated. The 3D point correspondences for estimating this similarity transformation are converted from the synthetic-to-ground and synthetic-to-aerial 2D point matches obtained in the last section. The similarity transformation is estimated by a RANSAC-like algorithm [52] into which a least-squares solver [53] is inserted: a number of minimal subsets (3 pairs of 3D point correspondences each) are randomly sampled, and a distance threshold (in meters) is used to classify inliers. The resulting synthetic-to-ground and synthetic-to-aerial 3D point correspondence inliers between the $i$-th laser scan and the SfM points are kept for the fine merging step.
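
A compact sketch of this estimation is given below: the closed-form similarity estimate follows Umeyama [53], and a simple RANSAC loop over 3-point samples wraps it; the iteration count and inlier threshold are illustrative assumptions.

```python
import numpy as np

def umeyama_similarity(src, dst):
    """Closed-form least-squares similarity (scale, R, t) mapping src to dst [53]."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                                   # avoid a reflection
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    scale = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - scale * R @ mu_s
    return scale, R, t

def ransac_similarity(src, dst, n_iters=1000, thresh=0.2, seed=0):
    """RANSAC over 3-point minimal samples; n_iters and thresh are illustrative."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(src), size=3, replace=False)
        s, R, t = umeyama_similarity(src[idx], dst[idx])
        err = np.linalg.norm(s * (R @ src.T).T + t - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    s, R, t = umeyama_similarity(src[best_inliers], dst[best_inliers])  # refit on inliers
    return s, R, t, best_inliers
```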

Fine Merging After coarsely registering the laser scans to the SfM points, the (outdoor-indoor-aerial) camera poses, the merged SfM points, and the laser scan alignments (similarity transformations) are jointly optimized by a generalized bundle adjustment (BA) to finely merge the images and laser scans. The reasons for performing this further optimization are two-fold: 1) the SfM points may be inaccurate and may even suffer from scene drift, especially for large-scale scenes; 2) the ground and aerial SfM points may not be accurately merged by the method of [44]. By integrating the SfM result and the laser scans into a global optimization, both issues are alleviated. The BA procedure here is called a generalized one because the camera poses and laser scan alignments are simultaneously optimized by minimizing both 2D-3D reprojection errors and 3D-3D space errors.

The camera poses, merged SfM points, and the laser scan alignments are simultaneously optimized as follows:

(9)

where the parameters to be optimized comprise the rotation matrix and translation vector of each camera, the merged SfM points, a uniform scale shared by all laser scans, and the rotation matrix and translation vector of each laser scan. The reason for estimating a uniform scale in Eq. 9 is that the scale of the SfM points recovered via the built-in GPS of the cameras in Sec. III-A is not accurate enough. In order to achieve a more accurate merging of images and laser scans, the scale ratio between the geo-referenced SfM points and the laser points should be estimated, by which the scale of the SfM points can be accurately recovered.
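
To illustrate the structure of this objective, the sketch below evaluates Huber-weighted reprojection and space residuals and combines them. The isotropic noise parameters, the assumption that the balancing factor scales the space terms, and the omission of the rotation parameterization used by the actual solver are simplifications of Eqs. 9-11, not our exact implementation.

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber loss applied to a residual magnitude."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * a ** 2, delta * (a - 0.5 * delta))

def reprojection_residual(K, R, t, X, uv, sigma_px=1.0):
    """2D-3D residual in the spirit of Eq. 10: project SfM point X with camera
    (K, R, t) and compare with the observed pixel uv (isotropic pixel noise)."""
    x = K @ (R @ X + t)
    return (x[:2] / x[2] - uv) / sigma_px

def space_residual(s, R_l, t_l, p_laser, X_sfm, sigma_m=0.05):
    """3D-3D residual in the spirit of Eq. 11: map a laser point into the SfM
    frame with the scan alignment (s, R_l, t_l) and compare with its SfM match."""
    return (s * (R_l @ p_laser) + t_l - X_sfm) / sigma_m

def total_cost(reproj_terms, space_terms, lam=1.0):
    """Combined objective in the spirit of Eq. 9. `lam` is assumed to scale the
    space error terms; the sigma values stand in for the Mahalanobis weighting."""
    c = sum(huber(np.linalg.norm(reprojection_residual(*a))) for a in reproj_terms)
    c += lam * sum(huber(np.linalg.norm(space_residual(*a))) for a in space_terms)
    return float(c)
```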

Fig. 9: Examples of captured images and merged SfM points of NCT (left) and FGT (right).

The reprojection error term in Eq. 9 is defined as:

(10)

where the observed projection of each merged SfM point in each image is compared with its reprojection, computed by the projection function with the intrinsic parameter matrix of the corresponding camera; the intrinsic parameters are kept unchanged during the optimization, as they are considered accurately calibrated during the SfM procedure in Sec. III-A. Each residual is weighted by the covariance matrix of the observation, which is related to the local feature scale.

The ground space error term and the aerial space error term in Eq. 9 are respectively defined as:

(11)

where the residuals are weighted by the covariance matrices of the ground and aerial 3D point correspondences respectively, which are related to the distances from the laser points to the scanning origins. The reason for involving Mahalanobis norms in Eq. 10 and Eq. 11 is to eliminate the imbalance in dimension and noise level between the reprojection and space error terms.

In addition, a Huber loss function is applied in Eq. 9 to deal with the inevitable mismatching and noise, and a balancing factor in Eq. 9 controls the relative weights of the constraints defined in Eq. 10 and Eq. 11. The optimization problem in Eq. 9 is solved by the Ceres Solver (http://ceres-solver.org/). Note that at one extreme of the balancing factor, the optimization problem in Eq. 9 mainly minimizes the (2D-3D) reprojection errors and approaches a standard BA problem; at the other extreme, it mainly minimizes the (3D-3D) space errors and approaches a laser scan registration problem. A heuristic approach for adaptively setting the balancing factor is described and evaluated in the experimental section.

IV Experimental Evaluation

Dataset                                   | NCT | FGT
Covering area (m × m)                     |     |
# ground outdoor images                   |     |
# ground indoor images                    |     |
Ground outdoor image capturing time (min) |     |
Ground indoor image capturing time (min)  |     |
Outdoor-indoor ratio of images            |     |
# aerial images                           |     |
# planned outdoor laser scans             |     |
# planned indoor laser scans              |     |
Outdoor laser scanning time (min)         |     |
Indoor laser scanning time (min)          |     |
TABLE II: Meta-data of NCT and FGT.

In this section, our proposed complete scene reconstruction pipeline is evaluated. We perform experiments on two ancient Chinese architecture datasets, Nan-chan Temple (NCT) and Fo-guang Temple (FGT). They are typical ancient Chinese temple compounds, consisting of one or more main halls and a number of surrounding smaller temples. The indoor scenes of the main halls are usually complicated in structure and low in lighting. As a result, they are suitable objects for the research topic of this paper. We first captured images and generated SfM points as described in Sec. III-A. Then, we performed laser scanning at the planned scanning locations to obtain laser points using the method in Sec. III-B. The meta-data of the two datasets is detailed in Table II. Note that for the ground (outdoor and indoor) images, the same number of images is captured at each capturing location (cf. Table I), so the numbers of image capturing locations for the NCT and FGT outdoor and indoor scenes follow directly from the image counts in Table II. In addition, the acquisition times of the (outdoor and indoor) ground images and laser points are listed in Table II as well. The average acquisition time per station can be derived accordingly for ground outdoor and indoor image capturing and for outdoor and indoor laser scanning; the longer data acquisition time for indoor scenes is due to the longer exposure time required for scenes with lower lighting.

IV-A Image Capturing Results

We followed the pipeline described in Sec. III-A to capture images and generate SfM points of NCT and FGT, respectively. The numbers of captured images, including ground outdoor, ground indoor, and aerial ones, for both NCT and FGT are shown in Table II. Examples of captured images and reconstructed SfM points are illustrated in Fig. 9. We can see from the figure that the ground and aerial SfM points are well merged. However, in the regions of NCT and FGT with low textures, low lighting, or complicated structures, there are only very few points in the merged SfM result. As a result, it is necessary to perform laser scanning to obtain a more accurate and complete architectural scene model.

IV-B Laser Scanning Results

Fig. 10: The influence of the coverage threshold on the number of planned laser scans on NCT and FGT.

When planning the laser scanning locations by the method proposed in Sec. III-B, the coverage threshold in Eq. 4 and Eq. 8 bounds the coverage of the laser scanning, and thus directly determines the number of planned laser scans. Here, we perform experiments on NCT and FGT to demonstrate the influence of this threshold on the number of planned laser scans. The results are shown in Fig. 10. From the figure we can see that as the threshold gets larger, the number of planned laser scans increases accordingly. In this paper, the threshold is set to a value that balances data collection efficiency and reconstruction quality.

Fig. 11: Laser scanning location planning results on NCT and FGT. First row: result on NCT; second row: result on FGT. First column: merged ground outdoor (green) and indoor (red) SfM points; second column: outdoor (green) and indoor (red) potential laser scanning locations; third column: outdoor (green) and indoor (red) planned laser scanning locations.

Using this setting, we planned the laser scanning locations and performed scanning. The numbers of planned (outdoor and indoor) laser scanning locations are given in Table II. Note that the outdoor-indoor ratio of images is larger than that of laser scans for both NCT and FGT. As the potential laser scanning locations, i.e. the ground image capturing stations, are equally spaced (cf. Sec. III-A), the smaller outdoor-indoor ratio of laser scans means that the density of planned indoor laser scanning locations is larger than that of outdoor ones. That is because, compared with the outdoor scenes, the indoor scenes have more complicated structures and lower textures. As a result, relatively more indoor scanning locations are automatically selected from the potential ones by our proposed laser scanning location planning method. Fig. 11 shows the laser scanning location planning results on NCT and FGT. From the figure we can see that the planned scanning locations are evenly and sparsely distributed throughout the architectural scenes.

Fig. 12: (a) and (b). SfM and laser points of the region marked by the blue rectangle in the top left corner of Fig. 11. (c) and (d). SfM and laser points of the region marked by the blue rectangle in the bottom left corner of Fig. 11. (e) and (f). SfM and laser points of the region marked by the magenta rectangle in the top left corner of Fig. 11. (g) and (h). SfM and laser points of the region marked by the magenta rectangle in the bottom left corner of Fig. 11.

In addition, to demonstrate the effectiveness of our proposed laser scanning location planning method, we select two regions for each of NCT and FGT, such that the image based reconstruction method achieves a relatively good result in one of them (magenta rectangles in the left column of Fig. 11) but not in the other (blue rectangles in the left column of Fig. 11). The SfM points and laser points are illustrated in Fig. 12. We can see that there are only a few noisy SfM points in the regions in the first row of Fig. 12. That is because these regions have low textures (e.g. flat walls) and complicated structures (e.g. indoor painted sculptures and outdoor bracket sets). As for the regions in the second row of Fig. 12, the SfM points are denser and more accurate due to relatively simple structures and rich textures. As a result, with our proposed laser scanning location planning method, the parts of the architectural scene with low textures and complicated structures can be effectively covered by the planned laser scans, and a more accurate and complete architectural scene model can be obtained.

Fig. 13: Qualitative results of image and laser scan merging on NCT and FGT. First row: long-shots of NCT; from left to right: (outdoor-indoor-aerial) SfM points, (outdoor-indoor) laser points, merged SfM and laser points (red for laser points, green for aerial SfM points, and blue for ground SfM points), surface mesh generated from merged points. Second row: image examples and close-ups of the surface mesh with similar viewpoints on NCT; left two: an outdoor region of the green square at the top right corner of the figure; right two: an indoor region of the blue square at the top right corner of the figure. Third and fourth rows: the results on FGT similar to those of the first and second rows.

IV-C Image and Laser Scan Merging Results

We merged images and laser scans in a coarse-to-fine manner according to the pipeline described in Sec. III-C. The qualitative and quantitative merging results are given respectively in the following.

IV-C1 Qualitative Results

The qualitative results on NCT and FGT are shown in Fig. 13. In order to give a better visualization, we performed surface reconstruction on the merged SfM and laser points using the method of [45]. We can see from the long-shots that the images and laser scans are well merged. In addition, the close-ups indicate that accurate and complete scene reconstruction is achieved in the regions with low textures and complicated structures. These qualitative results demonstrate the effectiveness of our proposed image and laser scan merging method.

IV-C2 Quantitative Results

In this section, our proposed image and laser scan merging method is quantitatively evaluated. First, a quantitative measure is introduced for merging accuracy evaluation. Based on this measure, the setting of an important parameter during merging, the balancing factor, is assessed; then the proposed method is quantitatively compared with two state-of-the-art methods: Knapitsch et al. [42] and Schöps et al. [41].

Quantitative Measure As it is difficult to define an exact measure to quantitatively assess the merging accuracy, we use an approximate measurement method for the quantitative evaluation. Specifically, we first manually obtain several corresponding points on the SfM point cloud and the laser point cloud respectively. For both NCT and FGT, a number of reference point pairs (covering both outdoor and indoor regions), evenly distributed in the scenes, are obtained. After image and laser scan merging, each pair of reference points is ideally coincident. Then, the root-mean-square (RMS) value of the distances between all pairs of reference points is used to quantitatively measure the accuracy of image and laser scan merging (the lower, the better).
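
A minimal sketch of this measure, assuming the laser-side reference points are mapped into the SfM frame by the estimated per-scan similarity transformation, is given below.

```python
import numpy as np

def merging_rmse(ref_sfm, ref_laser, s, R, t):
    """RMS distance between corresponding reference points after mapping the
    laser-side points into the SfM frame with the estimated similarity (s, R, t).
    ref_sfm and ref_laser are corresponding (N, 3) arrays."""
    mapped = s * (R @ ref_laser.T).T + t
    d = np.linalg.norm(mapped - ref_sfm, axis=1)
    return float(np.sqrt((d ** 2).mean()))
```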

NCT mm mm mm mm mm mm mm
FGT mm mm mm mm mm mm mm
TABLE III: Image and laser scan merging accuracies (root-mean-square errors) on NCT and FGT with different ratios of the initial space error cost to the initial reprojection error cost.
Method   | Baseline: Coarse | Knapitsch et al. [42] | Schöps et al. [41] | Ours: Fine
NCT (mm) |                  |                       |                    |
FGT (mm) |                  |                       |                    |
TABLE IV: Image and laser scan merging accuracies (root-mean-square errors) on NCT and FGT for different comparative methods.

Parameter Settings Though the imbalance in dimension and noise level between the reprojection and space error terms is eliminated by involving the Mahalanobis norms in Eq. 10 and Eq. 11, there is still another imbalance factor, i.e. the imbalance in the magnitude of the observations, which is caused by the large difference between the numbers of captured-to-captured and synthetic-to-captured image point matches. This imbalance significantly influences the image and laser scan merging accuracy, and in this paper we deal with it via the balancing factor in Eq. 9. Here, we propose an adaptive way of determining its value.

As described in Sec. III-C, the optimization problem in Eq. 9 simultaneously optimizes the camera poses and the laser scan alignments. Intuitively, when the reprojection error cost approximately equals the space error cost, i.e. when

(12)

holds, the optimization problem in Eq. 9 achieves a good balance between camera calibration and laser scan registration and can yield a good merging of images and laser scans. To verify this, we define the initial reprojection error cost and the initial space error cost as:

(13)

An initial error cost is computed from the initial guesses of the parameters to be optimized in Eq. 9, which are obtained from the SfM and coarse laser scan registration processes and are relatively accurate. Let the initial cost ratio denote the ratio of the initial space error cost to the initial reprojection error cost; the image and laser scan merging accuracies with different initial cost ratios on NCT and FGT are shown in Table III. Note that the value of the balancing factor is proportional to that of the initial cost ratio.

We can see from Table III that for both NCT and FGT, the image and laser scan merging accuracy first increases and then decreases as the initial cost ratio gets larger. Only when the ratio lies in a proper range can the SfM and laser point cloud merging achieve high accuracy, which validates the above assumption. As a result, in this paper the balancing factor is set to the value by which the initial space error cost equals the initial reprojection error cost:

(14)
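
The sketch below shows this adaptive setting under the assumption that the balancing factor scales the space error terms; since Eqs. 12-14 are not reproduced here, the exact form (and the direction of the ratio) is an assumption rather than our precise formulation.

```python
import numpy as np

def adaptive_balance(initial_reproj_costs, initial_space_costs):
    """Balancing factor chosen so that the weighted initial space error cost
    equals the initial reprojection error cost (in the spirit of Eq. 14),
    assuming the factor scales the space error terms."""
    c_proj = float(np.sum(initial_reproj_costs))    # initial reprojection cost (cf. Eq. 13)
    c_space = float(np.sum(initial_space_costs))    # initial space cost (cf. Eq. 13)
    return c_proj / c_space if c_space > 0 else 1.0
```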

Comparison Results We then quantitatively compare our proposed image and laser scan merging method with Knapitsch et al. [42] and Schöps et al. [41]. The comparative results are shown in Table IV. Coarse is the merging accuracy after the laser scan coarse registration, while Fine is the merging accuracy after the image and laser scan fine merging. In [42], the dense points generated from images are registered to the laser scans using an extension of ICP to similarity transformations (including scale) [54]. Note that the merging accuracy of [18, 19] would not be higher than that of [42], as their point cloud merging methods are similar in principle. In [41], based on the coarse registration, the laser scan alignments are optimized first using point-to-plane ICP [55], and the camera poses are then refined with the laser scans fixed, using an extended version of the dense image alignment approach [56].

From the table we can see that, compared with the baseline (Coarse), the increase in merging accuracy of our method (Fine) is larger than that of both Knapitsch et al. [42] and Schöps et al. [41]. That is because: 1) For [42], the difference in density and noise level between the points generated from images and from laser scans is extremely large, so it is hard to achieve an accurate registration between these two kinds of points; 2) For [41], the merging accuracy is highly dependent on the ICP based laser scan alignment optimization. However, the laser scans in our scene reconstruction pipeline serve only as a supplement, so the overlap between adjacent laser scan pairs is quite limited. As a result, for our NCT and FGT datasets, ICP cannot achieve a registration of the laser scans accurate enough to significantly improve the image and laser scan merging accuracy.

In addition, in order to demonstrate the efficiency advantage in data collection of our proposed pipeline over the laser scanning based reconstruction method, we roughly compare the acquisition time of the two. From Table II, the total time we used for capturing ground images and scanning laser points on NCT and FGT can be read off. For the laser scanning based reconstruction method, suppose that the coverage threshold is raised so that the laser scans alone cover the scene; as shown in Fig. 10, far more outdoor and indoor laser scans would then be planned for both NCT and FGT, and the corresponding laser scanning time would be considerably longer. Note that the time for equipment handling, which is much longer for the laser scanner than for the digital camera, is not even included in these times. As a result, our proposed method is much more efficient in data collection than the laser scanning based method.

IV-D Scene Reconstruction Results

Finally, to demonstrate the effectiveness of our scene reconstruction method, we compare it with the image based reconstruction method and the laser scanning based method. Here, we do not perform the comparison on the whole scenes of NCT and FGT but only on parts of them. For NCT, its indoor scene is used for the comparison; for FGT, the outdoor scene around one of its halls, the Great East Hall (GEH), is used.

In order to compare the reconstruction results, the results of the laser scanning based method serve as ground truths, and the results of the image based method and our method are compared against them. When performing the comparison, the aerial SfM points are removed from the results of both the image based method (obtained in Sec. IV-A) and our method (obtained in Sec. IV-C), as there are no airborne laser scans.

Fig. 14: Constructed ground truths for indoor scene of NCT (left) and outdoor scene of GEH (right).

To construct the ground truths, the laser scans are accurately registered using man-made targets and the software provided by Leica. However, if only the planned laser scans were involved here, it would be hard to achieve an accurate registration due to the limited scanning overlaps, and the registered laser scans would be insufficient to serve as ground truths because of the limited scene coverage. As a result, besides the planned laser scans, we additionally perform laser scanning at some other locations, which increases the number of laser scans for both the indoor scene of NCT and the outdoor scene of GEH. The constructed ground truths are shown in Fig. 14.

After ground truth construction, we follow the method in [42] to evaluate the reconstruction results. Specifically, both the reconstructed and the ground-truth point clouds are first resampled using a uniform voxel grid. Then, the precision, the recall, and the comprehensive F-score are computed and serve as the measures for reconstruction evaluation. The definitions of precision and recall can be found in [42]. The evaluation results based on these three measures are shown in Table V. We can see from the table that, compared with the image based method, our method achieves better performance in both accuracy (precision) and completeness (recall). In addition, the F-score of our method is close to 1, which means our method achieves reconstruction results comparable in both accuracy and completeness to those of the laser scanning based method.
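
A minimal sketch of this evaluation, assuming a simple voxel-mean downsampling and a fixed distance threshold (both illustrative values rather than the exact settings of [42]), is given below.

```python
import numpy as np
from scipy.spatial import cKDTree

def voxel_downsample(points, voxel):
    """Keep one representative point (the mean) per occupied voxel."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.reshape(-1)
    out = np.zeros((inv.max() + 1, 3))
    np.add.at(out, inv, points)
    counts = np.bincount(inv)
    return out / counts[:, None]

def precision_recall_fscore(recon, gt, voxel=0.05, dist_thresh=0.05):
    """Precision/recall/F-score of a reconstruction against a ground-truth cloud,
    in the spirit of [42]. voxel and dist_thresh are illustrative values (meters)."""
    recon = voxel_downsample(recon, voxel)
    gt = voxel_downsample(gt, voxel)
    d_r2g, _ = cKDTree(gt).query(recon)     # reconstruction-to-ground-truth distances
    d_g2r, _ = cKDTree(recon).query(gt)     # ground-truth-to-reconstruction distances
    precision = float((d_r2g < dist_thresh).mean())
    recall = float((d_g2r < dist_thresh).mean())
    f = 2 * precision * recall / (precision + recall) if precision + recall > 0 else 0.0
    return precision, recall, f
```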

Method      | Image based method | Our method
NCT indoor  |                    |
GEH outdoor |                    |
TABLE V: Reconstruction comparison of the image based method and our method against the laser scanning based method, in terms of precision / recall / F-score.

V Conclusion

In this paper, we propose a novel pipeline for architectural scene reconstruction that utilizes two complementary sources of data, images and laser scans, to achieve a good balance between data acquisition efficiency and reconstruction accuracy and completeness. The images are used as the primary data source to completely cover the scene, while the laser scans serve as a supplement to deal with regions with low texture, poor lighting, or complicated structure. Our pipeline contains three main steps: image capturing, laser scanning, and image and laser scan merging, by which an accurate and complete scene reconstruction is achieved. Experimental results on our two ancient Chinese architecture datasets demonstrate the effectiveness of each main step of our proposed pipeline. In the future, we intend to merge points scanned by handheld equipment, e.g. a Kinect, into our pipeline to obtain more complete and detailed reconstructions of complicated architectural scenes.

References

  • [1] A. Chatterjee and V. M. Govindu, “Efficient and robust large-scale rotation averaging,” in IEEE International Conference on Computer Vision (ICCV), 2013, pp. 521–528.
  • [2] Z. Cui and P. Tan, “Global structure-from-motion by similarity averaging,” in IEEE International Conference on Computer Vision (ICCV), 2015, pp. 864–872.
  • [3] H. Cui, S. Shen, W. Gao, and Z. Hu, “Efficient large-scale structure from motion by fusing auxiliary imaging information,” IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3561–3573, 2015.
  • [4] J. L. Schönberger and J. M. Frahm, “Structure-from-motion revisited,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 4104–4113.
  • [5] H. Cui, X. Gao, S. Shen, and Z. Hu, “HSfM: Hybrid structure-from-motion,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2393–2402.
  • [6] H. Cui, S. Shen, and Z. Hu, “Tracks selection for robust, efficient and scalable large-scale structure from motion,” Pattern Recognition, vol. 72, no. 12, pp. 341 – 354, 2017.
  • [7] Y. Furukawa and J. Ponce, “Accurate, dense, and robust multiview stereopsis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 8, pp. 1362–1376, 2010.
  • [8] S. Shen, “Accurate multiple view 3D reconstruction using patch-based stereo for large-scale scenes,” IEEE Transactions on Image Processing, vol. 22, no. 5, pp. 1901–1914, 2013.
  • [9] S. Shen and Z. Hu, “How to select good neighboring images in depth-map merging based 3D modeling,” IEEE Transactions on Image Processing, vol. 23, no. 1, pp. 308–318, 2014.
  • [10] B. Ummenhofer and T. Brox, “Global, dense multiscale reconstruction for a billion points,” International Journal of Computer Vision, vol. 125, no. 1, pp. 82–94, 2017.
  • [11] J. Park, S. N. Sinha, Y. Matsushita, Y. W. Tai, and I. S. Kweon, “Robust multiview photometric stereo using planar mesh parameterization,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 8, pp. 1591–1604, 2017.
  • [12] Q. Zheng, A. Sharf, G. Wan, Y. Li, N. J. Mitra, D. Cohen-Or, and B. Chen, “Non-local scan consolidation for 3D urban scenes,” in ACM SIGGRAPH, 2010, pp. 94:1–94:9.
  • [13] L. Nan, A. Sharf, H. Zhang, D. Cohen-Or, and B. Chen, “Smartboxes for interactive urban reconstruction,” in ACM SIGGRAPH, 2010, pp. 93:1–93:10.
  • [14] C. A. Vanegas, D. G. Aliaga, and B. Benes, “Automatic extraction of manhattan-world building masses from 3D laser range scans,” IEEE Transactions on Visualization and Computer Graphics, vol. 18, no. 10, pp. 1627–1637, 2012.
  • [15] M. Li, P. Wonka, and L. Nan, “Manhattan-world urban reconstruction from point clouds,” in European Conference on Computer Vision (ECCV), 2016, pp. 54–69.
  • [16] P. J. Besl and N. D. McKay, “A method for registration of 3-D shapes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992.
  • [17] P. Bastonero, E. Donadio, F. Chiabrando, and A. Spanò, “Fusion of 3D models derived from TLS and image-based techniques for CH enhanced documentation,” ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. II-5, pp. 73–80, 2014.
  • [18] M. Russo and A. M. Manferdini, “Integration of image and range-based techniques for surveying complex architectures,” ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. II-5, pp. 305–312, 2014.
  • [19] C. Altuntas, “Integration of point clouds originated from laser scaner and photogrammetric images for visualization of complex details of historical buildings,” ISPRS International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XL-5/W4, pp. 431–435, 2015.
  • [20] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
  • [21] D. Nister and H. Stewenius, “Scalable recognition with a vocabulary tree,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2, 2006, pp. 2161–2168.
  • [22] J. L. Schönberger, T. Price, T. Sattler, J.-M. Frahm, and M. Pollefeys, “A vote-and-verify strategy for fast spatial verification in image retrieval,” in Asian Conference on Computer Vision (ACCV), 2016, pp. 321–337.
  • [23] M. Muja and D. G. Lowe, “Scalable nearest neighbor algorithms for high dimensional data,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 11, pp. 2227–2240, 2014.
  • [24] C. Hoppe, A. Wendel, S. Zollmann, K. Pirker, A. Irschara, H. Bischof, and S. Kluckner, “Photogrammetric camera network design for micro aerial vehicles,” in Computer Vision Winter Workshop (CVWW), 2012.
  • [25] M. Roberts, S. Shah, D. Dey, A. Truong, S. Sinha, A. Kapoor, P. Hanrahan, and N. Joshi, “Submodular trajectory optimization for aerial 3d scanning,” in IEEE International Conference on Computer Vision (ICCV), 2017, pp. 5334–5343.
  • [26] R. Huang, D. Zou, R. Vaughan, and P. Tan, “Active image-based modeling with a toy drone,” in IEEE International Conference on Robotics and Automation (ICRA), 2018, pp. 6124–6131.
  • [27] J. Zhang and S. Singh, “LOAM: Lidar odometry and mapping in real-time,” in Robotics: Science and Systems, 2014.
  • [28] W. Hess, D. Kohler, H. Rapp, and D. Andor, “Real-time loop closure in 2D LIDAR SLAM,” in IEEE International Conference on Robotics and Automation (ICRA), 2016, pp. 1271–1278.
  • [29] T. Shan and B. Englot, “LeGO-LOAM: Lightweight and ground-optimized lidar odometry and mapping on variable terrain,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 4758–4765.
  • [30] S. Soudarissanane and R. Lindenbergh, “Optimizing terrestrial laser scanning measurement set-up,” ISPRS International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XXXVIII-5/W12, pp. 127–132, 2011.
  • [31] D. Wujanz and F. Neitzel, “Model based viewpoint planning for terrestrial laser scanning from an economic perspective,” ISPRS International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XLI-B5, pp. 607–614, 2016.
  • [32] F. Jia and D. Lichti, “A comparison of simulated annealing, genetic algorithm and particle swarm optimization in optimal first-order design of indoor TLS networks,” ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. IV-2/W4, pp. 75–82, 2017.
  • [33] F. Jia and D. D. Lichti, “An efficient, hierarchical viewpoint planning strategy for terrestrial laser scanner networks,” ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. IV-2, pp. 137–144, 2018.
  • [34] L. Liu and I. Stamos, “A systematic approach for 2D-image to 3D-range registration in urban environments,” in IEEE International Conference on Computer Vision (ICCV), 2007, pp. 1–8.
  • [35] Z. Bila, J. Reznicek, and K. Pavelka, “Range and panoramic image fusion into a textured range image for culture heritage documentation,” ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. II-5/W1, pp. 31–36, 2013.
  • [36] B. Sirmacek, R. C. Lindenbergh, and M. Menenti, “Automatic registration of Iphone images to laser point clouds of the urban structures using shape features,” ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. II-5/W2, pp. 265–270, 2013.
  • [37] I. Stamos and P. K. Allen, “Automatic registration of 2-D with 3-D imagery in urban environments,” in IEEE International Conference on Computer Vision (ICCV), 2001, pp. 731–736.
  • [38] Y. Li, Q. Zheng, A. Sharf, D. Cohen-Or, B. Chen, and N. J. Mitra, “2D-3D fusion for layer decomposition of urban facades,” in IEEE International Conference on Computer Vision (ICCV), 2011, pp. 882–889.
  • [39] F. Nex, M. Gerke, F. Remondino, H.-J. Przybilla, M. Bäumker, and A. Zurhorst, “ISPRS benchmark for multi-platform photogrammetry,” ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. II-3/W4, pp. 135–142, 2015.
  • [40] C. Strecha, W. von Hansen, L. V. Gool, P. Fua, and U. Thoennessen, “On benchmarking camera calibration and multi-view stereo for high resolution imagery,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008, pp. 1–8.
  • [41] T. Schöps, J. L. Schönberger, S. Galliani, T. Sattler, K. Schindler, M. Pollefeys, and A. Geiger, “A multi-view stereo benchmark with high-resolution images and multi-camera videos,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2538–2547.
  • [42] A. Knapitsch, J. Park, Q.-Y. Zhou, and V. Koltun, “Tanks and temples: Benchmarking large-scale scene reconstruction,” ACM Transactions on Graphics, vol. 36, no. 4, pp. 78:1–78:13, 2017.
  • [43] X. Gao, S. Shen, Z. Hu, and Z. Wang, “Ground and aerial meta-data integration for localization and reconstruction: A review,” Pattern Recognition Letters, 2018.
  • [44] X. Gao, S. Shen, Y. Zhou, H. Cui, L. Zhu, and Z. Hu, “Ancient chinese architecture 3D preservation by merging ground and aerial point clouds,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 143, no. 9, pp. 72–84, 2018.
  • [45] H. H. Vu, P. Labatut, J. P. Pons, and R. Keriven, “High accuracy and visibility-consistent dense multiview stereo,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 5, pp. 889–901, 2012.
  • [46] A. Cohen, J. L. Schönberger, P. Speciale, T. Sattler, J.-M. Frahm, and M. Pollefeys, “Indoor-outdoor 3D reconstruction alignment,” in European Conference on Computer Vision (ECCV), 2016, pp. 285–300.
  • [47] M. Deserno, “How to generate equidistributed points on the surface of a sphere,” Max-Planck-Institut für Polymerforschung, Technical Note, 2004.
  • [48] R. Pless, “Using many cameras as one,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2003, pp. II–587–93.
  • [49] G. H. Lee, F. Faundorfer, and M. Pollefeys, “Motion estimation for self-driving cars with a generalized camera,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 2746–2753.
  • [50] X. Gao, L. Hu, H. Cui, S. Shen, and Z. Hu, “Accurate and efficient ground-to-aerial model alignment,” Pattern Recognition, vol. 76, no. 4, pp. 288–302, 2018.
  • [51] J. Canny, “A computational approach to edge detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–698, 1986.
  • [52] M. A. Fischler and R. C. Bolles, “Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981.
  • [53] S. Umeyama, “Least-squares estimation of transformation parameters between two point patterns,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 4, pp. 376–380, 1991.
  • [54] B. Lin, T. Tamaki, B. Raytchev, K. Kaneda, and K. Ichii, “Scale ratio ICP for 3D point clouds with different scales,” in IEEE International Conference on Image Processing (ICIP), 2013, pp. 2217–2221.
  • [55] Y. Chen and G. Medioni, “Object modelling by registration of multiple range images,” Image and Vision Computing, vol. 10, no. 3, pp. 145–155, 1992.
  • [56] Q.-Y. Zhou and V. Koltun, “Color map optimization for 3D reconstruction with consumer depth cameras,” ACM Transactions on Graphics, vol. 33, no. 4, pp. 155:1–155:10, 2014.