On-device Scalable Image-based Localization

02/10/2018 ∙ by Ngoc-Trung Tran, et al. ∙ Singapore University of Technology and Design

We present the scalable design of an entire on-device system for large-scale urban localization. The proposed design integrates compact image retrieval and 2D-3D correspondence search to estimate the camera pose in a city region of extensive coverage. Our design is GPS agnostic and does not require a network connection. The system explores the use of an abundant dataset: Google Street View (GSV). To overcome the resource constraints of mobile devices, we carefully optimize the system design at every stage: we use state-of-the-art image retrieval to quickly locate candidate regions and limit candidate 3D points; we propose a new hashing-based approach for fast computation of 2D-3D correspondences and a new one-many RANSAC for accurate pose estimation. The experiments are conducted on benchmark datasets for 2D-3D correspondence search and on a database of over 227K GSV images for the overall system. Results show that our 2D-3D correspondence search achieves state-of-the-art performance on some benchmark datasets and that our system can accurately and quickly localize mobile images; the median error is less than 4 meters and the average processing time is less than 10 s on a typical mobile device.


I Introduction

Accurately estimating the camera pose is a fundamental requirement of many applications, including robotics, augmented reality, autonomous vehicle navigation, and location recognition. Using visual/image sensors (e.g., cameras) is advantageous when developing such localization systems because they provide rich information about the scene. While sensor data obtained from GPS (Global Positioning System), WiFi, and Bluetooth can also be used, these have their limitations. The accuracy of GPS sensors is highly dependent on the surrounding environment: GPS-based localization performs poorly in downtown areas and urban canyons, where the localization error can reach 30 m or more [chen-cvpr-2011]. Moreover, GPS information is often unavailable indoors. Because GPS relies on weak satellite signals, it can also be denied, jammed, or spoofed, and is thus unsuitable for secure applications. Localization systems using WiFi and Bluetooth can be considered, but these signals are not always available in outdoor environments. Therefore, it is important to investigate image-based localization systems that do not require GPS/Bluetooth/WiFi support.

State-of-the-art methods for image-based localization [li-eccv-2012, svarm-cvpr-2014, sattler-pami-2016] leverage 3D models of the scene. These 3D models are often pre-built from image datasets using advanced Structure-from-Motion (SfM) [snavely-siggraph-2006]. Such 3D model-based localization methods are memory- and computation-intensive, and it is challenging to employ them on resource-constrained mobile devices [zhou:2018].

The main goal of our work is to develop a large-scale localization system that runs entirely on a mobile device. We address the following main challenges: the constrained memory and computational resources of mobile devices, the requirement of high localization accuracy, and extensive localization coverage. Previous work has not addressed all of these challenges in a single solution. In particular, some previous work has focused on improving accuracy [li-eccv-2010, li-eccv-2012, svarm-cvpr-2014, sattler-pami-2016]. Other work has proposed systems on mobile devices that require client-server communication due to high computational requirements [arth-ismar-2011, ventura-tvcg-2014, middelberg-eccv-2014]. Some work has researched on-device systems, but these cover only small areas due to memory usage [lim-cvpr-2012]. To address all of the challenges, our paper makes novel contributions in both system design and the component image-processing algorithms.

Contributions in system design: To address the above challenges, we propose a new system design that leverages the advantages of image retrieval and 3D model-based localization.

  • In previous work [majdik-iros-2013, zhang-3dpvt-2006, zamir-eccv-2010], image retrieval has been applied for localization. The issue with these approaches is localization accuracy. In particular, the location of the query is estimated through the geometric relationship between the query and the retrieved images, so the accuracy depends on the performance of image retrieval. While some recent work has applied deep learning for image retrieval [hoang:2017, do:2017, do:2016a, do:2016b], applying it on resource-constrained mobile devices is challenging. We have compared the accuracy of image-retrieval-based localization, and the results suggest that the accuracy could be inadequate (see Fig. 19 in our experiments).

  • 3D model-based methods can achieve good localization accuracy [li-eccv-2012, sattler-pami-2016]. However, these methods are not scalable: the memory required to store the 3D point cloud of a large area is enormous. Furthermore, it is difficult to maintain a large 3D model: updates in the city (e.g., newly constructed buildings) require substantial effort to re-build the model, even with recent advances in SfM [snavely-siggraph-2006, wu-3dv-2013].

Our proposed system design leverages the scalability advantage of image retrieval and the accuracy of 3D model-based localization. We propose to divide the region into sub-regions and construct 3D sub-models for them. Sub-models are small and easier to construct, and multiple sub-models can be built in parallel; individual sub-models can also be updated without re-building the others. Given a query image, our system applies image retrieval to identify the related sub-models, and then performs the 2D-3D correspondence search on those sub-models. Only the related sub-models need to be loaded into internal memory for processing, so the internal memory requirement is small. Note that the work in [arth-ismar-2009] also partitions data/models into smaller parts; however, it requires GPS/WiFi or manual input to identify the relevant partitions.
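As a hedged sketch of this coarse-to-fine design, the flow might look as follows; all names here are illustrative stand-ins, not the system's actual implementation:

```python
# Toy sketch of the retrieve-then-localize design (assumed names, not the
# paper's code): image retrieval selects candidate sub-models, and only those
# sub-models are searched, so memory stays bounded by sub-model size.

def candidate_submodels(retrieved_ids, image_to_submodel):
    """Map top retrieved image ids to the sub-models they helped reconstruct."""
    return {image_to_submodel[i] for i in retrieved_ids}

def localize(retrieved_ids, image_to_submodel, pose_solver):
    """Run 2D-3D search + RANSAC per candidate sub-model; keep the pose
    with the most inliers."""
    best_pose, best_inliers = None, 0
    for sid in candidate_submodels(retrieved_ids, image_to_submodel):
        pose, inliers = pose_solver(sid)
        if inliers > best_inliers:
            best_pose, best_inliers = pose, inliers
    return best_pose

# Toy usage: images 0-2 built sub-model "A", images 3-5 built sub-model "B".
mapping = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
pose = localize([1, 4], mapping,
                lambda sid: ((sid, "pose"), 10 if sid == "B" else 5))
```

Because each `pose_solver` call touches only one sub-model, the peak memory footprint is that of a single sub-model rather than the whole city model.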

Contributions in algorithms: Furthermore, we make two main contributions that reduce the processing time and improve the accuracy of the 2D-3D correspondence search. First, we propose a cascade hashing-based search with re-ranking using Product Quantization (PQ). Second, we propose a new one-many (1-M) RANSAC. The motivation for our 1-M RANSAC is as follows: building facades usually have many repetitive elements (e.g., windows, balconies). These repetitive elements are similar in appearance, and the corresponding local descriptors are almost identical. This complicates the feature correspondence search: the correct correspondences may not be top-ranked, and they are mistakenly rejected by conventional techniques (see Fig. 4 for some examples). This is an important issue for image-based localization. The goal of our proposed 1-M RANSAC is to reduce the rejection of correct correspondences that are not top-ranked, while requiring computational complexity similar to conventional RANSAC.

Overall, through extensive experiments on workstations and mobile devices, we demonstrate that our proposed image-based localization system is faster, requires less memory, and is more accurate compared to other state-of-the-art methods.

In addition, we demonstrate our system on street-view images from Google Street View (GSV) [anguelov-joc-2010]. GSV images can potentially be leveraged for practical applications that require extensive coverage of many cities in the world. We investigate the potential of the GSV dataset for localization, which is important for practical localization systems. While a number of prior works have built their systems on GSV [majdik-iros-2013, liu-acmm-2012, taneja-accv-2014, agarwal-iros-2015], our work is different: it focuses on camera pose estimation of images in a large-scale dataset using mobile devices. Note that GSV is a challenging dataset for pose estimation; common issues include a low sampling rate, distortion, co-linear cameras, wide baselines, occluding objects (trees, vehicles), and query images taken with different devices at different times and under different conditions (distortion, illumination). Nevertheless, our results on a large GSV image dataset show that, with our proposed system design, new hashing-based cascade 2D-3D search, and new one-many RANSAC, we can achieve a median error of less than 4 m with an average processing time of less than 10 s on a typical mobile device.

II Related Works

II-A Image-based Localization

Early works on image-based localization can be divided into two main categories: the retrieval-based approach and the 3D model-based approach (or direct search approach). Retrieval-based methods [zhang-3dpvt-2006, nister-cvpr-2006, zamir-eccv-2010, majdik-iros-2013, qian-tmm-2017] are closely related to image retrieval: query features are matched against geo-tagged database images, resulting in a set of images similar to the query. The query pose [zhang-3dpvt-2006, majdik-iros-2013], GPS position [zamir-eccv-2010, li-tmm-2013], or POI (Places of Interest) [qian-ieee-2017, qian-tip-2018] can then be inferred from these references. This approach depends highly on the accuracy of image retrieval, as it does not utilize the geometric information of 3D models. In contrast, the model-based approach directly performs 2D-3D matching between the 2D features of the query image and the 3D points of the model. A 3D model, which is a set of 3D points, is constructed from a given set of 2D images using modern Structure-from-Motion (SfM) approaches, e.g., [snavely-siggraph-2006]. This approach achieves more reliable results than the retrieval-based approach because it imposes stronger geometric constraints; moreover, it holds more information about the 3D structure of the scene. The camera pose can then be computed from the 2D-3D correspondences by RANSAC with the Direct Linear Transform (DLT) algorithm [hartley2003multiple].

Representative works of the 3D model-based approach include [irschara-cvpr-2009, li-eccv-2010, sattler-iccv-2011, sattler-eccv-2012, li-eccv-2012]. [irschara-cvpr-2009] uses SfM models as the basis for localization: it first performs image retrieval and then computes 2D-3D matches between the 2D features in the query and the 3D points visible in the top retrieved images; synthetic views of 3D points are generated to improve image registration. [li-eccv-2010] compresses the 3D model and prioritizes its 3D points (given prior knowledge from the visibility graph) in the 3D-2D correspondence search, which allows "common" views to be localized quickly. [sattler-iccv-2011] proposes an efficient prioritization scheme that stops the 2D-3D direct search early once a sufficient number of correspondences has been detected. [sattler-eccv-2012, li-eccv-2012] propose bidirectional searches, from 2D image features to 3D points and vice versa; this approach can recover some matches lost to the ratio test.

A recent trend in 3D model-based localization shifts the task of finding correct correspondences from the matching step to the pose estimation step by leveraging geometric cues. [svarm-cvpr-2014] proposes an outlier filter that assumes a known direction of the gravitational vector and a rough estimate of the ground plane in the 3D model; the pose estimation problem can then be cast as a 2D registration problem. Following the same setup as [svarm-cvpr-2014], [zeisl-iccv-2015] proposes a filtering strategy based on Hough voting with linear complexity. To reduce the computational time, the authors exploit a verification step using local feature geometry, such as viewing-direction constraints or the scale and orientation of 2D local features, to reject false matches early, before the voting. [camposeco-cvpr-2017] proposes a two-point formulation to estimate the absolute camera position. This solver combines the triangulation constraint of the viewing direction with toroidal constraints, as the camera is known to lie on the surface of a torus.

Rather than explicitly estimating the camera pose from 2D-3D matching, recent works have applied deep learning to this problem [kendall-cvpr-2015, melekhov-iccvw-2017, kendall-cvpr-2017, walch-iccv-2017]. These methods directly learn to regress the camera pose (e.g., 6 Degrees of Freedom (DoF)) from images. However, this approach may require further research to achieve camera pose accuracy comparable to the 3D model-based approach. Moreover, applying these methods on resource-constrained mobile devices is challenging.

II-B On-device Systems

All 3D model-based methods require a massive amount of memory to store SIFT descriptors. Due to memory constraints, loading a large 3D model into memory to perform the correspondence search is impractical. Some earlier works tried to build localization systems that run on mobile devices. [arth-ismar-2009] keeps the 3D model out of core and manually divides it into multiple segments that fit the memory capacity of a mobile phone. However, this work is confined to small workspaces and requires an initial query location provided by WiFi, GPS, or manual input. The work was extended to outdoor localization [arth-ismar-2011], but prior knowledge of the coarse location, or relevant portions of pre-partitioned databases downloaded over a wireless network, is still needed. [ventura-tvcg-2014] and [middelberg-eccv-2014] employ client-server architectures. These methods first estimate the camera pose on the device, then improve the estimate by aligning it with a global model to avoid drift. While [ventura-tvcg-2014] keeps part of the global model in the device's memory to speed up matching, [middelberg-eccv-2014] reconstructs its own map of the scene and aligns it using the global pose received from an external server. [lim-cvpr-2012] uses Harris-corner detectors and extracts two binary features for tracking and 2D-3D matching; it avoids excess computation by matching only over a small batch of tracked keypoints. [lynen-rss-2015] implements fast pose estimation and tracking entirely on the device, using the Inverted Multi-Index (IMI) [babenko-cvpr-2012] to compress and index 3D keypoints so that the 3D model fits in device memory. However, this scheme may eliminate 3D points that are necessary to localize many difficult queries.

II-C Using Street View Images for Localization

One of the difficulties in developing large-scale image-based localization is data collection, where real-world ground-truth data, e.g., camera poses or GPS, are required. Several on-device systems [arth-ismar-2011, ventura-tvcg-2014, middelberg-eccv-2014, lim-cvpr-2012, lynen-rss-2015] had to collect their own datasets for experiments, which are usually confined to small areas. Mining images from online photo collections like Flickr [snavely-siggraph-2006] is an attractive solution; however, it is challenging due to the noise and distortions in real-world images, and the coverage is often limited to popular places, e.g., city landmarks. [chen-cvpr-2011] used camera-mounted surveying vehicles to harvest street-level data in San Francisco, and published a dataset of 150k high-resolution panoramic images of the city to the community. [majdik-iros-2013] uses GSV images to localize a UAV by generating virtual views and matching images under strong viewpoint changes. [taneja-accv-2014] tracks vehicles inside the structure of the street-view graph with a Bayesian framework; this system requires compass measurements and fixed cameras, under many assumptions on the video capturing conditions. [agarwal-iros-2015] tracks the pose of a camera from a short stream of images and geo-registers the camera by including GSV images in the local reconstruction of the image stream. Nearby panoramic images are determined by image retrieval, restricted to locations inferred by GPS or cellular networks within a surrounding 1 km area.

III Proposed System

We first provide an overview of our proposed design for an on-device large-scale localization system that overcomes the memory and computation constraints of a typical mobile device. Then, we discuss our main contributions to the 2D-3D correspondence search, which speed up the system.

III-A On-device localization system

Fig. 1: Overview of our proposed system with three main components. Image retrieval (IR) identifies reference images that are similar to the query image. The retrieved images indicate relevant 3D models. Then, camera pose is calculated by aligning the query image to these 3D models using cascade search and one-many (1-M) RANSAC.

We design our system in a hierarchical structure: we first divide the scene into smaller parts or segments; then we index them using an image-retrieval method to quickly find the possible segments of the scene to which the query image belongs; finally, we localize the camera pose of the query within these selected segments using the 3D model-based approach. Our proposed system design aims to overcome the constraints of memory and computation (while preserving competitive accuracy) when using a large-scale dataset on a typical mobile device. We demonstrate our overall system on a large collection of GSV images for urban localization. Our system has three main components (Fig. 1): (i) the first component is the set of 3D models representing the scene. Instead of representing the entire 3D scene with a single model, we divide the scene into smaller segments and construct small 3D models from those segments. (ii) The second component uses image retrieval to identify images similar to the query (references), as well as the candidate 3D models associated with those references. In this work, we apply the image-retrieval method proposed in [jegou-cvpr-2014], which is memory-efficient, fast, and accurate. (iii) The third component is the 2D-3D correspondence search and geometric verification. We propose a new cascade search and the one-many RANSAC to improve localization accuracy and reduce latency; these are discussed in more detail below. We use SIFT [lowe-ijcv-2004] features as the input for both image retrieval and 2D-3D correspondence search, as SIFT has been demonstrated to be reliable and efficient in various applications: 3D reconstruction, image retrieval, and image-based localization. Note that other features can also be used in our proposed pipeline.

III-A1 Scene representation using small 3D models

We demonstrate our overall system on a collection of Google Street View (GSV) [anguelov-joc-2010] images. GSV is a very large image dataset: constructing a single, large 3D model from it is computationally expensive, and it could be difficult to load such a model into the internal memory of a mobile device. In addition, representing the scene with a single model is inflexible: it is difficult to update a large model when some region of the city changes (e.g., newly constructed buildings). Therefore, we divide the scene into smaller segments and build small 3D models for the individual segments (Fig. 16). Reconstruction of the small 3D models can be performed in parallel, which reduces the processing time needed to build the scene models. Moreover, provided that the corresponding small 3D models can be correctly identified, localization using small 3D models can achieve better accuracy, as there are fewer distracting 3D points; localization time is also reduced. We use 8-10 consecutive GSV placemarks to define a segment of the scene. As we sample 60 street-view images per placemark, there are 480-600 images per segment. These numbers are determined through the experiments in Section IV-B1. We use SIFT to detect keypoints in the image datasets and incremental SfM [snavely-siggraph-2006, wu-3dv-2013] to reconstruct a 3D model from the images of each segment; see examples of our 3D models in Fig. 2. Note that instead of the original SIFT descriptors of these 3D models, their hash codes and quantized representations are stored. This reduces the memory requirement and speeds up the search, as discussed in Section III-B.

Fig. 2: Examples of 3D models reconstructed by SfM.

III-A2 Model indexing by image retrieval

We also use image retrieval (IR) in our framework. However, in contrast to the retrieval-based approach, whose localization is sensitive to the retrieved list, we use IR only to identify the list of candidate 3D models for localizing the query image. In our case, IR serves as a coarse search that limits the search space for the second step (the 2D-3D correspondence search).

Let {I_1, ..., I_N} be the images in the dataset. If image I_i was used to reconstruct 3D model M_j, we set o(i, j) = 1; otherwise o(i, j) = 0. Given a query image q, image retrieval seeks the top K similar images from the dataset, namely R(q). M_j is a candidate model if there exists I_i in R(q) such that o(i, j) = 1. Note that IR may identify multiple candidate models for localizing the query image; in this case, the camera pose is estimated using the 3D model with the maximum number of 2D-3D correspondences (Section III-B). The image retrieval step is summarized as follows: first, we extract SIFT features [lowe-ijcv-2004] and embed them into a high-dimensional space using Triangulation Embedding (T-embedding) [jegou-cvpr-2014]. As a result, each image has a fixed-length T-embedding feature as a discriminative vector representation; we set the feature size to 4096. To reduce the memory requirement and improve search efficiency, we apply Product Quantization (PQ) with an Inverted File (IVFADC) [jegou-pami-2011] to the T-embedding features; details can be found in [jegou-pami-2011, jegou-cvpr-2014]. The PQ codes are compact, so we can fit the PQ codes of all 227K reference images into the RAM of a mobile device. The IR processing time over the 227K reference images is less than 1 s on a mobile device (with GPU acceleration). Note that 227K images correspond to approximately 15 km of road coverage.
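A rough back-of-envelope check illustrates why the compressed representation fits on-device while raw T-embedding features would not; the 64-byte PQ code per image is an assumed size for illustration (the actual size depends on the PQ configuration):

```python
# Back-of-envelope memory estimate: raw 4096-D float32 T-embedding features
# vs. compact PQ codes. The 64-byte code size is an assumption, not the
# paper's reported configuration.
n_images = 227_000
raw_bytes = n_images * 4096 * 4   # one float32 T-embedding per image
pq_bytes = n_images * 64          # assumed PQ code per image

raw_gib = raw_bytes / 2**30       # several GiB: beyond a mobile RAM budget
pq_mib = pq_bytes / 2**20         # tens of MiB: easily fits in device RAM
```

Under these assumptions, the raw features need roughly 3.5 GiB while the PQ codes need about 14 MiB, which is why the entire index can stay resident in memory.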

Using IR to index 3D models is memory-efficient because only a few models are processed at a time. On the other hand, the 2D-3D correspondence search remains expensive due to the matching between the query and the models. This motivates our proposed correspondence search, which aims to reduce this computational complexity.

III-B Fast 2D-3D correspondence search

Our proposed method for 2D-3D correspondence search, namely Cascade Correspondence Search (CCS), consists of two parts: (i) an efficient 2D-3D matching that seeks the top-ranked list of nearest neighbors in a cascade manner, and (ii) a fast and effective RANSAC that boosts accuracy by exploiting inliers from a large number of correspondences.

Fig. 3: The pipeline of our cascade search. It consists of three main steps: coarse search (16-bit LUT), refined search (128-bit) and precise search (16-byte). SIFT descriptors (128 bytes) are compressed into 128-bit binary vectors. These vectors are used in the coarse search to quickly identify a short list of candidates. These candidates are then examined in the precise search with PQ. Precise search identifies correspondences for the next step, i.e., RANSAC.

III-B1 Cascade search for 2D-3D matching

Our method leverages the efficient computation of the Hamming distance. We follow the Pigeonhole Principle on binary codes [norouzi-cvpr-2012] to further accelerate the search. The key idea is the following [norouzi-cvpr-2012]: a binary code h, comprising b bits, is partitioned into m disjoint sub-vectors h^(1), ..., h^(m), each of b/m bits. For convenience, we assume that b is divisible by m. When two binary codes h and g differ by at most r bits, then at least one of the sub-vectors, say h^(k), must differ by at most ⌊r/m⌋ bits. Formally:

‖h − g‖_H ≤ r ⟹ ∃k ∈ {1, ..., m} : ‖h^(k) − g^(k)‖_H ≤ ⌊r/m⌋, (1)

where ‖·‖_H is the Hamming distance.
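This pigeonhole rule can be checked with a minimal pure-Python illustration (toy parameters, not the system's code):

```python
# If two b-bit codes differ in at most r bits, then at least one of their m
# corresponding sub-codes differs in at most floor(r/m) bits.

def split(code, b, m):
    """Split a b-bit integer into m sub-codes of b // m bits each."""
    s = b // m
    mask = (1 << s) - 1
    return [(code >> (s * k)) & mask for k in range(m)]

def hamming(a, b):
    return bin(a ^ b).count("1")

b, m, r = 128, 8, 7
q = (1 << b) - 1          # all-ones query code
p = q ^ 0b1111111         # flip 7 bits: differs from q in exactly r bits
# With floor(r/m) = 0, at least one sub-code pair must be identical:
assert any(hamming(x, y) <= r // m
           for x, y in zip(split(q, b, m), split(p, b, m)))
```

All r differing bits fall inside one 16-bit sub-code here, so the other seven sub-codes match exactly, which is what the lookup-table search below exploits.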

The pipeline of our proposed 2D-3D matching method is shown in Fig. 3. The method includes three main steps: coarse search, refined search, and precise search. The first two steps quickly filter out a short list of candidates from the 3D points' descriptors; the last step precisely determines the top-ranked list. Let d be the feature dimension of the SIFT descriptors. Given a 3D model and its points' descriptors, each descriptor x ∈ R^d is pre-mapped into a binary vector in Hamming space: y = sgn(Wx), where W is the transformation matrix, which can be learned via the objective minimization:

min_W ‖B − WX‖_F^2, (2)

where ‖·‖_F is the Frobenius norm, and X and B are the matrices of all point descriptors of the 3D model (one descriptor per column) and their binarized codes after transformation, respectively. We solve the optimization problem with ITQ [gong-cvpr-2011]. Given the learned hash function, all descriptors of the model are mapped into binary vectors, and we store those vectors instead of the original SIFT descriptors.
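The mapping step can be sketched as follows; here W is a random projection used as a stand-in for the ITQ-learned transformation of Eq. (2), purely for illustration:

```python
import random

random.seed(0)
d, b = 128, 128  # SIFT dimension and binary code length
# Random W as a placeholder for the transformation learned by ITQ in Eq. (2).
W = [[random.gauss(0, 1) for _ in range(d)] for _ in range(b)]

def binarize(x):
    """Map one d-dim descriptor x to a b-bit code: bit i is 1 iff w_i . x > 0."""
    code = 0
    for i, row in enumerate(W):
        if sum(w * xj for w, xj in zip(row, x)) > 0:
            code |= 1 << i
    return code

# Only these compact codes, not the 128-byte SIFT descriptors, are stored.
x = [random.gauss(0, 1) for _ in range(d)]
code = binarize(x)
```

A learned W (as in ITQ) preserves neighborhood structure better than this random stand-in, which is the reason given below for preferring ITQ over LSH.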

Coarse search: We follow principle (1) to create a LUT (lookup table) based data structure for fast search. We split each b-bit binary vector into m sub-vectors of s bits each (s = b/m). In our work, we only select candidates that differ from the query by at most r bits, with ⌊r/m⌋ = 0. In other words, a candidate's binary vector is potentially matched to the query's iff at least one pair of their sub-vectors is exactly the same. For training, we create m LUTs, where LUT_k is for the k-th sub-vector, and each LUT comprises 2^s buckets. Each bucket links to a point-id list of the 3D points assigned to it according to their binary sub-vectors. For searching, a query descriptor is first mapped into Hamming space and divided into m sub-vectors as above. We then look up LUT_k to find the bucket that matches the k-th sub-vector of the query's binary code y_q. This results in the point-id list L_k:

L_k = LUT_k(y_q^(k)). (3)
Method(m, s)   #Candidates   LUT size
ITQ(4, 32)     –             64 GB
ITQ(8, 16)     5K            1048 KB
ITQ(16, 8)     –             4096 B
LSH(8, 16)     –             1048 KB
TABLE I: The trade-off between the number of candidates and the size of the LUTs, measured on the Dubrovnik dataset.

Next, we merge the point-id lists L_1, ..., L_m to form the final coarse-search list L_c. By using LUTs, the search complexity is constant when retrieving each point-id list L_k. This step results in a short list of candidates for the next search. It is important to choose appropriate values of m and s to trade off the memory requirement of the LUTs against computation time (which depends on the length of the list that requires Hamming-distance refinement). As shown in Table I, we map descriptors to binary codes using ITQ with different settings and also replace it with an LSH [charikar-acm-2002] based scheme [cheng-cvpr-2014]. ITQ(4, 32) is impractical due to the over-large size of the LUTs. ITQ(16, 8) results in too many candidates, which slows down the refined search. ITQ(8, 16) is the best option: it produces a short list and requires a small amount of LUT memory (excluding the overhead memory of descriptor indexing). Using multiple lookup tables with LSH [cheng-cvpr-2014] results in a longer candidate list than ITQ, which means that learning the hash mapping from the data with ITQ is more effective than the random projections of LSH in our context. This is consistent with our experiments in Fig. 8.
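A toy version of this coarse search, with an assumed in-memory structure for illustration, looks like the following:

```python
# Toy coarse search: m LUTs keyed by s-bit sub-codes; a 3D point is a
# candidate iff it shares at least one exact sub-code with the query,
# in the spirit of Eq. (3).
from collections import defaultdict

def build_luts(codes, m, s):
    """codes: list of (point_id, code as int) -> m dicts: sub-code -> ids."""
    mask = (1 << s) - 1
    luts = [defaultdict(list) for _ in range(m)]
    for pid, c in codes:
        for k in range(m):
            luts[k][(c >> (s * k)) & mask].append(pid)
    return luts

def coarse_candidates(q, luts, m, s):
    """Union of the point-id lists L_k hit by the query's sub-codes."""
    mask = (1 << s) - 1
    cand = set()
    for k in range(m):
        cand.update(luts[k].get((q >> (s * k)) & mask, []))
    return cand

# Toy 16-bit codes with m = 2 sub-codes of s = 8 bits.
luts = build_luts([(0, 0xABCD), (1, 0x12CD), (2, 0x1234)], m=2, s=8)
cands = coarse_candidates(0xAB34, luts, 2, 8)  # shares a sub-code with 0 and 2
```

Each lookup is a constant-time dictionary access, matching the constant retrieval complexity claimed above.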

Refined search: In this step, we use the full b-bit code to refine the list L_c and pick out a shorter list. First, we exhaustively compute the Hamming distance between the b-bit code of the query and that of each candidate. Then, candidates are re-ranked according to these distances. Computing the Hamming distance is efficient because we can leverage low-level machine instructions (XOR, POPCNT). On our machine, computing the Hamming distance of two 128-bit vectors is significantly faster (about 30x) than the Euclidean distance of SIFT vectors, and about 4x faster than ADC (Asymmetric Distance Computation) [jegou-pami-2011]. Furthermore, the Hamming distance of 128-bit codes has the limited range [0, 128], which allows us to build an online LUT during the refined search, accelerating the selection of the top candidates. However, this limited range prevents us from ranking candidates precisely, which leads to the last step of our pipeline.
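The XOR-plus-popcount ranking can be illustrated on toy data (in CPython, `a ^ b` followed by a popcount is what compiles down to XOR/POPCNT-style instructions in a native implementation):

```python
# Refined search in miniature: rank coarse candidates by full-code
# Hamming distance to the query.

def hamming(a, b):
    return bin(a ^ b).count("1")

q = (1 << 128) - 1                              # query's 128-bit code
candidates = {3: q, 7: q ^ 0b1, 9: q ^ 0b111}   # point-id -> binary code
ranked = sorted(candidates, key=lambda pid: hamming(q, candidates[pid]))
# nearest first: point 3 (0 differing bits), then 7 (1 bit), then 9 (3 bits)
```

Because distances fall in [0, 128], a native implementation can bucket candidates by distance instead of sorting, which is the online-LUT trick mentioned above.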

Precise search: The purpose of the precise search is to rank the short list better, so that we can choose the best candidates and remove outlier matches before applying geometric verification. Furthermore, we can use the candidates' order as useful prior information; it plays an important role in reducing the complexity of pose estimation (discussed in the section on geometric verification). We use the Euclidean distance approximated by the ADC of PQ [jegou-pami-2011]. A match between a query feature and a 3D point is established if the distance ratio from the query to the first and second candidates passes the ratio test [lowe-ijcv-2004]; otherwise, the candidates are rejected as outliers. The sub-quantizers of PQ are trained once on an independent dataset, SIFT1M [lowe-ijcv-2004], and used in all experiments. In this step, we need to store PQ codes in addition to the hash codes of the two previous steps.
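A much-simplified sketch of the ADC distance and the ratio test follows; the tiny 1-D codebooks are made up for illustration, whereas real PQ uses learned sub-quantizers over descriptor sub-vectors:

```python
# ADC: the query stays uncompressed, while each database point is a tuple of
# centroid indices; distances are summed over per-subspace codebook lookups.
codebooks = [[0.0, 1.0, 2.0], [0.0, 1.0, 2.0]]  # toy centroids, 2 subspaces

def adc_dist(query, code):
    return sum((q - codebooks[k][c]) ** 2
               for k, (q, c) in enumerate(zip(query, code)))

def passes_ratio_test(query, codes, thresh=0.8):
    """Accept a match only if the best candidate clearly beats the second."""
    d = sorted(adc_dist(query, c) for c in codes)
    return d[0] < thresh * d[1]

ok = passes_ratio_test((0.1, 0.1), [(0, 0), (2, 2)])         # distinctive
ambiguous = passes_ratio_test((0.1, 0.1), [(0, 0), (0, 0)])  # repetitive
```

The second call mimics the repetitive-facade case: two near-identical candidates tie, so the plain ratio test rejects the match, which is exactly the failure mode the 1-M RANSAC below is designed to recover from.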

In addition to [norouzi-cvpr-2012], a form of cascade hashing search has been applied to image matching [cheng-cvpr-2014]. In this work, we apply it to 2D-3D matching and propose several improvements beyond [norouzi-cvpr-2012, cheng-cvpr-2014]:

  • In our work, since the 3D models are built off-line and the SIFT descriptors of the 3D points are available during off-line processing, we propose to train an unsupervised, data-dependent hash function to improve matching accuracy. [norouzi-cvpr-2012, cheng-cvpr-2014] use Locality Sensitive Hashing (LSH) [charikar-acm-2002], which makes no prior assumption about the data distribution. In contrast, we apply Iterative Quantization (ITQ) [gong-cvpr-2011], in which the hash function is learned from the data.

  • We use a single ITQ hash function to map each 128-byte SIFT descriptor to a 128-bit binary vector. We split the long 128-bit code into short sub-codes to construct lookup tables (LUTs) for the coarse search, and use the full 128-bit vector for the refined search. In contrast, [cheng-cvpr-2014] creates multiple lookup tables using LSH with short codes. These tables are independent and built from random projection matrices, so they return a long list of candidates, slowing down the subsequent refined search (discussed in Table I).

  • We add a precise search layer to the hashing scheme and propose to use Product Quantization (PQ) [jegou-pami-2011], a fast and memory-efficient method, for this step. Consequently, our work combines hashing and PQ in a single pipeline to leverage their strengths: the binary hash codes enable fast indexing via Hamming-distance comparison, while PQ achieves better matching accuracy; both are compressed descriptors. Without this precise search step, accuracy is significantly reduced. However, using the original SIFT descriptors for this step [cheng-cvpr-2014] requires a considerable amount of memory (128 bytes per SIFT descriptor). As will be discussed, using PQ our method achieves similar accuracy while requiring only 16 bytes per descriptor, reducing the memory requirement by about 8 times compared to the original SIFT. In our experiments, we compare our search method to that of [cheng-cvpr-2014], using PQ for the last step in both methods for a fair comparison.

III-B2 Prioritization and pose estimation

In addition to the above improvements to the cascade hashing search, we propose a prioritization scheme and a fast one-many RANSAC to significantly speed up the search while preserving competitive accuracy.

Prioritization: Finding all matches between 2D features and 3D points to infer the camera pose is expensive because the query image can contain thousands of features. In practice, the method can stop early once a sufficient number of matches has been found [sattler-iccv-2011]. Therefore, we perform a prioritized search on the descriptors of the 2D image as follows: given a query descriptor, the coarse search returns a point-id candidate list; we then continue the refined and precise search starting with the query features that have shorter candidate lists. A correspondence is established if the nearest candidate passes the ratio test in the precise search, and we stop the search once a sufficient number of correspondences has been found. This is an important proposed technique: in our context, it is not necessary to find all 2D-3D correspondences for localization; a certain number of correspondences is sufficient. Results show that this scheme significantly accelerates the system while incurring minimal accuracy degradation. The evaluation is demonstrated on the Dubrovnik dataset in Table III.
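The prioritization can be sketched as follows. It is a minimal illustration: the stopping count `n_stop` and the `match_fn` callback (standing in for the refined + precise search) are hypothetical names, not the paper's values:

```python
def prioritized_search(candidate_lists, match_fn, n_stop=100):
    """candidate_lists: {query_id: [point ids from the coarse search]}.
    match_fn(qid, cands) -> a correspondence or None (refined + precise search)."""
    matches = []
    # Shorter candidate lists are cheaper and usually less ambiguous: do them first.
    for qid in sorted(candidate_lists, key=lambda q: len(candidate_lists[q])):
        m = match_fn(qid, candidate_lists[qid])
        if m is not None:
            matches.append(m)
        if len(matches) >= n_stop:        # early termination: enough correspondences
            break
    return matches

# Toy usage: every query with a non-empty list "matches" its first candidate.
lists = {0: [7, 8, 9], 1: [3], 2: [], 3: [5, 6]}
out = prioritized_search(lists, lambda q, c: (q, c[0]) if c else None, n_stop=2)
print(out)
```

The search visits queries in ascending list-length order and returns as soon as `n_stop` correspondences are collected, so expensive queries with long candidate lists are often never processed.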

Pose estimation by one-many RANSAC:

One of the long-standing problems in correspondence matching is that the ratio test rejects correct matches. The problem is more severe in image-based localization: building facades usually have many repetitive elements (e.g., windows, balconies). These repetitive elements are similar in appearance, and their local descriptors are almost identical; see Fig. 4 for examples. In our work, we propose to retain more potential matching candidates, as a feature in the image may have multiple matching candidates in the 3D model, and to use geometric constraints to filter out the outliers. However, this poses a problem for conventional RANSAC, as iterating over many pairs of candidates is computationally expensive, particularly on resource-constrained mobile devices. Our proposed one-many (1-M) RANSAC is a new solution to this problem: we use a hypothesis set to create the hypothesis model and a verification set to validate the model, and we add a pre-verification step to quickly reject bad hypothesis models. As the later results show, on average our 1-M RANSAC increases the number of correspondences by a factor of two or more. The details of our proposed algorithm are as follows.

Fig. 4: Conventional RANSAC, RANSAC of VisualSFM, and our 1-M RANSAC for image matching.

After 2D-3D matching, traditionally, one query descriptor has at most one 3D point correspondence. These one-one matches are then filtered by geometric constraints, e.g., RANSAC with the 6-point DLT algorithm inside. Empirically, we made two observations: (i) the ratio test tends to reject many good matches, and (ii) good candidates are not always the highest-ranked in the candidate list. This is probably due to repetitive features on buildings, a common issue for localization in urban environments [sattler-pami-2016, zeisl-iccv-215]. Therefore, relaxing the threshold to accept more matches (one-many matches) and filtering wrong matches by geometric verification is a promising solution. Recent works [svarm-cvpr-2014, zeisl-iccv-215] use this approach, but their geometric solvers are too slow for practical applications.

To address this issue, we propose a fast and effective one-many RANSAC as follows. First, we relax the ratio-test threshold to accept more matches and keep multiple candidates per query descriptor: given one query feature, we accept the top candidates of its list whose distances pass the relaxed ratio test against the second smallest distance. However, processing all these matchings leads to an exponential increase in the computational time of RANSAC due to the low inlier rate.

We avoid this issue by generating hypotheses from a subset of the matches. Consequently, we use two different sets of matchings in the hypothesis and verification stages of RANSAC. The first set contains the one-one (1-1) matchings that pass the strict ratio test; the second set contains the one-many (1-M) matchings found by the relaxed threshold mentioned above. We use the first set to generate hypotheses and the second set for verification. We found that using the relaxed threshold and 1-M matchings in verification increases the number of inliers, leading to improved accuracy. We further speed up our method by applying a pre-verification step as in [chum-pami-2008], based on Wald's sequential probability ratio test (SPRT), which quickly rejects "bad" samples before the full verification.
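Building the two sets from the ranked candidate distances can be sketched as follows. The thresholds (0.7 strict, 0.9 relaxed), the list depth, and the exact relaxed criterion (candidates within a factor of the best distance) are illustrative assumptions, not the paper's tuned values:

```python
def split_matches(dists, strict=0.7, relaxed=0.9, max_keep=3):
    """dists: {query_id: ascending candidate distances}.
    Returns (hypothesis set of 1-1 matches, verification set of 1-M matches),
    each as (query_id, candidate_rank) pairs."""
    hyp, ver = [], []
    for qid, d in dists.items():
        if len(d) < 2:
            continue
        if d[0] < strict * d[1]:               # strict ratio test -> 1-1 hypothesis match
            hyp.append((qid, 0))
        for j in range(min(len(d), max_keep)):
            if d[j] < d[0] / relaxed:          # within a relaxed factor of the best
                ver.append((qid, j))           # -> 1-M verification match
    return hyp, ver

# Query 0 is unambiguous (1-1 and 1-M); query 1 fails the strict test but
# contributes two near-tied candidates to the verification set.
hyp, ver = split_matches({0: [0.2, 0.9, 0.95], 1: [0.5, 0.55, 0.6]})
print(len(hyp), len(ver))
```

The hypothesis set stays small and clean for fast sampling, while the verification set keeps the near-tied candidates that repetitive facades produce.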

The setup is as follows: 2D-3D matching finds, for each 2D keypoint query, a ranked list of candidate 3D points from the model; each query may be matched to several candidates. The matchings found with the relaxed ratio threshold form the verification set and, without loss of generality, the candidates of each query are sorted in ascending order of their ADC distances. Our hypothesis set is a subset of the verification set: it contains the 1-1 matchings that pass the strict ratio threshold.

In our algorithm, ε denotes the probability that a random match is consistent with a "good" model; it is given an initial value. δ denotes the probability of a match being consistent with a "bad" model; it is initialized with a small value. The probability of rejecting a "good" sample depends on the decision threshold A (discussed later). Here, H_g is the hypothesis that the model is "good", and H_b is the alternative hypothesis that the model is "bad".

The details of the proposed algorithm are presented in Fig. 5 and consist of three main steps, as follows:

First, in the hypothesis step, a minimal sample of correspondences is drawn at random from the hypothesis set; the sample size is the minimum number of correspondences needed to estimate the model parameters using the 6-point DLT algorithm. The model is a 3D-to-2D projection matrix, which maps 3D coordinates to 2D keypoints on the image plane. The model parameters θ are computed from the random sample and then validated as "good" or "bad" in the pre-verification step. Randomizing samples from the hypothesis set, which is much smaller than the verification set, keeps our RANSAC fast.
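The DLT solver used in this hypothesis step can be sketched as follows: a textbook 6-point DLT in numpy, shown as an illustration of the solver rather than the paper's exact implementation (the sample data and tolerance are arbitrary):

```python
import numpy as np

def dlt_pose(X, x):
    """Estimate a 3x4 projection matrix P from m >= 6 correspondences.
    X: (m, 3) 3D points; x: (m, 2) 2D keypoints."""
    A = []
    for Xw, (u, v) in zip(X, x):
        Xh = np.append(Xw, 1.0)                       # homogeneous 3D point
        # Two rows per correspondence, from x cross (P X) = 0:
        A.append(np.concatenate([np.zeros(4), -Xh, v * Xh]))
        A.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)                       # null-space solution

def project(P, X):
    Xh = np.hstack([X, np.ones((len(X), 1))])
    xh = (P @ Xh.T).T
    return xh[:, :2] / xh[:, 2:3]

# Sanity check: recover a known projection from 6 noise-free correspondences.
rng = np.random.default_rng(1)
P_true = rng.normal(size=(3, 4))
X = rng.uniform(-1, 1, size=(6, 3)) + np.array([0.0, 0.0, 5.0])
x = project(P_true, X)
P_est = dlt_pose(X, x)
# P is recovered only up to scale (and sign): compare normalized matrices.
Pn = P_est / np.linalg.norm(P_est)
Pt = P_true / np.linalg.norm(P_true)
if np.dot(Pn.ravel(), Pt.ravel()) < 0:
    Pn = -Pn
print(np.allclose(Pn, Pt, atol=1e-6))
```

Six correspondences give twelve homogeneous equations for the eleven degrees of freedom of P, so the SVD null vector determines the model from a single minimal sample.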

Second, the pre-verification step further improves the processing speed because it can quickly invalidate a model after a small number of checks: if the model is deemed "bad", it is better to re-generate a new sample than to continue testing. In this step, correspondences from the hypothesis set are checked one by one for consistency with the estimated model θ. A correspondence is consistent with the model when the Euclidean distance between the query keypoint and the 2D projection of its 3D point is smaller than a threshold (e.g., 4 pixels in the published code of ACS (Active Correspondence Search) [sattler-eccv-2012]). The model is pre-verified via a likelihood ratio λ computed from two conditional probabilities: if the observation is not consistent with the model, λ is updated with the ratio (1−δ)/(1−ε) from the previous iteration; otherwise, it is updated with the ratio δ/ε. If λ exceeds the decision threshold A, the model is likely "bad" and the pre-verification stops; otherwise, testing continues. When the model is "bad", the parameters ε and δ may be re-estimated and a new sample is generated in the hypothesis step.
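The SPRT update can be sketched as follows [chum-pami-2008]. The values of ε, δ and A here are illustrative constants; in the algorithm they are estimated adaptively:

```python
def pre_verify(consistent_flags, eps=0.1, delta=0.01, A=100.0):
    """consistent_flags: booleans, one per checked hypothesis correspondence.
    Returns True if the model survives pre-verification, False if rejected early."""
    lam = 1.0
    for ok in consistent_flags:
        if ok:
            lam *= delta / eps                 # consistent: evidence for a "good" model
        else:
            lam *= (1 - delta) / (1 - eps)     # inconsistent: evidence for a "bad" model
        if lam > A:
            return False                       # likelihood ratio exceeds A: reject early
    return True

print(pre_verify([True] * 6))                  # mostly consistent sample survives
print(pre_verify([False] * 60))                # inconsistent sample is rejected early
```

Because δ/ε < 1 and (1−δ)/(1−ε) > 1, consistent observations shrink λ while inconsistent ones grow it, so a bad model crosses the threshold A after only a few dozen checks instead of a full verification pass.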

Third, if the model is likely "good", all correspondences in the verification set are checked against it to locate the inliers. This verification step projects the candidate 3D points onto the 2D image plane and measures their Euclidean distances to the query keypoint; a query counts as an inlier if at least one of its candidates passes this test. The total cost (inlier count) is used to decide whether a new model is accepted or ignored. Validating all tentative matches of a query is important in our RANSAC because the lower-ranked matches still have a chance to be chosen as good correspondences. It is a minor change but improves the accuracy substantially.

Here, C and θ denote the cost (the number of inliers) and the model parameters, respectively. If this cost is better than the optimal cost obtained in previous iterations, the model is a good one: the best cost and model parameters are updated, and the probability ε, the decision threshold A, and the number of iterations are re-computed. The adaptive decision threshold A is computed from the probabilities ε and δ, similar to [chum-pami-2008]; it is used to make one of three decisions for each observation: reject a "bad" model, accept a "good" model, or continue testing. This threshold is estimated using the SPRT theorem [wald-tams-1945].

The expected number of samples tested before a good model is drawn and not rejected is computed from a geometric distribution: more iterations are needed when the probability of accepting a "good" model is low and/or the probability of rejecting a "good" model is high, and vice versa.

1:procedure One-Many-RANSAC(H, V, m) ▷ hypothesis set H, verification set V, sample size m
2:      ε ← initial probability of a match being consistent with a "good" model
3:      δ ← initial probability of a match being consistent with a "bad" model
4:      A ← decision threshold computed from ε and δ
5:      C* ← 0 ▷ best cost so far
6:      l ← 0 ▷ the number of rejected times
7:      k ← 0 ▷ the number of iterations
8:      while k < k_max do
9:            k ← k + 1
10:            I. Hypothesis
11:            Select a random sample of minimum size m from the hypothesis set H.
12:            Estimate the model parameters θ fitting the sample.
13:            II. Pre-verification
14:            λ ← 1
15:            j ← 1
16:            while j ≤ m do
17:                 Let x_j ← 1 if the j-th correspondence is consistent with θ, otherwise x_j ← 0
18:                 λ ← λ · (δ/ε) if x_j = 1, otherwise λ ← λ · ((1−δ)/(1−ε))
19:                 if λ > A then
20:                       bad_model ← true ▷ Reject sample
21:                       break
22:                 else
23:                       j ← j + 1
24:                 end if
25:            end while
26:            if bad_model then
27:                 l ← l + 1
28:                 δ ← Re-estimate
29:                 if δ has changed significantly then
30:                       A ← Update
31:                       k_max ← Update
32:                 end if
33:                 continue
34:            end if
35:            III. Verification
36:            Compute the cost C of θ on the verification set V
37:            if C > C* then
38:                 C* ← C, θ* ← θ ▷ Update good model
39:                 ε ← Update
40:                 A ← Update
41:                 k_max ← Update
42:            end if
43:      end while
44:end procedure
Fig. 5: The algorithm of our proposed RANSAC.

In addition to 2D-3D matching, our idea can also be used for conventional image matching. For example, Fig. 4 shows that for a building with many repetitive features, classical RANSAC and VisualSFM's RANSAC fail, while our 1-M RANSAC still works.

IV Experimental Results

We conduct experiments to validate our CCS method and the overall system. Specifically, we adopt four benchmark datasets: Dubrovnik [li-eccv-2010], Rome [li-eccv-2010], Vienna [irschara-cvpr-2009], and Aachen [sattler-bmvc-2012], to evaluate our correspondence search method and compare it against the state-of-the-art. These four datasets are commonly used in earlier works [li-eccv-2010, sattler-iccv-2011, li-eccv-2012] for evaluating the robustness of 2D-3D matching or 2D-3D correspondence search. The Aachen images were collected at different times of day and across seasons over a two-year period, which makes the dataset well suited for evaluating robustness against such variations. Table II provides some information about these datasets. We then validate our on-device system design on our GSV image collection, which has 227K training images and 576 mobile queries; it is used to evaluate our image retrieval approach as well as the entire system (image retrieval plus correspondence search/localization).

Experiments are conducted on our workstation (Intel Xeon octa-core CPU E5-1260, 3.70GHz, 64GB RAM) and on an Nvidia Shield Tablet K1. We use "mean descriptors" for each 3D point in all experiments. We have three different settings for our method: Setting 1 uses traditional RANSAC (CCS), Setting 2 uses the new 1-M RANSAC scheme (CCS + R), and Setting 3 adds the prioritizing scheme (CCS + P + R). Here, CCS, P and R stand for Cascade Correspondence Search, Prioritizing, and One-Many RANSAC, respectively. In the next sections, we first evaluate our 2D-3D matching method and compare it to earlier works on the benchmark datasets; subsequently, we validate our system design on the GSV dataset.

Dataset #Cameras #3D Points #Descriptors #Queries
Dubrovnik 6,044 1,886,884 9,606,317 800
Rome 15,179 4,067,119 21,515,110 1,000
Aachen 3,047 1,540,786 7,281,501 369
Vienna 1,324 1,123,028 4,854,056 266
TABLE II: Standard datasets for the evaluation of 2D-3D correspondences search.

IV-A Hashing based 2D-3D matching

In this section, we evaluate our cascade search method and compare it against other search methods. We then show the computational improvements (while retaining competitive accuracy) when it is combined with our prioritizing technique and the newly proposed RANSAC algorithm.

IV-A1 Hashing-based Matching

The first experiment determines a good ratio-test threshold for the precise search; it is conducted on the Dubrovnik and Vienna datasets. We use ADC with an Inverted File [jegou-pami-2011], choosing a small coarse quantizer and a large number of visited neighboring cells so that quantization does not significantly affect the overall performance. In this experiment, we fix RANSAC at 5000 iterations to obtain repeatable results over multiple runs. A query image is "registered" if at least twelve inliers are found, as in [li-eccv-2010]. The experiment identifies a good threshold value (Fig. 6).
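The asymmetric distance computation (ADC) at the core of this setup [jegou-pami-2011] can be sketched in numpy. Sizes are toy values and the codebooks here are random stand-ins; the real system trains them on SIFT data and adds an inverted file on top:

```python
import numpy as np

rng = np.random.default_rng(0)
D, M, K = 16, 4, 8                       # vector dim, sub-vectors, codewords per sub-quantizer
ds = D // M
codebooks = rng.normal(size=(M, K, ds))  # one K-word codebook per subspace (illustrative)

def pq_encode(x):
    """Return M codes: the nearest codeword index per sub-vector."""
    return np.array([np.argmin(((codebooks[m] - x[m*ds:(m+1)*ds])**2).sum(1))
                     for m in range(M)], dtype=np.uint8)

def adc_search(q, codes):
    """ADC: distance tables from the *uncompressed* query to every codeword,
    then per-point scores via table lookups over the stored codes."""
    tables = np.stack([((codebooks[m] - q[m*ds:(m+1)*ds])**2).sum(1)
                       for m in range(M)])           # (M, K)
    dists = tables[np.arange(M), codes].sum(axis=1)  # gather + sum per database point
    return int(np.argmin(dists))

db = rng.normal(size=(200, D))
codes = np.array([pq_encode(v) for v in db])         # (200, M) compressed database
best = adc_search(db[7], codes)
print(np.array_equal(codes[best], codes[7]))         # query's own code is recovered
```

Only the database is quantized; the query stays exact, which is what makes the distance computation "asymmetric" and keeps accuracy high at 16 bytes per descriptor.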

Fig. 6: The influence of the ratio-test threshold on the Dubrovnik and Vienna datasets. This experiment determines a good ratio threshold for the precise search; the results show which threshold achieves the highest registration rate. The horizontal axis indicates the threshold value, and the vertical axis the percentage of registered images.
Fig. 7: The registration rate and inlier ratio as a function of the number of candidates in the refined-search list.

The second experiment chooses a good size for the output of the refined search. The conditions are the same as in the first experiment, except that we use the best ratio threshold for the precise search. We validate our method with various numbers of candidates; the experiment suggests a moderate list size is a good option because increasing it further does not significantly affect the registration rate or the inlier ratio (Fig. 7).

Fig. 8: Comparison of indexing methods for PQ, with the parameters of IVFADC, IMI, and CCS tuned for fair comparison. The version of our CCS is Setting 1, and '*' indicates our CCS without the refined search.

In the third experiment, on the Dubrovnik dataset, we study the influence of different indexing procedures on accuracy and computation by comparing our method against two well-known PQ-based indexing schemes: Inverted File (IVFADC) [jegou-pami-2011] and Inverted Multi-Index (IMI) [babenko-cvpr-2012], with parameters tuned for a fair comparison. We also compare against our own method without the refined search. Results in Fig. 8 demonstrate the efficiency of the refined search: removing this step slows CCS down approximately three times while obtaining a similar registration rate. Although IVFADC with many visited cells achieves the highest performance across different sub-quantizer sizes, it is too slow; our method outperforms it in both execution time and registration rate. IMI registers more queries as the number of nearest neighbors or the length of its re-ranking list (analogous to ours) increases, but this also increases processing time; our registration rate is higher than IMI's while our running time remains competitive. We also replaced ITQ with the LSH-based scheme [charikar-acm-2002, cheng-cvpr-2014]: results show that our scheme with ITQ is faster, consistent with the candidate-list sizes reported in Fig. 8. Note that all the experiments above use 1-1 matchings and traditional RANSAC (Setting 1).

Method #reg. images Median Quartiles [m] #images with error Time (s)
1st Quartile 3rd Quartile < 18.3m > 400m
Kd-tree 795 - - - - - 3.4*
Li et al. [li-eccv-2010] 753 9.3 7.5 13.4 655 -
Sattler et al. [sattler-iccv-2011] 782.0 1.3 0.5 5.1 675 13 0.28
Feng et al. [feng-ip-2016] 784.1 - - - - -
Sattler et al. [sattler-bmvc-2012] 786 - - - - -
Sattler et al. [sattler-eccv-2012] 795.9 1.4 0.4 5.3 704 9 0.25
Sattler et al. [sattler-pami-2016] 797 - - - - -
Cao et al. [cao-cvpr-2013] 796 - - - - -
Camposeco et al. [camposeco-cvpr-2017] 793 - 0.81 6.27 720 13 3.2
Zeisl et al. [zeisl-iccv-215] 798 1.69 - - 725 2 3.78
Zeisl et al. [zeisl-iccv-215]* 794 0.47 - - 749 13 -
Swarm et al. [svarm-cvpr-2014] 798 0.56 - - 771 3 5.06
Li et al. [li-eccv-2012] 800 - - - - -
Setting 1 (CCS) 781 0.93 0.34 3.77 710 12 0.62
Setting 2 (CCS + R) 796 0.89 0.31 3.67 717 17 0.62
Setting 3 (CCS + P + R) 794 1.06 0.39 4.15 711 10 0.09
TABLE III: We compare our method to the state of the art on the Dubrovnik dataset. Methods marked '+' report only the processing time of the outlier rejection/voting scheme, taken from the original papers (ignoring the execution time of 2D-3D matching). Methods marked '*' report results after bundle adjustment. Here, CCS: Cascade Correspondence Search, P: Prioritizing, R: One-Many RANSAC.
Method Rome Vienna Aachen
Kd-tree 983 221 317
Li et al. [li-eccv-2010] 924 204 -
Sattler et al. [sattler-pami-2016] 990.5 221 318
Cao et al. [cao-cvpr-2013] 997 - 329
Sattler et al. [sattler-bmvc-2012] 984 227 327
Feng et al. [feng-ip-2016] 979 - 298.5
Li et al. [li-eccv-2012] 997 - -
Setting 2 (CCS + R) 991 241 340
Setting 3 (CCS + P + R) 991 236 338
TABLE IV: The number of registered images on Rome, Vienna, and Aachen datasets.

IV-A2 Pose estimation and prioritization

In this section, we investigate the influence of our geometric verification (Setting 2), which combines the cascade search and the proposed RANSAC with a fixed 5000 iterations. We visualize the inliers found by our method on the Dubrovnik dataset to understand the impact of the ratio test; all candidates are adopted in this experiment. Fig. 9 shows the number of inliers per query on the Dubrovnik (first row) and Vienna (second row) datasets: the left figures display the number of inliers (on the first 70 queries) found by the strict threshold (blue) and the relaxed threshold (red), while the right figures show the percentage of inliers contributed by the lower-ranked candidates (from the second rank onward). On the Vienna dataset, the relaxed threshold approximately doubles the number of inliers, contributing nearly 48% of the total; the lower-ranked candidates on Vienna also contribute a slightly higher share of inliers than on Dubrovnik. This explains why our method achieves better results on the Vienna dataset. Fig. 10 shows the inliers for one Dubrovnik query example. On average, the relaxed threshold increases the number of inliers by about 65.4% over the strict threshold and contributes about 37.2% of the total number of inliers found by our method (Setting 2). The 1-M matchings (from the second rank) increase the number of inliers by about 15% on average over the strict 1-1 matchings, contributing about 7% of the total. This means that using the relaxed threshold together with 1-M matchings yields a significant increase in the number of inliers.
We also see in the right figures that the lower-ranked candidates do not have a significant impact on the total number of inliers; therefore, to save computation, we keep only a few matchings per query after the precise search.

Fig. 9: The contribution of inliers (on the first 70 queries of Dubrovnik and Vienna) found by the strict threshold (blue), and additional inliers found by the relaxed threshold (red).
Fig. 10: The left figure shows 160 inliers found by our Setting 1 (strict threshold), and the right figure shows 278 inliers found by our Setting 2 (relaxed threshold).
Method #reg. images RANSAC (s) Reg. time (s)
Kd-tree 795 0.001 3.4
Sattler et al. [sattler-iccv-2011] 782.0 0.01 0.28
Sattler et al. [sattler-eccv-2012] 795.9 0.01 0.25
Setting 3 (CCS + P + R) 794 0.20 0.29
Setting 3 (CCS + P + Fast R) 793 0.03 0.12
TABLE V: The processing times of RANSAC and the registration times.
Method Vienna Aachen
#reg. images Reg. time (s) #reg. images Reg. time (s)
Sattler et al. [sattler-iccv-2011] 206.9 0.46 - -
Sattler et al. [sattler-eccv-2012] 220 0.27 - -
Sattler et al. [sattler-pami-2016] 221 0.17 318 0.12
Setting 3 (CCS + P + R) 236 0.35 338 0.28
Setting 3 (CCS + P + Fast R) 228 0.15 335 0.11
TABLE VI: The running times (including RANSAC) on Vienna and Aachen datasets.

Table III demonstrates the performance of our Setting 2. First, Setting 2 significantly outperforms Setting 1 in both the number of registered images and the localization errors, confirming that using relaxed 1-M candidates per query improves performance. The registration rate and running time of Setting 2 are comparable to the state-of-the-art, and its processing time can be further reduced by the prioritizing scheme: in the same Table III, Setting 3 obtains similar performance to the full search but is substantially faster. By using the prioritizing scheme, we achieve similar accuracy with much faster matching than previous works. We also perform comparisons on other standard datasets (Table IV): our Setting 3 outperforms the state-of-the-art methods in registration rate on the Vienna and Aachen datasets. In addition, our proposed method is more memory-efficient because of the compressed descriptors. Note that, where possible, we run the 2D-3D matching methods on our machine and measure their running times (excluding RANSAC time). These results show the potential of relaxed 1-M matches for better accuracy; however, the version of Setting 3 used in the above experiments (with a fixed 5000 RANSAC iterations) can be further improved in terms of execution time.

We accelerate it by using the pre-verification step (Setting 3 with fast RANSAC). This preserves competitive accuracy but is much faster than the fixed 5000-iteration RANSAC of Setting 3, as shown in Table V: the total time with our fast RANSAC drops to only 0.12 s to successfully register one query. Compared with other methods, ours achieves better registration rates and execution times on the Vienna and Aachen datasets (Table VI). Our proposed RANSAC executes nearly as fast as classical RANSAC on a small set of correspondences, e.g., 0.03 s vs. 0.01 s per Dubrovnik query in Table V.

As discussed in the next section, our model substantially reduces the memory requirements compared to the original SIFT model. In this experiment, we compare our model to [cao-cvpr-2014] in terms of memory efficiency. We conduct the experiment on the Dubrovnik model by using [cao-cvpr-2014] to compress the model by certain factors, and use IVFADC (which achieved the best registration rate among the compared PQ methods, Fig. 8) to obtain the registration rate on the compressed models. Fig. 11 shows that when the Dubrovnik model is compressed, the registration rate of IVFADC drops dramatically from 796 to 750 registered images. At a similar compression factor, our method achieves a higher registration rate with Setting 1, and higher still with our best Setting 3.

Fig. 11: The number of 3D points of compressed Dubrovnik models and the registration rate of IVFADC method on the corresponding models.

IV-B Overall system

IV-B1 Google Street View (GSV) Dataset

We collect GSV images at a resolution of 640×640 pixels; each image comes with accurate GPS coordinates. The collected images cover city regions in Singapore. At each Street View placemark (a spot on the street), the 360-degree spherical view is sampled by 20 rectilinear view images (18° between two consecutive side views) at 3 different elevations (5°, 15° and 30°); each rectilinear view has a fixed field-of-view and is treated as a pinhole camera (Fig. 12). Therefore, 60 images are sampled per placemark. The distance between two placemarks is about 10-12m. We also collect 576 query images with accurate GPS ground-truth positions. The GSV training set contains only daytime scenes, and its images are heavily distorted and challenging. Our mobile queries are collected with our own cameras under different conditions over several months, including mornings and afternoons with varying lighting and reflections (building glass surfaces). See dataset and query examples in Figures 14 and 15. Our dataset covers about 15km of road distance, as shown in Fig. 13.

Fig. 12: A panoramic image and its rectilinear views.
Fig. 13: The coverage of our 227K-image dataset, spanning about 15km of road distance (roads marked by blue lines).
Fig. 14: Examples of GSV images.
Fig. 15: Examples of query images.
Fig. 16: We represent the scene with overlapping segments, and build small 3D models for individual segments. We investigate the effect of overlapping on localization accuracy.

Overlapping of segments: We investigate the overlap between two consecutive segments, which ensures accurate localization for query images capturing buildings at segment boundaries. We conducted an experiment to evaluate the localization accuracy with zero, two, and four overlapped placemarks. In this experiment, we used image retrieval to find the top 20 or 50 similar database images for a given query image. Results in Fig. 17 suggest that overlapping segments by two placemarks ensures good localization accuracy. Note that the extent of overlap is a trade-off between accuracy and storage. Moreover, a retrieved list of 20 database images achieves a good accuracy-speed trade-off.

Fig. 17: The number of overlapped placemarks of two segments, and its effects on the image-retrieval top list of 20 or 50. Two placemarks can ensure good localization accuracy. The retrieved list of 20 database images achieves good accuracy-speed trade-off.
#Place marks #Images % of queries with error < 5m
8-10 480-600 90%
11-14 660-840 80%
20-25 1200-1500 60%
TABLE VII: The effect of segment size on the localization accuracy.

Coverage of each segment: As the coverage (size) of each segment increases, the percentage of overlapped placemarks decreases, and hence the storage (3D point) redundancy decreases. However, localization accuracy also decreases with larger segment coverage because a 3D model then contains more distracting 3D points. We conducted an experiment to determine an appropriate segment size: we reconstructed 3D models from 480-600 images (8-10 placemarks), 660-840 images (11-14 placemarks), and 1200-1500 images (20-25 placemarks), and applied the state-of-the-art Active Correspondence Search (ACS) [sattler-eccv-2012] to compute the localization accuracy on each model. Table VII shows the results, which suggest that segments of 8-10 placemarks achieve the best accuracy; accuracy degrades rapidly as segment coverage increases on the GSV dataset. Therefore, in our system, we use 8-10 consecutive GSV placemarks to define a segment. Although [arth-ismar-2009] also proposed dividing a scene into multiple segments, their design parameters were not studied. Moreover, their design is not memory-efficient, covers only a small workspace, requires additional sensor data (e.g., GPS, WiFi) to determine the search region, and involves manual steps such as registering individual models into a single global coordinate system. In contrast, our models are automatically reconstructed and registered, and our system localizes entirely on a mobile device at a large scale.

In order to evaluate our on-device system, we consider the robustness of image retrieval on a large-scale dataset and the localization accuracy of an overall system on our GSV image collection.

IV-B2 Image retrieval

Image retrieval in our system finds the correct 3D models to which a query is likely to belong. A query image is counted as a success if its top retrieved images match at least one correct model; for ground truth, we manually index our set of queries to their corresponding 3D models. It is important to investigate image retrieval performance because it significantly affects the robustness of the overall system, especially on a large-scale dataset. We follow the parameters reported for the T-embedding method [jegou-cvpr-2014], as described above, and use sum-pooling. The goal is to determine the number of retrieved images that should be returned from image retrieval, which has to balance accuracy against the number of candidate models. Fig. 18 shows that a retrieval list of 20 is an appropriate choice: the horizontal axis represents the number of references returned by image retrieval, and the vertical axis the percentage of queries that found at least one correct model; the histogram of the number of candidate models is shown in the same figure. More than 80% of queries found candidate models; therefore, in practice we perform 2D-3D matching with at most four models when the retrieved list yields more than this number.

Fig. 18: The accuracy of image retrieval and the histogram of the number of candidate models at a retrieval list size of 20.

IV-B3 Overall system localization

In this experiment, localization accuracy is measured by the GPS distance between the ground truth and our estimate. The results are plotted as Cumulative Error Distribution (CED) curves: the horizontal axis indicates the error threshold (in meters), and the vertical axis the percentage of images with errors at or below the threshold. We compare two correspondence search methods within the system: ACS [sattler-eccv-2012] and our Setting 3; the same image retrieval component, trained on 227K images, is used for both. Fig. 19 presents real-world accuracies: e.g., at a threshold of 9 m, about 90% of queries are well localized by our Setting 3, slightly worse than Setting 3 without the fast RANSAC, and about 80% by ACS. Our CCS uses compressed SIFT descriptors, requiring less memory than ACS while achieving better performance on our dataset. Note that the camera is calibrated in this experiment. About 10% of images fail completely (error above 50 m) because of image retrieval errors, confusion between similar buildings, or reflections on building facades. Our proposed system achieves encouraging results using GSV images: the median error of our CCS (Setting 3) is about 3.75 m, and 72% of queries have errors of less than 5 m. In the same figure, we also evaluate the importance of the localization part by removing it from the system; performance drops drastically without it. The accuracy of the retrieval part alone is computed as the average GPS position of all retrieved images. It is worth noting that better GPS estimates might be obtained with 2D-2D matching between the query image and the top retrieved ones; however, this would require storing the original images in the database, and fusing the matching results is not simple.

Fig. 19:

Performance of the overall system (image retrieval + 2D-3D correspondence search) tested on 576 query images. We also compare our 2D-3D matching method against ACS using the same image retrieval part. The performance of the image retrieval part alone is also reported, where top-k means the GPS estimate is the average of the GPS positions of the k images in the dataset nearest to the query.
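A sketch of how the CED curve and median error reported above can be computed from per-query localization errors; the error values below are illustrative, not the paper's measurements.

```python
from statistics import median

def ced(errors, thresholds):
    """Fraction of queries whose error (meters) is <= each threshold."""
    return [sum(e <= t for e in errors) / len(errors) for t in thresholds]

errors = [1.2, 3.0, 4.5, 8.0, 60.0]   # hypothetical per-query errors (m)
curve = ced(errors, [5, 9, 50])       # one point per error threshold
med = median(errors)                  # the median localization error
```

Plotting `curve` against its thresholds gives the CED curve; the last query (60 m) illustrates the “completely failed” tail discussed above.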

Iv-C Memory Analysis and On-device computation

Iv-C1 Memory consumption of Image retrieval

Given our T-embedding vocabulary size, the T-embedding feature dimension is 4096. The embedding and aggregation parameters have a fixed memory cost. The indexing step of PQ needs approximately 129.44 MB to encode the images, with 256 sub-quantizers per sub-vector. The total memory can easily fit into the RAM of modern devices. Even when the dataset grows to 1M images, the total memory consumption remains processable in RAM.
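A back-of-the-envelope check of the PQ index size, under our own assumptions: with 256 = 2^8 sub-quantizers per sub-vector, each sub-vector is encoded in exactly one byte, so an image costs m bytes, where m is the number of sub-vectors (m is not fixed by the text above, so it is left as a parameter here).

```python
def pq_index_mb(n_images, m_subvectors):
    """Memory (MB) to store one-byte-per-sub-vector PQ codes for n_images."""
    return n_images * m_subvectors / (1024 ** 2)

# Extrapolation to a 1M-image dataset, assuming a hypothetical m = 16:
one_million = pq_index_mb(1_000_000, 16)
```

This linear scaling is why extending the dataset to 1M images remains RAM-resident.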

Iv-C2 Memory consumption of 2D-3D correspondence search

Component           Our model           Original model
Look-up tables      8 × 4 × #buckets    –
Point IDs           8 × 4 × N = 32N     4N
Point coordinates   12N                 12N
Descriptors         32N                 128N
Total memory        ≈ 76N               ≈ 144N
TABLE VIII: Memory requirements (in bytes) for our model vs. the original model; N denotes the number of 3D points.

CCS (Setting 3) is chosen for the mobile implementation as it is fast and requires little memory. The method needs 32 bytes to encode a SIFT descriptor: a 128-bit (16-byte) hash code plus a 16-byte PQ code. We use 8 look-up tables, each comprising a fixed number of buckets, and each bucket holds a 4-byte pointer to one point-id list. Let N be the number of points in the 3D model; when N is large enough, the small overhead memory can be ignored. The 8 tables reference point IDs, 8N in total, and each point ID is a 4-byte integer, giving 32N bytes. The 3D point coordinates consume 12N bytes. Our model thus needs a total of about (32 + 32 + 12)N = 76N bytes, roughly 2× more compact than the original model (the 3D model using raw SIFT descriptors) at about (128 + 12 + 4)N = 144N bytes, as shown in Table VIII (ignoring the indexing structures of other methods, which may require more memory). Our 227K images, covering approximately 15 km of road, consume about 50 MB in total. Extrapolating these numbers, it is feasible to extend to 1M images, covering about 70 km, while consuming less than 2 GB of memory; coverage can be extended further by storing 3D models on modern high-capacity SD cards. It is worth noting that such extensions would only affect the accuracy of image retrieval, not of the 2D-3D correspondence search, since we use scene partitioning and sub-models. Also, we trained the PQ sub-quantizers on the general dataset of 1M SIFT descriptors [jegou-pami-2011], so they can be shared across all models. The memory requirement for the PQ sub-quantizers is 16 × 256 × 8 × 4 = 131,072 bytes ≈ 0.13 MB.
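The per-point accounting can be sketched as follows (the 8-table count and byte sizes are read off the description above, and the exact totals are our reconstruction, not figures quoted from the paper).

```python
def bytes_per_point(desc_bytes, n_tables, id_bytes=4, coord_bytes=12):
    """Descriptor code + point-id entries across tables + 3D coordinates."""
    return desc_bytes + n_tables * id_bytes + coord_bytes

ours = bytes_per_point(desc_bytes=32, n_tables=8)    # hash + PQ-coded model
orig = bytes_per_point(desc_bytes=128, n_tables=1)   # raw SIFT model
ratio = orig / ours                                  # roughly 2x smaller
```

The dominant saving comes from the descriptor compression (128 → 32 bytes), partially offset by keeping one point-id entry per hash table.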

Our hashing scheme needs more memory than the two other PQ-based schemes, IVFADC and IMI, which require 16-byte and 24-byte codes per 3D point (totals of about 32N and 40N bytes, respectively). This is not critical, as each model is small enough to be loaded into device memory at once; furthermore, all models can be stored on external storage such as SD cards. In terms of the trade-off between time complexity and accuracy reported on the Dubrovnik dataset, our method is more efficient than both.

Iv-C3 On-device running time

Our system is implemented on an Android device, the Nvidia Shield Tablet K1: a 2.2 GHz ARM Cortex-A15 CPU with 2 GB RAM, an NVIDIA Tegra K1 192-core Kepler GPU, and 16 GB of storage. The camera resolution is 1920×1080. Table IX reports the running time of each individual step: feature extraction, image retrieval, 2D-3D matching, and RANSAC. Since SIFT extraction is time-consuming, it is implemented on the GPU; image retrieval is also GPU-accelerated, whereas the two other components run on the CPU. The processing time of image retrieval is acceptable and consistent across dataset sizes. The running time of 2D-3D matching is reported for a single model. On our dataset, the number of matches found is usually less than 100, hence early stopping is not useful; in this case, our method obtains a running time similar to ACS. In practice, a few models (at most four, as discussed above) are processed at a time, and the latency of loading one model is low, about 0.04 s. Therefore, localizing one query takes about 10 s in total on average. The localization and pose estimation parts run on a single CPU core; the speed of our system could be further improved with multi-core CPU and GPU implementations in future work. Note that when training ACS on our own models, we calculate the codebook size and other parameters using the same method reported in [sattler-iccv-2015].

Step Time (s)
Feature extraction (GPU) 0.67
Image retrieval (GPU) 0.82
2D-3D matching 0.55
Pose estimation 1.15
TABLE IX: Average running time for each individual step on our device.
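The ~10 s total latency is roughly consistent with Table IX. The following is an illustrative estimate only: treating 2D-3D matching and pose estimation as per-model costs, with up to four candidate models and the 0.04 s model-load time stated above, is our assumption about how the average arises.

```python
def total_latency_s(n_models=4):
    """Estimated end-to-end latency (seconds) for one query."""
    feature, retrieval = 0.67, 0.82           # GPU steps, run once per query
    matching, pose, load = 0.55, 1.15, 0.04   # costs per candidate model
    return feature + retrieval + n_models * (matching + pose + load)
```

Under these assumptions, four candidate models yield roughly 8.5 s, within the "about 10 s" average reported above.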

V Conclusion

We presented the complete design of an on-device system for large-scale urban localization, combining compact image retrieval with fast 2D-3D correspondence search. The proposed system is demonstrated on a dataset of 227K GSV images covering approximately 15 km of road, and its scale can be readily extended with our design. Experimental results show that our system localizes mobile queries with high accuracy, with a processing time of less than 10 s on a typical device. This demonstrates the potential of building a practical city-scale localization system on the abundant GSV dataset.

We also proposed a compact and efficient 2D-3D correspondence search for localization that combines a prioritized hashing technique with 1-M RANSAC. Our 1-M RANSAC can handle a large number of matches to achieve higher accuracy while maintaining the same execution time as traditional RANSAC. Our matching method requires a 2× smaller memory footprint than the original models and achieves accuracy competitive with state-of-the-art methods on benchmark datasets; in particular, it obtains the best performance in both processing time and registration rate on the Aachen and Vienna datasets.

References