Scalable Place Recognition Under Appearance Change for Autonomous Driving

08/01/2019 ∙ by Anh-Dzung Doan, et al.

A major challenge in place recognition for autonomous driving is to be robust against appearance changes due to short-term (e.g., weather, lighting) and long-term (seasons, vegetation growth, etc.) environmental variations. A promising solution is to continuously accumulate images to maintain an adequate sample of the conditions and incorporate new changes into the place recognition decision. However, this demands a place recognition technique that is scalable to an ever-growing dataset. To this end, we propose a novel place recognition technique that can be efficiently retrained and compressed, such that the recognition of new queries can exploit all available data (including recent changes) without suffering from visible growth in computational cost. Underpinning our method is a novel temporal image matching technique based on Hidden Markov Models. Our experiments show that, compared to state-of-the-art techniques, our method has much greater potential for large-scale place recognition for autonomous driving.


1 Introduction

Place recognition (PR) is the broad problem of recognizing "places" based on visual inputs [26, 6]. Recently, it has been pursued actively in autonomous driving research, where PR forms a core component of localization (i.e., estimating the vehicle pose) [34, 21, 4, 9, 35, 5, 7] and loop closure detection [10, 13]. Many existing methods for PR require training on a large dataset of sample images, often with ground-truth positioning labels, and state-of-the-art results are reported by methods that employ learning [21, 20, 7, 9].

To perform convincingly, a practical PR algorithm must be robust against appearance changes in the operating environment. These can occur due to higher frequency environmental variability such as weather, time of day, and pedestrian density, as well as longer term changes such as seasons and vegetation growth. A realistic PR system must also contend with “less cyclical” changes, such as construction and roadworks, updating of signage, façades and billboards, as well as abrupt changes to traffic rules that affect traffic flow (this can have a huge impact on PR if the database contains images seen from only one particular flow [10, 13]). Such appearance changes invariably occur in real life.

To meet the challenges posed by appearance variations, one paradigm is to develop PR algorithms that are inherently robust against the changes. Methods under this paradigm attempt to extract the “visual essence” of a place that is independent of appearance changes [1]. However, such methods have mostly been demonstrated on more “natural” variations such as time of day and seasons.

Another paradigm is to equip the PR algorithm with a large image dataset that was acquired under different environmental conditions [8]. To accommodate long-term evolution in appearance, however, it is vital to continuously accumulate data and update the PR algorithm. To achieve continuous data collection cost-effectively over a large region, one could opportunistically acquire data using a fleet of service vehicles (e.g., taxis, delivery vehicles) and amateur mappers. Indeed, there are street imagery datasets that grow continuously through crowdsourced videos [30, 14]. Under this approach, it is reasonable to assume that a decent sampling of the appearance variations, including the recent changes, is captured in the ever growing dataset.

Under continuous dataset growth, the key to consistently accurate PR is to “assimilate” new data quickly. This demands a PR algorithm that is scalable. Specifically, the computational cost of testing (i.e., performing PR on a query input) should not increase visibly with the increase in dataset size. Equally crucially, updating or retraining the PR algorithm on new data must also be highly efficient.

Arguably, PR algorithms based on deep learning [7, 9] can accommodate new data by simply appending it to the dataset and fine-tuning the network parameters. However, as we will show later, this fine-tuning process is still too costly to be practical, and the lack of accurate labels in the testing sequence can be a major obstacle.

Contributions

We propose a novel framework for PR on large-scale datasets that continuously grow due to the incorporation of new sequences in the dataset. To ensure scalability, we develop a novel PR technique based on Hidden Markov Models (HMMs) that is lightweight in both training and testing. Importantly, our method includes a topologically sensitive compression procedure that can update the system efficiently, without using GNSS positioning information or computing visual odometry. This leads to PR that can not only improve accuracy by continuous adaptation to new data, but also maintain computational efficiency. We demonstrate our technique on datasets harvested from Mapillary [30], and also show that it compares favorably against recent PR algorithms on benchmark datasets.

2 Problem setting

We first describe our adopted setting of PR for autonomous driving. Let $\mathcal{D} = \{V_1, V_2, \dots, V_M\}$ be a dataset of videos, where each video

$V_i = \langle I^{(i)}_1, I^{(i)}_2, \dots, I^{(i)}_{T_i} \rangle$   (1)

is a time-ordered sequence of images. In the proposed PR system, $\mathcal{D}$ is collected in a distributed manner using a fleet of vehicles instrumented with cameras. Since the vehicles could be from amateur mappers, accurately calibrated/synchronized GNSS positioning may not be available. However, we do assume that the cameras on all the vehicles face a similar direction, e.g., front facing. The query video is represented as

$Q = \langle Q_1, Q_2, \dots, Q_T \rangle,$   (2)

which is a temporally-ordered sequence of query images. The query video could be a new recording from one of the contributing vehicles (recall that our database is continuously expanded), or it could be the input from a "user" of the PR system, e.g., an autonomous vehicle.

Overall aims

For each $Q_t$, the goal of PR is to retrieve an image from $\mathcal{D}$ that was taken from a similar location to that of $Q_t$, i.e., the FOV of the retrieved image overlaps to a large degree with that of $Q_t$. As mentioned above, what makes this challenging is the possible variations in image appearance.

In the envisioned PR system, when we have finished processing $Q$, it is appended to the dataset,

$\mathcal{D} \leftarrow \mathcal{D} \cup \{Q\},$   (3)

thus the image database could grow unboundedly. This imposes great pressure on the PR algorithm to efficiently "internalise" new data and compress the dataset. As an indication of size, a video can have up to 35,000 images.

Figure 1: An overview of our idea of using an HMM for place recognition. Consider a dataset with videos $V_1$, $V_2$ and a query video $Q$. Figure 1(a): Because $V_1$ and $V_2$ are recorded under different environmental conditions, they cannot be matched against each other, thus there is no connection between $V_1$ and $V_2$. Query $Q$ visits the places covered by $V_1$ and $V_2$, and then an unknown place. Figure 1(b): $Q$ is first localized against $V_1$ only. When it reaches the "overlap region" at time $t$, it localizes against both $V_1$ and $V_2$. The image corresponding to the MaxAP estimate at every time step is returned as the matching result. Figure 1(c): A threshold decides whether the matching result should be accepted; thus, when $Q$ visits an unseen place, the MaxAP beliefs against $V_1$ and $V_2$ are small and we are uncertain about the matching result. Once $Q$ is finished, the new place discovered by $Q$ is added to the map to expand the coverage area. In addition, since $Q$ is matched against both $V_1$ and $V_2$, we can connect $V_1$ and $V_2$.

2.1 Related works

PR has been addressed extensively in the literature [26]. Traditionally, it has been posed as an image retrieval problem using local features aggregated via a BoW representation [10, 13, 11]. Feature-based methods fail to match correctly under appearance change. To address appearance change, SeqSLAM [28] proposed to match statistics of the current image sequence against a sequence of images seen in the past, exploiting the temporal relationship. Recent methods have also looked at appearance transfer [31, 23] to explicitly deal with appearance change.

The method closest in spirit to ours is [8], which maintains multiple visual "experiences" of a particular location based on localization failures. In their work, successful localization leads to discarding data, and they depend extensively on visual odometry (VO), which can be a failure point. In contrast to [8], our method does not rely on VO; only image sequences are required. Also, we update appearance in both successful and unsuccessful (new place) localization episodes, thus gaining robustness against appearance variations of the same place. Our method also has a novel mechanism for map compression, leading to scalable inference.

A related problem is that of visual localization (VL): inferring the 6-DoF pose of the camera, given an image. Given a model of the environment, PnP-based [24] solutions compute the pose using 2D-3D correspondences [34], which becomes difficult both at large scale and under appearance change [40]. Some methods address the issue by creating a local model using SfM against which query images are localized [35]. Given the ground-truth poses and the corresponding images, VL can also be formulated as an image-to-pose regression problem, solving the retrieval and pose estimation simultaneously. Recently, PoseNet [21] used a Convolutional Neural Network (CNN) to learn this mapping, with further improvements using LSTMs to address overfitting [41], uncertainty prediction [19], and the inclusion of geometric constraints [20]. MapNet [7] showed that a representation of the map can be learned as a network and then used for VL. A downside of deep learning based methods is their high computational cost to train/update.

Hidden Markov Models (HMMs) [38, 33] have been used extensively for robot localization in indoor spaces [22, 2, 37]. Hansen et al. [15] use an HMM for outdoor scenes, but they must maintain a similarity matrix between the database and query sequences, which is unscalable when data is accumulated continuously. We are therefore one of the first to apply HMMs to large urban-scale PR, which requires significant innovations such as a novel efficient-to-evaluate observation model based on fast image retrieval (Sec. 4.2). In addition, our method explicitly performs temporal reasoning (Sec. 4.1), which helps to combat the confusion arising from the perceptual aliasing problem [36]. Note also that our main contributions are in Sec. 5, which tackles PR on a continuously growing dataset $\mathcal{D}$.

3 Map representation

When navigating on a road network, the motion of the vehicle is restricted to the roads, and the heading of the vehicle is also constrained by the traffic direction. Hence, the variation in pose of the camera is relatively low [35, 32].

The above motivates us to represent a road network as a graph $G = (\mathcal{N}, \mathcal{E})$, which we also call the "map". The set of nodes $\mathcal{N}$ is simply the set of all images in $\mathcal{D}$. To reduce clutter, we "unroll" the image indices in $\mathcal{D}$ by converting an index $(i, t)$ to a single number $k$, hence the set of nodes is

$\mathcal{N} = \{1, 2, \dots, K\},$   (4)

where $K = \sum_i T_i$ is the total number of images. We call an index $k \in \mathcal{N}$ a "place" on the map.

We also maintain a corpus $\mathcal{C}$ that stores the images observed at each place. For now, the corpus simply contains

$\mathcal{C}(k) = \{I_k\}$   (5)

at each cell $\mathcal{C}(k)$. Later, in Sec. 5, we will incrementally append images to $\mathcal{C}$ as the video dataset grows.

In $G$, the set of edges $\mathcal{E}$ connects images that overlap in their FOVs, i.e., $\{k_1, k_2\}$ is an edge in $\mathcal{E}$ if

$\mathrm{FOV}(I_{k_1}) \cap \mathrm{FOV}(I_{k_2}) \neq \emptyset.$   (6)

Note that two images can overlap even if they derive from different videos and/or conditions. The edges are weighted by probabilities of transitioning between places, i.e.,

$w(k_1, k_2) = \Pr(\text{vehicle transitions between places } k_1 \text{ and } k_2)$   (7)

for a vehicle that traverses the road network. Trivially,

$\sum_{k_2 \in \mathcal{N}} w(k_1, k_2) = 1.$   (8)

It is also clear from (7) that $G$ is undirected. A concrete definition of the transition probability will be given in Sec. 5. First, Sec. 4 discusses PR of $Q$ given a fixed $\mathcal{D}$ and map.
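To make this representation concrete, the following is a minimal Python sketch of the map $G$ and corpus $\mathcal{C}$; the class and method names (`Map`, `add_place`, `add_edge`) are ours, not the paper's, and details such as storage layout are illustrative assumptions.

```python
from collections import defaultdict

class Map:
    """Minimal sketch of the map G = (N, E) and corpus C (Sec. 3)."""

    def __init__(self):
        self.num_places = 0              # |N|; places are integers 0..num_places-1
        self.edges = defaultdict(dict)   # edges[k1][k2] = transition weight w(k1, k2)
        self.corpus = defaultdict(list)  # corpus[k] = images observed at place k

    def add_place(self, image):
        """Create a new place holding one image; return its index."""
        k = self.num_places
        self.num_places += 1
        self.corpus[k].append(image)
        return k

    def add_edge(self, k1, k2, w):
        """Undirected weighted edge: w(k1, k2) = w(k2, k1)."""
        self.edges[k1][k2] = w
        self.edges[k2][k1] = w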

4 Place recognition using HMM

To perform PR on $Q$ against a fixed map $G$ and corpus $\mathcal{C}$, we model $Q$ using an HMM [33]. We regard each image $Q_t$ as a noisy observation (image) of a latent place state $s_t$, where $s_t \in \mathcal{N}$. The main reason for using an HMM for PR is to exploit the temporal order of the images in $Q$, and the high correlation between time and place due to the restricted motion (Sec. 3).

To assign a value to $s_t$, we estimate the belief

$b_t(k) = p(s_t = k \mid Q_{1:t}),$   (9)

where $Q_{1:t}$ is a shorthand for $\{Q_1, \dots, Q_t\}$. Note that the belief is a probability mass function, hence

$\sum_{k \in \mathcal{N}} b_t(k) = 1.$   (10)

Based on the structure of the HMM, the belief (9) can be recursively defined using Bayes' rule as

$b_t(k) = \eta \, p(Q_t \mid s_t = k) \sum_{k' \in \mathcal{N}} p(s_t = k \mid s_{t-1} = k') \, b_{t-1}(k'),$   (11)

where $p(Q_t \mid s_t)$ is the observation model, $p(s_t \mid s_{t-1})$ is the state transition model, and $b_{t-1}$ is the prior (the belief at the previous time step) [33]. The scalar $\eta$ is a normalizing constant that ensures the belief sums to 1.

If we have the belief at time step $t$, we can perform PR on $Q_t$ by assigning

$s_t^* = \arg\max_{k \in \mathcal{N}} \; b_t(k)$   (12)

as the place estimate of $Q_t$. Deciding the target state in this manner is called maximum a posteriori (MaxAP) estimation. See Fig. 1 for an illustration of PR using an HMM.
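For illustration, here is a small sketch of the recursive update (11) and the MaxAP readout (12) using dense numpy arrays; the function names are ours, and the transition and observation inputs are assumed to be given.

```python
import numpy as np

def belief_update(prior, transition, obs_likelihood):
    """One step of Eq. (11): b_t ∝ p(Q_t | s_t) * sum over s_{t-1} of
    p(s_t | s_{t-1}) * b_{t-1}(s_{t-1}).

    prior:          (K,) belief over places at time t-1
    transition:     (K, K) row-stochastic, transition[i, j] = p(s_t=j | s_{t-1}=i)
    obs_likelihood: (K,) observation model p(Q_t | s_t = j)
    """
    predicted = transition.T @ prior      # motion prediction
    belief = obs_likelihood * predicted   # fuse the observation
    return belief / belief.sum()          # eta: normalize to sum to 1

def maxap_place(belief):
    """Eq. (12): MaxAP estimate of the current place."""
    return int(np.argmax(belief))
```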

4.1 State transition model

The state transition model $p(s_t = k_2 \mid s_{t-1} = k_1)$ gives the probability of moving to place $k_2$, given that the vehicle was at place $k_1$ in the previous time step. The transition probability is simply given by the edge weights in $G$, i.e.,

$p(s_t = k_2 \mid s_{t-1} = k_1) = w(k_1, k_2).$   (13)

Again, we defer the concrete definition of the transition probability to Sec. 5. For now, the above is sufficient to continue the description of our HMM method.

4.2 Observation model

Our observation model is based on image retrieval. Specifically, we use SIFT features [25] and VLAD [16] to represent every image. A priority search k-means tree [29] is used to index the database, but it is possible to use other indexing methods [17, 12, 3].

Image representation

For every image $I$, we seek a nonlinear function $\phi(I)$ that maps the image to a single high-dimensional vector. To do that, we densely extract a set of SIFT features from image $I$: $\{x_1, \dots, x_m\} \subset \mathbb{R}^{128}$, where $m$ is the number of SIFT features of image $I$. K-means is used to build a codebook $\{c_1, \dots, c_{n_c}\}$, where $n_c$ is the size of the codebook. The VLAD embedding function is defined as

$\psi(x_j) = [\,0, \dots, x_j - c_k, \dots, 0\,],$   (14)

where $c_k$ is the nearest visual word to feature vector $x_j$. To obtain a single vector, we employ sum aggregation:

$\phi(I) = \sum_{j=1}^{m} \psi(x_j).$   (15)

To reduce the impact of background features (e.g., trees, roads, sky) within the vector $\phi(I)$, we adopt rotation and normalization (RN) [18], followed by $\ell_2$ normalization. In particular, we use PCA to project $\phi(I)$ from $D$ to $D'$ dimensions, where $D' < D$. Power-law normalization is then applied elementwise to the rotated data:

$v := \operatorname{sign}(v)\,|v|^{\alpha}, \quad 0 < \alpha < 1.$   (16)

Note that, unlike DenseVLAD [39], which uses whitening for post-processing, performing power-law normalization on rotated data is more stable.
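The following sketch implements Eqs. (14)-(16) in numpy under stated assumptions: the PCA basis is precomputed elsewhere, and the power-law exponent `alpha=0.5` is a common choice in the VLAD literature rather than the paper's reported setting.

```python
import numpy as np

def vlad(features, codebook):
    """Eqs. (14)-(15): hard-assign each local feature to its nearest
    visual word and sum the residuals per word."""
    n_c, d = codebook.shape
    v = np.zeros((n_c, d))
    # nearest visual word for every feature (Eq. 14)
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    nearest = np.argmin(dists, axis=1)
    for x, k in zip(features, nearest):
        v[k] += x - codebook[k]            # residual accumulation (Eq. 15)
    return v.ravel()                       # single D = n_c * d dimensional vector

def rotate_normalize(v, pca_basis, alpha=0.5):
    """RN post-processing (Eq. 16): PCA rotation/projection, elementwise
    power-law normalization, then l2 normalization. alpha=0.5 is an
    assumed value, not necessarily the paper's setting."""
    v = pca_basis @ v                      # rows of pca_basis = top D' components
    v = np.sign(v) * np.abs(v) ** alpha    # power-law on rotated data
    return v / np.linalg.norm(v)           # final l2 normalization
```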

Computing likelihood

We adopt the priority search k-means tree [29] to index every image $I$. The idea is to partition all data points into $b$ clusters using K-means, then recursively partition the points within each cluster. For each query $Q_t$, we find a set of $k$-nearest neighbors among the indexed vectors. Specifically, $Q_t$ is mapped to the vector $\phi(Q_t)$. To search, we propagate down the tree, at each level comparing $\phi(Q_t)$ to the cluster centers and selecting the nearest one.

The likelihood is then calculated as follows (a code sketch is given after the list):

  • Initialize $p(Q_t \mid s_t = n) = \nu$ for all $n \in \mathcal{N}$, where $\nu$ is a small constant; $\nu$ and the number of neighbors $k$ are fixed in our experiments.

  • For each retrieved neighbor $\phi_i$ in the $k$-nearest-neighbor set:

    • Find the node $n = \mathcal{C}^{-1}(\phi_i)$, where $\mathcal{C}^{-1}$ is the inverse of the corpus $\mathcal{C}$, i.e., it finds the node storing $\phi_i$.

    • Calculate the probability $p_i$ as a decreasing function of $d_i$, where $d_i$ is the distance between $\phi(Q_t)$ and $\phi_i$.

    • If $p_i > p(Q_t \mid s_t = n)$, then set $p(Q_t \mid s_t = n) = p_i$.
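A possible realization of this procedure is sketched below. The retrieval index is abstracted as a `knn_search` callable and the corpus inverse as a plain lookup; the Gaussian-style kernel `exp(-d^2/sigma^2)` is one plausible decreasing function of distance, and the constants `floor` and `sigma` are our assumptions, not the paper's settings.

```python
import numpy as np

def observation_likelihood(query_vec, knn_search, inverse_corpus,
                           num_places, floor=1e-6, sigma=0.2):
    """Sketch of the likelihood p(Q_t | s_t) from k-NN retrieval (Sec. 4.2).

    knn_search(query_vec) -> iterable of (vector_id, distance) pairs
    inverse_corpus[vector_id] -> place (node) storing that vector
    """
    lik = np.full(num_places, floor)        # small floor value at every place
    for vec_id, dist in knn_search(query_vec):
        n = inverse_corpus[vec_id]          # node storing this database vector
        p = np.exp(-dist**2 / sigma**2)     # assumed decreasing kernel of distance
        lik[n] = max(lik[n], p)             # keep the best evidence per place
    return lik
```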

4.3 Inference using matrix computations

The state transition model can be stored in a $K \times K$ matrix $\mathbf{E}$ called the transition matrix, where the element at the $i$-th row and $j$-th column of $\mathbf{E}$ is

$\mathbf{E}_{i,j} = p(s_t = j \mid s_{t-1} = i).$   (17)

Hence, $\mathbf{E}$ is also the weighted adjacency matrix of graph $G$. Also, each row of $\mathbf{E}$ sums to one. The observation model can be encoded in a diagonal matrix $\mathbf{O}_t$, where

$[\mathbf{O}_t]_{j,j} = p(Q_t \mid s_t = j).$   (18)

If the belief and prior are represented as vectors $\mathbf{b}_t$ and $\mathbf{b}_{t-1}$ respectively, operation (11) can be summarized as

$\mathbf{b}_t = \eta \, \mathbf{O}_t \mathbf{E}^{\top} \mathbf{b}_{t-1},$   (19)

where the initial belief $\mathbf{b}_0$ corresponds to a uniform distribution. From this, it can be seen that the cost of PR is $O(K^2)$ per query frame.

Computational cost

Note that $\mathbf{E}$ is a very sparse matrix, due to the topology of the graph, which mirrors the road network; see Fig. 3 for an example. Thus, if we assume that the maximum number of non-zero values per row in $\mathbf{E}$ is $\rho$, the complexity of computing (19) is $O(\rho K)$.

Nonetheless, in the targeted scenario (Sec. 2), $K$ can grow unboundedly. Thus it is vital to avoid a proportional increase in $K$ so that the cost of PR can be maintained.
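To illustrate the saving, the sketch below evaluates Eq. (19) with a sparse transition matrix in scipy; the banded toy matrix stands in for a road-like topology and is not derived from real data.

```python
import numpy as np
from scipy.sparse import diags

def sparse_belief_update(prior, E, obs):
    """Eq. (19) with sparse E: b_t = eta * O_t E^T b_{t-1}.
    With at most rho non-zeros per row, this costs O(rho * K), not O(K^2)."""
    belief = obs * (E.T @ prior)   # O_t is diagonal, so it acts elementwise
    return belief / belief.sum()

# Toy usage on a banded matrix mimicking a road-like topology (rho = 5):
K = 10_000
E = diags([0.1, 0.2, 0.4, 0.2, 0.1], [-2, -1, 0, 1, 2],
          shape=(K, K), format="csr")
E = diags(1.0 / np.asarray(E.sum(axis=1)).ravel()) @ E  # rows sum to one
prior = np.full(K, 1.0 / K)                             # uniform initial belief
obs = np.random.rand(K)                                 # stand-in likelihoods
belief = sparse_belief_update(prior, E, obs)
```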

5 Scalable place recognition based on HMM

Figure 2: An overview of our idea for scalable place recognition. The graph $G$ contains disjoint sub-graphs $G_1$ and $G_2$. Query video $Q$ is matched against $G$. Figure 2(a): $Q_t$ is matched with nodes $k_1 \in G_1$ and $k_2 \in G_2$ (dashed green lines), since $b_t(k_1) \ge \gamma$ and $b_t(k_2) \ge \gamma$. Figure 2(b): $Q_t$ is added to nodes $k_1$ and $k_2$, and new edges are created (blue lines) to maintain the connections between $Q_{t-1}$, $k_1$ and $k_2$. Figure 2(c): Nodes $k_1$ and $k_2$ are combined. New edges are generated (blue lines) to maintain the connections within the graph. Note that after matching query $Q$ against $G$, our proposed culling and combining methods connect the two disjoint sub-graphs $G_1$ and $G_2$.

In this section, we describe a novel method that incrementally builds and compresses the map $G$ for a video dataset $\mathcal{D}$ that grows continuously due to the addition of new query videos.

We emphasize again that the proposed technique functions without using GNSS positioning or visual odometry.

5.1 Map initialization

Given a dataset $\mathcal{D}$ with one video $V_1$, we initialize $\mathcal{N}$ and $\mathcal{C}$ as per (4) and (5). The edges (specifically, the edge weights) are initialized as

$w(k_1, k_2) = \begin{cases} \frac{1}{Z} \exp\!\left(-\frac{(k_1 - k_2)^2}{2\sigma^2}\right) & \text{if } |k_1 - k_2| \le \delta, \\ 0 & \text{otherwise}, \end{cases}$

where $Z$ is a normalization constant. The edges thus connect frames that are at most $\delta$ time steps apart, with weights based on a Gaussian on the step distances. The choice of $\delta$ can be based on the maximum velocity of a vehicle.

Note that this simple way of creating edges ignores complex trajectories (e.g., loops). However, the subsequent steps will rectify this issue by connecting similar places.
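A minimal sketch of this initialization follows, with illustrative values for the window `delta` and bandwidth `sigma` (the paper's actual settings are not reproduced here):

```python
import numpy as np

def init_edges(num_frames, delta=3, sigma=1.0):
    """Sec. 5.1 sketch: connect frames at most `delta` time steps apart,
    weighted by a Gaussian on the step distance, then normalize each row."""
    W = {}
    for m in range(num_frames):
        nbrs = {}
        for n in range(max(0, m - delta), min(num_frames, m + delta + 1)):
            nbrs[n] = np.exp(-(m - n) ** 2 / (2 * sigma ** 2))
        Z = sum(nbrs.values())                      # normalization constant
        W[m] = {n: w / Z for n, w in nbrs.items()}
    return W
```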

5.2 Map update and compression

Let $\mathcal{D}$ be the current dataset with map $G$ and corpus $\mathcal{C}$. Given a query video $Q$, we perform PR on $Q$ based on $G$ using our method in Sec. 4. This produces a belief vector $\mathbf{b}_t$ (19) for all $t = 1, \dots, T$.

We now wish to append $Q$ to $\mathcal{D}$, and update $G$ to maintain the computational scalability of future PR queries. First, create a subgraph $G_Q = (\mathcal{N}_Q, \mathcal{E}_Q)$ for $Q$, where

$\mathcal{N}_Q = \{K+1, K+2, \dots, K+T\}$   (20)

(recall that there are a total of $K$ places in $G$), and $\mathcal{E}_Q$ simply follows Sec. 5.1 for $Q$.

In preparation for map compression, we first concatenate the graphs and extend the corpus:

$\mathcal{N} \leftarrow \mathcal{N} \cup \mathcal{N}_Q, \qquad \mathcal{E} \leftarrow \mathcal{E} \cup \mathcal{E}_Q, \qquad \mathcal{C}(K+t) \leftarrow \{Q_t\}$   (21)

for $t = 1, \dots, T$. There are two main subsequent steps: culling new places, and combining old places.

Culling new places

For each $t$, construct the set

$S_t = \{\, k \in \mathcal{N} \setminus \mathcal{N}_Q \; : \; b_t(k) \ge \gamma \,\},$   (22)

where $\gamma$, with $0 < \gamma \le 1$, is a threshold on the belief. There are two possibilities:

  • If $S_t = \emptyset$, then $Q_t$ is the image of a new (unseen before) place, since the PR did not match a dataset image to $Q_t$ with sufficient confidence. No culling is done.

  • If $S_t \neq \emptyset$, then for each $k \in S_t$:

    • For each $k'$ such that $\{K+t, k'\} \in \mathcal{E}$:

      • Create a new edge $\{k, k'\}$ with weight $w(K+t, k')$.

      • Delete edge $\{K+t, k'\}$ from $\mathcal{E}$.

    • $\mathcal{C}(k) \leftarrow \mathcal{C}(k) \cup \{Q_t\}$.

Once the above is done for all $t$, for those $t$ where $S_t \neq \emptyset$, we delete the node $K+t$ in $G$ and the cell $\mathcal{C}(K+t)$ in $\mathcal{C}$, both with the requisite adjustment of the remaining indices. See Figs. 2(a) and 2(b) for an illustration of culling; a code sketch follows.
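Below is a minimal sketch of the culling step, operating on the `Map` structure sketched in Sec. 3; `gamma` is the belief threshold of Eq. (22), and the deferred node deletion (returning indices instead of removing them in place) is a simplification of ours.

```python
def cull_new_places(graph, belief_per_frame, first_query_node, gamma):
    """Sketch of culling (Sec. 5.2). For each query frame t matched to
    existing places S_t with belief >= gamma, rewire the query node's
    edges onto those places, copy its image into their corpus cells,
    and mark the query node for deletion."""
    to_delete = []
    for t, belief in enumerate(belief_per_frame):
        q_node = first_query_node + t
        S_t = [k for k in range(first_query_node) if belief[k] >= gamma]
        if not S_t:
            continue                        # new place: keep the query node
        for k in S_t:
            for k2, w in list(graph.edges[q_node].items()):
                graph.add_edge(k, k2, w)    # transfer edge to the matched place
            graph.corpus[k].extend(graph.corpus[q_node])
        to_delete.append(q_node)
    return to_delete                        # nodes to remove, with index adjustment
```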

Combining old places

Performing PR on $Q$ also provides a chance to connect places in $\mathcal{D}$ that were not previously connected. For example, two dataset videos $V_i$ and $V_j$ could have traversed a common subpath under very different conditions. If $Q$ travels through the subpath under a condition that is simultaneously close to the conditions of $V_i$ and $V_j$, this can be exploited for compression.

To this end, for each $t$ where $S_t$ is non-empty:

  • Let $k^* = \arg\max_{k \in S_t} b_t(k)$.

  • For each $k$ where $k \in S_t$ and $k \neq k^*$:

    • For each $k'$ such that $\{k, k'\} \in \mathcal{E}$, $k' \neq k^*$:

      • Create edge $\{k^*, k'\}$ with weight $w(k, k')$.

      • Delete edge $\{k, k'\}$ from $\mathcal{E}$.

    • $\mathcal{C}(k^*) \leftarrow \mathcal{C}(k^*) \cup \mathcal{C}(k)$.

Again, once the above is done for all $t$ for which $S_t \neq \emptyset$, we remove all unconnected nodes from $G$ and delete the relevant cells in $\mathcal{C}$, with the corresponding index adjustments. Fig. 2(c) illustrates this combination step; a sketch follows.
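A corresponding sketch of the combining step on the same `Map` structure; taking the representative $k^*$ as the highest-belief place in $S_t$ is our reading of the procedure, not a detail confirmed by the source.

```python
def combine_old_places(graph, S_t, belief):
    """Sketch of combining (Sec. 5.2): merge all places in S_t that matched
    the same query frame into a single representative node k*."""
    if not S_t:
        return None
    k_star = max(S_t, key=lambda k: belief[k])   # assumed choice of representative
    for k in S_t:
        if k == k_star:
            continue
        for k2, w in list(graph.edges[k].items()):
            if k2 != k_star:
                graph.add_edge(k_star, k2, w)    # transfer edge to representative
            del graph.edges[k2][k]               # remove the reverse edge
        graph.edges.pop(k, None)                 # remove the forward edges
        graph.corpus[k_star].extend(graph.corpus.pop(k))  # merge corpus cells
    return k_star
```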

5.3 Updating the observation model

When $Q$ is appended to the dataset, i.e., $\mathcal{D} \leftarrow \mathcal{D} \cup \{Q\}$, all vectors $\phi(Q_t)$ need to be indexed in the k-means tree. In particular, we find the nearest leaf node that $\phi(Q_t)$ belongs to. Assuming the tree is balanced with branching factor $b$, the height of the tree is $h = \log_b n$, where $n$ is the number of indexed vectors; thus each $\phi(Q_t)$ needs to check $h$ internal nodes and one leaf node. In each node, it needs to find the closest cluster center by computing distances to all $b$ centers, the complexity of which is $O(bD')$. Therefore, the cost of adding the query video is $O(T\,bD'\log_b n)$, where $T$ is the length of $Q$. Assuming a complete tree in which every leaf node contains $b$ points, there are $n/b$ leaf nodes. For each point $\phi(Q_t)$, instead of exhaustively scanning all $n/b$ leaf nodes, we only need to check $h+1$ nodes. Hence, indexing is a scalable operation.
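To illustrate why indexing a new video is cheap, here is a toy sketch of greedy descent into a k-means tree; the node layout and insertion policy are simplified assumptions of ours, not FLANN's actual implementation [29].

```python
import numpy as np

class KMeansTreeNode:
    """Toy priority-search k-means tree node (Sec. 5.3). An internal node
    holds b cluster centers and b children; insertion greedily descends to
    the nearest center, visiting only O(log_b n) nodes per point."""
    def __init__(self, centers=None, children=None):
        self.centers = centers    # (b, D') array of centers; None for a leaf
        self.children = children  # list of b child nodes; None for a leaf
        self.points = []          # descriptors stored at a leaf

def insert(node, x):
    """Descend to the nearest leaf and append the new descriptor x."""
    while node.centers is not None:
        nearest = np.argmin(np.linalg.norm(node.centers - x, axis=1))
        node = node.children[nearest]
    node.points.append(x)
```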

5.4 Overall algorithm

Figure 3: Illustrating map maintenance with and without compression. After each query video finishes, we compress the map by culling known places in the query and combining old places on the map that represent the same place. Thus, the size of the transition matrix shrinks gradually. In contrast, if compression is not conducted, the size of the transition matrix keeps increasing.

Algorithm 1 summarizes the proposed scalable method for PR. A crucial benefit of performing PR with our method is that the map $G$ does not grow unboundedly with the inclusion of new videos. Moreover, the map update technique is simple and efficient, which permits it to be conducted for every new video addition. This enables scalable PR on an ever-growing video dataset. In Sec. 6, we compare our technique with state-of-the-art PR methods.

6 Experiments

We use a dataset sourced from Mapillary [30], which consists of street-level geo-tagged imagery; see the supplementary material for examples. Benchmarking was carried out on Oxford RobotCar [27], from which we use 8 different sequences along the same route; details are provided in the supplementary material, and the sequences are abbreviated as Seq-1 to Seq-8. The initial database is populated with Seq-1 and Seq-2 from the Oxford RobotCar dataset. Seq-3 to Seq-8 are then sequentially used as the query videos. To report the 6-DoF pose for a query image, we inherit the pose of the image matched using the MaxAP estimation. Following [35], the translation error is computed as the Euclidean distance between the estimated and ground-truth camera positions. The orientation error, measured in degrees, is the angular difference between the estimated and ground-truth camera rotation matrices. Following [21, 20, 7, 42], we compare mean and median errors.
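For completeness, a small sketch of these two error metrics, using the standard angle-between-rotations formula, consistent with the conventions of [35]:

```python
import numpy as np

def translation_error(c_est, c_gt):
    """Euclidean distance between estimated and ground-truth camera positions."""
    return np.linalg.norm(np.asarray(c_est) - np.asarray(c_gt))

def orientation_error_deg(R_est, R_gt):
    """Angular difference (degrees) between two rotation matrices:
    theta = arccos((trace(R_est^T R_gt) - 1) / 2)."""
    cos_theta = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    cos_theta = np.clip(cos_theta, -1.0, 1.0)   # guard against numerical drift
    return np.degrees(np.arccos(cos_theta))
```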

Performance with and without updating the database

0:  Input: threshold on the transition probability, threshold $\gamma$ for PR, initial dataset $\mathcal{D}$ with one video.
1:  Initialize map $G$ and corpus $\mathcal{C}$ (Sec. 5.1).
2:  Create observation model (Sec. 4.2).
3:  while there is a new query video $Q$ do
4:     Perform PR on $Q$ using map $G$, then append $Q$ to $\mathcal{D}$.
5:     Create subgraph $G_Q$ for $Q$ (Sec. 5.2).
6:     Concatenate $G_Q$ to $G$, extend $\mathcal{C}$ with $Q$ (Sec. 5.2).
7:     Reduce $G$ by culling new places (Sec. 5.2).
8:     Reduce $G$ by combining old places (Sec. 5.2).
9:     Update observation model (Sec. 5.3).
10:  end while
11:  return Dataset $\mathcal{D}$ with map $G$ and corpus $\mathcal{C}$.
Algorithm 1: Scalable algorithm for large-scale PR.
Mean errors:
          No update        Cull             Cull+Combine
Seq-3     6.59m, 3.28°     –                –
Seq-4     7.42m, 4.64°     5.80m, 3.24°     6.01m, 3.11°
Seq-5     16.21m, 5.97°    15.07m, 5.89°    15.88m, 5.91°
Seq-6     26.02m, 9.02°    18.88m, 6.24°    19.28m, 6.28°
Seq-7     31.83m, 17.99°   30.06m, 17.12°   30.03m, 17.05°
Seq-8     25.62m, 22.38°   24.28m, 21.99°   24.26m, 21.54°

Median errors:
          No update        Cull             Cull+Combine
Seq-3     6.06m, 1.65°     –                –
Seq-4     5.80m, 1.40°     5.54m, 1.39°     5.65m, 1.33°
Seq-5     13.70m, 1.56°    13.12m, 1.52°    13.05m, 1.55°
Seq-6     6.65m, 1.87°     5.76m, 1.75°     6.60m, 1.85°
Seq-7     13.58m, 3.52°    11.80m, 2.81°    10.87m, 2.60°
Seq-8     13.28m, 4.93°    7.13m, 2.31°     7.15m, 2.47°

Table 1: Comparison between three different settings of our technique. Mean (top) and median (bottom) 6-DoF pose errors on Oxford RobotCar are reported.

We investigate the effects of updating the database on localization accuracy and inference time. After each query sequence finishes, we consider three strategies: i) No update: $\mathcal{D}$ always contains just the initial 2 sequences; ii) Cull: update $\mathcal{D}$ with the query and perform culling; and iii) Cull+Combine: full update with both culling and combining of nodes. Mean and median 6-DoF pose errors are reported in Table 1. In general, Cull improves the localization accuracy over No update, since culling adds appearance variation to the map. In fact, there are several cases in which Cull+Combine produces better results than Cull. This is because we consolidate useful information in the map (combining nodes which represent the same place), and also enrich the map topology (connecting nodes close to each other through culling). Inference times per query with the different update strategies are given in Table 2. Without updating, the inference time stays stable at roughly 4 ms/query across sequences, since the size of the graph and the database do not change. In contrast, the culling operation increases the inference time by a fraction of a millisecond per query, and Cull+Combine brings it back to a level comparable to the No update case. This shows that the proposed method is able to compress the database to an extent that the query time after assimilation of new information remains comparable to the case of not updating the database at all.

Sequences   No update   Cull   Cull+Combine
Seq-3       4.03        –      –
Seq-4       4.56        5.05   4.82
Seq-5       4.24        5.06   4.87
Seq-6       3.81        4.03   3.72
Seq-7       3.82        4.18   3.78
Seq-8       3.77        3.91   3.68

Table 2: Inference time (ms/query) on Oxford RobotCar. Cull+Combine has comparable inference time while giving better accuracy (see Table 1) than No update.
Training sequences VidLoc MapNet Our method
Seq-1,2 14.1h 11.6h 98.9s
Seq-3 - 6.2h 256.3s
Seq-4 - 6.3h 232.3s
Seq-5 - 6.8h 155.1s
Seq-6 - 5.7h 176.5s
Seq-7 - 6.0h 195.4s
Table 3: Training/updating time on the Oxford RobotCar.

Map maintenance and visiting unknown regions

Figure 4: Expanding coverage by updating the map. Locations are plotted using ground-truth GPS for visualization only.

Figure 3 shows the results of map maintenance with and without compression. Without compression, the size of the map (specifically, the adjacency matrix $\mathbf{E}$) grows continuously as new query videos are appended. In contrast, using our compression scheme, known places in $Q$ are culled, and redundant nodes in $G$ (i.e., nodes representing the same place) are combined. As a result, the graph is compressed.

Mean errors:
Methods                   Seq-3           Seq-4           Seq-5           Seq-6            Seq-7            Seq-8
VidLoc                    38.86m, 9.34°   38.29m, 8.47°   36.05m, 6.81°   51.09m, 10.75°   54.70m, 18.74°   47.64m, 23.21°
MapNet                    9.31m, 4.37°    8.92m, 4.09°    17.19m, 5.72°   26.31m, 9.78°    33.68m, 18.04°   26.55m, 21.97°
MapNet (update+retrain)   –               8.71m, 3.31°    18.44m, 6.94°   28.69m, 10.02°   36.68m, 19.34°   29.64m, 22.86°
Our method                6.59m, 3.28°    6.01m, 3.11°    15.88m, 5.91°   19.28m, 6.28°    30.03m, 17.05°   24.26m, 21.54°

Median errors:
Methods                   Seq-3           Seq-4           Seq-5           Seq-6           Seq-7           Seq-8
VidLoc                    29.63m, 1.59°   29.86m, 1.57°   31.33m, 1.39°   47.75m, 1.70°   48.53m, 2.40°   42.26m, 1.94°
MapNet                    4.69m, 1.67°    4.53m, 1.54°    13.89m, 1.17°   8.69m, 2.42°    12.49m, 1.71°   8.08m, 2.02°
MapNet (update+retrain)   –               5.15m, 1.44°    17.39m, 1.87°   11.45m, 3.42°   20.88m, 4.02°   11.01m, 5.21°
Our method                6.06m, 1.65°    5.65m, 1.33°    13.05m, 1.55°   6.60m, 1.85°    10.87m, 2.60°   7.15m, 2.47°

Table 4: Comparison between our method, MapNet and VidLoc. Mean (top) and median (bottom) 6-DoF pose errors on the Oxford RobotCar dataset are reported.

Visiting unexplored areas allows us to expand the coverage of our map, as we demonstrate using Mapillary data. We only accept a query frame whose MaxAP belief is at least the threshold $\gamma$. When the vehicle explores unknown roads, the MaxAP belief is small and no localization results are accepted. Once the query sequence ends, the map coverage is extended accordingly; see Fig. 4.

Comparison against state of the art

Figure 5: Qualitative results on the RobotCar dataset.

Our method is compared against state-of-the-art localization methods: MapNet [7] and VidLoc [9]. We use the original authors' implementation of MapNet. The VidLoc implementation from MapNet is used on the recommendation of the VidLoc authors. All parameters are set according to the authors' suggestions. (Comparisons against [8] are not presented due to the lack of a publicly available implementation.)

For map updating in our method, the Cull+Combine steps are used. MapNet is retrained on each new query video with the ground truth taken from its previous predictions. Since VidLoc does not produce sufficiently accurate predictions, we do not retrain that network for subsequent query videos.

Our method outperforms MapNet and VidLoc in terms of the mean errors (see Table 4), and also produces a smoother predicted trajectory than MapNet (see Fig. 5). In addition, while our method improves localization accuracy after updating the database (see Table 1), MapNet's results are worse after retraining (see Table 4). This is because MapNet is retrained on noisy ground truth. However, though our method is qualitatively better than MapNet, the differences in median error are not obvious: this shows that median error is not a good criterion for VL, since gross errors are ignored.

Note that our method mainly performs PR; here, the comparisons to VL methods are to show that a correct PR paired with simple pose inheritance can outperform VL methods in the presence of appearance change. The localization error of our method could likely be improved further by performing SfM on a set of images corresponding to the highest beliefs.

Table 3 reports the training/updating times for our method, MapNet, and VidLoc. In particular, for Seq-1 and Seq-2, our method needs around 1.65 minutes to construct the k-means tree and build the graph, while MapNet and VidLoc respectively require 11.6 and 14.1 hours for training. For updating with a new query sequence, MapNet needs about 6 hours of retraining, whilst our method culls the database and combines graph nodes in less than 5 minutes. This makes our method more practical in a realistic scenario in which training data is acquired continuously.

7 Conclusion

This paper proposes a novel method for scalable place recognition that remains lightweight in both training and testing while data is continuously accumulated to maintain a sample of the appearance variation needed for long-term place recognition. Our results show that the algorithm has significant potential towards achieving long-term autonomy.

References

  • [1] R. Arandjelovic, P. Gronat, A. Torii, T. Pajdla, and J. Sivic (2016) NetVLAD: CNN architecture for weakly supervised place recognition. In CVPR, Cited by: §1.
  • [2] O. Aycard, F. Charpillet, D. Fohr, and J. Mari (1997) Place learning and recognition using hidden Markov models. In IROS, Cited by: §2.1.
  • [3] A. Babenko and V. Lempitsky (2015) Tree quantization for large-scale similarity search and classification. In CVPR, Cited by: §4.2.
  • [4] E. Brachmann, A. Krull, S. Nowozin, J. Shotton, F. Michel, S. Gumhold, and C. Rother (2017) DSAC-differentiable RANSAC for camera localization. In CVPR, Cited by: §1.
  • [5] E. Brachmann and C. Rother (2018) Learning less is more-6d camera localization via 3d surface regression. In CVPR, Cited by: §1.
  • [6] E. Brachmann and T. Sattler (2018) Visual Localization: Feature-based vs. Learned Approaches. Note: https://sites.google.com/view/visual-localization-eccv-2018/home Cited by: §1.
  • [7] S. Brahmbhatt, J. Gu, K. Kim, J. Hays, and J. Kautz (2018) Geometry-aware learning of maps for camera localization. In CVPR, Cited by: §1, §1, §2.1, §6, §6.
  • [8] W. Churchill and P. Newman (2013) Experience-based navigation for long-term localisation. The International Journal of Robotics Research. Cited by: §1, §2.1, footnote 1.
  • [9] R. Clark, S. Wang, A. Markham, N. Trigoni, and H. Wen (2017) VidLoc: a deep spatio-temporal model for 6-DoF video-clip relocalization. In CVPR, Cited by: §1, §1, §6.
  • [10] M. Cummins and P. Newman (2008) FAB-MAP: probabilistic localization and mapping in the space of appearance. The International Journal of Robotics Research. Cited by: §1, §1, §2.1.
  • [11] M. Cummins and P. Newman (2011) Appearance-only SLAM at large scale with FAB-MAP 2.0. The International Journal of Robotics Research. Cited by: §2.1.
  • [12] M. Douze, H. Jégou, and F. Perronnin (2016) Polysemous codes. In ECCV, Cited by: §4.2.
  • [13] D. Gálvez-López and J. D. Tardos (2012) Bags of binary words for fast place recognition in image sequences. IEEE Transactions on Robotics. Cited by: §1, §1, §2.1.
  • [14] M. Haklay and P. Weber (2008) OpenStreetMap: user-generated street maps. IEEE Pervasive Computing. Cited by: §1.
  • [15] P. Hansen and B. Browning (2014) Visual place recognition using hmm sequence matching. In IROS, Cited by: §2.1.
  • [16] H. Jégou, M. Douze, C. Schmid, and P. Pérez (2010) Aggregating local descriptors into a compact image representation. In CVPR, Cited by: §4.2.
  • [17] H. Jegou, M. Douze, and C. Schmid (2011) Product quantization for nearest neighbor search. TPAMI. Cited by: §4.2.
  • [18] H. Jégou and A. Zisserman (2014) Triangulation embedding and democratic aggregation for image search. In CVPR, Cited by: §4.2.
  • [19] A. Kendall and R. Cipolla (2016) Modelling uncertainty in deep learning for camera relocalization. In ICRA, Cited by: §2.1.
  • [20] A. Kendall and R. Cipolla (2017) Geometric loss functions for camera pose regression with deep learning. In CVPR, Cited by: §1, §2.1, §6.
  • [21] A. Kendall, M. Grimes, and R. Cipolla (2015) PoseNet: a convolutional network for real-time 6-DoF camera relocalization. In CVPR, Cited by: §1, §2.1, §6.
  • [22] J. Kosecka and F. Li (2004) Vision based topological markov localization. In ICRA, Cited by: §2.1.
  • [23] Y. Latif, R. Garg, M. Milford, and I. Reid (2018) Addressing challenging place recognition tasks using generative adversarial networks. In ICRA, Cited by: §2.1.
  • [24] V. Lepetit, F. Moreno-Noguer, and P. Fua (2009) EPnP: an accurate O(n) solution to the PnP problem. IJCV. Cited by: §2.1.
  • [25] D. G. Lowe (2004) Distinctive image features from scale-invariant keypoints. IJCV. Cited by: §4.2.
  • [26] S. Lowry, N. Sünderhauf, P. Newman, J. J. Leonard, D. Cox, P. Corke, and M. J. Milford (2016) Visual place recognition: a survey. IEEE Transactions on Robotics. Cited by: §1, §2.1.
  • [27] W. Maddern, G. Pascoe, C. Linegar, and P. Newman (2017) 1 year, 1000 km: the oxford robotcar dataset. The International Journal of Robotics Research. Cited by: §6, §8.1.
  • [28] M. J. Milford and G. F. Wyeth (2012) SeqSLAM: Visual route-based navigation for sunny summer days and stormy winter nights. In ICRA, Cited by: §2.1.
  • [29] M. Muja and D. G. Lowe (2014) Scalable nearest neighbor algorithms for high dimensional data. TPAMI. Cited by: §4.2, §4.2.
  • [30] G. Neuhold, T. Ollmann, S. R. Bulò, and P. Kontschieder (2017) The Mapillary Vistas dataset for semantic understanding of street scenes. In ICCV, Cited by: §1, §1, §6.
  • [31] H. Porav, W. Maddern, and P. Newman (2018) Adversarial training for adverse conditions: robust metric localisation using appearance transfer. In ICRA, Cited by: §2.1.
  • [32] C. Rubino, A. Del Bue, and T. Chin (2018) Practical motion segmentation for urban street view scenes. In ICRA, Cited by: §3.
  • [33] S. J. Russell and P. Norvig (2016) Artificial intelligence: a modern approach. Pearson Education Limited. Cited by: §2.1, §4, §4.
  • [34] T. Sattler, B. Leibe, and L. Kobbelt (2017) Efficient & effective prioritized matching for large-scale image-based localization. TPAMI. Cited by: §1, §2.1.
  • [35] T. Sattler, W. Maddern, C. Toft, A. Torii, L. Hammarstrand, E. Stenborg, D. Safari, M. Okutomi, M. Pollefeys, J. Sivic, et al. (2018) Benchmarking 6DOF outdoor visual localization in changing conditions. In CVPR, Cited by: §1, §2.1, §3, §6.
  • [36] N. Savinov, A. Dosovitskiy, and V. Koltun (2018) Semi-parametric topological memory for navigation. In ICLR, Cited by: §2.1.
  • [37] S. Thrun, W. Burgard, and D. Fox (1998) A probabilistic approach to concurrent mapping and localization for mobile robots. Autonomous Robots. Cited by: §2.1.
  • [38] S. Thrun, W. Burgard, and D. Fox (2005) Probabilistic robotics. Cited by: §2.1.
  • [39] A. Torii, R. Arandjelovic, J. Sivic, M. Okutomi, and T. Pajdla (2015) 24/7 place recognition by view synthesis. In CVPR, Cited by: §4.2.
  • [40] A. Torii, R. Arandjelovic, J. Sivic, M. Okutomi, and T. Pajdla (2015) 24/7 place recognition by view synthesis. In CVPR, Cited by: §2.1.
  • [41] F. Walch, C. Hazirbas, L. Leal-Taixe, T. Sattler, S. Hilsenbeck, and D. Cremers (2017) Image-based localization using lstms for structured feature correlation. In ICCV, Cited by: §2.1.
  • [42] P. Wang, R. Yang, B. Cao, W. Xu, and Y. Lin (2018) DeLS-3d: deep localization and segmentation with a 3d semantic map. In CVPR, Cited by: §6.

8 Supplementary Material

Abbreviation Recorded Condition Sequence length
Seq-1 26/06/2014, 09:24:58 overcast 3164
Seq-2 26/06/2014, 08:53:56 overcast 3040
Seq-3 23/06/2014, 15:41:25 sun 3356
Seq-4 23/06/2014, 15:36:04 sun 3438
Seq-5 23/06/2014, 15:14:44 sun 3690
Seq-6 24/06/2014, 14:15:17 sun 3065
Seq-7 24/06/2014, 14:09:07 sun 3285
Seq-8 24/06/2014, 14:20:41 sun 3678
Table 5: Used sequences from the Oxford RobotCar dataset.

8.1 Statistics of Oxford RobotCar dataset

Statistics of the Oxford RobotCar [27] sequences we use are shown in Table 5.

Figure 6: Sample images from our Mapillary dataset. The database image and its corresponding query have different appearances due to changes in environmental conditions and traffic density.

8.2 Sample images of Mapillary

Sample images from our Mapillary dataset are shown in Figure 6.