KFNet: Learning Temporal Camera Relocalization using Kalman Filtering (CVPR 2020 Oral)

Temporal camera relocalization estimates the pose with respect to each video frame in sequence, as opposed to one-shot relocalization which focuses on a still image. Even though the time dependency has been taken into account, current temporal relocalization methods still generally underperform the state-of-the-art one-shot approaches in terms of accuracy. In this work, we improve temporal relocalization with a network architecture that incorporates Kalman filtering (KFNet) for online camera relocalization. In particular, KFNet extends the scene coordinate regression problem to the time domain in order to recursively establish 2D-3D correspondences for pose determination. The network architecture design and the loss formulation are based on Kalman filtering in the context of Bayesian learning. Extensive experiments on multiple relocalization benchmarks demonstrate the high accuracy of KFNet, which ranks at the top of both one-shot and temporal relocalization approaches. Our code is released at https://github.com/zlthinker/KFNet.



1 Introduction

Camera relocalization serves as a subroutine of applications including SLAM [16], augmented reality [10] and autonomous navigation [48]. It estimates the 6-DoF pose of a query RGB image in a known scene coordinate system. Current relocalization approaches mostly focus on one-shot relocalization for a still image. They can be mainly categorized into three classes [14, 53]: (1) the relative pose regression (RPR) methods, which determine the relative pose w.r.t. the database images [4, 30]; (2) the absolute pose regression (APR) methods, which regress the absolute pose through PoseNet [26] and its variants [24, 25, 64]; and (3) the structure-based methods, which establish 2D-3D correspondences with Active Search [51, 52] or Scene Coordinate Regression (SCoRe) [56] and then solve the pose with PnP algorithms [19, 45]. In particular, SCoRe has been widely adopted recently to learn per-pixel scene coordinates from dense training data for a scene, as it can form dense and accurate 2D-3D matches even in texture-less scenes [6, 7]. As extensively evaluated in [6, 7, 53], the structure-based methods generally show better pose accuracy than the RPR and APR methods, because they explicitly exploit the rules of projective geometry and the scene structures [53].

Apart from one-shot relocalization, temporal relocalization with respect to video frames is also worthy of investigation. However, almost all the temporal relocalization methods are based on PoseNet [26] and, in general, even underperform the structure-based one-shot methods in accuracy. This is mainly because their accuracies are fundamentally limited by the retrieval nature of PoseNet. As analyzed in [53], PoseNet-based methods are essentially analogous to approximate pose estimation via image retrieval, and cannot go beyond the retrieval baseline in accuracy.

In this work, we are motivated by the high accuracy of structure-based relocalization methods and resort to SCoRe to estimate per-pixel scene coordinates for pose computation. Besides, we propose to extend SCoRe to the time domain in a recursive manner to enhance the temporal consistency of 2D-3D matching, thus allowing for more accurate online pose estimation for sequential images. Specifically, a recurrent network named KFNet is proposed in the context of Bayesian learning [40] by embedding SCoRe into the Kalman filter within a deep learning framework. It is composed of the three subsystems below, as illustrated in Fig. 1.



  • The measurement system features a network termed SCoordNet to derive the maximum likelihood (ML) predictions of the scene coordinates for a single image.

  • The process system uses OFlowNet that models the optical flow based transition process for image pixels across time steps and yields the prior predictions of scene coordinates. Additionally, the measurement and process systems provide uncertainty predictions [43, 24] to model the noise dynamics over time.

  • The filtering system fuses both predictions and leads to the maximum a posteriori (MAP) estimations of the final scene coordinates.

Furthermore, we propose probabilistic losses for the three subsystems based on the Bayesian formulation of KFNet, to enable the training of either the subsystems or the full framework. We summarize the contributions as follows.


  • We are the first to extend the scene coordinate regression problem [56] to the time domain in a learnable way for temporally-consistent 2D-3D matching.

  • We integrate the traditional Kalman filter [23] into a recurrent CNN network (KFNet) that resolves pixel-level state inference over time-series images.

  • KFNet bridges the existing performance gap between temporal and one-shot relocalization approaches, and achieves top accuracy on multiple relocalization benchmarks [56, 61, 26, 46].

  • Lastly, for better practicality, we propose a statistical assessment tool to enable KFNet to self-inspect the potential outlier predictions on the fly.

2 Related Works

Camera relocalization. We categorize camera relocalization algorithms into three classes: the relative pose regression (RPR) methods, the absolute pose regression (APR) methods and the structure-based methods.

The RPR methods use a coarse-to-fine strategy which first finds similar images in the database through image retrieval [59, 3] and then computes the relative poses w.r.t. the retrieved images [4, 30, 49]. They have good generalization to unseen scenes, but the retrieval process needs to match the query image against all the database images, which can be costly for time-critical applications.

The APR methods include PoseNet [26] and its variants [24, 25, 64] which learn to regress the absolute camera poses from the input images through a CNN. They are simple and efficient, but generally fall behind the structure-based methods in terms of accuracy, as validated by [6, 7, 53]. Theoretically, [53] explains that PoseNet-based methods are more closely related to image retrieval than to accurate pose estimation via 3D geometry.

The structure-based methods explicitly establish the correspondences between 2D image pixels and 3D scene points and then solve camera poses with PnP algorithms [19, 45, 31]. Traditionally, correspondences are searched by matching patch features against Structure from Motion (SfM) tracks via Active Search [51, 52] and its variants [33, 11, 34, 50], which can be inefficient and fragile in texture-less scenarios. Recently, the correspondence problem has been resolved by predicting the scene coordinates of pixels by training random forests [56, 62, 39] or CNNs [6, 7, 32, 8] with ground truth scene coordinates, which is referred to as Scene Coordinate Regression (SCoRe).

Besides one-shot relocalization, some works have extended PoseNet to the time domain to address temporal relocalization. VidLoc [12] performs offline and batch relocalization for fixed-length video-clips by BLSTM [54]. Coskun et al. refine the pose dynamics by embedding LSTM units in the Kalman filters [13]. VLocNet [60] and VLocNet++ [46] propose to learn pose regression and the visual odometry jointly. LSG [67] combines LSTM with visual odometry to further exploit the spatial-temporal consistency. Since all the methods are extensions of PoseNet, their accuracies are fundamentally limited by the retrieval nature of PoseNet, following the analysis of [53].

Temporal processing. When processing time-series image data, ConvLSTM [65] is a standard way of modeling the spatial correlations of local contexts through time [63, 36, 29]. However, some works have pointed out that the implicit convolutional modeling is less suited to discovering the pixel associations between neighboring frames, especially when pixel-level accuracy is desired [22, 42]. Therefore, in later works, the optical flow is highlighted as a more explicit way of delineating the pixel correspondences across sequential steps [44]. For example, [44, 21, 29, 57, 42] commonly predict the optical flow fields to guide the feature map warping across time steps. Then, the warped features are fused by weighting [75, 76] or pooling [41, 44] to aggregate the temporal knowledge. In this work, we follow the practice of flow-guided warping, but the distinction from previous works is that we propose to fuse the predictions by leveraging Kalman filter principles [40].

Figure 1: The architecture of the proposed KFNet, which is decomposed into the process, measurement and filtering systems.

3 Bayesian Formulation

This section presents the Bayesian formulation of recursive scene coordinate regression in the time domain for temporal camera relocalization. Based on this formulation, the proposed KFNet is built and the probabilistic losses are defined in Secs. 4-6. The notations used below are summarized in Table 1 for quick reference.

Given a stream of RGB images up to time t, denoted I_{1:t} = {I_1, ..., I_t}, our aim is to predict the latent state of each frame, i.e., the scene coordinate map, which is then used for pose computation. We denote the map at time t as x_t ∈ R^{3N}, where N is the pixel number. By imposing a Gaussian noise assumption on the states, the state x_t conditioned on I_{1:t} follows an unknown Gaussian distribution:

x_t | I_{1:t} ~ N(μ_t, Σ_t),    (1)

where μ_t and Σ_t are the expectation and covariance to be determined. Under the routine of the Bayesian theorem, the posterior probability of x_t can be factorized as

P(x_t | I_{1:t}) ∝ P(x_t | I_{1:t-1}) P(I_t | x_t),    (2)

where I_{1:t-1} = {I_1, ..., I_{t-1}}.

The first factor of the right hand side (RHS) of Eq. 2 indicates the prior belief about x_t obtained from time t-1 through a process system. Provided that no occlusions or dynamic objects occur, the consecutive coordinate maps can be approximately associated by a linear process equation describing their pixel correspondences, wherein

x_t = G_t x_{t-1} + w_t,    (3)

with G_t being the sparse state transition matrix given by the optical flow fields from time t-1 to t, and w_t ~ N(0, W_t), W_t ∈ S_+^{3N} (S_+^N denotes the set of N-dimensional positive definite matrices), being the process noise. Given I_{1:t-1}, we already have the probability statement that x_{t-1} | I_{1:t-1} ~ N(μ_{t-1}, Σ_{t-1}). Then the prior estimation of x_t from time t-1 can be expressed as

x_t | I_{1:t-1} ~ N(G_t μ_{t-1}, R_t),    (4)

where R_t = G_t Σ_{t-1} G_t^T + W_t.


Module | Outputs
Process system | G_t - transition matrix
Process system | W_t - process noise covariance
Process system | G_t μ_{t-1} - prior state mean
Process system | R_t - prior state covariance
Measurement system | o_t - state observations
Measurement system | V_t - measurement noise covariance
Filtering system | e_t - innovation
Filtering system | K_t - Kalman gain
Filtering system | μ_t - posterior state mean
Filtering system | Σ_t - posterior state covariance

Table 1: The summary of variables and notations used in the Bayesian formulation of KFNet.

The second factor of the RHS of Eq. 2 describes the likelihood of the image observation at time t made through a measurement system. The system models how I_t is derived from the latent state x_t, formally I_t = h(x_t). However, the high nonlinearity of h makes the following computation intractable. Alternatively, we map I_t to observations o_t via a nonlinear function o_t = z(I_t), inspired by [13], so that the system can be approximately expressed by a linear measurement equation:

o_t = x_t + v_t,    (5)

where v_t ~ N(0, V_t) denotes the measurement noise, and o_t can be interpreted as the noisy observed scene coordinates. In this way, the likelihood P(I_t | x_t) can be re-written as P(o_t | x_t) by substituting o_t for I_t.

Let e_t denote the residual of predicting o_t from time t-1; thus

e_t = o_t - G_t μ_{t-1}.    (6)

Since o_t and G_t μ_{t-1} are both known, observing o_t is equivalent to observing e_t. Hence, the likelihood P(o_t | x_t) can be rewritten as P(e_t | x_t). Substituting Eq. 5 into Eq. 6, we have e_t = x_t - G_t μ_{t-1} + v_t, so that the likelihood can be described by

e_t | x_t ~ N(x_t - G_t μ_{t-1}, V_t).    (7)


Based on the theorems in multivariate statistics [1, 40], combining the two distributions 4 & 7 gives the bivariate normal distribution:

[x_t; e_t] | I_{1:t-1} ~ N( [G_t μ_{t-1}; 0], [R_t, R_t; R_t, R_t + V_t] ).    (8)

Making e_t the conditioning variable, the filtering system gives the posterior distribution, which writes

x_t | I_{1:t} ~ N(G_t μ_{t-1} + K_t e_t, (I - K_t) R_t),    (9)

where K_t = R_t (R_t + V_t)^{-1} is conceptually referred to as the Kalman gain and e_t as the innovation [40, 20]. (The derivation of Eqs. 8 & 9 is shown in Appendix B.)
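The step from the joint distribution to the posterior is the standard conditioning rule for jointly Gaussian variables (the paper defers the full derivation to its Appendix B). Writing R_t for the prior covariance, V_t for the measurement noise covariance and e_t for the innovation, a sketch using the general identities E[x|e] = μ_x + Σ_xe Σ_ee^{-1}(e - μ_e) and Cov[x|e] = Σ_xx - Σ_xe Σ_ee^{-1} Σ_ex is:

```latex
% Conditioning the joint Gaussian of the state and the innovation:
% here \Sigma_{xe} = R_t and \Sigma_{ee} = R_t + V_t.
\begin{aligned}
\mathbb{E}[x_t \mid e_t, I_{1:t-1}]
  &= G_t \mu_{t-1} + \underbrace{R_t (R_t + V_t)^{-1}}_{K_t}\, e_t, \\
\operatorname{Cov}[x_t \mid e_t, I_{1:t-1}]
  &= R_t - R_t (R_t + V_t)^{-1} R_t
   = (\mathbf{I} - K_t)\, R_t .
\end{aligned}
```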

As shown in Fig. 1, the inference of the posterior scene coordinates and covariances for image pixels proceeds recursively as the time t evolves; these are then used for online pose determination. Specifically, the pixels with variances greater than a preset uncertainty threshold are first excluded as outliers. Then, a RANSAC+P3P [19] solver is applied to compute the initial camera pose from the 2D-3D correspondences, followed by a nonlinear optimization for pose refinement.
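Per pixel, this recursion collapses to a scalar Kalman loop. The sketch below is a toy illustration under strong simplifying assumptions (a static scene point, identity flow, made-up isotropic noise levels), with the measurement system replaced by synthetic noisy observations rather than a network:

```python
import random

def fuse(prior_mean, prior_var, obs_mean, obs_var):
    """Filtering step: fuse the prior and the measurement (scalar Kalman update)."""
    k = prior_var / (prior_var + obs_var)          # Kalman gain
    post_mean = prior_mean + k * (obs_mean - prior_mean)
    post_var = (1.0 - k) * prior_var               # uncertainty shrinks
    return post_mean, post_var

def run_sequence(true_coord, obs_var, process_var, steps, seed=0):
    """Recursive inference for one static scene point seen in `steps` frames."""
    rng = random.Random(seed)
    mean, var = None, None
    for _ in range(steps):
        # Measurement step: a noisy per-frame scene-coordinate prediction.
        obs = true_coord + rng.gauss(0.0, obs_var ** 0.5)
        if mean is None:                           # first frame: no prior yet
            mean, var = obs, obs_var
            continue
        # Process step: warp the previous state forward (identity flow here)
        # and inflate its variance by the process noise.
        prior_mean, prior_var = mean, var + process_var
        mean, var = fuse(prior_mean, prior_var, obs, obs_var)
    return mean, var

mean, var = run_sequence(true_coord=2.0, obs_var=0.09,
                         process_var=0.001, steps=50)
# The filtered variance ends up well below the single-frame measurement variance.
```

With a small process noise, the steady-state variance is far smaller than the per-frame measurement variance, which is exactly the benefit the temporal recursion brings over one-shot prediction.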

4 The Measurement System

The measurement system is basically a generative model explaining how the observations o_t are generated from the latent scene coordinates x_t, as expressed in Eq. 5. The remaining problem is then to learn the underlying mapping from I_t to o_t. This is similar to the SCoRe task [56, 6, 7], but differs in the Gaussian noise constraint imposed on o_t by Eq. 5. Below, the architecture of SCoordNet is first introduced, which outputs the scene coordinate predictions along with the uncertainties that model the measurement noise v_t. Then, we define the probabilistic loss based on the likelihood of the measurement system.

Figure 2: The visualization of uncertainties which model the measurement noise and the process noise. (a) SCoordNet predicts larger uncertainties from single images over the object boundaries where larger errors occur. (b) OFlowNet gives larger uncertainties from the consecutive images (overlaid) over the areas where occlusions or dynamic objects appear.

4.1 Architecture

SCoordNet shares a similar fully convolutional structure to [7], as shown in Fig. 1, but is far more lightweight, with fewer than one eighth of the parameters of [7]. It encompasses twelve convolution layers, three of which use a stride of 2 to downsize the input by a factor of 8. ReLU follows each layer except the last one. To simplify computation and avoid the risk of over-parameterization, we postulate an isotropic covariance for the multivariate Gaussian measurement noise, i.e., v_i ~ N(0, σ_i² I_3) for each pixel i, where I_3 denotes the 3×3 identity matrix. The output thus has 4 channels, comprising the 3-d scene coordinates and a 1-d uncertainty measurement.
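Under the isotropic-noise assumption, each pixel's four output channels split into a 3-d coordinate and one logarithmic variance. A minimal sketch (the channel ordering here is an assumption, not specified above):

```python
import math

def split_output(pixel_channels):
    """Interpret a 4-channel per-pixel output as a scene coordinate plus an
    isotropic variance. Assumed layout: [x, y, z, log_variance]."""
    x, y, z, log_var = pixel_channels
    return (x, y, z), math.exp(log_var)  # exp recovers sigma^2 from log-space

xyz, var = split_output([0.3, -1.2, 2.5, math.log(0.04)])
```

Regressing the variance in log-space keeps it positive by construction and avoids numerical issues for very small uncertainties.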

4.2 Loss

According to Eq. 5, the latent scene coordinates of pixel i should follow the distribution N(ŷ_i, σ_i² I_3), where ŷ_i and σ_i² are the predicted coordinates and variance. Taking the negative logarithm of the probability density function (PDF) of this distribution, we define the loss based on the likelihood, which gives rise to the maximum likelihood (ML) estimation for each pixel, in the form [24]:

L_likelihood = Σ_i ( 3 log σ_i + ||y_i - ŷ_i||² / (2 σ_i²) ),    (10)

with y_i being the groundtruth label for pixel i. For numerical stability, we use the logarithmic variance as the uncertainty measurement in practice, i.e., s_i = log σ_i².

Including uncertainty learning in the loss formulation allows one to quantify the prediction errors stemming not just from the intrinsic noise in the data but also from the defined model [15]. For example, at the boundary with depth discontinuity, a sub-pixel offset would cause an abrupt coordinate shift which is hard to model. SCoordNet would easily suffer from a significant magnitude of loss in such cases. It is sensible to automatically downplay such errors during training by weighting with the uncertainty measurements. Fig. 2(a) illustrates the uncertainty predictions in such cases.
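A small numeric sketch of a per-pixel heteroscedastic loss of this form (the coordinate values and variances are made up) shows how a larger predicted variance downplays an unavoidable boundary error:

```python
import math

def ml_loss(pred, gt, log_var):
    """Per-pixel negative log-likelihood of an isotropic 3-D Gaussian,
    constants dropped: 3/2 * log(sigma^2) + ||gt - pred||^2 / (2 sigma^2).
    `log_var` is the predicted logarithmic variance."""
    var = math.exp(log_var)
    sq_err = sum((g - p) ** 2 for g, p in zip(gt, pred))
    return 1.5 * log_var + sq_err / (2.0 * var)

# A pixel on a depth boundary with a large 0.5 m coordinate error:
pred, gt = [0.0, 0.0, 0.0], [0.5, 0.0, 0.0]
confident = ml_loss(pred, gt, log_var=math.log(0.01))  # sigma^2 = 0.01
uncertain = ml_loss(pred, gt, log_var=math.log(0.25))  # sigma^2 = 0.25
# Predicting a larger variance yields a smaller loss on this hard pixel,
# while the log-variance term penalizes blanket over-estimation elsewhere.
```

This is the mechanism by which training automatically down-weights errors at depth discontinuities instead of forcing the network to fit them.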

5 The Process System

The process system models the transition of the pixel states from time t-1 to t, as described by the process equation of Eq. 3. Herein, we first propose a cost volume based network, OFlowNet, to jointly predict the optical flows and the process noise covariance for each pixel. Once the optical flows are determined, Eq. 3 is equivalent to flow-guided warping from time t-1 towards t, as commonly used in [44, 21, 29, 57, 42]. Second, after the warping, the prior distribution of the states, i.e., Eq. 4, can be evaluated. We then define the probabilistic loss based on this prior to train OFlowNet.

Figure 3: Sample optical flows predicted by OFlowNet over consecutive images (overlaid) of three different datasets [56, 61, 26].

5.1 Architecture

OFlowNet is composed of two components: the cost volume constructor and the flow estimator.

The cost volume constructor first extracts features from the two input images I_{t-1} and I_t through seven convolutions, three of which have a stride of 2. The output feature maps F_{t-1} and F_t have a spatial size of one eighth of the inputs. Then, we build up a cost volume over a search window for each pixel i of the feature map F_t, so that

C_i(d) = F_t(i)^T F_{t-1}(i + d),  d ∈ [-w/2, w/2]²,    (11)

where w is the size of the search window in the feature maps, which corresponds to 8w pixels in the full-resolution image, and d is the spatial offset. We apply L2-normalization to the feature maps along the channel dimension before taking the dot products, as in [66, 35].

The following flow estimator operates over the cost volumes for flow inference. We use a U-Net with skip connections [47], as shown in Fig. 1, which first subsamples the cost volume for an enlarged receptive field and then upsamples it back to the original resolution. The output is an unbounded confidence map over the search window of each pixel. Related works usually attain flows by hard assignment based on the matching cost encapsulated in the cost volumes [66, 58]. However, this would cause non-differentiability in later steps, where the optical flows are further used for spatial warping. Thus, we pass the confidence map through the differentiable spatial softmax operator [17] to compute the optical flow as the expectation of the pixel offsets inside the search window. Formally,

f_i = ( Σ_{d} exp(c_i(d)) d ) / ( Σ_{d} exp(c_i(d)) ),    (12)

where c_i(d) is the confidence of pixel i at offset d. To fulfill the process noise modeling, i.e., w_t in Eq. 3, we append three fully connected layers after the bottleneck of the U-Net to regress the logarithmic variance, as shown in Fig. 1. Sample optical flow predictions are visualized in Fig. 3.
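The spatial softmax readout can be sketched on a single pixel with a hand-made 3×3 window of matching confidences (the window size and values are illustrative only):

```python
import math

def expected_flow(costs):
    """Soft-argmax over a search window: `costs` maps an offset (dx, dy) to a
    matching confidence; returns the softmax-weighted expected offset."""
    z = sum(math.exp(c) for c in costs.values())
    fx = sum(dx * math.exp(c) for (dx, dy), c in costs.items()) / z
    fy = sum(dy * math.exp(c) for (dx, dy), c in costs.items()) / z
    return fx, fy

# A 3x3 window whose strongest correlation sits at offset (1, 0):
costs = {(dx, dy): 0.0 for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
costs[(1, 0)] = 4.0
fx, fy = expected_flow(costs)
# fx is pulled toward +1 while fy stays 0 by symmetry. Unlike a hard argmax,
# this expectation is differentiable in the confidences, so gradients can
# flow back through the subsequent warping.
```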

5.2 Loss

Once the optical flows are computed, the state transition matrix G_t of Eq. 3 can be evaluated. We then complete the linear transition process of Eq. 3 by warping the scene coordinate map and uncertainty map from time t-1 towards t through bilinear warping [72]. Let x̄_i and σ̄_i² be the warped scene coordinates and Gaussian variance of pixel i, and δ_i² be the Gaussian variance of the process noise of pixel i at time t. Then, the prior coordinates of pixel i should follow the distribution

N(x̄_i, r_i² I_3),    (13)

where r_i² = σ̄_i² + δ_i². Taking the negative logarithm of the PDF of this prior, we get the loss of the process system as

L_prior = Σ_i ( 3 log r_i + ||y_i - x̄_i||² / (2 r_i²) ).    (14)

It is noteworthy that the loss definition uses the prior distribution of the scene coordinates to provide weak supervision for training OFlowNet, with no recourse to optical flow labels.

One issue with the proposed process system is that it assumes no occurrence of occlusions or dynamic objects, which are two outstanding challenges for tracking problems [28, 77]. Our process system partially addresses this issue by providing the uncertainty measurements of the process noise. As shown in Fig. 2(b), OFlowNet generally produces much larger uncertainty estimates for pixels from occluded areas and dynamic objects. This assigns lower weights in the loss computation to these pixels, whose flow predictions are incorrect.

6 The Filtering System

The measurement and process systems in the previous two sections have derived the likelihood and prior estimations of the scene coordinates, respectively. The filtering system aims to fuse both of them based on Eq. 9 to yield the posterior estimation.

6.1 Loss

For a pixel i at time t, N(ŷ_i, σ_i² I_3) and N(x̄_i, r_i² I_3) are respectively the likelihood and prior distributions of its scene coordinates. Putting these variables into Eqs. 6 & 9, we evaluate the innovation and the Kalman gain at pixel i as

e_i = ŷ_i - x̄_i,  k_i = r_i² / (r_i² + σ_i²).    (15)

Imposing the linear Gaussian postulate of the Kalman filter, the fused scene coordinates of pixel i with the least square error follow the posterior distribution below [40]:

N(μ_i, ς_i² I_3),    (16)

where μ_i = x̄_i + k_i e_i and ς_i² = (1 - k_i) r_i². Hence, the Kalman filtering system is parameter-free, with the loss defined based on the posterior distribution:

L_posterior = Σ_i ( 3 log ς_i + ||y_i - μ_i||² / (2 ς_i²) ),    (17)

which is then added to the full loss that allows the end-to-end training of KFNet:

L = L_likelihood + L_prior + L_posterior.    (18)
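Per pixel, with isotropic variances, the fusion reduces to scalar algebra; a sketch with made-up numbers:

```python
def kalman_fuse(prior_xyz, prior_var, obs_xyz, obs_var):
    """Per-pixel fusion with isotropic covariances (variances are scalars).
    Returns the posterior scene coordinate and its variance."""
    innovation = [o - p for o, p in zip(obs_xyz, prior_xyz)]
    gain = prior_var / (prior_var + obs_var)
    post_xyz = [p + gain * e for p, e in zip(prior_xyz, innovation)]
    post_var = (1.0 - gain) * prior_var
    return post_xyz, post_var

# A pixel whose warped prior (variance 0.04) disagrees slightly with the
# per-frame measurement (variance 0.01); the numbers are illustrative.
xyz, var = kalman_fuse([1.0, 2.0, 3.0], 0.04, [1.1, 2.0, 3.0], 0.01)
# The posterior sits closer to the more certain measurement, and its
# variance is smaller than either input variance.
```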

Figure 4: The illustration of NIS testing for the filtering system. The histogram draws the exemplar distribution of the Normalized Innovation Squared (NIS) values of the Kalman filter. The red curve denotes the PDF of the 3-DoF Chi-squared distribution χ²(3). NIS testing works by filtering out the inconsistent predictions whose NIS values locate outside the acceptance region (red shaded) of χ²(3).

6.2 Consistency Examination

In practice, the filter could behave incorrectly due to outlier estimations caused by erratic scene coordinate regression or a failure of flow tracking. This would induce accumulated state errors in the long run. Therefore, we use a statistical assessment tool, the Normalized Innovation Squared (NIS) [5], to filter out the inconsistent predictions during inference.


Scene | One-shot: MapNet [9] | CamNet [14] | Active Search [52] | DSAC++ [7] | SCoordNet (ours) | Temporal: VidLoc [12] | Coskun et al. [13] | VLocNet++ [46] | LSG [67] | KFNet (ours)

chess | 0.08m, 3.25° | 0.04m, 1.73° | 0.04m, 1.96° | 0.02m, 0.5° | 0.019m, 0.63° | 0.18m, - | 0.33m, 6.9° | 0.023m, 1.44° | 0.09m, 3.28° | 0.018m, 0.65°
fire | 0.27m, 11.7° | 0.03m, 1.74° | 0.03m, 1.53° | 0.02m, 0.9° | 0.023m, 0.91° | 0.26m, - | 0.41m, 15.7° | 0.018m, 1.39° | 0.26m, 10.92° | 0.023m, 0.90°
heads | 0.18m, 13.3° | 0.05m, 1.98° | 0.02m, 1.45° | 0.01m, 0.8° | 0.018m, 1.26° | 0.21m, - | 0.28m, 13.01° | 0.016m, 0.99° | 0.17m, 12.70° | 0.014m, 0.82°
office | 0.17m, 5.15° | 0.04m, 1.62° | 0.09m, 3.61° | 0.03m, 0.7° | 0.026m, 0.73° | 0.36m, - | 0.43m, 7.65° | 0.024m, 1.14° | 0.18m, 5.45° | 0.025m, 0.69°
pumpkin | 0.22m, 4.02° | 0.04m, 1.64° | 0.08m, 3.10° | 0.04m, 1.1° | 0.039m, 1.09° | 0.31m, - | 0.49m, 10.63° | 0.024m, 1.45° | 0.20m, 3.69° | 0.037m, 1.02°
redkitchen | 0.23m, 4.93° | 0.04m, 1.63° | 0.07m, 3.37° | 0.04m, 1.1° | 0.039m, 1.18° | 0.26m, - | 0.57m, 8.53° | 0.025m, 2.27° | 0.23m, 4.92° | 0.038m, 1.16°
stairs | 0.30m, 12.1° | 0.04m, 1.51° | 0.03m, 2.22° | 0.09m, 2.6° | 0.037m, 1.06° | 0.14m, - | 0.46m, 14.56° | 0.021m, 1.08° | 0.23m, 11.3° | 0.033m, 0.94°
Average | 0.207m, 7.78° | 0.040m, 1.69° | 0.051m, 2.46° | 0.036m, 1.10° | 0.029m, 0.98° | 0.246m, - | 0.424m, 11.00° | 0.022m, 1.39° | 0.190m, 7.47° | 0.027m, 0.88°

GreatCourt | - | - | - | 0.40m, 0.2° | 0.43m, 0.20° | - | - | - | - | 0.42m, 0.21°
KingsCollege | 1.07m, 1.89° | - | 0.42m, 0.55° | 0.18m, 0.3° | 0.16m, 0.29° | - | 2.01m, 5.35° | - | - | 0.16m, 0.27°
OldHospital | 1.94m, 3.91° | - | 0.44m, 1.01° | 0.20m, 0.3° | 0.18m, 0.29° | - | 2.35m, 5.05° | - | - | 0.18m, 0.28°
ShopFacade | 1.49m, 4.22° | - | 0.12m, 0.40° | 0.06m, 0.3° | 0.05m, 0.34° | - | 1.63m, 6.89° | - | - | 0.05m, 0.31°
StMarysChurch | 2.00m, 4.53° | - | 0.19m, 0.54° | 0.13m, 0.4° | 0.12m, 0.36° | - | 2.61m, 8.94° | - | - | 0.12m, 0.35°
Street | - | - | 0.85m, 0.83° | - | - | - | 3.05m, 5.62° | - | - | -
Average* | 1.63m, 3.64° | - | 0.29m, 0.63° | 0.14m, 0.33° | 0.13m, 0.32° | - | 2.15m, 6.56° | - | - | 0.13m, 0.30°

DeepLoc | - | - | 0.010m, 0.04° | - | 0.083m, 0.45° | - | - | 0.320m, 1.48° | - | 0.065m, 0.43°

* The average does not include the errors of GreatCourt and Street, as some methods do not report results on these two scenes.

Table 2: The median translation and rotation errors of different relocalization methods. Best results are in bold.

Normally, the innovation e_i of pixel i follows the Gaussian distribution N(0, s_i² I_3), as shown by Eq. 8, where s_i² = r_i² + σ_i². Then, the NIS value e_iᵀ e_i / s_i² is supposed to follow the Chi-squared distribution with three degrees of freedom, denoted as χ²(3). It is thus reasonable to see a pixel state as an outlier if its NIS value locates outside the acceptance region of χ²(3). As illustrated in Fig. 4, we use a fixed quantile of χ²(3) as the critical value in the NIS test, so that a pixel state failing it is regarded as negative with high statistical confidence. The uncertainties of the pixels failing the test are reset to be infinitely large, so that they will have no effect in later steps.
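A minimal per-pixel version of the NIS check; choosing the 0.95 quantile of χ²(3) (≈ 7.815) as the acceptance region is an assumption for this sketch, and the variances are made-up numbers:

```python
CHI2_3DOF_95 = 7.815  # 0.95 quantile of the Chi-squared distribution, 3 DoF

def nis_outlier(innovation_xyz, innovation_var):
    """Flag a pixel whose Normalized Innovation Squared falls outside the
    chi^2(3) acceptance region; such pixels get infinite uncertainty."""
    nis = sum(e * e for e in innovation_xyz) / innovation_var
    return nis > CHI2_3DOF_95

# A small, consistent innovation passes; a gross one (e.g. after a tracking
# failure) is rejected and would be reset with infinite variance.
ok = nis_outlier([0.02, -0.01, 0.0], innovation_var=0.01)   # NIS = 0.05
bad = nis_outlier([0.5, 0.5, 0.5], innovation_var=0.01)     # NIS = 75.0
```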


Method | DSAC++ [7] | ESAC [8] | SCoordNet | KFNet
Type | one-shot | one-shot | one-shot | temporal
5cm-5deg accuracy | 96.8% | 97.8% | 98.9% | 99.2%

Table 3: The 5cm-5deg accuracy of one-shot and temporal relocalization methods on 12scenes [61].

7 Experiments

7.1 Experiment Settings

Datasets. Following previous works [26, 6, 7, 46], we use two indoor datasets - 7scenes [56] and 12scenes [61], and two outdoor datasets - DeepLoc [46] and Cambridge [26] for evaluation. Each scene has been split into different strides of sequences for training and testing.

Data processing. Images are downsized for all four datasets. The groundtruth scene coordinates of 7scenes and 12scenes are computed from the given camera poses and depth maps, whereas those of DeepLoc and Cambridge are rendered from surfaces reconstructed with the training images.

Figure 5: The point clouds predicted by different relocalization methods. Our SCoordNet and KFNet increasingly suppress the noise as highlighted by the red boxes and produce much neater point clouds than the state-of-the-art DSAC++ [7]. The KFNet-filtered panel filters out the points of KFNet of which the uncertainties are too large and gives rather clean and accurate mapping results.

Training. The ADAM optimizer [27] is used, with an initial learning rate that is then dropped with exponential decay. The training procedure has 3 stages. First, we train SCoordNet for each scene with the likelihood loss (Eq. 10). The iteration number is set to be proportional to the surface area of each scene. In particular, we use SCoordNet as the one-shot version of the proposed approach. Second, OFlowNet is trained using all the scenes of each dataset with the prior loss (Eq. 14), under the same exponentially decaying learning rate schedule. Each batch is composed of two consecutive frames. The window size of OFlowNet in the original images is set to 64, 128, 192 and 256 for the four datasets mentioned above, respectively, due to the increasing ego-motion through them. Third, we fine-tune all the parameters of KFNet jointly by optimizing the full loss (Eq. 18). Each batch in the third stage contains four consecutive frames.

7.2 Results

7.2.1 The Relocalization Accuracy

Following [6, 7, 12, 60], we use two accuracy metrics: (1) the median rotation and translation error of poses (see Table 2); (2) the 5cm-5deg accuracy (see Table 3), i.e., the mean percentage of the poses with translation and rotation errors less than 5 cm and 5°, respectively. The uncertainty threshold (Sec. 3) is set to 5 cm for 7scenes and 12scenes and 50 cm for DeepLoc and Cambridge.

One-shot relocalization. Our SCoordNet achieves the lowest pose errors on 7scenes and Cambridge, and the highest 5cm-5deg accuracy on 12scenes among the one-shot methods, surpassing CamNet [14] and MapNet [9], which are the state-of-the-art relative and absolute pose regression methods, respectively. In particular, SCoordNet outperforms the state-of-the-art structure-based methods DSAC++ [7] and ESAC [8], yet with fewer parameters. The advantage of SCoordNet should be mainly attributed to the uncertainty modeling, as we analyze in Appendix C. It also surpasses Active Search (AS) [52] on 7scenes and Cambridge, but underperforms AS on DeepLoc. We find that, in the experiments of AS on DeepLoc [53], AS is tested on an SfM model built with both training and test images. This may explain why AS is surprisingly more accurate on DeepLoc than on other datasets, since the 2D-3D matches between the test images and the SfM tracks have already been established, and their geometry optimized, during the SfM reconstruction.

Temporal relocalization. Our KFNet improves over SCoordNet on all the datasets, as shown in Tables 2 & 3. The improvement on Cambridge is marginal, as the images are sparsely sampled from videos. The overly large motions between frames make it hard to model the temporal correlations. KFNet obtains much lower pose errors than the other temporal methods, except that it has a larger translation error than VLocNet++ [46] on 7scenes. However, the performance of VLocNet++ is inconsistent across datasets. On DeepLoc, the dataset collected by the authors of VLocNet++, VLocNet++ has a much larger pose error than KFNet, even though it also integrates semantic segmentation into learning. This inconsistency is also observed in [53], which shows that VLocNet++ cannot substantially exceed the accuracy of retrieval-based methods [59, 3].


Method | 7scenes mean | 7scenes stddev | 12scenes mean | 12scenes stddev | DeepLoc mean | DeepLoc stddev | Cambridge mean | Cambridge stddev
DSAC++ [7] | 28.8 | 33.1 | 28.8 | 47.1 | - | - | 467.3 | 883.7
SCoordNet | 16.8 | 23.3 | 9.8 | 20.0 | 883.0 | 1520.8 | 272.7 | 497.6
KFNet | 15.3 | 21.7 | 7.3 | 13.7 | 200.79 | 398.8 | 241.5 | 441.7

Table 4: The mean and standard deviation of the predicted scene coordinate errors, in centimeters.

7.2.2 The Mapping Accuracy

Relocalization methods based on SCoRe [56, 7] can create a mapping result for each view by predicting per-pixel scene coordinates. Hence, relocalization and mapping can be seen as dual problems, as one can be easily resolved once the other is known. Here, we would like to evaluate the mapping accuracy with the mean and the standard deviation (stddev) of scene coordinate errors of the test images.

As shown in Table 4, the mapping accuracy is in accordance with the relocalization accuracy reported in Sec. 7.2.1. SCoordNet greatly reduces the mean and stddev values compared with DSAC++, and KFNet further reduces the mean error over SCoordNet on all four datasets. The improvements are also reflected in the predicted point clouds, as visualized in Fig. 5. SCoordNet and KFNet predict less noisy scene points with better temporal consistency than DSAC++. Additionally, we filter out the points of KFNet whose uncertainties exceed the threshold, as displayed in the KFNet-filtered panel of Fig. 5, which helps to give much neater and more accurate 3D point clouds.

7.2.3 Motion Blur Experiments

Figure 6: (a) Artificial motion blur images. (b) & (c) The cumulative distribution functions (CDFs) of pose errors before and after motion blur is applied.

Although, in terms of the mean scene coordinate error in Table 4, SCoordNet outperforms DSAC++ and KFNet further improves over SCoordNet, the improvements in terms of the median pose error in Table 2 are not as significant. The main reason is that the RANSAC-based PnP solver diminishes the benefits brought by the scene coordinate improvements, since only the small subset of accurate scene coordinates selected by RANSAC matters to the pose accuracy. Therefore, to highlight the advantage of KFNet, we conduct more challenging experiments on motion-blurred images, which are quite common in real scenarios. For the test image sequences of 7scenes, we apply a motion blur filter with a kernel size of 30 pixels to every 10th image, as shown in Fig. 6(a). In Fig. 6(b)&(c), we plot the cumulative distribution functions of the pose errors before and after applying motion blur. Thanks to the uncertainty reasoning, SCoordNet generally attains smaller pose errors than DSAC++ whether or not motion blur is present. While SCoordNet and DSAC++ show a performance drop after motion blur is applied, KFNet maintains the pose accuracy, as shown in Fig. 6(b)&(c), leading to a more notable margin between KFNet and SCoordNet and demonstrating the benefit of the temporal modeling used by KFNet.

7.3 Ablation studies


Method | SCoordNet (one-shot) | ConvLSTM [65] | TPooler [44] | SWeight [75] | KFNet
Median error | 0.029m, 0.98° | 0.040m, 1.12° | 0.029m, 0.94° | 0.029m, 0.95° | 0.027m, 0.88°

Table 5: The median pose errors produced by different temporal aggregation methods on 7scenes. Our KFNet achieves better pose accuracy than the other temporal aggregation strategies.
Evaluation of Temporal Aggregation.

This section studies the efficacy of our Kalman filter based framework in comparison with other popular temporal aggregation strategies, including ConvLSTM [65, 29], the temporal pooler (TPooler) [44] and similarity weighting (SWeight) [75, 76]. KFNet is most related to TPooler and SWeight, which also use flow-guided warping, yet within an n-frame neighborhood. For a fair comparison, the same feature network and probabilistic losses as KFNet are applied to all. We use a kernel size of for ConvLSTM to ensure a window size of in images. The same OFlowNet structure and a -frame neighborhood are used for TPooler and SWeight for flow-guided warping.
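Flow-guided warping, as used by TPooler and SWeight, resamples the previous frame's per-pixel predictions at flow-displaced locations. A minimal numpy sketch of such bilinear warping (our own simplification; in the actual systems OFlowNet predicts the flow and the maps carry uncertainties as well):

```python
import numpy as np

def warp_with_flow(prev_map, flow):
    """Warp a per-pixel map from frame t-1 into frame t.

    prev_map: (H, W, C) scene-coordinate (or feature) map at t-1.
    flow:     (H, W, 2) flow from frame t back to t-1, as (dy, dx).
    Bilinear sampling with border clamping.
    """
    H, W, _ = prev_map.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    sy = np.clip(ys + flow[..., 0], 0, H - 1)  # sample rows in t-1
    sx = np.clip(xs + flow[..., 1], 0, W - 1)  # sample cols in t-1
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.minimum(y0 + 1, H - 1), np.minimum(x0 + 1, W - 1)
    wy, wx = (sy - y0)[..., None], (sx - x0)[..., None]
    top = prev_map[y0, x0] * (1 - wx) + prev_map[y0, x1] * wx
    bot = prev_map[y1, x0] * (1 - wx) + prev_map[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

With a zero flow field this reduces to the identity, which is a handy sanity check when wiring such warping into a larger pipeline.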

Table 5 shows the comparative results on 7scenes. ConvLSTM largely underperforms SCoordNet and the other aggregation methods in pose accuracy, which shows the necessity of explicitly determining the pixel associations between frames instead of modeling them implicitly. Although flow-guided warping is employed, TPooler and SWeight achieve only marginal improvements over SCoordNet compared with KFNet, which justifies the advantage of the Kalman filtering system. Compared with TPooler and SWeight, the Kalman filter behaves as a more disciplined and non-heuristic approach to temporal aggregation that ensures an optimal solution of the linear Gaussian state-space model [18] defined in Sec. 3.
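Assuming scalar per-pixel variances, the Kalman update that fuses the process-system prior with the measurement-system prediction reduces to a precision-weighted average per pixel. A generic sketch of this fusion step (standard Kalman equations, not the paper's released implementation):

```python
import numpy as np

def kalman_fuse(prior_mean, prior_var, meas_mean, meas_var):
    """Per-pixel Kalman update fusing the process-system prior with the
    measurement-system prediction. All arrays share one shape; variances
    are diagonal (one scalar per pixel/channel).
    Returns the posterior mean and variance."""
    gain = prior_var / (prior_var + meas_var)   # Kalman gain
    post_mean = prior_mean + gain * (meas_mean - prior_mean)
    post_var = (1.0 - gain) * prior_var         # reduced uncertainty
    return post_mean, post_var
```

Note that the posterior variance never exceeds either input variance, which is one way to see why the fused scene coordinates are less noisy than those of either subsystem alone.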

Evaluation of Consistency Examination

Here, we explore the functionality of the consistency examination based on NIS testing [5] (see Sec. 6.2). Due to the infrequent occurrence of extreme outlier predictions in the well-built relocalization datasets, we simulate lost-tracking situations by trimming a sub-sequence off each testing sequence of 7scenes and 12scenes. Let and denote the last frame before and the first frame after the trimming, respectively. The discontinuous motion from to causes outlier scene coordinate predictions for by KFNet. Fig. 7 plots the mean pose and scene coordinate errors of frames around and visualizes the poses of a sample trimmed sequence. With the NIS test, the errors revert to a normal level promptly right after , whereas without the NIS test, the accuracy of the poses after is adversely affected. NIS testing stops the propagation of the outlier predictions of into later steps by giving them infinitely large uncertainties, so that leaves out the prior from and reinitializes itself with the predictions of the measurement system.
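The NIS test itself is standard [5]: the squared innovation is normalized by its predicted variance and compared against a chi-square quantile. A per-pixel scalar sketch (the 99% threshold and the exact gating mechanics here are illustrative choices of ours; the paper specifies only that rejected priors receive infinitely large uncertainties):

```python
import numpy as np

CHI2_1DOF_99 = 6.635  # 99% chi-square quantile, 1 degree of freedom

def nis_gate(prior_mean, prior_var, meas_mean, meas_var, thresh=CHI2_1DOF_99):
    """Normalized Innovation Squared test per pixel (scalar case).
    Where NIS exceeds the chi-square threshold, the prior is deemed
    inconsistent (e.g. after lost tracking) and dropped by assigning it
    infinite variance, so the filter reinitializes from the measurement."""
    innovation = meas_mean - prior_mean
    nis = innovation ** 2 / (prior_var + meas_var)
    rejected = nis > thresh
    gated_var = np.where(rejected, np.inf, prior_var)
    return gated_var, rejected
```

Downstream fusion then treats an infinite prior variance as "use the measurement only", which matches the reinitialization behavior described above.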

Figure 7: (a) & (b) With NIS testing [5], the errors of poses and scene coordinates quickly revert to normal after the lost tracking. (c) The poses of a sample sequence show that, without NIS testing, the lost tracking adversely affects the pose accuracy of the subsequent frames.

8 Conclusion

This work addresses the temporal camera relocalization problem by proposing a recurrent network named KFNet, which extends the scene coordinate regression problem to the time domain for online pose determination. The architecture and the loss definition of KFNet are grounded in the Kalman filter, which allows a disciplined manner of aggregating the pixel-level predictions through time. The proposed approach yields the top accuracy among the state-of-the-art relocalization methods over multiple benchmarks. Although KFNet is only validated on the camera relocalization task here, we anticipate its immediate application to other tasks such as video processing [21, 29], video segmentation [63, 42] and object tracking [35, 76].


Appendix A Full Network Architecture

As a supplement to the main paper, we detail the parameters of the layers of SCoordNet and OFlowNet used for training 7scenes in Table 10 at the end of the appendix.

Appendix B Supplementary Derivation of the Bayesian Formulation

This section supplements the derivation of the distributions 8 & 9 in the main paper.

Let us denote the bivariate Gaussian distribution of the latent state and the innovation conditional on as


where . Based on the multivariate statistics theorems [2], the conditional distribution of given is expressed as


and similarly,


Conversely, if Eq. 20 holds and , Eq. 19 also holds according to [2]. Since we already have in Eq. 4 of the main paper, we can note that


Recalling Eq. 7 of the main paper, we already have


Equating Eq. 21 and Eq. 23, we have


Substituting the variables of Eqs. 22 & 24 into Eqs. 19 & 20, we recover the distributions 8 & 9 of the main paper.
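For reference, the multivariate-statistics result from [2] invoked above is the standard conditional-Gaussian identity; in generic notation (our symbols, not the paper's), for jointly Gaussian variables:

```latex
\begin{pmatrix} \mathbf{x} \\ \mathbf{y} \end{pmatrix}
\sim \mathcal{N}\!\left(
\begin{pmatrix} \boldsymbol{\mu}_x \\ \boldsymbol{\mu}_y \end{pmatrix},
\begin{pmatrix} \Sigma_{xx} & \Sigma_{xy} \\ \Sigma_{yx} & \Sigma_{yy} \end{pmatrix}
\right)
\;\Longrightarrow\;
\mathbf{x} \mid \mathbf{y} \sim \mathcal{N}\!\left(
\boldsymbol{\mu}_x + \Sigma_{xy}\Sigma_{yy}^{-1}(\mathbf{y}-\boldsymbol{\mu}_y),\;
\Sigma_{xx} - \Sigma_{xy}\Sigma_{yy}^{-1}\Sigma_{yx}
\right)
```

Applying this identity to the joint distribution of the latent state and the innovation yields the conditional forms used in Eqs. 19 & 20.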

Figure 8:

(a) The confusion matrix of 19 scenes given by our uncertainty predictions. The redder a block (i, j), the more likely it is that the images of the j-th scene belong to the i-th scene. (b) The CDFs of scene coordinate errors given by SCoordNet and OFlowNet with or without uncertainty modeling.

Appendix C Ablation Study on the Uncertainty Modeling

The uncertainty modeling, which helps to quantify the measurement and process noise, is an indispensable component of KFNet. In this section, we conduct ablation studies on it.

First, we run the trained KFNet of each scene from 7scenes and 12scenes over the test images of every scene exhaustively and visualize the median uncertainties as the confusion matrix in Fig. 8(a). The uncertainties within the same scene, on the main diagonal, are much lower than those between different scenes. This indicates that meaningful uncertainties are learned, which can even be used for scene recognition. Second, we qualitatively compare SCoordNet and OFlowNet against counterparts trained with an L2 loss without uncertainty modeling. The cumulative distribution functions (CDFs) of scene coordinate errors tested on 7scenes and 12scenes are shown in Fig. 8(b). The uncertainty modeling leads to more accurate predictions for both SCoordNet and OFlowNet. We attribute the improvements to the fact that the uncertainties auto-weight the loss term of each pixel, as in Eqs. 10 & 14 of the main paper, which prevents the learning from getting stuck on hard or infeasible examples such as the boundary pixels for SCoordNet and the occluded pixels for OFlowNet (see Fig. 2 of the main paper).
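The auto-weighting mechanism can be made concrete with a per-pixel Gaussian negative log-likelihood. The sketch below is a generic heteroscedastic loss in the style of Eqs. 10 & 14 (not their exact form); a predicted log-variance down-weights the residuals of hard pixels while the additive term discourages predicting large variance everywhere:

```python
import numpy as np

def uncertainty_weighted_loss(pred, target, log_var):
    """Per-pixel Gaussian negative log-likelihood (up to constants).

    pred, target: (H, W, 3) scene coordinate maps.
    log_var:      (H, W) predicted per-pixel log-variance.
    Pixels with large predicted variance contribute less to the squared
    error term, so occluded or boundary pixels need not dominate training.
    """
    sq_err = np.sum((pred - target) ** 2, axis=-1)
    return np.mean(sq_err * np.exp(-log_var) + log_var)
```

With `log_var = 0` everywhere this reduces to a plain mean squared error, which makes the effect of the learned weighting easy to ablate.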


downsample rate | receptive field | L7 | L8 | L9 | L10 | L11 | L12   (each entry: kernel, stride)
8 29 1, 2 1, 1 1, 1 1, 1 1, 1 1, 1
8 45 3, 2 1, 1 1, 1 1, 1 1, 1 1, 1
8 61 3, 2 3, 1 1, 1 1, 1 1, 1 1, 1
8 93 3, 2 3, 1 3, 1 3, 1 1, 1 1, 1
8 125 3, 2 3, 1 3, 1 3, 1 3, 1 3, 1
8 157 3, 2 3, 1 5, 1 5, 1 3, 1 3, 1
8 189 3, 2 3, 1 5, 1 5, 1 5, 1 5, 1
8 221 3, 2 3, 1 7, 1 7, 1 5, 1 5, 1


4 93 3, 1 3, 1 5, 1 5, 1 3, 1 3, 1
8 93 3, 2 3, 1 3, 1 3, 1 1, 1 1, 1
16 93 3, 2 3, 1 3, 2 1, 1 1, 1 1, 1
32 93 3, 2 3, 1 3, 2 1, 1 1, 2 1, 1


Table 6: The parameters of the 7th to 12th layers of SCoordNet for different downsample rates and receptive fields. In each entry, the number before the comma is the kernel size and the number after it is the stride.


Relocalization accuracy | Mapping accuracy
receptive field | pose error | pose accuracy | mean | stddev
29 0.025m, 0.87° 87.9% 29.6cm 32.3
45 0.023m, 0.88° 93.4% 24.4cm 29.2
61 0.023m, 0.84° 94.0% 17.3cm 23.1
93 0.024m, 0.91° 92.9% 11.5cm 16.4
125 0.026m, 0.95° 88.3% 11.7cm 16.1
157 0.026m, 0.97° 86.6% 10.3cm 15.0
189 0.030m, 1.07° 81.0% 10.3cm 13.9
221 0.031m, 1.22° 71.8% 9.5cm 12.9


Table 7: The performance of SCoordNet w.r.t. the receptive field. The pose accuracy denotes the percentage of poses with rotation and translation errors less than ° and cm, respectively.

Appendix D Ablation Study on the Receptive Field

The receptive field, denoted as , is an essential factor in Convolutional Neural Network (CNN) design. In our case, it determines how many image observations around a pixel are exposed and used for scene coordinate prediction. Here, we evaluate the impact of on the performance of SCoordNet. The SCoordNet presented in the main paper has . We change the kernel sizes of the -th to -th layers of SCoordNet to adjust the receptive field to , , , , , , , , as shown in Table 6. Due to time limitations, the evaluation only runs on the heads scene of the 7scenes dataset [56]. As reported in Table 7, the mean scene coordinate error grows as the receptive field decreases. We illustrate the CDF of scene coordinate errors in Fig. 9. It is noteworthy that a smaller results in more outlier predictions, which cause a larger mean scene coordinate error. However, a larger mean scene coordinate error does not necessarily lead to lower relocalization accuracy. For example, a receptive field of has worse mapping accuracy than the larger receptive fields, yet it achieves a smaller pose error and better pose accuracy than they do. As we can see from Fig. 9, a smaller receptive field yields a larger portion of precise scene coordinate predictions, especially those with errors smaller than . These predictions are crucial to the accuracy of pose determination, since the outlier predictions are generally filtered out by RANSAC. Nevertheless, when we further reduce from to and then , a drop in relocalization accuracy is observed, because, as decreases, the growing number of outlier predictions deteriorates the robustness of the pose computation. A receptive field between and is thus a good choice that respects the trade-off between precision and robustness.

Figure 9: The cumulative distribution function of scene coordinate errors w.r.t. different receptive fields . A smaller generally yields a denser distribution of errors smaller than cm as well as of errors larger than cm. The additional predictions with errors smaller than cm contribute to the accuracy of pose determination, while the larger number of outlier predictions with errors larger than cm hampers the robustness of relocalization.

Appendix E Ablation Study on the Downsample Rate

Due to the cost of dense predictions over full-resolution images, we predict scene coordinates for images downsized by a factor of in the main paper, following previous works [7]. In this section, we explore how the downsample rate affects the trade-off between accuracy and efficiency of SCoordNet. As reported in Table 6, we change the kernel sizes and strides of the -th to -th layers to adjust the downsample rate to , , and with the same receptive field of . The mean accuracy and the average time taken to localize frames of heads are reported in Table 8. As intuitively expected, a larger downsample rate generally leads to a drop in relocalization and mapping accuracy, along with increased speed. For example, downsample rates and have comparable performance, while downsample rate outperforms by a large margin. On the upside, a larger downsample rate is appealing due to its higher efficiency, which scales roughly quadratically with the downsample rate. For real-time applications, a downsample rate of allows for a low latency of ms per frame, i.e., a frequency of about Hz. (All the experiments of this work run on a machine with an 8-core Intel i7-4770K, 32GB of memory and an NVIDIA GTX 1080 Ti graphics card.)


Relocalization accuracy | Mapping accuracy | Time
downsample rate | pose error | pose accuracy | mean | stddev | Time
4 0.024m, 0.97° 93.6% 11.2cm 17.3 1.34s
8 0.024m, 0.91° 92.9% 11.5cm 16.4 0.20s
16 0.025m, 0.92° 89.1% 16.3cm 20.5 0.11s
32 0.029m, 1.06° 79.6% 20.7cm 20.7 0.034s


Table 8: The performance of SCoordNet w.r.t. the downsample rate. The pose accuracy denotes the percentage of poses with rotation and translation errors less than ° and cm, respectively.

Appendix F Running Time of KFNet Subsystems

Table 9 reports the mean running time per frame (of size ) of the measurement, process and filtering systems and the NIS test, on an NVIDIA GTX 1080 Ti. Since the measurement and process systems are independent and can run in parallel, the total time per frame is 157.18 ms, which means KFNet incurs an extra overhead of only 0.58 ms compared to the one-shot SCoordNet. Besides, our KFNet is about 3 times faster than the state-of-the-art one-shot relocalization system DSAC++ [7].


Modules | Measurement | Process | Filtering | NIS | Total | -
Time (ms) | 156.60 | 51.23 | 0.29 | 0.29 | 157.18 | 486.07


Table 9: Running time of the subsystems of KFNet.

Appendix G Mapping Visualization

As a supplement to Fig. 5 in the main paper, we visualize the point clouds of 7scenes [56], 12scenes [61] and Cambridge [26] predicted by DSAC++ [7] and our KFNet-filtered in Fig. 10. The clean point clouds predicted by KFNet in an end-to-end manner provide an efficient alternative to costly 3D reconstruction from scratch [73, 71, 69, 55, 38, 74, 70, 37, 68] in the relocalization setting, which should be valuable to mapping-based applications such as augmented reality.

Figure 10: Point clouds of all the scenes predicted by DSAC++ [7] and our KFNet-filtered. Zoom in for a better view.


Input Layer Output Output Size
Conv+ReLU, K=3x3, S=1, F=64 conv1a
conv1a Conv+ReLU, K=3x3, S=1, F=64 conv1b
conv1b Conv+ReLU, K=3x3, S=2, F=256 conv2a
conv2a Conv+ReLU, K=3x3, S=1, F=256 conv2b
conv2b Conv+ReLU, K=3x3, S=2, F=512 conv3a
conv3a Conv+ReLU, K=3x3, S=1, F=512 conv3b
conv3b Conv+ReLU, K=3x3, S=2, F=1024 conv4a
conv4a Conv+ReLU, K=3x3, S=1, F=1024 conv4b
conv4b Conv+ReLU, K=3x3, S=1, F=512 conv5
conv5 Conv+ReLU, K=3x3, S=1, F=256 conv6
conv6 Conv+ReLU, K=1x1, S=1, F=128 conv7
conv7 Conv, K=1x1, S=1, F=3
conv7 Conv+Exp, K=1x1, S=1, F=1
Conv+ReLU, K=3x3, S=1, F=16 feat1
feat1 Conv+ReLU, K=3x3, S=2, F=32 feat2
feat2 Conv+ReLU, K=3x3, S=1, F=32 feat3
feat3 Conv+ReLU, K=3x3, S=2, F=64 feat4
feat4 Conv+ReLU, K=3x3, S=1, F=64 feat5
feat5 Conv+ReLU, K=3x3, S=2, F=128 feat6
feat6 Conv, K=3x3, S=1, F=32
Cost Volume Constructor vol1
vol1 Reshape vol2
vol2 Conv+ReLU, K=3x3, S=1, F=32 vol3
vol3 Conv+ReLU, K=3x3, S=2, F=32 vol4
vol4 Conv+ReLU, K=3x3, S=1, F=32 vol5
vol5 Conv+ReLU, K=3x3, S=2, F=64 vol6
vol6 Conv+ReLU, K=3x3, S=1, F=64 vol7
vol7 Conv+ReLU, K=3x3, S=2, F=128 vol8
vol8 Conv+ReLU, K=3x3, S=1, F=128 vol9
vol9 Deconv+ReLU, K=3x3, S=2, F=64 vol10
vol10 vol7 Conv+ReLU, K=3x3, S=1, F=64 vol11
vol11 Deconv+ReLU, K=3x3, S=2, F=32 vol12
vol12 vol5 Conv+ReLU, K=3x3, S=1, F=32 vol13
vol13 Deconv+ReLU, K=3x3, S=2, F=16 vol14
vol14 vol3 Conv+ReLU, K=3x3, S=1, F=16 vol15
vol15 Conv, K=3x3, S=1, F=1 confidence
confidence Spatial Softmax [17] flow1
flow1 Reshape flow2
flow2, Flow-guided Warping [75, 76, 41, 44]


vol9 Reshape fc1
fc1 FC+ReLU, F=64 fc2
fc2 FC+ReLU, F=32 fc3
fc3 FC+Exp, F=1 fc4
fc4 Reshape


Table 10: The full architecture of the proposed SCoordNet and OFlowNet. “” denotes concatenation along the -th dimension.


  • [1] T. W. Anderson (1958) An introduction to multivariate statistical analysis. Vol. 2. Cited by: §3.
  • [2] T. Anderson (1984) An introduction to multivariate statistical analysis.. Cited by: Appendix B.
  • [3] R. Arandjelovic, P. Gronat, A. Torii, T. Pajdla, and J. Sivic (2016) NetVLAD: cnn architecture for weakly supervised place recognition. In CVPR, Cited by: §2, §7.2.1.
  • [4] V. Balntas, S. Li, and V. Prisacariu (2018) Relocnet: continuous metric learning relocalisation using neural nets. In ECCV, Cited by: §1, §2.
  • [5] Y. Bar-Shalom, X. R. Li, and T. Kirubarajan (2004) Estimation with applications to tracking and navigation: theory algorithms and software. Cited by: §6.2, Figure 7, §7.3.
  • [6] E. Brachmann, A. Krull, S. Nowozin, J. Shotton, F. Michel, S. Gumhold, and C. Rother (2017) DSAC-differentiable ransac for camera localization. In CVPR, Cited by: §1, §2, §2, §4, §7.1, §7.2.1.
  • [7] E. Brachmann and C. Rother (2018) Learning less is more-6d camera localization via 3d surface regression. In CVPR, Cited by: Appendix E, Appendix F, Figure 10, Appendix G, §1, §2, §2, §4.1, §4, Table 2, Table 3, Figure 5, §7.1, §7.2.1, §7.2.1, §7.2.2, Table 4.
  • [8] E. Brachmann and C. Rother (2019) Expert sample consensus applied to camera re-localization. In ICCV, Cited by: §2, Table 3, §7.2.1.
  • [9] S. Brahmbhatt, J. Gu, K. Kim, J. Hays, and J. Kautz (2018) Geometry-aware learning of maps for camera localization. In CVPR, Cited by: Table 2, §7.2.1.
  • [10] R. Castle, G. Klein, and D. W. Murray (2008) Video-rate localization in multiple maps for wearable augmented reality. In ISWC, Cited by: §1.
  • [11] S. Choudhary and P. Narayanan (2012) Visibility probability structure from sfm datasets and applications. In ECCV, Cited by: §2.
  • [12] R. Clark, S. Wang, A. Markham, N. Trigoni, and H. Wen (2017) VidLoc: a deep spatio-temporal model for 6-dof video-clip relocalization. In CVPR, Cited by: §2, Table 2, §7.2.1.
  • [13] H. Coskun, F. Achilles, R. S. DiPietro, N. Navab, and F. Tombari (2017) Long short-term memory kalman filters: recurrent neural estimators for pose regularization.. In ICCV, Cited by: §2, §3, Table 2.
  • [14] M. Ding, Z. Wang, J. Sun, J. Shi, and P. Luo (2019) CamNet: coarse-to-fine retrieval for camera re-localization. In ICCV, Cited by: §1, Table 2, §7.2.1.
  • [15] C. B. Do (2007) Gaussian processes. Stanford University. Cited by: §4.2.
  • [16] H. Durrant-Whyte and T. Bailey (2006) Simultaneous localization and mapping: part i. IEEE robotics & automation magazine 13 (2), pp. 99–110. Cited by: §1.
  • [17] C. Finn, X. Y. Tan, Y. Duan, T. Darrell, S. Levine, and P. Abbeel (2015) Learning visual feature spaces for robotic manipulation with deep spatial autoencoders. Cited by: Table 10, §5.1.
  • [18] S. Frühwirth-Schnatter (1995) Bayesian model discrimination and bayes factors for linear gaussian state space models. Journal of the Royal Statistical Society 57 (1), pp. 237–246. Cited by: §7.3.
  • [19] X. Gao, X. Hou, J. Tang, and H. Cheng (2003) Complete solution classification for the perspective-three-point problem. PAMI 25 (8), pp. 930–943. Cited by: §1, §2, §3.
  • [20] M. S. Grewal (2011) Kalman filtering. In International Encyclopedia of Statistical Science, pp. 705–708. Cited by: §3.
  • [21] T. Hyun Kim, M. S. M. Sajjadi, M. Hirsch, and B. Scholkopf (2018) Spatio-temporal transformer network for video restoration. In ECCV, Cited by: §2, §5, §8.
  • [22] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox (2017) FlowNet 2.0: evolution of optical flow estimation with deep networks. In CVPR, Cited by: §2.
  • [23] R. E. Kalman (1960) A new approach to linear filtering and prediction problems. Cited by: 2nd item.
  • [24] A. Kendall and R. Cipolla (2016) Modelling uncertainty in deep learning for camera relocalization. In ICRA, Cited by: 2nd item, §1, §2, §4.2.
  • [25] A. Kendall and R. Cipolla (2017) Geometric loss functions for camera pose regression with deep learning. In CVPR, Cited by: §1, §2.
  • [26] A. Kendall, M. Grimes, and R. Cipolla (2015) Posenet: a convolutional network for real-time 6-dof camera relocalization. In ICCV, Cited by: Appendix G, 3rd item, §1, §1, §2, Figure 3, §7.1.
  • [27] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §7.1.
  • [28] D. Koller, J. Weber, and J. Malik (1994) Robust multiple car tracking with occlusion reasoning. In ECCV, Cited by: §5.2.
  • [29] W. Lai, J. Huang, O. Wang, E. Shechtman, E. Yumer, and M. Yang (2018) Learning blind video temporal consistency. In ECCV, Cited by: §2, §5, §7.3, §8.
  • [30] Z. Laskar, I. Melekhov, S. Kalia, and J. Kannala (2017) Camera relocalization by computing pairwise relative poses using convolutional neural network. In ICCV, Cited by: §1, §2.
  • [31] V. Lepetit, F. Moreno-Noguer, and P. Fua (2009) Epnp: an accurate o (n) solution to the pnp problem. IJCV 81 (2), pp. 155. Cited by: §2.
  • [32] X. Li, J. Ylioinas, J. Verbeek, and J. Kannala (2018) Scene coordinate regression with angle-based reprojection loss for camera relocalization. In arXiv preprint arXiv:1808.04999, Cited by: §2.
  • [33] Y. Li, N. Snavely, and D. P. Huttenlocher (2010) Location recognition using prioritized feature matching. In ECCV, Cited by: §2.
  • [34] L. Liu, H. Li, and Y. Dai (2017) Efficient global 2d-3d matching for camera localization in a large-scale 3d map. In ICCV, Cited by: §2.
  • [35] Y. Lu, C. Lu, and C. Tang (2017) Online video object detection using association lstm. In ICCV, Cited by: §5.1, §8.
  • [36] Y. Luo, J. Ren, Z. Wang, W. Sun, J. Pan, J. Liu, J. Pang, and L. Lin (2018) LSTM pose machines. In CVPR, Cited by: §2.
  • [37] Z. Luo, T. Shen, L. Zhou, J. Zhang, Y. Yao, S. Li, T. Fang, and L. Quan (2019) Contextdesc: local descriptor augmentation with cross-modality context. In CVPR, Cited by: Appendix G.
  • [38] Z. Luo, T. Shen, L. Zhou, S. Zhu, R. Zhang, Y. Yao, T. Fang, and L. Quan (2018) Geodesc: learning local descriptors by integrating geometry constraints. In ECCV, Cited by: Appendix G.
  • [39] D. Massiceti, A. Krull, E. Brachmann, C. Rother, and P. H. Torr (2017) Random forests versus neural networks—what’s best for camera localization?. In ICRA, Cited by: §2.
  • [40] R. J. Meinhold and N. D. Singpurwalla (1983) Understanding the kalman filter. The American Statistician 37 (2), pp. 123–127. Cited by: §1, §2, §3, §6.1.
  • [41] P. Nguyen, T. Liu, G. Prasad, and B. Han (2018) Weakly supervised action localization by sparse temporal pooling network. In CVPR, Cited by: Table 10, §2.
  • [42] D. Nilsson and C. Sminchisescu (2018) Semantic video segmentation by gated recurrent flow propagation. In CVPR, Cited by: §2, §5, §8.
  • [43] D. Novotny, D. Larlus, and A. Vedaldi (2017) Learning 3d object categories by looking around them. In ICCV, Cited by: 2nd item.
  • [44] T. Pfister, J. Charles, and A. Zisserman (2015) Flowing convnets for human pose estimation in videos. In ICCV, Cited by: Table 10, §2, §5, §7.3, Table 5.
  • [45] L. Quan and Z. Lan (1999) Linear n-point camera pose determination. PAMI 21 (8), pp. 774–780. Cited by: §1, §2.
  • [46] N. Radwan, A. Valada, and W. Burgard (2018) Vlocnet++: deep multitask learning for semantic visual localization and odometry. IEEE Robotics and Automation Letters 3 (4). Cited by: 3rd item, §2, Table 2, §7.1, §7.2.1.
  • [47] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, Cited by: §5.1.
  • [48] E. Royer, M. Lhuillier, M. Dhome, and J. Lavest (2007) Monocular vision for mobile robot localization and autonomous navigation. IJCV 74 (3), pp. 237–260. Cited by: §1.
  • [49] S. Saha, G. Varma, and C. Jawahar (2018) Improved visual relocalization by discovering anchor points. In BMVC, Cited by: §2.
  • [50] P. Sarlin, C. Cadena, R. Siegwart, and M. Dymczyk (2019) From coarse to fine: robust hierarchical localization at large scale. In CVPR, Cited by: §2.
  • [51] T. Sattler, B. Leibe, and L. Kobbelt (2011) Fast image-based localization using direct 2d-to-3d matching. In ICCV, Cited by: §1, §2.
  • [52] T. Sattler, B. Leibe, and L. Kobbelt (2017) Efficient & effective prioritized matching for large-scale image-based localization. PAMI (9), pp. 1744–1756. Cited by: §1, §2, Table 2, §7.2.1.
  • [53] T. Sattler, Q. Zhou, M. Pollefeys, and L. Leal-Taixe (2019) Understanding the limitations of cnn-based absolute camera pose regression. In CVPR, Cited by: §1, §1, §2, §2, §7.2.1, §7.2.1.
  • [54] M. Schuster and K. K. Paliwal (1997) Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45 (11), pp. 2673–2681. Cited by: §2.
  • [55] T. Shen, Z. Luo, L. Zhou, R. Zhang, S. Zhu, T. Fang, and L. Quan (2018) Matchable image retrieval by learning from surface reconstruction. In ACCV, Cited by: Appendix G.
  • [56] J. Shotton, B. Glocker, C. Zach, S. Izadi, A. Criminisi, and A. Fitzgibbon (2013) Scene coordinate regression forests for camera relocalization in rgb-d images. In CVPR, Cited by: Appendix D, Appendix G, 1st item, 3rd item, §1, §2, §4, Figure 3, §7.1, §7.2.2.
  • [57] J. Song, L. Wang, L. Van Gool, and O. Hilliges (2017) Thin-slicing network: a deep structured model for pose estimation in videos. In CVPR, Cited by: §2, §5.
  • [58] D. Sun, X. Yang, M. Liu, and J. Kautz (2018) Pwc-net: cnns for optical flow using pyramid, warping, and cost volume. In CVPR, Cited by: §5.1.
  • [59] A. Torii, R. Arandjelovic, J. Sivic, M. Okutomi, and T. Pajdla (2015) 24/7 place recognition by view synthesis. In CVPR, Cited by: §2, §7.2.1.
  • [60] A. Valada, N. Radwan, and W. Burgard (2018) Deep auxiliary learning for visual localization and odometry. In ICRA, Cited by: §2, §7.2.1.
  • [61] J. Valentin, A. Dai, M. Nießner, P. Kohli, P. Torr, S. Izadi, and C. Keskin (2016) Learning to navigate the energy landscape. In 3DV, Cited by: Appendix G, 3rd item, Figure 3, Table 3, §7.1.
  • [62] J. Valentin, M. Nießner, J. Shotton, A. Fitzgibbon, S. Izadi, and P. H. Torr (2015) Exploiting uncertainty in regression forests for accurate camera relocalization. In CVPR, Cited by: §2.
  • [63] S. Valipour, M. Siam, M. Jagersand, and N. Ray (2017) Recurrent fully convolutional networks for video segmentation. In WACV, Cited by: §2, §8.
  • [64] F. Walch, C. Hazirbas, L. Leal-Taixe, T. Sattler, S. Hilsenbeck, and D. Cremers (2017) Image-based localization using lstms for structured feature correlation. In ICCV, Cited by: §1, §2.
  • [65] S. Xingjian, Z. Chen, H. Wang, D. Yeung, W. Wong, and W. Woo (2015) Convolutional lstm network: a machine learning approach for precipitation nowcasting. In NIPS, Cited by: §2, §7.3, Table 5.
  • [66] J. Xu, R. Ranftl, and V. Koltun (2017) Accurate optical flow via direct cost volume processing. In CVPR, Cited by: §5.1, §5.1.
  • [67] F. Xue, X. Wang, Z. Yan, Q. Wang, J. Wang, and H. Zha (2019) Local supports global: deep camera relocalization with sequence enhancement. In ICCV, Cited by: §2, Table 2.
  • [68] J. Zhang, D. Sun, Z. Luo, A. Yao, L. Zhou, T. Shen, Y. Chen, L. Quan, and H. Liao (2019) Learning two-view correspondences and geometry using order-aware network. In ICCV, Cited by: Appendix G.
  • [69] R. Zhang, S. Zhu, T. Shen, L. Zhou, Z. Luo, T. Fang, and L. Quan (2018) Distributed very large scale bundle adjustment by global camera consensus. PAMI. Cited by: Appendix G.
  • [70] L. Zhou, S. Zhu, Z. Luo, T. Shen, R. Zhang, M. Zhen, T. Fang, and L. Quan (2018) Learning and matching multi-view descriptors for registration of point clouds. In ECCV, Cited by: Appendix G.
  • [71] L. Zhou, S. Zhu, T. Shen, J. Wang, T. Fang, and L. Quan (2017) Progressive large scale-invariant image matching in scale space. In ICCV, Cited by: Appendix G.
  • [72] T. Zhou, M. Brown, N. Snavely, and D. G. Lowe (2017) Unsupervised learning of depth and ego-motion from video. In CVPR, Cited by: §5.2.
  • [73] S. Zhu, T. Shen, L. Zhou, R. Zhang, J. Wang, T. Fang, and L. Quan (2017) Parallel structure from motion from local increment to global averaging. arXiv preprint arXiv:1702.08601. Cited by: Appendix G.
  • [74] S. Zhu, R. Zhang, L. Zhou, T. Shen, T. Fang, P. Tan, and L. Quan (2018) Very large-scale global sfm by distributed motion averaging. In CVPR, Cited by: Appendix G.
  • [75] X. Zhu, Y. Wang, J. Dai, L. Yuan, and Y. Wei (2017) Flow-guided feature aggregation for video object detection. In ICCV, Cited by: Table 10, §2, §7.3, Table 5.
  • [76] Z. Zhu, W. Wu, W. Zou, and J. Yan (2018) End-to-end flow correlation tracking with spatial-temporal attention. In CVPR, Cited by: Table 10, §2, §7.3, §8.
  • [77] D. Zou and P. Tan (2013) Coslam: collaborative visual slam in dynamic environments. PAMI 35 (2). Cited by: §5.2.