VOLDOR-SLAM: For the Times When Feature-Based or Direct Methods Are Not Good Enough

04/14/2021 ∙ by Zhixiang Min, et al. ∙ Stevens Institute of Technology

We present a dense-indirect SLAM system using external dense optical flows as input. We extend the recent probabilistic visual odometry model VOLDOR [Min et al. CVPR'20] by incorporating geometric priors to 1) robustly bootstrap estimation from monocular capture, while 2) seamlessly supporting stereo and/or RGB-D input imagery. Our customized back-end tightly couples our intermediate geometric estimates with an adaptive priority scheme managing the connectivity of an incremental pose graph. We leverage recent advances in dense optical flow methods to achieve accurate and robust camera pose estimates, while constructing fine-grained, globally consistent dense environmental maps. Our open-source implementation [https://github.com/htkseason/VOLDOR] operates online at around 15 FPS on a single GTX1080Ti GPU.




I Introduction

Simultaneous localization and mapping (SLAM) addresses the incremental construction and instantaneous establishment of a global geometric reference using measurements from a dynamic observer as input. Applications benefiting from such capabilities include robotics, augmented/virtual reality, and autonomous driving. For visual SLAM, the choice of geometric representation and estimation framework must balance requirements for online operation (e.g. latency and computational burden) against task-relevant performance metrics (e.g. robustness, accuracy, and completeness). Recent dense-indirect methods challenge the standard dichotomy between feature-based and direct SLAM methods.

(a) Result on TartanAir ’westerndesert’ sequence with stereo input. (b) Result on KITTI 00, 05 sequence with stereo input.

Fig. 1: Reconstructed scene models. 3D point clouds are aggregated from keyframe depth maps.

Feature-based methods determine multi-view relationships among input video frames through the geometric analysis of sparse key-point correspondences. Popular feature-based SLAM systems [20, 17] have a visual odometry (VO) front-end solving feature correspondence/triangulation and PnP instances to initialize camera pose estimates, which are subsequently refined (e.g. bundle-adjusted) by back-end and mapping modules. Given that sparse feature analysis can operate in real time, the trade-off between online operation and accuracy/robustness achieved by these methods is mainly determined by the scope of the analysis performed by the back-end. However, the accuracy of front-end modules may be affected by a lack of texture content, repetitive structures, and/or degenerate geometric configurations, compromising the efficacy and efficiency of back-end modules.

Direct methods jointly estimate photometrically consistent scene structure and camera poses from image content, obviating key-point correspondence. Dense direct methods [18] are typically sensitive to illumination changes, and need a sufficient number of images to triangulate structure at poorly textured regions. Sparse or semi-dense direct methods [6, 5] are less sensitive to illumination changes, but may still require accurate photometric calibration for full performance.

Recent dense-indirect (DI) formulations for VO [15] are a promising alternative to both their sparse and direct counterparts. The DI approach conditions local 3D geometry and camera motion estimates on their consistency w.r.t. (observed) dense optical flow (OF) estimates. Such frameworks actively leverage ongoing improvements in accuracy and robustness of learning-based OF estimators, while yielding high-quality dense geometry estimates as a byproduct.

To extend the recent DI inference model VOLDOR [15] into a SLAM pipeline, we develop integrative modules customized to the intermediate geometric estimates produced by its inference process. We address the efficient estimation, management and refinement of global-scope 3D data associations within the context of DI geometric estimates. Our contributions are: 1) extending VOLDOR’s inference framework to both generate and input explicit geometric priors for improved estimation accuracy across monocular, stereo and RGB-D image sources, 2) a robust point-to-plane inverse-depth alignment formulation for source-agnostic frame registration, and 3) a priority linking framework for incremental real-time pose graph management.

Ii Related Work

Indirect SLAM. Classic approaches like MonoSLAM [3] rely on EKF-based frameworks for fusing multiple observations to recover scene geometry. PTAM [12] simultaneously runs a front-end for VO and a back-end refining the estimation through bundle adjustment. ORB-SLAM [16, 17] introduced a versatile SLAM system with a more powerful back-end featuring global re-localization and loop closing, enabling large-environment applications. More recently, ORB-SLAM3 [1] further built a multi-map system to survive long periods of poor visual information.

Direct SLAM. DTAM [18] first introduced a GPU-based real-time dense modelling and tracking approach for small workspaces. It estimates a dense geometry model through a photometric term combined with a regularizer, while estimating camera motion by finding a warping of the model that minimizes the photometric error w.r.t. the video frames. LSD-SLAM [6] switched to a keyframe-based semi-dense model that enables large-scale real-time operation on a CPU. DSO [5] builds sparse models, jointly optimizes all parameters within a probabilistic model, and further integrates full photometric calibration.

Direct RGB-D SLAM. RGB-D SLAM [10] forgoes geometry estimation by using RGB-D input, estimating camera poses by minimizing both photometric and geometric error, and managing keyframes within a pose graph. ElasticFusion [29] uses dense frame-to-model camera tracking, windowed surfel-based fusion, and frequent model refinement through non-rigid surface deformation. BundleFusion [2] tracks camera poses using sparse-to-dense optimization and a local-to-global strategy for global mapping consistency. Recently, BAD-SLAM [22] proposed a fast direct bundle adjustment for poses, surfels and intrinsics.

Dense-Indirect VO. Valgaerts et al. [26] addressed robust recovery of the fundamental matrix from a single dense OF field. Ranftl et al. [21] further addressed motion segmentation and the recovery of depth maps. While these works highlighted the potential of DI modeling, they focused on the two-view geometry problem. VOLDOR [15] introduced the DI paradigm to the multi-view domain by modelling the VO problem within a probabilistic framework w.r.t. a log-logistic residual model on OF batches, and developed a generalized-EM framework for joint real-time inference of camera motion, 3D structure and track reliability. While highly accurate, its geometric analysis was strictly local in scope.

Deep learning optical flow. Current deep learning works on OF estimation outperform traditional methods in terms of accuracy and robustness under challenging conditions such as texture-less regions, motion blur and large occlusions. FlowNet/FlowNet2 [4, 8] introduced an encoder-decoder CNN for OF estimation. PWC-Net [24] integrates spatial pyramids, warping and cost volumes into deep OF estimation, improving performance and generalization. Most recently, MaskFlowNet [33] proposed an asymmetric occlusion-aware feature matching module and achieved current state-of-the-art performance.

Fig. 2: VOLDOR-SLAM architecture. 1) Input optical flow is externally computed from video, along with optional geometric priors. 2) The dense-indirect VO front-end estimates scene structure and local camera poses over a sliding window. 3) A pose graph enforces global consistency among all pairwise pose estimates. 4) The set of edges to include in our pose graph is prioritized based on keyframe geometric analysis aimed at both identifying loop closures and reinforcing local connectivity.

III Method

System Architecture. We build upon the recently proposed dense-indirect VO method of Min et al. [15], which addressed the joint probabilistic estimation of camera motion, 3D structure and track reliability from a set of input dense OF estimates. As is standard practice in the SLAM literature [17, 5], our proposed VOLDOR-SLAM has a VO front-end and a mapping back-end. The front-end operates over small sequential batches (i.e. a temporal sliding window) of dense OF estimates. We adaptively determine keyframe selection and the stride between subsequent batches based on visibility-coverage metrics designed for our dense SLAM system. To enforce consistency within a larger geometric scope while enabling online operation, the back-end adaptively prioritizes the analysis and establishment of pose constraints between keyframe pairs, balancing the search for loop closure connections against the reinforcement of local keyframe connectivity. VOLDOR-SLAM also implements a loop closure scheme based on an image retrieval and geometric verification module [7]. Finally, all pairwise camera pose constraints are managed within a Sim(3)-based pose graph [23]. Module dependencies and data flow of our system are illustrated in Fig. 2. The resulting dense SLAM implementation enables online operation at around 15 FPS on a single GTX1080Ti GPU.

III-A Front-End

VOLDOR: An extended VO model. Our front-end extends and improves upon the VOLDOR framework [15], which formulates monocular VO from observed OF input batches as a generalized-EM problem where the three hidden variables are camera pose, depth, and track reliability. Originally, [15] proposed the probabilistic graphical model

    T*, θ*, W* = argmax_{T, θ, W} P(X | T, θ, W),    (1)

where X denotes the observed OF batch, T the set of camera poses, θ the depth map of the first frame in the batch, and W the set of rigidness maps used to down-weight OF pixels belonging to occlusions, moving objects or outliers. The likelihood function in [15] defines residuals as the end-point-error (EPE) between the observed optical flow and the rigid flow computed from the parameters. This residual is assumed to follow an empirically validated log-logistic (i.e. Fisk) distribution model. The ensuing MAP problem is solved using a generalized-EM framework alternately updating each of the parameters. Whereas [15] bootstraps each of the hidden variables deterministically, VOLDOR accommodates explicit priors, extending Eq. (1) to yield

    T*, θ*, W*, Ŵ* = argmax_{T, θ, W, Ŵ} P(X, θ̂ | T, θ, W, Ŵ),    (2)

where the set of explicit depth map priors is denoted by θ̂, their associated (fixed) relative camera poses by T̂, and their pixel-level rigidness maps by Ŵ. Henceforth, unless otherwise stated, our VOLDOR extensions strictly follow the framework described in [15] and we refer the reader to that publication for further details.

Geometric Priors. While [15] estimates depth maps exclusively from monocular OF fields, VOLDOR extends the probabilistic model to optionally account for input depth map priors. Limiting our assumptions (for now) on the source of these priors to knowing their first and second moments, we model a generic likelihood function of the depth value θ_j at pixel j as a Gaussian-Uniform mixture

    P(θ_j | θ̂) = N(θ_j; ẑ_j, σ(ẑ_j)) + U(u(ẑ_j)),    (3)

where ẑ_j denotes the z-component of the camera-space 3D coordinates of a prior depth value transferred across frames having relative motion T̂, j denotes the pixel of the reprojection of the depth estimate across camera frames, while σ(·) and u(·) are functions adjusting the variance of the Gaussian distribution and the density of the uniform distribution w.r.t. the observed depth prior, which are set in proportion to the inverse depth value. As in [15], we apply the likelihood function to a maximum-inlier estimation framework to define the energy function for prior depths

    E_prior(θ_j) = Σ_k Ŵ_{k,j} · P(θ_j | θ̂_k),    (4)

where P(θ_j | θ̂_k) is the probability density of Eq. (3) under the k-th depth prior. Finally, aggregating with the original energy function E_flow conditioned on input OF described in [15], optimal depth values are selected based on the criterion

    θ_j* = argmax_θ ( E_flow(θ_j) + E_prior(θ_j) ).    (5)

In practice each VO batch will typically have two prior depth maps: one generated during the analysis of the previous batch and another from the most recent available keyframe. Whenever both instances point to the same depth map, we use that single depth prior. When stereo capture (or RGB-D imagery) is available, we add the corresponding depth map as a prior. However, in that case (i.e. when depth priors come from a known external source), VOLDOR replaces Eq. (3) by an empirical residual model as done in [15].
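As a concrete illustration, the Gaussian-Uniform mixture above can be sketched numerically. This is a minimal sketch under stated assumptions: the mixture weight `lam` and the specific forms of σ(·) and u(·) are hypothetical, since the paper only states that they scale in proportion to the inverse depth value.

```python
import math

def depth_prior_likelihood(z, z_prior, lam=0.8, sigma0=0.02, u0=0.01):
    """Gaussian-Uniform mixture likelihood of a depth hypothesis z given a
    prior depth z_prior. Hypothetical parameterization: the Gaussian spread
    and the uniform (outlier) density both grow with the prior depth,
    mirroring the paper's inverse-depth-proportional weighting."""
    sigma = sigma0 * z_prior                      # spread scales with depth
    gauss = math.exp(-0.5 * ((z - z_prior) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
    uniform = u0 / z_prior                        # depth-dependent outlier density
    return lam * gauss + (1.0 - lam) * uniform
```

A hypothesis matching the prior scores much higher than an outlying one, yet the uniform component keeps the likelihood of outliers strictly positive, which is what makes the maximum-inlier selection robust.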

Depth Map Confidence Estimation. For each depth map θ generated through the VOLDOR framework, we associate a confidence map Q, which we define as the pixel-wise average of the previously estimated rigidness of the optical flows and depth priors used for its estimation:

    Q_j = ( Σ_t W_{t,j} + Σ_k Ŵ_{k,j} ) / ( |W| + |Ŵ| ).    (6)

When estimated depth maps are subsequently used as depth priors, the value Q_j is used as a prior for the rigidness Ŵ in Eq. (3), such that this confidence-derived weight replaces Ŵ_{k,j} in the depth update step. Similarly, Q is also used in the back-end keyframe alignment as will be described in Eq. (12).
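The pixel-wise averaging above is straightforward; a minimal sketch, assuming each rigidness map is a flat list of per-pixel weights in [0, 1] covering both the optical-flow and depth-prior observations:

```python
def depth_confidence(rigidness_maps):
    """Pixel-wise mean of per-observation rigidness maps (Eq. 6 analogue).
    Each map in rigidness_maps is a flat list of weights, one per pixel;
    the result is the per-pixel confidence used to weight later estimates."""
    n = len(rigidness_maps)
    n_pixels = len(rigidness_maps[0])
    return [sum(m[p] for m in rigidness_maps) / n for p in range(n_pixels)]
```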

Uncertainty-aware Pose Estimation. The probabilistic camera pose inference in [15] assumes the other parameters fixed and approximates the camera pose posterior distribution by Monte-Carlo sampling of P3P instances

    P(T_t | X, θ, W) ≈ (1/N) Σ_n q(T_t^(n)),    (7)

where t denotes the time stamp, N the total number of samples, n the sample index, and q is a variational distribution approximating the intractable sample posterior. Further, let q(T_t^(n)) be a normal distribution N(T_t^(n); T̄_t^(n), Σ), where T̄_t^(n) is solved using the AP3P solver [9], while the covariance matrix Σ is a fixed hyper-parameter. The camera pose, defined by the mode of the approximated posterior, is found using mean-shift with a Gaussian kernel in the Lie algebra. To accommodate subsequent back-end modules, VOLDOR estimates each camera pose uncertainty Σ_t by using the mean-shift result as initialization and iteratively fitting a Gaussian distribution to the samples. We discard samples outside 3-sigma for robustness and force all eigenvalues of Σ_t to be larger than an epsilon to ensure numerical stability.
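The robust Gaussian fitting step can be sketched in one dimension. This is a scalar sketch only: the paper fits a full covariance in the Lie algebra and floors its eigenvalues, whereas here a single variance is floored at `eps`; the initialization (`init_mean`, `init_sigma`) stands in for the mean-shift result.

```python
def fit_robust_gaussian(samples, init_mean, init_sigma, iters=10, eps=1e-8):
    """Iteratively fit a 1-D Gaussian to pose-parameter samples, discarding
    samples outside 3 sigma of the current estimate each round, and flooring
    the variance at eps for numerical stability."""
    mean, var = init_mean, max(eps, init_sigma ** 2)
    for _ in range(iters):
        kept = [s for s in samples if abs(s - mean) <= 3.0 * var ** 0.5]
        if not kept:
            break
        mean = sum(kept) / len(kept)
        var = max(eps, sum((s - mean) ** 2 for s in kept) / len(kept))
    return mean, var
```

Starting from a tight initialization, a gross outlier is rejected in the first round and the fitted Gaussian tracks the inlier cluster.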

Hierarchical Propagation Scheme. For estimating depth and rigidness maps, [15] adopts a sampling and propagation scheme [34] based on a generalized-EM framework. The 2D image field is broken into alternately directed 1D chains. In the M-step, depth values are randomly sampled and propagated through each chain. In the E-step, hidden Markov chain smoothing is imposed on the chains of rigidness maps using the forward-backward algorithm. Such a sequential global depth propagation scheme imposes a performance bottleneck for GPU implementations, as it is not massively parallelizable. Hence, VOLDOR proposes a hierarchical propagation scheme, where a global propagation is done on a depth map of reduced scale to ensure the convergence rate, and local propagations are done in local windows at full resolution to retain fine details. Per Fig. 3, VOLDOR achieves a 3× speed-up without loss in depth map quality w.r.t. [15].

Fig. 3: Depth propagation workflow. Top right shows runtime (ms) vs depth map quality (PSNR) trade-offs of our hierarchical propagation scheme across varying scaling factors.

VO Stride and Keyframe Selection. Once VOLDOR runs on an input batch and estimates local 3D geometries, we determine the stride to the next batch and select keyframes using visibility-coverage (VC) metrics. The VC metrics are defined over a pair of frames (A, B) with known poses, where only frame A has a depth map. We define the visibility score S_V as the proportion of depth pixels in A whose projection onto B falls inside B's image frame. Conversely, we define the coverage score S_C as the proportion of the image area in B covered by the depth pixels projected from A. We define the VC score as the harmonic mean of visibility and coverage

    S_VC = 2 · S_V · S_C / (S_V + S_C).    (8)

The VC score is estimated between the reference frame and all other frames within the batch. The first frame with VC score below a preset threshold is selected as the next batch's reference. Finally, if the VC score between the current batch's reference frame and the latest keyframe is below a second threshold, we register the current batch's reference frame as a keyframe.
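The stride rule above reduces to a harmonic mean plus a first-below-threshold scan. A minimal sketch; the helper `select_stride` and its threshold are illustrative names, since the paper does not publish the threshold values:

```python
def vc_score(visibility, coverage):
    """Harmonic mean of visibility and coverage scores (both in [0, 1])."""
    if visibility + coverage == 0.0:
        return 0.0
    return 2.0 * visibility * coverage / (visibility + coverage)

def select_stride(vc_scores, threshold):
    """Index of the first frame in the batch whose VC score w.r.t. the
    reference frame drops below the threshold (hypothetical helper mirroring
    the stride-selection rule); falls back to the last frame otherwise."""
    for idx, score in enumerate(vc_scores):
        if score < threshold:
            return idx
    return len(vc_scores) - 1
```

The harmonic mean penalizes imbalance: a frame fully visible but covering little of the other image still scores low, which is the desired behavior for overlap-based keyframe selection.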

III-B Back-End

The front-end VO only establishes local pairwise constraints between successive frames within an input batch. Conversely, the back-end manages spatial relationships of larger scope by establishing spatial constraints between keyframes and detecting loop closure instances.

Keyframe Alignment. We establish keyframe alignment (KA) constraints by 3D registration of keyframe depth maps. That is, for two depth maps θ_a and θ_b, we estimate both their relative motion T and scale factor s. Similar to [19], we formulate the depth image alignment objective in terms of the difference of inverse depth values

    E_inv = Σ_j Q_j · ρ( θ_b^{-1}(j') − [s·T(θ_a)]^{-1}(j') ),    (9)

where ρ is the Cauchy kernel function and Q_j is the confidence associated with pixel j defined in Eq. (6). While effective, the convergence rate of Eq. (9) impeded online operation. Toward this end, we generalize the point-to-plane error to account for an inverse depth parameterization:

    E_geo = Σ_j Q_j · ρ( ⟨ n_{j'}, p_{j'} − p_j ⟩ / z_j ),    (10)

where ⟨·,·⟩ denotes the vector inner product, while n_{j'} denotes the normal vector at pixel j' of depth map θ_b. Such an objective implicitly enforces spatial regularization, given that n_{j'} is computed from the local depth estimates, and significantly increases convergence speed and quality. Geometrically, scaling pairwise point-to-plane distances among depth estimates inversely proportional to their depths prioritizes geometry nearby the reference frame. Conversely, weighting each kernel output by our confidence measure mitigates depth outliers. Optionally, our keyframe alignment process can benefit from the input intensity images by adding an energy function for photometric consistency:

    E_pho = Σ_j ρ( a · I_a(j) + b − I_b(j') ),    (11)

where a, b are the pixel brightness affine transformation parameters to be estimated, while I(j) yields the intensity value of pixel j in image I. The photometric term is optional, since we favor keeping our system an indirect method and rely on external OF modules to handle common intensity-based imaging aberrations such as exposure and/or white-balance variations, non-Lambertian reflections, etc. Also, VOLDOR estimates depth maps from small batches of successive imagery, while temporally distant observations are subject to arbitrary changes in appearance (i.e. global illumination, shadows, specularities) even if their underlying geometry is consistent. Moreover, our depth inference framework targets the estimation of static geometry observed throughout a multi-frame input batch. Empirically, we observe it consistently "looks through" dynamic or small foreground structures observed in the batch's (first) reference frame and estimates the background's depth. For such cases, the photometric term may reduce the overall accuracy. Finally, the criterion for keyframe alignment is

    E_KA = E_geo + λ · E_pho,    (12)

where λ balances geometric and photometric errors, and the photometric term is omitted when intensity images are absent. The relative depth scaling factor s is fixed to 1 for depth input with absolute scale (e.g. stereo or RGB-D). Once Eq. (12) is optimized using Levenberg-Marquardt, the covariance of the estimate is linearly approximated through its Jacobian matrix as Σ ≈ (JᵀJ)^{-1}.
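The per-pixel term of the inverse-depth point-to-plane objective can be sketched as follows. This is a sketch under stated assumptions: the residual is scaled by the reference point's z-coordinate (its depth), and the exact scaling and kernel parameter in the released implementation may differ.

```python
import math

def cauchy(r, c=1.0):
    """Cauchy robust kernel: grows logarithmically, bounding outlier influence."""
    return 0.5 * c * c * math.log(1.0 + (r / c) ** 2)

def point_to_plane_inv_depth_cost(p_ref, n_ref, p_src, conf, c=1.0):
    """Confidence-weighted robust cost of one point-to-plane residual, scaled
    by the reference point's inverse depth (z-component). A sketch of the
    per-pixel term behind the geometric KA objective; p_ref/p_src are 3D
    points, n_ref the surface normal at p_ref, conf the Eq.-(6) confidence."""
    plane_dist = sum(n * (a - b) for n, a, b in zip(n_ref, p_src, p_ref))
    r = plane_dist / p_ref[2]          # inverse-depth scaling favors nearby geometry
    return conf * cauchy(r, c)
```

Dividing the plane distance by the depth makes a 1 cm error at 2 m cost far more than the same error at 20 m, which matches the paper's stated preference for geometry near the reference frame.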
Priority Linking. The aggregation of all pairwise geometric constraints among keyframes yields a fully connected graph (i.e. quadratic complexity). To enable online operation, we avoid exhaustive exploration of these relationships by devising an adaptive prioritization mechanism aimed at: 1) linking keyframes with sufficient overlap for robust and accurate depth map 3D alignment, and 2) balancing exploration of potential loop closures against exploitation of local connectivity. While loop closure benefits are self-evident, strengthening local connectivity fosters pose corrections to the current keyframe, which may be used as geometric priors in the VOLDOR front-end, contributing to more accurate VO estimates. We construct a priority matrix P for pairwise keyframe linkage, based on the observation that temporal proximity serves as a low-cost proxy for VC scores. That is, the distance between keyframe indices roughly encodes co-visibility under the assumption of smooth local motion. Accordingly, we systematically update the entries in P based on two linkage types (i.e. realtime linkage and loop closure linkage). First, for realtime linkages, after the current keyframe with index k is created, we update the priority of each arbitrary keyframe pair with indices (i, j) as

    P^rt_{i,j} = exp( −α·|i − j| ) · exp( −β·(k − max(i, j)) ),    (13)

where the parameter α controls the priority w.r.t. co-visibility (proximal keyframes correlate to larger shared overlaps), while β controls the priority w.r.t. the timeliness of the considered keyframe pair (we prioritize recent keyframes over older ones). Second, to foster the inclusion of loop closure linkages, we run an image retrieval engine based on the DBoW3 [7] library, using the current keyframe k as a query, to obtain a candidate keyframe c. Whenever the pair (k, c) passes a geometric verification test, we compute the linking priorities in the proximity of the detected overlap as

    P^lc_{i,j} = exp( −γ·(|i − k| + |j − c|) ),    (14)

where γ controls the scope of priority propagation surrounding the loop closure pair. Given M total detected loop closure pairs, the final linking priority matrix is

    P_{i,j} = max( P^rt_{i,j}, max_{m=1..M} P^{lc,m}_{i,j} ).    (15)

The largest unlinked entry in P exceeding a preset threshold is processed by the keyframe alignment module to determine an accurate relative motion between the associated depth maps. Finally, each selected pairwise motion constraint is incorporated into a pose graph framework [23], which enforces both globally consistent correction propagation (i.e. from loop closure linkages) as well as local refinements (i.e. from realtime linkages).

| Sequence | Stereo: ORB-SLAM3 (t_rel / r_rel) | VOLDOR (t_rel / r_rel) | VOLDOR-SLAM (t_rel / r_rel) | Monocular (Hard Only): DSO (compl. / r_rel) | ORB-SLAM3 (compl. / r_rel) | VOLDOR-SLAM (compl. / r_rel) |
|---|---|---|---|---|---|---|
| abandonedfactory | 0.0404 / 0.0914 | 0.0345 / 0.1095 | 0.0269 / 0.0924 | 0.969 / 0.2204 | 0.970 / 0.2624 | 0.983 / 0.3210 |
| abandonedfactory_night | 0.1633 / 0.0765 | 0.0227 / 0.0800 | 0.0196 / 0.0702 | 0.932 / 0.4907 | 0.916 / 1.4233 | 0.964 / 0.2229 |
| amusement | 0.0129 / 0.0654 | 0.0125 / 0.0315 | 0.0118 / 0.0265 | 0.640 / - | 0.505 / - | 1.000 / 0.1188 |
| carwelding | 0.0060 / 0.0269 | 0.0124 / 0.0518 | 0.0131 / 0.0535 | 0.995 / 0.0850 | 0.435 / - | 0.997 / 0.3203 |
| ocean | 0.0664 / 0.4379 | 0.0376 / 0.1340 | 0.0335 / 0.1311 | 0.908 / 0.7564 | 0.898 / 1.7069 | 0.958 / 0.4775 |
| office | 0.0035 / 0.0323 | 0.0088 / 0.0740 | 0.0071 / 0.0624 | 0.907 / 0.2872 | 0.950 / 4.4242 | 0.853 / 0.1715 |
| japanesealley | 0.0193 / 0.0773 | 0.0105 / 0.0369 | 0.0102 / 0.0413 | 0.964 / 0.1819 | 0.959 / 0.1291 | 1.000 / 0.1150 |
| seasonsforest | 0.1160 / 0.3102 | 0.0293 / 0.1094 | 0.0196 / 0.0733 | 0.330 / - | 0.534 / - | 1.000 / 0.1525 |
| westerndesert | 0.0906 / 0.3311 | 0.0177 / 0.0898 | 0.0149 / 0.0786 | 0.918 / 0.3933 | 0.855 / 1.4918 | 0.957 / 0.2145 |
| Avg. | 0.0576 / 0.1610 | 0.0207 / 0.0797 | 0.0174 / 0.0699 | 0.840 / 0.3449 | 0.780 / 1.5729 | 0.968 / 0.2348 |
TABLE I: Camera pose accuracy evaluated on the TartanAir dataset. Translation (t_rel) and rotation (r_rel) error metrics are based on normalized relative pose error as in KITTI [11], but averaged over all possible sub-sequences of a fixed set of lengths in meters. The completeness metric (compl.) measures the fraction of successfully registered frames, which reveals a method's robustness in each environment. If the completeness of a certain sequence falls below 0.8, we do not estimate rotation error (marked "-").
Fig. 4: Depth map quality evaluated on TartanAir dataset. We threshold over depth confidence value to evaluate pixel density, inlier rate and EPE across different confidence levels.

(a) Results on TartanAir ’ocean’, ’oldtown’ and ’carwelding’ sequence with stereo input.
        (b) Results on KITTI and TUM RGB-D datasets with different types of input.

Fig. 5: Qualitative Results. Top: Results across diverse environments show VOLDOR-SLAM's robustness to capture variability. Bottom: Zoomed-in regions and global views illustrate our 3D mapping density, consistency and level of detail.

IV Experiments

Experimental Setup. We benchmarked on the photo-realistic synthetic dataset TartanAir [28], which provides ground-truth camera poses, depth maps and optical flows. The dataset features challenging environments in the presence of moving objects, changing light, and various weather conditions, with diverse viewpoints and motion patterns that are difficult to obtain in the real world. We tested on the "hard" track of 9 sequences covering diverse indoor and outdoor environments. Our off-the-shelf OF estimator MaskFlowNet [33] was not trained or fine-tuned on the TartanAir dataset. For stereo input, we use the x-component of the OF estimated from the left to the right camera as the disparity map, enabling us to use the same empirical residual model as described in [15] for both stereo and optical flow. For all sequences, we use the photometric consistency term of Eq. (11). We compared VOLDOR-SLAM (full pipeline) and VOLDOR (VO-only) against ORB-SLAM3 [1] (stereo and monocular) and DSO [5] (monocular).
Stereo Results. Per Table I (left), ORB-SLAM3 excels in stable environments with sufficient texture such as 'office' and 'carwelding', but its accuracy drops dramatically for sequences exhibiting poorly-textured regions and rapidly changing viewpoints (i.e. trees that rapidly move closer) such as 'ocean' and 'seasonsforest'. Conversely, both our variants perform stably across various environments, with considerable improvement from our full pipeline, which has the best overall pose accuracy scores.
Monocular Results. For monocular input, the challenging dataset caused our baselines to frequently lose tracking, precluding a fair comparison over translation error, which requires scale correction due to scale ambiguity. That is, whenever a system loses tracking, a new map with arbitrary scale is generated. If each sub-sequence is scaled independently, a system that loses tracking more often benefits from more frequent, and thus more accurate, scale corrections. We therefore replace translation error with the completeness metric, similar to the success rate proposed in [28]. The results in Table I (right) show VOLDOR-SLAM robustly handles different environments with high completeness compared to our baselines, while achieving the best overall rotation accuracy.
Depth Evaluation. Fig. 4 conveys our depth map quality results. As baselines, we use GA-Net [32] and MaskFlowNet [33] (the stereo input of our stereo-based VO). The accuracy metrics used are inlier rate and EPE. A depth value is considered an inlier when its corresponding disparity EPE is below a fixed pixel threshold or a fixed fraction of the ground-truth disparity. Results show that our subset of high-confidence pixels greatly outperforms existing baselines, with a large fraction of pixels attaining high confidence. Our framework provides only slightly more accurate depth maps using stereo instead of monocular input. Since VOLDOR deliberately discounts observed dynamic content, we deem the reported accuracy values an under-estimate for static environments.
Qualitative Results. Representative reconstructions from TartanAir, KITTI and TUM datasets can be found in Fig.5.

V Conclusions

VOLDOR-SLAM demonstrates the potential of dense-indirect (DI) estimation frameworks for the geometric analysis of large-scale and unstructured environments. The modular nature of the DI approach allows for the exploration of novel formulations and ancillary tools. In particular, recently proposed learning approaches jointly estimating depth and poses [36, 31, 27, 30], and multi-way 3D registrations [14, 13, 35, 25], are highly related to our work and offer promising research paths toward tighter couplings between DI estimation and global map management modules.


  • [1] C. Campos, R. Elvira, J. J. G. Rodríguez, J. M. Montiel, and J. D. Tardós (2020) ORB-slam3: an accurate open-source library for visual, visual-inertial and multi-map slam. arXiv preprint arXiv:2007.11898. Cited by: §II, §IV.
  • [2] A. Dai, M. Nießner, M. Zollhöfer, S. Izadi, and C. Theobalt (2017) Bundlefusion: real-time globally consistent 3d reconstruction using on-the-fly surface reintegration. ACM Transactions on Graphics (ToG) 36 (4), pp. 1. Cited by: §II.
  • [3] A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse (2007) MonoSLAM: real-time single camera slam. IEEE Transactions on Pattern Analysis & Machine Intelligence (6), pp. 1052–1067. Cited by: §II.
  • [4] A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. Van Der Smagt, D. Cremers, and T. Brox (2015) FlowNet: learning optical flow with convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2758–2766. Cited by: §II.
  • [5] J. Engel, V. Koltun, and D. Cremers (2017) Direct sparse odometry. IEEE transactions on pattern analysis and machine intelligence 40 (3), pp. 611–625. Cited by: §I, §II, §III, §IV.
  • [6] J. Engel, T. Schöps, and D. Cremers (2014) LSD-slam: large-scale direct monocular slam. In European conference on computer vision, pp. 834–849. Cited by: §I, §II.
  • [7] D. Gálvez-López and J. D. Tardós (2012-10) Bags of binary words for fast place recognition in image sequences. IEEE Transactions on Robotics 28 (5), pp. 1188–1197. External Links: Document, ISSN 1552-3098 Cited by: §III-B, §III.
  • [8] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox (2017) FlowNet 2.0: evolution of optical flow estimation with deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2462–2470. Cited by: §II.
  • [9] T. Ke and S. I. Roumeliotis (2017) An efficient algebraic solution to the perspective-three-point problem. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7225–7233. Cited by: §III-A.
  • [10] C. Kerl, J. Sturm, and D. Cremers (2013) Dense visual slam for rgb-d cameras. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2100–2106. Cited by: §II.
  • [11] B. Kitt, A. Geiger, and H. Lategahn (2010) Visual odometry based on stereo image sequences with ransac-based outlier rejection scheme. In 2010 ieee intelligent vehicles symposium, pp. 486–492. Cited by: TABLE I.
  • [12] G. Klein and D. Murray (2007) Parallel tracking and mapping for small ar workspaces. In Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 1–10. Cited by: §II.
  • [13] Y. Li, G. Wang, X. Ji, Y. Xiang, and D. Fox (2018) Deepim: deep iterative matching for 6d pose estimation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 683–698. Cited by: §V.
  • [14] Z. Lv, F. Dellaert, J. M. Rehg, and A. Geiger (2019) Taking a deeper look at the inverse compositional algorithm. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4581–4590. Cited by: §V.
  • [15] Z. Min, Y. Yang, and E. Dunn (2020) VOLDOR: visual odometry from log-logistic dense optical flow residuals. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4898–4909. Cited by: VOLDORSLAM: For the times when feature-based or direct methods are not good enough , §I, §I, §II, §III-A, §III-A, §III-A, §III-A, §III, §IV.
  • [16] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos (2015) ORB-slam: a versatile and accurate monocular slam system. IEEE transactions on robotics 31 (5), pp. 1147–1163. Cited by: §II.
  • [17] R. Mur-Artal and J. D. Tardós (2017) Orb-slam2: an open-source slam system for monocular, stereo, and rgb-d cameras. IEEE Transactions on Robotics 33 (5), pp. 1255–1262. Cited by: §I, §II, §III.
  • [18] R. A. Newcombe, S. J. Lovegrove, and A. J. Davison (2011-11) DTAM: dense tracking and mapping in real-time. In 2011 International Conference on Computer Vision, Vol. , pp. 2320–2327. External Links: Document, ISSN 2380-7504 Cited by: §I, §II.
  • [19] J. Park, Q. Zhou, and V. Koltun (2017) Colored point cloud registration revisited. In Proceedings of the IEEE International Conference on Computer Vision, pp. 143–152. Cited by: §III-B.
  • [20] T. Pire, T. Fischer, J. Civera, P. De Cristóforis, and J. J. Berlles (2015) Stereo parallel tracking and mapping for robot localization. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1373–1378. Cited by: §I.
  • [21] R. Ranftl, V. Vineet, Q. Chen, and V. Koltun (2016) Dense monocular depth estimation in complex dynamic scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4058–4066. Cited by: §II.
  • [22] T. Schops, T. Sattler, and M. Pollefeys (2019) BAD slam: bundle adjusted direct rgb-d slam. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 134–144. Cited by: §II.
  • [23] H. Strasdat, J. Montiel, and A. J. Davison (2010) Scale drift-aware large scale monocular slam. Robotics: Science and Systems VI 2 (3), pp. 7. Cited by: §III-B, §III.
  • [24] D. Sun, X. Yang, M. Liu, and J. Kautz (2018) PWC-net: cnns for optical flow using pyramid, warping, and cost volume. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8934–8943. Cited by: §II.
  • [25] C. Tang and P. Tan (2018) Ba-net: dense bundle adjustment network. arXiv preprint arXiv:1806.04807. Cited by: §V.
  • [26] L. Valgaerts, A. Bruhn, M. Mainberger, and J. Weickert (2012) Dense versus sparse approaches for estimating the fundamental matrix. International Journal of Computer Vision 96 (2), pp. 212–234. Cited by: §II.
  • [27] R. Wang, S. M. Pizer, and J. Frahm (2019) Recurrent neural network for (un-) supervised learning of monocular video visual odometry and depth. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5555–5564. Cited by: §V.
  • [28] W. Wang, D. Zhu, X. Wang, Y. Hu, Y. Qiu, C. Wang, Y. Hu, A. Kapoor, and S. Scherer (2020) TartanAir: a dataset to push the limits of visual slam. arXiv preprint arXiv:2003.14338. Cited by: §IV.
  • [29] T. Whelan, S. Leutenegger, R. Salas-Moreno, B. Glocker, and A. Davison (2015) ElasticFusion: dense slam without a pose graph. Cited by: §II.
  • [30] N. Yang, L. v. Stumberg, R. Wang, and D. Cremers (2020) D3VO: deep depth, deep pose and deep uncertainty for monocular visual odometry. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1281–1292. Cited by: §V.
  • [31] Z. Yin and J. Shi (2018) GeoNet: unsupervised learning of dense depth, optical flow and camera pose. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1983–1992. Cited by: §V.
  • [32] F. Zhang, V. Prisacariu, R. Yang, and P. H.S. Torr (2019-06) GA-net: guided aggregation net for end-to-end stereo matching. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §IV.
  • [33] S. Zhao, Y. Sheng, Y. Dong, E. I. Chang, Y. Xu, et al. (2020) MaskFlownet: asymmetric feature matching with learnable occlusion mask. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6278–6287. Cited by: §II, §IV.
  • [34] E. Zheng, E. Dunn, V. Jojic, and J. Frahm (2014) Patchmatch based joint view selection and depthmap estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1510–1517. Cited by: §III-A.
  • [35] H. Zhou, B. Ummenhofer, and T. Brox (2018) Deeptam: deep tracking and mapping. In Proceedings of the European conference on computer vision (ECCV), pp. 822–838. Cited by: §V.
  • [36] T. Zhou, M. Brown, N. Snavely, and D. G. Lowe (2017) Unsupervised learning of depth and ego-motion from video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1851–1858. Cited by: §V.