Multi-Camera Sensor Fusion for Visual Odometry using Deep Uncertainty Estimation

by Nimet Kaygusuz, et al.
University of Surrey

Visual Odometry (VO) estimation is an important source of information for vehicle state estimation and autonomous driving. Recently, deep learning based approaches have begun to appear in the literature. However, in the context of driving, single sensor based approaches are often prone to failure because of degraded image quality due to environmental factors, camera placement, etc. To address this issue, we propose a deep sensor fusion framework which estimates vehicle motion using both pose and uncertainty estimations from multiple on-board cameras. We extract spatio-temporal feature representations from a set of consecutive images using a hybrid CNN - RNN model. We then utilise a Mixture Density Network (MDN) to estimate the 6-DoF pose as a mixture of distributions and a fusion module to estimate the final pose using MDN outputs from multi-cameras. We evaluate our approach on the publicly available, large scale autonomous vehicle dataset, nuScenes. The results show that the proposed fusion approach surpasses the state-of-the-art, and provides robust estimates and accurate trajectories compared to individual camera-based estimations.






I Introduction

Perception of the environment via sensor systems is a major challenge in the quest to realise autonomous driving. To ensure safety, accurate and robust systems are required that can operate within any environment. One of the most fundamental components of any autonomous system is ego-motion estimation which can be used for both vehicle control and automated driving.

In recent years, considerable research effort has been invested into ego-motion estimation using on-board sensors [5]. It is common to employ various types of complementary sensors, e.g., IMU, GPS, LiDAR and cameras, to provide accurate motion for vehicle state estimation systems.

Cameras are a popular choice as they are cost effective, already installed on most modern vehicles, and we know that vision is key to human driving ability. However, the accuracy of vision-based motion estimation is affected by various internal and external factors. Even if we assume that the camera calibration is known and correct, the quality of the captured images is dependent on environmental factors such as occlusions, lighting and weather conditions. Considering that Visual Odometry (VO) works by examining the change in image pixel motion between consecutive frames, robust and distinctive features are crucial for its performance. This means that the quality of estimated motion is dependent upon the quality of images, which in turn is dependent upon the type and structure of the scene.

Fig. 1: An overview of the proposed multi-camera fusion approach.

Robustness can be provided in many ways, but for vehicle state estimation, sensor fusion techniques are often employed. Many successful sensor fusion frameworks employ a Kalman filter (or variant) [3, 27]. Kalman filters are designed to handle sensor measurement noise and allow the combination of complementary sensors. However, the measurement statistics and system dynamics need to be known for the filter to operate successfully. As image quality is dependent upon environmental factors, fusion of multiple cameras in a VO system requires estimates of sensor reliability or measurement noise, and this implies implicit knowledge about which environmental factors lead to degradation.
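The role that measurement noise plays in Kalman-style fusion can be illustrated with the simplest possible case: fusing two independent estimates of the same scalar quantity by weighting each with its inverse variance. This is a toy example of ours, not part of the paper's method:

```python
def fuse(measurements, variances):
    """Inverse-variance weighted fusion of independent estimates of the
    same quantity: the basic building block behind Kalman-style updates."""
    weights = [1.0 / v for v in variances]
    fused = sum(w * m for w, m in zip(weights, measurements)) / sum(weights)
    fused_var = 1.0 / sum(weights)  # fused variance is below every input variance
    return fused, fused_var

# Two cameras report forward motion of 1.0 m and 1.4 m with variances
# 0.01 and 0.04: the fused estimate (1.08) sits closer to the more
# confident sensor, and its variance (0.008) is lower than either input.
est, var = fuse([1.0, 1.4], [0.01, 0.04])
```

The point the paragraph makes follows directly: without per-sensor variances (which, for cameras, depend on the environment), the weights above cannot be set, and fusion degrades to a plain average.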

Motivated by recent developments in deep learning and its applications to autonomous vehicles, we propose a learning based sensor fusion framework which fuses multiple VO estimates from multiple cameras, taking into account the predicted confidence of each sensor.

We utilise a hybrid Convolutional Neural Network (CNN) - Recurrent Neural Network (RNN) architecture that extracts spatio-temporal features from images. We then employ a Mixture Density Network (MDN) which predicts the probability distributions of motion for each camera independently. This approach allows our model to learn the confidence/uncertainty of motion for each camera. Independent motion estimates from multiple arms of the network are then combined in a neural fusion module. An overview of our approach can be seen in Figure 1.


We evaluate the proposed fusion approach on nuScenes [2], a large scale, public, autonomous driving dataset. We report quantitative and qualitative VO results. Our experiments show that, overall, the proposed fusion approach provides more accurate estimations than individual cameras alone. Furthermore, our model surpasses two state-of-the-art VO algorithms and a baseline Extended Kalman Filter (EKF) based fusion.

The contributions of this work can be summarised as:

  • We utilise a deep neural network approach to estimate both vehicle motion and the confidence in that estimation, from multiple monocular cameras.

  • We propose a deep fusion framework that can intelligently fuse multiple motion estimates incorporating the pose and uncertainties in prediction.

  • We demonstrate state-of-the-art performance on a large-scale autonomous driving dataset and evaluate performance under different weather and lighting conditions.

The rest of this paper is structured as follows: In Section II we discuss relevant related work. We then introduce the proposed learning based fusion model in Section III. We share the experimental setup and report our results in Section IV. Finally, we conclude the paper in Section V by discussing our findings.

II Related Work

This paper focuses on estimating vehicle motion from multiple on-board cameras. VO can be defined as estimating the motion of a camera by examining the motion of features between consecutive image frames. Traditionally, there are two families of approaches to estimating vehicle motion via vision sensors, namely direct methods and feature based methods. Direct methods [21, 8] track the changes in pixel intensities to estimate the camera motion. Feature based methods [6, 20] use handcrafted features to detect salient patterns and track their displacement on the image plane in order to estimate the camera motion [24]. Even though both families show promising results, they both have their shortcomings. Feature based methods are prone to failure in low texture environments or with insufficient lighting, while direct methods tend to fail at high velocities and are sensitive to photometric changes.

More recently, deep learning based models have been applied to the VO task [28, 15, 31]. One of the main advantages of these models is that they do not require handcrafted features, due to their ability to learn meaningful feature representations for the task they were trained upon. Compared to classical VO approaches, they are more robust in low illumination and texture-less environments [5]. In [18], Mohanty et al. proposed estimating motion via a monocular camera from consecutive frames using CNNs trained in a supervised manner. Wang et al. [28, 29] expanded this approach and introduced the use of RNNs to model the temporal dependencies between frames. In this work, we employ a similar supervised approach to estimate motion for individual cameras. However, instead of directly regressing a 6-DoF pose, we estimate the motion as a mixture of Gaussians using MDNs. This enables our model to estimate the pose and its uncertainty together, which form the input of our fusion framework.

One of the main shortcomings of supervised deep learning approaches is their need to be trained on large scale datasets. To address this issue, unsupervised VO methods have been studied. Zhou et al. [32] proposed learning depth maps and camera pose change together with view synthesis. Li et al. [15] utilised stereo images which enabled their network to recover the global scale. More recently, Yang et al. [30] proposed learning depth, pose and photometric uncertainty in an unsupervised manner, where they used photometric uncertainties to optimise the VO estimates. Although unsupervised VO methods have the potential of exploiting unlabelled data, these approaches do not have the necessary robustness for vehicle state estimation.

To improve the robustness of vehicle motion estimation, sensor fusion techniques have been employed [22, 16, 4]. However, most sensor fusion studies focus on using VO as a complementary source of information. Parra et al. [22] use VO when GPS fails. Lynen et al. [16] fuse Inertial Measurement Unit (IMU) and vision sensors using a Kalman filter. Chen et al. [4] proposed a fusion framework which selectively fuses a monocular camera and IMU by eliminating corrupted sensor measurements.

To perform more reliably than any individual sensor, sensor fusion techniques require measurement uncertainty. But building an accurate uncertainty model for sensors is a difficult task [9]. In addition to sensor imperfections, adverse environmental conditions have a significant effect on sensor output. For example, visual sensors, such as colour cameras, are particularly sensitive to environmental factors such as direct sunlight, low texture areas or poor lighting. Thus, semantic understanding of the scene is required in order to estimate the accuracy of vision sensors. In this work we model the measurement uncertainty using an MDN. By estimating a probability distribution over the motion for each camera, we can robustly fuse multiple sensors together in a single multi-stream neural network to overcome the possible failure of individual sensors.

Fig. 2: An overview of the proposed multi-camera sensor fusion for VO. The fusion module uses pose and uncertainty estimations from the MDN module.

III Methodology

In this section, we introduce the proposed multi-camera fusion approach for estimating VO. Our approach starts by acquiring video streams from multiple cameras that are mounted on the vehicle at different positions & orientations. Given video streams from $K$ cameras, each with $N$ frames, our model predicts the 6-DoF relative poses between consecutive time steps.

For each camera, we first extract spatio-temporal feature representations using a hybrid CNN - RNN architecture (Section III-A). We then pass these representations to an MDN module (Section III-B), which predicts a mixture of Gaussian distributions for the vehicle pose. By running multiple streams of VO through different arms of the network simultaneously, we then combine the pose distributions coming from multiple cameras in a deep fusion module (Section III-C). This produces a final pose estimate. An overview of the proposed multi-camera fusion approach is visualised in Figure 2.

III-A Spatio-Temporal Encoder

Estimating motion from video requires both spatial and temporal understanding. Temporal modelling can be further divided into short term and long term tracking. While we are interested in short term changes between consecutive frames, i.e. relative pose, we also want to model the long term motion of the vehicle to help produce consistent estimates. In this work, we model spatial and short-term temporal representations using a CNN backbone similar to those used for optical flow estimation [7], which is a related task. Given a consecutive image pair from the $k$-th camera, we extract the motion features as:

$$\mathbf{x}_t^k = \mathrm{CNN}(I_{t-1}^k \odot I_t^k) \tag{1}$$

where $\odot$ is the concatenation operation over the image colour channels. The CNN is composed of convolutional layers; after each convolutional layer, we include batch normalisation [13], ReLU [17] and Dropout [25].

To model the longer term vehicle motion, we feed the features from the CNN, $\mathbf{x}_t^k$, to an RNN module. At each time step $t$, the RNN produces temporally enhanced representations as:

$$\mathbf{o}_t^k, \mathbf{h}_t^k = \mathrm{RNN}(\mathbf{x}_t^k, \mathbf{h}_{t-1}^k) \tag{2}$$

where $\mathbf{x}_t^k$ and $\mathbf{h}_{t-1}^k$ are the visual features and the previous hidden state at time $t$ for the $k$-th camera, respectively. We initialise the hidden state as all zeros for $t=0$. In this work we employed a bi-directional Long Short-Term Memory (LSTM) [12] and used a small sliding window of past frames. However, any RNN architecture can be utilised with our approach.
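The encoder described above can be sketched in PyTorch as follows. This is an illustrative sketch: the layer sizes, dropout rates and window handling are our assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SpatioTemporalEncoder(nn.Module):
    """Sketch: a FlowNet-style CNN over channel-concatenated frame pairs,
    followed by a bi-directional LSTM over a short sliding window."""
    def __init__(self, feat_dim=256, hidden_dim=128):
        super().__init__()
        # Two RGB frames concatenated along the channel axis -> 6 input channels.
        self.cnn = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm2d(32), nn.ReLU(), nn.Dropout2d(0.2),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2),
            nn.BatchNorm2d(64), nn.ReLU(), nn.Dropout2d(0.2),
            nn.Conv2d(64, feat_dim, kernel_size=3, stride=2, padding=1),
            nn.AdaptiveAvgPool2d(1),
        )
        self.rnn = nn.LSTM(feat_dim, hidden_dim,
                           batch_first=True, bidirectional=True)

    def forward(self, frames):
        # frames: (batch, T+1, 3, H, W); build consecutive pairs along channels.
        pairs = torch.cat([frames[:, :-1], frames[:, 1:]], dim=2)  # (B, T, 6, H, W)
        b, t = pairs.shape[:2]
        feats = self.cnn(pairs.flatten(0, 1)).flatten(1)           # (B*T, feat_dim)
        out, _ = self.rnn(feats.view(b, t, -1))                    # (B, T, 2*hidden_dim)
        return out
```

A window of 5 frames thus yields 4 consecutive pairs and 4 temporally enhanced feature vectors, one per relative pose to estimate.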

III-B Mixture Density Network

Learning based approaches commonly suffer from the problem of regression to the mean, which can be considered as approximating the conditional average of the output. To address this issue, we utilise an MDN, which estimates a mixture of distributions of the 6-DoF relative poses.

We choose to construct our mixture model from Gaussian distributions to model the vehicle’s pose. For each camera $k$, our model estimates a mixture model, $\mathcal{M}_t^k$, with $N$ components at every time step $t$, which can be notated as:

$$\mathcal{M}_t^k = \{\boldsymbol{\mu}_{t,i}^k,\, \boldsymbol{\sigma}_{t,i}^k,\, \alpha_{t,i}^k\}_{i=1}^{N} \tag{3}$$

where $\boldsymbol{\mu}$, $\boldsymbol{\sigma}$ and $\alpha$ represent the means, standard deviations and mixture coefficients, respectively. This approach enables us to model the variance in the output and provides an estimate of how confident our model is in its estimation.

Given the LSTM outputs of the $k$-th camera, $\mathbf{o}_t^k$, we estimate the conditional density of the 6-DoF pose, $y_t$, for the $i$-th mixture component as:

$$p_i(y_t \mid \mathbf{o}_t^k) = \mathcal{N}\left(y_t;\, \boldsymbol{\mu}_i(\mathbf{o}_t^k),\, \boldsymbol{\sigma}_i^2(\mathbf{o}_t^k)\right) \tag{4}$$

where $\boldsymbol{\mu}_i$ and $\boldsymbol{\sigma}_i$ denote the mean and standard deviation of the $i$-th distribution, conditioned on the features $\mathbf{o}_t^k$.

The probability density of the 6-DoF pose is then represented as a linear combination of the mixture components:

$$p(y_t \mid \mathbf{o}_t^k) = \sum_{i=1}^{N} \alpha_i(\mathbf{o}_t^k)\, p_i(y_t \mid \mathbf{o}_t^k) \tag{5}$$

where $\alpha_i$ is the mixture coefficient, which represents the probability of the pose being generated by the $i$-th component.

Finding the appropriate weights for our model can be achieved by maximising the likelihood $p(y_t \mid \mathbf{o}_t^k)$. Thus, we train our model by minimising the negative log likelihood of the ground truth poses being generated by the mixture distribution:

$$\mathcal{L}_{\mathrm{MDN}} = -\sum_{t} \log p(y_t \mid \mathbf{o}_t^k) \tag{6}$$
We train MDN modules independently for each view. Estimating parameters of a mixture of distributions [1] allows our model to predict the variance of the pose which explicitly represents the VO uncertainty.
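A minimal sketch of such an MDN head and its negative log-likelihood loss is given below. The feature and pose dimensions, the number of components and the diagonal-Gaussian factorisation across pose dimensions are our assumptions:

```python
import torch
import torch.nn as nn

class MDNHead(nn.Module):
    """Sketch of the MDN module: maps encoder features to the means,
    standard deviations and mixing coefficients of N Gaussian components
    over the 6-DoF relative pose."""
    def __init__(self, in_dim=256, n_components=3, pose_dim=6):
        super().__init__()
        self.n, self.d = n_components, pose_dim
        self.mu = nn.Linear(in_dim, n_components * pose_dim)
        self.log_sigma = nn.Linear(in_dim, n_components * pose_dim)
        self.alpha = nn.Linear(in_dim, n_components)

    def forward(self, feats):
        b = feats.shape[0]
        mu = self.mu(feats).view(b, self.n, self.d)
        sigma = self.log_sigma(feats).view(b, self.n, self.d).exp()  # enforce sigma > 0
        alpha = torch.softmax(self.alpha(feats), dim=-1)             # weights sum to 1
        return mu, sigma, alpha

def mdn_nll(mu, sigma, alpha, target):
    """Negative log-likelihood of the target pose under the mixture,
    assuming independent pose dimensions within each component."""
    dist = torch.distributions.Normal(mu, sigma)
    log_p = dist.log_prob(target.unsqueeze(1)).sum(-1)          # (B, N) per-component
    return -torch.logsumexp(torch.log(alpha) + log_p, dim=-1).mean()
```

Exponentiating a predicted log-sigma and applying a softmax to the mixing logits are the standard ways to satisfy the positivity and sum-to-one constraints of a mixture model.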

III-C Uncertainty Based Deep Sensor Fusion

The fusion module within the network combines the predictions from individual cameras and estimates a final 6-DoF pose transformation. In a similar fashion to using covariance matrices in a Kalman filter to model measurement uncertainties, we use the output of each camera MDN (including the means, standard deviations and mixture coefficients) as input to the fusion module.

We concatenate the mixture model estimates from all $K$ cameras and project them to a latent space using a Multilayer Perceptron (MLP) with dropout and ReLU activation as:

$$\mathbf{z}_t = \mathrm{MLP}(\mathcal{M}_t^1 \odot \cdots \odot \mathcal{M}_t^K) \tag{7}$$

where $\mathcal{M}_t^k$ is the mixture model parameters of the $k$-th camera at time $t$ (see Equation 3). To encourage a temporally consistent final pose estimate, we employ an RNN model. We pass the MLP outputs to the RNN layer and extract a temporal representation, $\mathbf{r}_t$, as:

$$\mathbf{r}_t, \mathbf{h}_t = \mathrm{RNN}(\mathbf{z}_t, \mathbf{h}_{t-1}) \tag{8}$$

where $\mathbf{h}_{t-1}$ is the hidden state from the previous time step. Finally, we employ a final MLP layer to estimate the fused 6-DoF relative pose, $\hat{y}_t$, as:

$$\hat{y}_t = \mathrm{MLP}(\mathbf{r}_t) \tag{9}$$
We train our fusion module using a Mean Squared Error (MSE) loss function. To balance the translation and rotation errors we apply a weighted sum and calculate the error as:

$$\mathcal{L}_{\mathrm{MSE}} = \frac{1}{T}\sum_{t=1}^{T} \left( \lVert \hat{\mathbf{t}}_t - \mathbf{t}_t \rVert^2 + \beta\, \lVert \hat{\mathbf{r}}_t - \mathbf{r}_t \rVert^2 \right) \tag{10}$$

where $\mathbf{t}_t$ and $\mathbf{r}_t$ denote the translation and rotation (Euler angles) of the ground truth relative poses, respectively, and $\beta$ is an empirically chosen hyper-parameter to weight the rotation errors.
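Putting the pieces of this section together, the fusion module can be sketched as below. The hidden size, dropout rate and the rotation weight β are our placeholders, not the paper's values:

```python
import torch
import torch.nn as nn

class FusionModule(nn.Module):
    """Sketch of the uncertainty-based fusion: concatenated MDN parameters
    from all cameras -> MLP -> RNN -> final 6-DoF relative pose."""
    def __init__(self, n_cams=6, n_components=3, pose_dim=6, hidden=128):
        super().__init__()
        # Each camera contributes N means + N sigmas (6-D each) + N mixing weights.
        per_cam = n_components * (2 * pose_dim + 1)
        self.mlp = nn.Sequential(
            nn.Linear(n_cams * per_cam, hidden), nn.ReLU(), nn.Dropout(0.2))
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, pose_dim)

    def forward(self, mdn_params):
        # mdn_params: (batch, time, n_cams * per_cam), flattened MDN outputs.
        z, _ = self.rnn(self.mlp(mdn_params))
        return self.head(z)  # (batch, time, 6)

def weighted_mse(pred, gt, beta=10.0):
    """Translation MSE plus beta-weighted rotation MSE (beta is a placeholder)."""
    t_err = ((pred[..., :3] - gt[..., :3]) ** 2).mean()
    r_err = ((pred[..., 3:] - gt[..., 3:]) ** 2).mean()
    return t_err + beta * r_err
```

Because the fusion input includes the standard deviations and mixing coefficients, the MLP can learn to down-weight cameras whose MDN reports high uncertainty, mirroring the role of covariances in a Kalman filter.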

IV Experiments

IV-A Dataset and Implementation Details

We evaluate our approach on the recently released nuScenes dataset [2], a large-scale autonomous driving dataset consisting of 1.4 million camera images covering different types of manoeuvres, weather and lighting conditions. It has six cameras attached at different locations & orientations around the vehicle. The cameras have a 70° FOV for the front and side views and a 110° FOV for the rear, providing a full 360° view. This makes nuScenes an ideal dataset for multi-camera fusion. nuScenes provides driving sequences with ground truth poses. Among them we eliminate the static scenarios, where the vehicle does not move during the sequence. It should be noted that the remaining scenes still contain static parts of a trajectory, which can be easily learned by the model. We split the sequences into training and test sets, yielding 676 and 100 sequences, respectively.

We implemented our network using the PyTorch deep learning framework [23]. The Adam optimiser [14] is used to train our network, together with plateau learning rate scheduling. We use pre-trained FlowNet [7] weights to initialise our CNN backbone and Xavier initialisation [10] for the remaining parameters. We train our networks on a machine equipped with an NVIDIA Titan X GPU. At inference, considering that nuScenes cameras run at 12 Hz, the proposed approach meets real time requirements on the same machine. To evaluate the performance of our approach, we use the evo python package [11] and report the Relative Pose Error (RPE), since it measures the local accuracy of a trajectory and is a standard way to evaluate VO systems.
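The optimiser and plateau scheduling described above can be set up in PyTorch as follows. The learning rate, decay factor and patience shown are placeholders, since the paper's exact values are not reproduced here, and the linear layer stands in for the full network:

```python
import torch

model = torch.nn.Linear(10, 6)  # stand-in for the full fusion network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# ReduceLROnPlateau decays the learning rate by `factor` once the monitored
# validation loss has stopped improving for `patience` epochs.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=5)

val_loss = 0.1  # placeholder validation loss for one epoch
scheduler.step(val_loss)
```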

IV-B Comparing Fusion Results with Individual Cameras

In our first set of experiments, we compare the performance of the proposed multi-camera fusion approach against the VO estimations from individual cameras. As mentioned in Section III-B, the MDN module estimates a mixture of distributions over the 6-DoF pose for each camera. Here, we use the means of the estimated Gaussians as the per-camera VO predictions and compare them to our fusion module’s estimations (see Section III-C), to assess the efficacy of the proposed approach relative to single camera estimation.
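The RPE statistics reported below (RMSE, Max, Mean, std) can be computed from the per-frame residual transforms between ground-truth and estimated relative poses. The following is our own simplified translational-RPE sketch; the paper uses the evo package for its evaluation:

```python
import numpy as np

def relative_pose_error(gt_rel, est_rel):
    """Translational RPE statistics from paired lists of ground-truth and
    estimated relative transforms (4x4 homogeneous matrices)."""
    errors = []
    for G, E in zip(gt_rel, est_rel):
        delta = np.linalg.inv(G) @ E            # residual transform per frame
        errors.append(np.linalg.norm(delta[:3, 3]))
    errors = np.asarray(errors)
    return {"RMSE": float(np.sqrt((errors ** 2).mean())),
            "Max": float(errors.max()),
            "Mean": float(errors.mean()),
            "std": float(errors.std())}
```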

Camera Views RMSE Max Mean std
FRONT 0.077 0.531 0.049 0.058
FRONT LEFT 0.086 0.536 0.057 0.062
FRONT RIGHT 0.087 0.563 0.056 0.066
BACK 0.090 0.621 0.054 0.070
BACK LEFT 0.111 0.599 0.073 0.082
BACK RIGHT 0.085 0.531 0.056 0.064
FUSION 0.045 0.171 0.035 0.028
TABLE I: Relative pose errors of monocular VO and the proposed fusion approach.
(a) Daylight / scene-0303
(b) Rain / scene-0570
(c) Night / scene-1062
Fig. 3: Estimated trajectories (top) and sample frames (bottom) from the corresponding nuScenes sequences.
                 | Daylight (64 seq.)      | Rain (12 seq.)          | Night (24 seq.)
                 | RMSE Max   Mean  std    | RMSE Max   Mean  std    | RMSE Max   Mean  std
ORB-SLAM [20]    | 0.40 3.28  0.12  0.38   | 0.76 10.19 0.14  0.74   | NR   NR    NR    NR
DeepVO [28]      | 0.09 0.54  0.06  0.07   | 0.07 0.39  0.05  0.05   | 0.12 0.74  0.09  0.09
EKF [19]         | 0.07 0.35  0.05  0.05   | 0.06 0.36  0.04  0.04   | 0.16 0.86  0.08  0.12
MC-Fusion (Ours) | 0.04 0.15  0.03  0.02   | 0.04 0.14  0.03  0.03   | 0.07 0.26  0.05  0.04
TABLE II: Relative pose errors on the nuScenes test sequences, categorised by weather/lighting conditions. NR marks sequences where an approach fails to produce results.

Table I shows the VO performance of the six different cameras compared to the result of our fusion model. We report the average RPE on the nuScenes test split, which consists of 100 unseen sequences. As can be seen, the fusion results outperform all individual camera based estimations. This is because the proposed fusion model can employ complementary information from the different views. This demonstrates the efficacy of our multi-camera fusion approach and verifies that VO accuracy can be improved using the proposed uncertainty-based fusion.

IV-C Comparison Against the State-of-the-Art

We now compare our fusion model, which already outperforms all individual camera based estimates, against other state-of-the-art methods on the nuScenes dataset. In our comparisons, we use monocular ORB-SLAM [20], DeepVO [28] and the ROS implementation of an EKF [19]. We use the front camera images for ORB-SLAM and DeepVO. To be able to compare ORB-SLAM in the context of monocular VO, we run it without loop-closure following the protocols of [15, 29]. Since classical monocular VO algorithms do not recover the absolute scale for their estimated trajectories, we scale & align ORB-SLAM trajectories with the ground truth using a least-squares similarity transform [26]. For DeepVO results, we use its PyTorch implementation and, for a fair comparison, we train it with the same training split as used to train our fusion module. In order to evaluate EKF fusion performance, it is necessary to provide the algorithm with accurate pose and uncertainty estimates. In this test, we wish to compare our deep fusion to a traditional EKF fusion approach. As such, we feed the MDN outputs from each individual camera, including their estimated covariances, to the EKF.
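The scale-and-align step relies on the closed-form least-squares similarity transform of [26] (Umeyama, 1991), which maps an estimated point set onto the ground truth. A sketch of our own implementation, under the assumption of (N, 3) trajectory point arrays:

```python
import numpy as np

def umeyama_alignment(src, dst, with_scale=True):
    """Least-squares similarity transform mapping src onto dst.
    src, dst: (N, 3) point sets. Returns rotation R, translation t, scale s
    such that dst ~= s * R @ src + t."""
    mu_src, mu_dst = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / len(src)          # cross-covariance of centred sets
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                           # avoid reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / src_c.var(0).sum() if with_scale else 1.0
    t = mu_dst - s * R @ mu_src
    return R, t, s
```

Applied to a monocular trajectory, `s` recovers the unknown global scale while `R` and `t` remove the arbitrary choice of reference frame, so the RPE comparison is not biased against scale-free methods.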

In Table II, we share the RPE for three different categories based on weather/lighting conditions, i.e. daylight, rain and night time. As a classical VO algorithm, ORB-SLAM works by extracting and tracking salient features in consecutive frames. Thus, it needs sufficient illumination and texture in the environment to work effectively. This explains why, in our experiments, ORB-SLAM often cannot initialise successfully in night time scenarios. Thus, we could not report its qualitative (see Figure 3(c)) and quantitative (see Table II) analysis for night time driving conditions. In contrast, deep learning based approaches do produce results for night time scenarios, which are among the most challenging conditions for estimating VO due to the reduced visibility.

The Kalman filter is sensitive to initial parameter selection, e.g. the initial covariance estimate and process noise. We choose the set of parameters that yielded the best performance on average across the test sequences. Under this setup, the EKF produces better results than ORB-SLAM.

In the daylight and rain scenarios, the EKF also performs better than DeepVO. However, for the night sequences, it has a larger error. This can be explained by the fact that an optimal Kalman filter requires careful parameter tuning, which is difficult to achieve for daytime and night-time simultaneously. In contrast, our fusion based approach performs best across all scenarios.

We share qualitative examples from all three categories (daylight, rain and night) in Figure 3. This figure shows the estimated trajectories against ground truth, and a sample image from the corresponding scene.

As can be seen, all approaches estimate the trajectories accurately in daylight conditions. However, ORB-SLAM’s performance degrades significantly in rain and night time scenarios. Deep approaches maintain accurate trajectories even in these challenging conditions. Most significantly, our multi-camera fusion approach (MC-Fusion) outperforms the other methods, estimating more accurate trajectories.

V Conclusion

This paper presents a novel deep learning based multi-camera fusion framework for estimating VO, which we evaluate on the nuScenes dataset. Exploiting the advantages of deep learning, we show that the proposed approach can estimate trajectories even in night-time scenarios, which are challenging for classical VO approaches. Furthermore, we demonstrate that our proposed fusion model outperforms single camera based estimations by exploiting the complementary information from multiple camera views. Finally, we validate the performance of our approach by comparing it with state-of-the-art methods, namely DeepVO and ORB-SLAM. We also demonstrate the efficacy of our single camera based pose and uncertainty estimations by feeding them to an EKF.


  • [1] C. M. Bishop (1994) Mixture density networks. Cited by: §III-B.
  • [2] H. Caesar, V. Bankiti, A. H. Lang, S. Vora, V. E. Liong, Q. Xu, A. Krishnan, Y. Pan, G. Baldan, and O. Beijbom (2020) NuScenes: a multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Cited by: §I, §IV-A.
  • [3] F. Caron, E. Duflos, D. Pomorski, and P. Vanheeghe (2006) GPS/imu data fusion using multisensor kalman filtering: introduction of contextual aspects. Information fusion 7 (2). Cited by: §I.
  • [4] C. Chen, S. Rosa, Y. Miao, C. X. Lu, W. Wu, A. Markham, and N. Trigoni (2019) Selective sensor fusion for neural visual-inertial odometry. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Cited by: §II.
  • [5] C. Chen, B. Wang, C. X. Lu, N. Trigoni, and A. Markham (2020) A survey on deep learning for localization and mapping: towards the age of spatial machine intelligence. arXiv preprint arXiv:2006.12567. Cited by: §I, §II.
  • [6] A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse (2007) MonoSLAM: real-time single camera slam. IEEE transactions on pattern analysis and machine intelligence 29 (6). Cited by: §II.
  • [7] A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. Van Der Smagt, D. Cremers, and T. Brox (2015) Flownet: learning optical flow with convolutional networks. In Proceedings of the IEEE international conference on computer vision, Cited by: §III-A, §IV-A.
  • [8] J. Engel, V. Koltun, and D. Cremers (2017) Direct sparse odometry. IEEE transactions on pattern analysis and machine intelligence 40 (3). Cited by: §II.
  • [9] J. Fayyad, M. A. Jaradat, D. Gruyer, and H. Najjaran (2020) Deep learning sensor fusion for autonomous vehicle perception and localization: a review. Sensors 20 (15). Cited by: §II.
  • [10] X. Glorot and Y. Bengio (2010) Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics. Cited by: §IV-A.
  • [11] M. Grupp (2017) Evo: python package for the evaluation of odometry and SLAM. Cited by: §IV-A.
  • [12] S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural computation 9 (8). Cited by: §III-A.
  • [13] S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In International conference on machine learning. Cited by: §III-A.
  • [14] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §IV-A.
  • [15] R. Li, S. Wang, Z. Long, and D. Gu (2018) Undeepvo: monocular visual odometry through unsupervised deep learning. In 2018 IEEE international conference on robotics and automation (ICRA), Cited by: §II, §II, §IV-C.
  • [16] S. Lynen, M. W. Achtelik, S. Weiss, M. Chli, and R. Siegwart (2013) A robust and modular multi-sensor fusion approach applied to mav navigation. In 2013 IEEE/RSJ international conference on intelligent robots and systems, Cited by: §II.
  • [17] A. L. Maas, A. Y. Hannun, and A. Y. Ng (2013) Rectifier nonlinearities improve neural network acoustic models. In Proc. icml, Vol. 30. Cited by: §III-A.
  • [18] V. Mohanty, S. Agrawal, S. Datta, A. Ghosh, V. D. Sharma, and D. Chakravarty (2016) Deepvo: a deep learning approach for monocular visual odometry. arXiv preprint arXiv:1611.06069. Cited by: §II.
  • [19] T. Moore and D. Stouch (2016) A generalized extended kalman filter implementation for the robot operating system. In Intelligent autonomous systems 13, Cited by: §IV-C, TABLE II.
  • [20] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos (2015) ORB-slam: a versatile and accurate monocular slam system. IEEE transactions on robotics 31 (5). Cited by: §II, §IV-C, TABLE II.
  • [21] R. A. Newcombe, S. J. Lovegrove, and A. J. Davison (2011) DTAM: dense tracking and mapping in real-time. In 2011 international conference on computer vision, Cited by: §II.
  • [22] I. Parra, M. A. Sotelo, D. F. Llorca, C. Fernández, A. Llamazares, N. Hernández, and I. García (2011) Visual odometry and map fusion for gps navigation assistance. In 2011 IEEE International Symposium on Industrial Electronics, Cited by: §II.
  • [23] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in pytorch. Cited by: §IV-A.
  • [24] D. Scaramuzza and F. Fraundorfer (2011) Visual odometry [tutorial]. IEEE robotics & automation magazine 18 (4). Cited by: §II.
  • [25] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov (2014) Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research 15 (1). Cited by: §III-A.
  • [26] S. Umeyama (1991) Least-squares estimation of transformation parameters between two point patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence 13 (4). Cited by: §IV-C.
  • [27] G. Wang, Y. Han, J. Chen, S. Wang, Z. Zhang, N. Du, and Y. Zheng (2018) A gnss/ins integrated navigation algorithm based on kalman filter. IFAC-PapersOnLine 51 (17). Cited by: §I.
  • [28] S. Wang, R. Clark, H. Wen, and N. Trigoni (2017) Deepvo: towards end-to-end visual odometry with deep recurrent convolutional neural networks. In 2017 IEEE International Conference on Robotics and Automation (ICRA), Cited by: §II, §IV-C, TABLE II.
  • [29] S. Wang, R. Clark, H. Wen, and N. Trigoni (2018) End-to-end, sequence-to-sequence probabilistic visual odometry through deep neural networks. The International Journal of Robotics Research 37 (4-5). Cited by: §II, §IV-C.
  • [30] N. Yang, L. v. Stumberg, R. Wang, and D. Cremers (2020) D3vo: deep depth, deep pose and deep uncertainty for monocular visual odometry. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Cited by: §II.
  • [31] H. Zhan, R. Garg, C. S. Weerasekera, K. Li, H. Agarwal, and I. Reid (2018) Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §II.
  • [32] T. Zhou, M. Brown, N. Snavely, and D. G. Lowe (2017) Unsupervised learning of depth and ego-motion from video. In Proceedings of the IEEE conference on computer vision and pattern recognition, Cited by: §II.