For large-scale robotic applications in GPS-denied environments, such as indoor industrial sites, underground mines, or space, efficient and precise lidar-based robot localisation is in high demand. Geometry-based methods such as global registration [16, 29] and particle filters [27, 30, 7, 12, 1] are widely used both to rescue a 'kidnapped' robot and to continuously localise the robot. However, the computational effort of these methods increases monotonically with the size of the environment. Deep learning methods [11, 9, 10, 28, 15] are emerging in image-based relocalisation, directly estimating the 6-DOF pose with a deep regression neural network trained on a model of a specific environment. These learning methods are scalable, as the computation time depends only on the complexity of the neural network. However, without geometric verification, they are likely to be less precise than geometry-based methods.
Using conventional filtering-based methods, e.g. Monte Carlo localisation (MCL), the pose estimate is updated recursively with each new observation. This tends to be very robust and to yield precise localisation, although when there is a large error in the initial estimate, the filter can take a long time to converge, on the scale of minutes even for modest-sized maps. To mitigate this limitation, our intuition is to combine deep localisation and MCL.
The first contribution of this paper is a hybrid localisation approach for lidar-based global localisation in large-scale environments. Our approach integrates a learning-based method with a Markov-filter-based method, which makes it possible to efficiently provide global and continuous localisation with centimetre precision. A second contribution is a deep non-parametric model, i.e. a Gaussian Process with a conjunction of deep neural networks as the kernel, used to estimate the distribution of the global 6-DOF pose from superimposed lidar scans. As a core component, the estimated distribution can be used to seed the Monte-Carlo localisation, thereby seamlessly integrating the two systems. A video demo is available at: https://youtu.be/5wQXrpzxHNk.
II Related Work
II-A Lidar-based global localisation
There exists a large body of research dealing with lidar-based localisation. The work that is most relevant to the scope of this paper can be generally divided into methods that provide informed initialisation of MCL, and appearance descriptors that aim to provide ‘one-shot’ localisation.
While MCL in principle is robust and, by using a multimodal belief distribution, lends itself also to global localisation with a very uncertain initial estimate of the robot’s location, a naive initialisation using a uniform particle distribution does not scale well to maps of realistic size. Some authors [16, 30]
alleviate the problem by working with a multi-resolution grid over the map. This makes it possible to scale to slightly larger maps, but remains close to a uniform distribution and does not exploit the learned appearance of the map, as our work does. Another strategy is to design a distribution from observations and sample particles from that [24, 7, 12]. Yet another possibility is to use external information, such as wi-fi beacons, or a separate map layer that governs the prior probability of finding robots in each part of the map. In contrast, we work directly on point cloud data. None of the above methods have been evaluated on maps as large as those in our experiments.
Several engineered appearance descriptors have also been proposed for 2D and 3D lidar-based global localisation [13, 21, 4]. SegMap proposed to learn instance-level descriptors on point cloud segments for 3D lidar loop-closure detection. These all have in common that they aim to provide 'one-shot' global localisation, but require a linear search over all descriptors created from the map. Even if a single descriptor look-up is very quick, for large-scale long-term localisation the linear scaling factor remains a major drawback compared to the method proposed in Sec. III-B.
II-B Deep image-based global localisation
Image localisation is the task of accurately estimating the location and orientation of an image with respect to a global map, and has been studied extensively in both robotics and computer vision. PoseNet and related approaches [9, 10] initiated a trend of estimating the 6-DOF global pose with deep regression neural networks. In later work, the geometric reprojection error, i.e. from reprojecting the 3D reconstruction into the image frame, is jointly optimised with the global pose loss during training. More recently, Valada et al. proposed a geometric consistency loss to learn a spatially consistent global representation, using a relative pose loss as an auxiliary task. Some researchers investigate the predictive uncertainties of the deep model [9, 2], but these uncertainties are not further utilised to improve localisation.
Global loop-closure detection can be combined with local feature matching as a hierarchical approach to visual relocalisation [19, 18]. In these approaches, deep global descriptors learned as a location signature are used to shortlist possible locations in a large-scale environment; the precise 6-DOF pose can then be estimated by 2D-to-3D matching between the retrieved keyframe candidates and the map. With geometric verification, the localisation precision can be remarkably improved.
III-A Problem Formulation
In the global localisation problem, given the observation $z_t$ and 6-DOF robot pose $x_t$ at time $t$, the goal is to estimate the posterior $p(x_t \mid z_t)$. If the robot is moving, and a sequence of observations and control inputs (e.g. odometry) is available, the a-posteriori pose becomes $p(x_t \mid z_{1:t}, u_{1:t})$. Our proposed method has two systems working together to estimate this posterior.
System 1 – Efficient global localisation can be formulated through a learning-based method. We aim to obtain a pose estimate from a single observation using a fast deterministic model. In order to train the deterministic model, the observations and poses used to build the map, i.e. $\{(z_i, x_i)\}_{i=1}^{n}$, are provided as training examples. The problem can be formulated as estimating the conditional probability $p(x \mid z)$. Parametric models (e.g. neural networks) or Gaussian methods (e.g. Gaussian Processes) can be used for this.
System 2 – Precise localisation can be formulated through a Markov-filter-based method. The a-posteriori belief state of the robot pose is iteratively updated as a Bayes filter while the robot is moving. With the motion model $p(x_t \mid x_{t-1}, u_t)$, the belief is updated as $\overline{bel}(x_t) = \int p(x_t \mid x_{t-1}, u_t)\, bel(x_{t-1})\, dx_{t-1}$, and given the measurement model $p(z_t \mid x_t)$, the belief is then updated as $bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)$.
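The two-step recursion above can be illustrated on a toy discrete state space. The following sketch (names and numbers are illustrative, not from the paper) applies the motion update as a convolution and the measurement update as a pointwise product followed by normalisation:

```python
import numpy as np

def predict(belief, motion_kernel):
    """Motion update: convolve the belief with p(x_t | x_{t-1}, u_t)."""
    return np.convolve(belief, motion_kernel, mode="same")

def correct(belief, likelihood):
    """Measurement update: multiply by p(z_t | x_t) and renormalise."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

# toy 1-D corridor with 5 cells and a uniform prior
belief = np.full(5, 0.2)
motion = np.array([0.1, 0.8, 0.1])                    # robot mostly stays put
likelihood = np.array([0.05, 0.05, 0.8, 0.05, 0.05])  # sensor strongly indicates cell 2
belief = correct(predict(belief, motion), likelihood)
```

After one cycle the belief concentrates on the cell favoured by the measurement, which is exactly the behaviour the particle filter approximates in continuous space.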
To integrate the two systems, importance sampling can be used to update the particles generated by System 1 using System 2. In particular, we propose to draw particles from $p(x_t \mid z_t)$ in System 1, update the belief $bel(x_t)$ in System 2, and maintain the particles in a healthy, converging distribution through importance sampling.
III-B System 1: efficient localisation using a deep probabilistic model
To learn a deterministic model that efficiently predicts the 6-DOF pose of the robot, a natural idea is to use a parametric statistical model such as a deep neural network. A bonus is that deep models can learn site-specific features through back-propagation. However, it is also important to model the predictive probability $p(x \mid z)$, which can efficiently generate robot pose hypotheses. Our intuition is to use a Gaussian Process to estimate this conditional distribution.
III-B1 Observation for learning
Using dense point clouds has proven effective for robot localisation. To acquire a dense point cloud from a sparse lidar such as a Velodyne HDL-32E, without using extra mechanical devices, we first superimpose $N$ frames of consecutive observations at time $t$ using odometry:

$\tilde{z}_t = \bigcup_{k=t-N+1}^{t} T_k^t\, z_k,$

where $T_k^t$ is the relative transformation between the pose at frame $k$ and that at frame $t$. $T_k^t$ can be obtained by fusing wheel odometry and inertial sensors, i.e. IMU and gyro, or from lidar odometry. The superimposed point cloud can be converted to a height map and further encoded as a grayscale bird's eye view image, as shown in Fig. 1. When learning the deep probabilistic localisation, we use the bird's eye view image of the superimposed point cloud (denoted $I_t$) as the observation at time $t$.
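A minimal sketch of this preprocessing, assuming the relative transforms are given as 4×4 homogeneous matrices and a hypothetical grid resolution (function names and the resolution are illustrative, not the paper's implementation):

```python
import numpy as np

def superimpose(scans, transforms):
    """Accumulate consecutive scans in the frame of the newest scan.
    scans: list of (M_i, 3) point arrays; transforms: list of 4x4 matrices
    mapping each scan's frame into the frame of the newest scan."""
    clouds = []
    for pts, T in zip(scans, transforms):
        homo = np.hstack([pts, np.ones((len(pts), 1))])
        clouds.append((homo @ T.T)[:, :3])
    return np.vstack(clouds)

def birds_eye_height_map(cloud, size=400, res=0.25):
    """Rasterise a point cloud into a grayscale height image:
    each pixel stores the maximum (non-negative) z of the points in it."""
    img = np.zeros((size, size))
    ij = np.floor(cloud[:, :2] / res).astype(int) + size // 2
    ok = (ij >= 0).all(axis=1) & (ij < size).all(axis=1)
    for (i, j), z in zip(ij[ok], cloud[ok, 2]):
        img[i, j] = max(img[i, j], z)
    return img
```

The resulting image plays the role of $I_t$ above; in practice the paper's 400×400 image covers the stated local scope around the robot.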
III-B2 Deep Neural Network for Feature Learning
A deep neural network is used to learn site-specific features by regressing 6-DOF global poses. Specifically, we first use five convolutional stacks of a pretrained ResNet-50 model as the backbone; the network is then divided into two branches, and three-layer MLPs are used to learn the three-dimensional position $\hat{p}$ and the four-dimensional rotation (quaternion) $\hat{q}$, respectively. We propose the following loss function for simultaneously minimising the positional and rotational losses:

$L = \|\hat{p} - p\|_2 + \beta\,\big(1 - |\langle \hat{q}, q \rangle|\big),$

where $\langle \hat{q}, q \rangle$ is the inner product of the predicted and ground-truth quaternions. The second term measures the distance between the two normalised quaternion vectors.
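The combined loss can be sketched as follows. This is an illustrative reconstruction, not the paper's exact code: the weighting factor `beta` is an assumption, and quaternions are normalised so that $q$ and $-q$ (the same rotation) give zero rotational loss:

```python
import numpy as np

def pose_loss(p_hat, p, q_hat, q, beta=1.0):
    """Positional L2 loss plus a quaternion angular-distance term.
    beta (assumed) balances the two terms; |<q_hat, q>| = 1 iff the
    two unit quaternions represent the same rotation."""
    q_hat = q_hat / np.linalg.norm(q_hat)
    q = q / np.linalg.norm(q)
    pos = np.linalg.norm(p_hat - p)
    rot = 1.0 - abs(np.dot(q_hat, q))   # 0 when the rotations coincide
    return pos + beta * rot
```

Taking the absolute value of the inner product accounts for the double cover of rotation space by unit quaternions.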
Given a pair of bird's eye view images from times $t$ and $t'$, the predicted global poses $(\hat{p}_t, \hat{q}_t)$ and $(\hat{p}_{t'}, \hat{q}_{t'})$, and the ground-truth poses $(p_t, q_t)$ and $(p_{t'}, q_{t'})$, we can calculate the relative transform from both the predictions and the ground truth and constrain them to be geometrically consistent. More specifically, $T_t$ is the transform matrix obtained from the translation $p_t$ and rotation $q_t$ at time $t$. We convert the relative transform matrices back to pose vectors to compute the positional and rotational consistency losses. We find that, with the assistance of this geometric consistency loss, the neural network learns spatially consistent features and constrains the search space of global localisation, thereby enhancing the robustness of global pose estimation.
As noted in prior work, a learning-based model works well near the training trajectories but can struggle elsewhere in the mapped region. Hence we augment the training data within a region of 12.5 m to improve performance. (The bird's eye view image of size 400×400 pixels is generated from the superimposed local point cloud within a scope of 100×100×10 metres, with a resolution of 0.4 metres per pixel. We randomly crop a 300×300 image for training and apply the corresponding translational offset to the target pose.) For geometric consistency learning, we randomly pick paired images within a window of ten frames ([1, 10]).
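The crop-and-offset augmentation can be sketched as below. The function name, the sign convention of the offset, and the metres-per-pixel value are assumptions for illustration only:

```python
import numpy as np

def random_crop_with_pose_offset(img, target, crop=300, res=0.25, rng=None):
    """Crop a (crop x crop) window from a square BEV image and shift the
    x/y components of the pose target by the metric offset of the crop
    centre. res (metres per pixel) and the sign convention are assumed."""
    rng = rng or np.random.default_rng()
    h, w = img.shape
    top = int(rng.integers(0, h - crop + 1))
    left = int(rng.integers(0, w - crop + 1))
    patch = img[top:top + crop, left:left + crop]
    centre_shift = np.array([top, left]) + crop / 2 - np.array([h, w]) / 2
    new_target = np.array(target, dtype=float)
    new_target[:2] -= centre_shift * res   # apply the offset in metres
    return patch, new_target
```

Each crop thus behaves like an observation taken from a slightly shifted vantage point, with the label adjusted to match.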
The proposed deep neural network can learn site-specific and spatio-temporally consistent features. However, inference (prediction) with this regression loss is not fully probabilistic. The drawback is that predictive uncertainties cannot be provided, i.e., the neural network cannot give the predictive distribution of the robot pose. In our two-system framework, the uncertainty of Bayesian localisation is of critical importance: an appropriate predictive distribution can accelerate the convergence of the MCL by giving small variances and, at the same time, suppress the effects of false positives by predicting large variances. To address this, we adopt Gaussian process regression as the basis of the deep localisation network. That is, we propose a hybrid probabilistic regression method in which the deep neural network provides the deep kernel of a Gaussian Process.
III-B3 Gaussian Process with Deep Kernel
Given the training observations $Z$, the learning targets (poses) $Y$, the latent variables $f$, and a testing example $z_*$, the prediction step of Gaussian process regression involves inferring the conditional probability of the latent variable $f_*$ of the testing example as:

$p(f_* \mid z_*, Z, Y) = \int p(f_* \mid f)\, p(f \mid Z, Y)\, df.$

This conventional inference is not scalable, as the computation grows cubically with the number of training examples $n$, i.e. $\mathcal{O}(n^3)$. Instead of using all training examples for the prior, a reduced set of $m$ examples $Z_m \in \mathbb{R}^{m \times d}$ (known as the inducing points) is used to approximate the whole training set, where $d$ is the dimension of the feature and $m \ll n$. Given the latent variables $u$ of the inducing points, the posterior distribution can be estimated by a variational distribution $q(u)$, modelled as a multivariate Gaussian $\mathcal{N}(u \mid m_u, S)$. Via variational inference, the inducing points can be estimated by maximising the evidence lower bound (ELBO) of the log marginal likelihood of $Y$:

$\log p(Y) \geq \mathbb{E}_{q(f)}\big[\log p(Y \mid f)\big] - \mathrm{KL}\big(q(u)\,\|\,p(u)\big).$
In this formula, the first term is the expected predictive likelihood and KL denotes the Kullback-Leibler divergence. Titsias et al. derive the optimal inducing points and the mean and variance of $q(u)$. To make the training further scalable, we train a SVIGP (Stochastic Variational Inference Gaussian Process) on mini-batch data.
With the optimised inducing points $Z_m$ and variational distribution $q(u)$, the predictive probability in Eq. 4 can be reformulated as:

$p(f_* \mid z_*) = \int p(f_* \mid u)\, q(u)\, du = \mathcal{N}(f_* \mid \mu_*, \sigma_*^2).$
We use a shared RBF kernel for multi-output prediction, and the kernel is constructed from deep neural network features. More specifically, the feature of the last layer (shown in Fig. 2) is used as the input of the GP. In this way, the parametric neural network can be integrated with the non-parametric Gaussian Process.
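The sparse-GP predictive equations can be written out in a few lines of NumPy. This is a generic sketch: a plain RBF kernel on raw inputs stands in for the paper's deep kernel, and `svgp_predict`, its hyper-parameters, and the jitter value are illustrative assumptions:

```python
import numpy as np

def rbf(a, b, ls=1.0, var=1.0):
    """Squared-exponential kernel between row-vector sets a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ls ** 2)

def svgp_predict(x_star, Z, m, S, ls=1.0, var=1.0, jitter=1e-8):
    """Predictive mean/variance of a sparse variational GP with
    inducing inputs Z and variational posterior q(u) = N(m, S)."""
    Kmm = rbf(Z, Z, ls, var) + jitter * np.eye(len(Z))
    Ksm = rbf(x_star, Z, ls, var)
    Kss = rbf(x_star, x_star, ls, var)
    A = Ksm @ np.linalg.inv(Kmm)            # K_{*m} K_{mm}^{-1}
    mean = A @ m
    cov = Kss - A @ (Kmm - S) @ A.T         # collapses to S at the inducing inputs
    return mean, np.diag(cov)
```

Evaluating at the inducing inputs themselves recovers (approximately) the variational mean, which is a useful sanity check when wiring a deep feature extractor into the kernel.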
More importantly, through maximising the log marginal likelihood, the inducing points $Z_m$, the variational distribution $q(u)$, the hyper-parameters of the kernel, and the parameters of the deep neural network can be jointly optimised by simple back-propagation. As the GP is very sensitive to the kernel parameters, to avoid local minima we train in two stages. We first train the deep neural network, i.e. the feature of the kernel, using the translational and rotational losses. Then, the GP is trained end-to-end with the deep kernel.
It is worth noting that we only train the Gaussian Process Regression for positioning, and the angular distance loss function is still used to learn the rotation. This is for two reasons: firstly the inherent normalization attribute of the quaternion cannot be leveraged in the Gaussian Process via maximizing the log likelihood, and secondly, the predictive uncertainty of rotation is less important than that of position in large-scale localisation (in other words, rotational predictions depend on positional predictions). Qualitative results of the predictive distributions are shown in Fig. 4.
III-C System 2: Precise localisation using MCL
For System 2 (MCL), we use a reference 3D map built using poses from RTK-GPS (as for training), represented with the Normal Distributions Transform (NDT) for memory efficiency. This step provides the measurement model $p(z_t \mid x_t)$ from the problem formulation.
During localisation, with the motion model $p(x_t \mid x_{t-1}, u_t)$, the belief can be updated as

$\overline{bel}(x_t) = \int p(x_t \mid x_{t-1}, u_t)\, bel(x_{t-1})\, dx_{t-1},$

with the measurement model $p(z_t \mid x_t)$. This integral can be approximated via resampling with importance sampling, i.e. a particle filter. The a-posteriori belief estimate is updated as

$bel(x_t) \approx \sum_i w_t^i\, \delta(x_t - x_t^i),$

where $w_t^i$ refers to the importance weight of sample $x_t^i$, and the measurement model can be obtained by calculating the distance between the lidar distribution and the map distribution under the Normal Distributions Transform (NDT) representation.
Conventional methods usually first initialise a temporary particle distribution representing the initial belief. In our case, however, the belief can be estimated effectively from the current observation using the learning-based method, providing a parametric way to initialise the belief as

$bel(x_0) = p(x_0 \mid z_0),$

where the conditional probability can be estimated by the Gaussian Process using Eq. 6.
Practically, the particles are generated from two sources in our implementation: particles drawn from the GP's predictive distribution and particles resampled from the previous belief set. Through the importance sampling mechanism, the particles from the deep learning estimate and the particles from MCL can be integrated, thereby integrating the two systems. A detailed description is given in Algorithm 1.
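One step of this hybrid scheme can be sketched as follows. This is a simplified stand-in for Algorithm 1: `hybrid_update`, the Gaussian measurement likelihood, and the particle counts are illustrative assumptions, and the NDT likelihood is abstracted into a callable:

```python
import numpy as np

def hybrid_update(particles, gp_mean, gp_cov, measurement_ll,
                  n_new=100, rng=None):
    """One hybrid step: draw fresh particles from the GP predictive
    distribution, merge them with survivors of the previous belief,
    weight everything with the measurement likelihood, and resample.
    measurement_ll maps an (N, d) array of poses to N likelihoods."""
    rng = rng or np.random.default_rng()
    fresh = rng.multivariate_normal(gp_mean, gp_cov, size=n_new)
    merged = np.vstack([particles, fresh])
    w = measurement_ll(merged)
    w = w / w.sum()                                   # importance weights
    idx = rng.choice(len(merged), size=len(particles), p=w)
    return merged[idx], np.full(len(particles), 1.0 / len(particles))
```

When the previous belief is wrong, the GP-seeded particles receive almost all of the weight and the filter re-converges in a single resampling step, which is the mechanism the paper relies on to recover from false positives.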
Table I: Localisation errors of the deep model on the eight test sessions (in chronological order) and overall.

| | Session 1 | Session 2 | Session 3 | Session 4 | Session 5 | Session 6 | Session 7 | Session 8 | Overall |
|---|---|---|---|---|---|---|---|---|---|
| Median translational error | 1.74 m | 1.69 m | 2.02 m | 1.99 m | 2.13 m | 2.14 m | 3.98 m | 3.59 m | 2.18 m |
| Median rotational error | 3.25° | 3.36° | 3.34° | 3.17° | 3.66° | 3.67° | 5.12° | 4.72° | 3.65° |
| Mean translational error | 8.77 m | 2.88 m | 15.3 m | 11.57 m | 14.06 m | 17.33 m | 30.15 m | 32.08 m | 16.55 m |
| Mean rotational error | 6.19° | 4.43° | 9.50° | 7.96° | 8.52° | 11.58° | 17.98° | 14.51° | 4.99° |
| Number of frames | 33K | 14K | 26K | 19K | 27K | 30K | 14K | 25K | 184K |
Our research focuses on long-term localisation using 3D lidar data. In order to evaluate our proposed approach, a long-term mapping dataset with ground truth is required. We did not choose the KITTI dataset, as the amount of overlapping trajectories is not sufficient for training. To the best of our knowledge, the Michigan North Campus Long-Term Vision and Lidar (NCLT) dataset is the only long-term multi-session dataset currently available for lidar mapping and relocalisation. The dataset consists of 27 sessions with varying routes and days over 15 months. In each session, a Segway robot is driven via joystick to traverse the campus, and multi-sensor data including wheel odometry, 3D lidar, IMU, gyro, etc. are recorded. Ground-truth pose data are obtained by fusing lidar scan matching and high-precision RTK-GPS. The whole dataset spans 34.9 h and 147.4 km. More details can be found in the dataset paper.
Since the learning-based method requires training examples and the filter-based method needs a pre-built map, we selected eight sessions for training (2012-01-08, 2012-01-15, 2012-01-22, 2012-02-02, 2012-02-04, 2012-02-05, 2012-03-31, and 2012-09-28) and another eight sessions for testing. The training sessions were selected because they cover all explored areas of the campus, and the testing sessions were chosen randomly from varying seasons.
Our hypothesis is that the learning-based method (System 1) is efficient but lacks accuracy, while the filter-based method (System 2) is precise but computationally intensive. By combining the two systems, efficient and precise localisation can be achieved.
IV-A Deep Bayesian localisation evaluation
In this experiment, we evaluate the proposed deep probabilistic localisation and the learned uncertainties. The training of the neural network has two stages. In the first stage, we train the network with the positional loss, angular orientation loss, and auxiliary loss. Here the ResNet stack weights are transplanted from a model pretrained on ImageNet. We use the Adam optimizer and train for 200 epochs with an initial learning rate of 10 with exponential decay of 0.95, clipping the gradient at 5.0 and the learning rate at 10. In the second stage, we use the same rotational loss and the last-layer feature for position prediction to build the kernel of the Gaussian Process. We use 350 inducing points ($m$ in Eq. 6) to estimate the variational distribution that approximates the prior for the whole dataset. More specifically, we transplant all the weights of the first stage to the deep GP model, freeze the weights of the five ResNet stacks, and optimise the parameters of the GP, the fully-connected layers, and the context stack layers jointly by maximising the log likelihood. In this stage, an initial learning rate of 10 with exponential decay of 0.95 is used, and the model is trained for another 100 epochs. A discount of 0.1 and 0.01 on the learning rate is applied to the fully-connected layers and context stack layers, respectively. Our implementation is based on the TensorFlow and GPflow (https://github.com/GPflow/GPflow) toolboxes. We use an i7 desktop with an NVIDIA TITAN X GPU for training; the whole training process takes five days.
IV-A2 Evaluation Criteria
In this experiment, we follow the criteria used in prior work to evaluate the deep localisation. In particular, we use the positional error to measure the 3D position estimate, and the angular distance between the predicted and ground-truth quaternions to measure the rotational error. For large-scale localisation, the median errors are the more robust statistics. We evaluated our deep localisation model on the eight test sessions; results both for individual days and over the whole test set are provided in Table I.
As shown in Table I, we achieve an overall median positional error of 2.18 m and median angular error of 3.65° over eight sessions spanning one year. The median errors increase gradually from February to December, from 1.74 m to 3.59 m, which can be attributed to environmental changes (i.e. plants and construction work) and newly explored trajectories. Nevertheless, the reported performance shows satisfactory robustness to weather and seasonal variation, which demonstrates the capability of our model for long-term localisation.
Table II: Global localisation success rate and localisation time.

| Method | Success rate (%) | Localisation time [s] |
|---|---|---|
| Hybrid - GP Cov (ours) | | |
| Hybrid - Fixed Cov (ours) | | |
| NDT-MCL with uniform initialisation | | |
IV-A3 Uncertainty Evaluation
We further evaluate the predictive probabilities of our proposed model. The ability to predict locations with uncertainties is the advantage of non-parametric models, e.g. Gaussian Processes, over parametric deep neural networks. In the hybrid method, a well-modelled predictive probability distribution can accelerate the convergence of the particle filter, and can also suppress ill-posed localisation due to false positives.
To evaluate the uncertainties, we divide the magnitude of the predictive uncertainties (the $\ell_2$ norm of the variances in the x, y, and z directions of the positional prediction) into uniform intervals and calculate the mean and median errors within each interval. The histogram of predictions is also counted. The statistical results are shown in Fig. 3. We find that both the positional and rotational errors grow with the magnitude of the uncertainty. Most of the predictions (85%) fall into the first four bins, i.e. the magnitude of uncertainty is less than 20. Within this uncertainty range (0-20 in magnitude), the proposed model achieves positional errors of less than 4.3 m mean and 2.0 m median, and rotational errors of less than 2.7° mean and 1.7° median.
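The binning procedure behind Fig. 3 can be reproduced with a short helper. The function name and the default bin width are illustrative, not the paper's exact choices:

```python
import numpy as np

def error_by_uncertainty(errors, variances, bin_width=5.0):
    """Group predictions into uniform bins of uncertainty magnitude
    (L2 norm of the per-axis variances) and report per-bin medians."""
    mag = np.linalg.norm(variances, axis=1)
    bins = (mag // bin_width).astype(int)
    return {int(b): float(np.median(errors[bins == b])) for b in np.unique(bins)}
```

A well-calibrated model shows per-bin medians that increase monotonically with the bin index, which is the trend reported above.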
IV-B Monte-Carlo Localisation Baseline Evaluation
For large-scale environments like the Michigan campus, the large number of particles required for naive MCL initialisation makes MCL intractable. For that reason, we restricted sampling to the previously explored positions from the training dataset. Specifically, for each explored position, points were sampled from a voxel grid around it; a finer resolution would make the number of particles unmanageable. Nearby points were then filtered using a voxel filter of the same resolution. For each remaining point, 8 pose particles were created with evenly spaced orientations around the z-axis. During the resampling step, the number of particles was reduced by a fixed fraction until a target number of particles remained.
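This restricted initialisation can be sketched as follows, assuming a hypothetical voxel spacing; the function name and defaults are illustrative:

```python
import numpy as np

def baseline_particles(explored_xy, spacing=1.0, n_yaw=8):
    """Seed particles only around previously explored positions:
    snap positions to a voxel grid (deduplicating nearby points),
    then create n_yaw particles per position with evenly spaced
    headings around the z-axis. spacing is an assumed voxel size."""
    snapped = np.unique(np.round(np.asarray(explored_xy) / spacing), axis=0) * spacing
    yaws = np.arange(n_yaw) * 2 * np.pi / n_yaw
    return np.array([[x, y, yaw] for x, y in snapped for yaw in yaws])
```

Even with this restriction, the particle count grows linearly with the length of the explored trajectories, which is why the grid resolution cannot be made much finer.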
In total, over 4000 localisation attempts (with initial locations chosen uniformly from the 8 sessions) were performed using our two methods and the MCL baseline. An attempt was considered successful if the position error dropped below a threshold (in metres) within a bounded number of iterations. The success rate and average localisation time of the MCL baseline are shown in Table II.
IV-C Hybrid Localisation Evaluation
Instead of initialising particles at all possible locations, we use the proposed deep probabilistic method to seed the MCL (Eq. 9) and continue to update the samples following Algorithm 1. Specifically, we used a nominal number of 500 particles, increasing up to 1000 as additional particles were sampled and reducing back to 500 during the resampling step. Using only a small number of particles with the fast, sparse NDT-based measurement likelihood model, we achieved an average iteration time of 0.073 s (±0.02 s). To investigate the benefit of sampling from the uncertainty estimates, we compared it to sampling from a position distribution with fixed covariance. Orientations were sampled from a fixed distribution in both cases; these parameters were chosen based on practical experience. The localisation success rate and speed are shown in Table II and Fig. 5.
We found a median and mean localisation time of 0.799 s and 1.944 s, respectively, for the hybrid approach with covariance estimated by the GP. Compared to the MCL baseline, the median localisation time is greatly reduced. Similar to the evaluation in Sec. IV-A2, the highest success rate was obtained in the early months of the year, gradually decreasing over the year. We observed that in November and December the localisation time increased significantly, because there are more novel trajectories and landmarks.
This paper proposes a hybrid probabilistic localisation method which leverages the efficient inference of the deep deterministic model and the rigorous geometric verification of Bayes-filter-based localisation. The paper resolves the non-conjugacy between the Gaussian method (Gaussian Process) and the non-Gaussian method (Monte-Carlo localisation) through importance sampling. Consequently, the two systems can be integrated seamlessly.
From the experiments, we found that the learning-based localisation method can provide an optimised predictive distribution to seed MCL, thereby accelerating the convergence of the particles. On the other hand, false positives can be suppressed by the correctly modelled uncertainties during continuous localisation. The experimental results show that the hybrid system localises in less time than the Monte-Carlo baseline (NDT-MCL) and increases the precision to centimetre level, meeting the needs of large-scale real-world localisation problems.
We thank NVIDIA Co. for donating a high-power GPU on which this work was performed. This project has received funding from the EU Horizon 2020 under grant agreement No 732737 (ILIAD) and EPSRC Programme Grant EP/M019918/1.
-  (2017) Detection of kidnapped robot problem in Monte Carlo localization based on the natural displacement of the robot. International Journal of Advanced Robotic Systems 14 (4), pp. 1729881417717469. Cited by: §I.
-  (2018) A hybrid probabilistic model for camera relocalization.. In BMVC, Vol. 1, pp. 8. Cited by: §II-B.
-  (2015) University of Michigan North Campus long-term vision and lidar dataset. International Journal of Robotics Research 35 (9), pp. 1023–1035. Cited by: §IV.
-  (2018) DELIGHT: an efficient descriptor for global localisation using lidar intensities. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 3653–3660. Cited by: §II-A, §III-B1.
-  (2020) SegMap: segment-based mapping and localization using data-driven descriptors. The International Journal of Robotics Research 39 (2-3), pp. 339–355. Cited by: §II-A.
-  (2012) Are we ready for autonomous driving? The KITTI vision benchmark suite. In Conference on Computer Vision and Pattern Recognition (CVPR). Cited by: §IV.
-  (2013) Observation-driven bayesian filtering for global location estimation in the field area. Journal of Field Robotics 30 (4), pp. 489–518. Cited by: §I, §II-A.
-  (2013) Gaussian processes for big data. In Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence, pp. 282–290. Cited by: §III-B3.
-  (2016) Modelling uncertainty in deep learning for camera relocalization. In 2016 IEEE international conference on Robotics and Automation (ICRA), pp. 4762–4769. Cited by: §I, §II-B.
-  (2017) Geometric loss functions for camera pose regression with deep learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5974–5983. Cited by: §I, §II-B.
-  (2015) Posenet: a convolutional network for real-time 6-dof camera relocalization. In Proceedings of the IEEE international conference on computer vision, pp. 2938–2946. Cited by: §I, §II-B, §IV-A2.
-  (2015-09) Where am I?: an NDT-based prior for MCL. In Proceedings of the European Conference on Mobile Robots (ECMR), Cited by: §I, §I, §II-A.
-  (2009) Automatic appearance-based loop detection from three-dimensional laser data using the normal distributions transform. Journal of Field Robotics 26 (11-12), pp. 892–914. Cited by: §II-A.
-  (2004) Map-based priors for localization. In IEEE/RSJ International Conference on Intelligent Robots and Systems(IROS)., Vol. 3, pp. 2179–2184. Cited by: §II-A.
-  (2018) Vlocnet++: deep multitask learning for semantic visual localization and odometry. IEEE Robotics and Automation Letters 3 (4), pp. 4407–4414. Cited by: §I.
-  (2010) 3D mapping with multi-resolution occupied voxel lists. Autonomous Robots 28 (2), pp. 169. Cited by: §I, §II-A.
-  (2013) Normal distributions transform Monte-Carlo localization (NDT-MCL). In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 382–389. Cited by: §III-C.
-  (2019-06) From coarse to fine: robust hierarchical localization at large scale. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §II-B.
-  (2018) Leveraging deep visual descriptors for hierarchical efficient localization. In Conference on Robot Learning, pp. 456–465. Cited by: §II-B.
-  (2019) Understanding the limitations of cnn-based absolute camera pose regression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3302–3312. Cited by: §III-B2.
-  (2015) IRON: a fast interest point descriptor for robust NDT-map matching and its application to robot localization. In IEEE/RSJ International Conference on Intelligent Robots and Systems, Cited by: §II-A.
-  (2017) Detecting and solving the kidnapped robot problem using laser range finder and wifi signal. In 2017 IEEE international conference on real-time computing and robotics (RCAR), pp. 303–308. Cited by: §II-A.
-  (2005) Probabilistic robotics. MIT press. Cited by: §III-C.
-  (2000) Monte Carlo localization with mixture proposal distribution. In AAAI/IAAI, pp. 859–865. Cited by: §II-A.
-  (2013-05) Geometrical FLIRT phrases for large scale place recognition in 2D range data. In 2013 IEEE International Conference on Robotics and Automation, pp. 2693–2698. Cited by: §II-A.
-  (2009) Variational learning of inducing variables in sparse Gaussian processes. In Artificial Intelligence and Statistics, pp. 567–574. Cited by: §III-B3, §III-B3.
-  (2004) Expansion resetting for recovery from fatal error in Monte Carlo localization-comparison with sensor resetting methods. In 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vol. 3, pp. 2481–2486. Cited by: §I.
-  (2018) Deep auxiliary learning for visual localization and odometry. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 6939–6946. Cited by: §I, §II-B.
-  (2015) Fast lidar localization using multiresolution gaussian mixture maps. In 2015 IEEE international conference on robotics and automation (ICRA), pp. 2814–2821. Cited by: §I.
-  (2005) A grid-based proposal for efficient global localisation of mobile robots. In Proceedings.(ICASSP’05). IEEE International Conference on Acoustics, Speech, and Signal Processing, 2005., Vol. 5, pp. v–217. Cited by: §I, §II-A.