I Introduction
For large-scale robotic applications in GPS-denied environments – such as indoor industrial facilities, underground mines, or space – efficient and precise lidar-based robot localisation is in high demand. Geometry-based methods such as global registration [16, 29] and particle filters [27, 30, 7, 12, 1] are widely used both to rescue a 'kidnapped' robot and to continuously localise the robot. However, the computational effort of these methods increases monotonically with the size of the environment. Deep learning methods [11, 9, 10, 28, 15]
are emerging in image-based relocalisation as a pivotal precursor that directly estimates the 6-DOF pose with a deep regression neural network trained for a specific environment. These learning methods are scalable, as the computation time depends only on the complexity of the neural network. However, without geometric verification, they are likely to be less precise than geometry-based methods.
Using conventional filtering-based methods, e.g. Monte Carlo localisation (MCL), the pose estimate is updated recursively with each new observation. This tends to be very robust and leads to precise localisation, although when there is a large error in the initial estimate, it can take a long time for the filter to converge – on the scale of minutes even for modest-sized maps [12]. To mitigate this limitation, our intuition is to combine deep localisation and MCL.
The first contribution of this paper is a hybrid approach for lidar-based global localisation in large-scale environments. Our approach integrates a learning-based method with a Markov-filter-based method, which makes it possible to provide efficient, global, and continuous localisation with centimetre precision. A second contribution is a deep nonparametric model, i.e. a Gaussian Process whose kernel is built from deep neural networks, used to estimate the distribution of the global 6-DOF pose from superimposed lidar scans. As a core component, the estimated distribution can be used to seed the Monte Carlo localisation, thereby seamlessly integrating the two systems. A video demo is available at:
https://youtu.be/5wQXrpzxHNk

II Related Work
II-A Lidar-based global localisation
There exists a large body of research dealing with lidar-based localisation. The work most relevant to the scope of this paper can be broadly divided into methods that provide an informed initialisation of MCL, and appearance descriptors that aim to provide 'one-shot' localisation.
While MCL in principle is robust and, by using a multimodal belief distribution, lends itself also to global localisation with a very uncertain initial estimate of the robot's location, a naive initialisation using a uniform particle distribution does not scale well to maps of realistic size. Some authors [16, 30] alleviate the problem by working with a multi-resolution grid over the map. This makes it possible to scale to slightly larger maps, but the initialisation remains close to a uniform distribution and does not make use of the learned appearance of the map, as in our work. Another strategy is to design a distribution from observations and sample particles from that
[24, 7, 12]. Another possibility is to use external information, such as wifi beacons [22] or a separate map layer that governs the prior probability of finding robots in each part of the map
[14]. In contrast, we work directly on point cloud data. None of the above methods have been evaluated on maps as large as those in our experiments.

Several engineered appearance descriptors have also been proposed for 2D [25] and 3D [13, 21, 4] lidar-based global localisation. SegMap [5] learns instance-level descriptors on point cloud segments for 3D lidar loop-closure detection. These methods all aim to provide 'one-shot' global localisation, but require a linear search over all descriptors created from the map. Even if a single descriptor lookup is very quick, for large-scale long-term localisation the linear scaling factor is still a major drawback compared to the method proposed in Sec. III-B.
II-B Deep image-based global localisation
Image localisation is the task of accurately estimating the location and orientation of an image with respect to a global map, and has been studied extensively in both robotics and computer vision. PoseNet [11] and related approaches [9, 10] have initiated a new trend of estimating the 6-DOF global pose with deep regression neural networks. In [10], the geometric reprojection error, i.e. from the reprojection of the 3D reconstruction to the image frame, is jointly optimised with the global pose loss during training. More recently, Valada et al. [28] proposed a geometric consistency loss to learn a spatially consistent global representation, using a relative pose loss as an auxiliary. Some researchers investigate the predictive uncertainties of the deep model [9, 2], but these uncertainties are not further utilised to improve the localisation.

Global loop-closure detection can be combined with local feature matching as a hierarchical approach to visual relocalisation [19, 18]. In these approaches, deep global descriptors learned as a location signature are used to shortlist possible locations in a large-scale environment; the precise 6-DOF pose can then be estimated by 2D-to-3D matching between the retrieved keyframe candidates and the map. With geometric verification, the localisation precision can be remarkably improved.
III Methodology
III-A Problem Formulation
In the global localisation problem, given the observation $z_t$ and the 6-DOF robot pose $x_t$ at time $t$, the goal is to estimate the posterior $p(x_t \mid z_t)$. If the robot is moving, and a sequence of observations $z_{1:t}$ and control inputs $u_{1:t}$ (e.g. odometry) are available, the a-posteriori pose becomes $p(x_t \mid z_{1:t}, u_{1:t})$. Our proposed method has two systems working together to estimate this posterior.
System 1 – Efficient global localisation can be formulated through a learning-based method. We aim to obtain a pose estimate from a single observation using a fast deterministic model. In order to train the deterministic model, the observations and poses used to build the map, i.e. $\{(z_i, x_i)\}$, are provided as training examples. The problem can be formulated as estimating the conditional probability $p(x_t \mid z_t)$. Parametric models (e.g. neural networks) or Gaussian methods (e.g. Gaussian Processes) can be used to solve this.
System 2 – Precise localisation can be formulated through a Markov-filter-based method. The a-posteriori belief of the robot pose can be updated iteratively as a Bayes filter while the robot is moving. With the motion model $p(x_t \mid x_{t-1}, u_t)$, the belief is first propagated as $\overline{bel}(x_t) = \int p(x_t \mid x_{t-1}, u_t)\, bel(x_{t-1})\, dx_{t-1}$, and given the measurement model $p(z_t \mid x_t)$, the belief is then updated as $bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)$.
To integrate the two systems, importance sampling can be used to update the particles generated by System 1 using System 2. In particular, we propose to draw particles from $p(x_t \mid z_t)$ in System 1, update the belief in System 2, and maintain the particles in a healthy and converging distribution through importance sampling.
III-B System 1: efficient localisation using a deep probabilistic model
To learn a deterministic model that efficiently predicts the 6-DOF pose of the robot, a natural idea is to use a parametric statistical model such as a deep neural network. A bonus is that deep models can learn site-specific features through backpropagation. However, it is also important to model the predictive probability $p(x_t \mid z_t)$, from which robot pose hypotheses can be generated efficiently. Our intuition is to use a Gaussian Process to estimate this conditional distribution.
III-B1 Observation for learning
Using dense point clouds has proven effective in robot localisation [4]. To acquire a dense point cloud from a sparse lidar such as a Velodyne HDL-32E, without using extra mechanical devices, we first superimpose $k$ frames of consecutive observations at time $t$ using odometry:

$$\mathcal{P}_t = \bigcup_{i=t-k+1}^{t} T_i^t P_i \qquad (1)$$

where $T_i^t$ is the relative transformation between the pose at frame $i$ and that at frame $t$. $T_i^t$ can be obtained by fusing wheel odometry and inertial sensors (IMU and gyro), or from lidar odometry. The superimposed point cloud can be converted to a height map and further encoded as a grayscale bird's-eye-view image, as shown in Fig. 1. In the learning of the deep probabilistic localisation, we use the bird's-eye-view image of the superimposed point cloud (denoted $I_t$) as the observation at time $t$.
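To illustrate the bird's-eye-view encoding step, here is a minimal numpy sketch that rasterises a point cloud into a maximum-height image. The `extent` and `resolution` values are placeholders for illustration, not the paper's exact configuration, and the function name is ours:

```python
import numpy as np

def birds_eye_height_image(points, extent=50.0, resolution=0.5):
    # Rasterise an (N, 3) point cloud into a grayscale bird's-eye-view
    # image where each pixel stores the maximum z value in its x-y cell.
    size = int(2 * extent / resolution)   # e.g. 100 m scope at 0.5 m/px -> 200 px
    img = np.zeros((size, size))
    ix = ((points[:, 0] + extent) / resolution).astype(int)
    iy = ((points[:, 1] + extent) / resolution).astype(int)
    ok = (ix >= 0) & (ix < size) & (iy >= 0) & (iy < size)
    np.maximum.at(img, (iy[ok], ix[ok]), points[ok, 2])  # max height per cell
    return img
```

In practice the resulting height map would be normalised to a grayscale range before being fed to the network.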
III-B2 Deep Neural Network for Feature Learning
A deep neural network is used to learn site-specific features by regressing 6-DOF global poses. Specifically, we first use five convolutional stacks of a pre-trained ResNet-50 model as the backbone; the network is then divided into two branches, and triple-layer MLPs are used to learn the three-dimensional position $\mathbf{p}$ and the four-dimensional rotation (quaternion) $\mathbf{q}$, respectively. We propose the following loss function for simultaneously minimising the positional and rotational losses:
$$\mathcal{L} = \|\hat{\mathbf{p}} - \mathbf{p}\|_2 + \beta \left(1 - \langle \hat{\mathbf{q}}, \mathbf{q} \rangle^2\right) \qquad (2)$$
Here, $\langle \hat{\mathbf{q}}, \mathbf{q} \rangle$ is the inner product of the predicted and ground-truth quaternions. The second term indicates the distance between the two normalised quaternion vectors.
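A minimal sketch of such a combined loss, assuming an L2 positional term and the quaternion inner-product distance described above; the weighting factor `beta` and the function names are our assumptions, not the paper's exact formulation:

```python
import numpy as np

def quaternion_distance(q_pred, q_true):
    # 1 - <q_pred, q_true>^2 is zero iff the rotations coincide
    # (it is invariant to the sign ambiguity q ~ -q of unit quaternions).
    q_pred = q_pred / np.linalg.norm(q_pred)
    q_true = q_true / np.linalg.norm(q_true)
    return 1.0 - np.dot(q_pred, q_true) ** 2

def pose_loss(p_pred, p_true, q_pred, q_true, beta=1.0):
    # Positional L2 term plus weighted rotational term.
    return np.linalg.norm(p_pred - p_true) + beta * quaternion_distance(q_pred, q_true)
```

The squared inner product conveniently handles the fact that $q$ and $-q$ encode the same rotation.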
Given a pair of bird's-eye-view images from times $t_1$ and $t_2$, the predicted global poses $\hat{T}_{t_1}$ and $\hat{T}_{t_2}$, and the ground-truth poses $T_{t_1}$ and $T_{t_2}$, we can calculate the relative transforms from the predictions and the ground truth and make them geometrically consistent:
$$\hat{T}_{t_1}^{t_2} = \hat{T}_{t_2}^{-1}\,\hat{T}_{t_1}, \qquad T_{t_1}^{t_2} = T_{t_2}^{-1}\,T_{t_1} \qquad (3)$$
More specifically, $T$ ($\hat{T}$) is the transform matrix which can be obtained from the translation $\mathbf{p}$ ($\hat{\mathbf{p}}$) and rotation $\mathbf{q}$ ($\hat{\mathbf{q}}$) at each time step. We convert the transform matrices back to pose vectors to compute the positional and rotational losses. We find that, with the assistance of the geometric consistency loss, the neural network can learn spatially consistent features and constrain the search space of global localisation, thereby enhancing the robustness of global pose estimation.
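The relative-transform consistency check can be sketched as follows, using homogeneous 4×4 matrices; comparing the matrices by Frobenius norm is a simplification of the paper's conversion back to pose vectors, and the function names are ours:

```python
import numpy as np

def relative_transform(T_a, T_b):
    # Relative motion between two absolute poses: T_ab = inv(T_b) @ T_a.
    return np.linalg.inv(T_b) @ T_a

def geometric_consistency_error(T1_pred, T2_pred, T1_gt, T2_gt):
    # The predicted relative motion should match the ground-truth one;
    # the Frobenius norm of the difference measures the discrepancy.
    return np.linalg.norm(
        relative_transform(T1_pred, T2_pred) - relative_transform(T1_gt, T2_gt)
    )
```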
As mentioned in [20], a learning-based model works well near training trajectories but might struggle elsewhere in the mapped region. Hence we augment the training data in a region of 12.5 m to improve the performance.¹ For the geometry consistency learning, we randomly pick paired images within a window of ten frames ([1, 10]).

¹The bird's-eye-view image of size 400×400 is generated from the superimposed local point cloud in a visual scope of 100×100×10 metres with a resolution of 0.4 metres per pixel. We randomly crop a 300×300 image for training and apply the corresponding translational offset to the target pose.
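The crop-and-offset augmentation might look like the following sketch; the sign convention for the pose offset and the helper name are our assumptions:

```python
import numpy as np

def random_crop_with_pose_offset(img, pose_xy, crop=300, res=0.4, rng=None):
    # Crop a random square window and shift the 2D target pose by the
    # metric offset of the crop origin (res metres per pixel).
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape
    oy = rng.integers(0, h - crop + 1)
    ox = rng.integers(0, w - crop + 1)
    cropped = img[oy:oy + crop, ox:ox + crop]
    new_pose = pose_xy - np.array([ox, oy]) * res  # assumed sign convention
    return cropped, new_pose
```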
The proposed deep neural network can learn site-specific and spatio-temporally consistent features. However, the inference (prediction) is not fully probabilistic under a point-estimate regression loss. The drawback is that predictive uncertainties cannot be provided, i.e., the neural network cannot give the predictive distribution of the robot pose. In our two-system framework, the uncertainty of Bayesian localisation is of critical importance: an appropriate predictive distribution can accelerate the convergence of the MCL by giving small variances and, at the same time, suppress the effects of false positives by predicting large variances. To mitigate this, we adopt Gaussian process regression as the basis of the deep localisation network. That is, we propose a hybrid probabilistic regression method in which the deep neural network provides the deep kernel of a Gaussian Process.
III-B3 Gaussian Process with Deep Kernel
Given the training observations $X$, the learning targets (i.e. poses) $Y$, the latent variables $f$, and a test example $x_*$, the prediction step of Gaussian process regression involves inferring the conditional probability of the latent variable $f_*$ of the test example as:
$$p(f_* \mid Y) = \int p(f_* \mid f)\, p(f \mid Y)\, df \qquad (4)$$
This conventional inference formula is not scalable, as the computation grows cubically with the number of training examples $N$, i.e. $\mathcal{O}(N^3)$. Instead of using all the training examples for the prior, a reduced set of $M$ examples $Z \in \mathbb{R}^{M \times D}$ (known as the inducing points) is used to approximate the whole training set, where $D$ is the dimension of the feature and $M \ll N$. Given the latent variables of the inducing points, $u$, the posterior distribution can be estimated by a variational distribution $q(u)$ (modelled as a multivariate Gaussian, $q(u) = \mathcal{N}(m, S)$). Via variational inference, the inducing points can be estimated by maximising the evidence lower bound (ELBO) of the log marginal likelihood of $Y$ [26]:
$$\log p(Y) \geq \mathbb{E}_{q(f)}\left[\log p(Y \mid f)\right] - \mathrm{KL}\left[\,q(u)\,\|\,p(u)\,\right] \qquad (5)$$
In this formula, the first term is the predictive likelihood and KL refers to the Kullback-Leibler divergence. Titsias [26] derives the final formula for the optimal inducing points and the mean and variance of $q(u)$. To make the training further scalable, we train an SVI-GP [8] (stochastic variational inference Gaussian process) on mini-batch data.
With the optimised inducing points $Z$ and the variational distribution $q(u)$, the predictive probability in Eq. 4 can be reformulated as:
$$p(f_* \mid Y) \approx \int p(f_* \mid u)\, q(u)\, du \qquad (6)$$
We use a shared RBF kernel for multi-output prediction, and the kernel is constructed from deep neural network features. To be more specific, the feature of the last layer (shown in Fig. 2) is used as the input of the GP. By this means, the parametric neural network can be integrated with the nonparametric Gaussian Process.
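A toy numpy sketch of the sparse predictive mean in Eq. 6, using an RBF kernel over (here deterministic) feature vectors; it omits the predictive variance and all GPflow machinery for brevity, and the function names are ours:

```python
import numpy as np

def rbf(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel matrix between two sets of feature rows.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def svgp_predict_mean(X_test, Z, m):
    # Sparse variational GP predictive mean: mu_* = K_{*Z} K_{ZZ}^{-1} m,
    # where Z are the inducing inputs and m is the mean of q(u) = N(m, S).
    Kzz = rbf(Z, Z) + 1e-6 * np.eye(len(Z))  # jitter for numerical stability
    return rbf(X_test, Z) @ np.linalg.solve(Kzz, m)
```

Evaluating at the inducing inputs themselves recovers (approximately) the variational mean, which is a quick sanity check on the algebra.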
More importantly, through maximising the log marginal likelihood, the inducing points $Z$, the variational distribution $q(u)$, the hyperparameters of the kernel, and the parameters of the deep neural network can be jointly optimised by simple backpropagation. As the GP is very sensitive to the kernel parameters, to avoid getting stuck in local minima we train in two stages: we first train the deep neural network, i.e. the feature extractor of the kernel, using the translational and rotational losses; then the GP is trained end-to-end with the deep kernels.
It is worth noting that we only train the Gaussian Process regression for positioning; the angular distance loss function is still used to learn the rotation. This is for two reasons: firstly, the inherent normalisation property of the quaternion cannot be leveraged in the Gaussian Process via maximising the log likelihood; secondly, the predictive uncertainty of rotation is less important than that of position in large-scale localisation (in other words, rotational predictions depend on positional predictions). Qualitative results of the predictive distributions are shown in Fig. 4.
III-C System 2: precise localisation using MCL
For System 2 (MCL), we use a reference 3D map built using poses from RTK-GPS (for training) and represent the map with the Normal Distributions Transform (NDT) for memory efficiency. This step provides the measurement model $p(z_t \mid x_t)$ in the problem formulation.

During localisation, with the motion model $p(x_t \mid x_{t-1}, u_t)$, the belief can be updated as
$$bel(x_t) = \eta\, p(z_t \mid x_t) \int p(x_t \mid x_{t-1}, u_t)\, bel(x_{t-1})\, dx_{t-1} \qquad (7)$$
with the measurement model $p(z_t \mid x_t)$ [23]. This integral can be approximated via resampling with importance sampling, i.e. a particle filter. The a-posteriori belief estimate is updated as
$$bel(x_t) \approx \sum_{i=1}^{N} w_t^i\, \delta(x_t - x_t^i) \qquad (8)$$
where $w_t^i$ denotes the importance weight of sample $x_t^i$, and the measurement model can be obtained by calculating the distance between the lidar distribution and the map distribution under the NDT representation [17].
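The importance-weighted particle update can be sketched with systematic resampling, one common low-variance scheme (the paper does not specify which resampling scheme is used):

```python
import numpy as np

def systematic_resample(particles, weights, rng):
    # Draw N particles proportionally to their importance weights using
    # a single random offset and N evenly spaced pointers (low variance).
    n = len(particles)
    w = np.asarray(weights) / np.sum(weights)
    positions = (rng.random() + np.arange(n)) / n
    return particles[np.searchsorted(np.cumsum(w), positions)]
```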
Conventional methods usually first initialise a temporary particle distribution that approximates the belief. However, the belief can be estimated effectively from the current observation using the learning-based method, hence providing a parametric way to initialise the belief as
$$bel(x_t) \approx p(x_t \mid z_t) \qquad (9)$$
where the conditional probability $p(x_t \mid z_t)$ can be estimated by the Gaussian Process using Eq. 6.
In practice, the particles come from two sources in our implementation: particles drawn from the GP's predictive distribution, and particles resampled from the previous belief set. Through the importance sampling mechanism, the particles from the deep learning estimation and the particles from MCL can be integrated, thereby integrating the two systems. A detailed description is given in Algorithm 1.
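A compact sketch of building the two-source particle set, assuming the GP prediction is summarised by a mean and covariance; the names and the multinomial resampling choice are ours, not Algorithm 1 verbatim:

```python
import numpy as np

def seed_and_merge(gp_mean, gp_cov, prev_particles, prev_weights, n_new, rng):
    # Fresh particles from the GP predictive Gaussian (System 1) are merged
    # with particles resampled from the previous belief set (System 2).
    fresh = rng.multivariate_normal(gp_mean, gp_cov, size=n_new)
    w = np.asarray(prev_weights) / np.sum(prev_weights)
    idx = rng.choice(len(prev_particles), size=len(prev_particles), p=w)
    return np.vstack([fresh, prev_particles[idx]])
```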
IV Experiment
Table I: Localisation errors of the deep probabilistic model on the NCLT testing sessions.

Metrics | Feb | April | May | June | Aug | Oct | Nov | Dec | Overall
median translational error | 1.74m | 1.69m | 2.02m | 1.99m | 2.13m | 2.14m | 3.98m | 3.59m | 2.18m
median rotational error | 3.25° | 3.36° | 3.34° | 3.17° | 3.66° | 3.67° | 5.12° | 4.72° | 3.65°
mean translational error | 8.77m | 2.88m | 15.3m | 11.57m | 14.06m | 17.33m | 30.15m | 32.08m | 16.55m
mean rotational error | 6.19° | 4.43° | 9.50° | 7.96° | 8.52° | 11.58° | 17.98° | 14.51° | 4.99°
number of frames | 33K | 14K | 26K | 19K | 27K | 30K | 14K | 25K | 184K
Our research focuses on long-term localisation using 3D lidar data. In order to evaluate our proposed approach, a long-term mapping dataset with ground truth is required. We did not choose the KITTI dataset [6], as the amount of overlapping trajectories is not sufficient for training. To the best of our knowledge, the Michigan North Campus Long-Term Vision and Lidar (NCLT) dataset [3] is the only long-term multi-session dataset currently available for lidar mapping and relocalisation. The dataset consists of 27 sessions with varying routes and days over 15 months. In each session, a Segway robot is driven via joystick to traverse the campus, and multi-sensor data including wheel odometry, 3D lidar, IMU, gyro, etc. are recorded. Ground-truth pose data are obtained by fusing lidar scan matching and high-precision RTK-GPS. The whole dataset spans 34.9 h and 147.4 km. More details can be found in [3].
Since the learning-based method requires training examples and the filter-based method needs a pre-built map, we selected eight sessions² for training and another eight sessions for testing. We selected the training sessions because they cover all explored areas of the campus, and the testing sessions were chosen randomly from varying seasons.

²Training sessions are 2012-01-08, 2012-01-15, 2012-01-22, 2012-02-02, 2012-02-04, 2012-02-05, 2012-03-31, and 2012-09-28.
Our hypothesis is that the learningbased method (System 1) is efficient but lacks accuracy, while the filterbased method (System 2) is precise but computationally intensive. By combining the two systems, efficient and precise localisation can be achieved.
IV-A Deep Bayesian localisation evaluation
IV-A1 Training
In this experiment, we evaluate the proposed deep probabilistic localisation and the learned uncertainties. The training of the neural network has two stages. Firstly, we train the network with the positional loss, the angular orientation loss, and the auxiliary loss. Here the ResNet stack weights are transplanted from a model pre-trained on ImageNet. In this stage, we use the Adam optimizer and train for 200 epochs with an initial learning rate of 10 with exponential decay of 0.95. We clip the gradient by 5.0 and the learning rate by 10. In the second stage, we use the same rotational loss, and the last-layer feature for position prediction is used to build the kernel of the Gaussian Process. We use 350 inducing points ($M$ in Eq. 6) to estimate the variational distribution that approximates the prior for the whole dataset. More specifically, we transplant all the weights of the first stage to the deep GP model. We freeze the weights of the five ResNet stacks and optimise the parameters of the GP, the fully-connected layers, and the context stack layers jointly by maximising the log likelihood. In this stage, an initial learning rate of 10 with exponential decay of 0.95 is used, and the model is trained for another 100 epochs. A discount of 0.1 and 0.01 on the learning rate is applied to the fully-connected layers and context stack layers, respectively. Our implementation is based on the TensorFlow and GPflow³ toolboxes. We use an i7 desktop with an NVIDIA TITAN X GPU for training, and the whole training process takes five days.

³https://github.com/GPflow/GPflow

IV-A2 Evaluation Criteria
In this experiment, we follow the criteria used in [11] to evaluate the deep localisation. In particular, we use the positional error to measure the 3D position estimation, and the angular distance between the predicted and ground-truth quaternions to measure the rotational error. In this evaluation, the median errors are the more robust statistics for large-scale localisation. We evaluated our deep localisation model on eight testing sessions; results on individual days and over the whole testing set are provided in Table I.
As shown in Table I, we achieve an overall median positional error of 2.18 m and a median angular error of 3.65° over eight sessions spanning one year. The median errors increase gradually from February to December, from 1.74 m to 3.59 m, which can be attributed to environment changes (i.e. plants and construction work) and newly explored trajectories. Nevertheless, the reported performance shows satisfactory robustness to weather and seasonal variance, which demonstrates the capability of our model for long-term localisation.
Table II: Success rate (%) and localisation time [s] of the hybrid methods and the MCL baseline.

Method | Feb | April | May | June | Aug | Oct | Nov | Dec | Overall | Mean | Median
Hybrid – GP Cov (ours) | | | | | | | | | | |
Hybrid – Fixed Cov (ours) | | | | | | | | | | |
NDT-MCL with uniform initialisation | | | | | | | | | | |
IV-A3 Uncertainty Evaluation
We further evaluate the predictive probabilities of our proposed model. Predicting locations together with uncertainties is the key advantage of nonparametric models, e.g. Gaussian Processes, over parametric deep neural networks. In the hybrid method, a well-modelled predictive probability distribution can accelerate the convergence of the particle filter and can also suppress ill-posed localisation due to false positives.
To evaluate the uncertainties, we divide the magnitude of the predictive uncertainties (the norm of the variances in the x, y, and z directions of the positional prediction) into uniform intervals and calculate the mean and median errors within each interval. The histogram of predictions is also counted. The statistical results are shown in Fig. 3. We find that both the positional and the rotational error grow with the magnitude of the uncertainty. Most of the predictions (85%) fall into the first four bins, i.e. the magnitude of uncertainty is less than 20. Within this uncertainty range (0–20 in magnitude), the proposed model achieves positional errors of less than 4.3 m mean and 2.0 m median, and rotational errors of less than 2.7° mean and 1.7° median.
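The binning procedure can be sketched as follows; the bin width and count are placeholders rather than the paper's exact intervals:

```python
import numpy as np

def errors_by_uncertainty(uncert, errors, bin_width=5.0, n_bins=4):
    # Group predictions into uniform uncertainty intervals and report
    # (low, high, mean error, median error) for each non-empty interval.
    edges = np.arange(0.0, (n_bins + 1) * bin_width, bin_width)
    idx = np.digitize(uncert, edges) - 1
    stats = []
    for b in range(n_bins):
        e = errors[idx == b]
        if len(e):
            stats.append((edges[b], edges[b + 1], e.mean(), np.median(e)))
    return stats
```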
IV-B Monte Carlo Localisation Baseline Evaluation
For large-scale environments like the Michigan campus, the large number of particles required for naive MCL initialisation makes MCL intractable. For that reason, we restricted sampling to the areas around previously explored positions from the training dataset. Specifically, for each explored position, 3×3×3 points were sampled from a voxel grid. A finer resolution would make the number of particles unmanageable. Nearby points were then filtered using a voxel filter with the same resolution. For each remaining point, 8 pose particles were created with evenly spaced orientations around the z-axis. During the resampling step, the number of particles was reduced by a fixed fraction until a target number of particles remained.
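The baseline seeding scheme can be sketched as follows, assuming unit voxel spacing for illustration (the paper's voxel size is not recoverable from the text, and the function name is ours):

```python
import numpy as np

def init_particles_around(positions, spacing=1.0, n_orient=8):
    # For every explored map position, sample a 3x3x3 grid of nearby
    # points and attach n_orient evenly spaced headings to each point.
    g = np.array([-1.0, 0.0, 1.0]) * spacing
    offsets = np.array(np.meshgrid(g, g, g)).T.reshape(-1, 3)
    yaws = np.arange(n_orient) * (2.0 * np.pi / n_orient)
    particles = [[*(p + o), yaw] for p in positions for o in offsets for yaw in yaws]
    return np.array(particles)
```

Even with this restriction, one explored position already yields 27 × 8 = 216 particles, which illustrates why the naive scheme scales poorly.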
In total, over 4000 localisation attempts (initial locations uniformly chosen from the 8 testing sessions) were performed using our two methods and the MCL baseline. An attempt was considered successful if the positional error fell below the threshold (in metres) within the allowed number of iterations. The success rate and average localisation time of the MCL baseline are shown in Table II.
IV-C Hybrid Localisation Evaluation
Instead of initialising particles in all possible locations, we use the proposed deep probabilistic method to seed the MCL (Eq. 9) and continue to update the samples following Algorithm 1. Specifically, we used a nominal number of 500 particles, increasing up to 1000 as additional particles were sampled, and reducing back to 500 during the resampling step. Using only a small number of particles with the fast and sparse NDT-based measurement likelihood model, we achieved an average iteration time of 0.073 s (±0.02 s). To investigate the benefit of sampling from the uncertainty estimates, we compared against sampling from a fixed position distribution. Orientations were sampled from a fixed distribution in both cases. These parameters were chosen according to practical experience. The localisation success rate and speed are shown in Table II and Fig. 5.
We found a median and mean localisation time of 0.799 s and 1.944 s, respectively, for the hybrid approach with the covariance estimated by the GP. Compared to the MCL baseline, the median localisation time is substantially reduced. Similarly to the evaluation in Sec. IV-A2, the highest success rate was obtained in the early months of the year, gradually decreasing over the year. We observed that in November and December the localisation time increased significantly because more novel trajectories and landmarks are present.
V Conclusion
This paper proposes a hybrid probabilistic localisation method which leverages the efficient inference of a deep deterministic model and the rigorous geometric verification of Bayes-filter-based localisation. It resolves the non-conjugacy between the Gaussian method (Gaussian Process) and the non-Gaussian method (Monte Carlo localisation) through importance sampling. Consequently, the two systems can be integrated seamlessly.
From the experiments, we found that the learning-based localisation method can provide an optimised predictive distribution to seed MCL, thereby accelerating the convergence of the particles. Conversely, false positives can be suppressed by correctly modelled uncertainties during continuous localisation. The experimental results show that the hybrid system localises in less time than the Monte Carlo baseline (NDT-MCL) and increases the precision to centimetres, meeting the needs of large-scale real-world localisation problems.
Acknowledgment
We thank NVIDIA Corporation for donating the high-power GPU on which this work was performed. This project has received funding from the EU Horizon 2020 programme under grant agreement No 732737 (ILIAD) and EPSRC Programme Grant EP/M019918/1.
References
[1] (2017) Detection of kidnapped robot problem in Monte Carlo localization based on the natural displacement of the robot. International Journal of Advanced Robotic Systems 14(4).
[2] (2018) A hybrid probabilistic model for camera relocalization. In BMVC, Vol. 1, pp. 8.
[3] (2015) University of Michigan North Campus long-term vision and lidar dataset. International Journal of Robotics Research 35(9), pp. 1023–1035.
[4] (2018) DELIGHT: an efficient descriptor for global localisation using lidar intensities. In IEEE International Conference on Robotics and Automation (ICRA), pp. 3653–3660.
[5] (2020) SegMap: segment-based mapping and localization using data-driven descriptors. The International Journal of Robotics Research 39(2–3), pp. 339–355.
[6] (2012) Are we ready for autonomous driving? The KITTI vision benchmark suite. In Conference on Computer Vision and Pattern Recognition (CVPR).
[7] (2013) Observation-driven Bayesian filtering for global location estimation in the field area. Journal of Field Robotics 30(4), pp. 489–518.
[8] (2013) Gaussian processes for big data. In Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence, pp. 282–290.
[9] (2016) Modelling uncertainty in deep learning for camera relocalization. In IEEE International Conference on Robotics and Automation (ICRA), pp. 4762–4769.
[10] (2017) Geometric loss functions for camera pose regression with deep learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5974–5983.
[11] (2015) PoseNet: a convolutional network for real-time 6-DOF camera relocalization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2938–2946.
[12] (2015) Where am I? An NDT-based prior for MCL. In Proceedings of the European Conference on Mobile Robots (ECMR).
[13] (2009) Automatic appearance-based loop detection from three-dimensional laser data using the normal distributions transform. Journal of Field Robotics 26(11–12), pp. 892–914.
[14] (2004) Map-based priors for localization. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vol. 3, pp. 2179–2184.
[15] (2018) VLocNet++: deep multitask learning for semantic visual localization and odometry. IEEE Robotics and Automation Letters 3(4), pp. 4407–4414.
[16] (2010) 3D mapping with multi-resolution occupied voxel lists. Autonomous Robots 28(2), pp. 169.
[17] (2013) Normal distributions transform Monte-Carlo localization (NDT-MCL). In IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 382–389.
[18] (2019) From coarse to fine: robust hierarchical localization at large scale. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[19] (2018) Leveraging deep visual descriptors for hierarchical efficient localization. In Conference on Robot Learning, pp. 456–465.
[20] (2019) Understanding the limitations of CNN-based absolute camera pose regression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3302–3312.
[21] (2015) IRON: a fast interest point descriptor for robust NDT-map matching and its application to robot localization. In IEEE/RSJ International Conference on Intelligent Robots and Systems.
[22] (2017) Detecting and solving the kidnapped robot problem using laser range finder and wifi signal. In IEEE International Conference on Real-time Computing and Robotics (RCAR), pp. 303–308.
[23] (2005) Probabilistic robotics. MIT Press.
[24] (2000) Monte Carlo localization with mixture proposal distribution. In AAAI/IAAI, pp. 859–865.
[25] (2013) Geometrical FLIRT phrases for large scale place recognition in 2D range data. In IEEE International Conference on Robotics and Automation, pp. 2693–2698.
[26] (2009) Variational learning of inducing variables in sparse Gaussian processes. In Artificial Intelligence and Statistics, pp. 567–574.
[27] (2004) Expansion resetting for recovery from fatal error in Monte Carlo localization: comparison with sensor resetting methods. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vol. 3, pp. 2481–2486.
[28] (2018) Deep auxiliary learning for visual localization and odometry. In IEEE International Conference on Robotics and Automation (ICRA), pp. 6939–6946.
[29] (2015) Fast lidar localization using multiresolution Gaussian mixture maps. In IEEE International Conference on Robotics and Automation (ICRA), pp. 2814–2821.
[30] (2005) A grid-based proposal for efficient global localisation of mobile robots. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Vol. 5, pp. v-217.