I Introduction
State estimation techniques for mobile robotics have predominantly been formulated in discrete time. While discrete-time techniques are sufficient for many applications, they are not ideal for high-rate sensors that take measurements continuously along a trajectory (e.g., scanning-while-moving lidars), or for a combination of asynchronous sensors. Continuous-time estimation techniques are much more suitable in these cases, since measurements can be incorporated at any time along the trajectory without needing to include an additional state at every measurement time. Moreover, continuous-time techniques have the advantage that the posterior estimates can be queried at any time along the trajectory, not just at measurement times.
Continuous-time estimation techniques can be categorized into two types: parametric and nonparametric. Parametric approaches typically represent the trajectory using a finite set of temporal basis functions. Our work focuses on the nonparametric approach, in which the trajectory is represented as a one-dimensional Gaussian process (GP), with time as the independent variable. While model fidelity in parametric approaches is affected by choices regarding trajectory representation and discretization, the GP approach relies heavily on the continuous-time prior distribution for solution quality.
Current formulations of the GP approach to continuous-time trajectory estimation employ a white-noise-on-acceleration (WNOA) prior, i.e., one that assumes the prior mean is constant-velocity. While this choice of prior is appropriate for certain types of motion, we argue that it is insufficient for representing trajectories with nonzero acceleration, such as the motion of a vehicle in urban driving. We show that a bias can occur when the motion prior does not sufficiently represent the underlying trajectory.
With this in mind, we derive a white-noise-on-jerk (WNOJ) motion prior, which assumes the prior mean is constant-acceleration. Our derivation starts with the same form of physically motivated stochastic differential equation (SDE) for describing motion as in the WNOA prior.
By evaluating on several real-world lidar datasets, we show that our variation of STEAM with the WNOJ prior greatly outperforms the current formulation of STEAM, which employs a WNOA prior. In particular, the use of the WNOJ prior reduces bias and improves the odometry accuracy of the estimated trajectory. We perform the experimental evaluation using lidar-only motion estimation, as this problem is particularly suitable for continuous-time methods. The contribution of this paper, however, can be applied to any choice of sensor suite.
In Section II we review previous work. An overview of our existing continuous-time lidar-only estimator is provided in Section III. In Section IV we identify a source of estimator bias that relates to the choice of GP prior. Section V presents the derivation of a white-noise-on-jerk motion prior, which is compared against the white-noise-on-acceleration prior experimentally in Section VI. In Section VII we give concluding remarks and discuss future work.
II Related Work
Early works on continuous-time estimation are mostly parametric approaches, which represent the trajectory using temporal basis functions. Jung and Taylor [1] first presented an estimator where the sensor trajectory is modelled by spline functions. Furgale et al. [2] derived the simultaneous localization and mapping (SLAM) problem in continuous time, and showed that a small number of basis functions can sufficiently represent the state. Anderson and Barfoot [3] derived a relative coordinate formulation by estimating the body-centric velocity. Lovegrove et al. [4] applied continuous-time estimation in visual-inertial SLAM. Recent work by Dubé et al. [5] explored knot-sampling strategies for parametric approaches to continuous-time trajectories.
Batch nonparametric approaches, which represent the trajectory as a Gaussian process, were first formulated by Tong et al. [6]. The smoothness assumption is handled in a principled manner through the underlying GP prior. Barfoot et al. [7] extended the GP approach to STEAM, which employs a WNOA motion prior, by jointly estimating the pose and velocity. This choice of GP motion prior results in an exactly sparse inverse kernel matrix, leading to a very efficient formulation. Anderson and Barfoot [8] extended [7] to matrix Lie groups. Yan et al. [9] reformulated STEAM from a batch estimation into an incremental algorithm. STEAM has been applied to motion planning [10], crop monitoring [11], and visual teach and repeat [12].
Without explicitly treating the trajectory as a continuous function of time, some estimators use interpolation between discrete poses to compensate for motion distortion, particularly in the case of scanning lidars. Bosse and Zlot [13], [14] used cubic splines to enforce smoothness for a trajectory estimated using data from a 2D spinning lidar, and linearly interpolated between sampled poses. Dong and Barfoot [15] performed visual odometry from lidar intensity images, and interpolated rotation and translation using a scheme detailed in [16]. LOAM [17], a state-of-the-art lidar-only motion estimation algorithm, interpolates between adjacent discrete poses using a scheme similar to [13]. Unlike continuous-time methods, these methods need to make ad-hoc assumptions about trajectory smoothness in order to carry out interpolation.

While various motion estimation methods have assumed a constant-velocity trajectory [18], [7], [8], [19], [20], a constant-acceleration trajectory assumption has been used for tracking control [21], manipulator motion planning [22], and manipulator state estimation [23]. To the best of our knowledge, the derivation we present in this paper is the first attempt at modelling the trajectory with a constant-acceleration mean (white-noise-on-jerk) in the context of continuous-time trajectory estimation on $SE(3)$.
III Continuous-Time Estimator
In this section, we give details of our existing continuous-time lidar odometry algorithm, which uses the STEAM framework with a WNOA motion prior [8]. This serves as the baseline against which the WNOJ prior will be evaluated.
III-A WNOA GP Prior
Our goal is to employ a class of GP priors that leads to an efficient formulation and a simple solution [7], [16]. This class of GP priors is based on linear time-invariant (LTI) stochastic differential equations (SDEs) of the form

(1) $\dot{\mathbf{x}}(t) = \mathbf{A}\,\mathbf{x}(t) + \mathbf{v}(t) + \mathbf{L}\,\mathbf{w}(t), \quad \mathbf{w}(t) \sim \mathcal{GP}\big(\mathbf{0},\, \mathbf{Q}_C\,\delta(t - t')\big),$

where $\mathbf{x}(t)$ is the state, $\mathbf{v}(t)$ is a known exogenous input, and $\mathbf{w}(t)$ is a zero-mean, white-noise GP with power spectral density matrix $\mathbf{Q}_C$. If $\mathbf{v}(t) = \mathbf{0}$, then for the mean function we have the simple solution

(2) $\check{\mathbf{x}}(t) = \boldsymbol{\Phi}(t, t_0)\,\check{\mathbf{x}}(t_0),$

where $\check{\mathbf{x}}(t)$ is the prior mean, and $\boldsymbol{\Phi}(t, s)$ is the state transition function.
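To make (1)-(2) concrete, here is a small numerical sketch (not from the paper; names are illustrative) of the scalar WNOA case, where the system matrix is nilpotent and the mean function reduces to constant-velocity propagation:

```python
import numpy as np

# Scalar WNOA sketch: state x = [position, velocity]. The system matrix A is
# nilpotent (A @ A = 0), so the matrix exponential terminates after two terms
# and the state transition function is Phi(t, s) = I + A * (t - s).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
dt = 0.5
Phi = np.eye(2) + A * dt          # exact, since higher powers of A vanish

# With zero exogenous input, the prior mean propagates as x_check(t) = Phi @ x0,
# i.e., constant-velocity motion.
x0 = np.array([2.0, 3.0])         # position 2, velocity 3
x_mean = Phi @ x0                 # position 2 + 3 * 0.5 = 3.5, velocity 3
```

The same pattern extends to the vector-valued priors used later: only the size of the blocks in $\mathbf{A}$ and $\boldsymbol{\Phi}$ changes.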
III-B GP Prior for SE(3)
In [8], a physically motivated GP prior is the following SDE:

(3) $\dot{\mathbf{T}}(t) = \boldsymbol{\varpi}(t)^\wedge\,\mathbf{T}(t), \quad \dot{\boldsymbol{\varpi}}(t) = \mathbf{w}(t),$

where $\mathbf{T}(t) \in SE(3)$ is the pose and $\boldsymbol{\varpi}(t) \in \mathbb{R}^6$ is the body-centric velocity. The $\wedge$ operator converts $\boldsymbol{\varpi}(t)$ into a member of the Lie algebra, $\mathfrak{se}(3)$ [24], [16]. The state is

(4) $\mathbf{x}(t) = \left\{\mathbf{T}(t),\, \boldsymbol{\varpi}(t)\right\}.$
However, it can be seen that the SDE in (3) is nonlinear, and therefore cannot be cast into the form of (1) and solved efficiently [8]. Instead, [8] defines a local pose variable:

(5) $\boldsymbol{\xi}_k(t) = \ln\big(\mathbf{T}(t)\,\mathbf{T}_k^{-1}\big)^\vee,$

which is a function of the global pose variables, and where $t \in [t_k, t_{k+1}]$. For simplicity we use $\mathbf{T}_k$ to denote $\mathbf{T}(t_k)$. Here $\ln(\cdot)$ is the inverse of $\exp(\cdot)$, and the $\vee$ operator converts a member of $\mathfrak{se}(3)$ to $\mathbb{R}^6$ [24], [16]. A local WNOA prior can then be written in the LTI SDE form of (1),

(6) $\ddot{\boldsymbol{\xi}}_k(t) = \mathbf{w}(t),$

where the local velocity variable is related to the global state variables through the inverse left Jacobian of $SE(3)$:

(7) $\dot{\boldsymbol{\xi}}_k(t) = \boldsymbol{\mathcal{J}}\big(\boldsymbol{\xi}_k(t)\big)^{-1}\,\boldsymbol{\varpi}(t).$
III-C Cost Terms in Optimization
For our estimator, the negative-log-likelihood objective function consists of prior and measurement cost terms:

(8) $J = \sum_{k} J_{p,k} + \sum_{j} J_{m,j}.$
Since our estimator is only for odometry, we do not keep landmarks as part of the state, as in the full STEAM problem. The optimization problem is then

(9) $\mathbf{x}^\ast = \operatorname*{argmin}_{\mathbf{x}}\; J,$
where the state $\mathbf{x}$ consists of all trajectory poses and velocities, as defined in (4). We solve the optimization problem using Gauss-Newton, where each $\mathbf{T}_k$ and $\boldsymbol{\varpi}_k$ are updated using an $SE(3)$ perturbation scheme [24], [8]:

(10) $\mathbf{T}_k = \exp\big(\boldsymbol{\epsilon}_k^\wedge\big)\,\mathbf{T}_{op,k}, \quad \boldsymbol{\varpi}_k = \boldsymbol{\varpi}_{op,k} + \boldsymbol{\psi}_k,$
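The left-multiplicative update in (10) can be sketched numerically. The helper below is illustrative rather than the paper's implementation; it builds the $SE(3)$ exponential map from a truncated matrix-exponential series, which is accurate for the small update steps produced by Gauss-Newton:

```python
import numpy as np

def hat(phi):
    # 3x3 skew-symmetric matrix such that hat(a) @ b = cross(a, b)
    return np.array([[0.0, -phi[2], phi[1]],
                     [phi[2], 0.0, -phi[0]],
                     [-phi[1], phi[0], 0.0]])

def se3_exp(xi):
    # Exponential map for se(3) via a truncated 4x4 matrix-exponential series;
    # the truncation error is negligible for small xi.
    M = np.zeros((4, 4))
    M[:3, :3] = hat(xi[3:])   # rotational part, phi
    M[:3, 3] = xi[:3]         # translational part, rho
    T, term = np.eye(4), np.eye(4)
    for n in range(1, 12):
        term = term @ M / n
        T = T + term
    return T

# Left-multiplicative update of an operating point, in the spirit of (10):
T_op = np.eye(4)
eps = np.array([0.1, 0.0, 0.0, 0.0, 0.0, 0.02])  # small 6-DOF update step
T_new = se3_exp(eps) @ T_op
```

Because the update is applied through the exponential map, `T_new` remains a valid transformation matrix without any explicit re-orthonormalization.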
where $\mathbf{T}_{op,k}$ and $\boldsymbol{\varpi}_{op,k}$ form the operating point. Each prior cost term is

(11) $J_{p,k} = \frac{1}{2}\,\mathbf{e}_{p,k}^T\,\mathbf{Q}_k^{-1}\,\mathbf{e}_{p,k}.$
In terms of local pose variables, each prior error term is

(12) $\mathbf{e}_{p,k} = \boldsymbol{\gamma}_k(t_{k+1}) - \boldsymbol{\Phi}(t_{k+1}, t_k)\,\boldsymbol{\gamma}_k(t_k),$

where the local state variables are defined as [8]

(13) $\boldsymbol{\gamma}_k(t) = \begin{bmatrix} \boldsymbol{\xi}_k(t) \\ \dot{\boldsymbol{\xi}}_k(t) \end{bmatrix} = \begin{bmatrix} \ln\big(\mathbf{T}(t)\,\mathbf{T}_k^{-1}\big)^\vee \\ \boldsymbol{\mathcal{J}}\big(\boldsymbol{\xi}_k(t)\big)^{-1}\,\boldsymbol{\varpi}(t) \end{bmatrix},$

with $\boldsymbol{\gamma}_k(t_k) = \begin{bmatrix} \mathbf{0} \\ \boldsymbol{\varpi}_k \end{bmatrix}$. The state transition function can be computed as in [16],

(14) $\boldsymbol{\Phi}(t, s) = \begin{bmatrix} \mathbf{1} & (t - s)\,\mathbf{1} \\ \mathbf{0} & \mathbf{1} \end{bmatrix},$
and the inverse covariance matrix is [24], [8]

(15) $\mathbf{Q}_k^{-1} = \begin{bmatrix} 12\,\Delta t_k^{-3}\,\mathbf{Q}_C^{-1} & -6\,\Delta t_k^{-2}\,\mathbf{Q}_C^{-1} \\ -6\,\Delta t_k^{-2}\,\mathbf{Q}_C^{-1} & 4\,\Delta t_k^{-1}\,\mathbf{Q}_C^{-1} \end{bmatrix},$
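The closed-form WNOA inverse covariance can be sanity-checked numerically; a sketch with scalar $\mathbf{Q}_C = 1$ (not from the paper):

```python
import numpy as np

# Numerical check of the closed-form WNOA inverse (scalar Q_C = 1, illustrative).
# The covariance over an interval dt is Q = [[dt^3/3, dt^2/2], [dt^2/2, dt]].
dt = 0.25
Q = np.array([[dt**3 / 3.0, dt**2 / 2.0],
              [dt**2 / 2.0, dt]])
Q_inv_closed = np.array([[12.0 / dt**3, -6.0 / dt**2],
                         [-6.0 / dt**2,  4.0 / dt]])
match = np.allclose(np.linalg.inv(Q), Q_inv_closed)  # True
```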
where $\Delta t_k = t_{k+1} - t_k$. Using the relationship between local and global state variables, we can rewrite the prior error term in terms of global state variables as [8]

(16) $\mathbf{e}_{p,k} = \begin{bmatrix} \ln\big(\mathbf{T}_{k+1}\,\mathbf{T}_k^{-1}\big)^\vee - \Delta t_k\,\boldsymbol{\varpi}_k \\ \boldsymbol{\mathcal{J}}\big(\boldsymbol{\xi}_k(t_{k+1})\big)^{-1}\,\boldsymbol{\varpi}_{k+1} - \boldsymbol{\varpi}_k \end{bmatrix}.$
Each measurement cost term is

(17) $J_{m,j} = \rho(u_j),$

where $\rho(\cdot)$ is the Geman-McClure robust cost [16], and each $u_j$ is a whitened error norm. Given a point $\mathbf{p}_j$ measured at time $t_j$, let $\mathbf{q}_j$ be its matched point, expressed in the reference frame. Define a measurement error term:

(18) $\mathbf{e}_j = \mathbf{D}\,\big(\mathbf{q}_j - \mathbf{T}(t_j)\,\mathbf{p}_j\big),$

where $\mathbf{D}$ is a projection matrix. If $\mathbf{q}_j$ lies on a plane, then we formulate a point-to-plane whitened error norm:

(19) $u_j = \frac{1}{\sigma_j}\,\mathbf{n}_j^T\,\mathbf{e}_j,$

where $\mathbf{n}_j$ is the surface normal of $\mathbf{q}_j$, and $\sigma_j$ is a scale factor. If $\mathbf{q}_j$ does not lie on a plane, we formulate a point-to-point whitened error norm:

(20) $u_j = \sqrt{\mathbf{e}_j^T\,\mathbf{R}_j^{-1}\,\mathbf{e}_j},$

where $\mathbf{R}_j$ is the associated measurement covariance.
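A compact sketch of the two whitened error norms and the robust cost (hypothetical variable names; the Geman-McClure form shown is one common variant, assumed rather than taken from the paper):

```python
import numpy as np

def geman_mcclure(u):
    # One common Geman-McClure form: rho(u) = u^2 / (2 * (1 + u^2)),
    # which saturates for large residuals and so down-weights outliers.
    return 0.5 * u**2 / (1.0 + u**2)

def point_to_plane(e, normal, sigma):
    # Whitened point-to-plane error: scaled projection onto the surface normal.
    return normal @ e / sigma

def point_to_point(e, R):
    # Whitened point-to-point error: Mahalanobis norm under covariance R.
    return float(np.sqrt(e @ np.linalg.inv(R) @ e))

e = np.array([0.1, -0.2, 0.05])                        # example 3-vector error
u1 = point_to_plane(e, np.array([0.0, 0.0, 1.0]), sigma=0.1)
u2 = point_to_point(e, 0.01 * np.eye(3))
cost = geman_mcclure(u2)                               # bounded in [0, 0.5)
```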
Our lidar odometry algorithm utilizes sliding-window optimization and runs in an iterative fashion, where matched pairs of points are found in each iteration. Please refer to our previous work [25] for further details on our odometry pipeline, such as point matching and keypoint selection.
III-D Querying the Trajectory
Our formulation allows us to incorporate measurements at any time along the trajectory, not just at timesteps kept in the state vector as for discrete-time methods. Suppose we have a measurement at time $\tau$, as in (18), and that $t_k < \tau < t_{k+1}$, where $t_k$ and $t_{k+1}$ are knot times in the state. We can interpolate for the state at $\tau$ using results from [8]:

(21) $\boldsymbol{\gamma}_k(\tau) = \boldsymbol{\Lambda}(\tau)\,\boldsymbol{\gamma}_k(t_k) + \boldsymbol{\Psi}(\tau)\,\boldsymbol{\gamma}_k(t_{k+1}),$

where $\boldsymbol{\Lambda}(\tau)$ and $\boldsymbol{\Psi}(\tau)$ are [7]

(22) $\boldsymbol{\Lambda}(\tau) = \boldsymbol{\Phi}(\tau, t_k) - \boldsymbol{\Psi}(\tau)\,\boldsymbol{\Phi}(t_{k+1}, t_k), \quad \boldsymbol{\Psi}(\tau) = \mathbf{Q}_\tau\,\boldsymbol{\Phi}(t_{k+1}, \tau)^T\,\mathbf{Q}_{k+1}^{-1},$

with $\mathbf{Q}_\tau$ the covariance evaluated over the interval $(t_k, \tau)$.
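These interpolation coefficients can be checked numerically for the WNOA case (an illustrative sketch with scalar $\mathbf{Q}_C = 1$). At the knot times the interpolation must return the knot states exactly, i.e., $\boldsymbol{\Lambda} = \mathbf{1}$, $\boldsymbol{\Psi} = \mathbf{0}$ at $t_k$, and $\boldsymbol{\Lambda} = \mathbf{0}$, $\boldsymbol{\Psi} = \mathbf{1}$ at $t_{k+1}$:

```python
import numpy as np

def Q_wnoa(dt):
    # WNOA covariance over an interval of length dt (scalar Q_C = 1).
    return np.array([[dt**3 / 3.0, dt**2 / 2.0],
                     [dt**2 / 2.0, dt]])

def Phi(dt):
    # WNOA state transition over an interval of length dt.
    return np.array([[1.0, dt], [0.0, 1.0]])

def interp_coeffs(tau, t_k, t_k1):
    # Psi(tau) = Q_tau Phi(t_{k+1}, tau)^T Q_{k+1}^{-1}
    Psi = Q_wnoa(tau - t_k) @ Phi(t_k1 - tau).T @ np.linalg.inv(Q_wnoa(t_k1 - t_k))
    # Lambda(tau) = Phi(tau, t_k) - Psi(tau) Phi(t_{k+1}, t_k)
    Lam = Phi(tau - t_k) - Psi @ Phi(t_k1 - t_k)
    return Lam, Psi
```

At intermediate times the coefficients smoothly blend the two knot states, which is what allows a measurement at $\tau$ to update both adjacent knots.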
Again, using our knowledge of the relationship between local and global state variables as in (13), we can reformulate (21) using global state variables. While interpolating for the body-centric velocity at an arbitrary time might be of interest to certain applications, for lidar-only odometry we are mainly interested in pose interpolation:

(23) $\mathbf{T}(\tau) = \exp\Big(\big(\boldsymbol{\Lambda}_{12}(\tau)\,\boldsymbol{\varpi}_k + \boldsymbol{\Psi}_{11}(\tau)\,\boldsymbol{\xi}_k(t_{k+1}) + \boldsymbol{\Psi}_{12}(\tau)\,\boldsymbol{\mathcal{J}}\big(\boldsymbol{\xi}_k(t_{k+1})\big)^{-1}\boldsymbol{\varpi}_{k+1}\big)^\wedge\Big)\,\mathbf{T}_k,$

where $\boldsymbol{\Lambda}_{12}(\tau)$, $\boldsymbol{\Psi}_{11}(\tau)$, and $\boldsymbol{\Psi}_{12}(\tau)$ are subblocks of $\boldsymbol{\Lambda}(\tau)$ and $\boldsymbol{\Psi}(\tau)$. This is a principled approach for querying the trajectory that comes directly from standard GP interpolation [26]. It can be seen that, given a measurement at $\tau$ with $t_k < \tau < t_{k+1}$, the result in (23) allows updates to the temporally adjacent state variables at $t_k$ and $t_{k+1}$ in the optimization process.
IV Estimator Bias
As shown in Section VI (Figures 4, 5, 7), there are noticeable biases in our baseline estimator, particularly in certain translational and rotational degrees of freedom, notably roll and pitch. See Figure 2 for the coordinate system of our estimator.
Many sources could cause the estimated trajectory to be biased, such as poor sensor calibration, or a measurement covariance that does not reflect the sensor noise characteristics. In this paper, we focus on one specific source of estimator bias: a motion prior that is insufficient to represent the underlying continuous-time trajectory.
Consider a very simple estimation problem in which a robot, initially stationary at time $t_0$, travels forward under constant acceleration until time $t_1$. The robot only travels forward, therefore motion occurs in only one translational direction. The robot measures a single point at $t_1$, which is matched against a corresponding point measured at $t_0$. Keeping the pose and velocity at $t_0$ fixed, the state we wish to estimate is

(24) $\mathbf{x} = \left\{\mathbf{T}_1,\, \boldsymbol{\varpi}_1\right\}.$

We can define the following ground truth quantities:

(25)

where $a$ is the forward acceleration. We can define the following measurement error equation and the associated measurement Jacobian [24]:

(26)

where the $\odot$ operator is defined in [24]. We also have a WNOA prior error term that can be constructed from (16). For simplicity, we assume the measurement is noise-free. Initializing the state variables at ground truth, the measurement error is zero, since the measurement is noise-free. However, the prior error is not zero, since the motion is not constant-velocity. Performing Gauss-Newton for one iteration using the perturbation scheme in (10) [8], we have:
(27)

Equation (27) shows that our simple problem results in perturbations to degrees of freedom in which there is no motion, effectively creating a bias. Moreover, the perturbations to these DOFs depend on the Cartesian coordinates of the transformed point. We can draw the observation that when the motion prior cannot sufficiently describe the underlying trajectory, such as when the prior mean is constant-velocity but the trajectory is constant-acceleration, the estimator will be biased in certain degrees of freedom. In particular, the induced bias is a function of the Cartesian coordinates of the measured points. The bias stems from the optimizer's desire to keep the overall cost low: the prior cost is made smaller at the expense of a larger measurement cost in the objective function (8).
Equation (27) is computed assuming the robot has forward acceleration. However, a similar case can be made for motion with angular acceleration, such as when initiating a turn.
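The core observation of this section can be reproduced with a one-dimensional analogue (illustrative; not the paper's $SE(3)$ example): at the ground-truth states of a constant-acceleration trajectory, the WNOA prior error is nonzero, so the optimizer trades it off against the measurement cost, while a constant-acceleration-mean prior is consistent with the motion:

```python
import numpy as np

a, dt = 2.0, 1.0
# Ground truth at t_k and t_{k+1} for constant acceleration a:
# state = [position, velocity, acceleration]
x0 = np.array([0.0, 0.0, a])
x1 = np.array([0.5 * a * dt**2, a * dt, a])

# Constant-velocity (WNOA) prior error on [position, velocity]:
Phi_wnoa = np.array([[1.0, dt], [0.0, 1.0]])
e_wnoa = x1[:2] - Phi_wnoa @ x0[:2]      # nonzero: the trajectory fights the prior

# Constant-acceleration (WNOJ) prior error on the full state:
Phi_wnoj = np.array([[1.0, dt, 0.5 * dt**2],
                     [0.0, 1.0, dt],
                     [0.0, 0.0, 1.0]])
e_wnoj = x1 - Phi_wnoj @ x0              # exactly zero for constant acceleration
```

In the full $SE(3)$ problem this residual prior error is what leaks into the unexercised degrees of freedom through the point coordinates, producing the bias discussed above.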
V White-Noise-On-Jerk Motion Prior
Here we derive a white-noise-on-jerk motion prior. Instead of modelling the acceleration as a zero-mean, white-noise Gaussian process as in the case of a WNOA prior [8], we now explicitly estimate the following state:

(28) $\mathbf{x}(t) = \left\{\mathbf{T}(t),\, \boldsymbol{\varpi}(t),\, \dot{\boldsymbol{\varpi}}(t)\right\},$

where $\dot{\boldsymbol{\varpi}}(t)$ is the body-centric acceleration.
Extending the idea of local pose variables as presented in Section III-B, we can define a sequence of local white-noise-on-jerk priors as an LTI SDE in the form of (1):

(29) $\dddot{\boldsymbol{\xi}}_k(t) = \mathbf{w}(t), \quad \mathbf{w}(t) \sim \mathcal{GP}\big(\mathbf{0},\, \mathbf{Q}_C\,\delta(t - t')\big).$

We now have white noise on the third derivative (jerk) of $\boldsymbol{\xi}_k(t)$, where $t \in [t_k, t_{k+1}]$. For the WNOJ prior, the state transition function is now

(30) $\boldsymbol{\Phi}(t, s) = \begin{bmatrix} \mathbf{1} & (t - s)\,\mathbf{1} & \frac{1}{2}(t - s)^2\,\mathbf{1} \\ \mathbf{0} & \mathbf{1} & (t - s)\,\mathbf{1} \\ \mathbf{0} & \mathbf{0} & \mathbf{1} \end{bmatrix},$

and the covariance matrix can be computed as

(31) $\mathbf{Q}_k = \begin{bmatrix} \frac{1}{20}\Delta t_k^5\,\mathbf{Q}_C & \frac{1}{8}\Delta t_k^4\,\mathbf{Q}_C & \frac{1}{6}\Delta t_k^3\,\mathbf{Q}_C \\ \frac{1}{8}\Delta t_k^4\,\mathbf{Q}_C & \frac{1}{3}\Delta t_k^3\,\mathbf{Q}_C & \frac{1}{2}\Delta t_k^2\,\mathbf{Q}_C \\ \frac{1}{6}\Delta t_k^3\,\mathbf{Q}_C & \frac{1}{2}\Delta t_k^2\,\mathbf{Q}_C & \Delta t_k\,\mathbf{Q}_C \end{bmatrix}.$
The inverse covariance matrix is then

(32) $\mathbf{Q}_k^{-1} = \begin{bmatrix} 720\,\Delta t_k^{-5}\,\mathbf{Q}_C^{-1} & -360\,\Delta t_k^{-4}\,\mathbf{Q}_C^{-1} & 60\,\Delta t_k^{-3}\,\mathbf{Q}_C^{-1} \\ -360\,\Delta t_k^{-4}\,\mathbf{Q}_C^{-1} & 192\,\Delta t_k^{-3}\,\mathbf{Q}_C^{-1} & -36\,\Delta t_k^{-2}\,\mathbf{Q}_C^{-1} \\ 60\,\Delta t_k^{-3}\,\mathbf{Q}_C^{-1} & -36\,\Delta t_k^{-2}\,\mathbf{Q}_C^{-1} & 9\,\Delta t_k^{-1}\,\mathbf{Q}_C^{-1} \end{bmatrix}.$
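The coefficients of the WNOJ inverse covariance can likewise be verified numerically (scalar $\mathbf{Q}_C = 1$, illustrative):

```python
import numpy as np

# Numerical check of the closed-form WNOJ inverse (scalar Q_C = 1, illustrative).
dt = 0.5
Q = np.array([[dt**5 / 20.0, dt**4 / 8.0, dt**3 / 6.0],
              [dt**4 / 8.0,  dt**3 / 3.0, dt**2 / 2.0],
              [dt**3 / 6.0,  dt**2 / 2.0, dt]])
Q_inv_closed = np.array([[720.0 / dt**5, -360.0 / dt**4,  60.0 / dt**3],
                         [-360.0 / dt**4, 192.0 / dt**3, -36.0 / dt**2],
                         [ 60.0 / dt**3,  -36.0 / dt**2,   9.0 / dt]])
match = np.allclose(np.linalg.inv(Q), Q_inv_closed)  # True
```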
Figure 1 shows trajectories sampled from a white-noise-on-jerk prior distribution, where the prior mean is constant-acceleration, compared with trajectories sampled from a white-noise-on-acceleration prior distribution, where the prior mean is constant-velocity. We argue that the WNOJ prior is more suitable for representing motion with nonzero-acceleration trajectory sections, such as in urban driving.
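Figure 1's comparison can be reproduced qualitatively with a short sketch (illustrative; scalar states, unit $\mathbf{Q}_C$) that draws sample trajectories by propagating each prior's transition function with its discrete-time process noise:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_trajectory(Phi, Q, x0, n_steps):
    # One draw from the prior: x_{k+1} = Phi @ x_k + w_k, with w_k ~ N(0, Q).
    L = np.linalg.cholesky(Q)
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        xs.append(Phi @ xs[-1] + L @ rng.standard_normal(len(x0)))
    return np.array(xs)

dt = 0.1
# WNOA: constant-velocity mean, state = [position, velocity].
Phi_a = np.array([[1.0, dt], [0.0, 1.0]])
Q_a = np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
wnoa = sample_trajectory(Phi_a, Q_a, [0.0, 1.0], 100)

# WNOJ: constant-acceleration mean, state = [position, velocity, acceleration].
Phi_j = np.array([[1.0, dt, 0.5 * dt**2], [0.0, 1.0, dt], [0.0, 0.0, 1.0]])
Q_j = np.array([[dt**5 / 20, dt**4 / 8, dt**3 / 6],
                [dt**4 / 8,  dt**3 / 3, dt**2 / 2],
                [dt**3 / 6,  dt**2 / 2, dt]])
wnoj = sample_trajectory(Phi_j, Q_j, [0.0, 1.0, 0.5], 100)
```

Plotting the position components of many such draws shows the qualitative difference: WNOA samples wander around straight lines, while WNOJ samples curve around parabolic (constant-acceleration) means.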
V-A Prior Error Term
In local pose variables, the prior error term is the same as in (12). We then wish to express the prior error in terms of the global state variables. The relationships between $\boldsymbol{\xi}_k(t)$, $\dot{\boldsymbol{\xi}}_k(t)$, and the global state variables are shown in Equations (5) and (7). To express $\ddot{\boldsymbol{\xi}}_k(t)$ in terms of global state variables, we differentiate (7):

(33) $\ddot{\boldsymbol{\xi}}_k(t) = \frac{d}{dt}\Big(\boldsymbol{\mathcal{J}}\big(\boldsymbol{\xi}_k(t)\big)^{-1}\Big)\,\boldsymbol{\varpi}(t) + \boldsymbol{\mathcal{J}}\big(\boldsymbol{\xi}_k(t)\big)^{-1}\,\dot{\boldsymbol{\varpi}}(t).$
We can write the inverse left Jacobian of $SE(3)$ as a power-series expansion [16]:

(34) $\boldsymbol{\mathcal{J}}(\boldsymbol{\xi})^{-1} = \sum_{n=0}^{\infty} \frac{B_n}{n!}\,\big(\boldsymbol{\xi}^\curlywedge\big)^n,$

where the coefficients $B_n$ are the Bernoulli numbers. The $\curlywedge$ operator is defined as [24], [16]

(35) $\boldsymbol{\xi}^\curlywedge = \begin{bmatrix} \boldsymbol{\rho} \\ \boldsymbol{\phi} \end{bmatrix}^\curlywedge = \begin{bmatrix} \boldsymbol{\phi}^\wedge & \boldsymbol{\rho}^\wedge \\ \mathbf{0} & \boldsymbol{\phi}^\wedge \end{bmatrix}.$
It can be shown easily that $\boldsymbol{\xi}^\curlywedge\,\boldsymbol{\xi} = \mathbf{0}$, and therefore

(36) $\boldsymbol{\mathcal{J}}(\boldsymbol{\xi})^{-1}\,\boldsymbol{\xi} = \boldsymbol{\xi}.$
As it turns out, we cannot express $\frac{d}{dt}\big(\boldsymbol{\mathcal{J}}(\boldsymbol{\xi}_k(t))^{-1}\big)$ analytically in terms of $\boldsymbol{\varpi}(t)$ or $\dot{\boldsymbol{\varpi}}(t)$, which are familiar terms with which to work. We instead resort to the first-order approximation [16],

(37) $\boldsymbol{\mathcal{J}}(\boldsymbol{\xi})^{-1} \approx \mathbf{1} - \frac{1}{2}\,\boldsymbol{\xi}^\curlywedge.$

Our approximation is reasonable as long as $\boldsymbol{\xi}_k(t)$ is small, which it will be in our case. Under this approximation, we have

(38) $\ddot{\boldsymbol{\xi}}_k(t) \approx -\frac{1}{2}\,\dot{\boldsymbol{\xi}}_k(t)^\curlywedge\,\boldsymbol{\varpi}(t) + \boldsymbol{\mathcal{J}}\big(\boldsymbol{\xi}_k(t)\big)^{-1}\,\dot{\boldsymbol{\varpi}}(t),$

and finally, substituting (7) for $\dot{\boldsymbol{\xi}}_k(t)$,

(39) $\ddot{\boldsymbol{\xi}}_k(t) \approx -\frac{1}{2}\,\Big(\boldsymbol{\mathcal{J}}\big(\boldsymbol{\xi}_k(t)\big)^{-1}\boldsymbol{\varpi}(t)\Big)^\curlywedge\,\boldsymbol{\varpi}(t) + \boldsymbol{\mathcal{J}}\big(\boldsymbol{\xi}_k(t)\big)^{-1}\,\dot{\boldsymbol{\varpi}}(t).$
The local state variables can then be written in terms of global state variables as

(40) $\boldsymbol{\gamma}_k(t) = \begin{bmatrix} \boldsymbol{\xi}_k(t) \\ \boldsymbol{\mathcal{J}}\big(\boldsymbol{\xi}_k(t)\big)^{-1}\,\boldsymbol{\varpi}(t) \\ -\frac{1}{2}\Big(\boldsymbol{\mathcal{J}}\big(\boldsymbol{\xi}_k(t)\big)^{-1}\boldsymbol{\varpi}(t)\Big)^\curlywedge\,\boldsymbol{\varpi}(t) + \boldsymbol{\mathcal{J}}\big(\boldsymbol{\xi}_k(t)\big)^{-1}\,\dot{\boldsymbol{\varpi}}(t) \end{bmatrix},$

where we have made use of the identity in (36) [16].
In terms of global state variables, the prior error term is

(41)

Suppose we assume the trajectory has zero acceleration (which is assumed by a prior mean that is constant-velocity), and also assume that $\boldsymbol{\xi}_k(t)$ remains small. In this case, the last component in (41) becomes zero, and the first two components become identical to the WNOA prior error in (16); we have essentially recovered the prior error equation for the WNOA prior.
V-B Querying the Trajectory
We start from the same interpolation equation using local state variables, (21). For the WNOJ prior, the interpolation coefficients $\boldsymbol{\Lambda}(\tau)$ and $\boldsymbol{\Psi}(\tau)$ can be computed from (22), using $\boldsymbol{\Phi}$ and $\mathbf{Q}$ from (30) and (31).
Substituting with global state variables for the WNOJ prior using (40), the pose interpolation equation is

(42)

where the interpolation coefficients are subblocks of $\boldsymbol{\Lambda}(\tau)$ and $\boldsymbol{\Psi}(\tau)$. Again, if we assume zero acceleration and that $\boldsymbol{\xi}_k(t)$ remains small, the terms with acceleration-related coefficients become zero. Similar to the case of the prior error term, we essentially recover the pose interpolation equation for the WNOA prior, as in (23).
VI Experimental Validation
To evaluate the white-noise-on-jerk prior we derived, we formulated a variation of our continuous-time lidar odometry estimator that employs the WNOJ prior. The new estimator is evaluated on various Velodyne lidar datasets, and the odometry errors are compared against the baseline estimator presented in Section III, which employs a WNOA prior. To ensure a fair comparison, all other aspects of STEAM, such as the construction of measurement terms, and all other components of the lidar odometry pipeline, such as the point matching method, are kept the same. Any differences between the two estimators arise solely from their different motion priors. Our evaluations are odometric, and we make no attempt to use mapping or loop closure to reduce estimation error.
The power spectral density matrix, $\mathbf{Q}_C$, which determines the inverse covariance matrix for the prior cost terms as in Equations (15) and (32), is the only hyperparameter of our lidar odometry algorithm. For both types of motion priors, we tuned $\mathbf{Q}_C$ to achieve the best performance on the training set (sequences 0 to 10) of the KITTI odometry benchmark [27], which has accurate ground truth. $\mathbf{Q}_C$ was then kept the same when evaluating on all other datasets.

VI-A KITTI Odometry Benchmark
Sequences 0 to 10 are the training sequences of KITTI. Sequences 11 to 21 are the test sequences, for which the ground truth is not publicly available. The KITTI benchmark evaluates percentage translation errors across path segments of lengths from 100 m to 800 m, and an average over all path segments is computed. A total error averaged over path segments across all sequences is also reported.
We evaluated the baseline estimator that employs a WNOA prior and our new estimator that employs a WNOJ prior on both the training and test sets, with the WNOJ variant achieving lower overall error on both. A detailed breakdown of the odometry error for various path segment lengths is presented in Figure 3. The new estimator using the WNOJ prior outperforms the baseline estimator for almost all path segment lengths. Figure 4 shows a sequence where the odometry biases are noticeably reduced when we use the WNOJ prior.
Our baseline odometry algorithm is already fairly accurate despite using a WNOA motion prior: it ranked 3rd on the KITTI odometry leaderboard among lidar-only methods at the time of submission [25]. By choosing a motion prior that we believe is more representative of real-world vehicle trajectories, we achieved consistent improvements over this baseline. Currently, all lidar-only methods that rank ahead of our WNOJ estimator on the KITTI leaderboard have a mapping or loop-closure component, whereas our estimator is strictly odometric.
The lidar pointclouds from the KITTI odometry benchmark were postprocessed by the dataset authors to compensate for motion distortion. As a result, all points in a sensor revolution can be treated as being measured at exactly the same time, and we do not need to rely on the motion prior to interpolate the pose, as in Equations (23) and (42). The prior cost terms (11), however, are still used to smooth the trajectory. For undistorted data such as KITTI, it is therefore not strictly necessary to use a continuous-time estimation framework. Even though the new estimator with the WNOJ prior outperformed our baseline estimator on the KITTI benchmark, we argue that datasets with motion-distorted pointclouds are more suitable for comparing continuous-time methods.
VI-B University of Toronto Dataset
A dataset was collected by our test vehicle (Figure 6) along different routes around the University of Toronto (U of T). This resulted in 9 sequences of Velodyne data, each at least 1.7 km in length. 6-DOF ground truth is available via an onboard Applanix positioning and orientation system (POS). For consistency, we evaluate odometry errors using the same method as the KITTI benchmark, where translational errors are evaluated across path segments of lengths from 100 m to 800 m. This is a motion-distorted lidar dataset, as we do not employ external sensors or ground truth to compensate the pointclouds; we rely solely on the continuous-time estimator to handle motion distortion.
TABLE I: Translational odometry errors on the U of T dataset.

Sequence  Distance (km)  WNOA error (%)  WNOJ error (%)
0  3.34  1.5326  1.2663
1  2.21  1.3706  1.2797
2  3.04  1.3967  1.3485
3  2.91  1.7980  1.5844
4  2.99  1.6100  1.4307
5  1.71  2.4319  2.1696
6  3.48  2.1322  2.0540
7  3.04  1.2932  1.2122
8  2.92  1.6988  1.5327
overall  25.63  1.6736  1.5235
The U of T dataset features driving in urban scenes where the vehicle's speed is generally low. However, the vehicle frequently needs to slow down for traffic or take a turn at an intersection. Since the vehicle's trajectory contains many sections where the velocity changes consistently, this dataset is much more suitable for motion estimation using a WNOJ prior than a WNOA prior.
The results for the baseline estimator using the WNOA prior and the new estimator using the WNOJ prior are compared in Table I. The WNOJ prior resulted in smaller odometry error on every sequence, and a reduction in overall error, when compared against the WNOA prior. Figure 5 shows comparison plots of the estimated trajectories using the WNOA prior and the WNOJ prior for two sequences from the U of T dataset. The estimated trajectory using the WNOJ prior is significantly more accurate than that using the WNOA prior.
Again, since the lidar data are motion-distorted, we need to interpolate the pose for each point measurement. We argue that the WNOJ prior (42) offers a more suitable interpolation scheme than the WNOA prior (23). The U of T results show a greater reduction in error from using the WNOJ prior than the KITTI results, which make no use of the interpolation scheme.
VI-C Richmond Hill Dataset
A dataset was collected in the city of Richmond Hill, north of Toronto, using our test vehicle. This resulted in three long sequences totalling more than 60 km. The Richmond Hill dataset features driving in suburban areas and on highways, which contain less useful geometry and structure. Moreover, the test vehicle was driving at high speed on highways, making this a highly challenging dataset. Similar to the U of T dataset, the pointclouds are motion-distorted.
TABLE II: Translational odometry errors on the Richmond Hill dataset.

Sequence  Distance (km)  WNOA error (%)  WNOJ error (%)
0 (suburban)  17.91  1.5887  1.5094
1 (urban)  7.49  2.3229  1.9363
2 (highway)  35.01  2.3449  2.1627
overall  60.41  2.1180  1.9409
The results are summarized in Table II. The new estimator using the WNOJ prior outperformed the baseline estimator on all sequences, reducing the overall error. Figure 7 shows a comparison plot of odometry estimates using the WNOJ prior and the WNOA prior, where the odometry from the new estimator is noticeably less biased than that of the baseline estimator when compared against ground truth.
VI-D Remarks
We found that the new estimator using the WNOJ prior is more sensitive to $\mathbf{Q}_C$ than the baseline estimator using the WNOA prior. This is expected given the difference in the coefficients of the terms in the inverse covariance matrices of the WNOJ prior (32) and the WNOA prior (15). Despite this, we achieved an improvement on all datasets using $\mathbf{Q}_C$ tuned on the KITTI training set alone. We plan on releasing our datasets for public use in the future.
VII Conclusion and Future Work
In this paper, we showed that in continuous-time trajectory estimation, a source of estimator bias can arise when the motion prior cannot sufficiently represent the underlying trajectory. The main contribution of this paper is the derivation of a white-noise-on-jerk motion prior for continuous-time trajectory estimation on $SE(3)$. Through experimental validation, we showed that the new prior outperforms the existing white-noise-on-acceleration prior employed by STEAM on various lidar datasets, both with and without motion distortion.
Our new formulation of STEAM using the WNOJ prior now includes accelerations in the state. A natural extension would therefore be to formulate an estimator that incorporates acceleration measurements from an inertial measurement unit (IMU) directly, rather than preintegrating to a fixed timestep as is done in many existing inertial estimators.
Acknowledgment
The authors would like to thank Applanix Corporation and the Natural Sciences and Engineering Research Council of Canada (NSERC) for supporting this work, and General Motors (GM) Canada for donating the test vehicle.
References
 [1] S.H. Jung and C. J. Taylor, “Camera Trajectory Estimation Using Inertial Sensor Measurements and Structure from Motion Results,” in Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference on, vol. 2. IEEE, 2001, pp. II–II.
 [2] P. Furgale, T. D. Barfoot, and G. Sibley, “ContinuousTime Batch Estimation Using Temporal Basis Functions,” in Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012, pp. 2088–2095.
 [3] S. Anderson and T. D. Barfoot, “Towards Relative ContinuousTime SLAM,” in Robotics and Automation (ICRA), 2013 IEEE International Conference on. IEEE, 2013, pp. 1033–1040.
 [4] S. Lovegrove, A. PatronPerez, and G. Sibley, “Spline Fusion: A ContinuousTime Representation for VisualInertial Fusion with Application to Rolling Shutter Cameras.” in BMVC, 2013.
 [5] R. Dubé, H. Sommer, A. Gawel, M. Bosse, and R. Siegwart, “Nonuniform Sampling Strategies for Continuous Correction Based Trajectory Estimation,” in Robotics and Automation (ICRA), 2016 IEEE International Conference on. IEEE, 2016, pp. 4792–4798.
 [6] C. H. Tong, P. Furgale, and T. D. Barfoot, “Gaussian Process GaussNewton for Nonparametric Simultaneous Localization and Mapping,” The International Journal of Robotics Research, vol. 32, no. 5, pp. 507–525, 2013.
 [7] T. D. Barfoot, C. H. Tong, and S. Särkkä, “Batch ContinuousTime Trajectory Estimation as Exactly Sparse Gaussian Process Regression.” in Robotics: Science and Systems, 2014.
 [8] S. Anderson and T. D. Barfoot, “Full STEAM ahead: Exactly Sparse Gaussian Process Regression for Batch ContinuousTime Trajectory Estimation on SE(3),” in Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on. IEEE, 2015, pp. 157–164.
 [9] X. Yan, V. Indelman, and B. Boots, “Incremental Sparse GP Regression for ContinuousTime Trajectory Estimation and Mapping,” Robotics and Autonomous Systems, vol. 87, pp. 120–132, 2017.
 [10] M. Mukadam, J. Dong, F. Dellaert, and B. Boots, “Simultaneous Trajectory Estimation and Planning via Probabilistic Inference,” in Proceedings of Robotics: Science and Systems (RSS), 2017.
 [11] J. Dong, J. G. Burnham, B. Boots, G. Rains, and F. Dellaert, “4D Crop Monitoring: Spatiotemporal Reconstruction for Agriculture.”
 [12] M. Warren, M. Paton, K. MacTavish, A. P. Schoellig, and T. D. Barfoot, “Towards Visual Teach and Repeat for GPSDenied Flight of a FixedWing UAV,” in Field and Service Robotics. Springer, 2018, pp. 481–498.
 [13] M. Bosse and R. Zlot, “Continuous 3D ScanMatching with a Spinning 2D Laser,” in Robotics and Automation, 2009. ICRA’09. IEEE International Conference on. IEEE, 2009, pp. 4312–4319.
 [14] R. Zlot and M. Bosse, “Efficient LargeScale 3D Mobile Mapping and Surface Reconstruction of an Underground Mine,” in Field and service robotics. Springer, 2014, pp. 479–493.
 [15] H. Dong and T. D. Barfoot, “Lightinginvariant Visual Odometry Using Lidar Intensity Imagery and Pose Interpolation,” in Field and Service Robotics. Springer, 2014, pp. 327–342.
 [16] T. D. Barfoot, State Estimation for Robotics. Cambridge University Press, 2017.
 [17] J. Zhang and S. Singh, “LOAM: Lidar Odometry and Mapping in Realtime.” in Robotics: Science and Systems, vol. 2, 2014, p. 9.
 [18] A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse, “MonoSLAM: Realtime Single Camera SLAM,” IEEE Transactions on Pattern Analysis & Machine Intelligence, no. 6, pp. 1052–1067, 2007.
 [19] S. Anderson and T. D. Barfoot, “RANSAC for Motiondistorted 3D Visual Sensors,” in Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on. IEEE, 2013, pp. 2093–2099.
 [20] J. Hedborg, P.E. Forssén, M. Felsberg, and E. Ringaby, “Rolling Shutter Bundle Adjustment,” in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. IEEE, 2012, pp. 1434–1441.
 [21] J. T. Jang, S. T. Moon, S. Han, H. C. Gong, G.H. Choi, I. H. Hwang, and J. Lyou, “Trajectory Generation with Piecewise Constant Acceleration and Tracking Control of a Quadcopter,” in Industrial Technology (ICIT), 2015 IEEE International Conference on. IEEE, 2015, pp. 530–535.
 [22] M. Mukadam, J. Dong, X. Yan, F. Dellaert, and B. Boots, “ContinuousTime Gaussian Process Motion Planning via Probabilistic Inference,” arXiv preprint arXiv:1707.07383, 2017.
 [23] B. Olofsson, J. Antonsson, H. G. Kortier, B. Bernhardsson, A. Robertsson, and R. Johansson, “Sensor Fusion for Robotic Workspace State Estimation,” IEEE/ASME Transactions on Mechatronics, vol. 21, no. 5, pp. 2236–2248, 2016.
 [24] T. D. Barfoot and P. T. Furgale, “Associating Uncertainty With ThreeDimensional Poses for Use in Estimation Problems,” IEEE Transactions on Robotics, vol. 30, no. 3, pp. 679–693, 2014.
 [25] T. Y. Tang, D. J. Yoon, F. Pomerleau, and T. D. Barfoot, “Learning a Bias Correction for Lidaronly Motion Estimation,” 15th Conference on Computer and Robot Vision (CRV), 2018.
 [26] C. E. Rasmussen, "Gaussian Processes in Machine Learning," in Advanced Lectures on Machine Learning. Springer, 2004, pp. 63–71.
 [27] A. Geiger, P. Lenz, and R. Urtasun, "Are We Ready for Autonomous Driving? The KITTI Vision Benchmark Suite," in Conference on Computer Vision and Pattern Recognition (CVPR), 2012.