A White-Noise-On-Jerk Motion Prior for Continuous-Time Trajectory Estimation on SE(3)

09/18/2018 · by Tim Y. Tang, et al.

Simultaneous trajectory estimation and mapping (STEAM) offers an efficient approach to continuous-time trajectory estimation, by representing the trajectory as a Gaussian process (GP). Previous formulations of the STEAM framework use a GP prior that assumes white-noise-on-acceleration, with the prior mean encouraging constant body-centric velocity. We show that such a prior cannot sufficiently represent trajectory sections with non-zero acceleration, resulting in a bias to the posterior estimates. This paper derives a novel motion prior that assumes white-noise-on-jerk, where the prior mean encourages constant body-centric acceleration. With the new prior, we formulate a variation of STEAM that estimates the pose, body-centric velocity, and body-centric acceleration. By evaluating across several datasets, we show that the new prior greatly outperforms the white-noise-on-acceleration prior in terms of solution accuracy.




I Introduction

State estimation techniques for mobile robotics have been predominantly formulated in discrete time. While discrete-time techniques are sufficient for many applications, they are not ideal for high-rate sensors that take measurements continuously along a trajectory (e.g., scanning-while-moving lidars), or a combination of asynchronous sensors. Continuous-time estimation techniques are much more suitable in these cases, since measurements can be incorporated at any time along the trajectory, without needing to include an additional state at every measurement time. Moreover, continuous-time techniques have the advantage that the posterior estimates can be queried at any time along the trajectory, not just at measurement times.

Fig. 1: Existing formulations of STEAM use a white-noise-on-acceleration motion prior (bottom), which have trouble representing trajectories with non-zero acceleration, such as in the motion of a vehicle in urban driving. We propose a white-noise-on-jerk motion prior (top), which is more suitable for representing these types of trajectories.

Continuous-time estimation techniques can be categorized into two types: parametric and nonparametric. Parametric approaches typically represent the trajectory using a finite set of temporal basis functions. Our work focuses on the nonparametric approach, in which the trajectory is represented as a one-dimensional Gaussian process, with time as the independent variable. While model fidelity in parametric approaches is affected by choices regarding trajectory representation and discretization, the GP approach relies heavily on the continuous-time prior distribution for solution quality.

Current formulations of the GP approach to continuous-time trajectory estimation employ a white-noise-on-acceleration (WNOA) prior, or one that assumes the prior mean is constant-velocity. While this choice of prior is appropriate for certain types of motion, we argue that it is insufficient for representing trajectories with non-zero acceleration, such as in the motion of a vehicle in urban driving. We show that a bias can occur when the motion prior does not sufficiently represent the underlying trajectory.

With this in mind, we derive a white-noise-on-jerk (WNOJ) motion prior, which assumes the prior mean is constant-acceleration. Our derivation starts with the same form of physically motivated stochastic differential equation (SDE) for describing motion as in the WNOA prior.

By evaluating on several real-world lidar datasets, we show that our variation of STEAM with the WNOJ prior greatly outperforms the current formulation of STEAM, which employs a WNOA prior. In particular, the use of WNOJ prior results in reduced bias and improved odometry accuracy to the estimated trajectory. We perform the experimental evaluation using lidar-only motion estimation, as this is a problem particularly suitable for continuous-time methods. The contribution of this paper, however, can be applied to any choice of sensor suite.

In Section II we review previous work. An overview of our existing continuous-time lidar-only estimator is provided in Section III. In Section IV we identify a source of estimator bias that relates to the choice of GP prior. Section V presents the derivation of a white-noise-on-jerk motion prior, which is compared against the white-noise-on-acceleration prior experimentally in Section VI. In Section VII we give concluding remarks and discuss future work.

II Related Work

Early works on continuous-time estimation are mostly parametric approaches, which represent the trajectory using temporal basis functions. Jung and Taylor [1] first presented an estimator where the sensor trajectory is modelled by spline functions. Furgale et al. [2] derived the simultaneous localization and mapping (SLAM) problem in continuous time, and showed that a small number of basis functions can sufficiently represent the state. Anderson and Barfoot [3] derived a relative coordinate formulation by estimating the body-centric velocity. Lovegrove et al. [4] applied continuous-time estimation in visual-inertial SLAM. Recent work by Dubé et al. [5] explored strategies for selecting knot samples in parametric approaches to continuous-time trajectories.

Batch nonparametric approaches, which represent the trajectory as a Gaussian process, were first formulated by Tong et al. [6]. The smoothness assumption is handled in a principled manner through the underlying GP prior. Barfoot et al. [7] extended the GP approach to STEAM, which employs a WNOA motion prior, by jointly estimating the pose and velocity. This choice of GP motion prior results in the inverse kernel matrix being exactly sparse, leading to a very efficient formulation. Anderson and Barfoot [8] extended [7] to matrix Lie groups. Boots et al. [9] re-formulated STEAM from batch estimation to an incremental algorithm. STEAM has been applied to motion planning [10], crop monitoring [11], and visual teach and repeat [12].

Without explicitly treating the trajectory as a continuous function of time, some estimators use interpolation between discrete poses to compensate for motion distortion, particularly in the case of scanning lidars. Bosse and Zlot [13], [14] used cubic splines to enforce smoothness for a trajectory estimated using data from a 2D spinning lidar, and linearly interpolated between sampled poses. Dong et al. [15] performed visual odometry from lidar intensity images, and interpolated on rotation and translation using a scheme detailed in [16]. LOAM [17], a state-of-the-art lidar-only motion estimation algorithm, interpolates between adjacent discrete poses using a scheme similar to [13]. Unlike continuous-time methods, these methods need to make ad-hoc assumptions about trajectory smoothness in order to carry out interpolation.

While various motion estimation methods have assumed a constant-velocity trajectory [18], [7], [8], [19], [20], a constant-acceleration trajectory assumption has been used for tracking control [21], manipulator motion planning [22], and manipulator state estimation [23]. To the best of our knowledge, the derivation we present in this paper is the first attempt at modelling the trajectory with a constant-acceleration mean (white-noise-on-jerk) in the context of continuous-time trajectory estimation on SE(3).

III Continuous-Time Estimator

In this section, we give details on our existing continuous-time lidar odometry algorithm, which uses the STEAM framework with a WNOA motion prior [8]. This serves as the baseline against which the WNOJ prior will be evaluated.

III-A WNOA GP Prior

Our goal is to employ a class of GP priors that leads to an efficient formulation and a simple solution [7] [16]. This class of GP priors is based on linear time-invariant (LTI) stochastic differential equations (SDEs) of the form

$$\dot{\mathbf{x}}(t) = \mathbf{A}\,\mathbf{x}(t) + \mathbf{v}(t) + \mathbf{L}\,\mathbf{w}(t), \quad \mathbf{w}(t) \sim \mathcal{GP}\big(\mathbf{0},\, \mathbf{Q}_C\,\delta(t - t')\big), \tag{1}$$

where $\mathbf{x}(t)$ is the state, $\mathbf{v}(t)$ is a known exogenous input, and $\mathbf{w}(t)$ is a zero-mean, white-noise GP with power spectral density matrix $\mathbf{Q}_C$. If $\mathbf{v}(t) = \mathbf{0}$, then for the mean function we have the simple solution

$$\check{\mathbf{x}}(t) = \boldsymbol{\Phi}(t, t_0)\,\check{\mathbf{x}}(t_0), \tag{2}$$

where $\check{\mathbf{x}}(t)$ is the prior mean, and $\boldsymbol{\Phi}(t, s)$ is the state transition function.
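As an aside, the mean propagation in (2) is easy to sketch numerically. The following is a minimal, hypothetical one-DOF example (not part of our estimator; the function name is ours) using the WNOA transition function, where the state is position and velocity:

```python
import numpy as np

def transition_wnoa(dt):
    """State transition Phi(t, s) for a 1-DOF WNOA prior, dt = t - s.
    The state is [position, velocity]."""
    return np.array([[1.0, dt],
                     [0.0, 1.0]])

# With zero exogenous input, the prior mean is x_check(t) = Phi(t, t0) x_check(t0).
x0 = np.array([0.0, 2.0])           # start at position 0 with velocity 2 m/s
x_mean = transition_wnoa(0.5) @ x0  # prior mean 0.5 s later
print(x_mean)                       # position advances by v*dt; velocity unchanged
```

This makes the constant-velocity character of the WNOA mean explicit: the velocity component is carried forward unchanged.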

III-B GP Prior for SE(3)

On SE(3), a physically motivated GP prior is the following SDE:

$$\dot{\mathbf{T}}(t) = \boldsymbol{\varpi}(t)^{\wedge}\,\mathbf{T}(t), \quad \dot{\boldsymbol{\varpi}}(t) = \mathbf{w}(t), \tag{3}$$

where $\mathbf{T}(t) \in SE(3)$ is the pose and $\boldsymbol{\varpi}(t) \in \mathbb{R}^6$ is the body-centric velocity. The $\wedge$ operator converts $\boldsymbol{\varpi}(t)$ into a member of the Lie algebra, $\mathfrak{se}(3)$ [24] [16]. The state is

$$\mathbf{x}(t) = \left\{\mathbf{T}(t),\, \boldsymbol{\varpi}(t)\right\}. \tag{4}$$

However, it can be seen that the SDE in (3) is nonlinear, and therefore cannot be cast into the form of (1) and solved efficiently [8]. Instead, [8] defines a local pose variable:

$$\boldsymbol{\xi}_k(t) = \ln\big(\mathbf{T}(t)\,\mathbf{T}_k^{-1}\big)^{\vee}, \tag{5}$$

which is a function of the global pose variables $\mathbf{T}(t)$ and $\mathbf{T}_k = \mathbf{T}(t_k)$, where $t_k \le t < t_{k+1}$. For simplicity we use $\boldsymbol{\xi}(t)$ to denote $\boldsymbol{\xi}_k(t)$. Here $\ln(\cdot)$ is the inverse of $\exp(\cdot)$, and the $\vee$ operator converts a member of $\mathfrak{se}(3)$ to $\mathbb{R}^6$ [24] [16].

Using local variables, [8] defines a sequence of local priors that can be cast into a LTI SDE of the form in (1), with

$$\boldsymbol{\gamma}_k(t) = \begin{bmatrix} \boldsymbol{\xi}_k(t) \\ \dot{\boldsymbol{\xi}}_k(t) \end{bmatrix}, \tag{6}$$

where $\boldsymbol{\gamma}_k(t)$ is defined as the local state. Under this formulation, we have white noise on the second derivative of $\boldsymbol{\xi}_k(t)$, i.e., $\ddot{\boldsymbol{\xi}}_k(t) = \mathbf{w}(t)$. Furthermore, we have the following [8]:

$$\dot{\boldsymbol{\xi}}_k(t) = \boldsymbol{\mathcal{J}}\big(\boldsymbol{\xi}_k(t)\big)^{-1}\,\boldsymbol{\varpi}(t), \tag{7}$$

where $\boldsymbol{\mathcal{J}}(\cdot)$ is the left Jacobian of SE(3) [24] [16].

III-C Cost Terms in Optimization

For our estimator, the negative-log-likelihood objective function consists of prior and measurement cost terms:

$$J = \sum_{k} J_{p,k} + \sum_{j} J_{m,j}. \tag{8}$$
Since our estimator is only for odometry, we do not keep landmarks as part of the state, as in the full STEAM problem. The optimization problem is then

$$\mathbf{x}^{*} = \operatorname*{argmin}_{\mathbf{x}}\, J, \tag{9}$$

where the state $\mathbf{x}$ consists of all trajectory poses and velocities, as defined in (4). We solve the optimization problem using Gauss-Newton, where each $\mathbf{T}_k$ and $\boldsymbol{\varpi}_k$ are updated using an SE(3) perturbation scheme [24] [8]:

$$\mathbf{T}_k = \exp\big(\boldsymbol{\epsilon}_k^{\wedge}\big)\,\bar{\mathbf{T}}_k, \quad \boldsymbol{\varpi}_k = \bar{\boldsymbol{\varpi}}_k + \boldsymbol{\psi}_k, \tag{10}$$

where $\big(\bar{\mathbf{T}}_k,\, \bar{\boldsymbol{\varpi}}_k\big)$ is the operating point. Each prior cost term is

$$J_{p,k} = \frac{1}{2}\,\mathbf{e}_{p,k}^{T}\,\mathbf{Q}_k^{-1}\,\mathbf{e}_{p,k}. \tag{11}$$
In terms of local pose variables, each prior error term is

$$\mathbf{e}_{p,k} = \boldsymbol{\gamma}_k(t_{k+1}) - \boldsymbol{\Phi}(t_{k+1}, t_k)\,\boldsymbol{\gamma}_k(t_k), \tag{12}$$
where the local state variables are defined as [8]

$$\boldsymbol{\gamma}_k(t_k) = \begin{bmatrix} \mathbf{0} \\ \boldsymbol{\varpi}_k \end{bmatrix}, \quad \boldsymbol{\gamma}_k(t_{k+1}) = \begin{bmatrix} \ln\big(\mathbf{T}_{k+1}\mathbf{T}_k^{-1}\big)^{\vee} \\ \boldsymbol{\mathcal{J}}\big(\boldsymbol{\xi}_k(t_{k+1})\big)^{-1}\,\boldsymbol{\varpi}_{k+1} \end{bmatrix}, \tag{13}$$

with $\boldsymbol{\xi}_k(t_{k+1}) = \ln\big(\mathbf{T}_{k+1}\mathbf{T}_k^{-1}\big)^{\vee}$. The state transition function can be computed as in [16],

$$\boldsymbol{\Phi}(t, s) = \begin{bmatrix} \mathbf{1} & (t - s)\,\mathbf{1} \\ \mathbf{0} & \mathbf{1} \end{bmatrix}, \tag{14}$$
and the inverse covariance matrix is [24] [8]

$$\mathbf{Q}_k^{-1} = \begin{bmatrix} 12\,\Delta t_k^{-3}\,\mathbf{Q}_C^{-1} & -6\,\Delta t_k^{-2}\,\mathbf{Q}_C^{-1} \\ -6\,\Delta t_k^{-2}\,\mathbf{Q}_C^{-1} & 4\,\Delta t_k^{-1}\,\mathbf{Q}_C^{-1} \end{bmatrix}, \tag{15}$$

where $\Delta t_k = t_{k+1} - t_k$. Using the relationship between local and global state variables, we can re-write the prior error term in terms of global state variables as [8]

$$\mathbf{e}_{p,k} = \begin{bmatrix} \ln\big(\mathbf{T}_{k+1}\mathbf{T}_k^{-1}\big)^{\vee} - \Delta t_k\,\boldsymbol{\varpi}_k \\ \boldsymbol{\mathcal{J}}\big(\boldsymbol{\xi}_k(t_{k+1})\big)^{-1}\,\boldsymbol{\varpi}_{k+1} - \boldsymbol{\varpi}_k \end{bmatrix}. \tag{16}$$
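The closed-form inverse used in the prior cost can be checked numerically against the well-known WNOA covariance. Below is a minimal sketch (one DOF with a scalar power spectral density, an assumption for illustration only; the function names are ours):

```python
import numpy as np

def Q_wnoa(dt, Qc=1.0):
    """WNOA covariance accumulated over an interval dt (scalar Qc for illustration)."""
    return Qc * np.array([[dt**3 / 3.0, dt**2 / 2.0],
                          [dt**2 / 2.0, dt]])

def Q_inv_wnoa(dt, Qc_inv=1.0):
    """Closed-form inverse covariance for the WNOA prior cost terms."""
    return Qc_inv * np.array([[12.0 / dt**3, -6.0 / dt**2],
                              [-6.0 / dt**2,  4.0 / dt]])

# Sanity check: the closed form matches the matrix inverse of the covariance.
dt = 0.1
assert np.allclose(Q_wnoa(dt) @ Q_inv_wnoa(dt), np.eye(2))
```

Having the inverse in closed form avoids a matrix inversion per prior cost term during optimization.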
Each measurement cost term is

$$J_{m,j} = \rho\big(u_j\big), \tag{17}$$

where $\rho(\cdot)$ is the Geman-McClure robust cost [16], and each $u_j$ is a whitened error norm. Given a point $\mathbf{q}_j$ measured at time $t_j$, let $\mathbf{p}_j$ be its matched point, expressed in the reference frame. Define a measurement error term:

$$\mathbf{e}_j = \mathbf{D}\,\big(\mathbf{p}_j - \mathbf{T}(t_j)\,\mathbf{q}_j\big), \tag{18}$$

where $\mathbf{D}$ is a projection matrix. If $\mathbf{p}_j$ lies on a plane, then we formulate a point-to-plane whitened error norm:

$$u_j = \frac{1}{w_j}\,\mathbf{n}_j^{T}\,\mathbf{e}_j, \tag{19}$$

where $\mathbf{n}_j$ is the surface normal of $\mathbf{p}_j$ and $w_j$ is a scale factor. If $\mathbf{p}_j$ does not lie on a plane, we formulate a point-to-point whitened error norm:

$$u_j = \sqrt{\mathbf{e}_j^{T}\,\mathbf{R}_j^{-1}\,\mathbf{e}_j}, \tag{20}$$

where $\mathbf{R}_j$ is the associated measurement covariance.

Our lidar odometry algorithm utilizes sliding-window optimization, and runs in an iterative fashion where matched pairs of points are found in each iteration. Please refer to our previous work [25] for further details on our odometry pipeline, such as point matching and keypoint selection.

III-D Querying the Trajectory

Our formulation allows us to incorporate measurements at any time along the trajectory, not just at timesteps kept in the state vector as for discrete-time methods. Suppose we have a measurement at time $t_j$, as in (18), and that $t_k \le t_j < t_{k+1}$, where $t_k$ and $t_{k+1}$ are knot times in the state. We can interpolate for the state at $t_j$ using results from [8]:

$$\boldsymbol{\gamma}_k(t_j) = \boldsymbol{\Lambda}(t_j)\,\boldsymbol{\gamma}_k(t_k) + \boldsymbol{\Psi}(t_j)\,\boldsymbol{\gamma}_k(t_{k+1}), \tag{21}$$

where $\boldsymbol{\Lambda}(t_j)$ and $\boldsymbol{\Psi}(t_j)$ are [7]

$$\boldsymbol{\Psi}(t_j) = \mathbf{Q}_{t_j}\,\boldsymbol{\Phi}(t_{k+1}, t_j)^{T}\,\mathbf{Q}_{k+1}^{-1}, \quad \boldsymbol{\Lambda}(t_j) = \boldsymbol{\Phi}(t_j, t_k) - \boldsymbol{\Psi}(t_j)\,\boldsymbol{\Phi}(t_{k+1}, t_k). \tag{22}$$
Again, using our knowledge of the relationship between local and global state variables as in (13), we can re-formulate (21) using global state variables. While interpolating for the body-centric velocity at an arbitrary time might be of interest to certain applications, for lidar-only odometry we are mainly interested in pose interpolation:

$$\mathbf{T}(t_j) = \exp\Big(\big(\boldsymbol{\Lambda}_{12}(t_j)\,\boldsymbol{\varpi}_k + \boldsymbol{\Psi}_{11}(t_j)\,\ln\big(\mathbf{T}_{k+1}\mathbf{T}_k^{-1}\big)^{\vee} + \boldsymbol{\Psi}_{12}(t_j)\,\boldsymbol{\mathcal{J}}\big(\boldsymbol{\xi}_k(t_{k+1})\big)^{-1}\boldsymbol{\varpi}_{k+1}\big)^{\wedge}\Big)\,\mathbf{T}_k, \tag{23}$$

where $\boldsymbol{\Lambda}_{12}(t_j)$, $\boldsymbol{\Psi}_{11}(t_j)$, and $\boldsymbol{\Psi}_{12}(t_j)$ are sub-blocks of $\boldsymbol{\Lambda}(t_j)$ and $\boldsymbol{\Psi}(t_j)$. This is a principled approach for querying the trajectory that comes directly from standard GP interpolation [26]. It can be seen that, given a measurement at $t_j$ with $t_k \le t_j < t_{k+1}$, the result in (23) allows updates to temporally adjacent state variables at $t_k$ and $t_{k+1}$ in the optimization process.

IV Estimator Bias

As shown in Section VI (Figures 4, 5, 7), there are noticeable biases in our baseline estimator, particularly in the z, roll, and pitch degrees of freedom. See Figure 2 for the coordinate system for our estimator.

Fig. 2: The coordinate system for our estimator. Roll, pitch, and yaw are rotations about the x, y, and z axes, respectively.

There could be many sources that might cause the estimated trajectory to be biased, such as poor sensor calibration, or choosing a measurement covariance that does not reflect the sensor noise characteristics. In this paper, we focus on a very specific source of estimator bias, which results from the motion prior being insufficient to represent the underlying continuous-time trajectory.

Consider a very simple estimation problem in which a robot, initially stationary at time $t_1$, travels from $\mathbf{T}_1$ at $t_1$ to $\mathbf{T}_2$ at $t_2$ under constant acceleration. The robot only travels forward, therefore motion only occurs in the $x$ direction. The robot measures a single point at $t_2$, which is matched against a point measured at $t_1$. Keeping $\mathbf{T}_1$ and $\boldsymbol{\varpi}_1$ fixed, the state we wish to estimate is

$$\mathbf{x} = \left\{\mathbf{T}_2,\, \boldsymbol{\varpi}_2\right\}. \tag{24}$$
We can define the following ground truth quantities:


where $a$ is the acceleration in the $x$ direction. We can define the following measurement error equation and the associated measurement Jacobian [24]:


where the $\odot$ operator is defined in [24]. We also have a WNOA prior error term that can be constructed from (16). For simplicity, we assume identity covariance matrices and that the measurement is noise-free. Initializing the state variables at ground truth, we have a zero measurement error, as the measurement is noise-free. However, the prior error is not zero, since the motion is not constant-velocity. Performing Gauss-Newton for one iteration using the perturbation scheme in [8] (Equation (10)), we have:


Equation (27) shows that our simple problem results in perturbations to degrees of freedom where there is no motion (z, roll, and pitch), effectively creating a bias. Moreover, the perturbations to these DOFs depend on the Cartesian coordinates of the transformed point.

We observe that when the motion prior cannot sufficiently describe the underlying trajectory, such as when the prior mean is constant-velocity but the trajectory is constant-acceleration, the estimator will be biased in certain degrees of freedom. In particular, the induced bias is a function of the Cartesian coordinates of points. The bias stems from the optimizer's desire to keep the overall cost low; the prior cost is made smaller by increasing the measurement cost in the overall objective function (8).
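The core of the argument can be illustrated in one dimension, without any SE(3) machinery. The snippet below (an illustrative sketch with a hypothetical acceleration value, not the actual experiment) evaluates the WNOA prior error of (12) at the ground-truth states of a constant-acceleration trajectory:

```python
import numpy as np

a = 2.0                                              # hypothetical constant acceleration
t1, t2 = 0.0, 1.0
state = lambda t: np.array([0.5 * a * t**2, a * t])  # ground truth [position, velocity]

Phi = np.array([[1.0, t2 - t1], [0.0, 1.0]])         # WNOA transition over [t1, t2]
e_prior = state(t2) - Phi @ state(t1)                # prior error evaluated at ground truth
print(e_prior)                                       # nonzero in both components
```

Because the prior cost is nonzero at ground truth, the optimizer trades prior cost against measurement cost, pulling the estimate away from the true trajectory.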

Equation (27) is computed assuming the robot has forward acceleration. However, a similar case can be made for motion with angular acceleration, such as when initiating a turn.

V White-Noise-On-Jerk Motion Prior

Here we derive a white-noise-on-jerk motion prior. Instead of modelling the acceleration as a zero-mean, white-noise Gaussian process as in the case of a WNOA prior [8], we now explicitly estimate the following state:

$$\mathbf{x}(t) = \left\{\mathbf{T}(t),\, \boldsymbol{\varpi}(t),\, \dot{\boldsymbol{\varpi}}(t)\right\}, \tag{28}$$

where $\dot{\boldsymbol{\varpi}}(t)$ is the body-centric acceleration.

Extending the idea of local pose variables as presented in Section III-B, we can define a sequence of local white-noise-on-jerk priors as a LTI SDE in the form of (1), with local state

$$\boldsymbol{\gamma}_k(t) = \begin{bmatrix} \boldsymbol{\xi}_k(t) \\ \dot{\boldsymbol{\xi}}_k(t) \\ \ddot{\boldsymbol{\xi}}_k(t) \end{bmatrix}. \tag{29}$$

We now have white noise on the third derivative (jerk) of $\boldsymbol{\xi}_k(t)$, where $\dddot{\boldsymbol{\xi}}_k(t) = \mathbf{w}(t)$. For the WNOJ prior, the state transition function is now

$$\boldsymbol{\Phi}(t, s) = \begin{bmatrix} \mathbf{1} & (t - s)\,\mathbf{1} & \frac{1}{2}(t - s)^2\,\mathbf{1} \\ \mathbf{0} & \mathbf{1} & (t - s)\,\mathbf{1} \\ \mathbf{0} & \mathbf{0} & \mathbf{1} \end{bmatrix}, \tag{30}$$

and the covariance matrix can be computed as

$$\mathbf{Q}_k = \begin{bmatrix} \frac{1}{20}\Delta t_k^{5}\,\mathbf{Q}_C & \frac{1}{8}\Delta t_k^{4}\,\mathbf{Q}_C & \frac{1}{6}\Delta t_k^{3}\,\mathbf{Q}_C \\ \frac{1}{8}\Delta t_k^{4}\,\mathbf{Q}_C & \frac{1}{3}\Delta t_k^{3}\,\mathbf{Q}_C & \frac{1}{2}\Delta t_k^{2}\,\mathbf{Q}_C \\ \frac{1}{6}\Delta t_k^{3}\,\mathbf{Q}_C & \frac{1}{2}\Delta t_k^{2}\,\mathbf{Q}_C & \Delta t_k\,\mathbf{Q}_C \end{bmatrix}. \tag{31}$$

The inverse covariance matrix is then

$$\mathbf{Q}_k^{-1} = \begin{bmatrix} 720\,\Delta t_k^{-5}\,\mathbf{Q}_C^{-1} & -360\,\Delta t_k^{-4}\,\mathbf{Q}_C^{-1} & 60\,\Delta t_k^{-3}\,\mathbf{Q}_C^{-1} \\ -360\,\Delta t_k^{-4}\,\mathbf{Q}_C^{-1} & 192\,\Delta t_k^{-3}\,\mathbf{Q}_C^{-1} & -36\,\Delta t_k^{-2}\,\mathbf{Q}_C^{-1} \\ 60\,\Delta t_k^{-3}\,\mathbf{Q}_C^{-1} & -36\,\Delta t_k^{-2}\,\mathbf{Q}_C^{-1} & 9\,\Delta t_k^{-1}\,\mathbf{Q}_C^{-1} \end{bmatrix}. \tag{32}$$
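The WNOJ blocks above can be checked numerically. Below is a minimal sketch (one DOF with a scalar power spectral density, for illustration only; function names are ours) verifying that the closed-form inverse matches the covariance, and that the transition function propagates the mean with constant acceleration:

```python
import numpy as np

def transition_wnoj(dt):
    """WNOJ state transition for one DOF, state [xi, xi_dot, xi_ddot]."""
    return np.array([[1.0, dt, 0.5 * dt**2],
                     [0.0, 1.0, dt],
                     [0.0, 0.0, 1.0]])

def Q_wnoj(dt, Qc=1.0):
    """WNOJ covariance accumulated over dt (scalar Qc)."""
    return Qc * np.array([[dt**5 / 20.0, dt**4 / 8.0, dt**3 / 6.0],
                          [dt**4 / 8.0,  dt**3 / 3.0, dt**2 / 2.0],
                          [dt**3 / 6.0,  dt**2 / 2.0, dt]])

def Q_inv_wnoj(dt, Qc_inv=1.0):
    """Closed-form inverse covariance of the WNOJ prior."""
    return Qc_inv * np.array([[ 720.0 / dt**5, -360.0 / dt**4,  60.0 / dt**3],
                              [-360.0 / dt**4,  192.0 / dt**3, -36.0 / dt**2],
                              [  60.0 / dt**3,  -36.0 / dt**2,   9.0 / dt]])

# Sanity checks: the closed form is the inverse of the covariance, and the
# prior mean carries constant acceleration forward.
dt = 0.1
assert np.allclose(Q_wnoj(dt) @ Q_inv_wnoj(dt), np.eye(3))
assert np.allclose(transition_wnoj(1.0) @ [0.0, 0.0, 2.0], [1.0, 2.0, 2.0])
```

Note the coefficient growth in the inverse covariance (e.g., the $720\,\Delta t_k^{-5}$ block) compared with the WNOA case; this is relevant to the hyperparameter sensitivity discussed in Section VI-D.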
Figure 1 shows trajectories sampled from a white-noise-on-jerk prior distribution where the prior mean is constant-acceleration, compared with trajectories sampled from a white-noise-on-acceleration prior distribution where the prior mean is constant-velocity. We argue that the WNOJ prior is more suitable for representing motion with non-zero acceleration trajectory sections, such as in urban driving.

V-A Prior Error Term

In local pose variables, the prior error term is the same as in (12). We then wish to express the prior error in terms of $\mathbf{T}_k$, $\boldsymbol{\varpi}_k$, and $\dot{\boldsymbol{\varpi}}_k$. The relationships between $\boldsymbol{\xi}_k(t)$ and $\dot{\boldsymbol{\xi}}_k(t)$ and the global state variables are shown in Equations (5) and (7). To express $\ddot{\boldsymbol{\xi}}_k(t)$ in terms of global state variables, we differentiate (7):

$$\ddot{\boldsymbol{\xi}}_k(t) = \frac{d}{dt}\Big(\boldsymbol{\mathcal{J}}\big(\boldsymbol{\xi}_k(t)\big)^{-1}\Big)\,\boldsymbol{\varpi}(t) + \boldsymbol{\mathcal{J}}\big(\boldsymbol{\xi}_k(t)\big)^{-1}\,\dot{\boldsymbol{\varpi}}(t). \tag{33}$$
We can write the inverse left Jacobian of SE(3) as a power-series expansion [16]:

$$\boldsymbol{\mathcal{J}}(\boldsymbol{\xi})^{-1} = \sum_{n=0}^{\infty} \frac{B_n}{n!}\,\big(\boldsymbol{\xi}^{\curlywedge}\big)^{n}, \tag{34}$$

where the coefficients $B_n$ are the Bernoulli numbers. The $\curlywedge$ operator is defined as [24] [16]

$$\boldsymbol{\xi}^{\curlywedge} = \begin{bmatrix} \boldsymbol{\rho} \\ \boldsymbol{\phi} \end{bmatrix}^{\curlywedge} = \begin{bmatrix} \boldsymbol{\phi}^{\wedge} & \boldsymbol{\rho}^{\wedge} \\ \mathbf{0} & \boldsymbol{\phi}^{\wedge} \end{bmatrix}. \tag{35}$$
It can be shown easily that $\boldsymbol{\xi}^{\curlywedge}\boldsymbol{\xi} = \mathbf{0}$, and therefore

$$\boldsymbol{\mathcal{J}}(\boldsymbol{\xi})^{-1}\,\boldsymbol{\xi} = \boldsymbol{\xi}. \tag{36}$$
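The identity $\boldsymbol{\xi}^{\curlywedge}\boldsymbol{\xi} = \mathbf{0}$ is straightforward to verify numerically; below is a small sketch of the $\curlywedge$ operator (helper names are ours):

```python
import numpy as np

def hat(v):
    """3x3 skew-symmetric matrix such that hat(v) @ u equals np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def curly_hat(xi):
    """6x6 'curly wedge' operator for xi = [rho; phi]."""
    rho, phi = xi[:3], xi[3:]
    return np.block([[hat(phi), hat(rho)],
                     [np.zeros((3, 3)), hat(phi)]])

# curly_hat(xi) @ xi = 0, since phi x rho + rho x phi = 0 and phi x phi = 0.
xi = np.array([0.3, -0.1, 0.2, 0.05, 0.4, -0.2])
assert np.allclose(curly_hat(xi) @ xi, 0.0)
```

The identity kills every term of the power series acting on $\boldsymbol{\xi}$ except the leading one, which is what makes the simplification above possible.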
As it turns out, we cannot express $\frac{d}{dt}\big(\boldsymbol{\mathcal{J}}(\boldsymbol{\xi}_k(t))^{-1}\big)$ analytically in terms of $\mathbf{T}(t)$, $\boldsymbol{\varpi}(t)$, or $\dot{\boldsymbol{\varpi}}(t)$, which are familiar terms with which to work. We instead resort to the first-order approximation [16] that

$$\boldsymbol{\mathcal{J}}(\boldsymbol{\xi})^{-1} \approx \mathbf{1} - \frac{1}{2}\,\boldsymbol{\xi}^{\curlywedge}. \tag{37}$$

Our approximation is reasonable as long as $\boldsymbol{\xi}$ is small, which it will be in our case. Under this approximation, we have

$$\frac{d}{dt}\Big(\boldsymbol{\mathcal{J}}\big(\boldsymbol{\xi}_k(t)\big)^{-1}\Big) \approx -\frac{1}{2}\,\dot{\boldsymbol{\xi}}_k(t)^{\curlywedge}, \tag{38}$$

and finally

$$\ddot{\boldsymbol{\xi}}_k(t) \approx -\frac{1}{2}\,\Big(\boldsymbol{\mathcal{J}}\big(\boldsymbol{\xi}_k(t)\big)^{-1}\boldsymbol{\varpi}(t)\Big)^{\curlywedge}\,\boldsymbol{\varpi}(t) + \boldsymbol{\mathcal{J}}\big(\boldsymbol{\xi}_k(t)\big)^{-1}\,\dot{\boldsymbol{\varpi}}(t). \tag{39}$$

The local state variables can then be written as

$$\boldsymbol{\gamma}_k(t_k) = \begin{bmatrix} \mathbf{0} \\ \boldsymbol{\varpi}_k \\ \dot{\boldsymbol{\varpi}}_k \end{bmatrix}, \quad \boldsymbol{\gamma}_k(t_{k+1}) = \begin{bmatrix} \ln\big(\mathbf{T}_{k+1}\mathbf{T}_k^{-1}\big)^{\vee} \\ \boldsymbol{\mathcal{J}}_{k+1}^{-1}\,\boldsymbol{\varpi}_{k+1} \\ -\frac{1}{2}\big(\boldsymbol{\mathcal{J}}_{k+1}^{-1}\boldsymbol{\varpi}_{k+1}\big)^{\curlywedge}\,\boldsymbol{\varpi}_{k+1} + \boldsymbol{\mathcal{J}}_{k+1}^{-1}\,\dot{\boldsymbol{\varpi}}_{k+1} \end{bmatrix}, \tag{40}$$

where $\boldsymbol{\mathcal{J}}_{k+1}^{-1}$ denotes $\boldsymbol{\mathcal{J}}\big(\boldsymbol{\xi}_k(t_{k+1})\big)^{-1}$, and we have made use of the identity $\boldsymbol{\xi}^{\curlywedge}\boldsymbol{\zeta} = -\boldsymbol{\zeta}^{\curlywedge}\boldsymbol{\xi}$ [16].

In terms of global state variables, the prior error term is

$$\mathbf{e}_{p,k} = \begin{bmatrix} \ln\big(\mathbf{T}_{k+1}\mathbf{T}_k^{-1}\big)^{\vee} - \Delta t_k\,\boldsymbol{\varpi}_k - \frac{1}{2}\Delta t_k^{2}\,\dot{\boldsymbol{\varpi}}_k \\ \boldsymbol{\mathcal{J}}_{k+1}^{-1}\,\boldsymbol{\varpi}_{k+1} - \boldsymbol{\varpi}_k - \Delta t_k\,\dot{\boldsymbol{\varpi}}_k \\ -\frac{1}{2}\big(\boldsymbol{\mathcal{J}}_{k+1}^{-1}\boldsymbol{\varpi}_{k+1}\big)^{\curlywedge}\,\boldsymbol{\varpi}_{k+1} + \boldsymbol{\mathcal{J}}_{k+1}^{-1}\,\dot{\boldsymbol{\varpi}}_{k+1} - \dot{\boldsymbol{\varpi}}_k \end{bmatrix}. \tag{41}$$

Suppose we assume the trajectory has zero acceleration (which is assumed by a prior mean that is constant-velocity), and also make the assumption that $\boldsymbol{\mathcal{J}}_{k+1}^{-1} \approx \mathbf{1}$. In this case, the last component in (41) becomes zero, and the first two components become identical to the WNOA prior as in (16); we have essentially recovered the prior error equation for the WNOA prior.

V-B Querying the Trajectory

We start from the same interpolation equation using local state variables (21). For the WNOJ prior, the interpolation coefficients $\boldsymbol{\Lambda}(t_j)$ and $\boldsymbol{\Psi}(t_j)$ can be computed from (22), using the transition function and covariance matrix from (30) and (31).

Substituting with global state variables for the WNOJ prior using (40), the pose interpolation equation is

$$\mathbf{T}(t_j) = \exp\Big(\big(\boldsymbol{\Lambda}_{12}(t_j)\,\boldsymbol{\varpi}_k + \boldsymbol{\Lambda}_{13}(t_j)\,\dot{\boldsymbol{\varpi}}_k + \boldsymbol{\Psi}_{11}(t_j)\,\ln\big(\mathbf{T}_{k+1}\mathbf{T}_k^{-1}\big)^{\vee} + \boldsymbol{\Psi}_{12}(t_j)\,\boldsymbol{\mathcal{J}}_{k+1}^{-1}\boldsymbol{\varpi}_{k+1} + \boldsymbol{\Psi}_{13}(t_j)\,\big(-\tfrac{1}{2}\big(\boldsymbol{\mathcal{J}}_{k+1}^{-1}\boldsymbol{\varpi}_{k+1}\big)^{\curlywedge}\boldsymbol{\varpi}_{k+1} + \boldsymbol{\mathcal{J}}_{k+1}^{-1}\dot{\boldsymbol{\varpi}}_{k+1}\big)\big)^{\wedge}\Big)\,\mathbf{T}_k, \tag{42}$$

where $\boldsymbol{\Lambda}_{12}(t_j)$, $\boldsymbol{\Lambda}_{13}(t_j)$, $\boldsymbol{\Psi}_{11}(t_j)$, $\boldsymbol{\Psi}_{12}(t_j)$, and $\boldsymbol{\Psi}_{13}(t_j)$ are sub-blocks of $\boldsymbol{\Lambda}(t_j)$ and $\boldsymbol{\Psi}(t_j)$.

Again, if we assume that the accelerations are zero and $\boldsymbol{\mathcal{J}}_{k+1}^{-1} \approx \mathbf{1}$, the terms with coefficients $\boldsymbol{\Lambda}_{13}(t_j)$ and $\boldsymbol{\Psi}_{13}(t_j)$ become zero. Similar to the case with the prior error term, we essentially recover the pose interpolation equation for the WNOA prior as in (23).

VI Experimental Validation

To evaluate the white-noise-on-jerk prior we derived, we formulated a variation of our continuous-time lidar odometry estimator that employs the WNOJ prior. The new estimator is evaluated on various Velodyne lidar datasets, and the odometry errors are compared against the baseline estimator presented in Section III, which employs a WNOA prior. To ensure a fair comparison, all other aspects of STEAM such as constructing measurement terms, and the other components of lidar odometry such as the point matching method, are kept the same. Any differences between the two estimators arise solely from their different motion priors. Our evaluations are odometric, and we make no attempt to use mapping or loop-closure to reduce estimation error.

The power spectral density matrix $\mathbf{Q}_C$, which determines the inverse covariance matrix for the prior cost terms as in Equations (15) and (32), is the only hyperparameter for our lidar odometry algorithm. For both types of motion priors, we tuned $\mathbf{Q}_C$ to achieve the best performance on the training set (sequences 0 to 10) of the KITTI odometry benchmark [27], which has accurate ground truth. $\mathbf{Q}_C$ was then kept the same when evaluating on all other datasets.

VI-A KITTI Odometry Benchmark

Sequences 0 to 10 are the training sequences of KITTI. Sequences 11 to 21 are the test sequences, for which the ground truth is not publicly available. The KITTI benchmark evaluates percentage translation errors across path segments of lengths 100 to 800 meters, and an average over all path segments is computed. A total error averaged over path segments evaluated for all sequences is also reported.
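For reference, the segment-wise evaluation can be sketched as follows. This is a simplified, hypothetical version (2D positions, displacement-vector error rather than the benchmark's full rigid alignment of pose pairs) meant only to convey how errors are accumulated over fixed-length segments:

```python
import numpy as np

def segment_error(gt_xy, est_xy, seg_len):
    """Average translation error (%) over all path segments of length seg_len,
    with segment endpoints found by cumulative ground-truth arc length."""
    steps = np.linalg.norm(np.diff(gt_xy, axis=0), axis=1)
    dist = np.concatenate([[0.0], np.cumsum(steps)])
    errs = []
    for i in range(len(gt_xy)):
        j = np.searchsorted(dist, dist[i] + seg_len)  # first frame >= seg_len away
        if j >= len(gt_xy):
            break
        gt_disp = gt_xy[j] - gt_xy[i]
        est_disp = est_xy[j] - est_xy[i]
        errs.append(np.linalg.norm(est_disp - gt_disp) / seg_len * 100.0)
    return float(np.mean(errs))

# A 1% odometry scale error on a straight 300 m path yields roughly a 1% segment error.
gt = np.stack([np.arange(301.0), np.zeros(301)], axis=1)
print(segment_error(gt, 1.01 * gt, 100.0))  # ~1.0
```

The actual benchmark additionally averages over multiple segment lengths and accounts for rotational error; the sketch above captures only the translational, single-length case.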

Fig. 3: Odometry error for the baseline estimator which employs the WNOA prior, and for the new estimator which employs the WNOJ prior.

The baseline estimator that employs a WNOA prior achieved an overall error of on the training set, and on the test set. Our new estimator that employs a WNOJ prior achieved an overall error of on the training set, and on the test set. A detailed break-down of the odometry error for various path segment lengths is presented in Figure 3. The new estimator using WNOJ prior outperforms the baseline estimator for almost all path segment lengths. Figure 4 shows a sequence where the odometry biases are noticeably reduced when we use the WNOJ prior.

Fig. 4: 3D plots of odometry estimates for sequence 10: baseline estimator using WNOA prior (black) vs. new estimator using WNOJ prior (blue) when compared against ground truth (red).

Our baseline odometry algorithm is already fairly accurate despite using a WNOA motion prior, as it ranked 3rd on the KITTI odometry leader board at the time of submission among methods that use lidar only [25]. By choosing a motion prior that we believe is more representative of real-world vehicle trajectories, we achieved consistent improvements over this baseline. Currently, all lidar-only methods that rank ahead of our WNOJ estimator on the KITTI leader board have a mapping or loop-closure aspect, whereas our estimator is strictly odometric.

The lidar point-clouds from the KITTI odometry benchmark were post-processed by the dataset authors to compensate for motion distortion. As a result, all points in a sensor revolution can be treated as being measured at exactly the same time, and we do not need to rely on the motion prior for interpolating the pose as in Equations (23) and (42). The prior cost terms (11), however, are still used to smooth the trajectory. Nevertheless, for undistorted data such as the KITTI dataset, it is not necessary to use a continuous-time estimation framework. Even though the new estimator with the WNOJ prior outperformed our baseline estimator on the KITTI benchmark, we argue that datasets with motion-distorted point-clouds are more suitable for comparing continuous-time methods.

Fig. 5: 3D plots of odometry estimates for sequence 0 (top) and sequence 4 (bottom) of the University of Toronto dataset, shown from the same perspective. Black is odometry using the baseline estimator with the WNOA prior, and blue is the new estimator using the WNOJ prior. Left: due to biases, the odometry does not overlap when the vehicle travels back along a path it has traversed before (circled). Right: the biases are significantly reduced when using the WNOJ prior.

VI-B University of Toronto Dataset

Fig. 6: The Buick test vehicle used for data collection. The vehicle is equipped with a Velodyne HDL-64E lidar, and an Applanix POS-LV system.

A dataset was collected by our test vehicle (Figure 6) along different routes around the University of Toronto (U of T). This resulted in 9 sequences of Velodyne data, each at least 1.7 km in distance. 6-DOF ground truth is available via an on-board Applanix positioning and orientation system (POS). For consistency, we evaluate odometry errors using the same method as the KITTI benchmark, where translational errors are evaluated across path segments of lengths 100 to 800 meters. This is a motion-distorted lidar dataset, as we do not employ external sensors or ground truth to compensate the point-clouds. We rely solely on the continuous-time estimator for handling motion distortion.

Sequence | Distance (km) | Baseline estimator with WNOA prior (%) | New estimator with WNOJ prior (%)
---|---|---|---
0 | 3.34 | 1.5326 | 1.2663
1 | 2.21 | 1.3706 | 1.2797
2 | 3.04 | 1.3967 | 1.3485
3 | 2.91 | 1.7980 | 1.5844
4 | 2.99 | 1.6100 | 1.4307
5 | 1.71 | 2.4319 | 2.1696
6 | 3.48 | 2.1322 | 2.0540
7 | 3.04 | 1.2932 | 1.2122
8 | 2.92 | 1.6988 | 1.5327
overall | 25.63 | 1.6736 | 1.5235

TABLE I: Odometry errors for the baseline estimator using the WNOA prior and the new estimator using the WNOJ prior, evaluated on the U of T dataset.

The U of T dataset features driving in urban scenes at generally low speeds. However, the vehicle needs to constantly slow down for traffic, or take a turn at an intersection. Since the vehicle's trajectory contains many sections where the velocity changes, this dataset is much more suitable for motion estimation using a WNOJ prior than a WNOA prior.

The results for the baseline estimator using the WNOA prior and the new estimator using the WNOJ prior are compared in Table I. The WNOJ prior resulted in smaller odometry error for all sequences, and an overall reduction in error of 9.0% when compared against the WNOA prior. Figure 5 shows comparison plots of the estimated trajectory using the WNOA prior and the WNOJ prior for two sequences from the U of T dataset. The estimated trajectory using the WNOJ prior is significantly more accurate than using the WNOA prior.

Again, since the lidar data are motion-distorted, we need to interpolate the pose for each point measurement. We argue that the WNOJ prior (42) offers a more suitable interpolation scheme than the WNOA prior (23). The U of T dataset saw a greater reduction in error from using the WNOJ prior than the KITTI dataset, which makes no use of the interpolation scheme.

VI-C Richmond Hill Dataset

Fig. 7: 3D plots of odometry estimates for sequence 1 of the Richmond Hill dataset: baseline estimator using WNOA prior (black) vs. new estimator using WNOJ prior (blue) when compared against ground truth (red).

A dataset was collected in the city of Richmond Hill, north of Toronto, using our test vehicle. This resulted in three long sequences totalling more than 60 km. The Richmond Hill dataset features driving in suburban areas and on highways, which contain less useful geometry and structure. Moreover, the test vehicle was driving at high speeds on highways, making this a highly challenging dataset. Similar to the U of T dataset, the point-clouds are motion-distorted.

Sequence | Distance (km) | Baseline estimator with WNOA prior (%) | New estimator with WNOJ prior (%)
---|---|---|---
0 (suburban) | 17.91 | 1.5887 | 1.5094
1 (urban) | 7.49 | 2.3229 | 1.9363
2 (highway) | 35.01 | 2.3449 | 2.1627
overall | 60.41 | 2.1180 | 1.9409

TABLE II: Errors for the baseline estimator using the WNOA prior and the new estimator using the WNOJ prior, evaluated on the Richmond Hill dataset.

The results are summarized in Table II. The new estimator using the WNOJ prior outperformed the baseline estimator for all sequences, resulting in a reduction of the overall error by 8.4%. Figure 7 shows a comparison plot of odometry estimates using the WNOJ prior and the WNOA prior, where the odometry from the new estimator is noticeably less biased than the odometry from the baseline estimator, when compared against ground truth.

VI-D Remarks

We found that the new estimator using the WNOJ prior is more sensitive to $\mathbf{Q}_C$ than the baseline estimator using the WNOA prior. This is expected given the difference in the coefficients of each term in the inverse covariance matrix between the WNOJ prior (32) and the WNOA prior (15). Despite this, we achieved an improvement on all datasets using $\mathbf{Q}_C$ tuned on the KITTI training set alone. We plan on releasing our datasets for public use in the future.

VII Conclusion and Future Work

In this paper, we showed that in continuous-time trajectory estimation, a source of estimator bias can arise when the motion prior cannot sufficiently represent the underlying trajectory. The main contribution of this paper is the derivation of a white-noise-on-jerk motion prior for continuous-time trajectory estimation on SE(3). Through experimental validation, we showed that the new prior outperforms the existing white-noise-on-acceleration prior employed by STEAM on various lidar datasets, both with and without motion distortion.

Our new formulation of STEAM using the WNOJ prior now has accelerations in the state. Therefore, an extension would be to formulate an estimator that incorporates acceleration measurements from an inertial measurement unit (IMU) directly, rather than pre-integrating to a fixed timestep as is done in many existing inertial estimators.


The authors would like to thank Applanix Corporation and the Natural Sciences and Engineering Research Council of Canada (NSERC) for supporting this work, and General Motors (GM) Canada for donating the test vehicle.


  • [1] S.-H. Jung and C. J. Taylor, “Camera Trajectory Estimation Using Inertial Sensor Measurements and Structure from Motion Results,” in Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference on, vol. 2.   IEEE, 2001, pp. II–II.
  • [2] P. Furgale, T. D. Barfoot, and G. Sibley, “Continuous-Time Batch Estimation Using Temporal Basis Functions,” in Robotics and Automation (ICRA), 2012 IEEE International Conference on.   IEEE, 2012, pp. 2088–2095.
  • [3] S. Anderson and T. D. Barfoot, “Towards Relative Continuous-Time SLAM,” in Robotics and Automation (ICRA), 2013 IEEE International Conference on.   IEEE, 2013, pp. 1033–1040.
  • [4] S. Lovegrove, A. Patron-Perez, and G. Sibley, “Spline Fusion: A Continuous-Time Representation for Visual-Inertial Fusion with Application to Rolling Shutter Cameras.” in BMVC, 2013.
  • [5] R. Dubé, H. Sommer, A. Gawel, M. Bosse, and R. Siegwart, “Non-uniform Sampling Strategies for Continuous Correction Based Trajectory Estimation,” in Robotics and Automation (ICRA), 2016 IEEE International Conference on.   IEEE, 2016, pp. 4792–4798.
  • [6] C. H. Tong, P. Furgale, and T. D. Barfoot, “Gaussian Process Gauss-Newton for Non-parametric Simultaneous Localization and Mapping,” The International Journal of Robotics Research, vol. 32, no. 5, pp. 507–525, 2013.
  • [7] T. D. Barfoot, C. H. Tong, and S. Särkkä, “Batch Continuous-Time Trajectory Estimation as Exactly Sparse Gaussian Process Regression.” in Robotics: Science and Systems, 2014.
  • [8] S. Anderson and T. D. Barfoot, “Full STEAM ahead: Exactly Sparse Gaussian Process Regression for Batch Continuous-Time Trajectory Estimation on SE(3),” in Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on.   IEEE, 2015, pp. 157–164.
  • [9] X. Yan, V. Indelman, and B. Boots, “Incremental Sparse GP Regression for Continuous-Time Trajectory Estimation and Mapping,” Robotics and Autonomous Systems, vol. 87, pp. 120–132, 2017.
  • [10] M. Mukadam, J. Dong, F. Dellaert, and B. Boots, “Simultaneous Trajectory Estimation and Planning via Probabilistic Inference,” in Proceedings of Robotics: Science and Systems (RSS), 2017.
  • [11] J. Dong, J. G. Burnham, B. Boots, G. Rains, and F. Dellaert, “4D Crop Monitoring: Spatio-temporal Reconstruction for Agriculture.”
  • [12] M. Warren, M. Paton, K. MacTavish, A. P. Schoellig, and T. D. Barfoot, “Towards Visual Teach and Repeat for GPS-Denied Flight of a Fixed-Wing UAV,” in Field and Service Robotics.   Springer, 2018, pp. 481–498.
  • [13] M. Bosse and R. Zlot, “Continuous 3D Scan-Matching with a Spinning 2D Laser,” in Robotics and Automation, 2009. ICRA’09. IEEE International Conference on.   IEEE, 2009, pp. 4312–4319.
  • [14] R. Zlot and M. Bosse, “Efficient Large-Scale 3D Mobile Mapping and Surface Reconstruction of an Underground Mine,” in Field and service robotics.   Springer, 2014, pp. 479–493.
  • [15] H. Dong and T. D. Barfoot, “Lighting-invariant Visual Odometry Using Lidar Intensity Imagery and Pose Interpolation,” in Field and Service Robotics.   Springer, 2014, pp. 327–342.
  • [16] T. D. Barfoot, State Estimation for Robotics.   Cambridge University Press, 2017.
  • [17] J. Zhang and S. Singh, “LOAM: Lidar Odometry and Mapping in Real-time.” in Robotics: Science and Systems, vol. 2, 2014, p. 9.
  • [18] A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse, “MonoSLAM: Real-time Single Camera SLAM,” IEEE Transactions on Pattern Analysis & Machine Intelligence, no. 6, pp. 1052–1067, 2007.
  • [19] S. Anderson and T. D. Barfoot, “RANSAC for Motion-distorted 3D Visual Sensors,” in Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on.   IEEE, 2013, pp. 2093–2099.
  • [20] J. Hedborg, P.-E. Forssén, M. Felsberg, and E. Ringaby, “Rolling Shutter Bundle Adjustment,” in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on.   IEEE, 2012, pp. 1434–1441.
  • [21] J. T. Jang, S. T. Moon, S. Han, H. C. Gong, G.-H. Choi, I. H. Hwang, and J. Lyou, “Trajectory Generation with Piecewise Constant Acceleration and Tracking Control of a Quadcopter,” in Industrial Technology (ICIT), 2015 IEEE International Conference on.   IEEE, 2015, pp. 530–535.
  • [22] M. Mukadam, J. Dong, X. Yan, F. Dellaert, and B. Boots, “Continuous-Time Gaussian Process Motion Planning via Probabilistic Inference,” arXiv preprint arXiv:1707.07383, 2017.
  • [23] B. Olofsson, J. Antonsson, H. G. Kortier, B. Bernhardsson, A. Robertsson, and R. Johansson, “Sensor Fusion for Robotic Workspace State Estimation,” IEEE/ASME Transactions on Mechatronics, vol. 21, no. 5, pp. 2236–2248, 2016.
  • [24] T. D. Barfoot and P. T. Furgale, “Associating Uncertainty With Three-Dimensional Poses for Use in Estimation Problems,” IEEE Transactions on Robotics, vol. 30, no. 3, pp. 679–693, 2014.
  • [25] T. Y. Tang, D. J. Yoon, F. Pomerleau, and T. D. Barfoot, “Learning a Bias Correction for Lidar-only Motion Estimation,” 15th Conference on Computer and Robot Vision (CRV), 2018.
  • [26] C. E. Rasmussen, “Gaussian Processes in Machine Learning,” in Advanced Lectures on Machine Learning.   Springer, 2004, pp. 63–71.
  • [27] A. Geiger, P. Lenz, and R. Urtasun, “Are We Ready for Autonomous Driving? The KITTI Vision Benchmark Suite,” in Conference on Computer Vision and Pattern Recognition (CVPR), 2012.