I Introduction
Robotics and uncertainty go hand in hand. One of the defining challenges of robotics research is to design uncertainty-resilient behavior that overcomes noise in sensing, actuation, and/or dynamics. This paper studies the problems of simulation and filtering in systems with stochastic dynamics and noisy observations, with a particular interest in cases where the state belief cannot be realistically approximated by a single Gaussian distribution.
Complex multimodal beliefs naturally arise in manipulation tasks, where state or action noise can make the difference between contact and separation or between sticking and sliding. For example, the ordinary task of push-grasping a cup of coffee into your hand in Figure 1 illustrates this naturally occurring multimodality. Dogar and Srinivasa [1] used the observation that a push-grasped object tends to cluster into two clearly distinct outcomes—inside and outside the hand—to plan robust grasp strategies. Multimodality and complex belief distributions have been experimentally observed in a variety of manipulation actions such as planar pushing [2, 3], ground impacts [4], and bin-picking [5].
The main contribution of this paper is GP-SUM, a new algorithm to track complex state beliefs through a dynamic system expressed as a Gaussian process (GP), without the need to either linearize the transition or observation models or rely on unimodal Gaussian approximations of the belief.
GP-SUM operates by sampling the state distribution, so it can be viewed as a sampling-based filter. It also maintains the basic structure of a Bayes filter by exploiting the GP form of the dynamics and observation models to provide a probabilistically sound interpretation of each sample, so it can also be viewed as a GP-Bayes filter.
We compare GP-SUM's performance to existing GP-filtering algorithms such as GP-UKF, GP-ADF, and the GP-Particle Filter on a standard benchmark [6, 7]. GP-SUM shows better filtering results, both after a single filtering step and after multiple steps, under a variety of metrics, and requires significantly fewer samples than standard particle filtering techniques.
We demonstrate the use of GP-SUM to predict the expected distribution of outcomes when pushing an object. Prior experimental work by Bauza and Rodriguez [3] shows that planar pushing produces heteroscedastic and multimodal behavior, i.e., some actions are more deterministic than others, and distributions can break down into separate components. GP-SUM successfully recovers both behaviors when applied to a GP model of planar pushing learned from data. We compare the results against trajectories simulated with less efficient Monte Carlo-based simulations.
Both actions and sensed information determine the shape of the belief distribution. This paper provides an efficient algorithm for tracking distributions tailored to the case where the observation and transition models are expressed as GPs.
The paper is structured as follows: Section II reviews related work; Section III provides background on Bayes filtering and Gaussian processes; Section IV presents the main algorithm, its assumptions, and its time complexity; Section V presents the evaluation and benchmarking; and Section VI discusses GP-SUM's main limitations and possible improvements.
II Related Work
Gaussian processes (GPs) have proved to be a powerful tool to model the dynamics of complex systems [8, 9, 5], and have been applied in different contexts of robotics, including planning and control [10, 11, 12], system identification [8, 13, 14], and filtering [15, 6, 7]. In this work, we study the problem of propagating and filtering the state of a system by providing accurate distributions of the state. When the models for the dynamics and measurements are learned through GP regression, the filtering algorithms are referred to as GP-Bayes filters. Among these algorithms, the most frequently considered are GP-EKF [15], GP-UKF [15], and GP-ADF [6], with GP-ADF regarded as the state of the art.
All these GP-filters rely on the assumption that the state distribution is well captured by a single Gaussian, and depend on several approximations to maintain that Gaussianity. GP-EKF is based on the extended Kalman filter (EKF) and linearizes the GP models to guarantee that the final distributions are Gaussian. GP-UKF is based on the unscented Kalman filter (UKF) and provides a Gaussian distribution for the state using an appropriate set of sigma points that captures its moments. Finally, GP-ADF computes the first two moments of the state distribution by exploiting the structure of GPs, and thus also returns a Gaussian distribution for the state.
GP-SUM, instead, is based on sampling from the state distributions and using Gaussian mixtures to represent these probabilities. This links our algorithm to the classical problem of particle filtering, where each element of the mixture can be seen as a sample with an associated weight and Gaussian. As a result, GP-SUM can be understood as a sampling algorithm that benefits from the parametric structure of Gaussians to simplify its computations and represent the state through weighted Gaussians. Another sampling algorithm for GP-filtering is provided in [15], where the authors propose GP-PF, based on the classical particle filter (PF). However, when compared to GP-UKF or GP-EKF, GP-PF is less reliable and more prone to inconsistent results.

In the broader context of Bayes filtering, where the dynamics and observation models are known, multiple algorithms have been proposed to recover non-Gaussian state distributions. For instance, we can find some resemblance between GP-SUM and the Gaussian Mixture Filter (GMF) [16], the Gaussian Sum Filter (GSF) [17], and Gaussian Sum Particle Filtering (GSPM) [17], all of which use different techniques to propagate the state distributions as sums of Gaussians. GMF considers a Gaussian mixture model to represent the state distribution, but the covariance of each Gaussian is equal and comes from sampling the previous state distribution and computing the covariance of the resulting samples; GP-SUM instead recovers the covariance of each component from the dynamics of the system. GSF is a set of weighted EKFs running in parallel; as a consequence, it requires linearizing the system models, while GP-SUM does not. Finally, GSPM, which has proven superior to GSF, is based on the sequential importance sampling filter (SIS) [17]. GSPM samples from the importance function, defined as the likelihood of a state given an observation, $p(x_t \mid z_t)$. GP-SUM instead does not need to learn this extra mapping to effectively propagate the state distributions.

Other algorithms relevant to GP-SUM are the multi-hypothesis tracking filter (MHT) [18] and the manifold particle filter (MPF) [19]. MHT is designed to solve a data association problem for multiple-target tracking by representing the joint distribution of the targets as a Gaussian mixture. MPF, instead, is a sample-based algorithm that has recently been applied to several manipulation tasks with complex dynamics. MPF exploits the contact manifolds of the system by collapsing the distribution defined by the samples onto the manifold.
An advantage of GP-SUM is that it can be viewed as both a sampling technique and a parametric filter. Therefore, most of the techniques employed in particle filtering are applicable to it. Similarly, GP-SUM can be adapted to special types of GPs, such as heteroscedastic or sparse GPs. For instance, GP-SUM can easily be extended to sparse spectrum Gaussian processes (SSGPs) using the work of Pan et al. [7]. This implies that GP-SUM can be made significantly faster by using sparse GPs for the dynamics and observation models of the system.
III Background on Gaussian process filtering
This work focuses on the classical problem of Bayes filtering where the dynamics and observation models of the system are learned through Gaussian process regression. In this section, we introduce the reader to the concepts of Bayes filtering and Gaussian processes.
III-A Bayes filters
The goal of a Bayes filter is to track the state of the system, $x_t$, in a probabilistic setting. At time $t$, we consider that an action $u_t$ is applied to the system, making its state evolve from $x_{t-1}$ to $x_t$, after which an observation of the new state, $z_t$, is obtained. As a result, a Bayes filter computes the state distribution conditioned on the history of actions and observations, $p(x_t \mid z_{1:t}, u_{1:t})$. This probability is often referred to as the belief of the state at time $t$, $bel(x_t) = p(x_t \mid z_{1:t}, u_{1:t})$.
In general, a Bayes filter is composed of two steps: the prediction update and the measurement or filter update, following the terminology from [20].
Prediction update. Given a model of the system dynamics, $p(x_t \mid x_{t-1}, u_t)$, the prediction update computes the prediction belief, $\overline{bel}(x_t)$, as:

$\overline{bel}(x_t) = \int p(x_t \mid x_{t-1}, u_t)\, bel(x_{t-1})\, dx_{t-1}$  (1)
where $bel(x_{t-1})$ is the belief of the system before applying the action $u_t$. Thus the prediction belief $\overline{bel}(x_t)$ can be understood as the pre-observation distribution of the state at time $t$, while the belief $bel(x_t)$ is the post-observation distribution. In general, the integral in (1) cannot be solved analytically, and different approximations are used to simplify its computation. Among these simplifications, it is common to linearize the dynamics of the system, as is classically done in the EKF, or to directly assume that the prediction belief is Gaussian distributed [20].
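Equation (1) rarely has a closed form, but its effect is easy to picture with a Monte Carlo sketch: draw samples from the belief and push them through the stochastic dynamics. The snippet below is purely illustrative, with a hypothetical linear-Gaussian 1D dynamics standing in for $p(x_t \mid x_{t-1}, u_t)$; it is not the method proposed in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamics_sample(x, u):
    """Hypothetical 1D stochastic dynamics standing in for p(x_t | x_{t-1}, u_t)."""
    return 0.5 * x + u + rng.normal(0.0, 0.1, size=x.shape)

def prediction_update(belief_samples, u):
    """Monte Carlo version of Eq. (1): push belief samples through the dynamics."""
    return dynamics_sample(belief_samples, u)

belief = rng.normal(0.0, 1.0, size=5000)      # samples of bel(x_{t-1})
pred_belief = prediction_update(belief, 0.2)  # samples of the prediction belief
```

For this toy model, the resulting sample mean is close to $0.5\,\mu + u$ and the variance close to $0.25\,\sigma^2 + 0.01$, matching what (1) predicts for linear-Gaussian dynamics.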
Measurement update. Given a new measurement of the state, $z_t$, the belief at time $t$ is obtained by filtering the prediction belief. The belief is recovered using the observation model of the system, $p(z_t \mid x_t)$, and applying Bayes' rule:

$bel(x_t) = \dfrac{p(z_t \mid x_t)\, \overline{bel}(x_t)}{\int p(z_t \mid x_t)\, \overline{bel}(x_t)\, dx_t}$  (2)
This expression usually cannot be solved in closed form, and several approximations are required to estimate the new belief; linearizing the observation model or assuming Gaussianity are again common approaches [20].

Note that the belief at time $t$ can be expressed recursively in terms of the previous belief, $bel(x_{t-1})$, and the transition and observation models:

$bel(x_t) \propto p(z_t \mid x_t) \int p(x_t \mid x_{t-1}, u_t)\, bel(x_{t-1})\, dx_{t-1}$  (3)
We will show in Section IV that the same recursive structure applies to the prediction belief, which is a key element of our algorithm, GP-SUM.
In general, the dynamics and observation models are considered known and given by parametric descriptions. In real systems, however, these models are often unknown, and it is convenient to learn them using non-parametric approaches such as Gaussian processes. This proves especially beneficial when the actual models are complex and parametric approaches do not provide a faithful representation of the system behavior [11, 3].
III-B Gaussian processes
Gaussian processes (GPs) provide a flexible, non-parametric framework for function approximation and regression [21]. In this paper, GPs are used to model both the dynamics of the system and its observation model. There are several advantages to using GPs over traditional parametric models. First, GPs can learn high-fidelity models from noisy data and estimate the intrinsic noise in the system. Moreover, GPs can quantify how certain their predictions are given the available data, hence measuring the quality of the regression: for each input considered, a GP returns a Gaussian distribution over the output space, i.e., the expected output together with its variance.
In classical GPs [21], the noise in the output is assumed to be Gaussian and constant over the input space:

$y = f(x) + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, \sigma^2)$  (4)

where $f(x)$ is the latent or unobserved function that we want to regress, $y$ is a noisy observation of this function at the input $x$, and $\varepsilon$ represents zero-mean Gaussian noise with variance $\sigma^2$.
The assumption of constant Gaussian noise, together with a GP prior on the latent function, makes analytic inference possible for GPs (5). In practice, to learn a GP model of $f(x)$ one only needs a set of training points, $D = \{(x_i, y_i)\}_{i=1}^{n}$, and a kernel function, $k(x, x')$. Given a new input $x_*$, a trained GP assigns a Gaussian distribution to the output $y_*$ that can be expressed as:

$p(y_* \mid x_*, D) = \mathcal{N}\big(y_* \mid a(x_*),\, v(x_*) + \sigma^2\big), \quad a(x_*) = k_*^{\top} (K + \sigma^2 I)^{-1} \bar{y}, \quad v(x_*) = k_{**} - k_*^{\top} (K + \sigma^2 I)^{-1} k_*$  (5)

where $K$ is the matrix that evaluates the kernel at the training points, $K_{ij} = k(x_i, x_j)$, $k_*$ is the vector with entries $k_{*,i} = k(x_i, x_*)$, and $k_{**}$ is the value of the kernel at $x_*$, $k_{**} = k(x_*, x_*)$. Finally, $\bar{y}$ represents the vector of observations from the training set, and the set of hyperparameters, including $\sigma$ and the kernel parameters, is optimized during the training process.

A notable property of GPs is that the expected variance of the output is the sum of two variances, $\sigma^2$ and $v(x_*)$ (5). The first, $\sigma^2$, is constant and represents the overall noise in the data. The second, $v(x_*)$, depends on the input and is related only to the regression error.
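As a concrete illustration of (5), the following NumPy sketch implements the predictive mean and variance of a GP with a squared-exponential kernel. The kernel choice and hyperparameter values are assumptions made for the example, not choices prescribed by this paper.

```python
import numpy as np

def sq_exp_kernel(A, B, lengthscale=1.0, signal_var=1.0):
    """Squared-exponential kernel k(x, x') evaluated for all input pairs."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return signal_var * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_predict(x_train, y_train, x_star, noise_var=0.01):
    """Predictive mean a(x*) and total variance v(x*) + sigma^2 from Eq. (5)."""
    K = sq_exp_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    k_star = sq_exp_kernel(x_train, x_star)            # n x m
    mean = k_star.T @ np.linalg.solve(K, y_train)      # a(x*)
    v = sq_exp_kernel(x_star, x_star).diagonal() - np.sum(
        k_star * np.linalg.solve(K, k_star), axis=0)   # regression-error term
    return mean, v + noise_var                         # add the constant noise

x_train = np.linspace(-3, 3, 30)
y_train = np.sin(x_train)                  # toy training data
mean, var = gp_predict(x_train, y_train, np.array([0.5]))
```

Near the training data the regression-error term $v(x_*)$ is small, so the predictive variance approaches the constant noise $\sigma^2$: the decomposition noted above.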
III-C Heteroscedastic Gaussian processes
Assuming that the noise of the process is constant over the input space is sometimes too restrictive. Allowing some regions of the input space to be noisier than others is especially beneficial for systems with converging and diverging dynamics. Algorithms where GPs incorporate input-dependent noise have proven useful in different contexts, such as mobile robot perception [22], volatility forecasting [23], and robotic manipulation [3]. In Section V, we explore the benefits of combining GP-SUM with input-dependent GPs to characterize the long-term dynamics of planar pushing.

Extensions of GPs that incorporate input-dependent noise are often referred to as heteroscedastic Gaussian processes (HGPs). HGPs can regress both the mean and the variance of the process for any element of the input space. The main conceptual difference between GP and HGP regression is that for HGPs observations are assumed to be drawn from:
$y = f(x) + \varepsilon(x), \qquad \varepsilon(x) \sim \mathcal{N}\big(0, \sigma^2(x)\big)$  (6)

where the noise $\varepsilon(x)$ explicitly depends on the input $x$, in contrast to (4), where $\varepsilon$ is a random variable independent of $x$.

IV GP-SUM Bayes filter
In this section we present GP-SUM, discuss its main assumptions, and describe its computational complexity. Given that GP-SUM is a GP-Bayes filter, our main assumption is that both the dynamics and the measurement models are represented by GPs. This implies that for any state $x_{t-1}$ and action $u_t$, the probabilities $p(x_t \mid x_{t-1}, u_t)$ and $p(z_t \mid x_t)$ are available and are Gaussian.

To keep track of complex beliefs, GP-SUM does not approximate them by single Gaussians, but makes the weaker assumption that they are well approximated by Gaussian mixtures. Given this assumption, in Section IV-A we exploit the fact that the transition and observation models are GPs to correctly propagate the prediction belief, i.e., the pre-observation state distribution. In Section IV-B we obtain a closed-form solution for the belief, expressed as a Gaussian mixture.
IV-A Updating the prediction belief
The main idea behind GP-SUM is described in Algorithm 1 and derived below. Combining (1) and (3), the belief at time $t$ in terms of the prediction belief is:

$bel(x_t) = \dfrac{p(z_t \mid x_t)\, \overline{bel}(x_t)}{\int p(z_t \mid x_t)\, \overline{bel}(x_t)\, dx_t}$  (7)

If the prediction belief at time $t-1$ is well approximated by a finite Gaussian mixture, we can write:

$\overline{bel}(x_{t-1}) = \sum_{i=1}^{M_{t-1}} \omega_{t-1,i}\, \mathcal{N}\big(x_{t-1} \mid \mu_{t-1,i}, \Sigma_{t-1,i}\big)$  (8)

where $M_{t-1}$ is the number of components of the Gaussian mixture and $\omega_{t-1,i}$ is the weight associated with the $i$-th Gaussian of the mixture, $\mathcal{N}(x_{t-1} \mid \mu_{t-1,i}, \Sigma_{t-1,i})$.

Given the previous observation $z_{t-1}$ and the action $u_t$, the prediction belief at time $t$ can be recursively computed from the prediction belief at time $t-1$ together with the transition and observation models:

$\overline{bel}(x_t) \propto \int p(x_t \mid x_{t-1}, u_t)\, p(z_{t-1} \mid x_{t-1})\, \overline{bel}(x_{t-1})\, dx_{t-1}$  (9)

If $\overline{bel}(x_{t-1})$ has the form of a Gaussian mixture, then we can take $M_t$ samples from it, $\{x_{t-1,j}\}_{j=1}^{M_t}$, and approximate (9) by:

$\overline{bel}(x_t) \propto \sum_{j=1}^{M_t} p(x_t \mid x_{t-1,j}, u_t)\, p(z_{t-1} \mid x_{t-1,j})$  (10)

Because the dynamics model is a GP, $p(x_t \mid x_{t-1,j}, u_t)$ is the Gaussian $\mathcal{N}(x_t \mid \mu_{t,j}, \Sigma_{t,j})$, and $p(z_{t-1} \mid x_{t-1,j})$ is a constant value. As a result, we can take:

$\omega_{t,j} \propto p(z_{t-1} \mid x_{t-1,j}), \qquad \sum_{j=1}^{M_t} \omega_{t,j} = 1$  (11)

and express the updated prediction belief again as a Gaussian mixture:

$\overline{bel}(x_t) = \sum_{j=1}^{M_t} \omega_{t,j}\, \mathcal{N}\big(x_t \mid \mu_{t,j}, \Sigma_{t,j}\big)$  (12)
In the ideal case where $M_t$ tends to infinity for all $t$, the Gaussian mixture approximation of the prediction belief converges to the real distribution, and thus the propagation of the prediction belief over time remains correct. This property of GP-SUM contrasts with previous GP-Bayes filters, where the prediction belief is approximated by a single Gaussian. In those cases, errors from previous approximations inevitably accumulate over time.

Note that the weights in (11) are directly related to the likelihood of the observations. As in most sample-based algorithms, if the weights are too small before normalization, it becomes a good strategy to resample or to modify the number of samples considered. In Section V we address this issue by resampling from the mixture distribution while keeping the number of samples constant.
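The update (8)-(12) can be sketched compactly. In the snippet below, the trained GP dynamics and observation models are replaced by hypothetical mean/variance functions (the real algorithm would query GPs here), and the state is 1D; the structure, sampling the mixture, weighting by the likelihood of the last observation, and building the new mixture, follows Algorithm 1.

```python
import numpy as np

rng = np.random.default_rng(1)

def gp_dynamics(x, u):
    """Hypothetical stand-in for the GP transition model: returns (mean, var)."""
    return 0.9 * x + u, 0.05 * np.ones_like(x)

def gp_observation(x):
    """Hypothetical stand-in for the GP observation model: returns (mean, var)."""
    return np.sin(x), 0.1 * np.ones_like(x)

def gaussian_pdf(z, mean, var):
    return np.exp(-0.5 * (z - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def prediction_belief_update(weights, means, variances, u, z_prev, M):
    """One step of Eqs. (8)-(12) for a 1D Gaussian-mixture prediction belief."""
    # Sample M states from the current prediction belief (a Gaussian mixture).
    idx = rng.choice(len(weights), size=M, p=weights)
    x_samples = rng.normal(means[idx], np.sqrt(variances[idx]))
    # New weights proportional to the likelihood of the last observation, Eq. (11).
    z_mean, z_var = gp_observation(x_samples)
    w = gaussian_pdf(z_prev, z_mean, z_var)
    w = w / w.sum()
    # New component means/variances come from the GP dynamics, Eq. (12).
    mu, var = gp_dynamics(x_samples, u)
    return w, mu, var

# One update starting from a single broad Gaussian component.
w, mu, var = prediction_belief_update(
    np.array([1.0]), np.array([0.0]), np.array([1.0]), u=0.1, z_prev=0.0, M=200)
```

Each sample spawns one Gaussian component whose covariance comes from the GP prediction itself, which is what distinguishes this update from a plain particle filter.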
IV-B Recovering the belief from the prediction belief
After computing the prediction belief, we can use the observation $z_t$ to compute the belief as another Gaussian mixture using (7):

$bel(x_t) \propto \sum_{i=1}^{M_t} \omega_{t,i}\, p(z_t \mid x_t)\, \mathcal{N}\big(x_t \mid \mu_{t,i}, \Sigma_{t,i}\big)$  (13)

Note that if $p(z_t \mid x_t)\,\mathcal{N}(x_t \mid \mu_{t,i}, \Sigma_{t,i})$ could be normalized and expressed as a Gaussian distribution, then the belief at time $t$ would directly be a Gaussian mixture. In most cases, however, this product is not proportional to a Gaussian. For those cases, we find different approximations in the literature (see Algorithm 2). For instance, GP-EKF [15] linearizes the observation model to express the previous distribution as a Gaussian.

In this work, we exploit the technique proposed by Deisenroth et al. [6], as it preserves the first two moments of $p(z_t \mid x_t)\,\mathcal{N}(x_t \mid \mu_{t,i}, \Sigma_{t,i})$ and has proven to outperform GP-EKF [6]. This approximation assumes that $x_t$ and $z_t$ are jointly Gaussian; note that this is only exact when there is a linear relation between $x_t$ and $z_t$. Using this assumption, and the fact that the observation model is a GP, each term in (13) can be approximated as a weighted Gaussian by analytically computing its first two moments [6]. As a result, we recover the belief as a Gaussian mixture.
It is important to note that this approximation is only necessary to recover the belief; it does not incur iterative filtering error. GP-SUM directly keeps track of the prediction belief, for which the approximation is not required.
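The analytic moment matching of [6] depends on the kernel; as a generic illustration of the same idea, the sketch below moment-matches $p(z_t \mid x_t)\,\mathcal{N}(x_t \mid \mu, \Sigma)$ for one 1D component by importance sampling. The observation model is the same hypothetical stand-in used earlier (redefined so the snippet is self-contained), and this sampling shortcut is an assumption of the example, not the paper's analytic computation.

```python
import numpy as np

rng = np.random.default_rng(2)

def gp_observation(x):
    """Hypothetical stand-in for the GP observation model: returns (mean, var)."""
    return np.sin(x), 0.1 * np.ones_like(x)

def condition_component(mu, var, z, n=2000):
    """Moment-match p(z|x) N(x | mu, var) with a single Gaussian."""
    x = rng.normal(mu, np.sqrt(var), size=n)
    z_mean, z_var = gp_observation(x)
    w = np.exp(-0.5 * (z - z_mean) ** 2 / z_var) / np.sqrt(2 * np.pi * z_var)
    lik = w.mean()            # component likelihood, used to reweight the mixture
    w = w / w.sum()
    new_mu = np.sum(w * x)                      # first moment
    new_var = np.sum(w * (x - new_mu) ** 2)     # second central moment
    return lik, new_mu, new_var

lik, new_mu, new_var = condition_component(0.0, 1.0, z=0.5)
```

Conditioning on the observation shrinks the component's variance and shifts its mean toward states consistent with $z$; the per-component likelihood `lik` reweights the mixture as in (13).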
IV-C Computational complexity
The computational complexity of GP-SUM depends on the number of Gaussians considered at each step. For simplicity, we now assume that the number of components is constant over time, $M_t = M$. Note that the number of samples taken from the prediction belief corresponds to the number of components of the next mixture. Propagating the prediction belief one step then requires taking $M$ samples from the previous prediction belief and evaluating the dynamics and measurement models $M$ times. The cost of drawing one sample from a Gaussian mixture can be considered constant, while evaluating each model implies computing the output of a GP, which takes $O(n^2)$ operations, where $n$ is the size of the dataset used to train the GP [21]. Therefore the overall cost of propagating the prediction belief is $O(Mn^2)$, where $n$ is the largest training set considered. Approximating the belief does not increase the complexity, as it also requires $O(Mn^2)$ operations [6].

Consequently, GP-SUM's time complexity increases linearly with the size of the Gaussian mixture, while providing a more realistic approximation of the belief and better filtering results (see Section V). To further reduce the computational cost of GP-SUM, sparse GPs can be considered. For instance, combining GP-SUM with SSGPs [7] lowers the cost of evaluating the GP models, as the evaluation then scales with the (much smaller) number of spectral features rather than with the training set size.
V Results
We evaluate the performance of our algorithm on two different systems. The first is a standard 1D benchmark for nonlinear state space models [6, 24], where our algorithm proves it can outperform previous GP-Bayes filters (the implementations of GP-ADF and GP-UKF are based on [6] and can be found at https://github.com/ICLSML/gpadf). The second case studies how uncertainty propagates in a real robotic system: we first learn the dynamics of planar pushing from real data with GP regression, and then use GP-SUM to propagate the system uncertainty over time to capture the expected distribution of the object position.
V-A Synthetic task: algorithm evaluation and comparison
We compare GP-SUM with other existing GP-Bayes filters using the following 1D dynamical system:

(14)

and observation model:

(15)

illustrated in Figure 2. The GP models for prediction and measurement are trained using 1000 samples uniformly distributed on the training interval. GP-SUM uses the same number of Gaussian components over all time steps, $M_t = M$. The prior distribution of the state is Gaussian, and its mean is modified 200 times to assess the filters in multiple scenarios; it becomes especially interesting around $x = 0$, where the dynamics are highly nonlinear. For each prior mean, the filters take 10 time steps. This procedure is repeated 300 times to average the performance of GP-SUM, GP-ADF, GP-UKF, and GP-PF, described in Section II. For GP-PF, the number of particles equals the number of GP-SUM components, $M$.

The error in the final state of the system is evaluated using three metrics. The most relevant is the negative log-likelihood, NLL, which measures the likelihood of the true state according to the predicted belief. We also report the root-mean-square error, RMSE, even though it only uses the mean of the belief rather than its whole distribution. Similarly, the Mahalanobis distance, Maha, only considers the first two moments of the belief, so for GP-SUM we approximate the belief by a Gaussian before computing it. For GP-PF we only compute the RMSE, given that particle filters do not provide closed-form distributions. For all the metrics proposed, lower values imply better performance.
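For reference, the three metrics can be computed from a 1D Gaussian-mixture belief as follows, moment-matching the mixture to a single Gaussian for the Mahalanobis distance as described above. The mixture in the usage line is made up for illustration, not taken from the benchmark.

```python
import numpy as np

def belief_metrics(true_x, w, mu, var):
    """NLL, RMSE and Mahalanobis distance of one true state under a
    1D Gaussian-mixture belief (moment-matched for Maha)."""
    comp = np.exp(-0.5 * (true_x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    nll = -np.log(np.sum(w * comp))                    # negative log-likelihood
    mean = np.sum(w * mu)                              # mixture mean
    mix_var = np.sum(w * (var + mu ** 2)) - mean ** 2  # mixture variance
    rmse = abs(true_x - mean)   # for a single state, RMSE is the absolute error
    maha = abs(true_x - mean) / np.sqrt(mix_var)
    return nll, rmse, maha

w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([0.1, 0.1])
nll, rmse, maha = belief_metrics(1.0, w, mu, var)
```

Note how a bimodal belief can assign high likelihood to a true state sitting on one mode (low NLL) even when its moment-matched mean is far from it (high RMSE), which is why NLL is the most informative of the three metrics here.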
TABLE I: Filtering error after 1 time step.

Error | GP-ADF      | GP-UKF        | GP-SUM      | GP-PF
NLL   | 0.49 ± 0.17 | 95.03 ± 97.02 | 0.55 ± 0.34 | N/A
Maha  | 0.69 ± 0.06 | 2.80 ± 0.72   | 0.67 ± 0.04 | N/A
RMSE  | 2.18 ± 0.39 | 34.48 ± 23.14 | 2.18 ± 0.38 | 2.27 ± 0.35

TABLE II: Filtering error after 10 time steps.

Error | GP-ADF       | GP-UKF            | GP-SUM      | GP-PF
NLL   | 9.58 ± 15.68 | 1517.17 ± 7600.82 | 0.24 ± 0.11 | N/A
Maha  | 0.99 ± 0.31  | 8.25 ± 3.82       | 0.77 ± 0.06 | N/A
RMSE  | 2.27 ± 0.16  | 12.96 ± 16.68     | 0.19 ± 0.02 | N/A
From Tables I and II, it is clear that GP-SUM outperforms the other algorithms in all the metrics proposed, and can be considered more stable, as it obtains the lowest variance in most of the metrics. In the first time step, GP-PF is already outperformed by GP-SUM and GP-ADF, and after a few more time steps particle starvation becomes a major issue for GP-PF, as the likelihood of the observations becomes extremely low. For this reason, we do not report an RMSE value for GP-PF after 10 time steps. GP-UKF's performance is clearly surpassed by GP-SUM and GP-ADF after both 1 and 10 time steps.

In Figure 3 we compare the true state distributions (computed numerically) to the distributions obtained by GP-ADF, GP-SUM, and a simplified version of GP-SUM, Gauss GP-SUM, which takes a Gaussian approximation of GP-SUM at each time step. It becomes clear that by allowing non-Gaussian beliefs, GP-SUM can assign a higher likelihood to the actual state while better approximating the true belief. GP-ADF, instead, can only produce a single Gaussian wide enough to cover all the high-density regions.

In Figure 4 we study the temporal evolution of the negative log-likelihood (NLL) metric for GP-ADF, GP-SUM, and Gauss GP-SUM. As the number of steps increases, GP-SUM and Gauss GP-SUM tend to coincide, because GP-SUM becomes more confident in the location of the true state and its belief becomes more Gaussian. Figure 4 also shows that GP-ADF's performance worsens over time. This is due to the cases where the dynamics are highly nonlinear, i.e., around zero, for which the variance of GP-ADF increases over time. As a result, at the cost of a larger computational expense, the expressiveness of its distributions and the lack of heavy assumptions make GP-SUM a good fit for systems where multimodality and complex behaviors cannot be neglected.
V-B Real task: propagating uncertainty in pushing
Planar pushing is an underdetermined and sometimes undecidable physical interaction [25], except under sufficient assumptions and simplifications [26, 27]. It has also been shown experimentally that uncertainty in frictional interaction yields stochastic pushing behavior [3, 2]. Moreover, the type of push has a strong influence on the amount of expected uncertainty, i.e., the level of "noise" in the pushing dynamics is action-dependent, a.k.a. heteroscedastic. This can be observed in Figure 5, where three different pushes, each repeated multiple times, lead to very different distributions of the object position, including multimodality.

A pushing controller could benefit from a model of the heteroscedasticity in pushing dynamics by preferring those pushes that lead to lower uncertainty. To this end, a heteroscedastic model of the pushing dynamics is necessary, together with an algorithm that reliably and efficiently propagates uncertainty over time. In this work, we use HGPs to model the dynamics of pushing, as done by Bauza and Rodriguez [3], and GP-SUM to propagate the uncertainty of the system.

As illustrated in Figure 6, we model planar pushing as an HGP that takes as inputs the contact point between the pusher and the object, the pusher's velocity, and its direction. The output of the dynamics is the displacement of the object relative to the pusher's motion. We train the HGP model with real data from the MIT pushing dataset [2]. In this setting, a cylindrical rod attached to an industrial arm acts as the pusher of a square stainless-steel object, and the interaction between the robot and the object is approximated by a single contact point.
Because we are only concerned with simulating the propagation of uncertainty over time, we do not consider a measurement model. As a result, when using GP-SUM the prediction belief and the belief coincide, and all the components in the Gaussian mixtures have the same weights.

Given that the distribution of the object position tends to become non-Gaussian over time, GP-SUM can obtain more accurate results than other algorithms. Figure 7 shows the resemblance between trajectories sampled by propagating the dynamics over time in a Monte Carlo fashion and the distributions obtained by GP-SUM. It also becomes clear that the distributions recovered by GP-SUM are ring-shaped and even multimodal, which cannot be captured by standard GP-Bayes filters that predict single Gaussian distributions.

Being able to propagate the uncertainty of the object position over time exposes interesting properties of the planar pushing system. For instance, in Figure 8 we observe two different pushes and how the belief of the object position evolves for each; it becomes clear that one of the pushes leads to noisier distributions. Recovering these behaviors is especially useful when deciding which pushes to exert. If our goal is to push an object to a specific region of the space, it is advantageous to choose pushes that lead to narrower (low-variance) distributions and to avoid those that involve multimodal outcomes, as they are often harder to control.
VI Discussion and Future Work
GP-Bayes filters are a powerful tool to model and track systems with complex and noisy dynamics. Most approaches rely on the assumption that the belief is Gaussian or can be iteratively approximated by a Gaussian. The Gaussian assumption is an effective simplification: it enables filtering with high-frequency updates or in high-dimensional systems. It is most reasonable in systems where the local dynamics are simple, i.e., linearizable, and where accurate observation models are readily available to continuously correct for complex or unmodeled dynamics.

In this paper we look at situations where a Gaussian belief is less reasonable. That is the case for contact behavior with non-smooth local dynamics due to sudden changes between stick and slip or between contact and separation, and for stochastic simulation, where, without the benefit of sensor feedback, uncertainty distributions naturally grow over time. We propose the GP-SUM algorithm, which uses Gaussian mixtures to represent complex state distributions.

Our approach is sample-based in nature, but has the advantage of making minimal assumptions compared to other GP-filters based on single Gaussian distributions or on linearization of the GP models. Since GP-SUM preserves the probabilistic nature of a Bayes filter, it also makes more effective use of sampling than particle filters.
When considering GP-SUM, several aspects must be taken into account:
Number of samples. Choosing an appropriate number of samples $M_t$ determines the number of Gaussians in the prediction belief and hence its expressiveness. Adjusting the number of Gaussians over time might be beneficial in order to properly cover the state space. Similarly, high-dimensional states might require higher values of $M_t$ to ensure proper sampling of the prediction belief. Because of the sample-based nature of GP-SUM, many techniques from sample-based algorithms can be applied effectively, such as resampling or adding randomly generated components, to avoid particle deprivation.
Likelihood of the observations. There is a direct relation between the weights of the beliefs and the likelihood of the observations. We can exploit this relationship to detect when the weights of the samples degenerate and correct for it by resampling or modifying the number of samples.
Computational cost. Unlike non-sampling GP-filters, the cost of GP-SUM scales linearly with the number of samples. Nevertheless, for nonlinear systems we showed that our algorithm recovers the true state distributions more accurately, and thus obtains better results than faster algorithms such as GP-ADF, GP-UKF, or GP-PF.

GP extensions. The structure of GP-SUM is not restricted to classical GPs for the dynamics and observation models; other types of GPs, such as HGPs or sparse GPs, can be considered. For instance, combining GP-SUM with SSGPs [7] would make the computations more efficient.
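One hypothetical way to implement the weight-degeneracy check mentioned above is through the effective sample size of the mixture weights, followed by systematic resampling of the components; neither the threshold nor the resampling scheme is prescribed by this paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def effective_sample_size(w):
    """ESS of normalized weights; values far below len(w) signal degeneracy."""
    return 1.0 / np.sum(w ** 2)

def systematic_resample(w, M=None):
    """Indices of mixture components drawn by systematic resampling."""
    M = len(w) if M is None else M
    positions = (rng.random() + np.arange(M)) / M    # one stratified draw per slot
    return np.searchsorted(np.cumsum(w), positions)

w = np.array([0.96, 0.01, 0.01, 0.01, 0.01])         # degenerate weights
if effective_sample_size(w) < 0.5 * len(w):
    idx = systematic_resample(w)                     # components to keep (with repeats)
    w = np.full(len(idx), 1.0 / len(idx))            # reset to uniform weights
```

Systematic resampling keeps the variance of the resampled set low compared to multinomial resampling, which matters when the number of mixture components is small.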
Future research will focus on combining GPSUM with planning and control techniques. Being able to detect multimodality or noisy actions can help to better navigate the complex and noisy dynamics of the system and reduce the final uncertainty in the state distribution.
References
 Dogar and Srinivasa [2010] M. R. Dogar and S. S. Srinivasa, "Push-Grasping with Dexterous Hands: Mechanics and a Method," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2010, pp. 2123–2130.
 Yu et al. [2016] K.-T. Yu, M. Bauza, N. Fazeli, and A. Rodriguez, "More than a Million Ways to be Pushed. A High-Fidelity Experimental Data Set of Planar Pushing," in IROS, 2016.
 Bauza and Rodriguez [2017] M. Bauza and A. Rodriguez, "A probabilistic data-driven model for planar pushing," in IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 3008–3015.
 Fazeli et al. [2017] N. Fazeli, E. Donlon, E. Drumwright, and A. Rodriguez, "Empirical Evaluation of Common Impact Models on a Planar Impact Task," in IEEE International Conference on Robotics and Automation (ICRA), 2017.
 Paolini et al. [2014] R. Paolini, A. Rodriguez, S. S. Srinivasa, and M. T. Mason, "A Data-Driven Statistical Framework for Post-Grasp Manipulation," IJRR, vol. 33, no. 4, 2014.
 Deisenroth et al. [2009] M. Deisenroth, M. Huber, and U. Hanebeck, "Analytic moment-based Gaussian process filtering," in ICML, 2009, pp. 225–232.
 Pan et al. [2017] Y. Pan, X. Yan, E. A. Theodorou, and B. Boots, "Prediction under uncertainty in sparse spectrum Gaussian processes with applications to filtering and control," in ICML, 2017, pp. 2760–2768.
 Kocijan et al. [2005] J. Kocijan, A. Girard, B. Banko, and R. Murray-Smith, "Dynamic systems identification with Gaussian processes," Mathematical and Computer Modelling of Dynamical Systems, 2005.
 Nguyen-Tuong et al. [2009] D. Nguyen-Tuong, M. Seeger, and J. Peters, "Model learning with local Gaussian process regression," Advanced Robotics, vol. 23, no. 15, pp. 2015–2034, 2009.
 Murray-Smith and Sbarbaro [2002] R. Murray-Smith and D. Sbarbaro, "Nonlinear adaptive control using non-parametric Gaussian process prior models," IFAC Proceedings Volumes, vol. 35, no. 1, pp. 325–330, 2002.

 Deisenroth et al. [2012] M. Deisenroth, C. Rasmussen, and D. Fox, "Learning to control a low-cost manipulator using data-efficient reinforcement learning," in Robotics: Science and Systems, vol. 7, 2012, pp. 57–64.
 Mukadam et al. [2016] M. Mukadam, X. Yan, and B. Boots, "Gaussian process motion planning," in IEEE International Conference on Robotics and Automation (ICRA), 2016, pp. 9–15.

Gregorčič and Lightbody [2008]
G. Gregorčič and G. Lightbody, “Nonlinear system identification:
From multiplemodel networks to gaussian processes,”
Engineering Applications of Artificial Intelligence
, 2008.  Ažman and Kocijan [2011] K. Ažman and J. Kocijan, “Dynamical systems identification using gaussian process models with incorporated local models,” Engineering Applications of Artificial Intelligence, 2011.
 Ko and Fox [2009] J. Ko and D. Fox, "GP-BayesFilters: Bayesian filtering using Gaussian process prediction and observation models," Autonomous Robots, vol. 27, no. 1, pp. 75–90, 2009.
 Stordal et al. [2011] A. S. Stordal, H. A. Karlsen, G. Nævdal, H. J. Skaug, and B. Vallès, "Bridging the ensemble Kalman filter and particle filters: the adaptive Gaussian mixture filter," Computational Geosciences, vol. 15, no. 2, pp. 293–305, 2011.
 Kotecha and Djuric [2003] J. Kotecha and P. Djuric, "Gaussian sum particle filtering," IEEE Transactions on Signal Processing, 2003.
 Blackman [2004] S. S. Blackman, “Multiple hypothesis tracking for multiple target tracking,” IEEE Aerospace and Electronic Systems Magazine, vol. 19, no. 1, pp. 5–18, Jan 2004.
 Koval et al. [2017] M. C. Koval, M. Klingensmith, S. S. Srinivasa, N. S. Pollard, and M. Kaess, "The manifold particle filter for state estimation on high-dimensional implicit manifolds," in IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 4673–4680.
 Thrun et al. [2005] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics (Intelligent Robotics and Autonomous Agents). The MIT Press, 2005.

 Rasmussen and Williams [2006] C. Rasmussen and C. Williams, Gaussian Processes for Machine Learning. MIT Press, 2006.
 Kersting et al. [2007] K. Kersting, C. Plagemann, P. Pfaff, and W. Burgard, "Most likely heteroscedastic Gaussian process regression," in ICML, 2007.
 Lazaro-Gredilla and Titsias [2011] M. Lazaro-Gredilla and M. Titsias, "Variational Heteroscedastic Gaussian Process Regression," in ICML, 2011.
 Kitagawa [1996] G. Kitagawa, "Monte Carlo filter and smoother for non-Gaussian nonlinear state space models," Journal of Computational and Graphical Statistics, vol. 5, no. 1, pp. 1–25, 1996.
 Mason [1986] M. T. Mason, “Mechanics and Planning of Manipulator Pushing Operations,” IJRR, vol. 5, no. 3, 1986.
 Goyal et al. [1991] S. Goyal, A. Ruina, and J. Papadopoulos, "Planar Sliding with Dry Friction. Part 1. Limit Surface and Moment Function," Wear, vol. 143, 1991.
 Lynch and Mason [1996] K. M. Lynch and M. T. Mason, “Stable Pushing: Mechanics, Controllability, and Planning,” IJRR, vol. 15, no. 6, 1996.