1 Introduction
It is a long-standing dream of humanity to build a machine that predicts the future. For long time horizons, this is likely to remain a dream. For shorter time spans, however, it is not such an unreasonable goal. Humans, in fact, can predict rather reliably how, for example, a video will continue over the next few seconds, at least when nothing extraordinary happens. In this work we aim to make a first step towards giving computers similar abilities.
We study the situation of a time-varying probability distribution from which sample sets are observed at different points in time. Our main result is a method for learning an operator that captures the dynamics of the time-varying data distribution. It relies on two recent techniques: the embedding of probability distributions into a reproducing kernel Hilbert space, and vector-valued regression. By extrapolating the learned dynamics into the future, we obtain an estimate of the future distribution. This estimate can be used to solve practical tasks, for example, to learn a classifier that is adapted to the data distribution at a future time step without yet having access to data from that time. One can also use the estimate to create a new sample set, which can then serve as a drop-in replacement for an actual sample set from the future.
2 Method
We first formally define the problem of predicting the future of a time-varying probability distribution. Let $\mathcal{X}$ be a data domain, and let $d_t(x)$ for $t = 1, 2, \dots$ be a time-varying data distribution over $\mathcal{X}$. At a fixed point in time, $T$, we assume that we have access to a sequence of sets, $S_t = \{x^t_1, \dots, x^t_{n_t}\}$, for $t = 1, \dots, T$, that are sampled i.i.d. from the respective distributions, $d_t$. Our goal is to construct a distribution, $\tilde d_{T+1}$, that is as close as possible to the so far unobserved $d_{T+1}$, i.e. it provides an estimate of the data distribution one step into the future. Optionally, we are also interested in obtaining a set, $\tilde S_{T+1}$, of samples that are distributed approximately according to the unknown $d_{T+1}$.
Our main contribution is a regression-based method that tackles the above problem for the case when the distribution evolves smoothly (Sections 2.1 and 2.2). We evaluate this method experimentally in Section 4. Subsequently, we show how the ability to extrapolate the distribution dynamics can be exploited to improve the accuracy of a classifier in a domain adaptation setting without any observed data from the test-time distribution (Section 5).
2.1 Extrapolating the Distribution Dynamics
We propose a method for extrapolating the distribution dynamics (EDD) that consists of four steps:

a) represent each sample set as a vector in a Hilbert space,

b) learn an operator that reflects the dynamics between the vectors,

c) apply the operator to the last vector in the sequence, thereby extrapolating the dynamics by one step,

d) (optionally) create a new sample set for the extrapolated distribution.
In the following we discuss the details of each step. See Figure 1 for a schematic illustration.
a) RKHS Embedding.
In order to allow the handling of arbitrary real data, we would like to avoid making any domainspecific assumptions, such as that the samples correspond to objects in a video, or parametric assumptions, such as Gaussianity of the underlying distributions. We achieve this by working in the framework of reproducing kernel Hilbert space (RKHS) embeddings of probability distributions [17]. In this section we provide the most important definitions; for a comprehensive introduction see [18].
Let $\mathcal{P}$ denote the set of all probability distributions on $\mathcal{X}$ with respect to some underlying $\sigma$-algebra. Let $k: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ be a positive definite kernel function with induced RKHS $\mathcal{H}$ and feature map $\phi: \mathcal{X} \to \mathcal{H}$ that fulfills $\langle \phi(x), \phi(\bar x) \rangle_{\mathcal{H}} = k(x, \bar x)$ for all $x, \bar x \in \mathcal{X}$. The kernel mean embedding, $\mu: \mathcal{P} \to \mathcal{H}$, associated with $k$ is defined by

$\mu(p) = \mathbb{E}_{x \sim p}[\phi(x)].$   (1)
Since we assume $k$ (and therefore $\mathcal{H}$) fixed in this work, we also refer to $\mu(p)$ as "the" RKHS embedding of $p$. We denote by $\mathcal{M}$ the image of $\mathcal{P}$ under $\mu$, i.e. the set of vectors in $\mathcal{H}$ that correspond to embedded probability distributions. For characteristic kernels, such as the Gaussian, the kernel mean map is a bijection between $\mathcal{P}$ and $\mathcal{M}$, so no information is lost by the embedding operation [17]. In the rest of this section, we will use the term distribution to refer to objects either in $\mathcal{P}$ or in $\mathcal{M}$, when it is clear from the context which one we mean.
A useful property of the kernel mean map is that it allows us to express the operation of taking expected values by an inner product, using the identity $\mathbb{E}_{x \sim p}[f(x)] = \langle \mu(p), f \rangle_{\mathcal{H}}$ for any $p \in \mathcal{P}$ and $f \in \mathcal{H}$.
For a set $S = \{x_1, \dots, x_n\}$ of i.i.d. samples from $p$,

$\mu(S) = \frac{1}{n} \sum_{i=1}^{n} \phi(x_i)$   (2)

is called the empirical (kernel mean) embedding of $p$. It is known that under mild conditions on $p$, the empirical embedding, $\mu(S)$, converges with high probability to the true embedding, $\mu(p)$, at a rate of $O(1/\sqrt{n})$ [1].
The first step of EDD consists of forming the embeddings, $\mu_1, \dots, \mu_T$, of the observed sample sets, $S_1, \dots, S_T$. Note that for many interesting kernels the vectors $\mu_t$ cannot be computed explicitly, because the kernel feature map, $\phi$, is unknown or would require infinite memory to represent. However, as we will see later, and as is typical for kernel methods [11], explicit knowledge of the embedding vectors is not required either. It is sufficient that we are able to compute their inner products with other vectors, and this can be done via evaluations of the kernel function.
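To make the last point concrete, here is a minimal sketch (our illustration, in Python with NumPy; the Gaussian kernel is one possible choice) of how inner products between empirical embeddings reduce to averages of kernel evaluations:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gram matrix with entries k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2 sigma^2))."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def embedding_dot(S, S_prime, kernel=gaussian_kernel):
    """Inner product of empirical embeddings, <mu(S), mu(S')> = mean_ij k(x_i, x'_j);
    the feature map phi is never represented explicitly."""
    return float(kernel(np.asarray(S), np.asarray(S_prime)).mean())
```

All later steps of EDD (the kernel matrix $K$, the coefficients $\beta_t$) only require such inner products.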
b) Learning the Dynamics.
We use vector-valued regression [14] to learn a model of the process by which the (embedded) distribution evolves from one time step to the next. Vector-valued regression generalizes classical scalar-valued regression to the situation in which the inputs and outputs are vectors, i.e. to the learning of an operator. Again, we start by providing a summary of this technique, here following the description in [13].
As the set in which we search for a suitable operator, we define a space, $\mathcal{F}$, of linear operators on $\mathcal{H}$ in the following way. Let $L(\mathcal{H})$ be the space of all bounded linear operators from $\mathcal{H}$ to $\mathcal{H}$, and let $K: \mathcal{H} \times \mathcal{H} \to L(\mathcal{H})$ be the nonnegative $L(\mathcal{H})$-valued kernel defined by $K(f, g) = \langle f, g \rangle_{\mathcal{H}} \,\mathrm{Id}$ for any $f, g \in \mathcal{H}$, where $\mathrm{Id}$ is the identity operator on $\mathcal{H}$. Then $K$ can be shown to be the reproducing kernel of an operator-valued RKHS, $\mathcal{F}$, which contains at least the span of all rank-1 operators, $f \langle g, \cdot \rangle_{\mathcal{H}}$, for all $f, g \in \mathcal{H}$, i.e. operators acting as $h \mapsto f \langle g, h \rangle_{\mathcal{H}}$. The inner product between such operators is $\langle f_1 \langle g_1, \cdot \rangle,\ f_2 \langle g_2, \cdot \rangle \rangle_{\mathcal{F}} = \langle f_1, f_2 \rangle_{\mathcal{H}} \langle g_1, g_2 \rangle_{\mathcal{H}}$ for any $f_1, f_2, g_1, g_2 \in \mathcal{H}$, and the inner product of all other operators in $\mathcal{F}$ can be derived from this by linearity and completeness.
In the second step of EDD we solve a vector-valued regression in order to learn a predictive model of the dynamics of the distribution. For this we assume that the changes of the distribution between time steps can be approximated by an autoregressive process, $\mu_{t+1} = A \mu_t + \epsilon_t$, for some operator $A \in \mathcal{F}$, such that the $\epsilon_t$, for $t = 1, \dots, T-1$, are independent zero-mean random variables. To learn the operator we minimize the following least-squares functional with regularization constant $\lambda \ge 0$:

$\min_{A \in \mathcal{F}}\ \sum_{t=1}^{T-1} \| \mu_{t+1} - A \mu_t \|^2_{\mathcal{H}} \;+\; \lambda \| A \|^2_{\mathcal{F}}.$   (3)
Equation (3) has a closed-form solution,

$A \;=\; \sum_{s,t=1}^{T-1} W_{st}\, \mu_{t+1} \langle \mu_s, \cdot \rangle_{\mathcal{H}},$   (4)

with coefficient matrix $W = (K + \lambda I)^{-1}$, where $K \in \mathbb{R}^{(T-1) \times (T-1)}$ is the kernel matrix with entries $K_{st} = \langle \mu_s, \mu_t \rangle_{\mathcal{H}}$, and $I$ is the identity matrix of the same size; see [13] for the derivation. Recently, it has been shown that the above regression on distributions is consistent under certain technical conditions [20]. Consequently, if the true dynamics follow the assumed autoregressive model, then the estimated operator, $\hat A$, will converge to the true operator, $A$, when the number of sample sets and the number of samples per set tend to infinity.

c) Extrapolating the Evolution.
The third step of EDD is to extrapolate the dynamics of the distribution by one time step. With the results of a) and b), all necessary components for this are available: we simply apply the learned operator, $\hat A$, to the last observed distribution, $\mu_T$. The result is a prediction, $\mu_{T+1} = \hat A \mu_T$, that approximates the unknown target, $\mu(d_{T+1})$. From Equation (4) we see that $\mu_{T+1}$ can be written as a weighted linear combination of the observed distributions,

$\mu_{T+1} \;=\; \sum_{t=2}^{T} \beta_t\, \mu_t,$   (5)

with $\beta_t = \sum_{s=1}^{T-1} W_{s(t-1)} \langle \mu_s, \mu_T \rangle_{\mathcal{H}}$. The coefficients, $\beta_t$, can be computed from the original sample sets by means of kernel evaluations only, because $\langle \mu_s, \mu_T \rangle_{\mathcal{H}} = \frac{1}{n_s n_T} \sum_{i=1}^{n_s} \sum_{j=1}^{n_T} k(x^s_i, x^T_j)$. The values of $\beta_t$ can be positive or negative, so $\mu_{T+1}$ is not just an interpolation between previous values, but potentially an extrapolation. In particular, it can lie outside of the convex hull of the observed distributions. At the same time, the estimate $\mu_{T+1}$ is guaranteed to lie in the subspace spanned by $\mu_2, \dots, \mu_T$, for which we have sample sets available. Therefore, we can compute expected values with respect to $\mu_{T+1}$ by forming suitably weighted linear combinations of the target function evaluated at the original data points. For any $f \in \mathcal{H}$, we have

$\tilde{\mathbb{E}}_{\mu_{T+1}}[f] \;:=\; \langle \mu_{T+1}, f \rangle_{\mathcal{H}} \;=\; \sum_{t=2}^{T} \frac{\beta_t}{n_t} \sum_{i=1}^{n_t} f(x^t_i),$   (6)
where the last identity follows from the fact that $\mathcal{H}$ is the RKHS of $k$, which has $\phi$ as its feature map, so $f(x) = \langle f, \phi(x) \rangle_{\mathcal{H}}$ for all $f \in \mathcal{H}$ and $x \in \mathcal{X}$. We use the symbol $\tilde{\mathbb{E}}$ instead of $\mathbb{E}$ to indicate that $\tilde{\mathbb{E}}_{\mu_{T+1}}[f]$ does not necessarily correspond to the operation of computing an expected value, because $\mu_{T+1}$ might not have a preimage in the space of probability distributions. The following lemma shows that $\tilde{\mathbb{E}}$ can, nevertheless, act as a reliable proxy for $\mathbb{E}$:
Lemma 1.
Let $\mu_{T+1} = \hat A \mu_T$ and $\mu(d_{T+1}) = A \mu(d_T) + \epsilon_T$, for some $A, \hat A \in \mathcal{F}$ and $\epsilon_T \in \mathcal{H}$. Then the following inequality holds for all $f \in \mathcal{H}$ with $\|f\|_{\mathcal{H}} \le 1$:

$\big|\, \tilde{\mathbb{E}}_{\mu_{T+1}}[f] - \mathbb{E}_{d_{T+1}}[f] \,\big| \;\le\; \|\hat A\|_{\mathcal{F}}\, \|\mu_T - \mu(d_T)\|_{\mathcal{H}} \;+\; \|\hat A - A\|_{\mathcal{F}}\, \|\mu(d_T)\|_{\mathcal{H}} \;+\; \|\epsilon_T\|_{\mathcal{H}}.$   (7)
The proof is elementary, using the properties of the inner product and of the RKHS embedding.
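For completeness, the decomposition behind the bound can be sketched as follows (our reconstruction; it assumes the model $\mu(d_{T+1}) = A\,\mu(d_T) + \epsilon_T$ and the estimate $\mu_{T+1} = \hat A\,\mu_T$):

```latex
\mu_{T+1} - \mu(d_{T+1})
  = \hat{A}\mu_T - A\,\mu(d_T) - \epsilon_T
  = \hat{A}\big(\mu_T - \mu(d_T)\big) + (\hat{A} - A)\,\mu(d_T) - \epsilon_T .
```

Applying the Cauchy-Schwarz inequality to $\langle \mu_{T+1} - \mu(d_{T+1}),\, f\rangle_{\mathcal{H}}$ with $\|f\|_{\mathcal{H}} \le 1$, and the triangle inequality to the three terms, then yields the claimed bound.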
Lemma 1 quantifies how well $\tilde{\mathbb{E}}$ can serve as a drop-in replacement of $\mathbb{E}$. The introduced error will be small if all three terms on the right-hand side are small. For the first term, we know that this is the case when the number of samples in $S_T$ is large enough: $\|\hat A\|_{\mathcal{F}}$ is a constant, and the empirical embedding, $\mu_T$, converges to the true embedding, $\mu(d_T)$. Similarly, the second term becomes small in the limit of many sample sets and many samples per set, because the estimated operator, $\hat A$, then converges to the operator of the true dynamics, $A$. Consequently, EDD will provide a good estimate of the distribution at the next time step, given enough data and provided that our assumptions about the distribution evolution are fulfilled (i.e. $\|\epsilon_T\|_{\mathcal{H}}$ is small).
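The quantities of Equations (5) and (6) can be computed from Gram matrices alone. The following sketch (our code; any function returning the Gram matrix between two sample sets can serve as `kernel`) shows one way to implement it:

```python
import numpy as np

def edd_coefficients(sample_sets, kernel, lam=0.1):
    """Coefficients beta_2, ..., beta_T of Eq. (5): beta = (G + lam I)^{-1} kappa,
    where G_st = <mu_s, mu_t> for s, t = 1..T-1 and kappa_s = <mu_s, mu_T>."""
    T = len(sample_sets)
    # Gram matrix of all empirical embeddings, via kernel evaluations only
    G = np.array([[kernel(S, Sp).mean() for Sp in sample_sets] for S in sample_sets])
    W = np.linalg.inv(G[:T - 1, :T - 1] + lam * np.eye(T - 1))
    return W @ G[:T - 1, T - 1]        # entry for t aligns with mu_t, t = 2..T

def predicted_expectation(f, sample_sets, beta):
    """Eq. (6): E~_{mu_{T+1}}[f] = sum_{t=2}^{T} (beta_t / n_t) sum_i f(x_i^t)."""
    return float(sum(b * np.mean([f(x) for x in S])
                     for b, S in zip(beta, sample_sets[1:])))
```

For a linear kernel the identity in Eq. (6) can be checked directly, since the embeddings are then just the sample means.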
d) Generating a Sample Set by Herding.
Equation (6) suggests a way of associating a set of weighted samples with $\mu_{T+1}$:

$\tilde S_{T+1} \;=\; \big\{\, \tfrac{\beta_t}{n_t} \cdot x^t_i \;:\; t = 2, \dots, T,\ i = 1, \dots, n_t \,\big\},$   (8)

where $\gamma \cdot x$ indicates not multiplication, but that the sample $x$ appears with a weight $\gamma$. As we will show in Section 5, this representation is sufficient for many purposes, in particular for learning a maximum-margin classifier. In other situations, however, one might prefer a representation of $\mu_{T+1}$ by uniformly weighted samples, i.e. a set $\bar S_{T+1} = \{\bar x_1, \dots, \bar x_m\}$ such that $\mu_{T+1} \approx \frac{1}{m} \sum_{j=1}^{m} \phi(\bar x_j)$. To obtain such a set we propose using the RKHS variant of herding [4], a deterministic procedure for approximating a probability distribution by a set of samples. For any embedded distribution, $\mu \in \mathcal{M}$, herding constructs a sequence of samples, $\bar x_1, \bar x_2, \dots$, by the following rules:
$\bar x_1 \;=\; \operatorname*{argmax}_{x \in \mathcal{X}}\, \langle \phi(x), \mu \rangle_{\mathcal{H}}, \qquad \bar x_{n+1} \;=\; \operatorname*{argmax}_{x \in \mathcal{X}}\, \Big\langle \phi(x),\ \mu - \frac{1}{n+1} \sum_{j=1}^{n} \phi(\bar x_j) \Big\rangle_{\mathcal{H}}.$   (9)
Herding can be understood as an iterative greedy optimization procedure for finding examples that minimize $\| \mu - \frac{1}{n} \sum_{j=1}^{n} \phi(\bar x_j) \|_{\mathcal{H}}$ [2]. This interpretation shows that the target vector, $\mu$, is not restricted to be an embedded distribution, so herding can be applied to arbitrary vectors in $\mathcal{H}$. Doing so for $\mu_{T+1}$ yields a set $\bar S_{T+1}$ that can act as a drop-in replacement for an actual training set $S_{T+1}$. However, it depends on the concrete task whether computing $\bar S_{T+1}$ is possible in practice, because it requires solving multiple preimage problems (9), which is not always computationally tractable.
A second interesting aspect of herding is that for any $v \in \mathcal{H}$, the herding approximation always has a preimage in $\mathcal{P}$ (the empirical distribution defined by $\bar x_1, \dots, \bar x_m$). Therefore, herding can also be interpreted as an approximate projection from $\mathcal{H}$ to $\mathcal{M}$.
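As an illustration, the herding rule (9) can be sketched as follows; restricting the argmax to a finite candidate set is our simplification to sidestep the preimage problem, not part of the method as stated:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    sq = ((np.asarray(X)[:, None, :] - np.asarray(Y)[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def herd(candidates, weights, points, m, kernel=gaussian_kernel):
    """Greedy herding of m samples for the target vector
    mu = sum_j weights[j] * phi(points[j]), e.g. the weighted form of mu_{T+1}.
    Only kernel evaluations are used, never phi itself."""
    scores_mu = kernel(candidates, points) @ weights   # <phi(c), mu> per candidate
    chosen = [int(np.argmax(scores_mu))]
    for n in range(1, m):
        # subtract <phi(c), 1/(n+1) * sum_{j<=n} phi(xbar_j)>, cf. Eq. (9)
        correction = kernel(candidates, candidates[chosen]).sum(axis=1) / (n + 1)
        chosen.append(int(np.argmax(scores_mu - correction)))
    return candidates[chosen]
```

Note that the same candidate may be selected repeatedly, which is consistent with herding producing a sample set with multiplicities.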
In Algorithm 2.1 we provide pseudocode for EDD. It also shows that, despite its mathematical derivation, the actual algorithm is easy to implement and execute.

Algorithm 2.1 (EDD):
  input: kernel function $k$, sample sets $S_1, \dots, S_T$, regularization parameter $\lambda$
  compute the matrix $K \in \mathbb{R}^{(T-1) \times (T-1)}$ with entries $K_{st} = \langle \mu_s, \mu_t \rangle_{\mathcal{H}}$
  compute the vector $\beta$ with entries $\beta_t = \sum_{s=1}^{T-1} [(K + \lambda I)^{-1}]_{s(t-1)} \langle \mu_s, \mu_T \rangle_{\mathcal{H}}$
  output: weighted sample set $\tilde S_{T+1}$ as in Equation (8)
  optional herding step:
    input: output size $m$
    output: sample set $\bar S_{T+1} = \{\bar x_1, \dots, \bar x_m\}$ obtained by iterating Equation (9)
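As a sanity check on the closed form (4), one can verify in a finite-dimensional feature space (a toy setting of our choosing, in which the embeddings are explicit vectors) that the kernelized solution with $W = (K + \lambda I)^{-1}$ coincides with the directly computed ridge-regression operator:

```python
import numpy as np

rng = np.random.default_rng(1)
T, d, lam = 6, 4, 0.1                 # toy sizes: T sample sets, d-dimensional embeddings
mu = rng.normal(size=(T, d))          # stand-ins for the embeddings mu_1, ..., mu_T

M_in, M_out = mu[:-1].T, mu[1:].T     # columns mu_1..mu_{T-1} and mu_2..mu_T

# direct solution of  min_A  sum_t ||mu_{t+1} - A mu_t||^2 + lam ||A||_F^2
A_direct = M_out @ M_in.T @ np.linalg.inv(M_in @ M_in.T + lam * np.eye(d))

# Eq. (4): A = sum_{s,t} W_st mu_{t+1} <mu_s, .>  with  W = (K + lam I)^{-1}
K = M_in.T @ M_in                     # K_st = <mu_s, mu_t>
A_kernel = M_out @ np.linalg.inv(K + lam * np.eye(T - 1)) @ M_in.T

assert np.allclose(A_direct, A_kernel)   # the two forms agree (push-through identity)
```

The agreement is an instance of the matrix identity $M^\top (MM^\top + \lambda I)^{-1} = (M^\top M + \lambda I)^{-1} M^\top$, which is what makes the purely kernel-based computation possible.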
2.2 Extension to Non-Uniform Weights
Our above description of EDD, in particular Equation (3), treats all given sample sets as equally important. In practice, this might not be desirable, and one might want to put more emphasis on some terms of the regression than on others. This effect can be achieved by introducing a weight, $\gamma_t$, for each of the summands of the least-squares problem (3). Typical choices are weights that decay with temporal distance from $T$, e.g. $\gamma_t = \rho^{T-t}$ for some constant $0 < \rho < 1$, which expresses the belief that more recent observations are more trustworthy than earlier ones, or weights proportional to the set sizes, $\gamma_t \propto n_t$, which encode that the mean embedding of a sample set is more reliable if the set contains more samples.
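One way to fold such per-term weights into the closed-form solution is the substitution $\tilde\mu_t = \sqrt{\gamma_t}\,\mu_t$ applied to both sides of each summand; this reweighting trick is our addition (the text does not spell it out), verified here in an explicit finite-dimensional feature space:

```python
import numpy as np

rng = np.random.default_rng(2)
T, d, lam = 6, 4, 0.1
mu = rng.normal(size=(T, d))                  # stand-ins for embeddings mu_1..mu_T
gamma = 0.8 ** np.arange(T - 1, 0, -1)        # illustrative: older terms weighted down

M_in, M_out = mu[:-1].T, mu[1:].T

# direct solution of  min_A  sum_t gamma_t ||mu_{t+1} - A mu_t||^2 + lam ||A||_F^2
G = np.diag(gamma)
A_direct = M_out @ G @ M_in.T @ np.linalg.inv(M_in @ G @ M_in.T + lam * np.eye(d))

# kernelized form: rescale embeddings by sqrt(gamma_t), reuse the unweighted solution
s = np.sqrt(gamma)
K_w = s[:, None] * (M_in.T @ M_in) * s[None, :]
A_kernel = (M_out * s) @ np.linalg.inv(K_w + lam * np.eye(T - 1)) @ (M_in * s).T

assert np.allclose(A_direct, A_kernel)
```

Since only the Gram matrix is rescaled, the weighted variant remains computable from kernel evaluations alone.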
As in ordinary least-squares regression, per-term weights impact the coefficient matrix $W$, and thereby the concrete expression for the coefficients $\beta_t$. However, they do not change the overall structure of $\mu_{T+1}$ as a weighted combination of the observed data, so herding and PredSVM training (see Section 5) remain possible without structural modifications.

[Figure 2: Illustration of Experiment 1: mixture of Gaussians with changing proportions (top), translating Gaussian (bottom left), and Gaussian with contracting variance (bottom right). Blue curves illustrate the RKHS embeddings of the given sample sets (samples themselves not shown). The orange curves are EDD's predictions of the distribution at time $T+1$, as output of the learned operator and after additional herding.]

3 Related Work
To our knowledge, the problem of extrapolating a time-varying probability distribution from sets of samples has not been studied in the literature before. However, a large body of work exists that studies related problems or uses related techniques.
The prediction of future states of a dynamical system or time-variant probability distribution is a classical application of probabilistic state space models, such as Kalman filters [9] and particle filters [7]. These techniques aim at modeling the probability of a time-dependent system jointly over all time steps. This requires observed data in the form of time series, e.g. trajectories of moving particles. EDD, on the other hand, learns only the transition from the marginal distribution at one point in time to the marginal distribution at the next. For this, independent sample sets from the different time points are sufficient. The difference between the two approaches becomes apparent, e.g., by looking at a system of homogeneously distributed particles that rotate around a center. A joint model would learn the circular orbits, while EDD would learn the identity map, since the data distribution is the same at every time.
In the literature on RKHS embeddings, a line of work related to EDD is the learning of conditional distributions by means of covariance operators, which has also been interpreted as a vector-valued regression task [12]. Given a current distribution and such a conditional model, one could infer the marginal distribution of the next time step [19]. Again, the difference to EDD lies in the nature of the modeled distribution and the training data required. To learn conditional distributions, the training data must consist of pairs of data points at two subsequent time points (essentially a minimal trajectory), while in the scenario we consider, correspondences between samples at different time points are not available and often would not even make sense. For example, in Section 5 we apply EDD to images of car models from different decades. Correspondences between the actual cars depicted in such images do not exist.
A different line of work aims at predicting the future motion of specific objects, such as people or cars, in videos [10, 21, 22, 23]. These are model-based approaches that specifically target the situation of learning object trajectories in videos. As such, they can make precise predictions about possible locations of objects at future times. They are not applicable to the generic situation we are interested in, however, in which the given data are separate sample sets and the goal is to predict the future behavior of the underlying probability distribution, not of individual objects.
4 Experiments
We report on experiments on synthetic and real data in order to highlight the working methodology of EDD, and to show that extrapolating the distribution dynamics is possible for real data and useful for practical tasks.
Experiment 1: Synthetic Data.
First, we perform experiments on synthetic data for which we know the true data distribution and dynamics, in order to highlight the working methodology of EDD. In each case, we use sample sets of a fixed size $n$ and a fixed regularization constant $\lambda$. Where possible, we additionally look analytically at the limit case $n \to \infty$, i.e. $\mu_t = \mu(d_t)$. For the RKHS embedding we use a Gaussian kernel with unit variance.
First, we set $d_t$ to be a mixture of two Gaussians whose mixture coefficients vary over time. Figure 2 (top) illustrates the results: trained on the first six sample sets (blue lines), the prediction by EDD (orange) matches the seventh (dashed) almost perfectly, with or without herding. In order to interpret this result, we first observe that, due to the form of the distributions, it is not surprising that the next distribution can be expressed as a linear combination of the previous ones, provided we allow for negative coefficients. What the result shows, however, is that EDD is indeed able to find the right coefficients from the sample sets alone, indicating that the use of an autoregressive model is justified in this case.
Predicting the next time step of a distribution can be expected to be harder if not only the values of the density change between time steps, but also its support. We test this by setting $d_t$ to a Gaussian distribution whose mean translates between time steps, which we call the translation setting. Figure 2 (bottom left) illustrates the last three of the total nine observed steps of the dynamics (blue) and its extrapolation (orange), with and without herding. One can see that EDD is indeed able to extrapolate the distribution into a new region of the input space: the mode of the prediction lies to the right of the modes of all inputs. However, the prediction quality is not as good as in the mixture setting, indicating that this is in fact a harder task.
Finally, we study a situation where it is not clear at first sight whether the underlying dynamics admit a linear model: a sequence of Gaussians with decreasing variances, which we call the concentration setting. The last three of the nine observed steps of the dynamics and its extrapolation, with and without herding, are illustrated in Figure 2 (bottom right) in blue and orange, respectively. One can see that, despite the likely nonlinearity, EDD is able to predict a distribution that is more concentrated (has lower variance) than any of the inputs. In this case, we also observe that the predicted distribution function takes negative values, and that herding removes those.
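An end-to-end toy run in the spirit of the mixture setting can be pieced together from the steps of Section 2.1; all concrete values below (component means, mixture schedule, sample size) are illustrative choices of ours, not the actual parameters of the experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixture(alpha, n):
    """n samples from alpha * N(1, 1) + (1 - alpha) * N(-1, 1)  (illustrative)."""
    comp = rng.random(n) < alpha
    return np.where(comp, rng.normal(1.0, 1.0, n), rng.normal(-1.0, 1.0, n))

def gram(S, Sp, sigma=1.0):
    return np.exp(-(S[:, None] - Sp[None, :]) ** 2 / (2 * sigma ** 2))

T, n, lam = 7, 500, 1e-3
alphas = [0.2 + 0.1 * t for t in range(T)]     # mixture weight drifts over time
sets = [sample_mixture(a, n) for a in alphas]

# EDD: Gram matrix of the embeddings, then coefficients beta of Eq. (5)
G = np.array([[gram(S, Sp).mean() for Sp in sets] for S in sets])
beta = np.linalg.inv(G[:T - 1, :T - 1] + lam * np.eye(T - 1)) @ G[:T - 1, T - 1]

# probe the prediction via Eq. (6) with f(x) = x (used here only as a numeric probe)
pred_mean = sum(b * S.mean() for b, S in zip(beta, sets[1:]))
true_mean = 2 * (0.2 + 0.1 * T) - 1            # mean of the mixture at time T+1
```

Comparing `pred_mean` against `true_mean` (and against the mean of the last observed set) reproduces, on a small scale, the kind of comparison reported in the tables below.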
As a quantitative evaluation, we report in Tables 1 and 2 how well the predicted distributions correspond to the ground-truth ones, as measured by the Hilbert-space (HS) distance and the Kullback-Leibler divergence, respectively. The latter is only possible for EDD after herding, when the prediction is a proper probability distribution (non-negative and normalized). Besides EDD, we include the baseline of reusing the last observed sample set as a proxy for the next one. To quantify how much of the observed distance is due to the prediction step and how much is due to unavoidable sampling error, we also report the values for a sample set of the same size drawn from the true distribution $d_{T+1}$.

The results confirm that, given sufficiently many samples of the earlier tasks, EDD is indeed able to successfully predict the dynamics of the distribution: the predicted distribution is closer to the true distribution than the last observed one. For the translation and concentration settings, the analytic results show that even for $n \to \infty$ the difference is nonzero, suggesting that the true dynamics are not exactly linear in the RKHS. However, the residual is small compared to the measured quantities.
Experiment 2: Real World Data.
In a second set of experiments, we test EDD's suitability for real data by applying it to video sequences from [6]. The dataset consists of 1121 video sequences of six semantic categories, birthday, parade, picnic, show, sports, and wedding, from two sources, Kodak and YouTube. Each video is represented by a collection of spatio-temporal interest points (STIPs) with 162-dimensional feature vectors (available at http://vc.sce.ntu.edu.sg/index_files/VisualEventRecognition/features.html).
For each video, except six that were less than one second long, we split the STIPs into groups by creating segments of 10 frames each. Different segments have different numbers of samples, because the STIPs were detected based on the response of an interest operator. Different videos also show strong diversity in this characteristic: the number of STIPs per segment varies between 1 and 550, and the number of segments per video varies between 3 and 837.
As experimental setup, we use all segments of a video except the last one as input sets for EDD, and we measure the distance between the predicted next distribution and the actual last segment. Table 3 shows the results, split by data source and category, for two choices of kernel: the RBF kernel, $k(x, \bar x) = \exp(-\gamma \|x - \bar x\|^2)$, and the histogram intersection kernel, $k(x, \bar x) = \sum_d \min(x_d, \bar x_d)$. For each data source and category, we report the average and standard error of the Hilbert-space distance between the distributions. As baselines, we compare against reusing the last observed segment, i.e. not extrapolating, and against the distribution obtained from merging all segments, i.e. the global video distribution. One can see that the predictions by EDD are closer to the true evolution of the videos than both baselines in all cases but two, in which EDD is tied with using the last observation. The improvement is statistically significant (bold print) at the 0.05 level according to a Wilcoxon signed rank test with multi-test correction, except in some cases with only few sequences.
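The two kernels used above can be written compactly as follows (our sketch; the bandwidth of the RBF kernel is left generic, since its value is not specified here):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel: k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def hist_intersection_kernel(X, Y):
    """Histogram intersection kernel: k(x, y) = sum_d min(x_d, y_d)."""
    return np.minimum(X[:, None, :], Y[None, :, :]).sum(axis=-1)
```

Both functions return full Gram matrices between two sample sets, which is exactly the form required by the embedding computations of Section 2.1.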
5 Application: Predictive Domain Adaptation
We are convinced that being able to extrapolate a time-varying probability distribution into the future will be useful for numerous practical applications. As an example, we look at one specific problem: learning a classifier under distribution drift, when data from the time steps $1, \dots, T$ is available for training, but by the time the classifier is applied to its target data, the distribution has moved on to time $T+1$. A natural choice to tackle this situation would be to use domain adaptation techniques [8]. However, those typically require that at least unlabeled data from the target distribution is available, which in practice might not be the case. For example, in an online prediction setting, such as spam filtering, predictions need to be made on the fly. One cannot simply stop, collect data from the new data distribution, and retrain the classifiers. Instead, we show how EDD can be used to train a maximum-margin classifier for data distributed according to $d_{T+1}$, with only data from $d_1, \dots, d_T$ available. We call this setup predictive domain adaptation (PDA).
Let $S_t = \{(x^t_1, y^t_1), \dots, (x^t_{n_t}, y^t_{n_t})\}$, for $t = 1, \dots, T$, be a sequence of labeled training sets, where $\mathcal{X}$ is the input space, e.g. images, and $\mathcal{Y}$ is the set of class labels. For any kernel $k_{\mathcal{X}}$ on $\mathcal{X}$, we form a joint kernel on $\mathcal{X} \times \mathcal{Y}$, e.g. $k\big((x, y), (\bar x, \bar y)\big) = k_{\mathcal{X}}(x, \bar x)\, [\![ y = \bar y ]\!]$, and we apply EDD to the sequence $S_1, \dots, S_T$. The result is an estimate of the next time step of the joint probability distribution, $d_{T+1}(x, y)$, as a vector, $\mu_{T+1}$, or in the form of a weighted sample set, $\tilde S_{T+1}$.
To see how this allows us to learn a better adapted classifier, we first look at the situation of classification with the 0/1-loss, $\ell(y, \bar y) = [\![ y \neq \bar y ]\!]$. If a correctly distributed training set $\{(x_1, y_1), \dots, (x_n, y_n)\}$ of size $n$ were available, one would aim at minimizing the regularized risk functional

$\frac{1}{2} \|w\|^2 \;+\; \frac{C}{n} \sum_{i=1}^{n} \ell\big( y_i,\ \langle w, \psi(x_i) \rangle \big),$   (10)

where $C$ is a regularization parameter and $\psi$ is any feature map, not necessarily the one induced by the kernel $k$. To do so numerically, one would bound the loss by a convex surrogate, such as the hinge loss, $\max\{0,\ 1 - y \langle w, \psi(x) \rangle\}$, which makes the overall optimization problem convex and therefore efficiently solvable.
In the PDA situation, we do not have a training set sampled from $d_{T+1}$, but we do have a prediction provided by EDD in the form of Equation (8). Therefore, instead of the empirical average in (10), we can form a predicted empirical average using the weighted samples in $\tilde S_{T+1}$. This leads to the predicted regularized risk functional,

$\frac{1}{2} \|w\|^2 \;+\; C \sum_{t=2}^{T} \frac{\beta_t}{n_t} \sum_{i=1}^{n_t} \ell\big( y^t_i,\ \langle w, \psi(x^t_i) \rangle \big),$   (11)

that we would like to minimize.
In contrast to expression (10), replacing the 0/1-loss by the hinge loss does not lead to a convex upper bound of (11), because the coefficients $\beta_t$ can be either positive or negative. However, we can use that $\ell(y, \bar y) = 1 - \ell(-y, \bar y)$ and obtain an equivalent expression for the loss term with only positive weights, $\mathrm{const} + \sum_{t=2}^{T} \sum_{i=1}^{n_t} \big|\tfrac{\beta_t}{n_t}\big|\, \ell\big( \bar y^t_i,\ \langle w, \psi(x^t_i) \rangle \big)$, where $\bar y^t_i = \operatorname{sign}(\beta_t)\, y^t_i$. The constant plays no role for the optimization procedure, so we drop it from the notation for the rest of this section. Now bounding each 0/1-loss term by the corresponding hinge loss yields a convex upper bound of the predicted risk,

$\frac{1}{2} \|w\|^2 \;+\; C \sum_{t=2}^{T} \sum_{i=1}^{n_t} \Big|\frac{\beta_t}{n_t}\Big|\, \max\big\{0,\ 1 - \bar y^t_i \langle w, \psi(x^t_i) \rangle \big\}.$   (12)
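The rewriting into positive weights can be sketched as follows (the helper name is ours); the identity holds exactly for the 0/1-loss on binary labels:

```python
import numpy as np

def flip_negative_weights(weights, labels):
    """Rewrite  sum_i w_i * L01(y_i, f(x_i))  with weights of arbitrary sign as
    const + sum_i |w_i| * L01(ybar_i, f(x_i)),  using  L01(y, .) = 1 - L01(-y, .).
    Returns (|w|, flipped labels ybar, additive constant)."""
    w = np.asarray(weights, dtype=float)
    y = np.asarray(labels)
    const = w[w < 0].sum()     # constant offset, irrelevant for the minimization
    return np.abs(w), np.where(w < 0, -y, y), const
```

The resulting nonnegative weights and flipped labels can then be handed to any SVM solver that accepts per-sample weights.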
Minimizing it corresponds to training a support vector machine with respect to the predicted data distribution, which we refer to as PredSVM. This can be done with standard SVM packages that support per-sample weights, such as LIBSVM [3].

Experiment 3: Predictive Domain Adaptation.
To demonstrate the usefulness of training a classifier on a predicted data distribution, we perform experiments on the CarEvolution [16] dataset (available at http://homes.esat.kuleuven.be/~krematas/VisDA/CarEvolution.html). It consists of 1086 images of cars, each annotated with the car manufacturer (BMW, Mercedes or VW) and the year in which the car model was introduced (between 1972 and 2013).
The data comes split into source data (years 1972–1999) and target data (years 2000–2013). We split the source part further into three decades: 1970s, 1980s, 1990s. Given these groups, our goal is to learn a linear PredSVM that distinguishes between the manufacturers in the target part. We also perform a second set of experiments, in which we split the target set further into models from the 2000s and models from the 2010s, and learn a linear PredSVM with the 1970s as target and the other tasks, in inverse temporal order, as sources. As baselines, we use SVMs trained on any single observed task, as well as an SVM trained on the union of all source tasks. In all cases we choose the SVM parameter, $C$, by five-fold cross-validation on the respective training sets.
Table 4 summarizes the results for two different feature representations: Fisher vectors [15] and normalized DeCAF features [5]. In both cases, PredSVM is able to improve over the baselines. Interestingly, the effect is stronger for the DeCAF features, even though these were reported to be less affected by visual domain shifts. We plan to explore this effect in further work.
Table 4:

method                      FVs     DeCAF
1970s → 2000s               39.3%   38.2%
1980s → 2000s               43.8%   48.4%
1990s → 2000s               49.0%   52.4%
all → 2000s                 51.2%   52.1%
PredSVM (temporal order)    51.5%   56.2%

method                      FVs     DeCAF
2010s → 1970s               33.5%   34.0%
2000s → 1970s               31.6%   42.7%
1990s → 1970s               46.1%   46.6%
1980s → 1970s               44.7%   33.5%
all → 1970s                 46.1%   49.0%
PredSVM (reverse order)     48.5%   54.4%
6 Summary and Discussion
In this work, we have introduced the task of predicting the future evolution of a time-varying probability distribution. We described a method that, given a sequence of observed sample sets, extrapolates the distribution dynamics by one step. Its main components are two recent techniques from machine learning: the embedding of probability distributions into a Hilbert space, and vector-valued regression. Furthermore, we showed how the predicted distribution obtained from the extrapolation can be used to learn a classifier for a data distribution from which no training examples are available, not even unlabeled ones.
Our experiments on synthetic and real data gave insight into the working methodology of EDD and showed that it is, to some extent, possible to predict the next state of a time-varying distribution from sample sets of earlier time steps, and that this can be useful for learning better classifiers. One shortcoming of our method is that it is currently restricted to equally spaced time steps and that the extrapolation covers only a single time unit. We plan to extend our framework to more flexible situations, including distributions with a continuous time parameterization.
Acknowledgements
This work was funded in parts by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement no. 308036.
References

[1] Y. Altun and A. Smola. Unifying divergence minimization and statistical inference via convex duality. In Workshop on Computational Learning Theory (COLT), 2006.
[2] F. Bach, S. Lacoste-Julien, and G. Obozinski. On the equivalence between herding and conditional gradient algorithms. In International Conference on Machine Learning (ICML), 2012.
[3] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27, 2011.
[4] Y. Chen, M. Welling, and A. J. Smola. Super-samples from kernel herding. In Uncertainty in Artificial Intelligence (UAI), 2010.
[5] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In International Conference on Machine Learning (ICML), 2014.
[6] L. Duan, D. Xu, I.-H. Tsang, and J. Luo. Visual event recognition in videos by learning from web data. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 34(9):1667–1680, 2012.
[7] N. J. Gordon, D. J. Salmond, and A. F. M. Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. In IEE Proceedings F (Radar and Signal Processing), 1993.
[8] J. Jiang. A literature survey on domain adaptation of statistical classifiers. http://sifaka.cs.uiuc.edu/jiang4/domain_adaptation/survey, 2008.
[9] R. E. Kalman. A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82:35–45, 1960.
[10] K. M. Kitani, B. D. Ziebart, J. A. Bagnell, and M. Hebert. Activity forecasting. In European Conference on Computer Vision (ECCV), pages 201–214, 2012.
[11] C. H. Lampert. Kernel methods in computer vision. Foundations and Trends in Computer Graphics and Vision, 4(3):193–285, 2009.
[12] G. Lever, L. Baldassarre, S. Patterson, A. Gretton, M. Pontil, and S. Grünewälder. Conditional mean embeddings as regressors. In International Conference on Machine Learning (ICML), 2012.
[13] H. Lian. Nonlinear functional models for functional responses in reproducing kernel Hilbert spaces. Canadian Journal of Statistics, 35(4):597–606, 2007.
[14] C. A. Micchelli and M. Pontil. On learning vector-valued functions. Neural Computation, 17(1):177–204, 2005.
[15] F. Perronnin, J. Sánchez, and T. Mensink. Improving the Fisher kernel for large-scale image classification. In European Conference on Computer Vision (ECCV), 2010.
[16] K. Rematas, B. Fernando, T. Tommasi, and T. Tuytelaars. Does evolution cause a domain shift? In ICCV Workshop on Visual Domain Adaptation and Dataset Bias, 2013.
[17] A. Smola, A. Gretton, L. Song, and B. Schölkopf. A Hilbert space embedding for distributions. In International Conference on Algorithmic Learning Theory (ALT), 2007.
[18] L. Song. Learning via Hilbert space embedding of distributions. PhD thesis, University of Sydney, 2008.
[19] L. Song, J. Huang, A. Smola, and K. Fukumizu. Hilbert space embeddings of conditional distributions with applications to dynamical systems. In International Conference on Machine Learning (ICML), 2009.
[20] Z. Szabó, A. Gretton, B. Póczos, and B. Sriperumbudur. Learning theory for distribution regression. arXiv:1411.2066 [math.ST], 2014.
[21] J. Walker, A. Gupta, and M. Hebert. Patch to the future: Unsupervised visual prediction. In Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
[22] K. Yamaguchi, A. C. Berg, L. E. Ortiz, and T. L. Berg. Who are you with and where are you going? In Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
[23] B. D. Ziebart, N. Ratliff, G. Gallagher, C. Mertz, K. Peterson, J. A. Bagnell, M. Hebert, A. K. Dey, and S. Srinivasa. Planning-based prediction for pedestrians. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2009.