Learning from lions: inferring the utility of agents from their trajectories

by Adam D. Cobb, et al.
University of Oxford

We build a model using Gaussian processes to infer a spatio-temporal vector field from observed agent trajectories. Significant landmarks or influence points in an agent's surroundings are jointly derived through vector calculus operations that indicate the presence of sources and sinks. We evaluate these influence points by using the Kullback-Leibler divergence between the posterior and prior Laplacian of the inferred spatio-temporal vector field. Through locating significant features that influence trajectories, our model aims to give greater insight into the underlying causal utility functions that determine agent decision-making. A key feature of our model is that it infers a joint Gaussian process over the observed trajectories, the time-varying vector field of utility and canonical vector calculus operators. We apply our model to both synthetic data and lion GPS data collected at the Bubye Valley Conservancy in southern Zimbabwe.





1 Introduction

Inferring the possible cause of an agent's behaviour from observing its actions is an important area of research across multiple domains. Generic solutions to this problem typically exploit Inverse Reinforcement Learning (IRL) [1] and include learning preference value functions from choices [2], as well as utility, value or policy surfaces from observed actions in a space. In this paper we consider observed trajectories, influenced by a time- and space-varying utility function, and look to infer that spatio-temporal utility function from the observed tracks. Our work is motivated by a desire to further understand animal interaction with an environment, fuelled by the existence of long-term GPS data. Instead of making deductions about likely tracks given our knowledge of features in the environment, we invert the task and infer significant features in the landscape from the tracks themselves. Our model constructs Gaussian processes (GPs) for jointly inferring a spatio-temporal vector field and canonical vector calculus operations, allowing us to estimate a time-varying map of sources and sinks of potential influence.

Representing the utility function with a GP model has been applied to IRL in both [3, 4], which improve on previous IRL techniques that constrain the reward to be a linear combination of features [5, 6, 7]. Instead of inferring characteristics of the utility function by finding an optimal policy that matches observed agent demonstrations, we take advantage of a GP model to infer the distributions over trajectories and their joint derivatives. These distributions provide an estimate of significant attractors and repellers in an agent's surroundings which influence the observed agent's dynamics.

The use of GPs to model vector fields can be seen in [8], which directly models magnetic fields using divergence-free and curl-free kernels to enforce linear constraints. In [9], a method for incorporating any linear constraint is presented. To the best of our knowledge, apart from the above work on modelling vector fields directly with Gaussian processes, the derivative properties of GPs are not currently exploited to infer vector calculus operations from lower order observations.

Previous techniques for studying how animals interact with the environment rely on using GPS data to build density maps that represent probability distributions over their likely locations [10, 11]. Although more recent approaches have incorporated time [12] into these models, current methods do not focus on inferring the driving force behind animal actions. This work looks to address this gap in the literature, providing not only a tool to aid animal behaviour studies, but also an addition to the growing body of work in inverse reinforcement learning.

The rest of the paper is organised as follows: Section 2 introduces the GP and its associated derivative manipulations for vector calculus. The model is discussed in Section 3, which is followed by a description of the synthetic data experiments in Section 4. The results of applying the model to the lion GPS data are presented in Section 5. Finally, we discuss the implications of our results and comment on future work in Section 6.

2 Preliminaries

As a requirement for our model, we introduce the Gaussian process, defined by its mean function $m(\mathbf{x})$ and covariance function $k(\mathbf{x}, \mathbf{x}')$ [13]. The mean and covariance functions are parameterised by their hyperparameters and encode prior information into the model, such as smoothness, periodicity and any known underlying behaviour. In our work, $m(\mathbf{x})$ is set to zero as our data is preprocessed to have a zero mean. We define a function, distributed via a GP, as follows:
$$f(\mathbf{x}) \sim \mathcal{GP}\big(m(\mathbf{x}),\, k(\mathbf{x}, \mathbf{x}')\big). \tag{1}$$
Manipulating Gaussian identities using Bayes' rule gives formulae for the GP's posterior mean,
$$\mu(\mathbf{x}_*) = \mathbf{k}(\mathbf{x}_*, \mathbf{X})\left[\mathbf{K} + \sigma_n^2 \mathbf{I}\right]^{-1}\mathbf{y}, \tag{2}$$
and posterior covariance,
$$\sigma^2(\mathbf{x}_*) = k(\mathbf{x}_*, \mathbf{x}_*) - \mathbf{k}(\mathbf{x}_*, \mathbf{X})\left[\mathbf{K} + \sigma_n^2 \mathbf{I}\right]^{-1}\mathbf{k}(\mathbf{X}, \mathbf{x}_*), \tag{3}$$
where $\mathbf{x}_*$ is the test point under question and $\sigma_n^2$ is the noise variance hyperparameter.

Any affine transformation of Gaussian distributed variables remains jointly Gaussian distributed. As differentiation is an affine operation, applying this property to any collection of random variables distributed by a GP gives jointly Gaussian distributed derivatives. For a test point $\mathbf{x}_*$ and corresponding output $f_*$, the derivatives associated with the GP in Equation (1) are distributed with posterior mean,
$$\mathbb{E}\left[\frac{\partial f_*}{\partial \mathbf{x}_*}\right] = \frac{\partial \mathbf{k}(\mathbf{x}_*, \mathbf{X})}{\partial \mathbf{x}_*}\left[\mathbf{K} + \sigma_n^2 \mathbf{I}\right]^{-1}\mathbf{y}, \tag{4}$$
and posterior covariance,
$$\mathbb{V}\left[\frac{\partial f_*}{\partial \mathbf{x}_*}\right] = \frac{\partial^2 k(\mathbf{x}_*, \mathbf{x}_*)}{\partial \mathbf{x}_* \partial \mathbf{x}_*^\top} - \frac{\partial \mathbf{k}(\mathbf{x}_*, \mathbf{X})}{\partial \mathbf{x}_*}\left[\mathbf{K} + \sigma_n^2 \mathbf{I}\right]^{-1}\frac{\partial \mathbf{k}(\mathbf{X}, \mathbf{x}_*)}{\partial \mathbf{x}_*}. \tag{5}$$
We define Equation (4) as the predictive mean of the derivative with respect to any test points and Equation (5) as its corresponding variance.

The covariance selected throughout this paper is the squared exponential kernel,
$$k(\mathbf{x}_*, \mathbf{x}) = \sigma_f^2 \exp\left(-\tfrac{1}{2}(\mathbf{x}_* - \mathbf{x})^\top \boldsymbol{\Lambda}^{-1} (\mathbf{x}_* - \mathbf{x})\right), \tag{6}$$
with $\mathbf{x}_*$ and $\mathbf{x}$ corresponding to a test point and training point respectively. The hyperparameter $\boldsymbol{\Lambda}$ is a diagonal matrix of input scale lengths, where each element determines the relevance of its corresponding dimension, and the output scale, denoted by $\sigma_f^2$, controls the magnitude of the output [14]. Although the choice of kernel is somewhat arbitrary, ours is motivated by the desire to obtain smooth measures over arbitrary derivative functions and the ease with which the kernel can be differentiated and used in combination with Equations (4) and (5). The following formulae define the first and second order derivatives of the squared exponential kernel [15]:
$$\frac{\partial k(\mathbf{x}_*, \mathbf{x})}{\partial \mathbf{x}_*} = -\boldsymbol{\Lambda}^{-1}(\mathbf{x}_* - \mathbf{x})\, k(\mathbf{x}_*, \mathbf{x}), \tag{7}$$
$$\frac{\partial^2 k(\mathbf{x}_*, \mathbf{x})}{\partial \mathbf{x}_*\, \partial \mathbf{x}^\top} = \boldsymbol{\Lambda}^{-1}\left(\mathbf{I} - (\mathbf{x}_* - \mathbf{x})(\mathbf{x}_* - \mathbf{x})^\top \boldsymbol{\Lambda}^{-1}\right) k(\mathbf{x}_*, \mathbf{x}). \tag{8}$$

We note at this point that estimating derivatives using a joint GP model over the function and derivatives [16, 17] offers a benign noise escalation in comparison to numerical differentiation.
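As a concrete illustration of how Equation (4) turns function observations into derivative estimates, the following sketch (illustrative only: the hyperparameter values and test function are our own choices, not the authors' implementation) uses a one-dimensional squared exponential kernel and its differentiated cross-covariance:

```python
import numpy as np

def sq_exp_kernel(X1, X2, lam, sf2):
    """Squared exponential kernel k(x, x') = sf2 * exp(-0.5 (x - x')^2 / lam^2)."""
    d = X1[:, None] - X2[None, :]
    return sf2 * np.exp(-0.5 * d**2 / lam**2)

def gp_derivative_posterior_mean(X, y, x_star, lam=1.0, sf2=1.0, sn2=1e-4):
    """Posterior mean of df/dx at x_star via the differentiated kernel (Eq. 4)."""
    K = sq_exp_kernel(X, X, lam, sf2) + sn2 * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    # d k(x_*, x) / d x_* = -(x_* - x) / lam^2 * k(x_*, x), cf. Eq. (7)
    k_star = sq_exp_kernel(np.atleast_1d(x_star), X, lam, sf2)[0]
    dk = -(x_star - X) / lam**2 * k_star
    return dk @ alpha

# Observations of f(x) = sin(x); the derivative estimate should be close to cos(x).
X = np.linspace(0, 2 * np.pi, 40)
y = np.sin(X)
print(gp_derivative_posterior_mean(X, y, np.pi))  # close to cos(pi) = -1
```

Because the derivative is read off the GP posterior rather than obtained by finite differencing of samples, the estimate inherits the smoothness encoded by the kernel.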

2.1 Vector calculus with GPs

We define each dimension of the vector field, $\mathbf{V} = (V_x, V_y)$, at time $t$ to be modelled by a multi-input, multi-output GP with a three-dimensional input tuple $(x, y, t)$, consisting of the spatial and time components of our trajectory. This GP is constructed by introducing a separable kernel [18], such that the covariance contains an independently learned kernel for each output dimension. Applying Equation (4) to this GP model, by jointly inferring the derivatives in the $x$ and $y$ directions, gives the time dependent posterior for each random variable in the tuple
$$\left(\frac{\partial V_x}{\partial x},\, \frac{\partial V_x}{\partial y},\, \frac{\partial V_y}{\partial x},\, \frac{\partial V_y}{\partial y}\right).$$
We combine these predictive derivatives using Equations (9) and (10) to infer probability distributions over the divergence and curl of the vector field $\mathbf{V}$:
$$\nabla \cdot \mathbf{V} = \frac{\partial V_x}{\partial x} + \frac{\partial V_y}{\partial y}, \tag{9}$$
$$\nabla \times \mathbf{V} = \frac{\partial V_y}{\partial x} - \frac{\partial V_x}{\partial y}. \tag{10}$$
By simple application of the appropriate operators, we may readily define the time-varying spatial Laplacian
$$\nabla^2 \phi = \nabla \cdot \nabla \phi = \frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial y^2}, \tag{11}$$
where $\phi$ is a scalar potential function. This (time-varying) Laplacian is of key importance as it defines sources and sinks in the spatial domain.
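These identities can be checked on a known gradient field. The sketch below is a numerical stand-in: central differences replace the GP posterior derivatives, and the potential is an arbitrary choice of ours. It verifies that the divergence of $\nabla\phi$ recovers the Laplacian of $\phi$ (Eq. 11) and that the curl of a gradient field vanishes (Eq. 10):

```python
import numpy as np

# Scalar potential phi(x, y) = exp(-(x^2 + y^2) / 2); its gradient field V = grad(phi)
# is curl-free, and div(V) equals the Laplacian of phi (Eq. 11).
n = 201
xs = np.linspace(-3, 3, n)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")
phi = np.exp(-(X**2 + Y**2) / 2)
Vx = -X * phi   # d(phi)/dx
Vy = -Y * phi   # d(phi)/dy

# Central-difference partials standing in for the GP posterior derivatives
dVx_dx = np.gradient(Vx, h, axis=0)
dVx_dy = np.gradient(Vx, h, axis=1)
dVy_dx = np.gradient(Vy, h, axis=0)
dVy_dy = np.gradient(Vy, h, axis=1)

div = dVx_dx + dVy_dy    # Eq. (9)
curl = dVy_dx - dVx_dy   # Eq. (10)

# At the origin the Laplacian of phi is -2 (a sink); the curl of a gradient field is 0.
i = n // 2
print(div[i, i], curl[i, i])
```

A negative divergence at the mode marks it as a sink, matching the paper's reading of attractors.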

3 Model

Our model builds upon the theory introduced in Section 2. The objective is to design a model that can indicate influential features in an agent's landscape from its observed trajectories alone.

A trajectory for agent $j$ is defined as a collection of timestamped locations, or tuples $(\mathbf{x}_i, t_i)$. The elements of the vector $\mathbf{x}_i$ are referred to as $x_i$ and $y_i$ for the purposes of this model. We also make the assumption that each agent acts according to a utility or potential function $u(x, y, t)$, which is dependent on time and space. Whilst interacting with the environment, each agent aims to maximise this utility function at all times.

3.1 Fitting to the agent trajectories

The first stage of the model uses Equations (2) and (3) to fit a GP to each trajectory $\mathbf{X}_j$. Using a single GP with a separable kernel, Equation (12) defines our GP prior, which is a joint distribution over the path and its derivatives with respect to time, where we denote $\ddot{x}$ and $\ddot{y}$ as the second-order time derivatives in the $x$ and $y$ directions:
$$\big(x(t),\, y(t),\, \dot{x}(t),\, \dot{y}(t),\, \ddot{x}(t),\, \ddot{y}(t)\big) \sim \mathcal{GP}\big(\mathbf{0},\, \mathbf{K}(t, t')\big). \tag{12}$$
For each of the $x$ and $y$ components of $\mathbf{X}_j$, a set of hyperparameters is learned for the separable kernel. The input space of this GP model is time and the output space consists of the $x$ and $y$ components of $\mathbf{X}_j$. If an agent trajectory, $\mathbf{X}_j$, consists of $N$ data points, we can use the posterior to infer expectations for the derivatives at each of the data points.

Second-order derivatives are inferred at this stage, as we make the tacit assumption that an agent acts in accordance with a second-order dynamical system, i.e. the agent obeys Newtonian dynamics. This assumption means that an agent's underlying utility induces a "force" of influence on the agent, thus generating an acceleration (we here take the 'influence-mass' of the agent as unity). More formally, this induces an acceleration equal to the derivative of the agent utility:
$$\ddot{\mathbf{x}}(t) = \nabla u(x, y, t).$$
When dealing with a multi-agent system of homogeneous agents, a trajectory model can be calculated for each agent to form a set of joint distributions, one per agent. From this GP model we are able to jointly predict the velocity and acceleration at any point on an agent's trajectory, for all agents. The next layer of our model combines these to construct a probability distribution over the extended agent environment.
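A minimal sketch of this first stage for a single one-dimensional track (illustrative hyperparameters and a synthetic track of our own; the paper's model is a separable multi-output GP over both coordinates) infers an acceleration by applying the twice-differentiated squared exponential kernel, per Equation (8):

```python
import numpy as np

def se(T1, T2, lam, sf2):
    """Squared exponential kernel over scalar time inputs."""
    d = T1[:, None] - T2[None, :]
    return sf2 * np.exp(-0.5 * d**2 / lam**2)

def accel_posterior_mean(T, x, t_star, lam=1.0, sf2=1.0, sn2=1e-4):
    """Posterior mean of the second time derivative of a 1-D track at t_star."""
    K = se(T, T, lam, sf2) + sn2 * np.eye(len(T))
    alpha = np.linalg.solve(K, x)
    d = t_star - T
    k_star = sf2 * np.exp(-0.5 * d**2 / lam**2)
    # Second derivative of the SE kernel w.r.t. t_star (twice), cf. Eq. (8)
    d2k = (d**2 / lam**2 - 1.0) / lam**2 * k_star
    return d2k @ alpha

T = np.linspace(0, 2 * np.pi, 60)
x = np.sin(T)                                # x-coordinate of a synthetic track
print(accel_posterior_mean(T, x, np.pi / 2))  # true acceleration is -sin(pi/2) = -1
```

Under the Newtonian assumption, this inferred acceleration is the training target for the second-stage GP over the utility gradient.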

3.2 Inferring the vector field and Laplacian

In order to infer the gradient of the utility function, $\nabla u$, the set of inferred second derivatives for all agents is propagated through a second GP, which also has a separable kernel, as below:
$$\mathbf{V}(x, y, t) \sim \mathcal{GP}\big(\mathbf{0},\, \mathbf{K}\big((x, y, t), (x', y', t')\big)\big). \tag{13}$$
The vector $\mathbf{V}$ consists of two random variables that model the acceleration along the two axes, and the superscript label 'post' refers to the calculated posterior mean and covariance. Equation (13) combines the multiple agent paths into one model and enables predictions to be made at points in space that are not constrained to a single agent trajectory, as in Equation (12). The input-output pairs for this GP model are the $(x, y, t)$ values in each trajectory that correspond to the $(\ddot{x}, \ddot{y})$ values.

The Newtonian assumption made in Section 3.1 is formally included as
$$\mathbf{V}(x, y, t) = \nabla u(x, y, t).$$
The partial derivatives from Equation (13) can thence be combined to calculate a distribution over the divergence of $\mathbf{V}$. It follows that this divergence equals the Laplacian under the same assumption,
$$\nabla \cdot \mathbf{V} = \nabla \cdot \nabla u = \nabla^2 u.$$
In particular, our interest lies in the estimation of the Laplacian of the utility function, as it indicates sources and sinks of the potential function in the environment. In this context, we regard sinks as agent attractors and sources as agent repellers.

3.3 Metric for locating significant attractors and repellers: Kullback–Leibler divergence

Our metric of change from prior field to posterior field is measured via the Kullback-Leibler (KL) divergence [19]. The motivation for selecting the KL divergence comes from its ability to measure a distance between two distributions by taking into account both the mean and variance. This provides a natural indication of the informativeness of spatial locations at given times and, in the context of our application, offers a measure of trajectory-influencing locations.

Given the model at time $t$, each point in space has an associated potential field distribution, defined via the GP posterior as a univariate normal distribution. The KL divergence can be readily calculated between two univariate normal distributions, namely the prior $q = \mathcal{N}(\mu_q, \sigma_q^2)$ and posterior $p = \mathcal{N}(\mu_p, \sigma_p^2)$ [20], as below:
$$D_{\mathrm{KL}}(p \,\|\, q) = \log\frac{\sigma_q}{\sigma_p} + \frac{\sigma_p^2 + (\mu_p - \mu_q)^2}{2\sigma_q^2} - \frac{1}{2}. \tag{14}$$

We refer back to Equations (9) and (11) in order to calculate the prior over the Laplacian at a location in space-time. The hyperparameters $\sigma_x^2$ and $\lambda_x$ are the output scale and input scale length of the $x$-part of the separable kernel in the GP model, with $\sigma_y^2$ and $\lambda_y$ corresponding to the $y$-part.

As our interest lies in determining attractors and repellers in the field, a further addition to the KL divergence in Equation (14) is to multiply it by the sign of the posterior mean of the Laplacian. This multiplication carries over the prediction of negative sinks and positive sources, whilst measuring the divergence from the zero prior mean. We refer to this trivial extension as the signed KL divergence.
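The signed KL computation is straightforward to state in code. The sketch below (function names are our own; the paper defines only the formula) implements Equation (14) for univariate normals and attaches the sign of the posterior mean:

```python
import numpy as np

def kl_univariate_normal(mu_p, var_p, mu_q, var_q):
    """KL(p || q) between two univariate normals (Eq. 14)."""
    return (np.log(np.sqrt(var_q / var_p))
            + (var_p + (mu_p - mu_q)**2) / (2.0 * var_q) - 0.5)

def signed_kl(post_mu, post_var, prior_mu, prior_var):
    """KL from prior to posterior, signed by the posterior Laplacian mean:
    negative values flag attractors (sinks), positive values repellers (sources)."""
    kl = kl_univariate_normal(post_mu, post_var, prior_mu, prior_var)
    return np.sign(post_mu) * kl

# A confident negative posterior Laplacian against a zero-mean prior -> attractor.
print(signed_kl(post_mu=-2.0, post_var=0.1, prior_mu=0.0, prior_var=1.0))
```

Evaluating this at every grid point at a fixed time yields the maps shown in the figures.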

3.4 Hyperparameter optimisation

Optimisation of the hyperparameters is achieved through maximisation of the GP's marginal log-likelihood using scaled conjugate gradient approaches. We initialise hyperparameters for spatial models with domain knowledge, where available. In our example, lions are dynamic objects with maximal accelerations, which places natural priors on the relationships between spatial movement and time variation.

4 Application to synthetic data

We apply our model to synthetic data where the true utility function and its derivatives are known, so as to test the performance of the approach. The experiment consists of a multi-agent system of four identical agents, whose dynamics we observe. Our goal is to infer, from trajectories alone, the underlying potential value function.

4.1 Utility function

The utility function for the test example is defined as a two-component Gaussian mixture model, where each Gaussian has a time-varying covariance that changes harmonically according to
$$u(\mathbf{x}, t) = \sum_{i=1}^{2} \mathcal{N}\!\left(\mathbf{x};\, \boldsymbol{\mu}_i,\, \big(a + b\sin(t)\big)\mathbf{I}\right), \tag{15}$$
where $a$ and $b$ are constants and $\boldsymbol{\mu}_1$, $\boldsymbol{\mu}_2$ are the distribution means. We also denote $\mathbf{I}$ as the identity matrix. The first and second order derivatives are calculated using the known derivatives of a multivariate normal distribution [21]. These derivatives are then used in conjunction with the vector calculus operations in Equations (9) and (11) to calculate the true Laplacian of the utility function.
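For reference, the known derivatives of a multivariate normal give a closed form for the Laplacian of a single isotropic 2-D Gaussian component: $\nabla^2 \mathcal{N}(\mathbf{x}; \boldsymbol{\mu}, \sigma^2\mathbf{I}) = \big(\|\mathbf{x}-\boldsymbol{\mu}\|^2/\sigma^4 - 2/\sigma^2\big)\mathcal{N}(\mathbf{x}; \boldsymbol{\mu}, \sigma^2\mathbf{I})$. The sketch below (our own derivation, with arbitrary test values) checks it against finite differences:

```python
import numpy as np

def gauss2d(x, mu, var):
    """Isotropic 2-D Gaussian density with covariance var * I."""
    d = x - mu
    return np.exp(-0.5 * d @ d / var) / (2 * np.pi * var)

def laplacian_gauss2d(x, mu, var):
    """Analytic Laplacian of the isotropic 2-D Gaussian density."""
    d = x - mu
    return (d @ d / var**2 - 2.0 / var) * gauss2d(x, mu, var)

# Check against central finite differences at an arbitrary point.
mu, var, x0 = np.array([0.5, -0.2]), 1.3, np.array([1.0, 0.4])
h = 1e-4
num = sum(
    (gauss2d(x0 + h * e, mu, var) - 2 * gauss2d(x0, mu, var)
     + gauss2d(x0 - h * e, mu, var)) / h**2
    for e in (np.array([1.0, 0.0]), np.array([0.0, 1.0]))
)
print(abs(num - laplacian_gauss2d(x0, mu, var)))  # small discrepancy
```

At a mixture mean the Laplacian is negative, which is exactly the sink signature the model is asked to recover.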

4.2 Agent model

Agents are modelled according to a second-order dynamical system, whereby, at each time step $t$, the acceleration $\ddot{\mathbf{x}}_t$, velocity $\dot{\mathbf{x}}_t$ and position $\mathbf{x}_t$ are given by:
$$\ddot{\mathbf{x}}_t = \nabla u(\mathbf{x}_t, t) + \boldsymbol{\epsilon}, \qquad \dot{\mathbf{x}}_{t+1} = \dot{\mathbf{x}}_t + \ddot{\mathbf{x}}_t\,\delta t, \qquad \mathbf{x}_{t+1} = \mathbf{x}_t + \dot{\mathbf{x}}_{t+1}\,\delta t.$$
The random variable $\boldsymbol{\epsilon}$ corresponds to additive noise distributed according to a two-dimensional multivariate Gaussian, and $\delta t$ is the update increment.
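The update rule above can be sketched as a short simulation. All constants here (mixture means, variance schedule, noise scale, step size) are placeholder values of ours, not the parameters used in the paper's experiment:

```python
import numpy as np

def grad_utility(pos, t, mu1=np.array([-1.0, 0.0]), mu2=np.array([1.0, 1.0])):
    """Gradient of an illustrative two-component Gaussian-mixture utility
    with a harmonically varying isotropic variance (placeholder values)."""
    g = np.zeros(2)
    var = 1.0 + 0.3 * np.sin(t)
    for mu in (mu1, mu2):
        d = pos - mu
        g += -d / var * np.exp(-0.5 * d @ d / var)
    return g

def simulate(pos, vel, steps=500, dt=0.02, noise=0.01, seed=0):
    """Euler integration of the second-order dynamics: accel = grad(utility) + noise."""
    rng = np.random.default_rng(seed)
    path = [pos.copy()]
    for k in range(steps):
        acc = grad_utility(pos, k * dt) + noise * rng.standard_normal(2)
        vel = vel + acc * dt
        pos = pos + vel * dt
        path.append(pos.copy())
    return np.array(path)

path = simulate(np.array([0.0, -2.0]), np.zeros(2))
print(path.shape)  # one row per time step, columns are x and y
```

Trajectories generated this way are the observations from which the model must recover the mixture modes.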

4.3 Experimental results

Four agents were initialised at locations and with a velocity of zero. The parameters of the utility function in Equation (15) were and , . The model stepped through time-steps and the inferred Laplacian of the utility function was calculated from the four agent trajectories in . The subset of frames displayed in Figure 1 show how the attractors of the true utility functions are tracked across time by both the posterior inferred Laplacian and the signed KL divergence. The uncertainty in our model is displayed through the signed KL divergence. The black and white markers denote the location of the dominant attractor, at that time step, given by the ground truth in the first row. For all three rows, regions of attraction are indicated by blue and it can be seen that the model is able to track the true attractors across time.

In Figure 2, the left-hand plot displays the mean KL divergence taken over time, showing the emergence of the average location and size of the attractors. The right-hand plot shows each agent’s path and how their trajectories are drawn to the sink locations. This figure demonstrates that our model has correctly inferred the sink locations without prior knowledge of their location.

Figure 1: Top row: Laplacian of true utility function. Middle row: inferred Laplacian of utility function. Bottom row: signed KL divergence. The location of the global minimum of the true Laplacian is indicated via the black and white marker. The predicted locations of the sinks, given by both the signed KL divergence and the posterior Laplacian, align well with their true locations.
Figure 2: The mean signed KL divergence across time is shown in the left plot and the corresponding agent trajectories are shown in the right plot. Each agent path is denoted via a separate color. The true locations of the attractors are indicated by the black markers. Taking the average over the frames of data allows us to see the mean inferred position of attractors in the agents’ field, which coincide with the known positions.

5 Lion trajectories

In this section, we examine how our model can be used to determine drivers in the landscape that impact an animal's behaviour. Previous techniques for studying how animals interact with the environment rely on using GPS data to build density maps that represent probability distributions over their likely locations [10, 11]. Although more recent approaches have incorporated time [12] into these models, current methods do not focus on inferring the driving force behind animal actions and instead simply show where an animal is likely to be found. Therefore we apply our model to a subset of lion GPS data to show a possible way of inferring the location of influential features in a lion's landscape. This perspective differs from traditional ecological techniques, which rely on knowledge of the environment to test a hypothesis. In particular, our focus is to build on work by Valeix et al. [22], which shows that waterholes are key attractive features on lions' trajectories, using knowledge of waterhole locations. Inverting this perspective, we apply our model to this data set to infer the locations of waterholes that are important to a particular lion, without supplying these locations as a prior.

The model is applied to GPS data collected from lions living at the Bubye Valley Conservancy (BVC) in southern Zimbabwe. Figures 3 and 4 display the results of applying the model to data points covering days and hours of a particular lion's GPS readings. The $x$ and $y$ locations have been normalised in keeping with the relative geographic information system (GIS) data. In both figures the locations of waterholes are included to show their correspondence with predicted regions of attraction. Figure 3 shows the model inferring attractor locations at different time steps, where the units are in hours since the start of the data set. These regions can be seen to overlap the known waterhole locations at certain time steps. In Figure 4 the left-hand plot superimposes the GIS data of waterholes on the mean signed KL divergence produced from the lion trajectory, where the mean is taken across time. The corresponding lion trajectory GPS readings are shown in the right-hand plot. The mean signed KL divergence indicates that there are two waterholes located in areas near local minima, leading to the reasonable assumption that they may behave as influential attractive features. In this way, we can infer decision-making strategies behind observed trajectories. Richer models could investigate further effects, such as the location of prey and other lions in the ecosystem.

Figure 3: Top row: inferred Laplacian of utility function. Bottom row: signed KL divergence. Each column represents the model at a given hour since the start of the subset of lion data. The relative locations of the waterholes are denoted by the black and white markers. Blue regions indicate attractive regions. The signed KL divergence infers attractors occurring at different times that coincide with the waterholes, without prior knowledge of the waterhole locations.
Figure 4: The left plot displays the mean of the signed KL divergence across time to give an average estimate of the lion attractor locations. The blue areas indicate large negative values corresponding to the sinks. The right-hand plot contains the lion trajectory data. The black markers give the relative locations of waterholes, provided by the GIS data. This data set covers GPS readings that were collected over days and hours. The spatial scale for this figure is km.

6 Conclusion

Through applying layers of Gaussian processes combined with vector calculus, the model provides a useful way of inferring significant features in an agent's surroundings that influence its behaviour. We demonstrate the results of applying this model to a synthetic data set, where the true attractors are known. Our model successfully finds the attractors that determine the agents' trajectories. Furthermore, applying the model to a subset of lion GPS data allows us to determine a spatio-temporal influence measure which highlights the two waterholes that the lion visits most frequently. Unlike previous work in this domain, such information was not provided to the model. Moreover, unlike density-estimation-based methods, our model makes predictions that depend on the dynamic decisions made by the lion.

Further work will extend this model to propagate uncertainty from the GPS readings through to the final Laplacian predictions, adapting techniques used in [23, 24]. We aim to extend our model to considerably larger data sets through utilising sparse GP models as needed [25].


We would like to thank the shareholders of Bubye Valley Conservancy for allowing us to conduct field research on the conservancy and the management staff for assistance in the field. Adam Cobb is sponsored by the AIMS CDT (http://aims.robots.ox.ac.uk) and the EPSRC (https://www.epsrc.ac.uk).


  • [1] Stuart Russell. Learning agents for uncertain environments. In Proceedings of the eleventh annual conference on Computational learning theory, pages 101–103. ACM, 1998.
  • [2] Wei Chu and Zoubin Ghahramani. Preference Learning with Gaussian Processes. In Proceedings of the 22nd international conference on Machine learning, pages 137–144. ACM, 2005.
  • [3] Sergey Levine, Zoran Popovic, and Vladlen Koltun. Nonlinear Inverse Reinforcement Learning with Gaussian processes. In Advances in Neural Information Processing Systems, pages 19–27, 2011.
  • [4] Qifeng Qiao and Peter A Beling. Inverse Reinforcement Learning with gaussian Process. In American Control Conference (ACC), 2011, pages 113–118. IEEE, 2011.
  • [5] Pieter Abbeel and Andrew Y Ng. Apprenticeship Learning via Inverse Reinforcement Learning. In Proceedings of the twenty-first international conference on Machine learning, page 1. ACM, 2004.
  • [6] Deepak Ramachandran and Eyal Amir. Bayesian Inverse Reinforcement Learning. Urbana, 51(61801):1–4, 2007.
  • [7] Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, and Anind K Dey. Maximum Entropy Inverse Reinforcement Learning. In AAAI, volume 8, pages 1433–1438. Chicago, IL, USA, 2008.
  • [8] Niklas Wahlström, Manon Kok, Thomas B Schön, and Fredrik Gustafsson. Modeling Magnetic Fields Using Gaussian Processes. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 3522–3526. IEEE, 2013.
  • [9] Carl Jidling, Niklas Wahlström, Adrian Wills, and Thomas B Schön. Linearly constrained Gaussian processes. arXiv preprint arXiv:1703.00787, 2017.
  • [10] Jon S Horne, Edward O Garton, Stephen M Krone, and Jesse S Lewis. Analyzing animal movements using Brownian bridges. Ecology, 88(9):2354–2363, 2007.
  • [11] Peter N Laver and Marcella J Kelly. A Critical Review of Home Range Studies. Journal of Wildlife Management, 72(1):290–298, 2008.
  • [12] Andrew J Lyons, Wendy C Turner, and Wayne M Getz. Home range plus: a space-time characterization of movement over real landscapes. Movement Ecology, 1(1):2, 2013.
  • [13] Carl Edward Rasmussen. Gaussian Processes for Machine Learning. 2006.
  • [14] Stephen Roberts, M Osborne, M Ebden, Steven Reece, N Gibson, and S Aigrain. Gaussian processes for time-series modelling. Phil. Trans. R. Soc. A, 371(1984):20110550, 2013.
  • [15] Andrew McHutchon. Differentiating Gaussian Processes. http://mlg.eng.cam.ac.uk/mchutchon/DifferentiatingGPs.pdf, 2013.
  • [16] PR Brook, A Karastergiou, S Johnston, M Kerr, RM Shannon, and SJ Roberts. Emission-rotation correlation in pulsars: new discoveries with optimal techniques. Monthly Notices of the Royal Astronomical Society, 456(2):1374–1393, 2016.
  • [17] Tracy Holsclaw, Bruno Sansó, Herbert KH Lee, Katrin Heitmann, Salman Habib, David Higdon, and Ujjaini Alam. Gaussian Process Modeling of Derivative Curves. Technometrics, 55(1):57–67, 2013.
  • [18] Mauricio A Alvarez, Lorenzo Rosasco, Neil D Lawrence, et al. Kernels for Vector-Valued Functions: a Review. Foundations and Trends® in Machine Learning, 4(3):195–266, 2012.
  • [19] Solomon Kullback and Richard A Leibler. On Information and Sufficiency. The annals of mathematical statistics, 22(1):79–86, 1951.
  • [20] John Duchi. Derivations for Linear Algebra and Optimization. Berkeley, California, 2007.
  • [21] Kaare Brandt Petersen, Michael Syskind Pedersen, et al. The Matrix Cookbook. Technical University of Denmark, 7:15, 2008.
  • [22] Marion Valeix, Andrew J Loveridge, Zeke Davidson, Hillary Madzikanda, Hervé Fritz, and David W Macdonald. How key habitat features influence large terrestrial carnivore movements: waterholes and African lions in a semi-arid savanna of north-western Zimbabwe. Landscape Ecology, 25(3):337–351, 2010.
  • [23] Andreas Damianou and Neil Lawrence. Deep Gaussian Processes. In Artificial Intelligence and Statistics, pages 207–215, 2013.
  • [24] Agathe Girard, Carl Edward Rasmussen, Joaquin Quinonero Candela, and Roderick Murray-Smith. Gaussian Process Priors With Uncertain Inputs Application to Multiple-Step Ahead Time Series Forecasting. Advances in neural information processing systems, pages 545–552, 2003.
  • [25] Michalis K Titsias. Variational Learning of Inducing Variables in Sparse Gaussian Processes. In AISTATS, volume 5, pages 567–574, 2009.