Gaussian Processes for Analyzing Positioned Trajectories in Sports

07/05/2019 · Yuxin Zhao, et al. · Ericsson; The Chinese University of Hong Kong, Shenzhen

Kernel-based machine learning approaches are gaining increasing interest for exploring and modeling large datasets. The Gaussian process (GP) is one example of such kernel-based approaches and can provide very good performance for nonlinear modeling problems. In this work, we first propose a grey-box modeling approach to analyze the forces in cross country skiing races. To be more precise, a disciplined set of kinetic motion model formulae is combined with a data-driven Gaussian process regression model, which accounts for everything unknown in the system. Then, a modeling approach is proposed to analyze the kinetic flow of both individual skiers and clusters of skiers. The proposed approaches can be generally applied to use cases where positioned trajectories and kinetic measurements are available. The proposed approaches are evaluated using data collected from the Falun Nordic World Ski Championships 2015, in particular the Men's cross country 4×10 km relay. Forces during the cross country skiing races are analyzed and compared. Velocity models for skiers at different competition stages are also evaluated. Finally, comparisons between the grey-box and black-box approaches are carried out; the grey-box approach can reduce the predictive uncertainty by 30% to 40%.


1 Introduction

1.1 Background

Thanks to emerging positioning technologies, large amounts of positioned data are available and open up relevant research topics. Such data are usually analyzed to uncover different aspects hidden in the data, and machine learning is often used to explore data distributions, study dependencies between different attributes, model patterns and make predictions. One specific use case is in sports, where the positions and kinetic measurements of athletes can be recorded. Such data analytics is valuable for both coaches and the public audience, since it provides insights into the performance of athletes, which can further assist in planning attacks and defenses in matches, monitoring the physical condition of athletes, and so on.

One interesting direction is to use data analytics approaches to analyze the kinetics of athletes in a certain sport, which may include studies on the causes of motion, namely forces and torques, on human movements with respect to the amount of time taken to carry out an activity, and on possible interactions between the athletes and their equipment and environment. All of these studies belong to the sports biomechanics research area, which aims to prevent injuries and improve the performance of athletes. Numerous studies have been conducted on sports biomechanics. For instance, (Bartlett, 2007) provides a thorough introduction to sports biomechanics, including the geometry of sports movements, forces in sport, how they are related by the laws of kinetics, and how these relate to the human body in an anatomical way. In (Pohjola, 2014), the effectiveness of different forces in cross country skiing is evaluated by conducting real measurements. However, most research focuses on explicit analysis from physical and biological points of view, and so far data-driven approaches, such as machine learning, have not been fully explored.

Another interesting topic in data analytics is flow modeling and prediction. In the literature, various studies have arisen around this area, including human motion patterns (Ellis et al., 2009), traffic flow modeling and prediction (Xie et al., 2010), moving patterns of swarms of animals (Reece et al., 2011), and so on. Flow modeling and prediction can be done both on-line with real-time data, for instance streaming probe data (Herrin, 2010), and off-line with cached/stored data, for instance recorded data from surveillance cameras (Ellis et al., 2009). However, to the best of our knowledge, these have never been explored in a sports scenario, where a huge amount of positioned trajectories can be available.

Historically, the most widely used machine learning approaches include neural networks (Smith and Demetsky, 1994), support vector machines (SVM) (Zhang and Xie, 2008), and Gaussian processes (GP) (MacKay, 1997; Rasmussen and Williams, 2006). The GP is one important class of kernel-based learning methods. First, it is good at exploring the relationship between a set of variables given a training dataset. Second, the GP fits naturally into the Bayesian framework, which allows for an explicit probabilistic interpretation of model outputs. These advantages make the GP a powerful tool for addressing complex nonlinear regression and classification problems. However, the standard GP is computationally demanding when the dataset is large and its size grows with time. To remedy this drawback, a plethora of low-complexity GP algorithms have been proposed over the last decade. Representative solutions include (1) reduced-rank approximations of the covariance matrix (Rasmussen and Williams, 2006); (2) sparse representations of the complete training dataset (Quiñonero Candela and Rasmussen, 2005); (3) partitioning of the complete dataset into smaller subsets and fusion of all local GP experts (Shen et al., 2006; Deisenroth and Ng, 2015); (4) stochastic variational inference approximated GP (Titsias, 2009); and (5) recursive processing based GP, including a grid based algorithm (Huber, 2013, 2014) and a series of state-space model based algorithms (Särkkä et al., 2013). In this paper, we narrow our focus to the recursive processing based algorithms, as they are more attractive for on-line applications.

1.2 Contribution

In this work, we apply GP regression for modeling and prediction in a sports use case, but the proposed framework is generic for other use cases with positioned device trajectories and kinetic measurements, for instance, to model the temporal and/or spatial aspects of motions, speed, and data flows. We have used the data trajectories from the Falun Nordic World Ski Championships 2015, in particular the Men's cross country 4×10 km relay race. The contributions of this work can be summarized as follows:

  1. A grey-box modeling approach is proposed to perform force analysis. In a grey-box modeling approach, the internal working mechanism of the system is partially known. To be more specific, the force model in this work is formulated by combining the known deterministic motion kinetics with Gaussian process regression, which accounts for the unknown forces in skiing races. The model can be further used to investigate the performance of a specific skier and to study the differences between various skiing techniques.

  2. A black-box modeling approach is proposed to model the ground speed. In the black-box modeling approach, the internal working mechanism of the system is completely unknown. In this work, both the standard and a grid based on-line Gaussian process regression (Huber, 2013) are proposed to provide a model specifying the relationship between speed and position for each individual. For groups of individuals, clustering is performed based on a number of features extracted from the training data.

  3. Not limited to ski racing, the proposed approaches are also applicable to various sports activities, such as track and field, car racing, horse racing, and so on. They can be utilized both on-line for private coaching or public use with real-time data, and off-line for analysis with recorded data in batch.

1.3 Paper Organization and Notations

The remainder of this paper is organized as follows: Section 2 proposes an approach for force analysis in sports, which combines the kinetics of motion with Gaussian processes. Section 3 introduces a modeling algorithm for positioned user trajectories based on both standard and the grid based on-line GP. Section 4 introduces a novel strategy for clustering skiers based on selected features and the aggregated flow modeling for each cluster. Section 5 provides detailed descriptions of the dataset. Section 6 validates and compares the proposed algorithms in various scenarios with real data. Lastly, Section 7 concludes the work.

Throughout this paper, matrices are presented with uppercase letters and vectors with boldface lowercase letters. The operator $(\cdot)^{\mathsf T}$ stands for vector/matrix transpose and $(\cdot)^{-1}$ stands for the inverse of a non-singular square matrix. The operator $\|\cdot\|$ stands for the Euclidean norm of a vector. $\mathcal{N}(\mu, \sigma^2)$ denotes a Gaussian distribution with mean $\mu$ and variance $\sigma^2$.

2 Force Analysis for a Single Individual

The effective force applied by an athlete during a sports competition is complex to analyze. With global navigation satellite system (GNSS) trackers on the athlete, it is possible to obtain trajectories with time series of position and velocity estimates. Typically, the vertical position estimate from GNSS is uncertain. Instead, the horizontal position estimate can be used together with ground height information to estimate the vertical position. Based on kinetic relations, it is possible to model the athlete motion with the effective force as unknown. Hence, we propose to combine the information from kinetic models with Gaussian process regression to estimate the latent forces. To be more specific, we propose a generic way for analyzing forces of athletes at certain stages of the competition.

We begin by analyzing a cross country skiing scenario, which can be easily extended to other sports with similar moving patterns. In cross country skiing, the skier moves forward by applying forces through poles and skis. In this process, there is also friction from the ice surface and the resistance from air. In order to ease the analysis, we propose the simplified force models as illustrated in Fig. 1 and 2.

Figure 1: Force model for uphill.
Figure 2: Force model for downhill.

For uphill, there is a propulsive force $F_p$ going forward, while at the same time, $F_r$ in Fig. 1 denotes both the friction from ice and the resistance from air, which is opposite to the moving direction. The mass of the skier is denoted by $m$, $g$ is the gravity of Earth, and $\alpha$ is the track incline angle. In Fig. 2, the force model for downhill is illustrated, where usually there is no propulsive force, but only the air resistance and friction from ice (denoted by $F_r$ in Fig. 2). The skier instead makes use of gravity to move forward with a decline angle of $\beta$. The skier has to adjust his/her posture to reduce the air resistance. It should be noted that in real practice, the forces are more complex than what has been illustrated in Fig. 1 and 2.

Based on the force models, in what follows, we analyze the uphill and downhill scenarios, respectively. Since the forces change during different competition stages, the whole track is divided into small segments and we further assume that the moving direction, the propulsive force from a specific skier, the air resistance and the friction remain the same during each segment. In addition, the time for the skier to finish the segment is measured as $\Delta t$. Hence, according to Newton's law of motion, the change in velocity per time unit in an uphill segment can be formulated as

(1)   $m\,\frac{\Delta v}{\Delta t} = F_p - F_r - m g \sin\alpha,$

where the mass of the skier $m$ is usually known and the incline angle $\alpha$ can be computed since the slope of the track is known. The change in velocity per time unit is denoted by $\Delta v/\Delta t$. In modern races, the velocity and position of athletes can be measured at certain fixed time stamps. However, it is typically difficult to measure the propulsive force $F_p$, since it comes from both skis and poles. Furthermore, the air resistance and friction on ice, $F_r$, are almost impossible to measure, as they depend on many factors, such as the posture of the skier, the temperature of the ice, and the force from the skier that is perpendicular to the track surface. Considering such complexities, we propose to model the resultant force, which is $F_p - F_r$, by a Gaussian process, which is formulated as

(2)   $F_p - F_r = f(d) + e,$

where $d$ denotes the distance traveled from the beginning of the track, and it uniquely determines the position of the skier when the track is predefined. $e$ is additive Gaussian noise with zero mean and variance $\sigma_e^2$. Hence, the full kinetic model is given as

(3)   $m\,\frac{\Delta v}{\Delta t} = f(d) + e - m g \sin\alpha,$

which consists of both the known gravity and the unknown forces. The function $f(\cdot)$ follows a Gaussian process (Rasmussen and Williams, 2006)

(4)   $f(d) \sim \mathcal{GP}\big(\mu(d),\, k(d, d')\big),$

with mean function $\mu(d)$ (assumed to be zero in this work) and kernel function $k(d, d')$. In order to train the Gaussian process regression model, a training dataset is required, which is denoted by $\mathcal{D} = \{(d_i, y_i)\}_{i=1}^{n}$, where $y_i$ is constructed as $y_i = m\,\frac{\Delta v_i}{\Delta t_i} + m g \sin\alpha_i$ for $i = 1, \ldots, n$. The joint distribution of all observations $\mathbf{y} = [y_1, \ldots, y_n]^{\mathsf T}$ is given by

(5)   $\mathbf{y} \sim \mathcal{N}\big(\mathbf{0},\, K + \sigma_e^2 I\big),$

where $[K]_{ij} = k(d_i, d_j)$ and $I$ denotes the identity matrix.

The parameters can be estimated by maximizing the likelihood function given in (5). The detailed explanations are given in Appendix A. Given a new input location $d_*$ and the training set $\mathcal{D}$, the resultant force can be estimated by

(6)   $p\big(f(d_*)\,|\, d_*, \mathcal{D}\big) = \mathcal{N}\big(\mu_*,\, \sigma_*^2\big),$

where

(7a)   $\mu_* = \mathbf{k}_*^{\mathsf T}\,(K + \sigma_e^2 I)^{-1}\,\mathbf{y}$
(7b)   $\sigma_*^2 = k(d_*, d_*) - \mathbf{k}_*^{\mathsf T}\,(K + \sigma_e^2 I)^{-1}\,\mathbf{k}_*$

with $\mathbf{k}_* = [k(d_*, d_1), \ldots, k(d_*, d_n)]^{\mathsf T}$.

To analyze the downhill segment, as shown in Fig. 2, the following holds according to the law of motion:

(8)   $m\,\frac{\Delta v}{\Delta t} = m g \sin\beta - F_r,$

where $-F_r$ is modeled by

(9)   $-F_r = f(d) + e,$

and $f(d)$ follows a Gaussian process as given in (4). Then, we follow the same procedure applied to estimate the resultant force for the uphill case. Instead, the training observations are constructed as $y_i = m\,\frac{\Delta v_i}{\Delta t_i} - m g \sin\beta_i$.
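To make the grey-box procedure concrete, the following sketch builds the GP training targets from the uphill kinetics in (1)-(3) and evaluates the posterior in (6)-(7). It is a minimal illustration, not the authors' implementation; the SE kernel, its default hyperparameters, and the array layout are our own assumptions.

```python
import numpy as np

def se_kernel(a, b, sigma_f=50.0, ell=30.0):
    """Squared-exponential kernel on distance-on-track inputs (metres).
    sigma_f and ell are illustrative values, not the estimated ones."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return sigma_f**2 * np.exp(-0.5 * d2 / ell**2)

def fit_resultant_force(d, v, t, incline, mass, sigma_e=10.0, g=9.81):
    """Grey-box force estimation, cf. (1)-(7).

    d, v, t, incline : arrays of distance on track [m], speed [m/s],
                       time stamps [s] and incline angle [rad] per sample.
    Returns a predictor giving the posterior mean and variance of the
    resultant (propulsive minus resistive) force at new distances.
    """
    # Known kinetics (uphill): m * dv/dt + m * g * sin(alpha) = f(d) + e
    dv_dt = np.diff(v) / np.diff(t)
    y = mass * dv_dt + mass * g * np.sin(incline[:-1])  # GP targets
    x = d[:-1]                                          # GP inputs

    K = se_kernel(x, x) + sigma_e**2 * np.eye(len(x))
    alpha = np.linalg.solve(K, y)

    def predict(d_new):
        d_new = np.atleast_1d(d_new)
        k_star = se_kernel(d_new, x)
        mean = k_star @ alpha                           # cf. (7a)
        var = (se_kernel(d_new, d_new).diagonal()
               - np.einsum('ij,ji->i', k_star, np.linalg.solve(K, k_star.T)))  # cf. (7b)
        return mean, var

    return predict
```

For a downhill segment, only the construction of the training targets changes, mirroring (8)-(9).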

3 GP Based Flow Modeling and Prediction for a Single Individual

In the previous section, we have proposed a grey-box modeling approach for analyzing the forces in, for instance, skiing races. The grey-box modeling approach exploits the known kinetic model based on the physical laws and combines it with Gaussian process regression models, which are used to account for the unknown forces. It is also possible to model the input-output relationship by a black-box approach, where it is not necessary to know the internal working mechanism explicitly. In this section, we propose a black-box approach to analyze the relationship between the ground speed and the position of an individual skier.

Concretely, we estimate the ground speed, $v$, of a single skier at a specific position. Since the individual follows a predefined track in this scenario, the position of an individual at time $t$ can be uniquely translated into the distance traveled on the track since the start of the race, denoted by $d$ herein. With the definitions given above, the following flow model is formulated:

(10)   $v = f(d) + \varepsilon,$

where $f(d)$ is the underlying flow model and $\varepsilon$ is additive noise, which is assumed to be Gaussian distributed with zero mean and variance $\sigma_n^2$. The focus of this section is to use GP regression to infer the underlying flow model $f(d)$ in (10) and predict the ground speed value at any input $d$.

3.1 Standard Gaussian Process Regression

In this subsection, the standard Gaussian process (SGP) will be introduced and applied to the problem formulated above. Without specifying the time, the previously defined function $f(\cdot)$ can be approximated by a GP, which is given by

$f(d) \sim \mathcal{GP}\big(\mu(d),\, k(d, d')\big),$

where $\mu(d)$ is the mean function (we assume $\mu(d) = 0$ in this work) and $k(d, d')$ is the covariance/kernel function.

In the training phase, a dataset denoted as $\mathcal{D} = \{(d_i, v_i)\}_{i=1}^{N}$ is collected. Considering the additive noise, the joint distribution of the observed ground speeds measured at different distances is given by

$\mathbf{v} \sim \mathcal{N}\big(\mathbf{0},\, K + \sigma_n^2 I\big),$

where $\mathbf{v} = [v_1, \ldots, v_N]^{\mathsf T}$ and $K$ can be easily constructed as in (5). When a novel input $d_*$ arrives, we compute, according to (Rasmussen and Williams, 2006), the Gaussian posterior probability of a ground speed value $v_*$ at the new $d_*$ by

(11)   $p(v_* \,|\, d_*, \mathcal{D}) = \mathcal{N}\big(\mu_*,\, \sigma_*^2\big),$

where

(12a)   $\mu_* = \mathbf{k}_*^{\mathsf T}\,(K + \sigma_n^2 I)^{-1}\,\mathbf{v}$
(12b)   $\sigma_*^2 = k(d_*, d_*) + \sigma_n^2 - \mathbf{k}_*^{\mathsf T}\,(K + \sigma_n^2 I)^{-1}\,\mathbf{k}_*$

and $\mathbf{k}_* = [k(d_*, d_1), \ldots, k(d_*, d_N)]^{\mathsf T}$.
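The batch posterior in (11)-(12) can be computed in a few lines; the sketch below is a generic illustration (the variable names are ours), in which `kernel(a, b)` may be any covariance function, such as the locally periodic kernel introduced in Section 3.3.

```python
import numpy as np

def sgp_posterior(d_train, v_train, d_test, kernel, noise_var):
    """Closed-form SGP posterior of (11)-(12) for the speed-vs-distance model."""
    K = kernel(d_train, d_train) + noise_var * np.eye(len(d_train))
    L = np.linalg.cholesky(K)                     # O(N^3), done once per batch
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, v_train))
    K_star = kernel(d_test, d_train)              # cross-covariances (N*, N)
    mean = K_star @ alpha                         # posterior mean, cf. (12a)
    V = np.linalg.solve(L, K_star.T)
    var = (kernel(d_test, d_test).diagonal()
           - np.einsum('ij,ij->j', V, V) + noise_var)  # posterior variance, cf. (12b)
    return mean, var
```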

The SGP deals with the training data in a batch manner. The corresponding computational complexity scales as $\mathcal{O}(N^3)$ and the memory requirement scales as $\mathcal{O}(N^2)$. Next, we apply the grid based on-line Gaussian process (OGP) (Huber, 2013) to derive an on-line ground speed model.

3.2 Grid Based On-line Gaussian Process Regression

The notation, if not re-defined, will follow that used for the SGP. For simplicity and easier comparison with the SGP, we imagine that the training data arrive one by one in time, namely, we have a new data point $(d_t, v_t)$ at time instance $t$.

In the grid based on-line GP, a set of grid points $\bar{\mathbf{d}} = [\bar{d}_1, \ldots, \bar{d}_M]^{\mathsf T}$ is introduced to represent some predefined reference distances on the track. The corresponding "clean" ground speed values (without the additive white Gaussian noise) at these grid points are the latent variables $\bar{f}_1, \ldots, \bar{f}_M$. We denote $\bar{\mathbf{f}} = [\bar{f}_1, \ldots, \bar{f}_M]^{\mathsf T}$. For notational brevity in the sequel, $\mathcal{D}_t$ is short for the training data $\{(d_i, v_i)\}_{i=1}^{t}$ collected up to time $t$, and the mean and covariance matrix of the posterior $p(\bar{\mathbf{f}}\,|\,\mathcal{D}_t)$ are denoted by $\boldsymbol{\mu}_t$ and $C_t$, respectively. Our aim is to compute the posterior distribution of $\bar{\mathbf{f}}$ at any time instance $t$ ($t = 1, \ldots, N$) given the training data $\mathcal{D}_t$. The main steps of the grid based OGP (Huber, 2013) are summarized as follows:

  1. Initialization: Set the initial mean vector $\boldsymbol{\mu}_0$ and the covariance matrix $C_0$. Compute the inverse of $C_0$ and store it for later use. Here the prior mean is set to a vector of all zeros (of size $M$) and the prior covariance matrix is set to be

    (13)   $C_0 = \bar{K}, \qquad [\bar{K}]_{ij} = k(\bar{d}_i, \bar{d}_j).$
  2. Recursive Processing: For each $t = 1, \ldots, N$, do the following computations:

    (14a)   $\mathbf{k}_t = [k(d_t, \bar{d}_1), \ldots, k(d_t, \bar{d}_M)]^{\mathsf T}, \qquad \mathbf{J}_t = \bar{K}^{-1}\,\mathbf{k}_t$
    (14b)   $B_t = k(d_t, d_t) - \mathbf{k}_t^{\mathsf T}\,\mathbf{J}_t$
    (14c)   $\hat{v}_t = \mathbf{J}_t^{\mathsf T}\,\boldsymbol{\mu}_{t-1}$
    (14d)   $s_t = \mathbf{J}_t^{\mathsf T}\,C_{t-1}\,\mathbf{J}_t + B_t + \sigma_n^2$
    (14e)   $\boldsymbol{\mu}_t = \boldsymbol{\mu}_{t-1} + \frac{1}{s_t}\,C_{t-1}\,\mathbf{J}_t\,(v_t - \hat{v}_t)$
    (14f)   $C_t = C_{t-1} - \frac{1}{s_t}\,C_{t-1}\,\mathbf{J}_t\,\mathbf{J}_t^{\mathsf T}\,C_{t-1}$

    After the recursive processing through (14a)–(14f), we have

    (15)   $p(\bar{\mathbf{f}}\,|\,\mathcal{D}_t) \approx \mathcal{N}(\boldsymbol{\mu}_t,\, C_t).$
  3. Prediction: At the end of the training phase, namely $t = N$ assumed in this specific example, the posterior distribution of a noisy speed observation $v_*$ at a novel input position $d_*$, given $\mathcal{D}_N$ and the grid $\bar{\mathbf{d}}$, can be approximated by

    (16)   $p(v_*\,|\, d_*, \mathcal{D}_N) \approx \mathcal{N}\big(\mu_*,\, \sigma_*^2\big),$

    where

    (17a)   $\mu_* = \bar{\mathbf{k}}_*^{\mathsf T}\,\bar{K}^{-1}\,\boldsymbol{\mu}_N$
    (17b)   $\sigma_*^2 = k(d_*, d_*) - \bar{\mathbf{k}}_*^{\mathsf T}\,\bar{K}^{-1}\,\bar{\mathbf{k}}_* + \bar{\mathbf{k}}_*^{\mathsf T}\,\bar{K}^{-1}\,C_N\,\bar{K}^{-1}\,\bar{\mathbf{k}}_* + \sigma_n^2$

    Herein, $\bar{\mathbf{k}}_*$ is short for $[k(d_*, \bar{d}_1), \ldots, k(d_*, \bar{d}_M)]^{\mathsf T}$, and $\boldsymbol{\mu}_N$ and $C_N$ are short for the posterior mean and covariance of $\bar{\mathbf{f}}$ after processing all $N$ training points.

The detailed derivations of (14a)–(14f) can be found in (Huber, 2013) and the derivations of (17a) and (17b) are given in Appendix B. It is easy to verify that the computational complexity scales as $\mathcal{O}(M^3)$ for computing $\bar{K}^{-1}$ in the initialization step, and as $\mathcal{O}(M^2)$ for $\mathbf{J}_t$ and $s_t$ at any time instance in the recursive processing step. The computational complexity of the updates of $\boldsymbol{\mu}_t$ and $C_t$ also scales as $\mathcal{O}(M^2)$. The computational complexity for prediction in the third step scales as $\mathcal{O}(M^2)$. As compared to the SGP, the grid based OGP is able to reduce the overall computational complexity from $\mathcal{O}(N^3)$ to $\mathcal{O}(N M^2)$ with $M \ll N$. Moreover, when we have a new observation pair $(d_{N+1}, v_{N+1})$ at time $N+1$ after the training phase, only $\mathcal{O}(M^2)$ complexity is required to compute $\boldsymbol{\mu}_{N+1}$ and $C_{N+1}$, which is essential for on-line learning.
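The recursive-processing step can be implemented as a Kalman-type update of the grid posterior. The sketch below follows the structure summarized above with our own notation; it is an approximation in the spirit of the grid based OGP of (Huber, 2013), not a verbatim transcription of that paper's equations.

```python
import numpy as np

def ogp_update(mu_g, C_g, Kgg_inv, k_new, k_new_new, y_new, noise_var):
    """One recursive-processing step of a grid based on-line GP.

    mu_g, C_g  : current posterior mean (M,) and covariance (M, M) of the
                 latent speeds at the M grid distances
    Kgg_inv    : precomputed inverse of the prior grid covariance
    k_new      : kernel vector between the new distance and the grid (M,)
    k_new_new  : prior variance of the latent speed at the new distance
    y_new      : new noisy ground-speed observation
    """
    J = Kgg_inv @ k_new                       # projection onto the grid, O(M^2)
    resid_var = k_new_new - k_new @ J         # conditional variance given the grid
    y_hat = J @ mu_g                          # predicted observation
    s = J @ C_g @ J + resid_var + noise_var   # innovation variance
    gain = C_g @ J / s                        # Kalman-type gain
    mu_g = mu_g + gain * (y_new - y_hat)      # updated grid mean
    C_g = C_g - np.outer(gain, J @ C_g)       # updated grid covariance
    return mu_g, C_g
```

Each call costs $\mathcal{O}(M^2)$, matching the complexity discussion above, and prediction at a new distance reuses the same precomputed grid inverse as in (17a)-(17b).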

Apart from the reduced computational complexity, there are several other benefits of using the OGP as compared to the SGP. For instance, model fitting can be performed in parallel with measurement collection, and we can stop collecting more data once the posterior distribution of the ground speed at the predefined grid points has converged. To summarize, the OGP is more flexible to use and more adaptive to newly arriving data. On the other hand, if the underlying model is time invariant and the computational cost is secondary, the SGP using all available training data for both hyperparameter optimization and prediction will intuitively give the best modeling results.

3.3 Kernel Selection

The kernel function is a key component of a GP, as it encodes the assumptions about the function which we wish to learn. The kernel function reflects the similarity between data points (Rasmussen and Williams, 2006). In this subsection, the selection of different kernels will be discussed. One classic kernel function is the Squared Exponential (SE) kernel, defined by

$k_{\mathrm{SE}}(d, d') = \sigma_f^2 \exp\!\left(-\frac{(d - d')^2}{2\ell^2}\right),$

where $\sigma_f^2$ is the variance of the function and $\ell$ is the length scale, which determines how rapidly the function varies with $d$. The SE kernel is considered the most widely used kernel. However, it implies a stationary model which forbids structured extrapolation (Klenske et al., 2016). In some specific cases, this kernel function may show poor prediction performance, for instance, in sports races where there is a periodic pattern over laps. Considering this, it is more appropriate to adopt a periodic kernel which can reflect the similarities between different laps. However, strict periodicity is too rigid, because there may be some deviations in each lap (e.g., due to the strength loss of the individual, strategies used in the competition, etc.). Hence, we adopt a local periodic (LP) kernel, which is the product of an SE kernel and a periodic kernel:

(18)   $k_{\mathrm{LP}}(d, d') = \sigma_f^2 \exp\!\left(-\frac{(d - d')^2}{2\ell_{\mathrm{SE}}^2}\right)\exp\!\left(-\frac{2\sin^2\!\big(\pi (d - d')/p\big)}{\ell_p^2}\right),$

where $\ell_p$ is the length scale of the periodic kernel and $p$ is the period length. This kernel considers two inputs similar if they are similar under both the SE and the periodic kernels. If $\ell_{\mathrm{SE}} > p$, this allows encoding a decay in the covariance over several oscillations (Klenske et al., 2016). The key benefit of this kernel is that it outperforms the SE kernel in prediction as the distance from the data increases, as illustrated in (Klenske et al., 2016, Fig. 4).
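The LP kernel in (18) is straightforward to implement; in the sketch below, the defaults (a 2.5 km period matching the lap length, and the remaining hyperparameters) are illustrative placeholders to be replaced by the estimates of Section 3.4.

```python
import numpy as np

def lp_kernel(a, b, sigma_f=1.0, ell_se=2000.0, ell_p=1.0, period=2500.0):
    """Locally periodic kernel of (18): SE envelope times a periodic kernel.
    Inputs are distances on track in metres."""
    r = a[:, None] - b[None, :]
    se = np.exp(-0.5 * r**2 / ell_se**2)                                   # SE part
    per = np.exp(-2.0 * np.sin(np.pi * np.abs(r) / period)**2 / ell_p**2)  # periodic part
    return sigma_f**2 * se * per
```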

Lastly, we note that the LP kernel in (18) is not necessarily optimal in our application, but as will be shown in our evaluations it gives very good modeling and prediction results. Interested readers can refer to (Duvenaud et al., 2013; Wilson and Adams, 2013; Yin et al., 2018) for strategies for selecting an optimal kernel from the training data.

3.4 Hyperparameters Determination

Given the SGP model and the kernel in Sections 3.1 and 3.3, respectively, the hyperparameters to be calibrated are the kernel parameters in (18) together with the noise variance, collected in $\boldsymbol{\theta}$. The likelihood function of the observed ground speed with respect to the hyperparameters can be written as follows:

(19)   $p(\mathbf{v}\,|\,\boldsymbol{\theta}) = \mathcal{N}\big(\mathbf{v};\, \mathbf{0},\, K(\boldsymbol{\theta}) + \sigma_n^2 I\big).$

Here, the maximum-likelihood estimate (MLE), $\hat{\boldsymbol{\theta}}$, is derived. Details are given in Appendix A.
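A sketch of this MLE step (details in Appendix A): minimize the negative log marginal likelihood of (19) over log-transformed hyperparameters, here with SciPy's L-BFGS-B as a stand-in for the LBFGS method mentioned in Appendix A. The parameter packing and the `lp_kernel` function from the sketch in Section 3.3 are our own conventions.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_marginal_likelihood(log_theta, d, v, kernel):
    """Negative log marginal likelihood of (19); log_theta stacks the log
    kernel hyperparameters followed by the log noise variance."""
    *log_kern, log_noise = log_theta
    K = kernel(d, d, *np.exp(log_kern)) + np.exp(log_noise) * np.eye(len(d))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, v))
    return (0.5 * v @ alpha + np.sum(np.log(np.diag(L)))
            + 0.5 * len(d) * np.log(2.0 * np.pi))

# Example call (d_train, v_train are the training distances and speeds):
# theta0 = np.log([1.0, 2000.0, 1.0, 2500.0, 0.1])  # sigma_f, ell_se, ell_p, period, noise
# result = minimize(neg_log_marginal_likelihood, theta0,
#                   args=(d_train, v_train, lp_kernel), method='L-BFGS-B')
# theta_hat = np.exp(result.x)
```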

In the OGP, we assume that the hyperparameters are known before the recursive processing starts. This can be the case when some historical/expert knowledge is available, or a small subset of the training data can be used to train the hyperparameters as we did for the SGP. Huber demonstrated in (Huber, 2014) that these parameters can be learned on-line as well.

4 Aggregated Flow Modeling And Prediction for Multiple Individuals

In this section, we investigate aggregated flow modeling and prediction for multiple individuals that are clustered. The classic way of clustering data sequences is to extract some common features from the sequences and then run the K-means algorithm or the expectation-maximization (EM) algorithm (Bishop, 2006) based on some distance metric. Principal component analysis (Bishop, 2006) can be used to reduce the dimension of the feature space before running the K-means or EM algorithm. The drawback of these classic methods is that the number of clusters needs to be prescribed before clustering. A more sophisticated way is to combine the Dirichlet process with Gaussian processes in a Bayesian framework, which is capable of modeling an infinite mixture of stochastic processes (see for instance (Hensman et al., 2015)). One example of sequence clustering will be given in Section 6.3.

4.1 Flow Modeling and Prediction for Multiple Individuals

The data of all individuals in the same cluster are aggregated to form a new dataset, denoted as $\mathcal{D}_a = \{(d_i, v_i)\}_{i=1}^{N_a}$. The ground speed is modeled as a function of the distance on track plus some noise terms:

(20)   $v = f(d) + \varepsilon_c + \varepsilon_n,$

where $\varepsilon_n$ is additive white Gaussian noise with zero mean and variance $\sigma_n^2$, and $\varepsilon_c$ is additive correlated Gaussian noise with zero mean and variance $\sigma_c^2$. At two positions, namely $d$ and $d'$, $\varepsilon_c$ is assumed to be correlated, accounting for the interactive effects between individuals, and a kernel function $k_c(d, d')$ is selected for it. Compared to the flow model for an individual, the correlations between individuals in one cluster are important, since their performance may affect each other during the competition. Such effects can be considered as correlated noise and are thus modeled as given in (20). Let $f(d)$ be a GP with mean function $\mu(d)$ and kernel function $k(d, d')$; the joint distribution of the observations is then given by

$\mathbf{v} \sim \mathcal{N}\big(\mathbf{0},\, K + K_c + \sigma_n^2 I\big),$

where $[K]_{ij} = k(d_i, d_j)$, $[K_c]_{ij} = k_c(d_i, d_j)$, and $\mathbf{v} = [v_1, \ldots, v_{N_a}]^{\mathsf T}$. Correspondingly, the posterior probability of an observed ground speed value $v_*$ at a novel input $d_*$ is given by

(21)   $p(v_*\,|\, d_*, \mathcal{D}_a) = \mathcal{N}\big(\mu_*,\, \sigma_*^2\big),$

where

(22a)   $\mu_* = (\mathbf{k}_* + \mathbf{k}_{c,*})^{\mathsf T}\,(K + K_c + \sigma_n^2 I)^{-1}\,\mathbf{v}$
(22b)   $\sigma_*^2 = k(d_*, d_*) + k_c(d_*, d_*) + \sigma_n^2 - (\mathbf{k}_* + \mathbf{k}_{c,*})^{\mathsf T}\,(K + K_c + \sigma_n^2 I)^{-1}\,(\mathbf{k}_* + \mathbf{k}_{c,*})$

with $\mathbf{k}_* = [k(d_*, d_1), \ldots, k(d_*, d_{N_a})]^{\mathsf T}$ and $\mathbf{k}_{c,*} = [k_c(d_*, d_1), \ldots, k_c(d_*, d_{N_a})]^{\mathsf T}$.

Similarly, the MLE method is applied to train the hyperparameters of $k$, $k_c$ and the noise variance; more details are given in Appendix A.
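Under the model in (20), the only change relative to the single-individual SGP is the covariance of the aggregated observations; a minimal sketch (the function names are ours) is given below.

```python
import numpy as np

def aggregated_covariance(d, k_flow, k_corr, white_var):
    """Covariance of the aggregated speed observations in one cluster:
    shared flow kernel + correlated-noise kernel + white noise, cf. (20)."""
    return k_flow(d, d) + k_corr(d, d) + white_var * np.eye(len(d))
```

Prediction then proceeds exactly as in (21)-(22), with this matrix taking the place of $K + \sigma_n^2 I$.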

Figure 3: Map of two tracks.
Figure 4: Altitude maps for track 1 and 2.

5 Data Description

The data used in this work was gathered during the men's 4×10 km relay race in the 2015 Nordic World Ski Championships, Falun, Sweden. The dataset contains primarily the longitude, latitude, distance on track, ground speed and sampling time instances for the individuals from the national teams. The individuals who did not finish the competition have been excluded from the dataset. The data was collected regularly at a frequency of 1 Hz. The distance on track is calculated by TrackingMaster® from the longitude and latitude of a skier obtained from the global positioning system (GPS). TrackingMaster® is a software tool developed by Swiss Timing to receive raw GPS data from GPS modules and to convert the raw GPS data into data related to the course. The positioning uncertainty of GPS in a wide open outdoor environment can be as low as 5 meters and is hence ignored in our work. The individuals compete on track 1 for the first two relay legs, and on track 2 for the last two relay legs. The two tracks are illustrated in Fig. 3 with their coordinates expressed using the World Geodetic System (WGS). Each track is 2.5 kilometers in length. In each relay leg, an individual has to finish 10 kilometers on one track (i.e., 4 laps). The length of the data differs due to the varying finishing times between individuals. It should be noted that on track 1 the skiers apply the classic style, while on track 2 they use the skating style.

Altitude maps, see Fig. 4, are readily available before the race. For the individual force analysis, we apply the approach to the killer hill (highlighted with red color) and the steepest downhill segments (highlighted with green color). For the flow model of multiple individuals, the data of each individual is first segmented. Then, the data segments in the killer hill and steepest downhill segments are extracted. In the same relay leg, the data from each segment is aggregated over all individuals to be used for group clustering and flow modeling.

6 Results

6.1 Force Analysis

In this section, we investigate the relationship between performance and behavior of individual skiers. It is based on the force analysis introduced in Section 2. The force analysis is done for both the killer hill and the steepest downhill segments of two tracks.

First of all, the estimated forces on track 1, the killer hill segment, are plotted in Fig. 5 to 7, with each figure representing one skier. Fig. 8 to 10 depict the estimated forces at the steepest downhill segment. It is noted that for the steepest downhill segment, most parts are declining areas, while there are some small areas that are either inclining or rather flat. In addition, positive forces in the plots indicate resultant forces acting in alignment with the moving direction, while negative ones indicate resultant forces acting opposite to the moving direction. In total, we compare the forces on both the killer hill and the steepest downhill for three different skiers, namely individual A (best performance), B (competing with individual A) and C (fell behind).

Figure 5: Resultant forces: Individual A, killer hill, track 1
Figure 6: Resultant forces: Individual B, killer hill, track 1
Figure 7: Resultant forces: Individual C, killer hill, track 1
Figure 8: Resultant forces: Individual A, steepest downhill, track 1
Figure 9: Resultant forces: Individual B, steepest downhill, track 1
Figure 10: Resultant forces: Individual C, steepest downhill, track 1

From all the plots for track 1, where the classic style is applied, we have the following observations: (I) skiers A and B have stronger forces on average at the killer hill segment and hence outperform skier C; (II) larger forces/frictions are estimated at the sharper slopes of both the killer hill and the steepest downhill segments; (III) different strategies were applied, for instance, skier A has larger forces in laps 1 and 4, whereas skier B has larger forces in laps 2 and 3, and skier C has almost evenly distributed forces over all 4 laps. To further verify these observations, the distributions of the estimated resultant forces for the two segments are shown in Fig. 11 and 12. Due to space limitation, we only show the comparison between individuals A and B, who were competing with each other. It can be observed that for the killer hill, skiers A and B distribute their forces differently over the 4 laps. At the steepest downhill, individual B experiences larger negative forces, especially in laps 1, 2 and 4. It should be noted that individual B has a much larger weight than individual A, which may result in higher friction when declining.

Figure 11: CDF of resultant forces: killer hill, track 1
Figure 12: CDF of resultant forces: steepest downhill, track 1

Similarly, the estimated resultant forces are evaluated on track 2, illustrated in Fig. 13 to 18 for both a killer hill and a steepest downhill segment, for individuals D (best performance), E (competing with individual D) and F (fell behind), respectively. The distributions for individuals D and E are also compared for both segments in Fig. 19 and 20. It can be observed that for the killer hill, individual D has larger positive forces than E (especially in lap 4, where individual D overtook individual E). It is also noted that the larger the weight (e.g., individual D is heavier than E, and individual F is the lightest), the larger the friction the individual experiences when declining (see Fig. 16 to 18). Compared with the forces on track 1, where the classic style is applied, the forces on track 2 are much smaller, probably because a different skiing technique (i.e., the skating style) is used. It is also worth mentioning that for the steepest downhill segment on track 2, the resultant forces are spread almost uniformly over their range.

Figure 13: Resultant forces: Individual D, killer hill, track 2
Figure 14: Resultant forces: Individual E, killer hill, track 2
Figure 15: Resultant forces: Individual F, killer hill, track 2
Figure 16: Resultant forces: Individual D, steepest downhill, track 2
Figure 17: Resultant forces: Individual E, steepest downhill, track 2
Figure 18: Resultant forces: Individual F, steepest downhill, track 2
Figure 19: CDF of resultant forces: killer hill, track 2
Figure 20: CDF of resultant forces: steepest downhill, track 2

6.2 Individual Flow Model and Prediction

The ground speed versus distance on track for one specific individual is depicted in Fig. 21. The speed over the four laps shows a periodic pattern, while differences can also be observed between laps. For instance, better performance is observed in lap 4, when the individual sprints for the last lap. In order to evaluate the goodness of fit and compare the predictions made by different models, the data from the first three laps are used for training and the data from the last lap are used for validation. The results for the SGP and the OGP with the LP kernel are shown in Fig. 21 and 22, respectively. For the OGP in Fig. 22, the grid points are uniformly placed over the race distance, i.e., 10 kilometers.

Figure 21: Flow model (first 3 laps) and prediction (lap 4) for the SGP: LP kernel.
Figure 22: Flow model (first 3 laps) and prediction (lap 4) for the OGP: LP kernel.

From Fig. 21 and 22, we can see that both GPs provide a good fit for the training data, and both the SGP and the OGP give good prediction performance using the LP kernel. We can also see that the OGP, using only a small number of grid points, shows similar performance to the SGP trained on the complete dataset. For further comparisons between the two GPs, see for instance Zhao et al. (2016). The mean predictive standard deviation for the speed difference $\Delta v$ between two consecutive time instants is compared in Table 1, where the comparison is made for skier E in the killer hill segments of all 4 laps. For the black-box approach for an individual skier, we compute the predictive variances of the speed at two consecutive distances according to (12b). The predictive variance for the speed difference is then computed as the sum of these two predictive variances. The average standard deviation for the killer hill segment is then computed by taking the mean value over the killer hill time interval. For the grey-box approach, the predictive variance for the speed difference can be computed from the posterior variance of the resultant force and the force model given in (3), yielding

(23)   $\sigma_{\Delta v}^2 = \left(\frac{\Delta t}{m}\right)^{2}\big(\sigma_*^2 + \sigma_e^2\big),$ where $\sigma_*^2$ is the posterior variance of the resultant force given in (7b).

Similarly, the mean of the standard deviation over the killer hill segment is registered in Table 1. From the comparison, we observe a larger predictive standard deviation for the black-box approach. This may be due to the fact that in the black-box approach the model is completely unknown, which leads to larger ambiguity in the prediction.

Lap    Black-box (m/s)    Grey-box (m/s)
1      0.4148             0.3022
2      0.4116             0.2433
3      0.4204             0.3066
4      0.4069             0.2207

Table 1: Comparison between the black-box and grey-box approaches: mean predictive standard deviation for the speed difference.

6.3 Aggregated Flow Modeling (Multiple Individuals)

Clustering is performed first for the individuals on the same track in the same relay leg. Here we focus on the killer hill and steepest downhill segments, as shown in Fig. 4, where the individuals may perform differently. Clustering of the ground speed curves of two individuals is similar to clustering of line-of-sight and non-line-of-sight signal waveforms as described in (Wymeersch et al., 2012). The features considered in this work are: maximum, minimum, variance and mean value of the speed, energy, skewness and kurtosis. With the features extracted from the corresponding data segments, clustering is performed as described in (Zhao et al., 2016, Section V.B). The number of clusters is two in this evaluation.
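As an illustration of this step, the sketch below extracts the seven features listed above and groups the skiers with K-means (one of the classic options mentioned in Section 4); it is not necessarily the exact procedure of (Zhao et al., 2016, Section V.B).

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.cluster import KMeans

def speed_features(v):
    """Feature vector for one skier's speed curve on a segment."""
    return np.array([v.max(), v.min(), v.var(), v.mean(),
                     np.sum(v**2),          # energy
                     skew(v), kurtosis(v)])

def cluster_segments(speed_curves, n_clusters=2):
    """speed_curves: list of 1-D speed arrays, one per skier, for the same
    segment and lap. Returns a cluster label per skier."""
    X = np.vstack([speed_features(v) for v in speed_curves])
    X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize the features
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
```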

Figure 23: Flow models of individuals in lap 1, track 1: killer hill.
Figure 24: Flow models of individuals in lap 4, track 1: killer hill.
Figure 25: Flow models of individuals in lap 1, track 1: steepest downhill.
Figure 26: Flow models of individuals in lap 4, track 1: steepest downhill.

After clustering, the data of individuals in the same cluster are aggregated and the GP model is applied to the aggregated data as proposed in Section 4.1. The flow models on track 1 are illustrated in Fig. 23 to 26. Due to space limitation, we only show the results for laps 1 and 4. Besides, the segments for individuals who perform worse (i.e., individual A) and better (i.e., individual B) in the whole competition are also plotted. In the killer hill, there is no significant difference between the two clusters in lap 1. However, the differences between the two clusters become more distinct in lap 4. This is reasonable, since at the beginning of the race all individuals may move in one cluster with similar speed, whereas in the final stage the difference is larger, since some individuals may sprint and some may fall behind due to exhaustion. In the steepest downhill, the two clusters perform quite similarly in all laps, so that one cluster is enough to model all individuals. In addition, the variance of the model becomes larger from lap 1 to lap 4. This indicates that in the steepest downhill segment all individuals have quite uniform speed at the beginning of the race; as the race progresses, the speed of different skiers varies more.

Besides, it is observed that individuals who outperform the others in the killer hill have better final results in the whole competition (e.g., individual B, especially in lap 4, has outperformed the others in its cluster). The individuals who perform worse in the killer hill have worse final results (e.g., individual A, especially in lap 4, has much worse performance than the others in its cluster). This indicates that the performance in the killer hill is a more crucial factor than the steepest downhill in determining the final results.

Fig. 27 and 28 show a comparison between the flow models for laps 1 and 4 in both the killer hill and the steepest downhill segments. In the killer hill, all individuals maintain almost the same speed in lap 4 as in lap 1. However, for the steepest downhill, the average speed of all individuals differs between the two laps. This is probably due to different track conditions (e.g., weather conditions) in the two laps: in one of the laps there may be more protrusions on the track (i.e., the track is less smooth), so the performance on the steepest downhill is greatly affected, while for the killer hill the performance is mainly determined by the slope of the track.

Figure 27: Flow models of individuals in laps 1 and 4, track 1: killer hill.
Figure 28: Flow models of individuals in laps 1 and 4, track 1: steepest downhill.

6.4 Discussions on Grey-box and Black-box Modeling

So far we have shown the results for both the grey-box and the black-box modeling approaches. The grey-box approach exploits the partially known physical model, and the unknown part of the model is represented by a Gaussian process. In the black-box modeling approach, the model is treated as a random function and is trained solely from the training inputs and outputs. We further compute and compare the predictive variance of the ground speed difference for both approaches. The grey-box approach yields a smaller predictive variance on average, because part of the model is deterministically known and the ambiguity in the model is therefore reduced. In the black-box approach, the model is completely unknown and random, which leads to larger uncertainty in prediction.

7 Conclusions

In this work, we have proposed a grey-box modeling approach for force analysis in skiing races. By analyzing the forces of different skiers, we conclude that they apply different strategies over multiple laps. For instance, skier A exerts larger forces in laps 1 and 4, whereas skier B exerts larger forces in laps 2 and 3, and skier C distributes the forces almost evenly over all 4 laps. Skiers with better performance are good at maintaining propulsive force in inclining areas, while the declining performance is mainly determined by the friction on ice and the air resistance, and larger weights lead to larger negative forces when declining.

Besides, a black-box modeling approach using Gaussian processes has been proposed for flow modeling. Both the standard GP and a grid based on-line GP with the local periodic kernel prove to be powerful in modeling and predicting the performance of individuals. In particular, the grid based on-line GP greatly reduces the computational complexity while maintaining similar performance. Moreover, the on-line GP is more appropriate for real-time analytics where data arrive sequentially. Furthermore, clustering of individuals has been performed based on the killer hill and steepest downhill segments, and aggregated flow models for clusters of individuals have been developed. The results reveal that the individuals may behave differently in the killer hill, while following a similar flow model in the steepest downhill.

Finally, by comparing the two approaches, the grey-box approach is preferred when the real physical model is partially known, since it leads to a reduced predictive variance. The black-box modeling approach, compared with the grey-box approach, is simpler and suitable when the relationship between the input and output is not explicitly known.

Appendix A

A.1 Hyperparameters for Force Analysis

The ML estimates of the hyperparameters $\boldsymbol{\theta}$ of the GP in (4) can be obtained by maximizing the likelihood function, cf. (5), with respect to $\boldsymbol{\theta}$, which is equivalent to

(24)   $\hat{\boldsymbol{\theta}} = \arg\min_{\boldsymbol{\theta}}\; \Big\{ \log\det C(\boldsymbol{\theta}) + \mathbf{y}^{\mathsf T} C^{-1}(\boldsymbol{\theta})\,\mathbf{y} \Big\},$

where $C(\boldsymbol{\theta}) = K(\boldsymbol{\theta}) + \sigma_e^2 I$ is the covariance matrix of the observations. Various existing numerical methods can be adopted to solve this minimization problem, such as the limited-memory BFGS (LBFGS) quasi-Newton method (Rasmussen and Williams, 2006) and the conjugate gradient (CG) method. In this work, the former is adopted, which requires the first-order derivatives of the likelihood function. The first-order derivative of the cost function in (24) with respect to each hyperparameter $\theta_i$ can be given as

(25)   $\frac{\partial}{\partial \theta_i}\Big\{\log\det C + \mathbf{y}^{\mathsf T} C^{-1}\mathbf{y}\Big\} = \operatorname{tr}\!\left[\left(C^{-1} - C^{-1}\mathbf{y}\,\mathbf{y}^{\mathsf T} C^{-1}\right)\frac{\partial C}{\partial \theta_i}\right],$

where $\frac{\partial C}{\partial \theta_i}$ denotes the element-wise derivative of the covariance matrix with respect to $\theta_i$. Here we use $C$ to denote $C(\boldsymbol{\theta})$ for brevity.

A.2 Hyperparameters for Individual Model

The maximum-likelihood estimate of the GPR model hyperparameters $\boldsymbol{\theta}$ defined in Section 3.4 can be obtained by maximizing the likelihood function (19) similarly. The first-order derivatives have the same form as in (25), except that the covariance matrix is now $C(\boldsymbol{\theta}) = K(\boldsymbol{\theta}) + \sigma_n^2 I$, with $K$ built from the LP kernel in (18), and $\mathbf{y}$ is replaced by the observed ground speeds $\mathbf{v}$.

A.3 Hyperparameters for Aggregated Model

Similarly, the ML estimate of the hyperparameters of the GP model for multiple skiers can be obtained by maximizing the corresponding likelihood function in Section 4.1. The first-order derivatives of the cost function have a similar format to that given in (25), except that the covariance matrix is now $C = K + K_c + \sigma_n^2 I$.

Appendix B

Imagine that the grid points $\bar{\mathbf{d}}$ together with the latent values $\bar{\mathbf{f}}$ form a training dataset, despite that $\bar{\mathbf{f}}$ is latent. Given a novel input $d_*$, the posterior distribution of observing a noisy $v_*$, given $\bar{\mathbf{f}}$, can be easily obtained as follows:

(27)   $p(v_*\,|\, d_*, \bar{\mathbf{d}}, \bar{\mathbf{f}}) = \mathcal{N}\big(\tilde{\mu}_*,\, \tilde{\sigma}_*^2\big),$

where

(28a)   $\tilde{\mu}_* = \bar{\mathbf{k}}_*^{\mathsf T}\,\bar{K}^{-1}\,\bar{\mathbf{f}}$
(28b)   $\tilde{\sigma}_*^2 = k(d_*, d_*) - \bar{\mathbf{k}}_*^{\mathsf T}\,\bar{K}^{-1}\,\bar{\mathbf{k}}_* + \sigma_n^2$

The posterior distribution of $v_*$, given $\mathcal{D}_N$ and $d_*$, can be computed analytically via the following marginalization:

$p(v_*\,|\, d_*, \mathcal{D}_N) = \int p(v_*\,|\, d_*, \bar{\mathbf{d}}, \bar{\mathbf{f}}, \mathcal{D}_N)\, p(\bar{\mathbf{f}}\,|\,\mathcal{D}_N)\, \mathrm{d}\bar{\mathbf{f}},$

and approximated with reduced computational complexity, as in (Snelson and Ghahramani, 2006), by

$p(v_*\,|\, d_*, \mathcal{D}_N) \approx \int p(v_*\,|\, d_*, \bar{\mathbf{d}}, \bar{\mathbf{f}})\, p(\bar{\mathbf{f}}\,|\,\mathcal{D}_N)\, \mathrm{d}\bar{\mathbf{f}}.$

Since both $p(v_*\,|\, d_*, \bar{\mathbf{d}}, \bar{\mathbf{f}})$ and $p(\bar{\mathbf{f}}\,|\,\mathcal{D}_N)$ are Gaussian distributed, applying Lemma A.1 in (Särkkä, 2013) eventually yields (17a) and (17b).

Acknowledgements.
This work is funded by European Union FP7 Marie Curie training programme on Tracking in Complex Sensor Systems (TRAX) with grant number 607400 from 2014 to 2017. This work is also funded by ELLIT project, which is a strategic research environment funded by the Swedish government in 2010. Feng Yin is mainly funded by Shenzhen Science and Technology Innovation Council under Grant JCYJ20170307155957688, Guangdong Province Pearl River Talent Team under Grant 2017ZT07X152, and partly by the Shenzhen Fundamental Research Fund under Grant (Key Lab) ZDSYS201707251409055.

References

  • Bartlett [2007] R. Bartlett. Introduction to Sports Biomechanics: Analysing Human Movement Patterns. London: Routledge, 2007.
  • Bishop [2006] C. M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2006. ISBN 0387310738.
  • Deisenroth and Ng [2015] M. P. Deisenroth and J. W. Ng. Distributed Gaussian processes. arXiv preprint arXiv:1502.02843, 2015.
  • Duvenaud et al. [2013] D. Duvenaud, J. R. Lloyd, R. Grosse, J. B. Tenenbaum, and Z. Ghahramani. Structure discovery in nonparametric regression through compositional kernel search. arXiv:1302.4922v4, 2013.
  • Ellis et al. [2009] D. Ellis, E. Sommerlade, and I. Reid. Modelling pedestrian trajectory patterns with Gaussian processes. In Proc. of IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops), pages 1229–1234, Sep. 2009. doi: 10.1109/ICCVW.2009.5457470.
  • Hensman et al. [2015] J. Hensman, M. Rattray, and N. D. Lawrence. Fast nonparametric clustering of structured time-series. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37:383–393, 02 2015.
  • Herrin [2010] R. J. Herrin. Real-Time Traffic Modeling and Estimation with Streaming Probe Data using Machine Learning. PhD thesis, UC Berkeley: Industrial Engineering & Operations Research, 2010.
  • Huber [2013] M. Huber. Recursive Gaussian process regression. In Proc. Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), pages 3362–3366, May 2013.
  • Huber [2014] M. Huber. Recursive Gaussian process: On-line regression and learning. Pattern Recognition Letters, 45:85–91, 2014. ISSN 0167-8655.
  • Klenske et al. [2016] E.D. Klenske, M.N. Zeilinger, B. Scholkopf, and P. Hennig. Gaussian process-based predictive control for periodic error correction. IEEE Transactions on Control Systems Technology, 24(1):110–121, Jan 2016. ISSN 1063-6536. doi: 10.1109/TCST.2015.2420629.
  • MacKay [1997] D. J. C. MacKay. Gaussian processes – a replacement for supervised neural networks?, 1997.
  • Pohjola [2014] M. Pohjola. Analysing effectiveness of force application in ski skating using force and motion capture data : a model to support cross-country skiing research and coaching. Master’s thesis, University of Jyväskylä, 2014.
  • Quiñonero Candela and Rasmussen [2005] J. Quiñonero Candela and C. E. Rasmussen. A unifying view of sparse approximate Gaussian process regression. J. Mach. Learn. Res., 6:1939–1959, Dec. 2005. ISSN 1532-4435.
  • Rasmussen and Williams [2006] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA, USA, 2006.
  • Reece et al. [2011] S. Reece, R. Mann, I. Rezek, and S. Roberts. Gaussian process segmentation of co-moving animals. Proc. AIP Conference, 1305(1):430–437, 2011.
  • Särkkä [2013] S. Särkkä. Bayesian Filtering and Smoothing. Cambridge University Press., 2013.
  • Särkkä et al. [2013] S. Särkkä, A. Solin, and J. Hartikainen. Spatiotemporal learning via infinite-dimensional Bayesian filtering and smoothing: A look at Gaussian process regression through Kalman filtering. IEEE Signal Processing Magazine, 30(4):51–61, July 2013. ISSN 1053-5888. doi: 10.1109/MSP.2013.2246292.
  • Shen et al. [2006] Y. Shen, A. Y. Ng, and M. Seeger. Fast Gaussian process regression using KD-trees. In Proc. Advances in Neural Information Processing Systems. MIT Press, 2006.
  • Smith and Demetsky [1994] B. L. Smith and M. J. Demetsky. Short-term traffic flow prediction: neural network approach. Transportation Research Record, 1453, 1994.
  • Snelson and Ghahramani [2006] E. Snelson and Z. Ghahramani. Sparse Gaussian processes using pseudo-inputs. In Proc. Int. Conf. Advances in Neural Information Processing Systems, pages 1257–1264. MIT press, 2006.
  • Titsias [2009] M. K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. In Proc. Artificial Intelligence and Statistics, pages 567–574, 2009.
  • Wilson and Adams [2013] A.G. Wilson and R. P. Adams. Gaussian process kernels for pattern discovery and extrapolation. In Proceedings of International Conference on Machine Learning, pages 1067–1075, Atlanta, USA, 2013.
  • Wymeersch et al. [2012] H. Wymeersch, S. Marano, W.M. Gifford, and M.Z. Win. A machine learning approach to ranging error mitigation for UWB localization. IEEE Transactions on Communications, 60(6):1719–1728, June 2012.
  • Xie et al. [2010] Y. Xie, K. Zhao, Y. Sun, and D. Chen. Gaussian processes for short-term traffic volume forecasting. Transportation Research Record: Journal of the Transportation Research Board, 2165:69–78, 2010.
  • Yin et al. [2018] F. Yin, X. He, L. Pan, T. Chen, Z.-Q. Luo, and S. Theodoridis. Sparse structure enabled grid spectral mixture kernel for temporal Gaussian process regression. In Proceedings of International Conference on Information Fusion, pages 47–54, Cambridge, UK, July 2018.
  • Zhang and Xie [2008] Y. Zhang and Y. Xie. Forecasting of short-term freeway volume with v-support vector machines. Transportation Research Record: Journal of the Transportation Research Board, 2024(1):92–99, 2008.
  • Zhao et al. [2016] Y. Zhao, F. Yin, F. Gunnarsson, F. Hultkratz, and J. Fagerlind. Gaussian processes for flow modeling and prediction of positioned trajectories evaluated with sports data. In Proc. 19th International Conference on Information Fusion (FUSION), pages 1461–1468, July 2016.