Identifying cross country skiing techniques using power meters in ski poles

04/23/2019 · by Moa Johansson, et al. · Chalmers University of Technology

Power meters are becoming a widely used tool for measuring training and racing effort in cycling, and are now spreading also to other sports. This means that increasing volumes of data can be collected from athletes, with the aim of helping coaches and athletes analyse and understand training load, racing efforts, technique etc. In this project, we have collaborated with Skisens AB, a company producing handles for cross country ski poles equipped with power meters. We have conducted a pilot study in the use of machine learning techniques on data from Skisens poles to identify which "gear" a skier is using (double poling or gears 2-4 in skating), based only on the sensor data from the ski poles. The dataset for this pilot study contained labelled time-series data from three individual skiers using four different gears, recorded in varied locations and varied terrain. We systematically evaluated a number of machine learning techniques based on neural networks, with the best results obtained by an LSTM network (95% accuracy on single strokes), when a subset of data from all three skiers was used for training. As expected, accuracy dropped to 78% when the model was trained on data from two skiers and tested on the third. To achieve better generalisation to individuals not appearing in the training set, more data is required, which is ongoing work.


1 Introduction

In a professional cross country ski race, as in many other sports, the first thing the athletes do after crossing the finish line is often to switch off their smart sports-watch. Why?

The development of a wide range of sensors and products such as GPS sensors, heart-rate monitors, motion sensors and power sensors has made it possible to record a vast amount of data from athletes, providing a rich source of information to help coaches and athletes measure, analyse and understand training load, racing efforts and technique. Sports like cycling have led the way among the endurance sports, as it is relatively easy to equip a bicycle with various sensors, for instance, to accurately measure the power in each pedal stroke. Using power meters to steer training effort has become common not only for professional cyclists and coaches, but also for more ambitious recreational riders [1]. Given the relative ease at which large volumes of data can be recorded from sensors, we believe that machine learning has the potential to provide valuable tools for assisting data analysis in sports.

In this project, we have collaborated with Skisens AB, a spin-off company from Chalmers University of Technology, which produces a power meter for cross-country skiing, mounted inside the handle of the pole. Unlike cycling where all power comes from the legs via the pedals, in skiing the proportion of power measured in the poles depends on skiing technique. Broadly speaking, the skiing techniques may be divided into classical style and freestyle, each regulated by rules in competition. Furthermore, the two styles can each be broken down into several sub-techniques. The most effective sub-technique will depend on the terrain, the snow conditions and the individual strengths of the skier (we give a brief introduction to cross-country skiing techniques in section 2). In order for an athlete and/or coach to accurately analyse the effort based on data recorded from a race it is therefore valuable to be able to get an automated classification of which technique was used where during the race. This work focuses on free-style technique, however, the methods may be applied also to classical style.

We use a dataset provided by Skisens, containing data from three skiers using Skisens handles while roller-skiing using different techniques in varied terrain. The dataset and data pre-processing are described in section 3. We have evaluated three frequently used kinds of deep neural network classifiers on this dataset: a convolutional neural network (CNN), a Long Short-Term Memory (LSTM) network [4], and finally a bi-directional LSTM (BLSTM) model [2], described in more detail in section 4. The set-up of our study is inspired by Hammerla et al. [3], who experimented thoroughly with these kinds of deep neural networks to classify a variety of human movements using data from wearable sensors (e.g. household activities and physical exercise, as well as gait abnormalities arising in Parkinson's disease). We have experimentally evaluated the models in two experiments (see section 5): the first used a subset of data from all skiers for training, in which the LSTM model reached the best accuracy (95% on unseen test data); in the second, this model was trained on data from two skiers and evaluated on the third. As expected, the accuracy of the LSTM model then dropped to 78% on unseen test data.

There have been several previous works aiming at classifying cross-country skiing technique using a variety of sensors. Marshland et al. equipped cross-country skiers with a sensor unit attached to the skier's back, and observed that there were sufficient regularities in the sensor data to motivate the development of algorithmic techniques for technique identification [7]. This has been followed by several studies using different combinations of sensors and machine learning techniques, with promising results. Stöggl et al. used accelerometer data from a mobile phone attached to a belt around the chest of the skier and a Markov chain model to classify strokes [5, 12]. When trained and tested on the same individuals, their algorithm reached an accuracy of 90.3% ± 4.1%, which dropped to 86.0% ± 8.9% when trained on collective data. Rindal et al. used wearable inertial measurement units (IMUs) attached to the skiers' arms and chest, together with gyroscopes attached to the skiers' arms, to classify classical skiing techniques [9]. The gyroscopes helped identify each stroke cycle, and the IMU data was used to train a neural network classifier reaching an accuracy of 93.9%. Sakurai et al. also used data from several IMUs attached to the skis and poles to construct a decision tree classifier for both classical and skating techniques [10, 11]. Recently, Jang et al. conducted a study using wearable gyroscope sensors to identify both classical and skating techniques with a deep machine learning model combining CNN and LSTM layers [6]. The best results were obtained with sensors attached to both hands, both feet and the pelvis, which reached an accuracy of 80% when two skiers were used for training and an unseen third for testing, rising to between 87.2% and 95.1% (depending on terrain) when three skiers were used for training and a fourth unseen one for testing.

The main difference between our work and the above ski technique classifiers is that we do not use any dedicated wearable sensors for the task, but simply explore whether we can identify technique using only the sensors already present in the Skisens pole for measuring power. Our sensor data records only the movements of the hands, and does not include any sensors on the body or on the skis, which would make the task easier. Nevertheless, we reach comparable or better accuracy. A further advantage of using deep neural networks is that they do not require hand-crafted features to be passed to the model.

2 Background: Cross country skiing techniques

In cross country skiing, several different sub-techniques can be used by the skier, with each technique corresponding to a different motion pattern. The most commonly used skiing techniques are divided into two subgroups: classical style and freestyle. In this work we focused on four freestyle techniques: double poling (which may also be used in classical style races), and three skating techniques referred to as Gear 2, Gear 3 and Gear 4, following the notation in [8] (we note that the notation varies between different countries; these techniques are sometimes also referred to as V1, V2 and V2a, see [8] for a discussion), illustrated in figures 1-4. There is also a Gear 1, which is rarely used in practice except in extremely steep terrain, and a Gear 5 which only uses the legs and no poling. These styles were not included in this study.

Figure 1: Double Poling

In double poling, as illustrated in figure 1, the skier mostly uses the upper body, moving the arms in parallel. In classical style racing, double poling is the fastest gear, primarily used in horizontal or gentle downhill terrain, when the velocity is already high and the skier does not need to use the legs. In freestyle racing, double poling is not much used, except under special conditions: when there is little space to use the legs in a mass-start race, or if the snow is very icy, making it difficult to use the legs.

Figure 2: Gear 2 skating, leading with right hand.
Figure 3: Gear 3 skating

In skating Gear 2 the motion pattern of the skier is asymmetric, with the skier leading with one arm (see figure 2), performing one double pole push for every second leg push. The skier may alternate which arm is leading. Gear 2 is mostly used in uphill or horizontal terrain when the friction is high. Gear 3 is characterised by the skier performing one double pole push for each leg push (see figure 3). Gear 3 is mostly used in the transition between uphill and downhill, or in horizontal terrain when the skier wants to accelerate to higher speed. The last skating style considered for this work is Gear 4 (see figure 4), which has the same relationship between arms and legs as Gear 2; however, the techniques differ in how the poling is performed: in Gear 4 the skier poles symmetrically with respect to each side. Gear 4 is mostly used in horizontal terrain when the snow-ski friction is low. We note that double poling and Gear 3 have arm-motion patterns that look considerably alike. This raised the question of whether separating these two sub-techniques would be more difficult based on arm movements only.

Figure 4: Gear 4 skating

3 The Dataset

The dataset, provided by Skisens AB, consists of data from three individuals (male, experienced recreational skiers) using Skisens ski pole handles with sensors. The data was collected on roller skis on different days, in varied terrain and under varied conditions. There were both uphill and downhill sections as well as turns. Each skier used the three different skating styles (Gear 2, Gear 3 and Gear 4) plus double poling. For each gear there are a number of disjoint data segments, where each segment is a continuous time series during which the skier only uses the specified style. The data collected is summarised in Table 1. Data was recorded at 50 Hz (50 samples per second); hence when we refer to time steps, these are data points recorded 0.02 seconds apart. After pre-processing the raw data (see section 3.1), we extracted a dataset containing 1671 individual strokes (252 strokes in Gear 2, 473 in Gear 3, 360 in Gear 4 and 585 strokes using double poling).

No.     Data                                  Unit
1       Time                                  seconds
2       Force in the left pole                Newton
3       Pole-ground angle of the left pole    degrees
4-6     Left angular velocity                 rad/s
7-9     Left acceleration                     m/s²
10      Force in the right pole               Newton
11      Pole-ground angle of the right pole   degrees
12-14   Right angular velocity                rad/s
15-17   Right acceleration                    m/s²

Table 1: Description of the dataset columns used for machine learning. The coordinate system for the vectors of acceleration and angular velocity is relative to the pole, with (a) first axis: pointing right (orthogonal to the pole), (b) second axis: pointing down (parallel to the pole), and (c) third axis: pointing forward (orthogonal to the pole).

We remark that the recorded data also included the GPS position of the skier, but we chose not to include this information as a feature, as the different techniques had naturally been used on distinct road segments (some techniques are more natural to use, e.g., in uphill terrain). If GPS position were included, the models would end up basing their predictions primarily on it, ignoring the other features, which would lead to poor performance on unseen data recorded in a different location.

3.1 Data pre-processing

To prepare the dataset for machine learning, we applied the pre-processing steps described below. First, the data was smoothed to reduce short-term random variation and irregular noise. Secondly, as the data originally formed long time series during which different techniques were used, we split them into shorter segments containing one stroke each, with the objective to learn the label for each such segment. Each one-stroke segment was defined to include the time sequence from the moment when the skier lifts the pole, followed by the next ground contact phase, until the skier lifts the pole in the air again. The splitting was implemented by iterating over the entire time series and splitting whenever the force magnitude changes from larger than a threshold to smaller than it (the threshold value was chosen by inspection of the data). Naturally, not all strokes are of the same length time-wise, hence to make all samples the same length (fitting the input to the classifier), each stroke sequence was (if needed) zero-padded to the fixed length of 140 time steps. As shown in table 1, there are 16 data values recorded for each time step. Hence, each stroke is represented by a matrix of size 140 x 16.
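The segmentation and padding steps can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; the threshold value and function names are hypothetical (the paper chose its threshold by inspecting the data):

```python
import numpy as np

def split_into_strokes(force, data, threshold=10.0, max_len=140):
    """Split a multi-channel time series into zero-padded single strokes.

    force: 1-D array of pole force (N); data: (T, 16) array of all channels.
    A stroke boundary is placed where the force magnitude drops below
    `threshold` after having exceeded it (the pole leaving the ground).
    The threshold of 10 N is purely illustrative.
    """
    above = force > threshold
    strokes, start = [], 0
    for t in range(1, len(force)):
        if above[t - 1] and not above[t]:  # force falls below the threshold
            strokes.append(data[start:t])
            start = t
    # Zero-pad (or truncate) each stroke to a fixed length of max_len steps,
    # so every sample is a 140 x 16 matrix as described above.
    padded = np.zeros((len(strokes), max_len, data.shape[1]))
    for i, stroke in enumerate(strokes):
        n = min(len(stroke), max_len)
        padded[i, :n] = stroke[:n]
    return padded
```

On a recording with two ground-contact phases, this yields two padded stroke matrices, each of shape (140, 16).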

For efficient implementation of a machine learning algorithm, the categories for the skiing techniques (double poling, gears 2-4) are represented in numerical form using one-hot encoding, where a new binary variable is added for each of the four categories.
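The encoding can be sketched as follows (a minimal NumPy illustration; the label names and their ordering are our assumptions, not taken from the dataset):

```python
import numpy as np

# Order of the four gear categories is illustrative.
GEARS = ["double_poling", "gear2", "gear3", "gear4"]

def one_hot(labels):
    """Encode a list of gear names as an (N, 4) one-hot matrix."""
    idx = np.array([GEARS.index(g) for g in labels])
    encoded = np.zeros((len(labels), len(GEARS)))
    encoded[np.arange(len(labels)), idx] = 1.0
    return encoded
```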

4 Machine learning models

We experimented with three different types of deep machine learning models for stroke classification: a long short-term memory network (LSTM) [4], a bidirectional long short-term memory network (BLSTM) [2], and a one-dimensional convolutional neural network (CNN). The models were implemented in Python using the Keras/TensorFlow libraries (https://www.tensorflow.org/guide/keras). The code is available online at https://github.com/moajohansson/ai-in-sports.

4.1 Long short-term Memory (LSTM)

An LSTM network [4] is a type of recurrent neural network which, unlike for instance CNNs, is able to pass information along from previous steps in e.g. a time sequence. LSTMs contain special memory gates which enable some long-term dependencies to be captured by the network during training, addressing a weakness of standard recurrent neural networks, which may suffer from vanishing error gradients during training. LSTMs are well suited to time series data, and have been used successfully in, for example, many natural language processing tasks.

Figure 5: Network architecture for the LSTM model, with an LSTM cell and two dense layers. The light blue boxes indicate layers in the network, and the number of neurons in each layer is stated inside the brackets.

The LSTM model in our experiment combines an LSTM cell with two dense layers (see fig. 5). The input of the LSTM model is a sequence of data points, each sequence corresponding to one pole push. The first layer of the LSTM model is an LSTM cell with 126 neurons, a number chosen experimentally to minimise the error on the validation set. The second layer of the model is a dense layer with 140 neurons, which is connected to a dense layer with 4 neurons and a softmax activation function. These two layers can be interpreted as a weighted majority vote: they weight the importance of each of the 140 time steps and then output the most likely gear for the entire pole push. Besides using a layer for majority voting, as in the model above, we also examined the performance when performing majority voting after the model had classified each of the time steps in the pole push separately. Employing weighted majority voting as a layer in the model improved the accuracy on validation data in comparison to performing majority voting after classifying each time step.
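As a concrete sketch of this architecture in Keras: the layer sizes (126, 140, 4) are as stated above, but the dense-layer activation, optimiser and loss are our assumptions and may differ from the authors' implementation.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_lstm(time_steps=140, channels=16, n_classes=4):
    """LSTM cell followed by two dense layers, as in fig. 5 (a sketch)."""
    model = keras.Sequential([
        keras.Input(shape=(time_steps, channels)),
        layers.LSTM(126),                      # 126 units, chosen on validation data
        layers.Dense(140, activation="relu"),  # activation is an assumption
        layers.Dense(n_classes, activation="softmax"),
    ])
    # Typical choices for one-hot encoded labels; not stated in the paper.
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

A model built this way takes a batch of 140 x 16 stroke matrices and outputs a probability distribution over the four gears.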

4.2 Bi-directional LSTM Model

The BLSTM network [2] has a network architecture similar to the LSTM network. The difference is that the LSTM network passes information only in the forward direction, whereas the BLSTM network passes information in both the forward and backward directions. Hence, a BLSTM cell specified with the same number of neurons as an LSTM cell uses twice as many weights.

Figure 6: The network architecture for the BLSTM model with one BLSTM layer and two dense layers.

Our BLSTM model consists of one BLSTM cell and two dense layers, see fig. 6. Experimentally minimising the validation set error suggested setting the number of neurons in the BLSTM cell to 64. Further, the numbers of neurons in the two dense layers were chosen to be 140 and 4 respectively, as in the LSTM model.
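A corresponding Keras sketch, with the same caveats as for the LSTM model; reading "64 neurons" as 64 units per direction is our interpretation:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_blstm(time_steps=140, channels=16, n_classes=4):
    """Bidirectional LSTM cell and two dense layers, as in fig. 6 (a sketch)."""
    model = keras.Sequential([
        keras.Input(shape=(time_steps, channels)),
        layers.Bidirectional(layers.LSTM(64)),  # forward + backward pass over time
        layers.Dense(140, activation="relu"),   # activation is an assumption
        layers.Dense(n_classes, activation="softmax"),
    ])
    return model
```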

4.3 Convolutional Neural Network Model

CNNs are a deep neural network architecture primarily used for image processing. A CNN employs a convolutional operator which performs a kind of down-sampling, as illustrated in fig. 7. For image processing, two-dimensional CNNs are typically used, but as we here deal with time series, we employ a one-dimensional CNN acting in the time dimension. As seen in fig. 7, the kernel size determines how many of the input elements are weighted and summed together in each convolutional operation, while the stride determines how many steps the kernel moves between operations.

Figure 7: The convolutional operator in a one dimensional CNN network, with kernel size 3 and stride 1.

Our CNN model consists of two one-dimensional convolutional layers and two dense layers (see fig. 8), as well as max-pooling and global max-pooling layers. The latter two layers are used for down-sampling, locally and globally.

Figure 8: The network architecture for the CNN model.

The number of filters in each convolutional layer was chosen by experimental evaluation, minimising error on the validation data. The model performance using one convolutional layer was also tested, but the model using two convolutional layers performed better on validation data. Similarly, the kernel size was set to 5, and the pool size in the max-pooling layer was also set to 5. The numbers of neurons in the two dense layers were chosen to be 140 and 4 respectively, as in the LSTM model.
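A Keras sketch of this model follows. The filter count (64) is an assumption, as the value chosen by the authors is not stated here; kernel size, pool size and dense-layer sizes are as described above.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(time_steps=140, channels=16, n_classes=4, filters=64):
    """Two 1-D convolutional layers with max pooling, then two dense layers
    (see fig. 8). The filter count is a hypothetical placeholder."""
    model = keras.Sequential([
        keras.Input(shape=(time_steps, channels)),
        layers.Conv1D(filters, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(pool_size=5),        # local down-sampling in time
        layers.Conv1D(filters, kernel_size=5, activation="relu"),
        layers.GlobalMaxPooling1D(),             # global down-sampling in time
        layers.Dense(140, activation="relu"),    # activation is an assumption
        layers.Dense(n_classes, activation="softmax"),
    ])
    return model
```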

5 Experiments and Results

In this section we present classification results for the three models (LSTM, CNN, BLSTM) described above. The experiments were run on a MacBook Air with an Intel Core i5 1.7 GHz processor and 4 GB of memory.

Experiment 1:

We trained the models on a subset of the data containing samples from all three skiers, and evaluated on another unseen subset as test data. We suspect that the same person performs strokes in a given technique in a relatively consistent manner, hence the strokes in the test set are likely to be quite similar to something from the training set. A motivation for this kind of experiment is envisaging an application using Skisens sensors which is personalised to the owner, who initially "calibrates" the product by skiing in specified gears to collect personal training data.

Experiment 1 was performed for all three models described above, using five-fold cross-validation, with each fold containing approximately the same number of strokes and the same proportion of strokes in each gear (folds 1-4 of 329 strokes, fold 5 of 355 strokes, from the total dataset of 1671 strokes).
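A split with these properties can be reconstructed with scikit-learn's stratified splitter (our sketch; the paper does not state how the folds were produced):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def stratified_folds(gear_labels, n_splits=5, seed=0):
    """Split stroke indices into folds with roughly equal gear proportions.

    gear_labels: 1-D array with one gear label per stroke.
    Returns a list of index arrays, one per fold.
    """
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    return [test_idx for _, test_idx in skf.split(np.zeros(len(gear_labels)),
                                                  gear_labels)]
```

Stratification keeps the per-gear proportions in each fold close to those of the full dataset, which matters here because the gear classes are imbalanced.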

Model Accuracy
LSTM 0.95
CNN 0.90
BLSTM 0.95
Table 2: Accuracy results for experiment 1, using five-fold cross-validation.

The results are promising, with between 90% and 95% correct classifications on average over the five folds, as summarised in Table 2. We note that the CNN model performed slightly worse than the other two, and also that its performance differed more over the different folds. We suspect that the CNN model suffered more than the LSTM-based models from the relatively small dataset. We note that the LSTM-based models also contain more trainable parameters than the CNN model, so more experimentation is needed with different CNN architectures. Training takes longer for the LSTM and BLSTM models, approximately 1-2 hours on the laptop computer used, compared to around 10 minutes for the CNN model. We note that for a larger study we would use modern hardware, which would considerably speed up training.

In fig. 9 the confusion matrix for the LSTM model is presented (the other two models had very similar results). We note that Gear 4 and double poling were the easiest to classify, while Gear 3 was the hardest. This was somewhat surprising, as the arm movements of Gear 4 and double poling are visually quite similar.

Figure 9: Confusion matrix for LSTM model, experiment 1.
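The per-gear breakdown in fig. 9 can be computed with a simple confusion-matrix helper (a generic NumPy sketch, not the authors' code), from which the overall accuracy is the sum of the diagonal divided by the total number of strokes:

```python
import numpy as np

def gear_confusion(y_true, y_pred, n_classes=4):
    """Confusion matrix: rows are the true gear, columns the predicted gear."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm
```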

Experiment 2:

Experiment 1 does not test the models' capability to generalise to a person they have not seen before. This was somewhat difficult to test, due to the small dataset. However, we did a second experiment with the best-performing model from Experiment 1 (the LSTM model), where we trained on data from two skiers and evaluated on unseen data from the third individual. This was expected to be harder, as the model would have to generalise, and ideally learn how an "average" stroke in each technique would be represented by the sensor data. As expected, performance dropped to 78%. We believe that this could be improved by training on a larger dataset with samples from many individuals, and performing a larger study is future work.

6 Discussion and Further Work

We have conducted a pilot study using data from sensors fitted to ski pole handles to predict which technique or gear the skier is using. The pilot experiment aimed at classifying time-series for single strokes, as these are easy to identify from the power data recorded from the poles (near-zero readings indicating when the poles are in the air). We have not yet attempted the task of passing in continuous sequences of skiing strokes and identifying gear changes. This is an interesting problem, as some previous work, e.g. [9], report that mis-classifications of single strokes often happen near change points.

For this study we only had access to data from three individuals, resulting in a dataset of merely 1671 strokes, which is on the small side for deep learning. This was noticeable in Experiment 2, where, unsurprisingly, classification accuracy dropped when the model was presented with an unseen skier. We are however encouraged by the results of this study to gather a larger dataset and perform a larger evaluation in the near future. Most other works on cross-country skiing technique classification come from the sports science domain, and often include only a few individuals (e.g. 10 skiers in [9], four skiers in [6]). Furthermore, these studies often primarily focus on reaching high accuracy for these specific individuals (often elite athletes). Experiments are often in the style of our Experiment 1, i.e. the training and test data contain the same individuals. As future work, it would be very interesting to apply deep learning techniques to a much larger dataset, containing both professional and recreational skiers, and investigate whether one can train a model to generalise well enough across all individuals, without taking small individual variations into account. This is particularly relevant from the perspective of Skisens, as they are interested in including technique classification together with their ski-pole sensors in, for example, a smart sports watch. Ideally, one would like a pre-trained model which does an acceptable job out of the box, and possibly then adapts to the individual user, without having to be trained from scratch.

References

  • [1] H. Allen and A. Coggan. Training and Racing with a Power Meter. Velo Press, 2010.
  • [2] A. Graves and J. Schmidhuber. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 2005.
  • [3] N. Y. Hammerla, S. Halloran, and T. Ploetz. Deep, convolutional, and recurrent models for human activity recognition using wearables. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI'16), 2016.
  • [4] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 1997.
  • [5] A. Holst and A. Jonasson. Classification of movement patterns in skiing. In Frontiers in Artificial Intelligence and Applications: Twelfth Scandinavian Conference on Artificial Intelligence, volume 257, 2013.
  • [6] J. Jang, A. Ankit, J. Kim, Y. Jang, H. Kim, J. Kim, and S. Ziong. A unified deep-learning model for classifying the cross-country skiing techniques using wearable gyroscope sensors. Sensors, 18(11), 2018.
  • [7] F. Marshland, K. Lyons, J. Anson, G. Waddington, C. Macintosh, and D. Chapman. Identification of cross-country skiing movement patterns using micro-sensors. Sensors, 12(4), 2012.
  • [8] J. Nilsson, P. Tveit, and O. Eikrehagen. Effects of speed on temporal patterns in classical style and freestyle cross-country skiing. Sports Biomechanics, 3(1), 2004.
  • [9] O. Rindal, T. Seeberg, J. Tjønnås, P. Haugnes, and Ø. Sandbakk. Automatic classification of sub-techniques in classical cross-country skiing using a machine learning algorithm on micro-sensor data. Sensors, 18(2), 2017.
  • [10] Y. Sakurai, F. Zenya, and Y. Ishige. Automated identification and evaluation of subtechniques in classical-style roller skiing. Journal of Sports Science and Medicine, 13, 2014.
  • [11] Y. Sakurai, F. Zenya, and Y. Ishige. Automatic identification of subtechniques in skating-style roller skiing using inertial sensors. Sensors, 16, 2016.
  • [12] T. Stöggl, A. Holst, A. Jonasson, E. Andersson, T. Wunch, C. Norström, and H.-C. Holmberg. Automatic classification of the sub-techniques (gears) used in cross-country ski skating employing a mobile phone. Sensors, 14, 2014.