With the rapid development of digital cameras and the proliferation of social media sharing, there has been explosive growth in the available figure skating videos, in both quantity and granularity. Every year, over 20 international figure skating competitions are held by the International Skating Union (ISU), with hundreds of skaters participating in them. Most high-level international competitions, such as the ISU Championships and the ISU Grand Prix of Figure Skating, are broadcast by worldwide broadcasters, for instance CBC, NHK, Eurosport, and CCTV. During the season, over 100 figure skating videos are uploaded to YouTube and Dailymotion every day.
The analysis of figure skating videos also has many real-world applications, such as automatically scoring the skaters, generating highlight shots, and video summarization. By virtue of state-of-the-art deep architectures and action recognition approaches, techniques for analyzing figure skating videos will also facilitate statistically comparing players and teams, and assessing a player's fitness, weaknesses, and strengths. From such sports statistics, professional advice can be drawn to help the training of players.
Sports video analytics and action recognition in general have been extensively studied in previous works. There exist many video datasets, such as Sports-1M, UCF101, HMDB51, FCVID and ActivityNet. These datasets crawled videos from search engines (e.g., Google or Bing) or social media platforms (e.g., YouTube, Flickr). The videos are annotated via crowdsourcing. On these video datasets, the most common efforts are made on video classification [19, 21], video event detection, action detection, and so on.
Remarkably, inspired by Pirsiavash et al., this paper addresses a novel task of learning to score figure skating videos, which is very different from previous action recognition tasks. Specifically, in this task, the model must understand every clip of a figure skating video (on average 4,400 frames in our Fis-V dataset) to predict the scores. In contrast, in action recognition one can easily judge the action label from parts of a video. For example, a single short clip capturing a mistake by the skater will significantly lower the final score in our task. Thus the model must fully understand all the video frames and handle videos of varying length.
Few works have been devoted to learning to score figure skating videos. The key challenges come from several aspects. First, different from consumer videos, figure skating videos are professional sports videos of longer length (on average 2 minutes and 50 seconds). Second, the scores of figure skating videos must be contributed by experts or referees; in contrast, the labels of previous classification/detection-based video analysis tasks are collected by crowdsourcing. Third, not all video segments are useful for regressing the scores, since the referees only take into account clips of technical movements (TES) or of a good interpretation of the music (PCS).
To address these challenges, we propose an end-to-end framework to efficiently learn to predict the scores of figure skating videos. In particular, our model can be divided into two complementary subnetworks, i.e., the Self-Attentive LSTM (S-LSTM) and the Multi-scale Convolutional Skip LSTM (M-LSTM). The S-LSTM employs a simple self-attentive strategy to select the important clip features, which are directly used for the regression tasks; thus the S-LSTM mainly learns to represent local information. The M-LSTM, on the other hand, models local and global sequential information at multiple scales. In the M-LSTM, we utilize the skip LSTM to reduce the total computational cost. Both subnetworks can be used directly as prediction models, or integrated into a single framework for the final regression tasks. Our models are evaluated on two figure skating datasets, namely MIT-skate and our own Fis-V video dataset. The experimental results validate the effectiveness of our models.
To further facilitate research on learning to score figure skating videos, we contribute the Figure Skating Video (Fis-V) dataset to the community. The Fis-V dataset provides high-quality videos together with labeled scores. Specifically, the videos in Fis-V are captured by professional camera devices, and high-standard international figure skating competition videos are employed as the data source. Example video frames of this dataset are shown in Fig. 1. Each video captures the whole performance of one skater only; the parts irrelevant to the performance (such as warming up, or bowing to the audience afterwards) are pruned. Thus the length of each video is about 2 minutes and 50 seconds. In total, we collect 500 videos of 149 professional figure skating players from more than 20 different countries. We also gather the scores given by the nine international referees in each competition.
Contributions. We highlight three contributions. (1) The proposed Self-Attentive LSTM can efficiently learn to model local sequential information by a self-attentive strategy. (2) We propose a Multi-scale Convolutional Skip LSTM model that learns local and global information at multiple scales, while saving computational cost by skipping some video features. (3) We contribute a high-quality figure skating video dataset, the Fis-V dataset. This dataset is more than 3 times bigger than the existing MIT-skate dataset. We hope it can boost research on learning to score professional sports videos.
The rest of this paper is organized as follows. Sec. II reviews related work. We describe the details of constructing the dataset in Sec. III. The methodology for solving the task is discussed in Sec. IV. We give the experimental results in Sec. V, and conclude the paper in Sec. VI.
II. Related Work
The sheer volume of video data makes automatic video content understanding intrinsically difficult. Very recently, deep architectures have been utilized to extract feature representations effectively in the video domain. While image representation techniques have matured quickly in recent years [11, 60, 59, 33, 6], more advanced architectures were developed for video understanding [27, 43, 8], including Convolutional Networks (ConvNets) with Long Short-Term Memory (LSTM) [7, 36] and 3D Convolutional Networks for visual recognition and action classification, two-stream network fusion for video action recognition [36, 42], and Convolutional Networks learning spatiotemporal features [12, 50]. We discuss these previous works in each subsection.
II-A Video Representation

The success of deep learning in video analysis tasks stems from its ability to derive discriminative spatial-temporal feature representations directly from raw data, tailored for a specific task [16, 51].
Directly extending the 2D image-based filters to 3D spatial-temporal convolutions may be problematic. Such spatial-temporal Convolutional Neural Networks (CNNs), if not learned on large-scale training data, cannot beat hand-crafted features. Wang et al. showed that the performance of 3D convolutions is worse than that of state-of-the-art hand-crafted features. Even worse, 3D convolutions are also computationally expensive, and they normally require more iterations to train than architectures without them.
To reduce this computational burden, Sun et al. proposed to factorize spatial-temporal convolutions. It is worth noting that videos can be naturally considered as an ensemble of spatial and temporal components. Motivated by this observation, Simonyan and Zisserman introduced a two-stream framework, which learns spatial and temporal feature representations concurrently with two convolutional networks. This two-stream approach achieved state-of-the-art performance on many benchmarks. Furthermore, several important variants of fusing the two streams have been proposed [54, 9, 55, 64, 56, 65, 2, 62].
Most recently, C3D and SENet have been proposed as powerful classification models for videos and images. C3D utilizes spatial-temporal convolution kernels stacked into a deep network to achieve a compact representation of videos, and has been shown to preserve temporal information more effectively than 2D CNNs. SENet adopts the "Squeeze-and-Excitation" block, which integrates channel-wise features by stressing the interdependencies between channels. In this work, we employ C3D as the basic video feature representation.
II-B Video Fusion
In video categorization systems, two types of feature fusion strategies are widely used, i.e., early fusion and late fusion. Multiple kernel learning was utilized to estimate the fusion weights [4, 35] needed in both early and late fusion. To efficiently exploit the relationships among features, several more advanced fusion techniques were developed. One optimization framework applied a shared low-rank matrix to reduce noise in the fusion. An audio-visual joint codebook proposed by Jiang et al. discovered and fused the correlations of audio and visual features for video classification. Dynamic fusion has also been utilized as the best feature combination strategy.
With the rapid growth of deep neural networks, the combination of multiple features in neural networks has gradually come into sight. In multimodal deep learning, a deep de-noised auto-encoder [48] was employed to fuse the features of different modalities. More recently, Recurrent Neural Networks have also been utilized to fuse video representations. Wu et al. modeled videos as three streams, namely frames, optical flow and audio spectrograms, and adaptively fused the classification scores of the different streams with learned weights. Ng et al. employed time-domain convolution or LSTM to handle the video structure and used late fusion after the two-stream aggregation. In comparison with these works, we propose a fusion network to efficiently fuse the local and global sequential information learned by the self-attentive and M-LSTM models.
II-C Sports Video Analysis
Recently, sports video analysis has become a topical subject in the research community. A common and important unit in sports video analysis is the action, or a short sequence of actions. Various works assess how well people perform actions in different sports, including an application of automated video assessment demonstrated by a computer system that analyzes video recordings of gymnasts performing the vault; a probabilistic model of a basketball team playing based on the trajectories of all players; and the trajectory-based evaluation of multi-player basketball activity using a Bayesian network [34].
The task of learning to score sports has rarely been studied, with only two exceptions [41, 39]. Pirsiavash et al. introduced a learning-based framework evaluated on two distinct types of actions (diving and figure skating), training a regression model from spatiotemporal pose features to scores obtained from expert judges. Parmar et al. applied Support Vector Regression (SVR) and Long Short-Term Memory (LSTM) to the C3D features of videos to obtain scores on the same dataset. In both [41, 39], a regression model is learned from the features of video clips/actions to the sport scores. In comparison with [41, 39], our model is capable of modeling the nature of figure skating. In particular, it learns both the local and global sequential information, which is essential in modeling the TES and PCS. Furthermore, our self-attentive and M-LSTM models alleviate the problem that figure skating videos are too long for an ordinary LSTM to process.
III. Figure Skating Video (Fis-V) Dataset
Our figure skating video dataset is designed for studying the problem of analyzing figure skating videos, including learning to predict the score of each skater and generating highlight shots. The dataset will be released to the community under the necessary license.
III-A Dataset Construction
Data source. To construct the dataset, we search for and download a great quantity of figure skating videos. The videos come from formal, high-standard international skating competitions, including the NHK Trophy (NHK), Trophee Eric Bompard (TEB), Cup of China (COC), the Four Continents Figure Skating Championships (4CC), and so on. The videos in our dataset cover only the competitive performances themselves. Note that figure skating videos may also be included in some previous datasets (e.g., UCF101, HMDB51, Sports-1M and ActivityNet), which were constructed by searching and downloading from various search engines (e.g., Google, Flickr and Bing) or social media sharing platforms (e.g., YouTube, Dailymotion). The data sources of those datasets thus differ from ours. We emphasize the better and more consistent visual quality of our TV videos from high-standard international competitions, compared with consumer videos downloaded from the Internet. Additionally, consumer videos about figure skating may also include practice videos.
Selection criteria. We carefully select the figure skating videos used in the dataset. We assume that the scoring criteria should be consistent across high-standard international skating competitions. Thus, to maintain standard and authorized scoring, we select videos only from the highest level of international competitions, with fair and reasonable judgement. In particular, we use videos from the ISU Championships, the ISU Grand Prix of Figure Skating, and the Winter Olympic Games. In total we have videos of 149 skaters from more than 20 different countries. Furthermore, in figure skating competitions, the mark scheme changes slightly every season and differs considerably between men and women. To make the scores comparable, only the competition videos of ladies' singles short programs held over the past ten years are used in our dataset. We also collect the ground-truth scores given by the nine referees in each competition.
Not rush videos. Rush videos refer to unedited videos, which normally contain redundant and repetitive content. Figure skating videos in previous datasets may include such unedited "rush" parts, for example warming up, bowing to the audience after the performance, and waiting for scores at the Kiss & Cry. These parts are not necessarily useful for judging the scores of a performance. In contrast, we aim at learning to predict the scores purely from the competition performance of each skater, so those unedited parts are pruned from our videos. More importantly, in sports videos with multiple players, the camera has to track, locate, and transition between different players, whereas each of our figure skating videos contains only one skater, and the whole video simply tracks and follows her over the whole performance, as shown in Fig. 1.
III-B Pre-processing and Scoring
We initially downloaded 100 hours of videos; a processing procedure is thus needed to prune low-quality videos. In particular, we manually select and remove the videos that are not fluent or coherent. To make sure the videos exactly correspond to the ground-truth scores, we manually processed each video by further cutting redundant clips (e.g., replay shots or the skater's warm-up shots). We only keep the video from the exact beginning of each performance to the moment of the ending pose, giving a duration of about 2 minutes and 50 seconds. This time slot also matches the skating duration stipulated by the International Skating Union, which is 2 minutes and 40 seconds, plus or minus 10 seconds, for the ladies' singles short program. Each video has about 4,300 frames at a frame rate of 25 fps. Thus both the number of frames and the number of videos are far larger than in the previously released dataset.
Scoring of figure skating. We carefully annotated each video with the skater and competition, and labeled it with two scores, namely the Total Element Score (TES) and the Total Program Component Score (PCS). These scores are given according to the mark scheme of figure skating competitions, and measure the performance of the skater over the whole competition. The TES judges the difficulty and execution of all technical movements, while the PCS evaluates the performance and the interpretation of the music by the skater. Both the TES and PCS are given by nine referees who are experts on figure skating competitions. Note that the same skater may receive very different scores at different competitions, depending on her performance. Finally, we gather 500 videos of ladies' singles short programs, each with its ground-truth scores. We randomly split the dataset into 400 training videos and 100 testing ones.
III-C Data Analysis
Apart from learning a score prediction model on this dataset, we conduct statistical analysis and obtain some interesting findings. In particular, we compute the Spearman correlation and the Kendall tau correlation between TES and PCS across different matches (Fig. 2) and across different skaters (Fig. 3). More specifically, we take the TES and PCS values of all skaters in each match and compute the correlations shown in Fig. 2; these values reflect how the TES and PCS are correlated within each match. On the other hand, we take the TES and PCS values of the same skater over all the matches she took part in, and calculate their correlations in Fig. 3.
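For reference, the two correlation measures can be computed with a small NumPy sketch; the scores below are random stand-ins for real TES/PCS values, and the helper names are ours.

```python
import numpy as np

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the rank vectors (assumes no ties)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

def kendall_tau(x, y):
    """Kendall tau: (concordant - discordant) pairs over all pairs (assumes no ties)."""
    n = len(x)
    s = sum(np.sign((x[i] - x[j]) * (y[i] - y[j]))
            for i in range(n) for j in range(i + 1, n))
    return float(2.0 * s / (n * (n - 1)))

rng = np.random.default_rng(0)
tes = rng.uniform(20, 45, size=24)   # hypothetical TES of 24 skaters in one match
pcs = rng.uniform(20, 40, size=24)   # hypothetical PCS of the same skaters
rho, tau = spearman(tes, pcs), kendall_tau(tes, pcs)
```

In our analysis, these per-match (and per-skater) correlations are what Figs. 2 and 3 summarize.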
As shown in Fig. 2, we find that in over half of all matches, the Total Element Score (TES) has little correlation with the Total Program Component Score (PCS). This is reasonable, since TES and PCS are designed to measure two quite different aspects of the skater's performance; in other words, TES and PCS should be relatively independently distributed. In a few matches, we do observe high correlation between TES and PCS, as in Fig. 2. We attribute this to the subjectivity of the referees, i.e., referees may believe that skaters who can complete difficult technical movements (TES) are also able to interpret the music well (PCS). Weak correlations between TES and PCS are likewise shown in Fig. 3.
Fis-V dataset vs. MIT-skate dataset. Compared with the existing MIT-skate dataset, our dataset has a larger scale (more than 3 times as many videos), higher annotation quality (for each video we provide both the PCS and TES scores rather than a single total score), and more up-to-date figure skating videos (our videos come from 12 competitions held between 2012 and 2017). In particular, all videos in MIT-skate are from competitions held before 2012, which makes that dataset somewhat outdated, since the scoring standards of international figure skating competitions are constantly changing. We believe a qualified figure skating video dataset should be updated periodically.
In this section, we present our framework for learning to score figure skating videos. We divide the section into three parts. Sec. IV-A discusses the problem setup and the video features we use. We describe how to obtain the video-level representation in Sec. IV-B. Finally, the fusion scheme for learning to score is explained in Sec. IV-C.
IV-A Problem Setup
Weakly labeled regression. In figure skating matches, the referees incrementally add to the TES as the program progresses: once the skater finishes one particular technical movement, the corresponding score is added. Ideally, we would want the score of each technical movement; but in practice, it is impossible to obtain the incrementally added scores synchronized with each video clip. Thus, only the final TES and PCS are provided, and the tasks of predicting these scores can be formulated as weakly labeled regression tasks. We treat the prediction of TES and PCS as two independent regression tasks.
Video features. We adopt deep spatial-temporal convolutional networks for a more powerful video representation. We extract deep clip-level features off-the-shelf from 3D Convolutional Networks pre-trained on a large-scale dataset. In particular, we use the 4096-dimensional clip-based features from the fc6 layer of C3D pre-trained on Sports-1M, a large-scale dataset containing 1,133,158 videos automatically annotated with 487 sports labels. We use a sliding window of 16 frames over the video timeline to cut the video into clips, with a stride of 8.
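The sliding-window clip extraction can be sketched as follows; the helper name is ours, while the window and stride values follow the text.

```python
def clip_starts(n_frames, window=16, stride=8):
    """Start frame indices of the 16-frame clips fed to C3D."""
    if n_frames < window:
        return []
    return list(range(0, n_frames - window + 1, stride))

# A ~4,300-frame short program yields roughly n_frames / stride clips,
# each mapped to a 4096-d fc6 feature vector.
n_clips = len(clip_starts(4300))
```

With a stride of 8, consecutive 16-frame clips overlap by half, so a full program produces on the order of 500 clip-level features.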
IV-B Self-Attentive LSTM (S-LSTM)
We propose a self-attentive feature embedding to selectively learn compact feature representations, which can efficiently model local information. Specifically, since each video has about 4,300 frames over a duration of 2 minutes and 50 seconds, the total computational cost of using all C3D features would be very heavy. A trivial alternative is to employ a max or average pooling operator to merge these features into a video-level representation; however, not all video clips/frames contribute equally to regressing the final scores. Thus, in order to extract a more compact feature representation, we have to address two problems properly:

The features of clips that correspond to difficult technical movements should be heavily weighted.

The produced compact feature representation should have a fixed length for all videos.
To this end, a self-attentive embedding scheme is proposed here to generate the video-level representations. In particular, suppose we have the $d$-dimensional C3D feature sequence of a video $F \in \mathbb{R}^{n \times d}$; we can compute the weight matrix $A$,

$$A = \mathrm{softmax}\left( W_{s2} \tanh\left( W_{s1} F^{T} \right) \right), \qquad (1)$$

where $\mathrm{softmax}(\cdot)$ and $\tanh(\cdot)$ indicate the softmax and hyperbolic tangent functions respectively. The softmax ensures that the computed weights in each row of $A$ sum to 1.

We implement Eq. (1) as a 2-layer Multi-Layer Perceptron (MLP) without bias, with $u$ hidden neurons. Thus the weights $W_{s1}$ and $W_{s2}$ have dimensions $u \times d$ and $r \times u$, and the dimension of $A$ is $r \times n$. The compact representation is computed as $M = A F$. Each row of the matrix $M$ can be interpreted as a specific focus point on the video, perhaps a key action pattern; $r$ stands for the diversity of the descriptions. Therefore, multiplying the feature matrix $F$ by $A$ helps us extract all such patterns, resulting in a shorter input sequence $M$ of dimension $r \times d$.
The resulting embedding $M$ is fed into a 1-layer Long Short-Term Memory (LSTM). The final LSTM output is connected to a fully connected layer with 64 neurons to regress the TES and PCS scores. We use the Mean Square Error (MSE) as the loss function to optimize this self-attentive LSTM. A penalty term is added to the MSE loss function in order to encourage the diversity of the learned self-attentive feature embedding. The form of the penalty is

$$P = \left\| A A^{T} - I \right\|_{F}^{2}, \qquad (2)$$

where $\|\cdot\|_{F}$ is the Frobenius norm and $I$ is the identity matrix.
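A NumPy sketch of the self-attentive embedding and its diversity penalty; the sizes $u$, $r$ and the weight scales here are our placeholder assumptions, not the paper's settings.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n, d, u, r = 536, 4096, 128, 20          # clips, C3D dim, hidden units, attention rows
F = rng.standard_normal((n, d))          # one video's C3D feature sequence
W_s1 = rng.standard_normal((u, d)) * 0.01
W_s2 = rng.standard_normal((r, u)) * 0.01

A = softmax(W_s2 @ np.tanh(W_s1 @ F.T), axis=1)   # r x n attention weights
M = A @ F                                          # r x d compact representation

# Diversity penalty: pushes the r attention rows toward orthogonality.
penalty = np.linalg.norm(A @ A.T - np.eye(r), ord="fro") ** 2
```

The $r \times d$ matrix `M` replaces the $n \times d$ sequence as the (much shorter) input to the LSTM.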
The self-attentive LSTM is proposed here for the first time to address regression tasks. We highlight several differences from previous works. (1) Attention strategies have been widely utilized in previous works [63, 45, 29, 44]; in contrast, we employ a self-attentive strategy, similar to one used in NLP tasks. (2) Our self-attentive LSTM is also very different from that prior work: the output of the self-attentive feature embedding is used as the input of an LSTM and a fully connected layer for the regression tasks, whereas the prior work utilized the attention strategy to process the output of the LSTM and directly concatenated the feature embedding for classification tasks.
IV-C Multi-scale Convolutional Skip LSTM (M-LSTM)
The self-attentive LSTM is efficient in modeling local sequential information. Nevertheless, it is essential to model sequences of frames/clips that contain both local patterns (technical movements) and global patterns (the overall performance), since in principle the TES scores the technical movements while the PCS reflects the whole performance of the skater. In light of this understanding, we propose the Multi-scale Convolutional Skip LSTM (M-LSTM) model.
As an extension of the LSTM, our M-LSTM learns to model sequential information at multiple scales. Specifically, the dense clip-based C3D video features give a good representation of local sequential information. To facilitate abstracting information at multiple scales, our M-LSTM employs several parallel 1D convolution layers with different kernel sizes, as shown in Fig. 4. Kernels with a small filter size aggregate and extract the visual representation of action patterns lasting seconds in the videos, while kernels with a large filter size model the global information of the videos. However, in practice, quite differently from the videos used for video classification (e.g., UCF101), our figure skating videos are much longer. The total number of frames still makes it difficult for the LSTM training process to capture long-term dependencies.
To solve this issue, we further adopt a skipping RNN strategy; in particular, we propose a revised skip LSTM structure. An original LSTM works as follows:

$$\begin{aligned}
i_t &= \sigma\left( W_{i} x_t + U_{i} h_{t-1} + b_i \right),\\
f_t &= \sigma\left( W_{f} x_t + U_{f} h_{t-1} + b_f \right),\\
o_t &= \sigma\left( W_{o} x_t + U_{o} h_{t-1} + b_o \right),\\
g_t &= \tanh\left( W_{g} x_t + U_{g} h_{t-1} + b_g \right),\\
c_t &= f_t \odot c_{t-1} + i_t \odot g_t,\\
h_t &= o_t \odot \tanh\left( c_t \right),
\end{aligned}$$

where $i_t$, $f_t$, $o_t$ are the input, forget and output gates; $\sigma(\cdot)$ indicates the sigmoid function; $x_t$ is the input of the LSTM; the hidden state and cell state of the LSTM are denoted as $h_t$ and $c_t$ respectively; and $W$, $U$, $b$ are the learnable parameters. In the skip LSTM, a binary state update gate $u_t \in \{0, 1\}$ is added, which controls the update of the cell state and hidden state. The new update rule is as follows,

$$\begin{aligned}
u_t &= \mathrm{round}\left( \tilde{u}_t \right),\\
c_t &= u_t \odot \tilde{c}_t + (1 - u_t) \odot c_{t-1},\\
h_t &= u_t \odot \tilde{h}_t + (1 - u_t) \odot h_{t-1},\\
\Delta \tilde{u}_t &= \sigma\left( W_{u} c_t + b_u \right),\\
\tilde{u}_{t+1} &= u_t \, \Delta \tilde{u}_t + (1 - u_t)\left( \tilde{u}_t + \min\left( \Delta \tilde{u}_t,\; 1 - \tilde{u}_t \right) \right),
\end{aligned}$$

where $\sigma(\cdot)$ is the sigmoid function, $\odot$ denotes element-wise multiplication, and $\mathrm{round}(\cdot)$ indicates the round function. $\tilde{c}_t$ and $\tilde{h}_t$ are the candidate values of the corresponding states, which are exposed if $u_t = 1$; $\Delta \tilde{u}_t$ is the error accumulated into the control variable $\tilde{u}_t$ while no update occurs. Furthermore, different from the original skip RNN, our model revises the update rules of $c_t$ and $h_t$ to prevent the network from being forced to expose a memory cell which has not been updated, which would produce misleading information, as shown in Fig. 5.
The key ingredient of our skip LSTM is the binary update gate $u_t = \mathrm{round}(\tilde{u}_t)$. By using the round function, our M-LSTM can skip the less significant updates whenever $u_t = 0$. In this way, our M-LSTM can model even longer-term data dependencies.
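The effect of the binary update gate can be illustrated with a tiny trace; here $\Delta\tilde{u}_t$ is held constant for clarity, whereas in the model it is computed from the cell state.

```python
def skip_gate_trace(delta, steps):
    """Trace u_t = round(u~_t) with the accumulate-until-update rule."""
    u_tilde, gates = 1.0, []       # u~ starts at 1: force an update at the first step
    for _ in range(steps):
        u = int(round(u_tilde))    # binary state update gate u_t
        gates.append(u)
        if u == 1:
            u_tilde = delta        # state updated: the accumulator restarts from delta
        else:
            u_tilde += min(delta, 1.0 - u_tilde)   # no update: accumulate the error
    return gates

# With delta = 0.4 the cell/hidden states are updated every other step:
print(skip_gate_trace(0.4, 6))     # → [1, 0, 1, 0, 1, 0]
```

Smaller accumulated increments mean longer runs of skipped steps, which is exactly what lets the LSTM stride over redundant clips of a long program.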
The whole structure of our M-LSTM is illustrated in Fig. 4. Since the skip LSTM is used to discard redundant information, we connect it only to the convolution layers with small kernels, and apply a common LSTM after the other convolution layers. The outputs at the final time step of all parallel LSTMs are then concatenated and passed to a fully connected layer to regress the prediction scores.
Thus, with the M-LSTM architecture, we get the best of both worlds: the multi-scale convolutional structure extracts local and global feature representations from the videos, and the revised skip LSTM efficiently skips/discards the redundant information that is not essential to learning them. The final LSTM outputs at different scales are concatenated and fed to the nonlinear fully connected layer for regression. The effectiveness of our M-LSTM is validated in the experiments.
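A shape-level sketch of the M-LSTM wiring; the kernel sizes, strides and hidden size below are our assumptions for illustration, not the exact values of Fig. 4.

```python
n_clips, hidden = 536, 256        # one video's C3D sequence length; assumed LSTM size

def conv1d_len(n, kernel, stride=1):
    """Output length of a 1D temporal convolution (no padding)."""
    return (n - kernel) // stride + 1

# Hypothetical parallel branches as (kernel, stride) pairs. Small kernels
# capture second-scale action patterns and feed skip LSTMs; the large
# kernel summarizes broader context and feeds an ordinary LSTM.
branches = [(3, 2), (5, 2), (9, 4)]
seq_lens = [conv1d_len(n_clips, k, s) for k, s in branches]

# Each branch's LSTM exposes its final hidden state; concatenating them
# gives the input of the fully connected regressor.
fused_dim = hidden * len(branches)
```

The fully connected head then maps the `fused_dim`-dimensional vector to a single TES or PCS score.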
V-A Settings and Evaluation
Datasets. We evaluate our tasks on both MIT-skate and our Fis-V dataset. MIT-skate has 150 videos at 24 frames per second. We utilize the standard split of 100 videos for training and the rest for testing. On our Fis-V dataset, we use the split of 400 videos for training and the rest for testing.
As the evaluation metric, we use the standard Spearman rank correlation, as in [41] and [39]; this makes the results of our framework directly comparable to those reported in [41, 39]. Additionally, to give more insight into our model, the Mean Square Error (MSE) is also used to evaluate the models. On MIT-skate, the published results are trained on the final scores, so these scores are used to evaluate our framework.
Experimental settings. For the self-attentive LSTM subnetwork, the attention dimension and the hidden size of the LSTM are set to the same value, and the batch size is 32. For the M-LSTM subnetwork, we use the same hidden size for both types of LSTM layers; the other parameter settings are depicted in Fig. 4. We train with the optimization algorithm of [23].
The whole framework is trained on one NVIDIA 1080Ti GPU card and converges within 250 epochs; training one model takes 20 minutes in total. We augment the videos by horizontally flipping the frames. Following standard practice, dropout is set to 0.7 and used only in the fully connected layers; batch normalization is added after each convolution layer. Our model is an end-to-end network, so we directly use the C3D feature sequences of the training data to train the model with the parameters above.
Additionally, we do not fine-tune the C3D features in our model, due to the tremendous computational cost. Our videos are very long (4,400 frames on average), but there are only around 400 training videos. To fine-tune C3D on Fis-V or MIT-skate, we would need to forward and backpropagate through the relatively large C3D model for 400 iterations per video, which requires a huge computational cost. On the other hand, we have observed overfitting in training when the hyper-parameters are not well tuned, due to the small dataset size (only 400 videos). Thus, adding C3D to the training graph would make the training process more difficult.
Competitors. Several different competitors and variants are discussed here. Specifically, we consider different combinations of the following choices:
Using frame-level features: we use the 2048-dimensional features from the pool5 layer of SENet, the winner of the ILSVRC 2017 Image Classification Challenge.
Using max or average pooling for video-level representation.
Using different regression models: SVR with a linear or RBF kernel for the regression tasks.
LSTM and bi-LSTM based models: we use the C3D-LSTM model depicted in [39]. Note that, due to the very long video sequences and for a fairer comparison, we set the hidden size of the LSTM to match ours, adopt an optional bi-directional LSTM, and use a multi-layer regressor identical to that of our models. This C3D-LSTM model is further extended by using SENet features or a bi-directional LSTM.
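The pooling-based baselines reduce a video's clip features to one vector before regression; a sketch with a random stand-in feature matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
feats = rng.standard_normal((536, 4096))   # stand-in for one video's C3D clip features

video_max = feats.max(axis=0)    # max pooling over the clip axis
video_avg = feats.mean(axis=0)   # average pooling over the clip axis

# Either 4096-d vector is then regressed to TES/PCS, e.g. by a linear
# or RBF-kernel SVR fitted on the 400 training videos.
```

Both operators yield a fixed-length representation regardless of video length, which is what makes SVR applicable, but they weight every clip equally, unlike the self-attentive embedding.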
Results of the Spearman correlation. We report the results in Tab. I. On MIT-skate and our Fis-V dataset, we compare several variants and baselines. Our framework achieves the best performance on both datasets and clearly outperforms the baselines (including [41, 28, 39]) by a large margin, which shows its effectiveness. We further conduct an ablation study to explore the contribution of each component, namely the M-LSTM and the S-LSTM. In general, the results of the M-LSTM alone already beat all the other baselines on both datasets. This is reasonable, since the M-LSTM can effectively learn local and global information with the efficient revised skip LSTM structure. Furthermore, on MIT-skate the S-LSTM is complementary to the M-LSTM, since combining them achieves higher results. Both the M-LSTM and the S-LSTM perform well on the Fis-V dataset.
Results of different variants. For the regression tasks, we further explore different variants in Tab. I. Using the C3D features, we compare different pooling and regression methods.
(1) Max vs. avg pooling. We do not have conclusive results on which pooling method is better: max pooling performs better on MIT-skate, while average pooling beats max pooling on Fis-V. This reflects the intrinsic difficulty of the regression tasks.
(2) RBF vs. linear SVR. In general, we find that linear SVR performs better than RBF SVR, and both perform worse than our framework.
(3) SENet vs. C3D. On the Fis-V dataset, we also compare the results of using SENet features. SENet features are static frame-based features, while C3D features are clip-based. Note that within each video we generally extract different numbers of SENet and C3D features; thus it is nontrivial to directly combine the two feature types. Models using C3D features produce better predictions than those using SENet features, since figure skating videos are mostly about the movement of each skater; the clip-based C3D features better capture this motion information.
(4) TES vs. PCS. With comparable models and features, the correlation results on PCS are generally better than those on TES, indicating that PCS is relatively easier to predict than TES.
Results of the mean square error. On our Fis-V dataset, we also compare the results under the MSE metric in Tab. II. In particular, the proposed M-LSTM and S-LSTM significantly beat all the other baselines by a large margin. Furthermore, we still observe a performance boost on PCS by combining M-LSTM and S-LSTM.
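The MSE metric reported in Tab. II is the standard mean squared error between predicted and ground-truth scores (score values below are hypothetical):

```python
# Mean squared error between ground-truth and predicted scores
# (lower is better); computed separately for TES and PCS.
import numpy as np

def mse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean((y_true - y_pred) ** 2))

print(mse([62.5, 71.0, 55.3], [60.5, 72.0, 56.3]))  # -> 2.0
```

Unlike Spearman correlation, MSE penalizes absolute deviations of the predicted scores, so the two metrics together assess both ranking quality and calibration.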
Interestingly, we notice that on TES the combination S-LSTM+M-LSTM does not improve significantly over M-LSTM or S-LSTM alone. This is somewhat expected: the TES task aims at scoring the clips of technical movements, which is a much easier task than PCS, which scores the interpretation of the music. Thus the features extracted by M-LSTM or S-LSTM alone are already good enough to learn a regressor for TES, and combining both may introduce redundant information. Hence S-LSTM+M-LSTM cannot obtain further improvement over M-LSTM or S-LSTM on TES.
Ablation study on the self-attentive strategy. To visualize the self-attentive mechanism, we compute the attention weight matrix of a specific video. If a clip has a high weight in at least one row of , this clip shows an important technical movement contributing to the TES score; otherwise it is insignificant. We show a pair of example clips (16 frames) in Fig. 6. The clip in the top two rows, which receives higher attention weights, shows a "hard" action, for instance "jumping on the same foot within a spin", while the movement in the bottom clips does not.
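The attention weight matrix analyzed above follows the structured self-attention of Lin et al. [25], A = softmax(W_s2 · tanh(W_s1 · H^T)), where H stacks the per-clip LSTM hidden states. A minimal numpy sketch, with illustrative (not the paper's actual) dimensions:

```python
# Sketch of the structured self-attention producing the weight matrix A.
# Each of the r rows of A is a distribution over the T clips; a clip with
# a high weight in some row is deemed an important technical movement.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
T, d, d_a, r = 20, 64, 32, 5           # clips, hidden size, attn dim, attn rows
H = rng.normal(size=(T, d))            # per-clip LSTM hidden states (toy values)
W_s1 = rng.normal(size=(d_a, d))
W_s2 = rng.normal(size=(r, d_a))

A = softmax(W_s2 @ np.tanh(W_s1 @ H.T), axis=1)   # shape (r, T), rows sum to 1

# Heuristic used for visualization: a clip is important if at least one
# attention row gives it more than uniform (1/T) weight.
important = A.max(axis=0) > 1.0 / T
print(A.shape, int(important.sum()))
```

In the full model the r weighted sums A·H form the video embedding passed to the regressor; here only the weight computation and the importance heuristic are shown.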
In this paper, we present a new dataset, Fis-V, for figure skating sports video analysis. We target the task of learning to score each skater's performance. We propose two models for the regression tasks, namely the Self-Attentive LSTM (S-LSTM) and the Multi-scale Convolutional Skip LSTM (M-LSTM), and integrate the two networks in a single end-to-end framework. We conduct extensive experiments to thoroughly evaluate our framework and its variants on the MIT-skate and Fis-V datasets. The experimental results validate the effectiveness of the proposed methods.
-  F. R. Bach, G. R. Lanckriet, and M. I. Jordan. Multiple kernel learning, conic duality, and the smo algorithm. In ICML, 2004.
-  Hakan Bilen, Basura Fernando, Efstratios Gavves, Andrea Vedaldi, and Stephen Gould. Dynamic image networks for action recognition. In CVPR, 2016.
-  Víctor Campos, Brendan Jou, Xavier Giró-i Nieto, Jordi Torres, and Shih-Fu Chang. Skip rnn: Learning to skip state updates in recurrent neural networks. ICLR, 2018.
-  L. Cao, J. Luo, F. Liang, and T. S. Huang. Heterogeneous feature machines for visual recognition. In ICCV, 2009.
-  N. Dalal, B. Triggs, and C. Schmid. Human detection using oriented histograms of flow and appearance. In ECCV, 2006.
-  V. Delaitre, J. Sivic, and I. Laptev. Learning person-object interactions for action recognition in still images. In NIPS, 2011.
-  J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In CVPR, 2015.
-  A. A. Efros, A. C. Berg, G. Mori, and J. Malik. Recognizing action at a distance. In ICCV, pages 726–733, 2003.
-  C. Feichtenhofer, A. Pinz, and A. Zisserman. Convolutional two-stream network fusion for video action recognition. In CVPR, 2016.
-  A. S. Gordon. Automated video assessment of human performance. In AI-ED, 1995.
-  A. Gupta, A. Kembhavi, and L. S. Davis. Observing human-object interactions: Using spatial and functional compatibility for recognition. In IEEE TPAMI, 2009.
-  G. W. Taylor, R. Fergus, Y. LeCun, and C. Bregler. Convolutional learning of spatio-temporal features. In ECCV, 2010.
-  Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. Activitynet: A large-scale video benchmark for human activity understanding. In CVPR, 2015.
-  Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. arXiv preprint, 2017.
-  H. Wang, A. Kläser, C. Schmid, and C.-L. Liu. Action recognition by dense trajectories. In CVPR, 2011.
-  Shuiwang Ji, Wei Xu, Ming Yang, and Kai Yu. 3d convolutional neural networks for human action recognition. In ICML, 2010.
-  W. Jiang, C. Cotton, S.-F. Chang, D. Ellis, and A. Loui. Short-term audio-visual atoms for generic video concept classification. In ACM MM, 2009.
-  Yu-Gang Jiang, Zuxuan Wu, Jun Wang, Xiangyang Xue, and Shih-Fu Chang. Exploiting feature and class relationships in video categorization with regularized deep neural networks. In IEEE TPAMI, 2017.
-  Yu-Gang Jiang, Guangnan Ye, Shih-Fu Chang, Daniel Ellis, and Alexander C. Loui. Consumer video understanding: A benchmark database and an evaluation of human and machine performance. In ACM International Conference on Multimedia Retrieval, 2011.
-  Marko Jug, Janez Perš, Branko Dežman, and Stanislav Kovačič. Trajectory based assessment of coordinated human activity. In International Conference on Computer Vision Systems, pages 534–543. Springer, 2003.
-  Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In CVPR, 2014.
-  Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
-  Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
-  H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. HMDB: A large video database for human motion recognition. In ICCV, 2011.
-  I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld. Learning realistic human actions from movies. In CVPR, pages 1–8, 2008.
-  I. Laptev and P. Perez. Retrieving actions in movies. In ICCV, 2007.
-  Quoc V. Le, Will Y. Zou, Serena Y. Yeung, and Andrew Y. Ng. Learning hierarchical invariant spatio-temporal features for action recognition with independent subspace analysis. In CVPR, pages 3361–3368, 2011.
-  Zhenyang Li, Efstratios Gavves, Mihir Jain, and Cees GM Snoek. Videolstm convolves, attends and flows for action recognition. arXiv preprint arXiv:1607.01794, 2016.
-  Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130, 2017.
-  D. Liu, K.-T. Lai, G. Ye, M.-S. Chen, and S.-F. Chang. Sample-specific late fusion for visual category recognition. In CVPR, 2013.
-  Zach Lowe. Lights, cameras, revolution. Grantland, March, 2013.
-  S. Maji, L. Bourdev, and J. Malik. Action recognition from a distributed representation of pose and appearance. In CVPR, 2011.
-  A. McQueen, J. Wiens, and J. Guttag. Automatically recognizing on-ball screens. In MIT Sloan Sports Analytics Conference (SSAC), 2014.
-  P. Natarajan, S. Wu, S. Vitaladevuni, X. Zhuang, S. Tsakalidis, U. Park, and R. Prasad. Multimodal feature fusion for robust event detection in web videos. In CVPR, 2012.
-  J. Yue-Hei Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici. Beyond short snippets: Deep networks for video classification. In CVPR, 2015.
-  J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Ng. Multimodal deep learning. In ICML, 2011.
-  P. Over, G. Awad, M. Michel, J. Fiscus, W. Kraaij, and A. F. Smeaton. Trecvid 2011 – an overview of the goals, tasks, data, evaluation mechanisms and metrics. In Proceedings of TRECVID 2011, 2011.
-  Paritosh Parmar and Brendan Tran Morris. Learning to score olympic events. In CVPR Workshops, pages 76–84, 2017.
-  M. Perse, M. Kristan, J. Pers, and S Kovacic. Automatic evaluation of organized basketball activity using bayesian networks. In Citeseer, 2007.
-  H. Pirsiavash, C. Vondrick, and A. Torralba. Assessing the quality of actions. In ECCV, 2014.
-  S. Sadanand and J.J. Corso. Action bank: A high-level representation of activity in video. In CVPR, 2012.
-  Pierre Sermanet, Andrea Frome, and Esteban Real. Attention for fine-grained categorization. arXiv, 2014.
-  Kelvin Xu, et al. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint, 2015.
-  Karen Simonyan and Andrew Zisserman. Two-stream convolutional networks for action recognition in videos. In NIPS, 2014.
-  Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. CRCV-TR-12-01, 2012.
-  N. Srivastava and R. Salakhutdinov. Multimodal learning with deep boltzmann machines. In NIPS, 2012.
-  Lin Sun, Kui Jia, Dit-Yan Yeung, and Bertram E Shi. Human action recognition using factorized spatio-temporal convolutional networks. In CVPR, 2015.
-  D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3d convolutional networks. In ICCV, 2015.
-  Du Tran, Lubomir D. Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. C3D: generic features for video analysis. arXiv preprint, 2014.
-  H. Wang and C. Schmid. Action recognition with improved trajectories. In ICCV, 2013.
-  L. Wang, Y. Qiao, and X. Tang. Action recognition with trajectory-pooled deep-convolutional descriptors. In CVPR, 2015.
-  Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool. Temporal segment networks: Towards good practices for deep action recognition. In ECCV, 2016.
-  Xiaolong Wang, Ali Farhadi, and Abhinav Gupta. Actions ~ transformations. In CVPR, 2016.
-  Yunbo Wang, Mingsheng Long, Jianmin Wang, and Philip S. Yu. Spatiotemporal pyramid network for video action recognition. In CVPR, 2017.
-  Z. Wu, Y.-G. Jiang, X. Wang, H. Ye, and X. Xue. Multi-stream multi-class fusion of deep networks for video classification. In ACM Multimedia, 2016.
-  W. Yang, Y. Wang, and G. Mori. Recognizing human actions from still images with latent poses. In CVPR, 2010.
-  B. Yao and L. Fei-Fei. Action recognition with exemplar based 2.5d graph matching. In ECCV, 2012.
-  G. Ye, D. Liu, I.-H. Jhuo, and S.-F. Chang. Robust late fusion with rank minimization. In CVPR, 2012.
-  H. Ye, Z. Wu, R.-W. Zhao, X. Wang, Y.-G. Jiang, and X. Xue. Evaluating two-stream cnn for video classification. In ACM ICMR, 2015.
-  Yunan Ye, Zhou Zhao, Yimeng Li, Long Chen, Jun Xiao, and Yueting Zhuang. Video question answering via attribute-augmented attention network learning. In SIGIR, 2017.
-  Bowen Zhang, Limin Wang, Zhe Wang, Yu Qiao, and Hanli Wang. Real-time action recognition with enhanced motion vector cnns. In CVPR, 2016.
-  Wangjiang Zhu, Jie Hu, Gang Sun, Xudong Cao, and Yu Qiao. A key volume mining deep framework for action recognition. In CVPR, 2016.