Unsupervised Temporal Feature Aggregation for Event Detection in Unstructured Sports Videos

by Subhajit Chaudhury, et al.

Image-based sports analytics enable automatic retrieval of key events in a game to speed up the analytics process for human experts. However, most existing methods focus on structured television broadcast video datasets with a straight, fixed camera and minimal variability in the capturing pose. In this paper, we study the case of event detection in sports videos for unstructured environments with arbitrary camera angles. The transition from structured to unstructured video analysis produces multiple challenges that we address in our paper. Specifically, we identify and solve two major problems: unsupervised identification of players in an unstructured setting and generalization of the trained models to pose variations due to arbitrary shooting angles. For the first problem, we propose a temporal feature aggregation algorithm using person re-identification features to obtain high player retrieval precision by boosting a weak heuristic scoring method. Additionally, we propose a data augmentation technique, based on a multi-modal image translation model, to reduce bias in the appearance of training samples. Experimental evaluations show that our proposed method improves precision for player retrieval from 0.78 to 0.86 for obliquely angled videos. Additionally, we obtain an improvement in F1 score for rally detection in table tennis videos from 0.79 using global frame-level features to 0.89 using our proposed player-level features. Please see the supplementary video submission at https://ibm.biz/BdzeZA.




I Introduction

Recent growth in sports-related content has provided teams with a plethora of data for concise and robust sports analytics to improve their game plan. However, analyzing an exhaustive list of such videos by manual inspection is an insurmountable task that would require an enormous amount of manpower and resources. A viable solution to this problem is to automatically extract interesting points in the game as a preliminary stage using a machine learning system. Human analysts can then use the automatically extracted video excerpts to provide their expert views on selected portions of the video. For example, in the game of soccer, the task of detecting corner kicks can be automated by an image-based sports analytics system, while more subtle details like player positioning, kick angle, and opponent strength can be analyzed by humans, thus speeding up the entire analytics process.

Fig. 1: Overview of the proposed event detection system for table tennis videos. We perform a temporal feature aggregation and clustering-based player retrieval and data augmentation by using a multi-modal image translation model. Training is performed on the augmented data to learn a rally detector.

With current breakthroughs in image classification [20, 8, 10], object recognition [17, 18, 15] and activity recognition [21, 26], it is feasible to use such representation learning-based systems for auto-tagging and segmenting sports videos. There have been notable works in the direction of automated tagging of sports videos [31, 30, 5], which use motion-based segmentation models and activity recognition with deep learning features for key-point extraction in videos. While these methods provide useful tools for sports analytics, they tend to focus primarily on structured video settings, like TV broadcasts or special camera setups, and assume that a large number of labels are available at training time. However, a major portion of sports-based content on the Internet is unstructured, captured from arbitrary shooting angles. Thus, the existing methods are not suitable for handling such videos captured in the wild, and our main focus in this paper is to study the challenges in analyzing such unstructured videos to develop a robust and accurate event detection system for such scenarios.

Fig. 2: Our method handles arbitrary camera poses compared to structured settings like front or top camera in prior works.

In this paper, we focus on the problem of rally detection in table tennis videos captured from arbitrary camera angles with real-time performance ( FPS) to provide human analysts with a game summary soon after the game ends. Figure 1 shows an overview of our pipeline for rally detection. We train a detector model that uses player-level image features to classify whether a window of video frames belongs to a rally event or not. We identify two major problems related to our application. Firstly, we do not have access to player bounding box annotations during training, and hence we need to perform player retrieval from each image frame in an unsupervised manner without any user intervention. Secondly, our dataset has high variation in camera pose but limited variation in appearance features such as color, which makes it liable to overfitting. Figure 2 shows the comparison between structured and unstructured settings of video capture for sports applications.

For the first problem, our method finds all “person” bounding boxes in each frame using a pre-trained object detector. These are then ranked in order of likelihood of being a player using a heuristic scoring mechanism. However, such a heuristic ranking method is weak and returns multiple noisy detections (like background audience, other players, coaches, etc.). To address this issue, we perform temporal feature aggregation to boost the performance of the weak heuristic score. The idea is that aggregating person re-identification features of the cropped player bounding boxes from multiple temporally separated frames forms two dominant clusters in a latent representation space, and the proximity of candidate detections to these cluster centers can be used to boost the simple heuristics-based method. To address the second issue of preventing our model from overfitting to appearance features in the training set, we perform data augmentation using Multimodal UNsupervised Image Translation (MUNIT) [11]. The translation model preserves the content of the training samples (like player pose and actions) while changing the color distribution of each sample according to a randomly drawn style code, thus increasing variability in appearance-based features.

Experimental evaluations indicate that our proposed temporal feature aggregation method can improve the precision of player retrieval. Similarly, our rally detector model trained using player-level features can improve F1 Score from 0.79 to 0.89 when compared to the baseline global frame-level feature-based detectors. In summary, we make the following 3 major contributions in this paper:

  • We propose a novel algorithm for heuristic score boosting using temporal feature aggregation based outlier rejection for player retrieval without user intervention.

  • We propose a data augmentation method using an image translation model retaining the base content of the samples while increasing variability in appearance features to improve model generalization to unseen test cases.

  • We present a real-time automatic rally scene detection method from unstructured table tennis videos shot at arbitrary camera angles.

II Related Works

Image-based sports analysis has been extensively used for information extraction from various sports. Literature in this area can broadly be categorized into three approaches: (1) using general action recognition methods, (2) detecting the location of the ball or shuttle, (3) analyzing the characteristics of the players. We provide details of each area below.

Using general action recognition: Methods in this category use techniques from the action recognition literature [21, 26, 13] to detect important events in sports videos. Ramanathan et al. [16] proposed a model that learns to detect events and key actors in basketball games by tracking players using Recurrent Neural Networks (RNN) and attention. Singh et al. [23] performed action recognition using feature trajectories of images from a first-person view camera. Another class of methods, based on the ActivityNet [3] dataset, performs temporal segmentation as a sub-problem. However, such methods use complex network architectures and multiple features like RGB + optical flow, which increases computational complexity at inference. In contrast, our application requires fast real-time performance, allowing the coach or analyst to obtain the game summary as the game progresses; thus, such methods are not suitable in the scope of this application.

Detecting the location of the ball: Reno et al. [19] used a Convolutional Neural Network (CNN) to detect the trajectory of a tennis ball in a video sequence with high accuracy. Dierickx [2] detected the 3D trajectory of a shuttlecock to determine different stroke types in badminton. Vučković et al. [28] assessed tactics in squash by considering ball locations when shots are played. Tamaki et al. [25] detected the table tennis ball using two RGB cameras or a single RGB-D camera with known table corner positions and camera parameters, whereas in our problem setting we aim to detect events in a single RGB video without such meta-information.

Analyzing characteristics of players: Weeratunga et al. [29] proposed a method to analyze badminton tactics from video sequences based on the players’ trajectories during play. They manually segmented the video into meaningful sub-sequences, represented the individual trajectories as numerical strings, and used k-Nearest Neighbor (k-NN) classification to analyze player movement tactics. Ghosh et al. [4] temporally segmented the rallies in the video using scores shown on the TV broadcast screen and refined the recognized result using a tennis scoring system; they built a system providing a simple interface for humans to retrieve and view relevant video segments of a match. Ghosh et al. [5] improved the temporal segmentation of points using machine learning, both traditional and deep learning techniques, to classify a frame into a point or a non-point frame.

Our work has two major differences from the above prior works. Firstly, since we do not have access to player bounding box annotations during training, we propose a novel unsupervised temporal feature clustering method for high-precision player retrieval. Secondly, our dataset shows a high degree of pose variation with limited color variation, making our model liable to overfitting, which we address using a Generative Adversarial Network (GAN)-based image translation method.

III Table Tennis Video Dataset

Our video dataset consists of 72 videos of table tennis matches in total, each approximately 1 hour long. Every video is of a singles game. The camera always points at the table, with arbitrary elevation and azimuth of camera pose. We roughly define videos as straight where the azimuth is between to and oblique if it is greater than by manual inspection. Among all the videos, 50 were classified as straight videos and the other 22 were oblique videos. We divided the videos into 60 training, 6 testing and 6 validation samples. The testing and validation videos were each divided into 3 straight and 3 oblique videos. Some screenshots from the videos in our dataset are shown in Figure 2 (c).

In table tennis, sports analysts are mostly interested in detecting rallies, defined as the durations when the ball is in play on the table. Our problem setting involves taking a raw video as input and performing a temporal segmentation of rally scenes in real-time. During training, we assume that only annotations in the form of rally start and end times are provided; no annotated bounding boxes showing player locations in the video are given.

IV Proposed Method

IV-A Unsupervised player retrieval

In the presence of annotated player bounding boxes, a prior method [5] trains an object detector specifically for player detection. In our application, we do not have access to annotated bounding boxes (BB) during training. Our application demands that we do not use any form of user interaction to annotate player BBs, and hence we opt for a completely unsupervised approach to player retrieval. We use a pre-trained object detector to detect all BBs corresponding to the “person” class in each frame. Next, we use a heuristic score based ranking method to obtain the top two scoring BBs as the players. However, such a heuristic score is weak and produces low precision rates. We propose a feature clustering-based approach to boost the performance of the heuristic score by identifying two dominant clusters in detected ROIs across multiple temporally separated frames, where we refer to the cropped image inside each BB as a Region Of Interest (ROI).

IV-A1 Table detection

Players usually occupy positions close to the table during a game, and hence it is important to extract the location of the table as our first step. We find that the table is located close to the image center in most straight videos. However, this is not always the case for oblique videos, thus requiring a general table detection method for finding the distance between players and the table center. We used a Mask R-CNN [6] network pre-trained on the MS-COCO [14] dataset to obtain the mask corresponding to the table by detecting the “bench” class label. The table center for each video is subsequently computed as a weighted mean, $c_x = \sum_i m_i x_i / \sum_i m_i$ and $c_y = \sum_i m_i y_i / \sum_i m_i$, where $m_i$ is the mask value (1 for occupancy and 0 otherwise) and $x_i$, $y_i$ are the abscissa and ordinate of the $i$-th pixel in the image.
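As a concrete sketch, the weighted-mean table center can be computed from a binary occupancy mask (such as one produced by Mask R-CNN) as follows; the helper name is ours, not the paper's:

```python
import numpy as np

def table_center(mask):
    """Weighted-mean table center: c_x = sum(m_i x_i)/sum(m_i),
    c_y = sum(m_i y_i)/sum(m_i), for a binary HxW occupancy mask."""
    m = mask.astype(float)
    ys, xs = np.indices(m.shape)   # pixel ordinates and abscissae
    w = m.sum()
    return (m * xs).sum() / w, (m * ys).sum() / w
```

For a strictly 0/1 mask this reduces to the centroid of the occupied pixels, but the weighted form also accepts soft masks.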

IV-A2 Heuristic score for player retrieval

We use an object detector network pre-trained on the MS-COCO [14] dataset for person BB detection in each frame. As seen in Figure 3, multiple person BBs are detected by the object detector, and our objective is to retain only the two BBs corresponding to the players.

Let $r_i$ be the $i$-th ROI in a frame. We assign a score to each ROI based on heuristics. Firstly, we found that in most cases the probability that a detected ROI is a “Person”, $p_i$, is higher for the player BBs than for the other peripheral detections. Secondly, players usually occupy the playing area in close proximity to the table center $c_t$, and hence a player BB's distance from the table center should be smaller than that of other BBs. We use a weighted combination of these two factors to compute a score for the $i$-th bounding box in each frame, given as

$$s_i = \alpha\, p_i - \beta\, \lVert c_i - c_t \rVert_2,$$

where $\alpha$, $\beta$ are positive coefficients and $c_i$ is the center of the $i$-th BB. The top two bounding boxes having the highest score in each video frame are chosen for player retrieval. However, this heuristics-based method produces a considerable number of false positive detections, reducing the precision of player retrieval. Next, we describe the clustering-based score boosting method for improving the precision of detection.
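As a sketch, this ranking step might be implemented as follows; the coefficient values and function names are illustrative assumptions (the paper does not state its coefficients):

```python
import numpy as np

def heuristic_score(person_prob, bb_center, table_center, alpha=1.0, beta=0.01):
    """Weighted combination of detector confidence and proximity to the table.
    alpha, beta are the positive coefficients from the text; the values here
    are illustrative, not the paper's."""
    d = np.linalg.norm(np.asarray(bb_center, float) - np.asarray(table_center, float))
    return alpha * person_prob - beta * d

def top_two_players(detections, table_center):
    """detections: list of (person_prob, (cx, cy)) pairs.
    Returns the indices of the two highest-scoring boxes."""
    scores = [heuristic_score(p, c, table_center) for p, c in detections]
    return list(np.argsort(scores)[::-1][:2])
```

A distant detection (e.g. an audience member) is penalized by the distance term even when its person probability is high.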

Fig. 3: Illustration of unsupervised player retrieval from video frames. Top: multiple “Person” bounding boxes from the pretrained object detector. Bottom: samples obtained using temporal aggregation and clustering. We successfully cluster players as two dominant clusters (top 2 rows) with the highest scores and outliers as the third cluster (bottom row).

IV-A3 Score boosting by temporal feature aggregation

The heuristic scoring method described in the previous section is a weak metric for ranking detected BBs for player extraction, resulting in numerous outlier retrievals as shown in Figure 3. To mitigate this problem, we exploit temporal information in the retrieved BB distribution across frames and perform a clustering-based outlier rejection step to boost the performance of the heuristic score. The idea is to cluster all detected BBs across the temporal axis of the video and identify the two dominant clusters with the highest cumulative score distribution. Using this information, we update the heuristic score to include a similarity measure to the cluster centers, which is then used to re-rank the detected BBs and remove major outliers.

We uniformly sample $N$ frames from each video, assuming that in most frames the two players will be present close to the table (while in the game). For each frame, we crop all the “person” ROIs and rank them using the heuristic score based method, retaining the top two candidates. These ROIs are pooled together across frames and stored in a buffer $B$. The goal now is to perform clustering and estimate the two cluster centers corresponding to the two players.

From the cropped ROIs we extract person re-identification features. Since such features are invariant to varying human poses, features corresponding to the players are likely to form one cluster per person; similarly, features corresponding to outliers also form one or more small clusters. We use a part-model-based person re-identification feature extractor [24], which exhibits state-of-the-art performance. We represent the person re-identification transformation as $\phi$ and compute the features as $f_j = \phi(r_j)$. Thus we represent the feature-space distribution as a mixture of Gaussians, where the maximum number of component Gaussians $K$ is unknown:

$$p(f) = \sum_{k=1}^{K} \pi_k\, \mathcal{N}(f; \mu_k, \Sigma_k).$$
The objective is to discover the two player cluster centers, which can be used to boost the weak heuristic scores for better player retrieval. We initially cluster the feature space using Expectation Maximization, with the number of clusters set to 3, assuming that the two clusters with the highest cumulative heuristic score are the player clusters. However, we found some cases where the two player ROIs are very similar (due to similar jerseys, lighting conditions, etc.) and are merged into the first dominant cluster, with outlier ROIs detected as the second cluster. To alleviate this situation, we use a fitness measure that checks whether the two dominant clusters have almost equal score distributions. Thus, if one cluster contains player ROIs, which have a high score on average, and the second cluster predominantly contains low-scoring outliers, the fitness measure will be much smaller than the healthy value of 1. In such a case, we again perform clustering after eliminating the feature points in the lowest-scoring cluster. This method is described in Algorithm 1.

0:  Input: video; low and high thresholds $\tau_{lo}$, $\tau_{hi}$ for the fitness measure
1:  Extract $N$ video frames
2:  for $i$ from $1$ to $N$ do
3:     - Extract the top two BB candidates using the heuristic score and store the corresponding ROIs and scores in buffers $B$ and $S$
4:  end for
5:  Extract person re-identification features $f_j = \phi(r_j)$ for the ROIs in $B$
6:  Initialize fitness score $F = 0$ and number of clusters $K = 3$
7:  while $F < \tau_{lo}$ or $F > \tau_{hi}$ do
8:     - Compute Expectation Maximization on the person re-identification features $\{f_j\}$ to find cluster centers $\mu_1, \dots, \mu_K$
9:     - Compute the mean score of all points belonging to each cluster and sort in descending order to get $\bar{s}_{(1)} \ge \dots \ge \bar{s}_{(K)}$ with sorted cluster centers $\mu_{(1)}, \dots, \mu_{(K)}$
10:    - Compute the fitness score $F = \bar{s}_{(2)} / \bar{s}_{(1)}$, set $K \leftarrow K - 1$, and eliminate the feature points of the lowest-scoring cluster
11:  end while
12:  Compute the boosted score for the $j$-th bounding box in an image frame as $s'_j = s_j + \gamma \max_{k \in \{1, 2\}} \mathrm{sim}(f_j, \mu_{(k)})$, where $f_j$ is the person re-id feature for the $j$-th ROI
Algorithm 1 Temporal feature aggregation for heuristic score boosting

The above algorithm extracts the player clusters having the highest cumulative score. In our experiments, we found that K-Means and Expectation Maximization can be used interchangeably, yielding similar results. The old heuristic score is updated with a similarity measure to the dominant player cluster centers, thus boosting performance using an additional appearance-based similarity measure. The weight on this cluster-similarity term is usually kept high to emphasize the component of the score computing distance from the cluster centers. The temporal feature clustering is performed as a pre-processing step, and the boosted score from the above algorithm is used to rank the detected BBs in each frame for player retrieval. This process is illustrated in Figure 3.
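A minimal sketch of Algorithm 1 in NumPy, with K-Means standing in for EM (the text notes they perform interchangeably). The fitness threshold `tau`, the similarity weight `lam`, and the negative-distance similarity are our illustrative assumptions:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Deterministic K-Means with farthest-point initialization."""
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    return labels, centers

def player_clusters(feats, scores, k=3, tau=0.5):
    """Cluster pooled re-id features; keep the two clusters with the highest
    mean heuristic score. If the fitness (ratio of the two top mean scores)
    is below tau, drop the lowest-scoring cluster and re-cluster with k-1."""
    feats, scores = np.asarray(feats, float), np.asarray(scores, float)
    while True:
        labels, centers = kmeans(feats, k)
        mean_scores = np.array([scores[labels == j].mean() for j in range(k)])
        order = np.argsort(mean_scores)[::-1]
        fitness = mean_scores[order[1]] / mean_scores[order[0]]
        if fitness >= tau or k == 2:
            return centers[order[:2]]   # the two player cluster centers
        keep = labels != order[-1]      # eliminate the lowest-scoring cluster
        feats, scores, k = feats[keep], scores[keep], k - 1

def boosted_score(h_score, feat, player_centers, lam=10.0):
    """Heuristic score plus a (negative-distance) similarity to the nearest
    dominant cluster center, weighted by a large lam as the text suggests."""
    sim = max(-np.linalg.norm(np.asarray(feat) - c) for c in player_centers)
    return h_score + lam * sim
```

With well-separated player features, an outlier detection receives a heavily penalized boosted score even if its heuristic score is moderate.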

Fig. 4: Sample images generated using the GAN-based multi-modal image translation model. The top row shows true training samples; the last two rows show generated images with the same content information but randomly sampled style vectors.

IV-B Training samples

We extract training samples by cutting small contiguous sub-video windows around the annotated start and end points of each rally in the video. We use windows around the start point as positive samples and those around the end point as negative samples. This forces the model to produce a high detection probability at the start of a rally and a low probability score at the end points. Additionally, we also use randomly sampled points in the “non-rally” regions to mine negative examples. Since the variation in the negative samples is much larger (players walking, resting, or any other action) than in the positive samples (which mostly consist of a serve, the shots played, and the end of the rally), the number of negative samples in our dataset is five times the number of positive samples. We used a window of size seconds around the annotated points for sample extraction.
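The sampling scheme above can be sketched as follows; the window length, frame indexing, and function name are illustrative assumptions (the paper's window size is elided in the text):

```python
import numpy as np

def extract_samples(n_frames, fps, rallies, win_sec=2.0, neg_per_pos=5, seed=0):
    """Cut fixed-size windows around annotated rally starts (positive) and
    ends (negative), then mine extra negatives from non-rally regions.
    `rallies` is a list of (start_frame, end_frame) annotations;
    win_sec=2.0 is a placeholder for the paper's elided window size."""
    rng = np.random.default_rng(seed)
    w = int(win_sec * fps)
    pos = [(s, s + w) for s, _ in rallies]   # windows at rally starts
    neg = [(e, e + w) for _, e in rallies]   # windows at rally ends
    in_rally = np.zeros(n_frames, bool)
    for s, e in rallies:
        in_rally[s:e] = True
    # mine random non-rally negatives until the 5:1 negative ratio is reached
    while len(neg) < neg_per_pos * len(pos):
        t = int(rng.integers(0, n_frames - w))
        if not in_rally[t:t + w].any():
            neg.append((t, t + w))
    return pos, neg
```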

Our goal is to learn a binary classifier that detects whether a sample is within a rally or not. During inference, this model is used in a sliding window fashion to detect the temporal locations of rallies in test videos. We classify the extracted features using a sigmoid-activated LSTM [9] based video activity recognition network inspired by [27]. We found that a bidirectional LSTM did not give much improvement in performance, and hence we used a uni-directional LSTM in our application.
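For illustration, a minimal uni-directional LSTM with a sigmoid output head can be written in plain NumPy as below. The layer sizes and random initialization are our assumptions (the paper's hidden sizes are elided), and a real implementation would use a deep learning framework:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

class TinyLSTMClassifier:
    """Single-layer uni-directional LSTM whose last hidden state feeds a
    sigmoid unit, sketching the rally/non-rally classifier."""
    def __init__(self, d_in, d_hid, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0, 0.1, (4 * d_hid, d_in + d_hid))
        self.b = np.zeros(4 * d_hid)
        self.w_out = rng.normal(0, 0.1, d_hid)
        self.d_hid = d_hid

    def forward(self, x_seq):
        h = c = np.zeros(self.d_hid)
        for x in x_seq:                       # iterate over time steps
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, o, g = np.split(z, 4)       # input, forget, output, cell gates
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
        return sigmoid(self.w_out @ h)        # P(rally) from last hidden state
```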

IV-C Multi-modal image translation for improved generalization

The number of obliquely angled videos in our training set is limited to 16, compared to 44 straight-capture videos. Although there is high variation in camera pose across these few oblique videos, there is limited variation in the color distribution, which poses a risk of over-fitting to the training set.

To alleviate this problem, we present a data augmentation scheme that introduces variability in image appearance across training samples by artificially generating images using GANs. Our objective is to introduce variations in appearance only, while keeping the content (the players' poses and actions) of the samples unchanged. The Multimodal UNsupervised Image Translation (MUNIT) model [11] is used for data generation. We arrange the positive training samples from straight and oblique videos into two domains, and image translation is performed between them.

Average Precision (AP)
           Heuristic | Boosted
Oblique    0.78      | 0.86
Straight   0.95      | 0.98
Combined   0.86      | 0.92
(a) Player retrieval Average Precision (AP) for heuristic and boosted scores by temporal aggregation.
(b) Percentage of videos having precision greater than the threshold on the x-axis. A higher percentage at higher thresholds is better.
Fig. 5: Evaluation of unsupervised player retrieval.

Given two domains $X_1$ and $X_2$, the main idea is to factorize the latent representation of an image $x_i \in X_i$ into a content code $c_i = E_i^c(x_i)$ and a style code $s_i = E_i^s(x_i)$, where $E_i^c$ and $E_i^s$ are the content and style encoders, respectively, for the $i$-th domain. With $G_i$ as the decoder for the $i$-th domain and $q(s_i)$ as the prior distribution of style codes for the respective domains, training is performed by minimizing three kinds of loss: image reconstruction loss, latent-space reconstruction loss, and adversarial loss for cross-domain distribution matching. During inference, image-to-image translation is performed using the content encoder of one domain and a randomly drawn style code from the other domain, as $x_{1 \to 2} = G_2(E_1^c(x_1), s_2)$ with $s_2 \sim q(s_2)$, and vice versa. For more details on the image translation model, we refer the reader to the original paper by Huang et al. [11]. Figure 4 shows generated samples for two randomly drawn style codes in both directions, straight to oblique and vice versa.

To perform data augmentation, we generate one synthetic copy of every positive training sample, ensuring that the style code remains the same for all image frames within a sample. As a result, we double the number of positive samples, improving the variability of appearance features in the training data. Note that an arbitrarily large number of style codes could be used to increase appearance variability even further; however, that would require more memory resources.
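The style-consistent augmentation loop might look like the following sketch, where `translate` stands in for the trained MUNIT translation (content encoder of one domain plus decoder of the other) and `style_prior` for the style-code prior; both are hypothetical stand-ins, not the paper's API:

```python
import numpy as np

def augment_positive_samples(samples, translate, style_prior, seed=0):
    """One synthetic copy per positive sample, with a single style code held
    fixed across all frames of a sample so appearance changes coherently.
    `translate(frame, style)` and `style_prior(rng)` are hypothetical
    stand-ins for the trained MUNIT components."""
    rng = np.random.default_rng(seed)
    out = []
    for frames in samples:
        s = style_prior(rng)                       # one style code per sample
        out.append([translate(f, s) for f in frames])
    return out
```

Drawing the style code once per sample, rather than once per frame, is what keeps motion cues intact while only the appearance varies.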

V Experimental Results

In this section, we explain the baseline methods used and discuss the quantitative and qualitative results. We perform several experiments to seek answers to the following questions,

  • Does the temporal feature aggregation based score boosting improve the precision of player retrieval?

  • Are player-level features better for rally detection than frame-level global features?

  • Does data augmentation using a multi-modal image translation model improve F1 score by reducing over-fitting?

  • What is the precision and recall of the trained model in terms of temporal segmentation accuracy?

Method                      | AUROC | Oblique Pr/R/F1    | Straight Pr/R/F1   | Combined Pr/R/F1
Frame-level (st)            | 0.950 | 0.51 / 0.79 / 0.59 | 0.81 / 0.92 / 0.86 | 0.62 / 0.85 / 0.71
Frame-level (+ obl)         | 0.978 | 0.56 / 0.92 / 0.70 | 0.70 / 0.99 / 0.82 | 0.62 / 0.95 / 0.75
Proposed (player-level, st) | 0.984 | 0.93 / 0.76 / 0.83 | 0.80 / 0.92 / 0.85 | 0.85 / 0.84 / 0.85
Proposed (+ obl)            | 0.989 | 0.91 / 0.90 / 0.91 | 0.82 / 0.85 / 0.83 | 0.86 / 0.88 / 0.87
Proposed (+ aug)            | 0.993 | 0.87 / 0.95 / 0.91 | 0.80 / 0.89 / 0.84 | 0.84 / 0.92 / 0.88
TABLE I: Sample-wise evaluation on the test set. The first two rows use frame-level (baseline) features; the last three rows use player-level (proposed) features. “+” means addition of a training setting to the previous row. The proposed models give better F1 score and AUROC than the baseline. (“Pr”: precision, “R”: recall; “st” and “obl”: training with straight and oblique videos.)

V-A Experimental settings

We set up baseline methods for evaluating the performance of our method. To examine the necessity of unsupervised player retrieval, we consider two cases. In the first case, we do not use player BB information and train our detector on global frame-level features; in the second case, we use our proposed player-level features to train the detector as described in Section IV-B. In both cases, we train the above models both on straight-camera videos alone and on the combination of straight and obliquely angled videos. Additionally, we also train with multimodal image-translation-based data augmentation.

We used a VGG19 [22] network, pre-trained on the ImageNet dataset, for feature extraction. Additionally, we also tried ResNet [7], but VGG19 produced better results. Video frames were resized to RGB images for extracting global frame-wise features in the case of the baseline methods. For player-level features, we used the YOLO object detector [17] pre-trained on the MS-COCO dataset [14]. For the LSTM based classifier, we use hidden layers, each with and -activated units respectively, followed by a fully connected layer with a single sigmoid-activated output unit. We used a frameskip of frames. Therefore, each data point in our training is a tensor of size which, after computing VGG features, is transformed to a dimensional tensor.

V-B Evaluation of unsupervised player retrieval

To evaluate the precision of our proposed unsupervised player retrieval, we randomly select 10 oblique and 10 straight videos. For each video, we uniformly sample frames and extract the two players from each such frame using the heuristic score method and the clustering-based boosted score method. We manually count the number of false positive (FP) detections (images that are not players, e.g. coaches, audience, etc.) among the cropped image ROIs. Precision is defined as $P = (N - FP)/N$, where $N$ is the total number of retrieved ROIs. We report the average precision values for the oblique and straight video cases in Table 5(a).

The proposed score boosting method has a higher player retrieval rate than the heuristic score in every setting. Particularly in the oblique case, the score-boosting method significantly improves the precision of player detection from 0.78 to 0.86, showing that the heuristic score based method produces numerous outlier detections that are successfully removed by the proposed method. Additionally, we plot the percentage of videos (both oblique and straight) having precision greater than thresholds from 0.75 to 0.95 in intervals of 0.05, as shown in Figure 5 (b). The proposed score boosting method is shown to have a greater percentage of videos with high precision, making it suitable for our application of unsupervised player extraction.

V-C Evaluation of rally detection

In our experiments, we use two kinds of evaluations to compare the efficiency of our method to the baseline. We use precision, recall, and F1-score as the metric of comparison in both cases.

V-C1 Sample-wise evaluation

Here, we use our learned classifier to predict the class labels on the test samples. Performance on the test set is shown in Table I, and Figure 6(a) shows the ROC plot. Our proposed player-level feature-based method produces an F1 score of 0.87, compared to 0.75 for the model trained on frame-level features, and has the best AUROC among the compared methods. Thus, player-level features generalize better than global frame-level features. Ablation studies show that adding oblique videos to the training set improves performance on oblique test cases, while performance on straight cases decreases. The proposed data augmentation method is also shown to improve the F1 score. A possible reason for the limited improvement may be the lack of color variation between the original training and test sets in our particular dataset.

(a) ROC curve
(b) Temporal evaluation (serve)
(c) Temporal evaluation (end of rally)
Fig. 6: Evaluation of the proposed method. (a) Our method has the best AUROC score compared to the baseline methods. Plots of F1 score at various thresholds for (b) serve-point detection and (c) end-of-rally detection.

V-C2 Temporal segmentation evaluation

This evaluation measures how well the detector localizes rallies in time. For that purpose, we first apply the learned rally detector in a sliding window fashion over the entire length of the input video and record the confidence of rally detection at each timestep. The sliding window method provides contiguous confidence regions in time depicting the duration of a single rally; these are thresholded, and the rising and falling edges are detected as serve points and end-of-rally points. To evaluate serve and end-of-rally detection accuracy, we consider a tolerance window of 3 seconds around each human-annotated serve point and end-of-rally point. Point detections that lie within such a tolerance window are considered true positive detections; the other positively detected points are false positives.
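The thresholding and edge detection step can be sketched as follows (the threshold value is illustrative):

```python
import numpy as np

def segment_rallies(conf, thresh=0.5):
    """Threshold per-timestep rally confidence; rising edges are serve
    points and falling edges are end-of-rally points (frame indices)."""
    above = (np.asarray(conf) >= thresh).astype(int)
    edges = np.diff(above, prepend=0, append=0)   # +1 rising, -1 falling
    serves = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    return list(zip(serves, ends))
```

Each returned pair spans one contiguous high-confidence region, i.e. one detected rally.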

We use a range of threshold values and plot the corresponding F1 scores in Figure 6 (b) and Figure 6 (c) for serve and end-of-rally detections, respectively. Our proposed model, trained on both straight and oblique videos, consistently performs better than the baseline method for all thresholds, thus showing its superiority in performance. Our proposed method outperforms the baseline models (using frame-level features) due to robust local player-level features. The frame-level methods are inefficient at capturing robust image features and are therefore not invariant to viewpoint transformations. We believe that the baseline methods overfit to the dominant mode of the overall distribution (straight-camera videos) and fail to generalize to the other less common distribution (oblique-camera videos). Additionally, we found that a potential source of false-positive detections for all methods was “let” events, where the rally may end prematurely due to a “foul”. These events were not marked as rally scenes in the ground truth annotations; however, since the players perform a “serve” action, they were detected as rallies by our detector. Please see the attached supplementary video for manual inspection of detection quality.

V-C3 Robustness to shaking camera and occlusion

We synthetically simulate the effect of camera shake by randomly cropping an area of each frame. We also simulate occlusion by artificially adding an occluding box of size (0.4H, 0.2H) at random positions in all video frames of the test set, where H is the height of the video frame. These simulated perturbations test our model’s performance under highly noisy real-world conditions. Qualitative analysis of the attached supplementary videos shows that our method detects most rally scenes even in the presence of noise, although under occlusion noise we find some false-positive detections. We believe our LSTM-based classifier likely learned to extract discriminative features from only a few key frames of the input window; despite erroneous features in some frames, it can still pick up discerning features from the remaining frames, maintaining robust detection.
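The two perturbations can be sketched as simple frame transforms. This is an assumed implementation: the crop ratio for camera shake and the gray fill value are illustrative choices not specified in the text; only the (0.4H, 0.2H) occlusion size comes from the paper.

```python
# Synthetic perturbations: camera shake as a random crop per frame,
# and occlusion as a box of size (0.4*H, 0.2*H) at a random position.
import numpy as np

rng = np.random.default_rng(0)

def shake(frame, crop_ratio=0.9):
    """Randomly crop a crop_ratio-sized window to simulate camera shake."""
    h, w = frame.shape[:2]
    ch, cw = int(h * crop_ratio), int(w * crop_ratio)
    y = rng.integers(0, h - ch + 1)
    x = rng.integers(0, w - cw + 1)
    return frame[y:y + ch, x:x + cw]

def occlude(frame, box_hw=(0.4, 0.2), fill=128):
    """Paste an occluding box whose sides are fractions of frame height H."""
    h = frame.shape[0]
    bh, bw = int(box_hw[0] * h), int(box_hw[1] * h)
    y = rng.integers(0, frame.shape[0] - bh + 1)
    x = rng.integers(0, frame.shape[1] - bw + 1)
    out = frame.copy()
    out[y:y + bh, x:x + bw] = fill
    return out

frame = np.zeros((360, 640, 3), dtype=np.uint8)
cropped = shake(frame)      # shape (324, 576, 3) with crop_ratio=0.9
occluded = occlude(frame)   # same shape as input, with a 144x72 gray box
```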

VI Conclusion

We presented a robust method for event detection in unstructured sports videos. First, we proposed a novel temporal feature aggregation method that boosts heuristic scores to achieve high-precision player retrieval. Additionally, we perform data augmentation using multi-modal image translation to reduce appearance bias in the training set. Our method produces accurate unsupervised player extraction, which in turn enables precise temporal segmentation of rally scenes. Our method is also robust to noise in the form of camera shake and occlusion, making it applicable to videos captured with low-cost commercial cameras. Although we present an application in table tennis, our method can be applied to other racquet sports such as badminton and tennis. In the future, we want to extend this work to videos of doubles games with fine-grained action recognition for detecting various kinds of shots in an unstructured setting. Other extensions might include analyzing unstructured videos of non-racquet sports such as soccer and rugby.


  • [1] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) ImageNet: a large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. Cited by: §V-A.
  • [2] T. Dierickx (2014) Badminton game analysis from video sequences. Master’s Thesis, Universiteit Gent, Faculteit Ingenieurswetenschappen en Architectuur. Note: https://lib.ugent.be/catalog/rug01:002153740 [Online; accessed 01-September-2018] Cited by: §II.
  • [3] B. Ghanem, J. C. Niebles, C. Snoek, F. C. Heilbron, H. Alwassel, V. Escorcia, R. Khrisna, S. Buch, and C. D. Dao (2018) The activitynet large-scale activity recognition challenge 2018 summary. arXiv preprint arXiv:1808.03766. Cited by: §II.
  • [4] A. Ghosh and C. Jawahar (2018) SmartTennisTV: automatic indexing of tennis videos. In Computer Vision, Pattern Recognition, Image Processing, and Graphics: 6th National Conference, NCVPRIPG 2017, Mandi, India, December 16-19, 2017, Revised Selected Papers 6. Cited by: §II.
  • [5] A. Ghosh, S. Singh, and C. Jawahar (2018) Towards structured analysis of broadcast badminton videos. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 296–304. Cited by: §I, §II, §IV-A.
  • [6] K. He, G. Gkioxari, P. Dollár, and R. Girshick (2017) Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2961–2969. Cited by: §IV-A1.
  • [7] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §V-A.
  • [8] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. See DBLP:conf/cvpr/2016, pp. 770–778. External Links: Link, Document Cited by: §I.
  • [9] S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural computation 9 (8), pp. 1735–1780. Cited by: §IV-B.
  • [10] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger (2017) Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §I.
  • [11] X. Huang, M. Liu, S. Belongie, and J. Kautz (2018) Multimodal unsupervised image-to-image translation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189. Cited by: §I, §IV-C, §IV-C.
  • [12] (2013) IEEE conference on computer vision and pattern recognition, CVPR workshops 2013, portland, or, usa, june 23-28, 2013. IEEE Computer Society. External Links: Link, ISBN 978-0-7695-4990-3 Cited by: 25.
  • [13] C. Lea, A. Reiter, R. Vidal, and G. D. Hager (2016) Segmental spatiotemporal cnns for fine-grained action segmentation. In European Conference on Computer Vision, pp. 36–52. Cited by: §II.
  • [14] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft COCO: common objects in context. In European Conference on Computer Vision. Cited by: §IV-A1, §IV-A2, §V-A.
  • [15] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, and A. C. Berg (2015) SSD: single shot multibox detector. arXiv preprint arXiv:1512.02325. ECCV 2016. External Links: Document, Link Cited by: §I.
  • [16] Cited by: §II.
  • [17] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi (2016) You only look once: unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Cited by: §I, §V-A.
  • [18] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster R-CNN: towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems 28, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (Eds.), pp. 91–99. External Links: Link Cited by: §I.
  • [19] V. Reno, N. Mosca, R. Marani, M. Nitti, T. D’Orazio, and E. Stella (2018-06) Convolutional neural networks based ball detection in tennis games. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Cited by: §II.
  • [20] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556. Cited by: §I.
  • [21] K. Simonyan and A. Zisserman (2014) Two-stream convolutional networks for action recognition in videos. In Advances in neural information processing systems, pp. 568–576. Cited by: §I, §II.
  • [22] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §V-A.
  • [23] S. Singh, C. Arora, and C.V. Jawahar (2017) Trajectory aligned features for first person action recognition. Pattern Recognition 62, pp. 45–55. External Links: ISSN 0031-3203, Document, Link Cited by: §II.
  • [24] Y. Sun, L. Zheng, Y. Yang, Q. Tian, and S. Wang (2018) Beyond part models: person retrieval with refined part pooling (and a strong convolutional baseline). In Proceedings of the European Conference on Computer Vision (ECCV). Cited by: §IV-A3.
  • [25] S. Tamaki and H. Saito (2013) Reconstruction of 3d trajectories for performance analysis in table tennis. See 12, pp. 1019–1026. External Links: Link, Document Cited by: §II.
  • [26] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri (2015) Learning spatiotemporal features with 3d convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision. Cited by: §I, §II.
  • [27] A. Ullah, J. Ahmad, K. Muhammad, M. Sajjad, and S. W. Baik (2018) Action recognition in video sequences using deep bi-directional lstm with cnn features. IEEE Access 6, pp. 1155–1166. Cited by: §IV-B.
  • [28] G. Vučković, N. James, M. Hughes, S. Murray, Z. Milanović, J. Perš, and G. Sporiš (2014) A new method for assessing squash tactics using 15 court areas for ball locations. Human Movement Science 34, pp. 81–90. External Links: ISSN 0167-9457, Document, Link Cited by: §II.
  • [29] K. Weeratunga, A. Dharmaratne, and K. B. How (2017-07) Application of computer vision and vector space model for tactical movement classification in badminton. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Vol. , pp. 132–138. External Links: Document, ISSN 2160-7516 Cited by: §II.
  • [30] F. Yan, J. Kittler, D. Windridge, W. Christmas, K. Mikolajczyk, S. Cox, and Q. Huang (2014) Automatic annotation of tennis games: an integration of audio, vision, and learning. Image and Vision Computing 32 (11), pp. 896–903. Cited by: §I.
  • [31] F. Yoshikawa, T. Kobayashi, K. Watanabe, and N. Otsu (2010) Automated service scene detection for badminton game analysis using CHLAC and MRA. World Academy of Science, Engineering and Technology. Cited by: §I.