Ontology Based Global and Collective Motion Patterns for Event Classification in Basketball Videos

03/16/2019
by Lifang Wu, et al.

In multi-person videos, especially team sport videos, a semantic event typically appears as a confrontation between two teams of players, which can be characterized as collective motion. In broadcast basketball videos, specific camera motions are used to present specific events. Therefore, a semantic event in broadcast basketball videos is closely related to both the global motion (camera motion) and the collective motion. A semantic event in basketball videos can generally be divided into three stages: pre-event, event occurrence (event-occ), and post-event. In this paper, we propose an ontology-based global and collective motion pattern (On_GCMP) algorithm for basketball event classification. First, a two-stage GCMP-based event classification scheme is proposed, where the GCMP is extracted using optical flow. The two-stage scheme progressively combines a five-class event classification algorithm on event-occs and a two-class event classification algorithm on pre-events. Both algorithms utilize sequential convolutional neural networks (CNNs) and long short-term memory (LSTM) networks to extract the spatial and temporal features of the GCMP for event classification. Second, the post-event segments are used to predict success or failure with an algorithm based on deep features of the RGB video frames (RGB_DF_VF). Finally, the event classification and success/failure classification results are integrated to obtain the final result. To evaluate the proposed scheme, we collected a new dataset called NCAA+, which is obtained automatically from the NCAA dataset by extending the video clips by a fixed length forward and backward around the corresponding semantic events. The experimental results demonstrate that the proposed scheme achieves a mean average precision of 59.22% on NCAA+, outperforming the state-of-the-art on NCAA.
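As a rough illustration of the kind of pipeline the abstract describes (a CNN applied to each optical-flow frame, an LSTM over the resulting per-frame features, and a classification head), here is a minimal sketch in PyTorch. It is not the authors' On_GCMP implementation; the 2-channel flow input, layer sizes, 112x112 resolution, and five-class head are illustrative assumptions.

# Minimal sketch of a CNN + LSTM classifier over an optical-flow-based
# motion-pattern sequence (hypothetical; not the authors' On_GCMP code).
import torch
import torch.nn as nn

class FlowCNNLSTMClassifier(nn.Module):
    def __init__(self, num_classes=5, hidden_size=256):
        super().__init__()
        # Per-frame CNN over 2-channel optical flow (dx, dy).
        self.cnn = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),          # -> (B*T, 64, 1, 1)
        )
        # LSTM aggregates the per-frame spatial features over time.
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden_size,
                            batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, flow_seq):
        # flow_seq: (B, T, 2, H, W) stack of optical-flow frames
        b, t, c, h, w = flow_seq.shape
        feats = self.cnn(flow_seq.view(b * t, c, h, w)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)        # last hidden state summarizes the clip
        return self.fc(h_n[-1])               # (B, num_classes) event logits

# Example: a batch of 4 clips, each with 16 flow frames at 112x112.
logits = FlowCNNLSTMClassifier()(torch.randn(4, 16, 2, 112, 112))
print(logits.shape)                           # torch.Size([4, 5])

In the paper's two-stage scheme, one such classifier (five classes) would run on the event-occ segments and a second (two classes) on the pre-event segments, with the success/failure prediction from post-event RGB frames fused afterwards.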
