A Comprehensive Study on Temporal Modeling for Online Action Detection

01/21/2020 ∙ by Wen Wang, et al.

Online action detection (OAD) is a practical yet challenging task, which has attracted increasing attention in recent years. A typical OAD system mainly consists of three modules: a frame-level feature extractor, usually based on pre-trained deep Convolutional Neural Networks (CNNs), a temporal modeling module, and an action classifier. Among them, the temporal modeling module is crucial, as it aggregates discriminative information from historical and current features. Though many temporal modeling methods have been developed for OAD and other topics, their effects have not been fairly investigated for OAD. This paper aims to provide a comprehensive study on temporal modeling for OAD, covering four meta types of temporal modeling methods, i.e. temporal pooling, temporal convolution, recurrent neural networks, and temporal attention, and to uncover good practices for producing a state-of-the-art OAD system. Many of these methods are explored in OAD for the first time, and all are extensively evaluated with various hyperparameters. Furthermore, based on our comprehensive study, we present several hybrid temporal modeling methods, which outperform the recent state-of-the-art methods by sizable margins on THUMOS-14 and TVSeries.




I Introduction

Online action detection (OAD) is an important problem in computer vision with a wide range of applications, such as visual surveillance, human-computer interaction, and intelligent robot navigation. Different from traditional action recognition and offline action detection, which intend to recognize actions from full videos, the goal of online action detection is to detect an action as it happens, and ideally even before the action is fully completed. It is a very challenging problem: in addition to the difficulties of traditional action recognition in untrimmed video streams, there is the extra restriction that only historical and current information may be used.

In general, there exist two OAD tasks, i.e. spatial-temporal online action detection (ST-OAD) and temporal online action detection. Under the online setting, the former aims to localize actors and recognize actions in space-time, as introduced in [90], while the latter localizes and recognizes actions only temporally, as systematically introduced in [15]. Our study mainly focuses on the temporal online action detection problem, and we omit 'temporal' for convenience in the rest of this paper.

As illustrated in Fig.1, an online action detection (OAD) system mainly consists of three important parts: a frame-level feature extractor (e.g. a deep Convolutional Neural Network, CNN), a temporal modeling module to aggregate frame-level features, and an action classifier. Recent works on online action detection mostly focus on the temporal modeling part, aiming to generate discriminative representations from the historical and current frame features. Inspired by sequence modeling methods in other areas, especially the Long Short-Term Memory recurrent network (LSTM) [41], various temporal modeling methods have been developed for online action detection recently. For example, Geest et al. [15] provide an LSTM-based baseline which shows superiority over a single-frame CNN model. Gao et al. [29] propose an LSTM-based Reinforced Encoder-Decoder network for both action anticipation and online action detection. Geest et al. [16] propose a two-stream feedback network, where one stream focuses on the input interpretation and the other models temporal dependencies between actions. Xu et al. [107] utilize LSTM cells to model temporal context, aiming to improve online action detection by adding prediction information to observed information.

Fig. 1:

Online action detection aims to predict the ongoing action category from the historical and current frame information. A typical online action detection system is mainly composed of three parts: frame-level feature extraction, temporal modeling, and action classification.

Although the above LSTM-based temporal modeling methods have significantly boosted performance on existing OAD datasets (e.g. TVSeries [15], THUMOS-14 [44]), their superiority over other temporal models, e.g. naive temporal pooling, temporal convolution, and attention-based sequence models, has not been discussed and remains unknown. Moreover, the fusion of different temporal models is also rarely investigated. To address these problems, we provide a fair and comprehensive study on temporal modeling for online action detection in the following aspects.

Exploration of temporal modeling methods. We explore four popular types of temporal modeling methods with various hyperparameters to fairly illustrate their effects on online action detection, namely temporal pooling, temporal convolution, recurrent neural networks, and temporal attention models. Specifically, for temporal pooling, we evaluate

average pooling (AvgPool) and max pooling (MaxPool) with various sequence lengths. For temporal convolution, we evaluate traditional temporal convolution (TC), pyramid dilated temporal convolution (PDC) [58], and dilated causal convolution (DCC) [73]. For recurrent neural networks, we evaluate LSTM and the Gated Recurrent Unit (GRU) with two output choices, i.e. the last hidden state and the average hidden state. For temporal attention, we evaluate naive self-attention (Naive-SA) with a linear fully-connected (FC) layer and a Softmax function, nonlinear self-attention (Nonlinear-SA) with an FC-tanh-FC-Softmax architecture, the Non-local block (standard self-attention with a skip connection), and the Transformer with the current frame as the query (Q). It is worth noting that i) we try to keep the original names of these methods from other topics, though we adapt them for online action detection, and ii) many of these methods are introduced into online action detection for the first time to the best of our knowledge, such as TC, PDC, DCC, Non-local, etc. Overall, we extensively explore eleven individual temporal modeling methods with off-the-shelf two-stream (TS) frame features.

The fusion of temporal modeling methods. Generally, the outputs of sequence-to-sequence methods, e.g. PDC and LSTM, can be further processed by aggregation methods, such as temporal pooling and temporal attention, to create a single representation. Thus, we present several hybrid temporal modeling methods which combine different temporal modeling methods, aiming to uncover the complementarity among them. Interestingly, we find that a simple fusion of dilated causal convolution with the Transformer or LSTM improves the individual models significantly.

Comparison with the state of the art. We extensively compare our individual models and hybrid temporal models to existing baselines and recent state-of-the-art methods. Several hybrid temporal models outperform the best existing results by a sizable margin on both TVSeries and THUMOS-14. Specifically, the fusion of dilated causal convolution and Transformer obtains 84.3% cAP on TVSeries, and the fusion of dilated causal convolution, LSTM, and Transformer achieves 48.6% mAP on THUMOS-14.

II Related Work

Our study is related to several other action-related tasks, namely action recognition, action anticipation, temporal action detection, and spatial-temporal action detection. In this section, we first briefly overview these related tasks separately and then present recent works on online action detection.

Action recognition is an important branch of video-related research and has been extensively studied in the past decades. Existing methods are mainly developed for extracting discriminative action features from temporally complete action videos, and can be roughly categorized into hand-crafted feature based approaches and deep learning based approaches. Early methods such as Improved Dense Trajectory (IDT) mainly adopt hand-crafted features, such as HOF [56], HOG [56], and MBH [97]. Recent studies demonstrate that action features can be learned by deep learning methods such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs). For example, the two-stream network [85, 99] learns appearance and motion features from RGB frames and optical flow fields separately. RNNs, such as the long short-term memory (LSTM) [34] and the gated recurrent unit (GRU) [11], have been used to model long-term temporal correlations and motion information in videos, and to generate video representations for action classification. Some recent works also try to model temporal information within a 2D-CNN instead of using the 2D-CNN as a static feature extractor; e.g. both TSM [61] and TAM [23] propose efficient approaches to aggregate features across frames inside the network.

Another type of action recognition approach is based on 3D CNNs, which are widely used for learning from large-scale video datasets. C3D [21] is the first successful 3D CNN model for video classification. After that, many works extend C3D to different backbones, e.g. I3D [8] and ResNet3D [37]. In addition, some works aim to reduce the complexity of 3D CNNs by decomposing the 3D convolution into a 2D spatial convolution and a 1D temporal convolution, e.g. P3D [76], S3D [104], and R(2+1)D [22].

Action anticipation, also known as early action prediction, aims to predict future unseen actions from historical and current information. Many works have been developed for this task in recent years. For instance, Hoai et al. [39] propose a max-margin framework with structured SVMs to address this problem. Ryoo et al. [79] develop an early action prediction system by observing evidence from temporally accumulated features. Yu et al. [111] formulate the action prediction problem in a probabilistic framework, which aims to maximize the posterior of an activity given the observed frames. Aliakbarian et al. [1] develop a multi-stage LSTM architecture that leverages context-aware and action-aware features, and introduce a novel loss function that encourages the model to predict the correct class as early as possible. Gao et al. [29] propose a Reinforced Encoder-Decoder (RED) network for action anticipation, which uses reinforcement learning to encourage the model to make correct anticipations as early as possible. Ke et al. [51] propose an attended temporal feature, which uses multi-scale temporal convolutions to process the time-conditioned observation. The widely used datasets for action anticipation, e.g. UCF-101 [91], JHMDB-21 [43], BIT-Interaction [53], and Sports-1M [48], include short trimmed videos, and the task mainly focuses on predicting the class of the ongoing action in a timely manner from only a small ratio of the observed part. Our task is different from action anticipation: we mainly focus on long and unsegmented video data, e.g. TVSeries, usually with a large variety of irrelevant background frames.

Fig. 2: The temporal modeling architectures. A: Temporal pooling with max or average operation. B: Temporal convolution methods. C: Recurrent Neural Network (RNN) with LSTM or GRU cells. D: Temporal attention methods.

Temporal action detection or localization is another hot topic, which aims to temporally localize and recognize actions by observing entire untrimmed videos. The main difference between this topic and OAD is the offline setting, i.e. post-processing is allowed for temporal action localization. In this offline setting, the whole action can be observed first. The problem has recently received increasing attention due to its potential application in video data analysis. Shou et al. [84] localize actions with three stages: action proposal generation, proposal classification, and proposal regression. Xu et al. [106] adapt the Faster R-CNN [78] architecture to temporal action localization. Chao et al. [10] improve receptive field alignment using a multi-tower network and dilated temporal convolutions, and exploit the temporal context of actions for both proposal generation and action classification. Lin et al. [62] generate proposals by learning starting and ending probabilities with a temporal convolutional network, and achieve promising performance over previous methods. Zeng et al. [112] apply Graph Convolutional Networks (GCNs) over a proposal graph to model the relations among different proposals and learn powerful representations for action classification and localization.

Spatial-temporal action detection aims to determine the precise spatial-temporal extents of actions in videos, and has attracted increasing attention recently. Early methods mainly resort to bag-of-words representations and spatio-temporal path search. In the deep learning era, many works adapt image-based object detection methods to this task, e.g. R-CNN [32], Faster R-CNN [78], and SSD [69]. These adapted methods mainly first detect actions at the frame level and then link the frame-level bounding boxes into final tubes [33, 75, 88, 36]. Notably, the online setting is used in [90, 88].

Online action detection is defined as an online per-frame labelling task given streaming videos, which requires correctly classifying every frame. Geest et al. [15] first introduce the problem together with a realistic dataset (i.e. TVSeries) and some baseline results. Their later work [17] introduces a two-stream feedback network, where one stream processes the input and the other models the temporal relations. Li et al. [60] design a deep LSTM network for online action detection from 3D skeletons, which also estimates the start and end frames of the current action. Xu et al. [107] propose the Temporal Recurrent Network (TRN) to model the temporal context by simultaneously performing online action detection and anticipation. Besides, Shou et al. [83] formulate online detection of action start (ODAS) as a classification task over sliding windows and introduce a model based on a Generative Adversarial Network to generate hard negative samples that improve training.

III Temporal Modeling Approach

III-A Problem Formulation

Given an observed video stream containing frames from time $0$ to $t$, the goal of online action detection is to recognize actions of interest occurring in frame $t$ with these observed frames. This is very different from other tasks, like action recognition and temporal action detection, which assume the entire video sequence is available at once. Formally, online action detection can be defined as the problem of maximizing the posterior probability,

$$\mathbf{y}_t^{*} = \arg\max_{\mathbf{y}_t} \, p(\mathbf{y}_t \mid V_{0:t}),$$

where $\mathbf{y}_t$ is the possible action label vector for frame $t$ with $K$ action classes and one background class. Thus, conditioned on the observed sequence $V_{0:t}$, the action label with the maximum probability is chosen as the detection result of frame $t$. Generally, a pre-trained CNN model is first used to extract frame-level features, e.g. the feature of the $t$-th frame $f_t = F(v_t; \theta) \in \mathbb{R}^{D}$, where $\theta$ denotes the fixed parameters of the model and $D$ is the dimension of the feature embedding. Given the observed frame features $\{f_0, \ldots, f_t\}$, a temporal modeling module aims to aggregate discriminative information from them to better estimate the output action scores.
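As a toy illustration of this formulation, the per-frame decision reduces to an argmax over class posteriors. The sketch below uses made-up scores and hypothetical helper names; it is not the paper's implementation:

```python
import math

def softmax(scores):
    # numerically stable softmax over raw classifier scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def detect_frame(scores):
    # scores: one raw score per class (K actions + 1 background);
    # return the label with maximum posterior and its probability
    probs = softmax(scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    return best, probs[best]
```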

III-B Temporal Modeling

For online action detection, considering that faraway frames may be unrelated to the current action state, we usually input frames of a limited sequence length $L$ to the temporal modeling module, i.e. frames $t-L+1, \ldots, t$. For convenience, we denote the input features as $X = \{x_1, \ldots, x_L\}$ (with $x_L = f_t$), and denote the output of temporal modeling as $h$. Next we discuss four types of temporal modeling methods, as illustrated in Fig.2.

Temporal Pooling. Temporal feature pooling has been extensively used for video classification [85, 72, 24, 47], and is a simple method to generate a video-level representation from frame-level features. As shown in Fig.2.A, we consider two temporal pooling approaches: (1) average pooling (AvgPool), i.e. $h = \frac{1}{L}\sum_{i=1}^{L} x_i$, and (2) max pooling (MaxPool) over the temporal dimension, i.e. $h = \max_{1 \le i \le L} x_i$ (element-wise).
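The two pooling operators can be sketched in a few lines of pure Python over lists of frame vectors (shapes and names are illustrative, not from the paper's code):

```python
def avg_pool(features):
    # features: list of L frame vectors, each of dimension D
    L, D = len(features), len(features[0])
    return [sum(f[d] for f in features) / L for d in range(D)]

def max_pool(features):
    # element-wise maximum over the temporal dimension
    D = len(features[0])
    return [max(f[d] for f in features) for d in range(D)]
```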

Temporal Convolution. Inspired by convolutional approaches to the analysis of temporal sequential data [13, 6, 57, 58, 73], especially WaveNet [73], we evaluate (1) traditional temporal convolution (TC), (2) pyramid dilated temporal convolution (PDC) originally used in [58], and (3) dilated causal convolution (DCC) developed in [73]. Formally, given input $X$, our temporal convolution models output features of the same length as follows,

$$z_i = \sum_{j=0}^{k-1} W_j \, x_{i - d \cdot j},$$

where $d$ is a dilation rate indicating the temporal stride used to sample frames, $W$ is a convolutional kernel, and $k$ is the kernel size. It becomes our traditional temporal convolution (i.e. conv1D without dilation) if $d = 1$. As shown in Fig.2.B (a), PDC first conducts dilated temporal convolutions with various dilation rates separately and then concatenates the outputs frame-wise. Formally, the output frame-level feature of PDC is defined as follows,

$$z_i = \big[\, z_i^{(d_1)};\; z_i^{(d_2)};\; z_i^{(d_3)} \,\big],$$
PDC uses different dilation rates $d$ to cover various ranges of temporal context, which could be better than a single $d$. In our study, we use three dilation rates to efficiently enlarge the temporal receptive fields of PDC. As shown in Fig.2.B (b), our dilated causal convolution (DCC) stacks several dilated convolutional layers with different rates. We perform ReLU after each convolution, and add a residual connection to combine the input and the output of each layer. For each layer, we increase the dilation rate

exponentially with the depth of the network (i.e. $d = 2^{i}$ at level $i$ of the network). Specifically, we also use three dilation rates $\{1, 2, 4\}$ in order. To map the input sequence to an output sequence of the same length, we add zero padding of length $(k-1) \cdot d$ in all layers. Formally, the output of the $i$-th layer at time $t$ is defined as,

$$z_t^{(i)} = \mathrm{ReLU}\Big(\sum_{j=0}^{k-1} W_j^{(i)} \, z_{t - d \cdot j}^{(i-1)} + b^{(i)}\Big) + V^{(i)} z_t^{(i-1)},$$

where $W^{(i)}$ and $b^{(i)}$ are parameters of the dilated convolution, and $V^{(i)}$ are parameters to transform $z^{(i-1)}$ for the residual connection. After the temporal convolutional operation, we use average pooling to generate a single representation for classification by default.
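The three convolutional variants can be sketched as follows, using scalar per-frame features for brevity. The weights and rates here are toy values; a real implementation would use learned multi-channel kernels and a learned residual transform rather than the identity shortcut assumed below:

```python
def dilated_conv1d(x, w, d):
    # causal dilated convolution with left zero padding; output length == input length
    k = len(w)
    return [sum(w[j] * (x[t - d * j] if t - d * j >= 0 else 0.0)
                for j in range(k)) for t in range(len(x))]

def pdc(x, w, rates=(1, 2, 4)):
    # PDC: parallel dilated branches, concatenated frame-wise
    branches = [dilated_conv1d(x, w, d) for d in rates]
    return [tuple(b[t] for b in branches) for t in range(len(x))]

def dcc(x, w, rates=(1, 2, 4)):
    # DCC: serial stack of dilated conv -> ReLU -> residual, then average pooling
    for d in rates:
        y = dilated_conv1d(x, w, d)
        x = [max(0.0, y[t]) + x[t] for t in range(len(x))]
    return sum(x) / len(x)
```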

Recurrent Neural Network (RNN). Recurrent Neural Networks and their variants have recently been adapted from other sequence modeling topics to action classification [72, 19, 26, 68] and detection [30, 107, 87]. In contrast to the temporal pooling operation, which produces order-independent representations, an RNN models the dependencies between consecutive frames and captures the temporal information of the input sequence. At each time step, the RNN cell receives the past step information $h_{t-1}$ and the current frame feature $x_t$, and passes the current hidden state $h_t$ on to the next time step.

Fig. 3: Illustration of (a) LSTM and (b) GRU. $i_t$, $f_t$, $c_t$, and $o_t$ in LSTM (a) are respectively the input gate, forget gate, memory cell, and output gate. $r_t$ and $z_t$ in GRU (b) are the reset and update gates. $h_t$ and $\tilde{h}_t$ are respectively the hidden activation and the candidate activation.

Specifically, we evaluate two popular recurrent cells, namely the Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU). As a special RNN structure for sequence modeling, LSTM has been proven stable and powerful for modeling long-range dependencies in various topics [34, 35, 93] and online action anticipation [29]. We illustrate LSTM in Fig.3(a) following the implementation of [35]. Formally, LSTM is formulated as follows,

$$
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i), \\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f), \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c), \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o), \\
h_t &= o_t \odot \tanh(c_t),
\end{aligned}
$$

where $\sigma$ is the logistic sigmoid function, and $i_t$, $f_t$, $c_t$, and $o_t$ are respectively the input gate, forget gate, memory cell, and output gate. $h_t$ is the hidden state activation vector.

Similar to the LSTM unit, the GRU has gating units that modulate the flow of information inside the unit, as illustrated in Fig.3(b). The main difference between LSTM and GRU is that there is no separate memory cell in GRU. Formally, the GRU can be formulated as follows,

$$
\begin{aligned}
r_t &= \sigma(W_r x_t + U_r h_{t-1}), \\
z_t &= \sigma(W_z x_t + U_z h_{t-1}), \\
\tilde{h}_t &= \tanh(W x_t + U (r_t \odot h_{t-1})), \\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t,
\end{aligned}
$$

where $r_t$ is a set of reset gates, $z_t$ is an update gate, and $\odot$ is an element-wise multiplication.

Since the output of an RNN is another sequence, we consider two methods to generate the final single representation $h$: i) following the traditional Encoder-Decoder method, we directly take the hidden state at the last time step, i.e. $h = h_L$; ii) we average the outputs of all the time steps, i.e. $h = \frac{1}{L}\sum_{i=1}^{L} h_i$.
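A scalar-valued LSTM step and the two output strategies can be sketched as follows. All weights are toy scalars rather than learned parameters, and this is an illustrative sketch, not the paper's implementation:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, p):
    # p maps each gate name to a (input-weight, hidden-weight, bias) triple
    i = sigmoid(p['i'][0] * x + p['i'][1] * h + p['i'][2])   # input gate
    f = sigmoid(p['f'][0] * x + p['f'][1] * h + p['f'][2])   # forget gate
    o = sigmoid(p['o'][0] * x + p['o'][1] * h + p['o'][2])   # output gate
    c = f * c + i * math.tanh(p['c'][0] * x + p['c'][1] * h + p['c'][2])
    return o * math.tanh(c), c

def run_lstm(xs, p, output='last'):
    # 'last' takes the final hidden state; 'avg' averages all hidden states
    h = c = 0.0
    hs = []
    for x in xs:
        h, c = lstm_step(x, h, c, p)
        hs.append(h)
    return hs[-1] if output == 'last' else sum(hs) / len(hs)
```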

Temporal Attention. The attention mechanism [5, 64, 109, 94] allows the model to selectively focus on only a subset of frames by increasing the attention weights of the corresponding temporal features, while ignoring irrelevant signals and noise. We evaluate four attention methods, namely naive self-attention (Naive-SA), nonlinear self-attention (Nonlinear-SA) with an FC-tanh-FC-Softmax architecture, the Non-local block (standard self-attention with a skip connection), and the Transformer with the current frame as the query (Q). Given a feature sequence $X = \{x_1, \ldots, x_L\}$, the Naive-SA can be implemented by a linear fully-connected (FC) layer and a Softmax function as follows,

$$a = \mathrm{Softmax}(W X + b),$$

where $W$ and $b$ are the parameters of the FC layer, and $a$ is the attention weight vector. Similar to [109], we can also add more nonlinear operations as follows (i.e. the Nonlinear-SA),

$$a = \mathrm{Softmax}\big(W_2 \tanh(W_1 X + b_1) + b_2\big),$$

where $W_1$ is a weight matrix, $b_1$ is the bias vector, and $W_2$ and $b_2$ are parameters of the second FC layer. With the attention weights, the output representation is the weighted average vector $h = \sum_{i=1}^{L} a_i x_i$.
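With scalar frame features, the two self-attention variants reduce to the following sketch (toy scalar weights; real FC layers act on $D$-dimensional features):

```python
import math

def softmax(zs):
    m = max(zs)
    es = [math.exp(z - m) for z in zs]
    s = sum(es)
    return [e / s for e in es]

def naive_sa(X, w, b):
    # Naive-SA: one linear score per frame, softmax-normalized into weights,
    # then a weighted average of the frames
    a = softmax([w * x + b for x in X])
    return sum(ai * xi for ai, xi in zip(a, X))

def nonlinear_sa(X, w1, b1, w2, b2):
    # Nonlinear-SA: FC -> tanh -> FC -> Softmax scoring
    a = softmax([w2 * math.tanh(w1 * x + b1) + b2 for x in X])
    return sum(ai * xi for ai, xi in zip(a, X))
```

With a positive scoring weight, frames with larger values receive larger attention weights, so the output is pulled toward them.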

Transformer is another popular attention-based model, which was originally proposed to replace traditional recurrent models for machine translation [94]. The core idea of the Transformer is to model the correlation between contextual signals with an attention mechanism. Specifically, it aims to encode the input sequence into a higher-level representation by modeling the relationship between queries ($Q$) and memory (keys ($K$) and values ($V$)) with,

$$\mathrm{Attention}(Q, K, V) = \mathrm{Softmax}\Big(\frac{Q K^{T}}{\sqrt{d_k}}\Big) V,$$

where $Q = W_q X$, $K = W_k X$, and $V = W_v X$. This architecture becomes standard “self-attention” with $Q = K = V = X$. Normally, we use two convolution layers followed by Batch Normalization and ReLU to generate two new features $Q$ and $K$ from $X$, and the Non-local method [100] further adds a skip connection between the input and the output as follows,

$$Z = \mathrm{Attention}(Q, K, V)\, W_z + X.$$

The updated temporal feature $Z$ is processed with average pooling by default to generate the final temporal representation $h$.

The query in Eq. (9) can also be a single feature vector. Similar to [101], which replaces the self-attention weights with the attention between a local feature and long-term features, we compute the dot-product attention between the current feature and the historical features, as illustrated in Fig.2.D (c). This adaptation is based on the assumption that the current frame is the most important one for online action detection. With this operation, an attention weight vector $a = \mathrm{Softmax}(x_L^{T} X)$ is obtained and used to compute the final representation as,

$$h = \sum_{i=1}^{L} a_i x_i.$$
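This current-frame-as-query attention can be sketched as follows, with $D$-dimensional features as Python lists and the learned projections omitted (an illustrative sketch, not the paper's code):

```python
import math

def softmax(zs):
    m = max(zs)
    es = [math.exp(z - m) for z in zs]
    s = sum(es)
    return [e / s for e in es]

def current_frame_attention(X):
    # X: list of L frame vectors; the last one is the current frame (the query)
    q = X[-1]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    a = softmax([dot(q, x) for x in X])        # dot-product attention weights
    D = len(q)
    # weighted average of all frames under the attention weights
    return [sum(a[i] * X[i][d] for i in range(len(X))) for d in range(D)]
```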
Training and Inference. With the output $h$ of the temporal modeling module, we use a linear FC layer with Softmax for classification, and train the whole network with a cross-entropy loss. Specifically, we divide the feature sequence of a video into non-overlapping windows (of size $L$) as the input of our temporal modeling module. At the test stage, a sliding window (of size $L$) with stride 1 is used to form the input, and the prediction is made for the last frame.
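The windowing scheme described above can be sketched as follows (hypothetical helper names; `L` is the window size):

```python
def training_windows(features, L):
    # non-overlapping windows of size L used at training time
    return [features[i:i + L] for i in range(0, len(features) - L + 1, L)]

def inference_windows(features, L):
    # stride-1 sliding windows at test time; each window predicts its last frame
    return [features[max(0, t - L + 1):t + 1] for t in range(len(features))]
```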

IV Experimental Configuration

In this section, we first introduce two widely-used OAD datasets, i.e. TVSeries and THUMOS-14, and then describe our implementation details, including unit-level feature extraction and hyperparameter settings.

IV-A Datasets

TVSeries [15] is originally proposed for online action detection and consists of episodes of six popular TV series, namely Breaking Bad, How I Met Your Mother, Mad Men, Modern Family, Sons of Anarchy, and Twenty-four. The dataset is temporally annotated at the frame level with realistic, everyday actions (e.g. pick up, open door, drink). It is challenging due to its diverse actions, multiple actors, unconstrained viewpoints, heavy occlusions, and a large proportion of non-action frames.

Fig. 4: The temporal length distributions of action instances on (a) TVSeries and (b) THUMOS-14.

THUMOS-14 [44] is a popular benchmark for temporal action detection, containing sport videos temporally annotated with actions. The training set (i.e. UCF101 [91]) contains only trimmed videos that cannot be used to train temporal action detection models. Following prior works [29, 107], we train our model on the validation set (untrimmed videos with annotated action instances) and evaluate on the test set (also untrimmed videos with annotated action instances).

To investigate the characteristics of these datasets, we depict the temporal length distributions of action instances on TVSeries and THUMOS-14 in Fig.4. We observe that 70% of the action instances on TVSeries are very short (i.e. 0-2s), while half of the instances on THUMOS-14 are longer than 3 seconds.

IV-B Evaluation Protocols

For each class on TVSeries, we use the per-frame calibrated average precision (cAP) proposed in [15],

$$cAP = \frac{\sum_{k} cPrec(k) \cdot I(k)}{P},$$

where the calibrated precision $cPrec = \frac{TP}{TP + FP/w}$, $I(k)$ is an indicator function that is equal to 1 if the cut-off frame $k$ is a true positive, $P$ denotes the total number of true positives, and $w$ is the ratio between negative and positive frames. The mean cAP over all classes is reported as the final performance. The advantage of cAP is that it is fair under class-imbalanced conditions. For THUMOS-14, we report per-frame mean Average Precision (mAP) performance.
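A minimal computation of cAP for a single class might look as follows, assuming per-frame predictions already ranked by descending confidence (the input here is a toy label list marking each ranked frame as positive or negative):

```python
def calibrated_ap(labels):
    # labels: ground truth per ranked frame (1 = positive, 0 = negative),
    # ordered by descending prediction confidence
    P = sum(labels)
    N = len(labels) - P
    w = N / P                        # ratio between negative and positive frames
    tp = fp = 0
    total = 0.0
    for y in labels:
        if y:
            tp += 1
            total += tp / (tp + fp / w)   # calibrated precision at this cut-off
        else:
            fp += 1
    return total / P
```

A perfect ranking (all positives ahead of all negatives) yields cAP = 1.0 regardless of the positive/negative ratio, which is the point of the calibration.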

IV-C Implementation Details

Unit-level feature extraction. Following previous works [107, 29, 27, 62], a long untrimmed video is first cut into non-overlapping video units, each containing the same number of continuous frames. Each unit is processed by a visual encoder to extract a unit-level representation. In our experiments, frames are extracted from all videos at a fixed frame rate. We use a two-stream [105] network as the visual encoder, pre-trained on ActivityNet-1.3 [7]. In each unit, the central frame is sampled to compute the appearance CNN feature, taken from the Flatten 673 layer of ResNet-200 [38]. For the motion feature, we sample consecutive frames at the center of a unit and compute optical flows between them. These flows are then fed into the pre-trained BN-Inception model [42], and the output of the global pool layer is extracted. The motion and appearance features are both 2048-D and are concatenated into 4096-D vectors, which serve as the unit-level features.

Hyperparameter setting. For the PDC model, the concatenated features are fed into an additional convolution to reduce the feature dimension. For the DCC model, we use 3 dilated convolution layers, each of which is comprised of one dilated convolution followed by a ReLU and dropout, with an extra convolution added for the residual connection. Our experiments are conducted in PyTorch. We use the SGD optimizer to train the network from scratch with standard settings for the learning rate, momentum, and decay rate. All of our experiments run on 8 GTX TITAN X GPUs, an Intel i7 CPU, and 128GB of memory.

V Exploration of Temporal Modeling Methods

In this section, we first present a quick comparison among the best settings of the four mentioned temporal modeling methods, then extensively explore both individual temporal modeling methods and their combinations, and finally compare our results to the state of the art.

V-A A Quick Comparison of Temporal Modeling Methods

As mentioned in the Introduction, we explore eleven temporal modeling methods in total from four meta types, namely temporal pooling, temporal attention, RNN, and temporal convolution. For a quick glance, Table I presents the results of the best choice for each meta type (i.e. the 2nd row). For a fair comparison, the input sequence length L is fixed to 4. Several observations can be made. First, temporal convolution (i.e. DCC) achieves the best results on both TVSeries and THUMOS-14, which indicates that discriminative information can be obtained effectively by temporal convolution. Second, temporal attention (i.e. Transformer) performs slightly better than temporal pooling (i.e. AvgPool), which demonstrates the effectiveness of the attention mechanism. Third, RNN (i.e. LSTM) outperforms Transformer and AvgPool by sizable margins on both datasets, which shows that the temporal dependencies captured by LSTM are crucial for accurate online action detection. Overall, an interesting finding is that the temporal-dependent methods, i.e. temporal convolution and RNN, are superior to the temporal-independent methods for online action detection.

Pooling Attention RNN Convolution
AvgPool Transformer LSTM DCC
TVSeries 81.2 81.5 82.9 83.1
THUMOS-14 41.5 43.3 45.9 46.8
TABLE I: A quick comparison among the best settings of different meta types of temporal modeling methods. The best choice of each type is presented in the 2nd row.

V-B Ablation Study for Individual Temporal Modeling Method

Fig. 5: Comparison between average pooling and max pooling with different input sequence lengths.
Fig. 6: Evaluation of different sequence lengths and output strategies for LSTM and GRU.

Temporal pooling. We test the two temporal pooling methods (i.e. average pooling and max pooling) with different sequence lengths. The results are shown in Fig.5. We also compare them to a baseline that uses a fully-connected (FC) layer and Softmax to generate action probabilities frame by frame. The baseline model relies only on the current frame feature and obtains 79.8% (cAP) and 36.3% (mAP) on TVSeries and THUMOS-14, respectively. For temporal pooling, average pooling clearly and consistently performs better than max pooling on both datasets. Increasing the sequence length improves both pooling methods at first but degrades them dramatically beyond a saturation length. This can be explained by the fact that an appropriate amount of historical information introduces useful context for online action detection, while long-term historical information may introduce unrelated information and smooth the final representation. Another observation is that increasing the sequence length beyond L=4 severely hurts performance on TVSeries but not on THUMOS-14. This reflects the fact that each video in TVSeries contains multiple actions and numerous varied background frames, while each video in THUMOS-14 only contains one action instance. Overall, the simple AvgPool method (L=4) improves the baselines on TVSeries and THUMOS-14 by 1.4% and 5.2%, respectively.

RNN. We evaluate LSTM and GRU in the following four aspects: input sequence length, output strategy, hidden size, and the number of recurrent layers.

Input sequence length and output strategy. For these two factors, we vary the sequence length from 2 to 16, and evaluate the two alternative output strategies, i.e. the last hidden state and the average hidden state. The hidden size is fixed to 4096, and only one recurrent layer is used for this evaluation. Fig.6 illustrates the comparison results for LSTM and GRU. Several conclusions can be drawn. First, the ‘last hidden state’ strategy performs consistently better than the ‘average hidden state’. This can be explained by the fact that both LSTM and GRU automatically accumulate discriminative information into the last state through their temporal dependency operations, while averaging all the hidden states may introduce unrelated or noisy information for online action detection. Second, LSTM performs better than GRU on THUMOS-14, while performing similarly or worse on TVSeries. This indicates that the separate memory cell in LSTM helps capture more context information, which is crucial for THUMOS-14, while too much context (unrelated actions or background) can degrade performance on TVSeries. Third, the effect of sequence length for both LSTM and GRU is the same as for the pooling methods, and the best trade-off sequence length is 4 on both datasets.

Hidden size. We choose LSTM and test different hidden sizes. The last hidden state output strategy and sequence length L=4 are used for this evaluation. The results are shown in Fig.7. A clear observation is that increasing the hidden size improves the final performance significantly on both datasets. In addition, performance saturates after hidden size 2048, and the best results are 82.9% and 45.9% on TVSeries and THUMOS-14, respectively, with hidden size 4096.

Fig. 7: Evaluation of hidden size for LSTM with the last hidden state output strategy and sequence length L=4.
Fig. 8: Evaluation of different numbers of recurrent layers for LSTM and GRU with the last hidden state, sequence length L=4, and hidden size 4096.

The number of recurrent layers. Generally, one can easily stack several recurrent layers to model complex sequence dependencies. To this end, we evaluate the number of recurrent layers for both LSTM and GRU on TVSeries and THUMOS-14. The results are shown in Fig.8. Interestingly, adding one more layer brings no performance gain and even dramatically degrades performance for LSTM on both datasets. The main problem is that each additional recurrent layer can double the number of parameters, making the model prone to overfitting.

Temporal convolution. As shown in Table II, we compare temporal convolution models with different kernel sizes k and dilation rates d, denoted as TC(k,d). For PDC and DCC, we use temporal convolutional filters with kernel size 2 as the building block. The input sequence length is fixed for all the comparison experiments, and zero padding is added as needed so that the output has the same length as the input. Several observations follow. First, the comparison between TC(2,1) and TC(3,1) indicates that kernel size 2 is slightly better than 3 on both datasets. Second, the comparison among TC(2,1), TC(2,2), and TC(2,4) shows that different dilation rates perform similarly on both datasets. Third, both PDC and DCC, which combine TC(2,1), TC(2,2), and TC(2,4) in a parallel or serial manner respectively, significantly improve on the plain TC models, and DCC performs best. This demonstrates that combining temporal convolution layers with multiple dilations captures complementary multi-scale action information.
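The causal, dilated building block behind TC and the serial DCC combination can be sketched as follows; the shapes, random weights, and the plain ReLU stacking are illustrative assumptions rather than the exact architecture:

```python
import numpy as np

def causal_conv1d(x, w, dilation=1):
    # x: (L, D) sequence; w: (k, D, D_out) filter. Causal: the output at time t
    # only sees x[t], x[t-d], ..., x[t-(k-1)*d] (left zero padding).
    k, D, D_out = w.shape
    L = x.shape[0]
    xp = np.concatenate([np.zeros(((k - 1) * dilation, D)), x], axis=0)
    out = np.zeros((L, D_out))
    for t in range(L):
        for i in range(k):
            out[t] += xp[t + i * dilation] @ w[i]  # taps spaced by the dilation
    return out

def dcc_block(x, weights):
    # Serial stack of causal convs with dilations 1, 2, 4 (DCC-style), ReLU between.
    for w, d in zip(weights, (1, 2, 4)):
        x = np.maximum(causal_conv1d(x, w, dilation=d), 0.0)
    return x

rng = np.random.default_rng(2)
x = rng.normal(size=(8, 4))                        # L=8 frames, D=4 features
ws = [0.5 * rng.normal(size=(2, 4, 4)) for _ in range(3)]
y = dcc_block(x, ws)
```

The left-only padding is what makes the convolution usable online: the output at a frame never depends on future frames.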

Model      TC(2,1)  TC(3,1)  TC(2,2)  TC(2,4)  PDC   DCC
TVSeries   81.1     80.9     81.0     80.8     82.7  83.1
THUMOS-14  42.4     41.9     42.6     42.7     46.1  46.8
TABLE II: Comparison between different temporal convolutional models on TVSeries (cAP %) and THUMOS-14 (mAP %).
Model      Naive-SA  Nonlinear-SA  Non-local  Transformer
TVSeries   80.1      80.9          80.9       81.5
THUMOS-14  39.9      42.5          42.4       43.3
TABLE III: Comparison between different temporal attention models on TVSeries (cAP %) and THUMOS-14 (mAP %).
Value      256   512   1024  2048  4096
TVSeries   80.6  80.9  80.6  80.7  80.3
THUMOS-14  42.0  42.0  42.5  41.1  41.5
TABLE IV: Evaluation of the hyper parameter in Nonlinear-SA (Eq.(8)) on TVSeries (cAP %) and THUMOS-14 (mAP %).

Temporal attention. We compare the four attention models described in Sec.III-B, i.e. Naive-SA (Eq.(7)), Nonlinear-SA (Eq.(8)), Non-local (Eq.(10)), and Transformer (Eq.(11)). As shown in Table III, several observations can be made. First, Nonlinear-SA outperforms Naive-SA by 0.8% on TVSeries and 2.6% on THUMOS-14. Compared to Naive-SA, Nonlinear-SA computes attention weights with an additional nonlinear tanh and a linear FC layer, which may be more effective for modeling complex temporal relationships. Second, Non-local performs on par with Nonlinear-SA on both datasets, indicating that they share a broadly similar attention mechanism. Third, Transformer with the current frame feature as the query outperforms Non-local by 0.6% on TVSeries and 0.9% on THUMOS-14, showing the effectiveness of our proposed design (i.e. computing attention between the current frame feature and historical features) for online action detection.
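The Transformer-style readout with the current frame as the query can be sketched as follows; using identity Q/K/V projections and the dimensions below are simplifying assumptions:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def current_frame_attention(feats):
    # feats: (L, D); the last row is the current frame and serves as the query
    # over all historical (and current) frames. Linear projections omitted.
    q = feats[-1]
    weights = softmax(feats @ q / np.sqrt(feats.shape[1]))  # (L,) over time
    return weights @ feats                                  # aggregated (D,)

rng = np.random.default_rng(3)
feats = rng.normal(size=(4, 6))
rep = current_frame_attention(feats)
```

Frames similar to the current one receive larger weights, so the aggregated representation is biased toward context relevant to the ongoing action.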

As Nonlinear-SA contains a hyper parameter (see Eq.(8)) that can impact the final performance, we also evaluate it in Table IV. We observe that 512 (1024) yields the best performance on TVSeries (THUMOS-14), and the final performance is not very sensitive to this choice.

Fig. 9: The online detection results of ours compared to previous methods in terms of per-frame cAP (%) for each action class on TVSeries.
Hybrid models                   TVSeries  THUMOS-14
M1: LSTM → Transformer          83.6      47.7
M2: DCC → Transformer           84.3      47.1
M3: LSTM → DCC → Transformer    83.0      48.5
M4: DCC → LSTM → Transformer    83.7      48.6
M5: DCC → LSTM                  83.2      47.9
M6: LSTM → DCC → AvgPool        81.5      47.5
TABLE V: Comparison between different hybrid temporal models on TVSeries (cAP %) and THUMOS-14 (mAP %).
Method                                             Inputs  cAP
CNN (De Geest et al., 2016) [15]                   VGG     60.8
LSTM (De Geest et al., 2016) [15]                  VGG     64.1
RED (Gao et al., 2017) [29]                        VGG     71.2
Stacked LSTM (De Geest and Tuytelaars, 2018) [17]  VGG     71.4
2S-FN (De Geest and Tuytelaars, 2018) [17]         VGG     72.4
TRN (Xu et al., 2019) [107]                        VGG     75.4
SVM (De Geest et al., 2016) [15]                   FV      74.3
RED (Gao et al., 2017) [29]                        TS      79.2
TRN (Xu et al., 2019) [107]                        TS      83.7
Ours                                               TS      84.3
TABLE VI: Comparison with state-of-the-art methods in terms of per-frame cAP (%) on TVSeries.
Method                                                mAP
Single-frame CNN (Simonyan and Zisserman, 2014) [86]  34.7
Two-stream CNN (Simonyan and Zisserman, 2014) [85]    36.2
C3D+LinearInterp (Shou et al., 2017) [82]             37.0
Predictive-corrective (Dave et al., 2017) [14]        38.9
LSTM (Donahue et al., 2014) [19]                      39.3
MultiLSTM (Yeung et al., 2015) [110]                  41.3
CDC (Shou et al., 2017) [82]                          44.4
RED (Gao et al., 2017) [29]                           45.3
TRN (Xu et al., 2019) [107]                           47.2
Ours                                                  48.6
TABLE VII: Comparison with published state-of-the-art methods in terms of per-frame mAP (%) on THUMOS-14.

V-C Combination of Temporal Modeling Methods

Generally, sequence-to-sequence temporal models such as DCC and LSTM can be further processed by aggregation methods like temporal pooling and temporal attention to generate a single representation. We therefore present several hybrid temporal modeling methods that combine different temporal modeling methods, aiming to uncover the complementarity among them. Specifically, according to their characteristics, we mainly combine temporal-dependent models with temporal-independent models as follows.

  • M1: LSTM → Transformer. The hidden states at all time steps are fed into Transformer to generate a single representation, and classification is performed on that representation.

  • M2: DCC → Transformer. The output of the DCC network has the same length as the input sequence, and Transformer is applied to this output sequence to generate a single representation for classification.

  • M3: LSTM → DCC → Transformer. The hidden states of LSTM are first fed into DCC and then into Transformer, which generates the representation for classification.

  • M4: DCC → LSTM → Transformer. The output sequence of DCC is further processed by LSTM, aiming to capture strong temporal dependency, and finally Transformer generates the representation for classification. This model is the same as M3 except that the order of DCC and LSTM is swapped.

  • M5: DCC → LSTM. The output sequence of DCC is processed by LSTM, and the last hidden state is used for action classification.

  • M6: LSTM → DCC → AvgPool. This model replaces the Transformer of M3 with AvgPool to generate the final representation for classification.
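The common pattern of these hybrids, a sequence-to-sequence stage followed by an attention readout (as in M2), can be sketched generically; the causal two-tap smoother below is a purely illustrative stand-in for a DCC or LSTM stage:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def seq_stage(x):
    # Stand-in sequence-to-sequence stage (e.g. DCC output or LSTM hidden
    # states): a causal two-tap average, so output length equals input length.
    xp = np.vstack([x[:1], x])
    return 0.5 * (xp[:-1] + xp[1:])

def attention_readout(H):
    # Transformer-style readout: current frame (last row) as the query.
    q = H[-1]
    w = softmax(H @ q / np.sqrt(H.shape[1]))
    return w @ H

def hybrid(x):
    # Seq-to-seq temporal model, then aggregation into a single vector.
    return attention_readout(seq_stage(x))

rng = np.random.default_rng(4)
rep = hybrid(rng.normal(size=(4, 6)))
```

Swapping in different sequence stages or readouts (AvgPool instead of attention, stacking two stages) reproduces the M1–M6 variants structurally.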

The results of the hybrid methods on TVSeries and THUMOS-14 are shown in Table V. Several observations can be made. First, the best results on TVSeries and THUMOS-14 are achieved by M2 and M4, respectively. Second, combining temporal-dependent models (i.e. LSTM and DCC) with temporal-independent ones (i.e. Transformer) largely improves the individual models, which indicates that they are complementary. Third, inserting LSTM into DCC → Transformer (i.e. going from M2 to M3) degrades performance by 1.3% on TVSeries while improving it by 1.4% on THUMOS-14. This may be explained by the fact that temporal dependencies are important for the long-term action instances of THUMOS-14 but harmful for the dominant short-term action instances of TVSeries.

V-D Comparison with state-of-the-art

We compare our best results to the state-of-the-art approaches on TVSeries and THUMOS-14 in Table VI and Table VII, respectively. With two-stream features, we achieve 84.3% mean cAP on TVSeries and 48.6% mAP on THUMOS-14, outperforming the recent, carefully designed TRN [107] by 0.6% and 1.4%, respectively. Besides, we also present a per-class comparison of our method with previous methods [15] on TVSeries in Fig.9. Our method outperforms CNN and LSTM by a large margin on all action classes except Use computer and Write.

VI Conclusions

In this paper, we provide a comprehensive study on temporal modeling for online action detection, covering four meta types of temporal modeling methods, i.e. temporal pooling, temporal convolution, recurrent neural networks, and temporal attention. We extensively evaluate eleven individual temporal modeling methods and further investigate several hybrid temporal models that combine them to uncover their complementarity. Based on this comprehensive study, we find that a simple fusion of dilated causal convolution with Transformer or LSTM significantly improves the individual models and also outperforms the best existing results by a sizable margin on both the TVSeries and THUMOS-14 datasets.


  • [1] Mohammad Sadegh Aliakbarian, Fatemeh Sadat Saleh, Mathieu Salzmann, Basura Fernando, Lars Petersson, and Lars Andersson. Encouraging lstms to anticipate actions very early. In ICCV, 2017.
  • [2] Mohammad Sadegh Aliakbarian, Fatemehsadat Saleh, Mathieu Salzmann, Basura Fernando, and Lars Andersson. Encouraging lstms to anticipate actions very early. In ICCV, 2017.
  • [3] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. 2016.
  • [4] Seungryul Baek, Kwang In Kim, and Tae-Kyun Kim. Real-time online action detection forests using spatio-temporal contexts. CoRR, 2016.
  • [5] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. 2014.
  • [6] Shaojie Bai, J. Zico Kolter, and Vladlen Koltun. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. 2018.
  • [7] Fabian Caba, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. Activitynet: A large-scale video benchmark for human activity understanding. In CVPR, 2015.
  • [8] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In CVPR, 2017.
  • [9] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In CVPR, 2017.
  • [10] Yu Wei Chao, Sudheendra Vijayanarasimhan, Bryan Seybold, David A. Ross, Jia Deng, and Rahul Sukthankar. Rethinking the faster r-cnn architecture for temporal action localization. In CVPR, 2018.
  • [11] KyungHyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. CoRR, abs/1409.1259, 2014.
  • [12] Junyoung Chung, Çaglar Gülçehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. volume abs/1412.3555, 2014.
  • [13] Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. 2017.
  • [14] Achal Dave, Olga Russakovsky, and Deva Ramanan. Predictive-corrective networks for action detection. 2017.
  • [15] Roeland De Geest, Efstratios Gavves, Amir Ghodrati, Zhenyang Li, Cees Snoek, and Tinne Tuytelaars. Online action detection. In CVPR, 2016.
  • [16] Roeland De Geest and Tinne Tuytelaars. Modeling temporal structure with lstm for online action detection. In WACV, 2018.
  • [17] Roeland De Geest and Tinne Tuytelaars. Modeling temporal structure with lstm for online action detection. In WACV, pages 1549–1557, 2018.
  • [18] Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual recognition and description. In CVPR, volume abs/1411.4389, 2014.
  • [19] Jeff Donahue, Lisa Anne Hendricks, Marcus Rohrbach, Subhashini Venugopalan, Sergio Guadarrama, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual recognition and description. 2014.
  • [20] Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual recognition and description. In CVPR, 2015.
  • [21] Tran Du, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3d convolutional networks. In ICCV, 2015.
  • [22] Tran Du, Heng Wang, Lorenzo Torresani, Jamie Ray, and Yann Lecun. A closer look at spatiotemporal convolutions for action recognition. In CVPR, 2018.
  • [23] Quanfu Fan, Chun-Fu (Richard) Chen, Hilde Kuehne, Marco Pistoia, and David Cox. More is less: Learning efficient video representations by big-little network and depthwise temporal aggregation. In NIPS, pages 2261–2270, 2019.
  • [24] Christoph Feichtenhofer, Axel Pinz, and Andrew Zisserman. Convolutional two-stream network fusion for video action recognition. In CVPR, 2016.
  • [25] Antonino Furnari and Giovanni Maria Farinella. What would you expect? anticipating egocentric actions with rolling-unrolling lstms and modality attention. In ICCV, 2019.
  • [26] Harshala Gammulle, Simon Denman, Sridha Sridharan, and Clinton Fookes. Two stream lstm: A deep fusion framework for human action recognition. 2017.
  • [27] Jiyang Gao, Kan Chen, and Ram Nevatia. Ctap: Complementary temporal action proposal generation. In ECCV, 2018.
  • [28] Jiyang Gao and Ram Nevatia. Revisiting temporal modeling for video-based person reid. In BMVC, 2018.
  • [29] Jiyang Gao, Zhenheng Yang, and Ram Nevatia. RED: reinforced encoder-decoder networks for action anticipation. In BMVC, 2017.
  • [30] Mingfei Gao, Mingze Xu, Larry S. Davis, Richard Socher, and Caiming Xiong. Startnet: Online detection of action start in untrimmed videos. 2019.
  • [31] Rohit Girdhar, Joao Carreira, Carl Doersch, and Andrew Zisserman. Video action transformer network. In CVPR, 2019.
  • [32] Ross B. Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
  • [33] Georgia Gkioxari and Jitendra Malik. Finding action tubes. CoRR, abs/1411.6031, 2014.
  • [34] Alex Graves. Long short-term memory. volume 9, pages 1735–1780, 1997.
  • [35] Alex Graves. Generating sequences with recurrent neural networks. CoRR, abs/1308.0850, 2013.
  • [36] Chunhui Gu, Chen Sun, David A. Ross, Carl Vondrick, Caroline Pantofaru, Yeqing Li, Sudheendra Vijayanarasimhan, George Toderici, Susanna Ricco, Rahul Sukthankar, Cordelia Schmid, and Jitendra Malik. AVA: A video dataset of spatio-temporally localized atomic visual actions. In CVPR, 2018.
  • [37] Kensho Hara, Hirokatsu Kataoka, and Yutaka Satoh. Can spatiotemporal 3d cnns retrace the history of 2d cnns and imagenet? In CVPR, 2017.
  • [38] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. volume abs/1512.03385, 2015.
  • [39] Minh Hoai and Fernando De La Torre. Max-margin early event detectors. In CVPR, 2012.
  • [40] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. volume 9, pages 1735–1780, 1997.
  • [41] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 1997.
  • [42] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. 2015.
  • [43] Hueihan Jhuang, Juergen Gall, Silvia Zuffi, Cordelia Schmid, and Michael J. Black. Towards understanding action recognition. In ICCV, 2013.
  • [44] Y.-G. Jiang, J. Liu, A. Roshan Zamir, G. Toderici, I. Laptev, M. Shah, and R. Sukthankar. THUMOS challenge: Action recognition with a large number of classes. 2014.
  • [45] Xu Jing, Zhao Rui, Zhu Feng, Huaming Wang, and Wanli Ouyang. Attention-aware compositional network for person re-identification. In CVPR, 2018.
  • [46] Rafal Józefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. volume abs/1602.02410, 2016.
  • [47] Amlan Kar, Nishant Rai, Karan Sikka, and Gaurav Sharma. Adascan: Adaptive scan pooling in deep convolutional neural networks for human action recognition in videos. 2017.
  • [48] Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, and Fei Fei Li. Large-scale video classification with convolutional neural networks. In CVPR, 2014.
  • [49] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, and Andrew Zisserman. The kinetics human action video dataset. 2017.
  • [50] Qiuhong Ke, Mohammed Bennamoun, Senjian An, Farid Boussaid, and Ferdous Sohel. Human interaction prediction using deep temporal features. In ECCV, 2016.
  • [51] Qiuhong Ke, Mario Fritz, and Bernt Schiele. Time-conditioned action anticipation in one shot. In CVPR, 2019.
  • [52] Y. Kong, Z. Tao, and Y. Fu. Deep sequential context networks for action prediction. In CVPR, 2017.
  • [53] Yu Kong, Yunde Jia, and Yun Fu. Interactive phrases: Semantic descriptionsfor human interaction recognition. volume 36, pages 1775–1788, 2014.
  • [54] Yu Kong, Dmitry Kit, and Yun Fu. A discriminative model with multiple temporal scales for action prediction. In ECCV, 2014.
  • [55] Hilde Kuehne, Hueihan Jhuang, Rainer Stiefelhagen, and Thomas Serre. Hmdb51: A large video database for human motion recognition. In High Performance Computing in Science and Engineering ‘12, pages 571–582. Springer, 2013.
  • [56] Ivan Laptev, Marcin Marszalek, Cordelia Schmid, and Benjamin Rozenfeld. Learning realistic human actions from movies. In CVPR, 2008.
  • [57] Colin Lea, Michael D. Flynn, René Vidal, Austin Reiter, and Gregory D. Hager. Temporal convolutional networks for action segmentation and detection. volume abs/1611.05267, 2016.
  • [58] Jianing Li, Jingdong Wang, Qi Tian, Wen Gao, and Shiliang Zhang. Global-local temporal representations for video person re-identification. 2019.
  • [59] Kang Li and Yun Fu. Prediction of human activity by discovering temporal sequence patterns. volume 36, pages 1644–1657, 2014.
  • [60] Yanghao Li, Cuiling Lan, Junliang Xing, Wenjun Zeng, Chunfeng Yuan, and Jiaying Liu. Online human action detection using joint classification-regression recurrent neural networks. In ECCV, 2016.
  • [61] Ji Lin, Chuang Gan, and Song Han. Temporal shift module for efficient video understanding. volume abs/1811.08383, 2018.
  • [62] Tianwei Lin, Xu Zhao, Haisheng Su, Chongjing Wang, and Ming Yang. Bsn: Boundary sensitive network for temporal action proposal generation. In ECCV, 2018.
  • [63] Tianwei Lin, Xu Zhao, Haisheng Su, Chongjing Wang, and Ming Yang. Bsn: Boundary sensitive network for temporal action proposal generation. In ECCV, 2018.
  • [64] Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. A structured self-attentive sentence embedding. 2017.
  • [65] Chunhui Liu, Yanghao Li, Yueyu Hu, and Jiaying Liu. Online action detection and forecast via multitask deep recurrent neural networks. In ICASSP, 2017.
  • [66] Jiaying Liu, Yanghao Li, Sijie Song, Junliang Xing, and Wenjun Zeng. Multi-modality multi-task recurrent neural network for online action detection. 2018.
  • [67] Jun Liu, Gang Wang, Ling Yu Duan, Kamila Abdiyeva, and Alex C. Kot. Skeleton based human action recognition with global context-aware attention lstm networks. volume PP, pages 1–1, 2018.
  • [68] Jun Liu, Gang Wang, Ping Hu, Ling Yu Duan, and Alex C Kot. Global context-aware attention lstm networks for 3d action recognition. In CVPR, 2017.
  • [69] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott E. Reed, Cheng-Yang Fu, and Alexander C. Berg. SSD: single shot multibox detector. In ECCV, 2016.
  • [70] Minh Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. 2015.
  • [71] Tahmida Mahmud, Mahmudul Hasan, and Amit K. Roy-Chowdhury. Joint prediction of activity labels and starting times in untrimmed videos. In CVPR, 2017.
  • [72] Yue Hei Ng, Matthew Hausknecht, Sudheendra Vijayanarasimhan, Oriol Vinyals, Rajat Monga, and George Toderici. Beyond short snippets: Deep networks for video classification. In CVPR, 2015.
  • [73] Aaron Van Den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. 2016.
  • [74] Alonso Patron, Marcin Marszalek, Andrew Zisserman, and Ian Reid. High five: Recognising human interactions in tv shows. In BMVC, 2010.
  • [75] Xiaojiang Peng and Cordelia Schmid. Multi-region two-stream R-CNN for action detection. In ECCV, 2016.
  • [76] Zhaofan Qiu, Ting Yao, and Mei Tao. Learning spatio-temporal representation with pseudo-3d residual networks. In ICCV, 2017.
  • [77] S. Ren, K. He, R Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. volume 39, pages 1137–1149, 2017.
  • [78] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. 2015.
  • [79] M. S. Ryoo. Human activity prediction: Early recognition of ongoing activities from streaming videos. In CVPR, 2012.
  • [80] Shikhar Sharma, Ryan Kiros, and Ruslan Salakhutdinov. Action recognition using visual attention. In ICLR, 2015.
  • [81] Shikhar Sharma, Ryan Kiros, and Ruslan Salakhutdinov. Action recognition using visual attention. 2017.
  • [82] Zheng Shou, Jonathan Chan, Alireza Zareian, Kazuyuki Miyazawa, and Shih Fu Chang. Cdc: Convolutional-de-convolutional networks for precise temporal action localization in untrimmed videos. In ICCV, 2017.
  • [83] Zheng Shou, Junting Pan, Jonathan Chan, Kazuyuki Miyazawa, Hassan Mansour, Anthony Vetro, Xavier Giro-I-Nieto, and Shih Fu Chang. Online detection of action start in untrimmed, streaming videos. In CVPR, 2018.
  • [84] Zheng Shou, Dongang Wang, and Shih Fu Chang. Temporal action localization in untrimmed videos via multi-stage cnns. In CVPR, 2016.
  • [85] Karen Simonyan and Andrew Zisserman. Two-stream convolutional networks for action recognition in videos. In NIPS, 2014.
  • [86] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. 2014.
  • [87] Bharat Singh, Tim K. Marks, Michael Jones, Oncel Tuzel, and Shao Ming. A multi-stream bi-directional recurrent neural network for fine-grained action detection. In CVPR, 2016.
  • [88] Gurkirt Singh, Suman Saha, Michael Sapienza, Philip HS Torr, and Fabio Cuzzolin. Online real-time multiple spatiotemporal action localisation and prediction. In ICCV, pages 3637–3646, 2017.
  • [89] Sijie Song, Cuiling Lan, Junliang Xing, Wenjun Zeng, and Jiaying Liu. An end-to-end spatio-temporal attention model for human action recognition from skeleton data. In AAAI, 2016.
  • [90] Khurram Soomro, Haroon Idrees, and Mubarak Shah. Predicting the where and what of actors and actions through online action localization. In CVPR, pages 2648–2657, 2016.
  • [91] Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. Ucf101: A dataset of 101 human actions classes from videos in the wild. 2012.
  • [92] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. volume 15, pages 1929–1958, 2014.
  • [93] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. CoRR, abs/1409.3215, 2014.
  • [94] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. volume abs/1706.03762, 2017.
  • [95] Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Anticipating the future by watching unlabeled video. volume abs/1504.08023, 2015.
  • [96] Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Anticipating visual representations from unlabeled video. In CVPR, 2016.
  • [97] Heng Wang, Alexander Kläser, Cordelia Schmid, and Cheng Lin Liu. Dense trajectories and motion boundary descriptors for action recognition. 2013.
  • [98] Hongsong Wang and Jiashi Feng. Delving into 3d action anticipation from streaming videos. 2019.
  • [99] Limin Wang, Yuanjun Xiong, Wang Zhe, and Qiao Yu. Towards good practices for very deep two-stream convnets. 2015.
  • [100] Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In CVPR, 2018.
  • [101] Chao-Yuan Wu, Christoph Feichtenhofer, Haoqi Fan, Kaiming He, Philipp Krahenbuhl, and Ross Girshick. Long-term feature banks for detailed video understanding. In CVPR, June 2019.
  • [102] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, and Klaus Macherey. Google’s neural machine translation system: Bridging the gap between human and machine translation. 2016.
  • [103] Zuxuan Wu, Xi Wang, Yu-Gang Jiang, Hao Ye, and Xiangyang Xue. Modeling spatial-temporal clues in a hybrid deep learning framework for video classification. volume abs/1504.01561, 2015.
  • [104] Saining Xie, Chen Sun, Jonathan Huang, Zhuowen Tu, and Kevin Murphy. Rethinking spatiotemporal feature learning for video understanding. volume abs/1712.04851, 2017.
  • [105] Yuanjun Xiong, Limin Wang, Zhe Wang, Bowen Zhang, Hang Song, Wei Li, Dahua Lin, Yu Qiao, Luc Van Gool, and Xiaoou Tang. Cuhk & ethz & siat submission to activitynet challenge 2016. 2016.
  • [106] Huijuan Xu, Abir Das, and Kate Saenko. R-c3d: Region convolutional 3d network for temporal activity detection. In ICCV, 2017.
  • [107] Mingze Xu, Mingfei Gao, Yi-Ting Chen, Larry S. Davis, and David J. Crandall. Temporal recurrent networks for online action detection. In ICCV, 2019.
  • [108] Fan Yang, Ke Yan, Shijian Lu, Huizhu Jia, Xiaodong Xie, and Wen Gao. Attention driven person re-identification. pages S0031320318303133–, 2018.
  • [109] Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. Hierarchical attention networks for document classification. 2016.
  • [110] Serena Yeung, Olga Russakovsky, Ning Jin, Mykhaylo Andriluka, Greg Mori, and Li Fei-Fei. Every moment counts: Dense detailed labeling of actions in complex videos. 2015.
  • [111] Cao Yu, Daniel Barrett, Andrei Barbu, Siddharth Narayanaswamy, and Wang Song. Recognize human activities from partially observed videos. In CVPR, 2013.
  • [112] Runhao Zeng, Wenbing Huang, Mingkui Tan, Yu Rong, Peilin Zhao, Junzhou Huang, and Chuang Gan. Graph convolutional networks for temporal action localization. In ICCV, 2019.