has been widely studied in an offline setting, which allows making a detection decision after fully observing a long untrimmed video. This is called offline action detection. In contrast, online action detection aims to identify ongoing actions from streaming videos at every moment in time. This task is useful for many real-world applications (e.g., autonomous driving, robot assistants, and surveillance systems [17, 28]).
and gated recurrent unit (GRU)) for modeling the temporal sequence of an ongoing action, and introduce additional modules to learn a discriminative representation. However, these methods overlook the fact that the given input video contains not only the ongoing action but also irrelevant actions and background. Specifically, the recurrent unit accumulates the input information without explicitly considering its relevance to the current action, and thus the learned representation becomes less discriminative. Ignoring this characteristic of streaming videos makes online action detection even more challenging.
In this paper, we investigate how RNNs can learn to explicitly discriminate relevant information from irrelevant information for detecting actions in the present. To this end, we propose a novel recurrent unit that extends GRU with a mechanism utilizing current information and an early embedding module (see Fig. 1). We name our recurrent unit Information Discrimination Unit (IDU). Specifically, our IDU models the relation between an ongoing action and past information (i.e., the previous hidden state and past inputs) by additionally taking the current input information at every time step. We further introduce the early embedding module to model this relation more effectively. By adopting action classes and feature distances as supervision, our embedding module learns features for the current and past information that describe actions at a high level. Based on IDU, our Information Discrimination Network (IDN) effectively determines whether to use input information according to its relevance to the current action. This enables the network to learn a more discriminative representation for detecting ongoing actions. We perform extensive experiments on two benchmark datasets, where our IDN achieves state-of-the-art performances of 86.1% mcAP and 60.3% mAP on TVSeries and THUMOS-14, respectively. These results significantly outperform TRN, the previous best performer, by 2.4% mcAP and 13.1% mAP on TVSeries and THUMOS-14, respectively.
Our contributions are summarized as follows:
Different from previous methods, we investigate how recurrent units can explicitly discriminate relevant information from irrelevant information for online action detection.
We introduce a novel recurrent unit, IDU, with a mechanism using current information at every time step and an early embedding module to effectively model the relevance of input information to an ongoing action.
We demonstrate that our IDN significantly outperforms state-of-the-art methods in extensive experiments on two benchmark datasets.
2 Related Work
Offline Action Detection. The goal of offline action detection is to detect the start and end times of action instances in fully observed, long untrimmed videos. Most methods [3, 27, 38] consist of two steps: action proposal generation and action classification. SSN
first evaluates actionness scores for temporal locations to generate temporal intervals. Then, these intervals are classified by modeling the temporal structure and completeness of action instances. TAL-Net, which includes proposal generation and classification networks, extends Faster R-CNN to offline action detection by adapting receptive field alignment, the range of receptive fields, and feature fusion to the action detection task. Several methods [7, 21, 22, 23] have also studied temporal action proposal generation, showing large performance gains by combining promising action proposals with existing action classifiers.
Early Action Prediction. This task is similar to online action detection but focuses on recognizing actions from partially observed videos. Hoai and De la Torre introduced a maximum margin framework with an extended structured SVM to accommodate sequential data. Cai et al. proposed to transfer action knowledge learned from full actions to model partial actions.
Online Action Detection. Given a streaming video, online action detection aims to identify actions as soon as each video frame arrives, without observing future frames. Geest et al. introduced a large dataset, TVSeries, for online action detection, and analyzed and compared several baseline methods on it. Gao, Yang, and Nevatia proposed an encoder-decoder network with a reinforcement module whose reward function encourages the network to make correct decisions as early as possible. TRN predicts future information and utilizes the predicted future together with the past and current information for detecting the current action.
The aforementioned methods [9, 8, 35] for online action detection adopt RNNs to model the current action sequence. However, RNN units such as LSTM and GRU operate without explicitly considering whether the input information is relevant to the ongoing action. Therefore, the current action sequence is modeled based on both relevant and irrelevant information, which results in a less discriminative representation.
3 Preliminary: Gated Recurrent Units
We first analyze GRU to highlight the differences between the proposed IDU and GRU. GRU is a recurrent unit that is much simpler than LSTM. Its two main components are the reset and update gates.
The reset gate $r_t$ is computed from the previous hidden state $h_{t-1}$ and the input $x_t$ as follows:

$$r_t = \sigma(W_r x_t + U_r h_{t-1}), \qquad (1)$$

where $W_r$ and $U_r$ are parameters to be trained and $\sigma(\cdot)$ is the logistic sigmoid function. Then, the reset gate determines whether the previous hidden state $h_{t-1}$ is ignored:

$$\tilde{h}_{t-1} = r_t \odot h_{t-1}, \qquad (2)$$

where $\tilde{h}_{t-1}$ is a new previous hidden state and $\odot$ denotes element-wise multiplication.
Similar to $r_t$, the update gate $z_t$ is also computed from $h_{t-1}$ and $x_t$:

$$z_t = \sigma(W_z x_t + U_z h_{t-1}), \qquad (3)$$

where $W_z$ and $U_z$ are learnable parameters. The update gate decides whether the hidden state $h_t$ is updated with a new hidden state $\tilde{h}_t$ as follows:

$$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t, \qquad (4)$$

$$\tilde{h}_t = \tanh(W_h x_t + U_h \tilde{h}_{t-1}). \qquad (5)$$

Here, $W_h$ and $U_h$ are trainable parameters and $\tanh(\cdot)$ is the hyperbolic tangent function.
Based on reset and update gates, GRU effectively drops and accumulates information to learn a compact representation.
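As a concrete reference, the GRU update rules above (the reset gate of Eq. (1), the update gate of Eq. (3), and the resulting state update) can be sketched in a few lines of NumPy; the dimensions, random initialization, and input sequence below are purely illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, params):
    """One GRU step: reset gate, gated previous state, update gate,
    candidate state, and the convex combination that forms h_t."""
    W_r, U_r, W_z, U_z, W_h, U_h = params
    r_t = sigmoid(W_r @ x_t + U_r @ h_prev)           # reset gate (Eq. (1))
    h_tilde_prev = r_t * h_prev                       # gated previous state
    z_t = sigmoid(W_z @ x_t + U_z @ h_prev)           # update gate (Eq. (3))
    h_cand = np.tanh(W_h @ x_t + U_h @ h_tilde_prev)  # candidate hidden state
    h_t = (1.0 - z_t) * h_prev + z_t * h_cand         # interpolated state update
    return h_t

# Toy dimensions: 4-d input, 3-d hidden state.
rng = np.random.default_rng(0)
d_in, d_h = 4, 3
params = [rng.normal(scale=0.1, size=s) for s in [(d_h, d_in), (d_h, d_h)] * 3]
h = np.zeros(d_h)
for _ in range(5):  # recur over a short toy sequence
    h = gru_step(rng.normal(size=d_in), h, params)
```

Because each step interpolates between the previous state and a tanh-bounded candidate, the hidden state stays bounded in (-1, 1) regardless of the sequence length.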
However, there are limitations when applying GRU to online action detection, as described below.
First, the past information, including the previous hidden state $h_{t-1}$ and the input $x_t$, directly affects the decisions of the reset and update gates. For online action detection, the relevant information to be accumulated is the information related to the current action. Thus, it is advantageous to base the decision on the relation between the past information and the current action instead. To this end, we reformulate the computations of the reset and update gates by additionally taking the current information as input. This enables the reset and update gates to drop irrelevant information and accumulate information relevant to the ongoing action.
Second, it is implicitly assumed that the input features used by the reset and update gates represent valuable information. We augment GRU with an early embedding module trained with two supervisions, action classes and feature distances, so that the input features explicitly describe actions. This also lets the reset and update gates focus on accumulating relevant information along the recurrent steps.
We present the schematic view of our IDU and the framework of IDN in Fig. 2. We first describe our IDU in detail and then explain IDN for online action detection.
4.1 Information Discrimination Units
Our IDU extends GRU with two new components: a mechanism utilizing current information and an early embedding module. IDU consists of early embedding, reset, and update modules, which take a previous hidden state $h_{t-1}$, the input features $x_t$ at each time step, and the features $x_{t_0}$ at the current time as input and output a hidden state $h_t$ (see Fig. 2(a)).
Early Embedding Module. Our early embedding module individually processes the features $x_t$ at each time step and the features $x_{t_0}$ at the current time and outputs embedded features $e_t$ and $e_{t_0}$ as follows:

$$e_t = \delta(W_e x_t), \qquad (6)$$

$$e_{t_0} = \delta(W_e x_{t_0}), \qquad (7)$$

where $W_e$ is a weight matrix and $\delta(\cdot)$ is the ReLU activation function. Note that we share $W_e$ for $e_t$ and $e_{t_0}$. We omit bias terms for simplicity.
To encourage $e_t$ and $e_{t_0}$ to represent specific actions, we introduce two supervisions: action classes and feature distances. First, we process $e_t$ and $e_{t_0}$ to obtain probability distributions $p_t$ and $p_{t_0}$ over action classes and background:

$$p_t = \zeta(W_c e_t), \qquad (8)$$

$$p_{t_0} = \zeta(W_c e_{t_0}), \qquad (9)$$

where $W_c$ is a shared weight matrix to be learned and $\zeta(\cdot)$ is the softmax function. We design a classification loss by adopting the multi-class cross-entropy loss:

$$\mathcal{L}_{emb}^{cls} = -\big(y_t^\top \log p_t + y_{t_0}^\top \log p_{t_0}\big), \qquad (10)$$

where $y_t$ and $y_{t_0}$ are ground-truth labels. Second, we use the contrastive loss [5, 11], proposed in metric learning to learn an embedding representation that keeps similar data points close and dissimilar data points far apart in the embedding space. Using $(e_t, e_{t_0})$ as a pair, we design our contrastive loss as

$$\mathcal{L}_{emb}^{cont} = y \cdot D(e_t, e_{t_0}) + (1 - y) \cdot \max\big(0,\, m - D(e_t, e_{t_0})\big), \qquad (11)$$

where $D(\cdot, \cdot)$ is the squared Euclidean distance, $m$ is a margin parameter, and $y = 1$ if the pair belongs to the same action class and $y = 0$ otherwise.
We train our embedding module with $\mathcal{L}_{emb}^{cls}$ and $\mathcal{L}_{emb}^{cont}$, which provides more representative features for actions. More details on training are provided in Section 4.2.
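To make the embedding and its distance supervision concrete, here is a minimal NumPy sketch; the shared weight, toy dimensions, and default margin are illustrative assumptions, and the contrastive term follows the standard form with a squared Euclidean distance as in Eq. (11):

```python
import numpy as np

def embed(x, W_e):
    """Early embedding: a shared fully connected layer followed by ReLU."""
    return np.maximum(W_e @ x, 0.0)

def contrastive_loss(e_t, e_t0, same_class, margin=1.0):
    """Contrastive loss on the pair (e_t, e_t0): pull embeddings of the same
    action together, push different ones at least `margin` apart (Eq. (11))."""
    D = np.sum((e_t - e_t0) ** 2)  # squared Euclidean distance
    return D if same_class else max(0.0, margin - D)

rng = np.random.default_rng(1)
W_e = rng.normal(scale=0.1, size=(8, 16))  # one weight shared by both inputs
e_t = embed(rng.normal(size=16), W_e)
e_t0 = embed(rng.normal(size=16), W_e)
loss_pos = contrastive_loss(e_t, e_t0, same_class=True)
loss_neg = contrastive_loss(e_t, e_t0, same_class=False)
```

An identical pair incurs zero loss when labeled similar and the full margin when labeled dissimilar, which is the gradient signal that spreads action classes apart in the embedding space.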
Reset Module. Our reset module takes the previous hidden state $h_{t-1}$ and the embedded current features $e_{t_0}$ to compute a reset gate:

$$r_t = \sigma(W_r e_{t_0} + U_r h_{t-1}), \qquad (12)$$

where $W_r$ and $U_r$ are weight matrices to be learned and $\sigma(\cdot)$ is the logistic sigmoid function, as in GRU. We then obtain a new previous hidden state as follows:

$$\tilde{h}_{t-1} = r_t \odot h_{t-1}. \qquad (13)$$
Different from GRU, we compute the reset gate based on $h_{t-1}$ and $e_{t_0}$. This enables our reset gate to effectively drop or keep the past information according to its relevance to the ongoing action.
Update Module. Our update module adopts the embedded features $e_t$ and $e_{t_0}$ to compute an update gate as follows:

$$z_t = \sigma(W_z e_t + U_z e_{t_0}), \qquad (14)$$

where $W_z$ and $U_z$ are trainable parameters. Then, the hidden state $h_t$ is computed as follows:

$$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t, \qquad (15)$$

$$\tilde{h}_t = \tanh(W_h e_t + U_h \tilde{h}_{t-1}). \qquad (16)$$

Here, $\tilde{h}_t$ is a new hidden state, $\tanh(\cdot)$ is the hyperbolic tangent function, and $W_h$ and $U_h$ are trainable parameters.
There are two differences between the update modules of our IDU and GRU. First, our update gate is computed based on $e_t$ and $e_{t_0}$, which allows it to consider whether the input at each time step is relevant to the ongoing action. Second, our update gate uses the embedded features, which are more representative of specific actions.
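A full IDU step can then be sketched as follows, assuming the gate forms described above (the reset gate reads the embedded current information and the previous hidden state as in Eq. (12); the update gate reads the two embeddings as in Eq. (14)); all dimensions and parameters are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def idu_step(e_t, e_t0, h_prev, params):
    """One IDU step: unlike GRU, both gates see the embedded current
    information e_t0, so they can weigh each input's relevance to the
    ongoing action before accumulating it."""
    W_r, U_r, W_z, U_z, W_h, U_h = params
    r_t = sigmoid(W_r @ e_t0 + U_r @ h_prev)          # reset gate (Eq. (12))
    h_tilde_prev = r_t * h_prev                       # gated previous state
    z_t = sigmoid(W_z @ e_t + U_z @ e_t0)             # update gate (Eq. (14))
    h_cand = np.tanh(W_h @ e_t + U_h @ h_tilde_prev)  # candidate hidden state
    h_t = (1.0 - z_t) * h_prev + z_t * h_cand         # state update
    return h_t

rng = np.random.default_rng(2)
d_e = d_h = 8  # equal embedding/hidden sizes keep all matrices square here
params = [rng.normal(scale=0.1, size=(d_h, d_e)) for _ in range(6)]
e_seq = rng.normal(size=(16, d_e))  # 16 embedded chunk features
e_t0 = e_seq[-1]                    # the current (most recent) chunk
h = np.zeros(d_h)
for e_t in e_seq:                   # recur over the whole input sequence
    h = idu_step(e_t, e_t0, h, params)
```

Note that `e_t0` is fed at every recurrent step, which is exactly the mechanism that lets the gates compare each past chunk against the current one.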
4.2 Information Discrimination Network
In this section, we explain our recurrent network, called IDN, for online action detection (see Fig. 2.(b)).
Problem Setting. To formulate the online action detection problem, we follow the same setting as previous methods [8, 35]. Given a streaming video including the current and past chunks as input, our IDN outputs a probability distribution of the current action over action classes and background. Here, we define a chunk as a set of consecutive frames.
Feature Extractor. We use TSN as a feature extractor. TSN takes an individual chunk as input and outputs an appearance feature vector $x^A$ and a motion feature vector $x^M$. We concatenate $x^A$ and $x^M$ into a two-stream feature vector $x$. After that, we sequentially feed the chunk features into our IDU.
Training. We feed the hidden state at the current time into a fully connected layer to obtain the final probability distribution of the ongoing action:

$$p = \zeta(W_p h_{t_0}), \qquad (17)$$

where $W_p$ is a trainable matrix and $\zeta(\cdot)$ is the softmax function.
We define a classification loss for the current action by employing the standard cross-entropy loss:

$$\mathcal{L}_{cls} = -y^\top \log p, \qquad (18)$$

where $y$ denotes the ground-truth labels for the current time step. We train our IDN by jointly optimizing $\mathcal{L}_{cls}$, $\mathcal{L}_{emb}^{cls}$, and $\mathcal{L}_{emb}^{cont}$ with a multi-task loss:

$$\mathcal{L} = \mathcal{L}_{cls} + \lambda \big(\mathcal{L}_{emb}^{cls} + \mathcal{L}_{emb}^{cont}\big), \qquad (19)$$

where $\lambda$ is a balance parameter.
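The training objective can be sketched compactly; the balance value, toy logits, and auxiliary loss values below are chosen purely for illustration:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def cross_entropy(logits, label):
    """Standard multi-class cross-entropy over action classes + background."""
    return float(-np.log(softmax(logits)[label]))

def total_loss(l_cls, l_emb_cls, l_emb_cont, lam=0.5):
    """Multi-task objective (Eq. (19)): the current-action classification
    loss plus the lambda-weighted embedding losses. The lambda value here
    is illustrative; the paper sets it empirically."""
    return l_cls + lam * (l_emb_cls + l_emb_cont)

logits = np.array([2.0, 0.5, 0.1])  # toy scores over 2 actions + background
l_cls = cross_entropy(logits, label=0)
loss = total_loss(l_cls, l_emb_cls=0.2, l_emb_cont=0.1)
```

Confident, correct predictions drive the classification term toward zero, while the embedding terms keep supervising the early embedding module throughout training.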
5 Experiments

In this section, we evaluate the proposed method on two benchmark datasets, TVSeries and THUMOS-14. We first demonstrate the effectiveness of our IDU by conducting comprehensive ablation studies. We then compare our IDN with state-of-the-art methods for online action detection.
5.1 Datasets

TVSeries. This dataset includes 27 untrimmed videos from six popular TV series, divided into 13, 7, and 7 videos for training, validation, and testing, respectively. Each video contains a single episode, approximately 20 or 40 minutes long. The dataset is temporally annotated with 30 realistic actions (e.g., open door, read, eat). TVSeries is challenging due to diverse undefined actions, multiple actors, heavy occlusions, and a large proportion of non-action frames.
THUMOS-14. The THUMOS-14 dataset consists of 200 and 213 untrimmed videos in the validation and test sets, respectively. The dataset is temporally annotated with 20 sports actions (e.g., diving, shot put, billiards). On average, each video includes 15.8 action instances, and 71% of frames are background. As in previous methods [8, 35], we use the validation set for training and the test set for evaluation.
5.2 Evaluation Metrics
For evaluating online action detection performance, existing methods [9, 8, 35] measure mean average precision (mAP) and mean calibrated average precision (mcAP) at the frame level. Both metrics are computed in two steps: 1) calculating the average precision over all frames for each action class, and 2) averaging these values over all action classes.
mean Average Precision (mAP). For each action class, all frames are first sorted in descending order of their probabilities. The precision at cut-off $k$ (i.e., over the first $k$ sorted frames) is then computed as

$$Prec(k) = \frac{TP(k)}{TP(k) + FP(k)}, \qquad (20)$$

where $TP(k)$ and $FP(k)$ are the numbers of true positive and false positive frames among the top $k$, respectively. Based on Eq. (20), the average precision of the $c$-th class over all frames is computed as follows:

$$AP_c = \frac{1}{P_c} \sum_{k} Prec(k) \cdot \mathbb{1}(k), \qquad (21)$$

where $P_c$ is the total number of positive frames and $\mathbb{1}(k)$ equals 1 if the $k$-th frame is a true positive, and 0 otherwise. The final mAP is then defined as the mean of the AP values over all action classes.
mean calibrated Average Precision (mcAP). It is difficult to compare the AP values of two classes when their ratios of positive to negative frames differ. To address this problem, Geest et al. proposed the calibrated precision

$$cPrec(k) = \frac{w \cdot TP(k)}{w \cdot TP(k) + FP(k)}, \qquad (22)$$

where $w$ is the ratio between the number of negative frames and the number of positive frames. Similar to the AP, the calibrated average precision of the $c$-th class over all frames is computed as

$$cAP_c = \frac{1}{P_c} \sum_{k} cPrec(k) \cdot \mathbb{1}(k). \qquad (23)$$

Then, the mcAP is obtained by averaging the cAP values over all action classes.
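Both per-class metrics are easy to verify with a small NumPy sketch; the scores and labels below are toy values (the calibrated form follows the negative-to-positive weighting described above):

```python
import numpy as np

def average_precision(scores, labels):
    """Frame-level AP: sort frames by score, take the precision at each
    true-positive cut-off (Eq. (20)), and average over all positives."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)
    k = np.arange(1, len(labels) + 1)
    prec = tp / k  # precision at cut-off k
    return float(np.sum(prec * labels) / labels.sum())

def calibrated_average_precision(scores, labels):
    """Frame-level cAP: weight true positives by w = #negatives / #positives
    so classes with different positive ratios become comparable."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    labels = np.asarray(labels)[order]
    w = (len(labels) - labels.sum()) / labels.sum()
    tp = np.cumsum(labels)
    fp = np.cumsum(1 - labels)
    cprec = (w * tp) / (w * tp + fp)  # calibrated precision at cut-off k
    return float(np.sum(cprec * labels) / labels.sum())

# A perfect ranking gives AP = cAP = 1.0 regardless of class imbalance.
scores = [0.9, 0.8, 0.3, 0.2, 0.1]
labels = [1, 1, 0, 0, 0]
```

On an imperfect ranking the two metrics diverge: cAP rewards the same ranking more or less depending on how rare the positive frames are.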
5.3 Implementation Details
Problem Setting. We use the same setting as state-of-the-art methods [8, 35]. On both the TVSeries and THUMOS-14 datasets, we extract video frames at 24 fps and set the number of frames in each chunk to 6. We use 16 chunks, i.e., four seconds of video, as the input of IDN.
Feature Extractor. We use a two-stream network as a feature extractor. One stream encodes appearance information by taking the center frame of a chunk as input, while the other encodes motion information by processing an optical flow stack computed from the input chunk. Among several two-stream networks, we employ the TSN model pretrained on the ActivityNet-v1.3 dataset. Note that this is the same feature extractor as used in state-of-the-art methods [8, 35]. The TSN model consists of ResNet-200 for the appearance network and BN-Inception for the motion network. We use the outputs of the Flatten_673 layer in ResNet-200 and the global_pool layer in BN-Inception as the appearance features $x^A$ and motion features $x^M$, respectively. The dimensions of $x^A$ and $x^M$ are 2048 and 1024, respectively, so the dimension of $x$ equals 3072.
IDN Architecture. Table 1 provides the specifications of the IDN used in our experiments. In the early embedding module, we set the number of hidden units for $W_e$ to 512. In the reset module, both weights $W_r$ and $U_r$ have 512 hidden units. In the update module, we use 512 hidden units for $W_z$, $U_z$, $W_h$, and $U_h$. According to the number of action classes, we set the output dimension to 31 for TVSeries and 21 for THUMOS-14.
To train our IDN, we use a stochastic gradient descent optimizer with a learning rate of 0.01 for both the THUMOS-14 and TVSeries datasets. We set the batch size to 128 and balance the numbers of action and background samples according to the class of the current chunk. We empirically set the margin parameter $m$ in Eq. (11) and the balance parameter $\lambda$ in Eq. (19).
5.4 Ablation Study
We evaluate RNNs with a simple recurrent unit, LSTM, and GRU, which we name RNN-Simple, RNN-LSTM, and RNN-GRU, respectively. Although many methods [9, 8, 35] report the performances of these networks as baselines, we evaluate them in our setting to clearly confirm the effectiveness of our IDU.
In addition, we individually add IDU components to GRU as a baseline for analyzing their effectiveness:
Baseline+CI: We add a mechanism using current information to GRU in computing the reset and update gates. Specifically, we replace Eq. (1) for $r_t$ with

$$r_t = \sigma(W_r x_{t_0} + U_r h_{t-1})$$

and Eq. (3) for $z_t$ with

$$z_t = \sigma(W_z x_t + U_z x_{t_0}),$$

where $W_r$, $U_r$, $W_z$, and $U_z$ are trainable parameters. We construct a recurrent network with this modified unit.
Baseline+CI+EE (IDN): We incorporate our main components, a mechanism utilizing current information and an early embedding module, into GRU, which is our IDU. These components enable reset and update gates to effectively model the relation between an ongoing action and input information at every time step. Specifically, Eq. (12) and Eq. (14) are substituted for Eq. (1) and Eq. (3), respectively. We design a recurrent network with our IDU, which is the proposed IDN.
In Table 2, we report the performances of five networks on the TVSeries dataset. Among RNN-Simple, RNN-LSTM, and RNN-GRU, RNN-GRU achieves the highest mcAP of 81.3%. By comparing RNN-GRU (Baseline) with Baseline+CI, we first analyze the effect of using current information in calculating the reset and update gates. This component enables the gates to decide whether input information at each time step is relevant to the current action. As a result, Baseline+CI achieves a performance gain of 2.1% mcAP, which demonstrates the effectiveness of using current information. Next, comparing Baseline+CI with Baseline+CI+EE (IDN), we observe that adding the early embedding module improves performance by a further 1.3% mcAP. Overall, our IDN achieves an mcAP of 84.7%, a gain of 3.4% mcAP over the Baseline.
We also conduct the same experiment on the THUMOS-14 dataset to confirm the generality of the proposed components. We obtain performance gains as we individually incorporate the proposed components into GRU (see Table 3), and our IDN achieves an improvement of 3.3% mAP over the Baseline. These results demonstrate the effectiveness and generality of our components.
To confirm the effect of our components, we compare the update-gate values of our IDU and GRU. For reference, we introduce a relevance score for each chunk with respect to the current action: we set the scores of input chunks representing the current action to 1 and all others to 0 (see Fig. 3). Note that the update gate controls how much input information carries over to the hidden state. Therefore, the update gate should drop irrelevant information and pass on information relevant to the current action. In Fig. 4, we plot the update-gate values of IDU and GRU and the relevance scores against each time step. On input sequences containing one to five relevant chunks, the values of GRU are very high at all time steps. In contrast, our IDU successfully learns values that follow the relevance scores (see Fig. 4(a)). We also plot the average values on input sequences including 11 to 15 relevant chunks in Fig. 4(b), where our IDU again yields values similar to the relevance scores. These results demonstrate that our IDU effectively models the relevance of input information to the ongoing action.
5.5 Performance Comparison
In this section, we compare our IDN with state-of-the-art methods on the TVSeries and THUMOS-14 datasets. We use three types of input: RGB, Flow, and Two-Stream. As the input of our IDU, we take only appearance features for the RGB input and only motion features for the Flow input. IDN, TRN, RED, and ED use the same two-stream features for the Two-Stream input, which allows a fair comparison. We also employ another feature extractor, the TSN model pretrained on the Kinetics dataset, and name the resulting network IDN-Kinetics.
We report the results on TVSeries in Table 4. Our IDN significantly outperforms state-of-the-art methods on all input types, achieving 76.6% mcAP on the RGB input, 80.3% mcAP on the Flow input, and 84.1% mcAP on the Two-Stream input. Furthermore, IDN-Kinetics achieves the best performance of 86.1% mcAP. Note that our IDN yields better performance than TRN although IDN takes shorter temporal information than TRN (i.e., 16 chunks vs. 64 chunks).
Figure 5: Qualitative results on TVSeries (top) and THUMOS-14 (bottom). Each result shows frames, ground truth, and estimated probabilities.
In Table 5, we compare our IDN with state-of-the-art approaches for online and offline action detection. The compared offline action detection methods perform frame-level prediction. Both IDN and IDN-Kinetics outperform all compared methods by a large margin.
In online action detection, it is important to identify actions as early as possible. To compare this ability, we measure the mcAP values for every 10% portion of actions on TVSeries. Table 6 shows the comparison among IDN, IDN-Kinetics, and previous methods, where our methods achieve state-of-the-art performance at every time interval. This demonstrates the superiority of our IDU in identifying actions at early stages as well as across all stages.
5.6 Qualitative Evaluation
For qualitative evaluation, we visualize our results on TVSeries and THUMOS-14 in Fig. 5. The results on the TVSeries dataset show high probabilities for the true action label and reliable start and end time points. Note that identifying the action at an early stage is very challenging in this scene because only a subtle change (i.e., opening a book) occurs. On THUMOS-14, our IDN successfully identifies ongoing actions by yielding contrasting probabilities between the true action and background labels.
6 Conclusion

In this paper, we proposed IDU, which extends GRU with two novel components: 1) a mechanism using current information and 2) an early embedding module. These components enable IDU to decide effectively whether input information is relevant to the current action at every time step. Based on IDU, our IDN learns to discriminate relevant information from irrelevant information for identifying ongoing actions. In comprehensive ablation studies, we demonstrated the generality and effectiveness of the proposed components. Moreover, we confirmed that our IDN significantly outperforms state-of-the-art methods on the TVSeries and THUMOS-14 datasets for online action detection.
Acknowledgments. This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. B0101-15-0266, Development of High Performance Visual BigData Discovery Platform for Large-Scale Realtime Data Analysis).
-  (2019) Action knowledge transfer for action prediction with partial videos. In Proc. Association for the Advancement of Artificial Intelligence (AAAI) Conference on Artificial Intelligence, pp. 8118–8125. Cited by: §2.
-  (2017-Jul.) Quo vadis, action recognition? A new model and the Kinetics dataset. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4724–4733. Cited by: §5.5.
-  (2018-Jun.) Rethinking the Faster R-CNN architecture for temporal action localization. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1130–1139. Cited by: §1, §2.
-  (2014) Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proc. Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1724–1734. Cited by: Figure 1, §1, §1, §2, §3, Figure 4, §5.4, §6.
-  (2005-Jun.) Learning a similarity metric discriminatively, with application to face verification. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 539–546. Cited by: §4.1.
-  (2015-Jun.) Long-term recurrent convolutional networks for visual recognition and description. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2625–2634. Cited by: Table 4, Table 5.
-  (2018-Sep.) CTAP: complementary temporal action proposal generation. In Proc. European Conference on Computer Vision (ECCV), pp. 68–83. Cited by: §2.
-  (2017-Sep.) RED: reinforced encoder-decoder networks for action anticipation. In Proc. British Machine Vision Conference (BMVC), pp. 92.1–92.11. Cited by: §1, §2, §2, §4.2, §5.1, §5.2, §5.3, §5.3, §5.4, §5.5, Table 4, Table 5.
-  (2016-Oct.) Online action detection. In Proc. European Conference on Computer Vision (ECCV), pp. 269–285. Cited by: §1, §1, §2, §2, Figure 5, §5.1, §5.2, §5.2, §5.3, §5.4, §5.4, §5.5, §5.6, Table 2, Table 4, Table 6, §5, §6.
-  (2018-Mar.) Modeling temporal structure with lstm for online action detection. In Proc. IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1549–1557. Cited by: Table 4.
-  (2006-Jun.) Dimensionality reduction by learning an invariant mapping. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1735–1742. Cited by: §4.1.
-  (2016-Jun.) Deep residual learning for image recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778. Cited by: §5.3.
-  (2015-Jun.) ActivityNet: a large-scale video benchmark for human activity understanding. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 961–970. Cited by: §5.3.
-  (2014-Apr.) Max-margin early event detectors. International Journal of Computer Vision (IJCV) 107 (2), pp. 191–202. Cited by: §2.
-  (1997-Dec.) Long short-term memory. Neural Computation 9, pp. 1735–1780. Cited by: §1, §2, §5.4.
-  (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv:1502.03167. Cited by: §5.3.
-  (2013-Sep.) Recognizing humans in motion: trajectory-based aerial video analysis. In Proc. British Machine Vision Conference (BMVC), pp. 127.1–127.11. Cited by: §1.
-  (2014) THUMOS challenge: action recognition with a large number of classes. Note: http://crcv.ucf.edu/THUMOS14/ Cited by: §1, Figure 5, §5.1, §5.3, §5.4, §5.5, §5.6, Table 3, Table 5, §5, §6.
-  (2019-Jun.) Grounding human-to-vehicle advice for self-driving vehicles. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10591–10599. Cited by: §1.
-  (2013-Nov.) Anticipating human activities for reactive robotic response. In Proc. International Conference on Intelligent Robots and Systems (IROS), pp. 2071–2071. Cited by: §1.
-  (2019-Oct.) BMN: boundary-matching network for temporal action proposal generation. In Proc. IEEE International Conference on Computer Vision (ICCV), pp. 3889–3898. Cited by: §2.
-  (2018-Sep.) Bsn: boundary sensitive network for temporal action proposal generation. In Proc. European Conference on Computer Vision (ECCV), pp. 3–19. Cited by: §2.
-  (2019-Jun.) Multi-granularity generator for temporal action proposal. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3604–3613. Cited by: §1, §2.
-  (2010) Rectified linear units improve restricted Boltzmann machines. In Proc. International Conference on Machine Learning (ICML). Cited by: §4.1.
-  (2015-Dec.) Faster R-CNN: towards real-time object detection with region proposal networks. In Proc. Conference on Neural Information Processing Systems (NeurIPS), pp. 91–99. Cited by: §2.
-  (2017-Jul.) CDC: convolutional-de-convolutional networks for precise temporal action localization in untrimmed videos. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5734–5743. Cited by: Table 5.
-  (2016-Jun.) Temporal action localization in untrimmed videos via multi-stage CNNs. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1049–1058. Cited by: §2.
-  (2015-Jun.) Joint inference of groups, events and human roles in aerial videos. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4576–4584. Cited by: §1.
-  (2014-Dec.) Two-stream convolutional networks for action recognition in videos. In Proc. Conference on Neural Information Processing Systems (NeurIPS), pp. 568–576. Cited by: Table 5.
-  (2015-May) Very deep convolutional networks for large-scale image recognition. In Proc. International Conference on Learning Representations (ICLR). Cited by: Table 5.
-  (2016-Dec.) Improved deep metric learning with multi-class n-pair loss objective. In Proc. Conference on Neural Information Processing Systems (NeurIPS), pp. 1857–1865. Cited by: §4.1.
-  (2005-Sep.) Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research (JMLR) 6, pp. 1453–1484. Cited by: §2.
-  (2016-Oct.) Temporal segment networks: towards good practices for deep action recognition. In Proc. European Conference on Computer Vision (ECCV), pp. 20–36. Cited by: §4.2, §5.3, §5.5.
-  (2017-Oct.) R-C3D: region convolutional 3D network for temporal activity detection. In Proc. IEEE International Conference on Computer Vision (ICCV), pp. 5783–5792. Cited by: §1.
-  (2019-Oct.) Temporal recurrent networks for online action detection. In Proc. IEEE International Conference on Computer Vision (ICCV), pp. 5532–5541. Cited by: §1, §1, §2, §2, §4.2, §5.1, §5.2, §5.3, §5.3, §5.4, §5.5, §5.5, Table 4, Table 5, Table 6.
-  (2018-Apr.) Every moment counts: dense detailed labeling of actions in complex videos. International Journal of Computer Vision (IJCV) 126, pp. 375–389. Cited by: Table 5.
-  (2019-Jan.) Learning transferable self-attentive representations for action recognition in untrimmed videos with weak supervision. In Proc. Association for the Advancement of Artificial Intelligence (AAAI) Conference on Artificial Intelligence, pp. 9227–9242. Cited by: §1.
-  (2017-Oct.) Temporal action detection with structured segment networks. In Proc. IEEE International Conference on Computer Vision (ICCV), pp. 2914–2923. Cited by: §1, §2.