Action recognition in videos has drawn enormous attention from the computer vision community in the past decades [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], partially due to its applications in many areas such as video surveillance, content-based analysis, and human-computer interaction. Recently, deep learning methods have achieved great success in many image-based tasks, such as image classification [11, 12], object detection, semantic segmentation, and pose estimation. These deep models have also been introduced into the video domain for action recognition [5, 6], showing superior performance to traditional hand-crafted representations. However, unlike the static image, the extra temporal dimension of video increases the difficulty of developing an effective deep action recognition approach. When designing an action recognition solution, it is necessary to carefully consider the inherent properties of temporal information.
Currently, deep learning methods for action recognition are mostly based on short-term classifiers, due to the modeling complexity and computational overhead of directly learning a long-term video-level classifier. These short-term classifiers are built on 2D CNN or 3D CNN architectures, which operate on a single frame (e.g., two-stream CNNs) or a short clip of multiple consecutive frames (e.g., C3D, I3D, ARTNet). In the training phase, these short-term classifiers are optimized on a single randomly sampled clip [4, 2] or a set of uniformly sampled clips from the whole video. In the test phase, to gather information from the whole video, the trained short-term classifiers are applied in a fully convolutional manner to densely sampled video clips, and the resulting predictions are simply averaged to yield the final recognition result. However, there are two critical issues in this framework of training short-term classifiers randomly and testing them densely.
First, training these short-term classifiers on randomly sampled clips may not be the best approach from the training perspective. As depicted in Fig. 1, the distribution of action-related information is uneven across a video, so some clips may contribute more to action recognition than others. Furthermore, the distribution of clip-level importance may vary among different input videos. Therefore, given an input video, we need to select on the fly the most important clips for recognizing the action in the video. Such a dynamic and adaptive sampling module forces short-term classifiers to focus on the more discriminative clips, which is also beneficial for training more powerful classifiers.
Second, from the testing perspective, it is expensive to apply the trained short-term classifiers in a fully convolutional manner to densely sampled clips. Meanwhile, simply averaging these dense prediction scores may also be problematic, because the recognition results from less relevant clips will over-smooth the final result. It is well known that clips in a video are redundant and that semantics varies slowly along the temporal dimension. Therefore, it is possible to test these short-term classifiers on only a subset of the densely sampled clips, which can greatly improve inference efficiency and possibly improve the final recognition accuracy.
To mitigate the above issues, we propose a new principled framework called the Dynamic Sampling Network (DSN) for action recognition in videos, based on a section-based selection policy. The primary objective is to train an optimized sampling strategy that maintains the accuracy of action recognition while selecting very few clips from each input video. Specifically, the DSN framework is composed of a sampling module and a classification module, which are optimized in an alternating manner. The sampling module consists of an observation network and a policy maker: the observation network takes several clips as inputs and generates a posterior probability, and the policy maker determines which clips to select based on this posterior probability distribution. The selected clips are fed into the classification module, and the rewards based on its predictions guide the training of the observation network. In addition, based on the sampling module output, the classification module is fine-tuned on the video dataset, focusing more on the clips with high probabilities. In implementation, to improve inference efficiency, the complexity of the sampling module is much smaller than that of the classification module. When deploying DSN in practice, the trained sampling module dynamically determines a small subset of clips to be fed into the classification module, thus greatly improving the efficiency of the whole action recognition system.
Concerning the training of our DSN, we formulate the DSN framework within associative reinforcement learning, where all decisions are made in a single step given the context. Associative reinforcement learning is different from full reinforcement learning. Full reinforcement learning is a multi-step Markov decision process: through multi-step exploration, the agent continuously observes the environment, makes decisions based on the state history, and receives rewards. Associative reinforcement learning is also called the $M$-armed contextual bandit, a simplified reinforcement learning setting derived from the bandit problem. In the $M$-armed contextual bandit setting, the agent selects one of $M$ levers by observing the environment and receives a reward; when the environment changes, the reward of each lever also changes. Since full reinforcement learning requires complex reward design and is prone to over-fitting with a single label, it is difficult to train a multi-step Markov decision process solely with single-label supervision. We reduce our designed sampling policy to a single-step Markov decision process, which can be easily optimized with associative reinforcement learning. The training objective of the sampling module is to maximize the reward for the selected clip, where the reward is calculated based on the action classifier output and the ground truth.
In experiments, we use ResNet2D-18 as the observation network and ResNet3D-18 or R(2+1)D-34 as the short-term action classifier to implement the DSN framework. We test DSN on the UCF101, HMDB51, THUMOS14, and ActivityNet v1.3 datasets. Since these datasets are relatively small, DSN is pre-trained on Kinetics. DSN obtains a performance improvement over previous state-of-the-art approaches. More importantly, DSN applies the short-term classifier to only a small number of clips, which greatly reduces computational overhead while still yielding slightly better recognition accuracy than previous approaches.
II Related Work
II-A Deep Learning in Action Recognition
The breakthrough of AlexNet in image classification sparked the boom of deep learning in computer vision, inspiring researchers to exploit deep network architectures for action recognition in videos [17, 3, 2, 25, 6, 26, 4, 7, 8, 27, 28]. Ji et al. extended 2D CNNs to the 3D domain for action recognition and tested the proposed models on small datasets. To supply the huge amount of data needed for training deep networks, Karpathy et al. collected a large-scale dataset (Sports-1M) with weak tag labels from YouTube. Simonyan et al. proposed a two-stream network of spatial and temporal streams that capture appearance and motion information, respectively. Carreira et al. proposed I3D by inflating the 2D filters of 2D CNNs into spatiotemporal kernels. Huang et al. studied the effect of short-term temporal information on action recognition. Tran et al. studied 3D CNNs on realistic and large-scale video datasets, and further decomposed the 3D convolutional filters into separate spatial and temporal components to obtain R(2+1)D, which improves recognition accuracy.
Recurrent neural networks (e.g., LSTM) have also been used to combine information across the whole video over longer time periods. Zhu et al. proposed KVMF to filter out irrelevant information by identifying key volumes in the sequences. Wang et al. designed the temporal segment network (TSN), which uses sparsely sampled frames to aggregate the temporal information of a video. They also developed UntrimmedNet, which utilizes an attention model to aggregate information for learning from untrimmed datasets.
II-B Efficient Action Recognition
Neural networks are typically cumbersome, and deep learning models contain significant redundancy. To run deep neural network models on mobile devices, we must effectively reduce the storage and computational overhead of the networks. Parameter pruning and sharing [34, 35, 36, 37, 38] remove redundant parameters that are not sensitive to performance. Knowledge distillation [39, 40, 41, 42, 43] trains a compact neural network with the distilled knowledge of a large model.
Efficient network architectures proposed recently, such as MobileNet and -ResNet, were developed for training compact networks. Wu et al. used REINFORCE to learn optimal block dropping policies in ResNet. ECO greatly improved recognition speed by manually designing the stacking of sampled frames. SlowFast obtains spatial semantics through a slow pathway and action semantics through a very lightweight fast pathway, greatly reducing computational overhead compared to traditional two-stream networks. Yu et al. utilized local appearance and structural information to achieve real-time action recognition. Korbar et al. constructed an oracle sampler and trained a clip sampler with performance close to the oracle sampler by using knowledge distillation. Although their idea is similar to ours, they train the sampler and classifier separately.
These methods mainly focus on reducing the complexity of the network architecture or reducing inference overhead through manually designed schemes. Our work provides a novel perspective for reducing the computational overhead of inference by training a lightweight clip sampler within a new reinforcement learning framework, which allows us to jointly train the clip sampler and the classifier.
II-C Reinforcement Learning in Videos
Deep reinforcement learning has made many dramatic advances in the field of video understanding. Gao et al. used reinforcement learning to train an encoder-decoder network for action anticipation. Yeung et al. utilized reinforcement learning for action detection. Fan et al. proposed an end-to-end reinforcement learning approach in which an agent decides which frame to watch at the next time step, or whether to stop watching the video and make a classification decision at the current time step. Wang et al. proposed a novel hierarchical reinforcement learning framework for video captioning. Wu et al. proposed the AdaFrame framework, which determines which frames to process based on an LSTM module and is trained with policy gradient methods. Dong et al. used a BiLSTM and reinforcement learning to select clips to improve performance.
Similar to these approaches, our DSN also uses reinforcement learning to train a selection module for efficient action recognition. Notably different from them, we carefully design a section-based selection policy that leverages a uniform prior to regularize the selection process. In addition, our designed selection policy reduces to a single-step Markov decision process, which greatly relieves the training difficulty of reinforcement learning with only a single label. Attention-Aware Sampling proposed an agent algorithm to select discriminative clips based on clip-level features; it only improved performance and failed to save computational overhead, as it passed all clips into networks for feature extraction. In contrast, because our sampling module is lightweight, its computational overhead is very small compared with the classification module, so our DSN can greatly reduce computational overhead while maintaining or even improving performance.
III Dynamic Sampling Networks
In this section, we give a detailed description of our proposed Dynamic Sampling Networks (DSN). First, we provide a general overview of the DSN framework. Then, we present the technical details of the sampling and classification modules.
III-A Overview of DSN
As shown in Fig. 2, given a video $V$, we first divide it into $N$ sections $\{S_1, S_2, \dots, S_N\}$ of equal duration without overlap, and then uniformly sample $M$ short clips in each section $S_n$, yielding a total of $N \times M$ clips from the whole video. The DSN framework is composed of a sampling module and a classification module, which perform clip selection in each section and recognize the action class of the selected clips, respectively.
Specifically, the sampling module takes the $M$ sampled clips $C_n = \{c_{n1}, c_{n2}, \dots, c_{nM}\}$ from each section $S_n$ as inputs and selects a single clip in each section independently:
$$\hat{c}_n = \mathcal{S}(C_n; \Phi),$$
where $\hat{c}_n$ is the clip selected in section $S_n$ by the sampling module $\mathcal{S}$, and $\Phi$ is the model parameter of the sampling module. The outputs of the sampling module are then fed into the classification module to perform action recognition:
$$p = \mathcal{C}(\hat{c}_1, \hat{c}_2, \dots, \hat{c}_N; \Theta),$$
where $p$ is the output of the classification module $\mathcal{C}$, representing the prediction result of the video $V$, and $\Theta$ is the model parameter of the classification module. The sampling module dynamically selects a subset of clips for each video and enables the classification module to concentrate on the most discriminative clips, which can greatly improve inference efficiency when deploying the trained models in practice. We will present the technical details of modules $\mathcal{S}$ and $\mathcal{C}$, and then describe the training process for optimizing the model parameters $\Phi$ and $\Theta$.
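To make the two-stage pipeline concrete, here is a minimal numpy sketch of the DSN forward pass. The saliency score, the random classifier, and all shapes are toy stand-ins for illustration only, not the actual observation or classification networks:

```python
import numpy as np

rng = np.random.default_rng(0)

N, M, NUM_CLASSES = 3, 4, 5  # sections per video, clips per section, classes

def sampling_module(section_clips):
    """Toy stand-in for S(.; Phi): score each clip and keep the best one.
    The mean-intensity 'saliency' here is a placeholder, not the paper's policy."""
    scores = np.array([clip.mean() for clip in section_clips])
    return section_clips[int(np.argmax(scores))]

def classification_module(clip):
    """Toy stand-in for C(.; Theta): return a random class-probability vector."""
    logits = rng.normal(size=NUM_CLASSES)
    e = np.exp(logits - logits.max())
    return e / e.sum()

# A "video": N sections, each holding M random clips (here just 8x8 arrays).
video = [[rng.normal(size=(8, 8)) for _ in range(M)] for _ in range(N)]

selected = [sampling_module(section) for section in video]  # one clip per section
prediction = np.mean([classification_module(c) for c in selected], axis=0)
print(prediction.shape)
```

The key structural point is that the classifier only ever sees $N$ clips, one per section, rather than all $N \times M$ candidates.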
Discussion. Our DSN presents a dynamic clip sampling policy, called section-based selection, for efficient video recognition. This section-based clip selection is mainly inspired by TSN, and our DSN can be viewed as a dynamic version of TSN. This dynamic clip sampling exhibits several important properties. First, similar to the TSN framework, our DSN imposes a uniform prior on the selection process. We argue that an action instance can be divided into different courses, each containing its own important and discriminative information for action recognition; therefore, our section-based selection aims to regularize the selected clips to cover the entire duration of the whole video. Second, we perform clip selection for each section independently. Clips of different sections describe different courses of an action instance, and the selection result in one section has little effect on the other sections. In this sense, our sampling module only needs to focus on distinguishing similar clips within a local section, which reduces the complexity of the sampling module and increases its flexibility for easy deployment. Finally, our designed sampling policy reduces to a single-step Markov decision process, which can be easily optimized with an associative reinforcement learning algorithm as described in Section IV. In contrast, without our section-based selection, it would be difficult to train a multi-step Markov decision process solely with single-label supervision, as normal reinforcement learning requires complex reward design and is also prone to over-fitting the training data.
III-B Sampling Module
The sampling module is designed to select the most discriminative and relevant clip from each section and feed it into the classification module. As shown in Fig. 3, it consists of an observation network and a policy maker that determines which clips to select.
The observation network aims to output the sampling probability of each clip. Formally, it can be formulated as follows:
$$p_n = \mathrm{softmax}\big(\mathrm{FC}\big([\phi(c_{n1}); \phi(c_{n2}); \dots; \phi(c_{nM})]\big)\big),$$
where $\phi$ denotes the backbone of the observation network and $\mathrm{FC}$ denotes a fully connected layer. The backbone $\phi$ takes the $M$ clips in section $S_n$ as inputs and produces a feature map for each clip. These feature maps are then concatenated and fed into a fully connected layer followed by a softmax function. From the sampling module, we thus obtain the probabilities $p_n = (p_{n1}, p_{n2}, \dots, p_{nM})$ of sampling the clips in section $S_n$, where $\sum_{m=1}^{M} p_{nm} = 1$.
After obtaining the probabilities $p_n$, the policy maker will determine which clip to keep. Specifically, it outputs an action $a_n$ based on the estimated probabilities $p_n$. Here $a_n = (a_{n1}, a_{n2}, \dots, a_{nM})$ is a binary vector, where $a_{nm} = 1$ indicates that the $m$-th clip will be kept in section $S_n$, and otherwise the clip will be discarded. Since the sampling module selects one clip from each section, only one element in the vector $a_n$ is set to 1, and the rest are set to 0. During the training phase, we randomly select a clip from each section based on the probabilities $p_n$, while in the testing phase we directly choose the clip with the highest probability. This slight difference enables the sampling module to explore more clips and adds randomness to the training process, which improves its generalization ability.
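The train/test behavior of the policy maker can be sketched as follows; this is a toy numpy illustration in which the probability vector is assumed to be already produced by the observation network:

```python
import numpy as np

rng = np.random.default_rng(42)

def policy_maker(probs, training):
    """Return a one-hot action vector over the M clips of one section.

    Training: sample a clip index from the estimated probabilities
    (exploration). Testing: pick the most probable clip (exploitation).
    """
    M = len(probs)
    idx = rng.choice(M, p=probs) if training else int(np.argmax(probs))
    action = np.zeros(M, dtype=int)
    action[idx] = 1
    return action

probs = np.array([0.1, 0.6, 0.2, 0.1])  # softmax output of the observation network
a_train = policy_maker(probs, training=True)
a_test = policy_maker(probs, training=False)
print(a_train, a_test)
```

At test time the decision is deterministic, so the same video always yields the same selected clips.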
It should be noted that the policy maker makes an independent decision for each section. In this sense, we keep the same number of clips for each section, and thus the selected clips can cover the entire video. In addition, this independence assumption reduces the decision process in each section to a single-step Markov decision process. There are therefore only $M$ possible actions in each section, and the sampling module can also be viewed as an $M$-armed contextual bandit whose agent selects one of $M$ levers by observing the environment and receives a reward. The rewards obtained by the selected clips will guide the training of the sampling module through the reinforcement learning algorithm REINFORCE, which will be explained in Section IV.
III-C Classification Module
The classification module outputs the action prediction results for each selected clip. Rewards are then calculated from the predictions and the ground-truth label to guide the training of the sampling module.
3D Convolutional Neural Networks (3D CNNs) [4, 7] have become an effective clip-level action recognition model. Therefore, as shown in Fig. 3, we choose 3D CNNs as the backbone of our classification module. In experiments, to demonstrate the robustness of the DSN framework, we try two kinds of network architectures: (1) a more efficient one, R3D-18, and (2) a more powerful one, R(2+1)D-34. R3D is an inflated 3D ResNet whose input is a short clip of multiple stacked consecutive frames. R(2+1)D is a competitive short-term classifier that decomposes a 3D convolution into separate spatial and temporal convolutions, making its optimization easier than the original 3D CNN. The different network architectures demonstrate the generalization ability of DSN.
Prediction of DSN. After the training of the sampling module and classification module, our DSN keeps $N$ clips for each video. We first perform action recognition for each clip independently, and then report video-level results via an average fusion:
$$P = \frac{1}{N} \sum_{n=1}^{N} \mathcal{C}(\hat{c}_n; \Theta),$$
where $\mathcal{C}$ represents the 3D convolutional network, $\Theta$ is its weight, and $\hat{c}_n$ is the output of the sampling module.
IV Learning of Dynamic Sampling Networks
In this section, we describe the optimization of the DSN framework in detail. First, we provide a detailed description of the training process. Then, we present the important implementation details of learning DSN from trimmed and untrimmed videos.
IV-A Optimization of DSN
Since the DSN framework is composed of a sampling module and a classification module coupled with each other, and involves a non-differentiable selection operation, we need to design a special training scheme to optimize the two modules. Specifically, we propose two strategies to optimize their model parameters. In the first scheme, we train the classification module in advance and then tune the parameters of the sampling module while keeping the classification module fixed. In the second scheme, we propose an alternating optimization between the classification module and the sampling module. In particular, as shown in Algorithm 1, if we fix the pre-trained classification module, we skip the step of updating its parameters. Otherwise, we first optimize the parameters of the classification module while fixing those of the sampling module, and then update the parameters of the sampling module using REINFORCE. The total number of iterations is denoted by $T$. We give a detailed explanation of these two steps in the following.
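The two training strategies can be outlined with a hypothetical trainer skeleton; the update callbacks stand in for the actual SGD step on the classifier and the REINFORCE step on the sampler:

```python
def alternating_training(T, update_classifier, update_sampler, fix_classifier=False):
    """Sketch of the two DSN training schemes.

    fix_classifier=True  -> scheme 1: only the sampler is tuned, against a
                            pre-trained, frozen classifier.
    fix_classifier=False -> scheme 2: each iteration first updates the
                            classifier (sampler frozen), then the sampler
                            via REINFORCE (classifier frozen).
    """
    log = []
    for t in range(T):
        if not fix_classifier:
            update_classifier()          # placeholder for an SGD step
            log.append(("classifier", t))
        update_sampler()                 # placeholder for a REINFORCE step
        log.append(("sampler", t))
    return log

calls = alternating_training(3, lambda: None, lambda: None)
print(calls)
```

The returned log makes the interleaving explicit: classifier and sampler updates strictly alternate within each of the $T$ iterations.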
Classification module optimization. Given the training set $\{(V_i, y_i)\}_{i=1}^{K}$, where $y_i$ is the label of the video $V_i$, the sampling module outputs the selected clips for each video, and the training objective of the classification module is defined as follows:
$$\min_{\Theta} \; -\sum_{i=1}^{K} \log \mathcal{C}_{y_i}\big(\mathcal{S}(V_i; \Phi); \Theta\big) + \lambda \|\Theta\|_2^2,$$
where $\mathcal{C}_{y_i}(\cdot)$ denotes the predicted score of the ground-truth class and $\lambda \|\Theta\|_2^2$ is a regularization term. The above objective function can be easily optimized using stochastic gradient descent (SGD). As mentioned in Section III-C, if we sample more than one clip, we average the prediction scores of the selected clips to obtain a video-level score, and use this score for training.
Sampling module optimization. After the training of the classification module, we turn to optimizing the parameters of the sampling module, namely the weights of the observation network. Since the selection operation is non-differentiable, we use a reinforcement learning algorithm for optimization. The objective of reinforcement learning is to learn a policy that decides actions by maximizing future rewards. The reward is calculated by an evaluation metric that compares the generated prediction to the corresponding ground truth. Only the selected clips are fed into the classification module. To encourage the policy to be more discriminative, we associate the actions taken with the following reward function:
$$r(a_n) = \begin{cases} s_{y}, & \text{if the selected clip } \hat{c}_n \text{ is classified correctly}, \\ -\gamma, & \text{otherwise}, \end{cases}$$
where $s_{y}$ represents the reward if the selected clip is classified correctly, namely the score of the correctly classified action given by the classification module. On the other hand, we penalize it with a constant $\gamma$ if the classification module cannot make a correct prediction for the clip $\hat{c}_n$.
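A minimal sketch of this reward; the penalty constant is a hypothetical value, since the paper leaves it unspecified here:

```python
def clip_reward(probs, label, penalty=-1.0):
    """Reward for a selected clip: the classifier's score on the ground-truth
    class if the prediction is correct, otherwise a constant penalty.
    `penalty` is an assumed illustrative value."""
    predicted = max(range(len(probs)), key=lambda k: probs[k])
    return probs[label] if predicted == label else penalty

print(clip_reward([0.1, 0.7, 0.2], label=1))  # correct prediction
print(clip_reward([0.1, 0.7, 0.2], label=0))  # wrong prediction
```

Using the ground-truth class score (not just a 0/1 correctness flag) gives the sampler a denser signal: confidently correct clips earn more reward than marginally correct ones.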
The training objective of the sampling module is to maximize the expected reward over all possible actions $a$:
$$J(\Phi) = \mathbb{E}_{a \sim \pi_{\Phi}}\big[r(a)\big],$$
where $\pi_{\Phi}$ denotes the policy defined by the sampling module.
REINFORCE. In order to maximize the expected reward of the sampling module, we use the Monte Carlo policy gradient algorithm REINFORCE to train the sampling module. In contrast to value function methods, the REINFORCE algorithm can select actions without consulting a value function. With Monte Carlo sampling using all samples in a mini-batch of size $B$, the expected gradient can be approximated as follows:
$$\nabla_{\Phi} J \approx \frac{1}{B} \sum_{i=1}^{B} \sum_{n=1}^{N} r(a_n^i) \, \nabla_{\Phi} \log \pi_{\Phi}(a_n^i \mid C_n^i), \tag{8}$$
where $\Phi$ is the parameters of the sampling module and $C_n$ is the collection of sampled clips in section $S_n$. As mentioned in Section III-B, to obtain $a_n$ in Equation (8), we sample the clip based on $p_n$. When the $m$-th clip is chosen, its corresponding $a_{nm}$ is set to $1$, and $0$ otherwise.
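The mini-batch gradient estimate can be illustrated for a simple linear softmax policy, a deliberately small stand-in for the observation network (all dimensions and data below are toy assumptions):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_grad(theta, contexts, actions, rewards):
    """Monte Carlo REINFORCE estimate for a linear softmax policy
    pi(a|x) = softmax(theta^T x): average of R * grad log pi over a batch."""
    grad = np.zeros_like(theta)
    for x, a, R in zip(contexts, actions, rewards):
        p = softmax(theta.T @ x)                      # probs over the M clips
        dlog = np.outer(x, np.eye(len(p))[a] - p)     # grad log pi(a|x) wrt theta
        grad += R * dlog
    return grad / len(rewards)

rng = np.random.default_rng(0)
D, M, B = 4, 3, 8  # context dim, clips per section, batch size
theta = rng.normal(size=(D, M))
contexts = [rng.normal(size=D) for _ in range(B)]
actions = [int(rng.integers(M)) for _ in range(B)]
rewards = [float(rng.uniform(-1, 1)) for _ in range(B)]
g = reinforce_grad(theta, contexts, actions, rewards)
print(g.shape)
```

Ascending this gradient raises the log-probability of positively rewarded actions and lowers that of penalized ones, which is exactly the behavior the sampling module needs.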
REINFORCE with Baseline.
As REINFORCE may have high variance and learn slowly, we utilize REINFORCE with a baseline to reduce the variance:
$$\nabla_{\Phi} J \approx \frac{1}{B} \sum_{i=1}^{B} \sum_{n=1}^{N} \big(r(a_n^i) - b\big) \, \nabla_{\Phi} \log \pi_{\Phi}(a_n^i \mid C_n^i),$$
where the baseline $b$ can be an arbitrary function, as long as it does not depend on the action $a$. The equation is still valid because the subtracted term has zero expectation, which is proved as follows:
$$\mathbb{E}_{a \sim \pi_{\Phi}}\big[b \, \nabla_{\Phi} \log \pi_{\Phi}(a)\big] = b \sum_{a} \pi_{\Phi}(a) \, \frac{\nabla_{\Phi} \pi_{\Phi}(a)}{\pi_{\Phi}(a)} = b \, \nabla_{\Phi} \sum_{a} \pi_{\Phi}(a) = b \, \nabla_{\Phi} 1 = 0.$$
Update Parameter. We select the action with the largest probability from $p_n$ to construct the baseline $r(\hat{a}_n)$, where $\hat{a}_n$ is the action in which $\hat{a}_{nm} = 1$ if $m = \arg\max_{m'} p_{nm'}$, and $\hat{a}_{nm} = 0$ otherwise. Using REINFORCE with this baseline, we rewrite the estimate of the gradient in Equation (8) as:
$$\nabla_{\Phi} J \approx \frac{1}{B} \sum_{i=1}^{B} \sum_{n=1}^{N} \big(r(a_n^i) - r(\hat{a}_n^i)\big) \, \nabla_{\Phi} \log \pi_{\Phi}(a_n^i \mid C_n^i).$$
To maximize the expected reward $J(\Phi)$, we update the weight $\Phi$ by gradient ascent as follows:
$$\Phi_{t+1} = \Phi_t + \eta \, \nabla_{\Phi} J,$$
where $t$ is the iteration number and $\eta$ is the learning rate.
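Putting the baseline subtraction and the update rule together, a single gradient-ascent step for a toy linear softmax policy might look like the following (the policy form, learning rate, and all numbers are illustrative assumptions):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_step(theta, x, a, R, b, lr=0.1):
    """One gradient-ascent step of REINFORCE with a baseline for a linear
    softmax policy: theta <- theta + lr * (R - b) * grad log pi(a|x).
    R is the sampled action's reward, b the baseline (e.g., the reward of
    the greedy, highest-probability action)."""
    p = softmax(theta.T @ x)
    grad = np.outer(x, np.eye(len(p))[a] - p)
    return theta + lr * (R - b) * grad

rng = np.random.default_rng(1)
theta = np.zeros((4, 3))              # 4-dim context, 3 candidate clips
x = rng.normal(size=4)
new_theta = reinforce_step(theta, x, a=2, R=0.9, b=0.4)
print(softmax(new_theta.T @ x)[2] > softmax(theta.T @ x)[2])  # True: clip 2 reinforced
```

With a positive advantage $(R - b)$ the step increases the probability of the sampled clip; a negative advantage pushes probability toward the alternatives, which is the variance-reduced behavior described above.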
Discussion on end-to-end training. DSN can also be optimized in an end-to-end scheme, that is, by updating the parameters of the two modules at the same time. In this case, however, the classification module can hardly recognize actions correctly in the early stage of training: for any clip, the sampling module receives almost only negative rewards, and the whole system is hard to optimize properly. Therefore, alternating optimization between the two modules is a practical solution, and pre-training the classification module before optimizing the sampling module helps improve the training efficiency of the sampling module in the early stages.
IV-B DSN in Practice
Finally, we give detailed descriptions of the implementation of DSN in practice. We use the notation $(N, M)$ to represent the sampling scheme in DSN in which each video is divided into $N$ sections of equal duration without overlap and $M$ clips are sampled from each section. Our DSN is a general framework that can be applied to both trimmed and untrimmed video datasets. To fully investigate the DSN framework, we report the results of DSN on both trimmed and untrimmed datasets in experiments.
Considering that the temporal duration of videos differs between trimmed and untrimmed datasets, we use different settings in our DSN implementation for the two types of datasets. For trimmed video datasets (e.g., UCF101, HMDB51), since the duration of each video is normally very short, we use small values of $N$ and $M$ during the training phase. In testing, we use a larger $N$ to yield better recognition results; the value of $M$ is kept consistent between training and testing. For untrimmed video datasets (e.g., THUMOS, ActivityNet), we try to use a larger $M$ under the constraint of GPU memory, since action instances might only occupy a small portion of the whole untrimmed video, and a larger number of sampled clips helps DSN select the most action-relevant clips. Specifically, when we utilize the lightweight backbone ResNet3D-18, we set $N$ and $M$ accordingly during training. In testing, we keep $M$ fixed and study different values of $N$ to find a trade-off between performance and computational overhead.
V Experiments
In this section, we first describe the datasets used in experiments and the implementation details. Then we perform exploration studies on different aspects of DSN. After that, we compare the performance of DSN with state-of-the-art methods on four datasets. Finally, we visualize the learned policy of DSN to provide some insights into our method.
V-A Datasets
We report performance on two popular untrimmed datasets, ActivityNet v1.3 and THUMOS14. ActivityNet v1.3 covers a wide range of complex human activities divided into 200 classes, and contains 10,024 training videos, 4,926 validation videos, and 5,044 testing videos. Since annotations of action instances in untrimmed videos are rarely available, we only use video-level labels to train the model. We adopt Top-1 accuracy as the evaluation metric on ActivityNet v1.3 in the ablation study, and both Top-1 accuracy and mean average precision (mAP) when comparing with state-of-the-art models. Due to the lack of annotated ground truth for the test set, we only report performance on the validation set. THUMOS14 has 101 classes for action recognition and consists of four parts: training set, validation set, test set, and background set. For the ablation study, we only train models on the validation set and evaluate their mAP on the test set. When comparing with other methods, we train DSN on the combination of the training and validation sets.
We also validate DSN on two trimmed video datasets, UCF101  and HMDB51 . UCF101 has 101 action classes and 13K videos. HMDB51 consists of 6.8K videos divided into 51 action categories. In trimmed video datasets, Top-1 is employed as an evaluation metric for all models. We take the official evaluation protocol for these trimmed video datasets: adopting the three training/testing splits provided by the organizers and measuring average accuracy over these splits for evaluation.
V-B Implementation Details
For the classification module, we use ResNet3D-18 and R(2+1)D-34 as the backbone networks. To make the sampling module as small as possible, we choose ResNet2D-18 as the backbone of the observation network. For ResNet2D-18 to process the same input length as R(2+1)D-34, we take the middle 8 frames of a clip as input. It is worth noting that the computation required by ResNet2D-18 is only 9.4% of that of ResNet3D-18 and 1.2% of that of R(2+1)D-34. In the experiments, we use SGD with momentum 0.9 as the optimizer. ResNet3D-18, R(2+1)D-34, and ResNet2D-18 are pre-trained on Kinetics. When training ResNet3D-18, the initial learning rate is set to 0.001, reduced at epochs 45 and 60 by a factor of 10, and training stops at 70 epochs. Since R(2+1)D-34 is a heavier model, its training runs for 120 epochs, starting at a learning rate of 0.001 and decreased at epochs 90 and 110 by a factor of 10. Video frames are first rescaled, and training patches are then generated by random cropping with scale jittering. The temporal dimension depends on the backbone architecture of the classification module (i.e., 8 frames for ResNet3D-18 and 32 for R(2+1)D-34). When training our models, the batch size is 128 for ResNet3D-18 and 32 for R(2+1)D-34. We use Farneback's method to compute optical flow.
V-C Exploration Studies
We first perform ablation studies for the DSN framework on trimmed and untrimmed video datasets. In these exploration studies, we choose ResNet3D-18 as the backbone network for the classification module and ResNet2D-18 for the observation network. We use $(N, M)$ to denote the training or testing setting in which each video is divided into $N$ sections and each section has $M$ clips without overlap. The sampling module keeps $N$ clips as inputs to the classification module. All ablation experiments use RGB as input.
| Method | Clips | FLOPs | Acc. |
| --- | --- | --- | --- |
| R3D & DSN () | 4 | 104.6G | 60.9% |
| R3D & DSN () | 3 | 78.5G | 61.5% |
| R(2+1)D & DSN () | 4 | 631.2G | 75.8% |
| R3D & TSN () | 20 | 407.2G | 73.6% |
| R3D & DSN () | 6 | 157.0G | 75.2% |
| R(2+1)D & DSN () | 6 | 946.8G | 83.1% |
| Method | Modality | Clips | mAP |
| --- | --- | --- | --- |
| Two-Stream | RGB+FLOW | 25 | 66.1% |
| Jain et al. | HOG+HOF+MBH | ALL | 71.0% |
| UntrimmedNet (hard) | RGB+FLOW | ALL | 81.2% |
| UntrimmedNet (soft) | RGB+FLOW | ALL | 82.2% |
| R3D-RGB & TSN (6 seg) | RGB | 20 | 73.6% |
| R3D-RGB & DSN () | RGB | 6 | 75.2% |
| R(2+1)D & DSN () | RGB | 6 | 83.1% |
| R(2+1)D & DSN () | FLOW | 6 | 78.4% |
| R(2+1)D & DSN () | RGB+FLOW | 6 | 84.7% |
| Method | Clips | Top-1 | mAP |
| --- | --- | --- | --- |
| R3D-RGB & TSN (6 seg, ResNet-18, Kinetics) | 20 | 65.6% | 68.7% |
| R3D-RGB & DSN () | 6 | 66.7% | 71.4% |
| R3D-RGB & DSN () | 10 | 68.0% | 71.7% |
| R(2+1)D-RGB & DSN () | 6 | 76.5% | 82.6% |
| R(2+1)D-FLOW & DSN () | 6 | 78.2% | 82.9% |
| R(2+1)D-RGB+FLOW & DSN () | 6 | 82.1% | 87.8% |
| R(2+1)D-RGB & DSN () | 10 | 78.5% | 83.5% |
| R(2+1)D-FLOW & DSN () | 10 | 79.1% | 84.3% |
| R(2+1)D-RGB+FLOW & DSN () | 10 | 82.6% | 87.8% |
| Method | Clips | Backbone | UCF101 | HMDB51 |
| --- | --- | --- | --- | --- |
| TSN RGB | 25 | Inception V2 | 91.1% | - |
| TSN RGB | 25 | Inception V3 | 93.2% | - |
| I3D-RGB | ALL | Inception V1 | 95.6% | 74.8% |
| ARTNet with TSN | 25 | ResNet-18 | 94.3% | 70.9% |
| R3D-RGB & DSN () | 4 | - | 88.7% | 59.7% |
| R3D-RGB & DSN () | 3 | - | 89.5% | 61.6% |
| R(2+1)D-RGB & DSN () | 4 | - | 96.8% | 75.5% |
Study on DSN. To demonstrate the effectiveness of the DSN framework, we compare it to five different methods as well as the alternative training scheme of DSN mentioned in Section IV, called fixed DSN. In this experiment, DSN is trained with a fixed $(N, M)$ setting for trimmed videos and untrimmed videos respectively, and, to be consistent with training, we test with the same setting.
We choose random sampling and uniform sampling as baselines. They use the same clip-level classifier, and in the test phase randomly sample $N$ clips and uniformly sample $N$ clips from the whole video, respectively. We also compare DSN with a competitive method called max response sampling, which trains another observation network and uses the maximum classification score to sample clips. We then introduce the fixed DSN from Section IV, which trains the sampling module with a pre-trained, fixed classification module, and denote it as DSN-f. Finally, we list the results of dense sampling and oracle sampling, which give the most intuitive measurement of the performance of the classifier. All results are shown in Table I.
The clip-level classifiers of the baseline methods are trained on randomly sampled clips. When more than one clip is sampled, we average the predictions of all clips to obtain the final prediction and train the baseline accordingly. As shown in Table I, on UCF101 and HMDB51, DSN outperforms the random baseline by 2.6% and 5.9%, and the uniform baseline by 1.9% and 2.7%, respectively. For untrimmed videos, the performance improvements of DSN are more evident: on THUMOS14 and ActivityNet v1.3, DSN is 10.1% and 4.7% higher than the random baseline and 10.3% and 4.1% higher than the uniform baseline. These promising results demonstrate that different sampling policies have a huge impact on classification performance, and also validate that the sampling module has indeed learned effective sampling policies.
Max response sampling trains a separate observation network that outputs a classification score for each clip. The max response of a clip is defined as its maximum classification score over all classes. During testing, we select only the clip with the highest max response in each section as input to the classification module. For a fair comparison, we use ResNet2D-18 as the backbone of the observation network and sample clips from according to the maximum score during testing. We observe that DSN is better than max response sampling by 0.6% on UCF101, 3.6% on HMDB51, 6.7% on THUMOS14, and 1.6% on ActivityNet v1.3, which demonstrates that simply using the max response as the sampling policy yields only a limited performance improvement.
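The max-response baseline reduces to a simple per-section argmax over clip scores; a sketch, with random scores standing in for an actual observation network:

```python
import numpy as np

def max_response_sample(section_scores):
    """Pick, in each section, the clip whose maximum class score is highest.

    section_scores: list of (clips_in_section, num_classes) score arrays
    from an observation network (here just random numbers for illustration).
    Returns one selected clip index per section.
    """
    selected = []
    for scores in section_scores:
        max_response = scores.max(axis=1)  # best class score of each clip
        selected.append(int(max_response.argmax()))
    return selected

rng = np.random.default_rng(0)
sections = [rng.random((4, 10)) for _ in range(3)]  # 3 sections, 4 candidate clips each
print(max_response_sample(sections))
```

Note that this policy is purely confidence-driven: a clip on which the observation network is confidently wrong is still selected, which is one plausible reason the policy underperforms a learned sampler.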
DSN-f is a simplified training variant of DSN in which the parameters of the classification module are fixed and no longer updated during training; they are kept consistent with the baseline. Without joint training, DSN-f performs worse than DSN: on the four datasets, its accuracy is lower by 0.8%, 1.7%, 5.3%, and 1.8%, respectively. We attribute this to joint training: alternately optimizing the two modules lets the sampling module and the classification module each focus on its own task while adapting better to each other. Compared with DSN-f, joint training therefore optimizes the DSN model more fully, demonstrating that joint training is indispensable.
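The difference between DSN and DSN-f is purely one of training schedule. A control-flow sketch, assuming a simple epoch-level alternation (the actual alternation granularity used in the paper may differ), with caller-supplied update functions:

```python
def train_dsn(num_epochs, update_sampler_step, update_classifier_step,
              joint=True):
    """Alternating training schedule for the two DSN modules.

    update_sampler_step / update_classifier_step are one-epoch update
    functions supplied by the caller. `joint=False` mimics DSN-f, where
    the classification module stays frozen and only the sampler trains.
    """
    log = []
    for epoch in range(num_epochs):
        if joint and epoch % 2 == 0:
            update_classifier_step()   # cross-entropy SGD step
            log.append("classifier")
        else:
            update_sampler_step()      # reinforcement-learning step
            log.append("sampler")
    return log

# DSN: the modules alternate; DSN-f: only the sampling module is updated.
print(train_dsn(4, lambda: None, lambda: None, joint=True))
print(train_dsn(4, lambda: None, lambda: None, joint=False))
```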
The dense sampling results are the practical upper bound of the classifier; currently, the state-of-the-art results of most models are obtained by dense sampling. For the trimmed and untrimmed video datasets, we feed 10 clips and 20 clips, respectively, into the classification module. On the trimmed video datasets, DSN achieves performance comparable to dense sampling with 10 clips while using only one clip as input. On the untrimmed video datasets, DSN with 6 clips not only matches dense sampling with 20 clips on ActivityNet v1.3 but also attains a higher mAP than dense sampling on THUMOS14. Oracle sampling is the theoretical upper bound of any sampling policy: whenever a video contains at least one clip that the classifier can classify correctly, oracle sampling selects that clip as input. This bound is almost impossible to reach in practice. The remaining gap between DSN and oracle sampling shows that there is still room to further improve and optimize the clip sampling policy.
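The two bounds can be computed directly from per-clip predictions. A sketch with invented toy data, assuming the oracle counts a video as correct if any of its clips is correct:

```python
import numpy as np

def oracle_accuracy(clip_preds, labels):
    """Theoretical upper bound of any clip-sampling policy: a video counts
    as correct if ANY of its clips is classified correctly.

    clip_preds: (num_videos, num_clips) per-clip predicted labels.
    labels:     (num_videos,) ground-truth labels.
    """
    hits = (clip_preds == labels[:, None]).any(axis=1)
    return float(hits.mean())

def dense_accuracy(clip_probs, labels):
    """Dense-sampling baseline: average per-clip probabilities, then argmax.

    clip_probs: (num_videos, num_clips, num_classes).
    """
    preds = clip_probs.mean(axis=1).argmax(axis=-1)
    return float((preds == labels).mean())

# Toy example: 2 videos, 3 clips; video 0 has one correct clip, video 1 none.
preds = np.array([[2, 0, 1],
                  [0, 0, 0]])
labels = np.array([1, 2])
print(oracle_accuracy(preds, labels))
```

Any learned sampling policy, DSN included, necessarily lands between these two numbers.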
Study on different numbers of sampled clips. Having demonstrated the effectiveness of DSN, we now explore its performance curve by varying the number of sampled clips: we gradually increase the number from 1 to 10 and compare with the baselines and max response sampling from Table I. The experimental results are shown in Fig. 4.
For trimmed videos, when the number of clips exceeds 4, the advantage of DSN over the other methods gradually shrinks. We attribute this to the fact that most useless frames in trimmed videos have already been removed and trimmed videos are generally short; as the number of clips increases, most frames are eventually selected, and all methods gradually degenerate into dense sampling. For untrimmed video datasets, DSN has an advantage over the other methods when using a small number of clips, but beyond 3 clips on THUMOS14 and 5 clips on ActivityNet, the gain from adding clips gradually diminishes. The reason is that the useful frames of untrimmed videos are distributed unevenly and sparsely, so they are difficult to cover by simply increasing the number of clips. These results show that DSN almost reaches its peak performance with a small number of clips; in practical deployment with limited computing resources, computing only a few clips therefore already achieves very good results.
Study on training setting. After evaluating the performance trend of DSN, we investigate the effect of the number of sections and the number of clips in each section in the training sampling scheme. In training, we use the sampling schemes of , , and . In testing, the is the same as in training and is set to 3 for a fair comparison between different sampling schemes. The results are shown in Table II. When , simply increasing the number of clips in each section does not bring a significant performance improvement. The reason is that more clips per section enlarge the exploration space of reinforcement learning, which may lead to under-fitting: the sampling module needs more epochs to explore, and with the same number of training epochs it is unlikely to be trained sufficiently. When we increase from 1 to 3, we obtain 0.7% and 1.9% improvements on UCF101 and HMDB51, respectively. This is because more sections provide the sampling module and the classification module with more global semantic information, avoiding the imbalance of training samples caused by repeated local sampling. Moreover, for training the classification module, the conclusion of TSN indicates that multi-segment fusion training is significantly better than single-segment training. Therefore, when computational resources are sufficient, increasing the number of sections yields better performance.
Quantitative analysis of FLOPs. We study the trade-off between the computational overhead and the performance of DSN. In the experiments, R3D uses ResNet3D-18 as the backbone and R(2+1)D uses ResNet3D-34 as the backbone. As shown in Table III, on HMDB51, R3D with DSN uses only 3 clips and one third of the baseline FLOPs, yet its accuracy is 2% higher than the baseline; R(2+1)D uses only 4 clips and less than half of the baseline FLOPs while achieving performance comparable to the baseline. The effect is more obvious on THUMOS14. As shown in Table IV, R3D with DSN uses only 6 clips and is 1.6% higher than the baseline using 20 clips, while saving more than half of the FLOPs; R(2+1)D requires only one third of the FLOPs and achieves an accuracy improvement of 1.2%. This indicates that the DSN framework has great advantages in improving accuracy while saving computational cost.
V-D Comparison with the State of the Art
We compare DSN with the current state-of-the-art methods. In the experiments, R3D uses ResNet3D-18 as the backbone and R(2+1)D uses ResNet3D-34 as the backbone. The number of clips used by the other methods is also given in the tables.
Results on untrimmed video datasets. We list the performance of DSN models and other state-of-the-art approaches on THUMOS14 and ActivityNet v1.3. The results for both RGB and optical flow are shown in Table V and Table VI. DSN is trained with the setting of for R3D and for R(2+1)D. R3D and R(2+1)D use the same sampling scheme and or for testing.
As shown in Table V, on THUMOS14, DSN uses only 6 clips for both R3D and R(2+1)D, and their mAP is 1.6% and 1.2% higher than dense sampling when using RGB as input. The optical flow and two-stream results of R(2+1)D are higher than dense sampling by 4.9% and 2.1%, respectively. R(2+1)D trained with DSN uses only RGB and 6 clips as input, yet achieves better performance than the two-stream results of the other methods. We give more experimental results on ActivityNet v1.3 in Table VI. For R3D, both DSN and the baseline use 6 clips for training. When using RGB as input and sampling 6 clips from each video, the Top-1 accuracy of DSN is 1.1% higher than the baseline using 20 clips; with only half the input of the baseline, i.e., 10 clips, the Top-1 accuracy increases by 2.4%. For R(2+1)D, both DSN and the baseline use only one clip for training, and the Top-1 accuracy of DSN using only 10 clips is 0.3% higher than the baseline using 20 clips. On optical flow, DSN also achieves clear gains, outperforming dense sampling by 1.3% and 2.2% when using 6 clips and 10 clips, respectively. These results demonstrate that the DSN framework, trained with only video-level supervision, enables the sampling module to select useful clips from the video.
Results on trimmed video datasets. We further compare the performance of our method with the state-of-the-art approaches on UCF101 and HMDB51 in Table VII. We only list the accuracy of the models using RGB as input. The DSN models are trained with , and the testing sampling scheme is set to .
On UCF101, for the R3D backbone with the training scheme, the Top-1 accuracy of DSN using 4 clips is comparable to the baseline using 10 clips; with the training scheme, an input of 3 clips improves accuracy by nearly 1%. For R(2+1)D, DSN uses only 4 clips to match the baseline using 10 clips. The experimental results on HMDB51 are more pronounced: after R3D and R(2+1)D are trained by DSN with the training scheme, the results using 4 clips are 0.9% and 0.7% higher, respectively, than the baseline using 10 clips. These results demonstrate that although the trimmed datasets have removed most of the irrelevant frames that contain no action, the DSN framework can still filter out useless clips and further improve performance even when it has nearly saturated.
V-E Visualization and Discussion
After analyzing the quantitative results in terms of both recognition accuracy and computational overhead, we present visualizations of the sampling policies in this section. Fig. 5 illustrates sampled frames sorted by the output of the observation network in the sampling module: we show the 2 frames with the highest confidence and the 2 frames with the lowest confidence. The sampling module in the DSN framework can select discriminative action clips and avoid irrelevant clips that might degrade classifier training. In some videos, the action occupies only a short portion of the whole video and is interspersed with irrelevant content: for example, there may be useless preparation before the action happens, and even while the action is happening, the camera may turn to other irrelevant scenes. Such useless clips not only waste a large amount of computation but also degrade final recognition performance. The observations in Fig. 5 show that the motivation of DSN is of practical significance: it is necessary to sample frames according to an optimal policy to train a better clip-level action classifier.
VI Conclusions and Future Work
In this paper, we have presented a new framework for learning action recognition models, called Dynamic Sampling Networks (DSN). DSN is composed of a sampling module and a classification module, whose objectives are to dynamically select discriminative clips from an input video and to perform action recognition on the selected clips, respectively. The sampling module in DSN is optimized with an associative reinforcement learning algorithm, while the classification module is optimized by SGD with a cross-entropy loss. We provide two schemes to optimize the DSN framework: one iteratively optimizes the sampling and classification modules, while the other trains only the sampling module with a fixed classification module. We conduct extensive experiments on several action recognition datasets, covering both trimmed and untrimmed video recognition. As demonstrated on four challenging action recognition benchmarks, DSN greatly reduces the computational overhead while achieving slightly better or comparable accuracy to previous state-of-the-art approaches.
In the future, there are still many directions in which to improve the DSN framework. Compared with the current alternating optimization between the sampling and classification modules, we could use the reparameterization trick with Gumbel-Softmax to discretize the output of the sampling module, solving the non-differentiability of the clip selection operation and enabling end-to-end training of the DSN framework. In addition, the latest reinforcement learning algorithms, such as PPO, could be used to improve sample efficiency and robustness. Finally, temporal models such as LSTM could better exploit temporal information across different sections for better clip selection.
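The Gumbel-Softmax relaxation mentioned above can be sketched as follows. This is a forward-pass-only numpy illustration of sampling a (near) one-hot clip-selection vector; in an actual end-to-end implementation one would use an autograd framework (e.g. torch.nn.functional.gumbel_softmax), where the soft sample carries gradients and `hard=True` applies the straight-through estimator.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None, hard=True):
    """Draw a (near) one-hot clip-selection sample from sampler logits.

    tau is the temperature: lower tau makes the soft sample closer to
    one-hot. With hard=True, the forward output is a discrete choice.
    """
    if rng is None:
        rng = np.random.default_rng()
    # Gumbel(0, 1) noise; the small constants guard against log(0).
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape) + 1e-20) + 1e-20)
    y = (logits + gumbel) / tau
    y = np.exp(y - y.max())
    soft = y / y.sum()                 # relaxed (differentiable) sample
    if hard:
        one_hot = np.zeros_like(soft)
        one_hot[soft.argmax()] = 1.0
        return one_hot                 # forward: discrete clip choice
    return soft

sample = gumbel_softmax(np.array([2.0, 0.5, 0.1]), tau=0.5,
                        rng=np.random.default_rng(0))
print(sample)
```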
-  H. Wang and C. Schmid, “Action recognition with improved trajectories,” in ICCV, 2013, pp. 3551–3558.
-  K. Simonyan and A. Zisserman, “Two-stream convolutional networks for action recognition in videos,” in NIPS, 2014, pp. 568–576.
-  A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and F. Li, “Large-scale video classification with convolutional neural networks,” in CVPR, 2014, pp. 1725–1732.
-  D. Tran, L. D. Bourdev, R. Fergus, L. Torresani, and M. Paluri, “Learning spatiotemporal features with 3d convolutional networks,” in ICCV, 2015, pp. 4489–4497.
-  L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool, “Temporal segment networks for action recognition in videos,” TPAMI, vol. 41, no. 11, pp. 2740–2755, 2019.
-  J. Carreira and A. Zisserman, “Quo vadis, action recognition? A new model and the kinetics dataset,” in CVPR, 2017, pp. 4724–4733.
-  D. Tran, H. Wang, L. Torresani, J. Ray, Y. LeCun, and M. Paluri, “A closer look at spatiotemporal convolutions for action recognition,” in CVPR, 2018, pp. 6450–6459.
-  L. Wang, W. Li, W. Li, and L. V. Gool, “Appearance-and-relation networks for video classification,” in CVPR, 2018, pp. 1430–1439.
-  S. Xie, C. Sun, J. Huang, Z. Tu, and K. Murphy, “Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification,” in ECCV, 2018, pp. 318–335.
-  X. Wang, R. B. Girshick, A. Gupta, and K. He, “Non-local neural networks,” in CVPR, 2018, pp. 7794–7803.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in CVPR, 2016, pp. 770–778.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in NIPS, 2012, pp. 1106–1114.
-  R. B. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in CVPR, 2014, pp. 580–587.
-  J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in CVPR, 2015, pp. 3431–3440.
-  K. He, G. Gkioxari, P. Dollár, and R. B. Girshick, “Mask R-CNN,” in ICCV, 2017, pp. 2980–2988.
-  Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
-  S. Ji, W. Xu, M. Yang, and K. Yu, “3d convolutional neural networks for human action recognition,” TPAMI, vol. 35, no. 1, pp. 221–231, 2013.
-  A. G. Barto, R. S. Sutton, and P. S. Brouwer, “Associative search network: A reinforcement learning associative memory,” Biological Cybernetics, vol. 40, no. 3, pp. 201–211, 1981.
-  R. S. Sutton and A. G. Barto, “Reinforcement learning: An introduction,” TNNLS, vol. 9, no. 5, pp. 1054–1054, 1998.
-  J. Langford and T. Zhang, “The epoch-greedy algorithm for contextual multi-armed bandits,” in NIPS, 2007, pp. 817–824.
-  K. Soomro, A. R. Zamir, and M. Shah, “UCF101: A dataset of 101 human actions classes from videos in the wild,” CoRR, vol. abs/1212.0402, 2012.
-  H. Kuehne, H. Jhuang, E. Garrote, T. A. Poggio, and T. Serre, “HMDB: A large video database for human motion recognition,” in ICCV, 2011, pp. 2556–2563.
-  Y.-G. Jiang, J. Liu, A. Roshan Zamir, G. Toderici, I. Laptev, M. Shah, and R. Sukthankar, “THUMOS challenge: Action recognition with a large number of classes,” 2014.
-  F. C. Heilbron, V. Escorcia, B. Ghanem, and J. C. Niebles, “Activitynet: A large-scale video benchmark for human activity understanding,” in CVPR, 2015, pp. 961–970.
-  L. Wang, Y. Qiao, and X. Tang, “Action recognition with trajectory-pooled deep-convolutional descriptors,” in CVPR, 2015, pp. 4305–4314.
-  D. Huang, V. Ramanathan, D. Mahajan, L. Torresani, M. Paluri, L. Fei-Fei, and J. C. Niebles, “What makes a video a video: Analyzing temporal information in video understanding models and datasets,” in CVPR, 2018, pp. 7366–7375.
-  Z. Liu, D. Luo, Y. Wang, L. Wang, Y. Tai, C. Wang, J. Li, F. Huang, and T. Lu, “Teinet: Towards an efficient architecture for video recognition,” in AAAI, 2020, pp. 11 669–11 676.
-  C. Feichtenhofer, H. Fan, J. Malik, and K. He, “Slowfast networks for video recognition,” in ICCV, 2019, pp. 6201–6210.
-  J. Y. Ng, M. J. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici, “Beyond short snippets: Deep networks for video classification,” in CVPR, 2015, pp. 4694–4702.
-  J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, T. Darrell, and K. Saenko, “Long-term recurrent convolutional networks for visual recognition and description,” in CVPR, 2015, pp. 2625–2634.
-  W. Zhu, J. Hu, G. Sun, X. Cao, and Y. Qiao, “A key volume mining deep framework for action recognition,” in CVPR, 2016, pp. 1991–1999.
-  L. Wang, Y. Xiong, D. Lin, and L. V. Gool, “Untrimmednets for weakly supervised action recognition and detection,” in CVPR, 2017, pp. 6402–6411.
-  H. Fan, Z. Xu, L. Zhu, C. Yan, J. Ge, and Y. Yang, “Watching a small portion could be as good as watching all: Towards efficient video classification,” in IJCAI, 2018, pp. 705–711.
-  S. Srinivas and R. V. Babu, “Data-free parameter pruning for deep neural networks,” in BMVC, 2015, pp. 1–12.
-  S. Han, J. Pool, J. Tran, and W. J. Dally, “Learning both weights and connections for efficient neural network,” in NIPS, 2015, pp. 1135–1143.
-  W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen, “Compressing neural networks with the hashing trick,” in ICML, 2015, pp. 2285–2294.
-  V. Lebedev and V. S. Lempitsky, “Fast convnets using group-wise brain damage,” in CVPR, 2016, pp. 2554–2564.
-  Z. Wu, T. Nagarajan, A. Kumar, S. Rennie, L. S. Davis, K. Grauman, and R. S. Feris, “Blockdrop: Dynamic inference paths in residual networks,” in CVPR, 2018, pp. 8817–8826.
-  G. E. Hinton, O. Vinyals, and J. Dean, “Distilling the knowledge in a neural network,” CoRR, vol. abs/1503.02531, 2015.
-  A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio, “Fitnets: Hints for thin deep nets,” CoRR, vol. abs/1412.6550, 2014.
-  A. K. Balan, V. Rathod, K. P. Murphy, and M. Welling, “Bayesian dark knowledge,” in NIPS, 2015, pp. 3438–3446.
-  P. Luo, Z. Zhu, Z. Liu, X. Wang, and X. Tang, “Face model compression by distilling knowledge from neurons,” in AAAI, 2016, pp. 3560–3566.
-  T. Chen, I. J. Goodfellow, and J. Shlens, “Net2net: Accelerating learning via knowledge transfer,” in ICLR, 2016.
-  A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “Mobilenets: Efficient convolutional neural networks for mobile vision applications,” CoRR, vol. abs/1704.04861, 2017.
-  X. Yu, Z. Yu, and S. Ramalingam, “Learning strict identity mappings in deep residual networks,” in CVPR, 2018, pp. 4432–4440.
-  M. Zolfaghari, K. Singh, and T. Brox, “ECO: efficient convolutional network for online video understanding,” in ECCV, 2018, pp. 713–730.
-  T. Yu, T. Kim, and R. Cipolla, “Real-time action recognition by spatiotemporal semantic and structural forests,” in BMVC, 2010, pp. 1–12.
-  B. Korbar, D. Tran, and L. Torresani, “Scsampler: Sampling salient clips from video for efficient action recognition,” CoRR, vol. abs/1904.04289, 2019.
-  J. Gao, Z. Yang, and R. Nevatia, “RED: reinforced encoder-decoder networks for action anticipation,” in BMVC, 2017.
-  S. Yeung, O. Russakovsky, G. Mori, and L. Fei-Fei, “End-to-end learning of action detection from frame glimpses in videos,” in CVPR, 2016, pp. 2678–2687.
-  X. Wang, W. Chen, J. Wu, Y. Wang, and W. Y. Wang, “Video captioning via hierarchical reinforcement learning,” in CVPR, 2018, pp. 4213–4222.
-  Z. Wu, C. Xiong, C. Ma, R. Socher, and L. S. Davis, “Adaframe: Adaptive frame selection for fast video recognition,” in CVPR, 2019, pp. 1278–1287.
-  S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
-  W. Dong, Z. Zhang, and T. Tan, “Attention-aware sampling via deep reinforcement learning for action recognition,” in AAAI, 2019, pp. 8247–8254.
-  D. Tran, J. Ray, Z. Shou, S. Chang, and M. Paluri, “Convnet architecture search for spatiotemporal feature learning,” CoRR, vol. abs/1708.05038, 2017.
-  G. Farnebäck, “Two-frame motion estimation based on polynomial expansion,” in SCIA, ser. Lecture Notes in Computer Science, J. Bigün and T. Gustavsson, Eds., vol. 2749. Springer, 2003, pp. 363–370.
-  B. Zhang, L. Wang, Z. Wang, Y. Qiao, and H. Wang, “Real-time action recognition with enhanced motion vector cnns,” in CVPR, 2016, pp. 2718–2726.
-  L. Wang, Y. Qiao, and X. Tang, “Action recognition and detection by combining motion and appearance features,” THUMOS14 Action Recognition Challenge, vol. 1, no. 2, p. 2, 2014.
-  M. Jain, J. C. van Gemert, and C. G. M. Snoek, “What do 15, 000 object categories tell us about classifying and localizing actions?” in CVPR, 2015, pp. 46–55.
-  M. Jain, J. van Gemert, C. G. Snoek et al., “University of amsterdam at thumos challenge 2014,” ECCV THUMOS Challenge, vol. 2014, 2014.
-  G. Varol and A. A. Salah, “Extreme learning machine for large-scale action recognition,” in ECCV workshop, vol. 7. Citeseer, 2014, p. 8.
-  Z. Qiu, T. Yao, and T. Mei, “Learning spatio-temporal representation with pseudo-3d residual networks,” in ICCV, 2017, pp. 5534–5542.
-  C. Zhu, X. Tan, F. Zhou, X. Liu, K. Yue, E. Ding, and Y. Ma, “Fine-grained video categorization with redundancy reduction attention,” in ECCV, 2018, pp. 139–155.
-  W. Wu, D. He, X. Tan, S. Chen, and S. Wen, “Multi-agent reinforcement learning based frame sampling for effective untrimmed video recognition,” CoRR, vol. abs/1907.13369, 2019.
-  Y. Chen, Y. Kalantidis, J. Li, S. Yan, and J. Feng, “Multi-fiber networks for video recognition,” in ECCV, 2018, pp. 364–380.
-  J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal policy optimization algorithms,” CoRR, vol. abs/1707.06347, 2017.