Similarity R-C3D for Few-shot Temporal Activity Detection

12/25/2018 ∙ by Huijuan Xu, et al. ∙ Boston University ∙ UC Berkeley

Many activities of interest are rare events, with only a few labeled examples available. Models for temporal activity detection that can learn from a few examples are therefore desirable. In this paper, we present a conceptually simple, general, yet novel framework for few-shot temporal activity detection which detects the start and end times of the few-shot input activities in an untrimmed video. Our model is end-to-end trainable and can benefit from more few-shot examples. At test time, each proposal is assigned the label of the few-shot activity class with the maximum similarity score. Our Similarity R-C3D method outperforms previous work on three large-scale benchmarks for temporal activity detection (the THUMOS'14, ActivityNet1.2, and ActivityNet1.3 datasets) in the few-shot setting. Our code will be made available.


1 Introduction

As devices with video recording capabilities proliferate, the amount of video data is exploding. Most videos are untrimmed and contain few interesting events, with the majority of the content being background. Temporal activity detection aims to automatically detect the start and end times of interesting events in untrimmed video and enables many real-world applications, e.g., unusual event detection in surveillance video. The fact that these events are rare means that models must detect them from few training examples.

Few-shot activity detection [37] addresses this problem via example-based action detection with a few training examples as input. The approach of [37] is based on the Matching Network and utilizes the correlations between sliding-window feature encodings and few-shot example video features to localize actions of previously unseen classes. However, their model uses the "sliding window" approach to propose likely activities (proposals), and thus suffers from high computational cost and inflexible activity boundaries. To localize proposals for few-shot detection more accurately, in this paper we propose an approach based on two-stage proposal/refinement networks using direct coordinate regression for localization, called Similarity R-C3D. Our Similarity R-C3D model (Fig. 1) is derived from the two-stage activity detection framework R-C3D [36], with the second classification stage re-purposed for class-agnostic proposal classification. The entire untrimmed video is encoded by 3D convolutional (C3D) features [31], on top of which a two-stage proposal network is applied. Refined proposals (foreground events) from the second stage are paired with a binary proposal score and then compared with few-shot examples using a cosine similarity function.

Figure 1: Overview of the proposed Similarity R-C3D model for few-shot activity detection in untrimmed video. Our model efficiently encodes video with 3D convolutions, proposes foreground segments (action proposals) and assigns labels to them based on their similarity to the one/few-shot training examples of activities.

To extract features for the few-shot example inputs, we design an additional branch which shares weights with the main C3D branch. After feature extraction, we compare each proposal's features with each few-shot example's features to produce a cosine similarity score, and each proposal is assigned the few-shot class label corresponding to the maximum similarity score. The final detection is determined by both the proposal score and the similarity score: the proposal score filters out background activities, and the similarity score assigns the few-shot class label. Our model is trained following the same meta-learning setting as in [37] to enable a fair comparison.

Despite being an end-to-end model, [37] does not demonstrate the data-driven benefits of end-to-end learning in reported results. Namely, as the model sees more few-shot example videos, the detection results do not show an obvious improvement. This unsatisfactory trend might also be related to the localization strategy based on rigid sliding windows adopted by the method [37]. Our model is end-to-end trainable, and demonstrates the advantages of feature learning in an end-to-end fashion. As more few-shot example videos are observed by the model, the detection results improve significantly.

We perform extensive comparisons of Similarity R-C3D to previous activity detection methods in the few-shot setting using three publicly available benchmark datasets (THUMOS'14 [10], ActivityNet1.2 and ActivityNet1.3 [8]), and achieve new state-of-the-art results on all three.

To summarize, the main contributions of our paper are:

  • Similarity R-C3D, a few-shot activity detection model with a two-stage proposal network for the efficient and accurate detection of potential events;

  • an end-to-end few-shot branch for feature extraction and similarity calculation from few-shot examples which can benefit from additional examples;

  • extensive evaluation on three activity detection datasets that demonstrates the general applicability of our model.

Figure 2: An overview of our proposed Similarity R-C3D network for few-shot activity detection. Given an untrimmed test video, and one- or few-shot training examples of an activity class, the network detects instances of the given activities in the test video and localizes their start and end times. The model has three components: (1) a 3D ConvNet feature extractor applied to both the untrimmed input video and the few-shot trimmed videos, (2) a two-stage temporal proposal network, and (3) a similarity network that computes similarity scores between proposals and few-shot videos and assigns labels.

2 Related Work

2.1 Activity Detection

There are two types of activity detection tasks: spatio-temporal and temporal-only. Spatio-temporal activity detection localizes activities within spatio-temporal tubes and requires heavier annotation work to collect the training data. [7, 22, 35, 39, 43] temporally track bounding boxes corresponding to activities in each frame to realize spatio-temporal activity detection. We focus on temporal activity detection [4, 14, 18, 26, 28, 38], which only predicts the start and end times of the activities within long untrimmed videos and classifies the overall activity without spatially localizing people and objects in the frame.

Existing temporal activity detection approaches are dominated by models that use sliding windows to generate segments and subsequently classify them with activity classifiers trained on multiple features [11, 19, 26, 33]. The use of exhaustive sliding windows is computationally inefficient and constrains the boundaries of the detected activities. Recently, some approaches have bypassed the need for exhaustive sliding-window search and proposed to detect activities with arbitrary lengths [4, 14, 18, 28, 38]. They achieve this by modeling the temporal evolution of activities using RNNs or LSTM networks and predicting an activity label at each time step. More recently, CDC [24] and SSN [42] propose bottom-up activity detection, first predicting labels at the frame/snippet level and then fusing them.

The R-C3D activity detection pipeline [36] encodes the frames with fully-convolutional 3D filters, finds activity proposals, then classifies and refines them based on pooled features within their boundaries. We adapt the second classification stage to be a proposal refinement stage to obtain more accurate proposals used in our few-shot setting.

Aside from supervised activity detection, recent work [34] has addressed weakly supervised activity localization from data labeled only with video-level class labels by learning attention weights on uniformly sampled proposals. A similar problem setting is addressed in [25]. Another form of weakly supervised temporal activity localization assumes that, given an ordered action sequence (without ground truth temporal annotations for each action) and a paired video, the model must localize each action within the video under the sequential-order constraint [21, 3, 9]. Our paper addresses a different type of partially-supervised task, called few-shot temporal activity detection, where only a few labeled examples are available for each class.

2.2 Few-shot Learning

The goal of few-shot learning is to generalize to new examples despite few available training labels. The conventional setting tests on novel classes only, while a generalized setting tests on both seen and unseen classes. Metric learning methods based on measuring similarity to few-shot example inputs are widely employed by many few-shot learning algorithms that generate good results [12, 27, 29, 32]. Similarity values are typically defined on selected attributes or word vectors [16, 40]. Another line of few-shot learning algorithms augments data by synthesizing examples for the unseen labels [13, 41]. Other approaches train deep learning models to learn to update the parameters for few-shot example inputs [1, 20, 2].

Most few-shot models utilize the meta-learning framework, training on a set of datasets with the goal of learning knowledge that transfers among them [23, 17]. MAML [5] is one typical example, which combines the meta-learner and the learner into one and directly computes the gradient with respect to the meta-learning objective. [15] studies zero-shot spatio-temporal action localization using spatial-aware object embeddings. [37] is the first work to propose the task of temporal activity detection from few-shot input examples. It performs activity retrieval for the few-shot example(s) through a sliding-window approach and then combines the bottom-up retrieval results to obtain the final temporal activity detection. That work follows the conventional setting of testing on novel classes only, and uses the MAML training strategy. In our paper, we work along this direction and propose a few-shot temporal activity detection model based on a two-stage proposal detector to achieve more accurate temporal localization. For a fair comparison, we adopt the same settings as [37].

3 Approach

In this paper, we tackle the task of few-shot temporal activity detection, which can also be framed as example-based temporal activity detection. The goal is to locate all instances of activities (temporal segments) in an untrimmed test video, given only a few typical examples from a new set of activity classes for training. We propose a fast and end-to-end model called Similarity based Region Convolutional 3D Network (Similarity R-C3D) to solve the few-shot temporal activity detection task. Our model builds on the state-of-the-art R-C3D model for fast activity detection [36], augmenting it with an extra branch for extracting few-shot example video features and a similarity computation subnet. Our key idea is to assign one of the few-shot class labels to each detected proposal based on the maximum similarity score between each proposal’s features and the features of the few-shot example videos.

3.1 Model Overview

An overview of our proposed Similarity R-C3D network is illustrated in Figure 2. The model consists of three components: a 3D ConvNet feature extractor [31] that encodes both the untrimmed input video and the few-shot trimmed example videos, a two-stage temporal proposal network, and a similarity network for few-shot class classification. The two-stage proposal subnet predicts temporal segments of variable length that contain potential activities, while the similarity network classifies these proposals into one of the few-shot activity categories. The two-stage proposal subnet and the few-shot trimmed video feature extractor share the same C3D weights.

Next, we describe the shared video feature hierarchies in Sec. 3.2, the two-stage temporal proposal subnet in Sec. 3.3 and the similarity classification subnet in Sec. 3.4. Sections 3.5 and 3.6 detail the optimization strategy during training and testing respectively.

3.2 3D Convolutional Feature Hierarchies

We employ the same 3D ConvNet [31] as in the R-C3D model to extract rich spatio-temporal feature hierarchies for our Similarity R-C3D model. In our example-based classification setup, the model accepts two types of input videos, namely, the untrimmed video and the trimmed few-shot example videos. These two types of videos are processed by two network branches that share the same 3D ConvNet weights and extract features for further similarity calculation in Sec. 3.4.

Suppose the input sequence of RGB video frames has dimensions 3 × L × H × W. The height (H) and width (W) of the frames are both set to 112 in our experiments, following [31]. For the bottom few-shot branch in Figure 2, we follow the traditional use of the 3D ConvNet [31] and sample frames uniformly from each few-shot example video, obtaining a fixed-dimensional feature vector for each video (see "One-shot features" in Figure 2). The number of frames sampled from each video in the few-shot branch is set to 16, the standard C3D clip length. For the upper branch, which encodes the untrimmed input video, we use fully convolutional layers, so the number of frames L can be arbitrary; in our experiments we set it to 512 or 768, depending on the dataset. We adopt the convolutional layers (conv1a to conv5b) of C3D, producing a feature map of dimensions 512 × (L/8) × (H/16) × (W/16), where 512 is the channel dimension of layer conv5b. We use the conv5b activations as the shared input to our two-stage proposal subnet in Sec. 3.3.
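As a concrete illustration of the feature hierarchy above, the following minimal sketch computes the conv5b output shape, assuming the standard C3D downsampling factors (temporal stride of 8, spatial stride of 16 at conv5b); the helper name is ours, not part of any released code:

```python
def conv5b_shape(num_frames, height=112, width=112, channels=512,
                 t_stride=8, s_stride=16):
    """Output shape of the C3D conv5b feature map for an input clip.

    Assumes the standard C3D downsampling: the temporal dimension
    shrinks by a factor of 8 and the spatial dimensions by 16.
    """
    return (channels,
            num_frames // t_stride,
            height // s_stride,
            width // s_stride)

# Untrimmed-branch input of 512 frames at 112x112 resolution:
print(conv5b_shape(512))  # -> (512, 64, 7, 7)
```

With the 768-frame buffer used for ActivityNet, the same computation yields a 96-step temporal axis.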

3.3 Two-stage Proposal Subnet

Proposal Stage I: The first proposal stage is similar to the proposal stage in the original R-C3D model. The input to this stage is the conv5b feature encoding of the untrimmed video. A series of 3D convolutional filters with kernel size 3×3×3, followed by a 3D max-pooling filter with kernel size 1×2×2, are applied to the input to produce a temporal-only feature map of size 512 × (L/8) × 1 × 1. We pre-define a set of anchor segments on this map as fixed multiscale windows centered at the L/8 uniformly distributed temporal locations. The 512-dimensional feature vector at each temporal location is used to predict a relative offset (δc_i, δl_i) to the center location c_i and the length l_i of each anchor segment i. These anchors, with the predicted center offsets and lengths, define the initial set of temporal proposals. The network also predicts a binary label (score) that indicates whether each proposal is a foreground activity or background. Two convolutional layers on top of the temporal feature map predict the proposal offsets and scores.
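The anchor construction can be sketched as follows; the scale values in the example are hypothetical placeholders (the actual scales are dataset-dependent and not specified here), and each of the L/8 temporal locations receives one anchor per scale:

```python
def generate_anchors(num_locations, scales, temporal_stride=8):
    """Return (center, length) anchor segments in frame units.

    One multiscale set of windows is centered at each of the
    `num_locations` temporal positions of the pooled feature map.
    """
    anchors = []
    for t in range(num_locations):
        center = (t + 0.5) * temporal_stride  # map position -> frame index
        for scale in scales:
            anchors.append((center, scale * temporal_stride))
    return anchors

# 64 temporal locations (a 512-frame buffer), three hypothetical scales:
anchors = generate_anchors(64, scales=[2, 4, 8])
print(len(anchors))  # -> 192
```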

Proposal Stage II: In the original R-C3D model the second stage classifies the proposals predicted by the first stage into action categories, training on many labeled examples. In our few-shot model we change this second stage into another class-agnostic proposal stage which refines the proposal boundaries from Stage I. Here, high-quality proposals from Proposal Stage I are selected by score thresholding and Non-Maximum Suppression (NMS). We then use 3D RoI pooling [36] to extract fixed-size features for each variable-length proposal from the conv5b features (shared with Proposal Stage I). The output of the 3D RoI pooling is fed to a series of fully connected layers to extract proposal features. We then adopt two separate fully-connected layers (a classification and a regression layer) to again classify proposals into foreground/background and predict refined start and end times. For each proposal, the features extracted by the aforementioned fully-connected layers after the 3D RoI pooling layer are also used in the Similarity Network to compute similarities to the few-shot video features (see Sec. 3.4).

Training: For training, positive/negative labels are assigned to the anchor segments in Proposal Stage I, and to candidate proposals in Proposal Stage II, based on temporal Intersection-over-Union (tIoU) thresholds. All others are held out from training. We sample balanced batches of positive and negative examples. For the proposal boundary regression, ground truth activity segments are transformed with respect to either nearby anchor segments or initial proposals from Proposal Stage I. The pair (δc_i, δl_i) represents the coordinate transformation of ground truth segments to anchor segments or proposals, computed as follows:

δc_i = (c*_i − c_i) / l_i,    δl_i = log(l*_i / l_i)    (1)

where c_i and l_i are the center location and the length of anchor segments or proposals, and c*_i and l*_i are the center location and length of the ground truth activity segments.
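The transformation and its inverse (applied at prediction time, Sec. 3.6) can be sketched as a small pair of helpers; this mirrors the standard box-regression parameterization, which we assume here:

```python
import math

def encode(gt_center, gt_length, center, length):
    """Coordinate transform of a ground-truth segment w.r.t. an anchor/proposal."""
    dc = (gt_center - center) / length
    dl = math.log(gt_length / length)
    return dc, dl

def decode(dc, dl, center, length):
    """Inverse transform: recover a segment from predicted offsets."""
    return center + dc * length, length * math.exp(dl)

# Round-trip check: encoding then decoding recovers the original segment.
c, l = decode(*encode(100.0, 40.0, 96.0, 32.0), 96.0, 32.0)
print(round(c, 6), round(l, 6))  # -> 100.0 40.0
```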

3.4 Similarity Network for Few-shot Classification

After obtaining proposals from the two-stage proposal network, we use each proposal's features f_i to compute a similarity score with the few-shot video features g_j. We produce a Similarity Score matrix S by calculating such a score for each (proposal, few-shot video) pair. For each proposal i and few-shot video j, S_ij is computed as the cosine similarity between the feature embedding vectors f_i and g_j: S_ij = (f_i · g_j) / (‖f_i‖ ‖g_j‖).

Once we have the Similarity Score matrix S between the proposal features and the few-shot video features, we can label each proposal with a few-shot class label. In the one-shot setting, argmax is applied along the few-shot dimension of S for each proposal: y_i = argmax_j S_ij. In the few-shot setting, we first average the similarity scores of the multiple examples belonging to the same class, and then assign the label by taking the argmax among the few-shot class labels.
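A minimal sketch of the scoring and label assignment in plain Python (the toy feature vectors are hypothetical), including the per-class averaging used in the few-shot setting:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def assign_label(proposal_feat, examples_by_class):
    """Average cosine similarity over each class's examples; return the best class."""
    class_scores = {
        cls: sum(cosine(proposal_feat, g) for g in feats) / len(feats)
        for cls, feats in examples_by_class.items()
    }
    best = max(class_scores, key=class_scores.get)
    return best, class_scores[best]

# Two few-shot classes: class 0 with one example, class 1 with two.
examples = {0: [[1.0, 0.0]], 1: [[0.0, 1.0], [0.1, 0.9]]}
print(assign_label([0.0, 2.0], examples)[0])  # -> 1
```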

Training: For training the multi-class classifier in the Similarity Network, we follow the same rules as in the Proposal Stage II to assign positive/negative labels, with the only difference being that now the positive label is not a binary proposal label but one of the few-shot class labels.

3.5 Optimization

We train the network by optimizing the binary proposal classification and regression tasks jointly for the two-stage proposal subnet, as well as optimizing the multi-class classification task in the few-shot Similarity Network. The softmax loss function is used for proposal binary classification, and the smooth L1 loss function [6] is used for regression. Specifically, the objective function used in Proposal Stage I is given by:

L = (1/N_cls) Σ_i L_cls(a_i, a*_i) + λ (1/N_reg) Σ_i a*_i L_reg(t_i, t*_i)    (2)

where N_cls and N_reg stand for the batch size and the number of anchor/proposal segments, and λ is the loss trade-off parameter. For each anchor/proposal segment i in a batch, a_i is its predicted probability, a*_i is the ground truth label, t_i represents the relative offsets predicted for anchor segments or proposals, and t*_i = (δc_i, δl_i) represents the coordinate transformation of ground truth segments to anchor segments or proposals (Equation 1). A similar objective function is used in Proposal Stage II.

The softmax loss function is also used for the few-shot multi-class classification. All five losses from Proposal Stage I, Proposal Stage II and the few-shot Similarity Network are optimized jointly, resulting in the following total loss:

L_total = L_cls^I + λ L_reg^I + L_cls^II + λ L_reg^II + L_sim    (3)

3.6 Prediction

Few-shot temporal activity detection in Similarity R-C3D consists of two steps.

First, the two-stage proposal subnet generates candidate proposals and predicts the start/end time offsets as well as a proposal score for each. The proposals from Stage I are filtered via NMS with a threshold of 0.7. After NMS, the selected proposals are fed to the Proposal Stage II network to again perform binary proposal classification, and the boundaries of the predicted proposals are further refined by the regression layer. The boundary prediction in both proposal stages is in the form of relative displacements of the center point and length of segments. To recover the start and end times of the predicted proposals, the inverse of the coordinate transformation in Equation 1 is applied.
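The NMS step can be sketched as standard greedy suppression over 1-D segments (a minimal reference implementation under that assumption, not the exact production code):

```python
def temporal_nms(segments, scores, iou_thresh=0.7):
    """Greedy NMS over (start, end) segments; returns kept indices."""
    def tiou(a, b):
        inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
        union = (a[1] - a[0]) + (b[1] - b[0]) - inter
        return inter / union if union > 0 else 0.0

    # Visit segments in descending score order, dropping heavy overlaps.
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order
                 if tiou(segments[best], segments[i]) <= iou_thresh]
    return keep

segs = [(0, 10), (1, 11), (20, 30)]
print(temporal_nms(segs, [0.9, 0.8, 0.7]))  # -> [0, 2]
```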

Second, proposals from Proposal Stage II are used to compute similarity scores with the few-shot trimmed video features, and each proposal is assigned the class label with the maximum similarity score. In the multi-shot case, the scores are first averaged within each class, and then the maximum is taken among classes. Each proposal from Stage II is thus paired with a proposal score and a few-shot class with maximum similarity score, and the final activity detection is determined by thresholding both scores.
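The final selection reduces to a joint thresholding over the two scores; a sketch, using the THUMOS'14 thresholds from Sec. 4.1 as illustrative defaults and a hypothetical tuple layout for detections:

```python
def final_detections(detections, prop_thresh=0.05, sim_thresh=0.02):
    """Keep detections whose proposal AND similarity scores pass their thresholds.

    Each detection is (start, end, proposal_score, class_label, similarity_score).
    """
    return [d for d in detections
            if d[2] >= prop_thresh and d[4] >= sim_thresh]

dets = [(0.0, 4.2, 0.90, 3, 0.45),   # kept
        (5.0, 7.5, 0.01, 1, 0.60),   # dropped: proposal score too low
        (8.0, 9.9, 0.70, 2, 0.01)]   # dropped: similarity score too low
print(len(final_detections(dets)))  # -> 1
```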

4 Experiments

We evaluate Similarity R-C3D on three large-scale activity detection datasets: THUMOS'14 [10], ActivityNet1.2 and ActivityNet1.3 [8]. Sections 4.1, 4.2 and 4.3 provide the experimental details and evaluation results on these three datasets.

We use the same few-shot problem setup as in the previous work [37], with the testing classes (novel classes) having no overlap with the training classes. We split the classes in each dataset into two subsets, train on one subset and test on the other. The specific split for each dataset is introduced in the corresponding section. We also use the same meta-learning setup as in [37] for a fair comparison with its results. During the meta-training stage, in each training iteration we randomly sample five classes from the subset of training classes, and then for each class we randomly sample one trimmed example video (in the one-shot setting) to constitute a training batch as the input for the few-shot branch. In the five-shot setting, five example videos are randomly sampled for each class, and the final similarity value for each class is averaged over the five example videos of the same class. Notably, the five randomly sampled classes for the few-shot branch must include at least one class that appears in the ground truth of the randomly sampled untrimmed input video. The five sampled activity classes are assigned labels from 0-4 randomly.
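The episode construction described above can be sketched as follows (the helper and class identifiers are hypothetical; only the sampling logic follows the text):

```python
import random

def sample_episode(train_classes, video_gt_classes, n_way=5, rng=None):
    """Sample an n-way episode whose classes overlap the video's ground truth.

    One class is drawn from the video's ground-truth classes to guarantee
    the required overlap; the rest come from the remaining pool. The sampled
    classes then receive random episode labels 0..n_way-1.
    """
    rng = rng or random.Random()
    anchor = rng.choice(sorted(video_gt_classes))
    pool = [c for c in train_classes if c != anchor]
    chosen = [anchor] + rng.sample(pool, n_way - 1)
    rng.shuffle(chosen)  # random 0..4 label assignment
    return {cls: label for label, cls in enumerate(chosen)}

# 180 training classes; the untrimmed video contains classes 17 and 42.
labels = sample_episode(range(180), {17, 42}, rng=random.Random(0))
print(sorted(labels.values()))  # -> [0, 1, 2, 3, 4]
```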

During the meta-testing stage, the test data preparation is the same as in the meta-training stage, but on the subset of testing classes. Results are reported in terms of mean Average Precision, mAP@α, where α denotes the temporal Intersection over Union (tIoU) threshold, and average mAP at 10 evenly distributed tIoU thresholds between 0.5 and 0.95, as is common practice in the literature. However, in the meta-testing setting, mAP@0.5 and average mAP are calculated in each iteration for one untrimmed test video, and the final results are reported by averaging over 1000 iterations.
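The evaluation relies on the temporal IoU between predicted and ground-truth segments; a sketch of the tIoU computation and the threshold grid used for average mAP:

```python
def tiou(seg_a, seg_b):
    """Temporal Intersection-over-Union of two (start, end) segments."""
    inter = max(0.0, min(seg_a[1], seg_b[1]) - max(seg_a[0], seg_b[0]))
    union = (seg_a[1] - seg_a[0]) + (seg_b[1] - seg_b[0]) - inter
    return inter / union if union > 0 else 0.0

# 10 evenly spaced tIoU thresholds between 0.5 and 0.95 for average mAP:
thresholds = [0.5 + 0.05 * i for i in range(10)]

print(round(tiou((0.0, 10.0), (5.0, 15.0)), 4))  # -> 0.3333
print(len(thresholds))                           # -> 10
```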

4.1 Experiments on THUMOS’14

Experimental Setup: The THUMOS'14 activity detection dataset contains videos of 20 different sport activities. The training set contains 2765 trimmed videos from the UCF101 dataset [30], while the validation and test sets contain 200 and 213 untrimmed videos respectively. The 20 sport activities in THUMOS'14 are a subset of the 101 classes in UCF101. The 20 classes are split into 6 classes (Thumos-val-6) for training and 14 classes (Thumos-test-14) for testing, as in [37]. In the THUMOS'14 dataset, the trimmed videos in the few-shot branch come from UCF101.

For the untrimmed input video, a buffer of 512 frames sampled at 25 frames per second (fps) acts as input to the two-stage proposal network. We first pretrain the two-stage proposal network using the proposal annotations from Thumos-val-6, starting from the C3D weights trained on Sports-1M released by the authors of [31]. Then we use the pre-trained two-stage proposal network weights to initialize the entire Similarity R-C3D network, and allow all layers to be trained with a fixed learning rate. The proposal score threshold is set to 0.05 and the similarity score threshold to 0.02.

Results: In Table 1, we present a comparative evaluation of the one-shot and five-shot temporal activity detection performance of Similarity R-C3D against existing state-of-the-art approaches (for CDC, we use the mAP@0.5 values reported in [37]). From the table, we see that our Similarity R-C3D model outperforms previous methods by a large margin in both the one-shot and the five-shot setting. The one-shot mAP@0.5 improves over the sliding-window based approach of [37] by 11.2%, and the five-shot mAP@0.5 improves by 14.1% (a one hundred percent relative improvement).

Another important finding is that, for the previous methods, the mAP@0.5 result differences between one-shot and five-shot settings are very small, showing almost no improvement from increasing the number of few-shot examples. However, our method shows a significant improvement from increasing the few-shot examples for each class from one to five, which comes from the data-driven benefit of end-to-end learning in our model. Figure 3 shows some representative qualitative results from three videos in this dataset. We see that, compared to one-shot, the five-shot model detects more accurate activity boundaries, and successfully detects the “Long Jump” activity in the last example while the one-shot model fails and detects the wrong activity class “Pole Vault”.

mAP@0.5
CDC@1 [24] 6.4
CDC@5 [24] 6.5
sliding window@1 [37] 13.6
sliding window@5 [37] 14.0
Similarity R-C3D@1 24.8
Similarity R-C3D@5 28.1
Table 1: Few-shot temporal activity detection results on THUMOS'14 (in percentage). mAP at tIoU threshold 0.5 is reported. @1 means "one-shot" and @5 means "five-shot".
Figure 3: Qualitative visualization of one-shot/five-shot temporal activity detection results from our model, Similarity R-C3D, on the THUMOS'14 dataset. Ground truth (GT) clips corresponding to the GT class are marked with black arrows. Correct predictions (predicted clips having tIoU of more than 0.5 with ground truth) are marked in green, and incorrect predictions are marked in red. The start and end times are shown in seconds. Best viewed in color.
Figure 4: mAP@0.5 and average mAP (in percentage) as a function of the proposal score threshold for the Similarity R-C3D model in the one-shot setting on the ActivityNet1.2 dataset.

4.2 Experiments on ActivityNet1.3

Experimental Setup: The ActivityNet [8] dataset consists of untrimmed videos and is released in three versions. The latest release (1.3) contains 200 different types of activities and around 20K videos. Most videos contain activity instances of a single class covering a great deal of the video. The 200 activity classes are randomly split into 180 classes (ActivityNet1.3-train-180) for training and 20 classes (ActivityNet1.3-test-20) for testing. In the ActivityNet1.3 dataset, the trimmed videos in the few-shot branch come from the ground truth annotations inside each untrimmed video.

For the two-stage proposal network, the length of the input buffer is set to 768 frames sampled at 6 fps. We also pre-train the two-stage proposal network using the proposal annotations from ActivityNet1.3-train-180, starting from the Sports-1M C3D weights. We initialize the Similarity R-C3D network with the pre-trained two-stage proposal network weights. As a speed-efficiency trade-off, we freeze the first two convolutional layers in our model during training, and the learning rate is fixed. The proposal score threshold is set to 0.5 and the similarity score threshold to 0.02.

Results: In Table 2 we show the performance of Similarity R-C3D in terms of mAP@0.5 and average mAP. To our knowledge, we are the first to report few-shot temporal activity detection results on this dataset. Five-shot significantly outperforms one-shot in both metrics, showing the benefit of our end-to-end model when more few-shot example data is available for training. Figure 5(b) shows some representative qualitative results from this dataset.

mAP@0.5 average mAP
Similarity R-C3D@1 43.4 29.1
Similarity R-C3D@5 51.6 34.6
Table 2: Few-shot temporal activity detection results on ActivityNet1.3 (in percentage). mAP at tIoU threshold 0.5 and average mAP over 10 thresholds between 0.5 and 0.95 are reported. @1 means "one-shot" and @5 means "five-shot".

4.3 Experiments on ActivityNet1.2

Experimental Setup: We also evaluate our Similarity R-C3D model on the ActivityNet1.2 dataset, as previous few-shot results are reported on this dataset [37]. The ActivityNet1.2 dataset contains around 10k videos and 100 activity classes, which are a subset of the 200 activity classes in ActivityNet1.3. The ActivityNet1.2 and ActivityNet1.3 datasets share the same video data but have different sets of annotations. The 100 activity classes are split into 80 classes (ActivityNet1.2-train-80) for training and 20 classes (ActivityNet1.2-test-20) for testing (we keep ActivityNet1.2-test-20 and ActivityNet1.3-test-20 the same). In the ActivityNet1.2 dataset, the trimmed videos in the few-shot branch also come from the ground truth annotations inside each untrimmed video. Other settings (the buffer length, proposal network weights, pretraining, learning rate, etc.) are the same as for the ActivityNet1.3 dataset (see Sec. 4.2). The proposal score threshold is set to 0.3 and the similarity score threshold to 0.02.

Results: In Table 3 we show the performance of Similarity R-C3D and compare with existing published approaches. From the results, we see that the CDC model almost fails on this dataset with average mAP around 2%. The sliding window based model in  [37] gets significantly better results in both mAP@0.5 and average mAP compared to the CDC model. Our Similarity R-C3D in both one-shot (Similarity R-C3D@1) and five-shot (Similarity R-C3D@5) settings further improves the results by a large margin. We also observe a significant improvement from increasing the few-shot input from one to five, while the previous two methods (CDC and sliding window based method) do not exhibit such a trend. Figure 5(a) shows some representative qualitative results from this dataset.

Ablation Study for Proposal Network Pretraining: To investigate the effect of proposal pretraining, we train another set of one-shot and five-shot experiments on the ActivityNet1.2 dataset, starting from the two-stage proposal weights pre-trained on the ActivityNet1.3 dataset (namely, ActivityNet1.3-train-180). The results are shown in the last two rows of Table 3. Comparing the corresponding pairs of one-shot/five-shot experiments, we can see that the choice of two-stage proposal weights has only a minor effect on the detection results. Our Similarity R-C3D model mainly learns from the few-shot data during end-to-end training.

Ablation Study for Proposal Score Threshold: Recall that our final few-shot detection results are determined by thresholding both the proposal score and the similarity score. To show the effect of the proposal score threshold, we fix the similarity score threshold at 0.02, and plot mAP@0.5 and average mAP against the proposal score threshold for our one-shot setting on the ActivityNet1.2 dataset in Figure 4. In this setting, the optimal proposal score threshold is 0.3. This also illustrates the trade-off between the quantity and the quality of proposals.

mAP@0.5 average mAP
CDC@1 [24] 8.2 2.4
CDC@5 [24] 8.6 2.5
sliding window@1 [37] 22.3 9.8
sliding window@5 [37] 23.1 10.0
Similarity R-C3D@1 39.4 23.2
Similarity R-C3D@5 45.8 28.2
Similarity R-C3D@1 39.2 24.5
Similarity R-C3D@5 45.8 27.9
Table 3: Few-shot temporal activity detection results on ActivityNet1.2 (in percentage). mAP at tIoU threshold 0.5 and average mAP over 10 thresholds between 0.5 and 0.95 are reported. @1 means "one-shot" and @5 means "five-shot". The last two rows use the ActivityNet1.3 pretrained two-stage proposal weights as initialization.
(a) ActivityNet1.2 detection examples
(b) ActivityNet1.3 detection examples
Figure 5: Qualitative visualization of one-shot/five-shot temporal activity detection results from our model, Similarity R-C3D, on the ActivityNet1.2 dataset (a) and the ActivityNet1.3 dataset (b). Ground truth (GT) clips corresponding to the GT class are marked with black arrows. Correct predictions (predicted clips having tIoU of more than 0.5 with ground truth) are marked in green, and incorrect predictions are marked in red. The start and end times are shown in seconds. Best viewed in color.

5 Conclusion

In this paper, we propose a few-shot temporal activity detection model, called Similarity R-C3D, which is composed of a two-stage proposal subnet and a similarity-based few-shot classification branch. Our model is end-to-end trainable and can benefit from observing additional few-shot examples. The approach is simple, yet outperforms previous models by a large margin. One possible future direction is to account for the domain difference between the few-shot examples and the untrimmed input video: in the THUMOS14 setting, the few-shot examples come from the UCF101 dataset, which is a different visual domain.

References

  • [1] M. Andrychowicz, M. Denil, S. Gomez, M. W. Hoffman, D. Pfau, T. Schaul, B. Shillingford, and N. De Freitas. Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems, pages 3981–3989, 2016.
  • [2] L. Bertinetto, J. F. Henriques, J. Valmadre, P. Torr, and A. Vedaldi. Learning feed-forward one-shot learners. In Advances in Neural Information Processing Systems, pages 523–531, 2016.
  • [3] P. Bojanowski, R. Lajugie, F. Bach, I. Laptev, J. Ponce, C. Schmid, and J. Sivic. Weakly supervised action labeling in videos under ordering constraints. In European Conference on Computer Vision, pages 628–643. Springer, 2014.
  • [4] V. Escorcia, F. C. Heilbron, J. C. Niebles, and B. Ghanem. Daps: Deep action proposals for action understanding. In European Conference on Computer Vision, pages 768–784. Springer, 2016.
  • [5] C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. arXiv preprint arXiv:1703.03400, 2017.
  • [6] R. Girshick. Fast R-CNN. In IEEE International Conference on Computer Vision, pages 1440–1448, 2015.
  • [7] G. Gkioxari and J. Malik. Finding action tubes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 759–768, 2015.
  • [8] F. C. Heilbron, V. Escorcia, B. Ghanem, and J. C. Niebles. ActivityNet: A Large-Scale Video Benchmark for Human Activity Understanding. In IEEE Conference on Computer Vision and Pattern Recognition, pages 961–970, 2015.
  • [9] D.-A. Huang, L. Fei-Fei, and J. C. Niebles. Connectionist temporal modeling for weakly supervised action labeling. In European Conference on Computer Vision, pages 137–153. Springer, 2016.
  • [10] Y.-G. Jiang, J. Liu, A. R. Zamir, G. Toderici, I. Laptev, M. Shah, and R. Sukthankar. THUMOS Challenge: Action Recognition with a Large Number of Classes. http://crcv.ucf.edu/THUMOS14/, 2014.
  • [11] S. Karaman, L. Seidenari, and A. Del Bimbo. Fast saliency based pooling of fisher encoded dense trajectories. In ECCV THUMOS Workshop, volume 1, page 7, 2014.
  • [12] G. Koch, R. Zemel, and R. Salakhutdinov. Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop, volume 2, 2015.
  • [13] Y. Long, L. Liu, L. Shao, F. Shen, G. Ding, and J. Han. From zero-shot learning to conventional supervised classification: Unseen visual data synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [14] S. Ma, L. Sigal, and S. Sclaroff. Learning activity progression in lstms for activity detection and early detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1942–1950, 2016.
  • [15] P. Mettes and C. G. Snoek. Spatial-aware object embeddings for zero-shot localization and classification of actions. In Proceedings of the International Conference on Computer Vision, 2017.
  • [16] A. Mishra, V. K. Verma, M. S. K. Reddy, S. Arulkumar, P. Rai, and A. Mittal. A generative approach to zero-shot and few-shot action recognition. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 372–380. IEEE, 2018.
  • [17] N. Mishra, M. Rohaninejad, X. Chen, and P. Abbeel. Meta-learning with temporal convolutions. arXiv preprint arXiv:1707.03141, 2017.
  • [18] A. Montes, A. Salvador, S. Pascual, and X. Giro-i Nieto. Temporal activity detection in untrimmed videos with recurrent neural networks. arXiv preprint arXiv:1608.08128, 2016.
  • [19] D. Oneata, J. Verbeek, and C. Schmid. The LEAR submission at THUMOS 2014. 2014.
  • [20] S. Ravi and H. Larochelle. Optimization as a model for few-shot learning. 2016.
  • [21] A. Richard and J. Gall. Temporal action detection using a statistical language model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3131–3140, 2016.
  • [22] S. Saha, G. Singh, M. Sapienza, P. H. Torr, and F. Cuzzolin. Deep learning for detecting multiple space-time action tubes in videos. arXiv preprint arXiv:1608.01529, 2016.
  • [23] A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap. One-shot learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065, 2016.
  • [24] Z. Shou, J. Chan, A. Zareian, K. Miyazawa, and S.-F. Chang. CDC: Convolutional-de-convolutional networks for precise temporal action localization in untrimmed videos. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1417–1426, 2017.
  • [25] Z. Shou, H. Gao, L. Zhang, K. Miyazawa, and S.-F. Chang. Autoloc: Weakly-supervised temporal action localization. arXiv preprint arXiv:1807.08333, 2018.
  • [26] Z. Shou, D. Wang, and S.-F. Chang. Temporal action localization in untrimmed videos via multi-stage cnns. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1049–1058, 2016.
  • [27] P. Shyam, S. Gupta, and A. Dukkipati. Attentive recurrent comparators. arXiv preprint arXiv:1703.00767, 2017.
  • [28] B. Singh, T. K. Marks, M. Jones, O. Tuzel, and M. Shao. A multi-stream bi-directional recurrent neural network for fine-grained action detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1961–1970, 2016.
  • [29] J. Snell, K. Swersky, and R. Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pages 4077–4087, 2017.
  • [30] K. Soomro, A. R. Zamir, and M. Shah. Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.
  • [31] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3d convolutional networks. In Proceedings of the IEEE international conference on computer vision, pages 4489–4497, 2015.
  • [32] O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra, et al. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pages 3630–3638, 2016.
  • [33] L. Wang, Y. Qiao, and X. Tang. Action recognition and detection by combining motion and appearance features. THUMOS14 Action Recognition Challenge, 1(2):2, 2014.
  • [34] L. Wang, Y. Xiong, D. Lin, and L. Van Gool. Untrimmednets for weakly supervised action recognition and detection. In IEEE Conf. on Computer Vision and Pattern Recognition, volume 2, 2017.
  • [35] P. Weinzaepfel, Z. Harchaoui, and C. Schmid. Learning to track for spatio-temporal action localization. In Proceedings of the IEEE international conference on computer vision, pages 3164–3172, 2015.
  • [36] H. Xu, A. Das, and K. Saenko. R-C3D: Region convolutional 3D network for temporal activity detection. In IEEE International Conference on Computer Vision (ICCV), pages 5794–5803, 2017.
  • [37] H. Yang, X. He, and F. Porikli. One-shot action localization by learning sequence matching network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1450–1459, 2018.
  • [38] S. Yeung, O. Russakovsky, G. Mori, and L. Fei-Fei. End-to-end learning of action detection from frame glimpses in videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2678–2687, 2016.
  • [39] G. Yu and J. Yuan. Fast action proposals for human action detection and search. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1302–1311, 2015.
  • [40] R. Zellers and Y. Choi. Zero-shot activity recognition with verb attribute induction. arXiv preprint arXiv:1707.09468, 2017.
  • [41] C. Zhang and Y. Peng. Visual data synthesis via gan for zero-shot video classification. arXiv preprint arXiv:1804.10073, 2018.
  • [42] Y. Zhao, Y. Xiong, L. Wang, Z. Wu, X. Tang, and D. Lin. Temporal action detection with structured segment networks. In IEEE International Conference on Computer Vision, 2017.
  • [43] H. Zhu, R. Vial, and S. Lu. Tornado: A spatio-temporal convolutional regression network for video action proposal. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5813–5821, 2017.