Implementation of Cross-category Video Highlight Detection via Set-based Learning (ICCV 2021).
Autonomous highlight detection is crucial for enhancing the efficiency of video browsing on social media platforms. To attain this goal in a data-driven way, one may often face the situation where highlight annotations are not available on the target video category used in practice, while supervision on another video category (named the source video category) is achievable. In such a situation, one can derive an effective highlight detector on the target video category by transferring the highlight knowledge acquired from the source category to the target one. We call this problem cross-category video highlight detection, which has been rarely studied in previous works. To tackle this practical problem, we propose a Dual-Learner-based Video Highlight Detection (DL-VHD) framework. Under this framework, we first design a Set-based Learning module (SL-module) to improve conventional pair-based learning by assessing the highlight extent of a video segment under a broader context. Based on this learning manner, we introduce two different learners to acquire the basic distinction of target-category videos and the characteristics of highlight moments on the source video category, respectively. These two types of highlight knowledge are further consolidated via knowledge distillation. Extensive experiments on three benchmark datasets demonstrate the superiority of the proposed SL-module, and the DL-VHD method outperforms five typical Unsupervised Domain Adaptation (UDA) algorithms on various cross-category highlight detection tasks. Our code is available at https://github.com/ChrisAllenMing/Cross_Category_Video_Highlight .
Nowadays, people show growing interest in sharing videos of their daily life on social media platforms like YouTube and Instagram.
*Corresponding author: Bingbing Ni.
Among all these videos, the well-edited ones that summarize the highlights of specific events are apparently more attractive to the audience. However, in most cases, the original video of a real-world event contains much content unrelated to its gist, and it is an onerous and time-consuming task to pick out the highlight parts of the video manually. Therefore, in order to enhance the efficiency of video content refinement, it is desirable to develop a machine learning model for autonomous video highlight detection.
To endow a model with the capability of identifying the highlight segments within a video, existing works have explored various forms of supervision, including explicit highlight annotations [10, 49, 15], the frequent occurrence of specific video segments [23, 48, 21], the duration of a video, etc. These approaches generally focused on training a highlight detector for a specific video category (e.g. surfing, skiing, parkour), while the transferability of a highlight detection model across different video categories has been less studied in previous works.
As a matter of fact, in practical applications, one can face the situation where the supervisory signal is lacking for the target video category intended to be used in practice, while supervision on another video category is available, as shown in Fig. 1. In such a situation, we consider the problem of Cross-category Video Highlight Detection. The setting of this problem is analogous to that of Unsupervised Domain Adaptation (UDA), in which one seeks to adapt the knowledge learned from the labeled source domain (the source video category with supervision) to the unlabeled target domain (the unsupervised target video category).
In addition, for optimizing the highlight detector, most existing methods [10, 49, 15, 23, 41, 14] followed the philosophy of pair-based learning, i.e. comparing a positive sample (e.g. a highlight video segment or a segment bag containing highlights) with a negative one, such that, after training, the former ranks higher than the latter. Nevertheless, this learning manner may not fully exploit the contextual information spanning different video segments. For example, in a soccer match, the moment of a player dribbling the ball is more attractive than that of the players entering the pitch, and both are less exciting than the moment of a goal. Such relationships can hardly be captured by a single segment pair, which makes the highlight prediction of a pair-learning-based model potentially imprecise over the span of a whole video.
Motivated by the facts above, in this work we propose a Dual-Learner-based Video Highlight Detection (DL-VHD) framework to address the Cross-category Video Highlight Detection problem. Under this framework, we first devise a Set-based Learning module (SL-module) to improve the conventional pair-based learning manner for highlight detection. In a nutshell, this module learns to regress the highlight score distribution over a set of segments from the same video, in which a Transformer encoder is employed to model the interrelationship among the video segments. Based on this learning mechanism, we further introduce two different learners to capture two types of knowledge about highlight moments. Specifically, the coarse-grained learner gains the basic concepts of what distinguishes the videos of the target category from other ones, and the fine-grained learner acquires precise highlight notions on source videos. These two kinds of knowledge are further integrated by distilling each of them into the other learner, and the integrated knowledge forms more complete concepts about the highlight moments of the target video category. In practice, the SL-module can be applied individually to derive an effective highlight detector when segment-level annotation is available on the target video category, while, when such annotation is unobtainable, we can resort to the DL-VHD method for highlight knowledge transfer.
Our contributions can be summarized as follows:
To the best of our knowledge, this work is the first attempt at cross-category video highlight detection, in which we utilize a dual-learner-based scheme to transfer the concepts about highlight moments across different video categories.
We propose a novel set-based learning mechanism which is able to identify whether a video segment is a highlight or not under a broader context.
Under the category-specific setting, we verify the superior performance of the SL-module over previous methods. For cross-category highlight detection, the DL-VHD model substantially surpasses existing UDA algorithms and performs comparably with the supervised model trained on target video category.
Video Highlight Detection. This task aims at assigning each video segment a score of its worthiness as a highlight. In recent years, the videos studied for this task have extended from sports videos [24, 42, 33] to general videos from social media or first-person camera shooting. According to the manner of supervision, existing works on this topic can be generally divided into two classes. For the supervised methods [10, 49, 15, 32], the highlight annotations of all segments in a video are given. For the weakly-supervised approaches [23, 48, 21, 41, 14], various weak supervisory signals have been exploited to define highlights, including the frequent occurrence of specific segments within a video category [23, 48, 21], the duration of a video and the information from segment bags. For model optimization, most of these methods [10, 49, 15, 32, 41, 14] followed the philosophy of pair-based learning, i.e. comparing a positive sample with a negative one.
Improvements over existing methods. In this work, we newly explore the cross-category video highlight detection problem by learning two types of knowledge about highlight moments and integrating them on the target video category. In addition, a set-based learning mechanism is proposed to improve upon pair-based learning by performing highlight prediction on a set of video segments, such that the highlight extent of each segment can be judged more precisely with rich contextual information.
Unsupervised Domain Adaptation (UDA). UDA focuses on generalizing a model learned from a labeled source domain to another, unlabeled target domain. To pursue this goal, a commonly used strategy is to minimize a specific metric of domain shift [2, 20], e.g. Maximum Mean Discrepancy (MMD) [8, 36], Multi-Kernel MMD, Weighted MMD, the Wasserstein Distance [28, 16] and differences of feature covariance or feature norm. On another line of research, adversarial learning is employed to encourage domain invariance at either the pixel level [3, 27, 13] or the feature level [6, 35, 18, 45]. In order to introduce discriminative information on the target domain, recent works [40, 5, 25, 43, 38, 44] utilized the pseudo labels of target samples for category-level domain alignment. This work explores cross-category video highlight detection, a problem similar to UDA, in which one intends to transfer the highlight knowledge acquired from the source video category to the target one.
In the cross-category video highlight detection problem, a set of videos containing the highlight moments of the source video category is given, and each video is divided into segments of similar duration, each annotated with a ground-truth highlight label. In addition, we have another set of videos including the highlight moments of the target video category, while segment-level highlight annotations are not available on these videos. Under such conditions, the main objective is to derive an effective highlight detector on the target video category by fully exploiting the labeled source videos and the unlabeled target ones.
Cross-category Video Highlight Detection. In real-world applications, segment-level highlight annotations may not be available for the target video category that the model is applied to, while one can obtain supervision on another video category (named the source video category). In such a situation, a natural question is how to transfer the knowledge about highlight moments from the source video category to the target one, i.e. how to perform cross-category video highlight detection. A straightforward answer is to leverage existing Unsupervised Domain Adaptation (UDA) techniques for feature distribution alignment between the two distinct video categories. However, such distribution alignment is hard, if not ill-posed, for the highlight detection problem, since the highlight segments of the target category may be a nuisance for the source one, and vice versa, as experimentally illustrated in Sec. 4.3.
To acquire exact highlight concepts for the target video category using the data from both categories, we propose a Dual-Learner-based Video Highlight Detection (DL-VHD) framework. Under this framework, the model learns two kinds of knowledge about highlight moments, i.e. the distinction of target-category videos from other ones and the characteristics of highlights on the source category. These two types of knowledge are further merged to form more complete highlight concepts for the target video category.
Set-based Learning. Previous works [10, 49, 32, 41, 14] commonly trained the highlight detection model by contrasting a highlight segment with a non-highlight one, which seeks to model the conditional highlight relation between only two segments. However, such pair-based learning may fail to discover the more complex highlight relations among more than two segments. For example, the excitement level of a soccer match differs from moment to moment, and the relative highlight extent of these moments cannot be sufficiently captured by pairs of video segments.
Motivated by this limitation, we propose a Set-based Learning module (SL-module). Its core idea is to train the model to predict the highlight score distribution over a set of video segments, so that the prediction for a single segment depends on all the other segments in the set, i.e. each segment's highlight score is modeled conditioned on the entire set. By including such contextual information spanning different video segments, the model is expected to assign a more accurate highlight score to each segment.
Cross-category Video Highlight Detection via Set-based Learning. In order to bridge the highlight patterns of two distinct video categories, it is essential to explore the interrelationship among the video segments within the same category and also across different categories. Such complex relational patterns can be better captured under the rich context provided by segment sets. Based on such motivation, in DL-VHD, we employ SL-module as the basic learning module to acquire more precise highlight knowledge.
The SL-module models the interdependency among the video segments in a set and predicts the highlight score of each segment under such set-determined context, as shown in Fig. 2(a). Next, we introduce the detailed learning and inference schemes of this module.
Learning scheme. In each learning step, a set of annotated segments randomly sampled from the same video is given, and a pre-trained C3D model extracts the feature embedding of each segment (the C3D model is fixed in the learning phase). On these segment embeddings, a Transformer encoder models the interrelationship among different segments and outputs contextualized segment embeddings. The Transformer encoder used in our method basically follows the original design, which stacks layers of multi-head self-attention and feed-forward networks. In contrast, we remove the positional encoding module to preserve the permutation invariance of set learning, and, as suggested in prior work, Layer Normalization (LN) is applied before each self-attention and feed-forward module. The architecture of the Transformer encoder is shown in Fig. 2(b). We refer readers to the original literature for more details.
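As an illustrative sketch (not the authors' released code), the pre-LN, position-encoding-free encoder described above can be written with standard PyTorch modules; the feed-forward width and dropout settings are assumptions not specified in the text:

```python
import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    """Pre-LN Transformer encoder over a set of segment embeddings.

    No positional encoding is added, so the module is permutation-
    equivariant, matching the set-based formulation. The default
    layer/head counts follow the paper (5 layers, 8 heads); the
    feed-forward width (4x the model dim) is an assumption.
    """
    def __init__(self, dim=4096, n_layers=5, n_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=n_heads, dim_feedforward=4 * dim,
            batch_first=True, norm_first=True)  # norm_first -> pre-LN
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, x):        # x: (batch, set_size, dim)
        return self.encoder(x)   # contextualized segment embeddings
```

Because no positional encoding is added, permuting the segments in the input set permutes the outputs in exactly the same way, which is the permutation invariance the set formulation requires.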
Upon the contextualized segment embeddings, a scoring model predicts the highlight score of each video segment. With the highlight prediction on a segment set obtained, we define the learning objective. During the learning phase, the basic desideratum is to match the highlight score distribution predicted by the model on the set with the ground-truth distribution. To attain this goal, we define the learning objective as
$$\mathcal{L}_{\mathrm{SL}} = D_{\mathrm{KL}}\big(\sigma(\mathbf{y}) \,\|\, \sigma(\hat{\mathbf{y}})\big),$$
where $\sigma$ denotes the softmax function producing the ground-truth and predicted highlight score distributions from the score vectors $\mathbf{y}$ and $\hat{\mathbf{y}}$, and $D_{\mathrm{KL}}$ stands for the Kullback–Leibler divergence.
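The objective above can be sketched as a KL divergence between softmax-normalized score vectors over the set; the direction of the divergence and the batch reduction are assumptions:

```python
import torch
import torch.nn.functional as F

def set_learning_loss(pred_scores, gt_scores):
    """KL divergence between the ground-truth and predicted highlight
    score distributions over a segment set (a sketch of the set-based
    objective; the exact KL direction is an assumption).

    Both inputs have shape (batch, set_size).
    """
    # softmax turns per-segment scores into a distribution over the set
    p_gt = F.softmax(gt_scores, dim=-1)
    log_p_pred = F.log_softmax(pred_scores, dim=-1)
    # F.kl_div expects log-probabilities as input and probabilities as target
    return F.kl_div(log_p_pred, p_gt, reduction="batchmean")
```

The loss vanishes when the two distributions coincide and grows as the predicted ranking of segments within the set deviates from the ground truth.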
Inference scheme. To infer the highlight score of a segment from a test video, we first construct a segment set containing that segment and other context segments adjacent to it in the video. Half of these context segments lie right before it in the video, and the other half right after. Segment duplication is conducted if the segments around it cannot fill the set. Here, we do not use a set of random segments as in the learning phase, for the sake of suppressing the variance of prediction. We then infer the highlight scores of all segments in the set by feeding them into the C3D feature extractor, the Transformer encoder and the scoring model successively. Finally, we pick out the score corresponding to the evaluated segment as its highlight score.
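A minimal sketch of this context-set construction (function and variable names are illustrative; with an even set size, splitting the context one-more-after-than-before is an assumption):

```python
def build_inference_set(segments, i, set_size=20):
    """Build the evaluation set for segment i: context segments come
    from just before and just after i in the video, and boundary
    segments are duplicated when the video cannot fill the set.

    Returns the segment set and the index of segment i inside it.
    """
    half = (set_size - 1) // 2
    # clamp indices to the video range, duplicating boundary segments
    before = [segments[max(0, i - k)] for k in range(half, 0, -1)]
    after = [segments[min(len(segments) - 1, i + k)]
             for k in range(1, set_size - 1 - half + 1)]
    return before + [segments[i]] + after, len(before)
```

At inference time, the score picked for segment i is the one at the returned index after running the whole set through the model.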
On the basis of the SL-module, we now explore the cross-category video highlight detection problem. Its main objective is to derive an effective highlight detector on the target video category by fully exploiting the labeled source videos and the unlabeled target ones. To pursue this goal, we seek to capture the highlight concepts of the target video category from two aspects. On one hand, there exist some obvious features that distinguish target-category videos from those of other topics, e.g. the surfboard in surfing videos or the ski pole in skiing videos. Perceiving such features endows a model with the basic capability of picking out segments of the target category from a video mixing different contents. On the other hand, there are some common characteristics of highlight moments shared between distinct video categories. For instance, moments with a standing person moving over some surface of the scene can be highlights of both surfing and skiing videos. Such generic knowledge can be employed to identify highlight moments for the target video category. However, neither of these two types of concepts alone sufficiently defines the highlights of the target category, which calls for a scheme that integrates the two kinds of knowledge.
Following the above intuitions, we design a dual-learner-based framework, in which the two kinds of highlight knowledge are learned by a coarse-grained learner and a fine-grained learner respectively, and are further integrated by a knowledge distillation scheme. A graphical illustration of this framework is shown in Fig. 3. We state the detailed learning and inference schemes as follows.
Learning scheme. In each learning step, we use a set of labeled segments randomly sampled from a source-category video and a set of unlabeled segments randomly sampled from a target-category video. Based on these two sets, we further construct a mixed set containing segments from both video categories, in which half of the segments are randomly sampled from the source set and the other half from the target set, and each segment carries a binary label indicating whether it is a target-category segment. Using the C3D feature extractor and the Transformer encoder, we derive the contextualized segment embeddings for these three sets.
Upon these segment embeddings, on one hand, we introduce a coarse-grained learner to learn the basic distinction of target video segments from source ones. This is achieved by matching the coarse-grained learner's highlight prediction on the mixed set with the ground-truth (category-membership) score distribution on that set, which defines the coarse-grained highlight prediction loss in the same KL-divergence form as the SL-module objective.
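The mixed-set construction can be sketched as follows, with binary labels 1 for target-category and 0 for source-category segments (function and variable names are illustrative):

```python
import random

def build_mixed_set(src_segments, tgt_segments, set_size=20):
    """Mixed set for the coarse-grained learner: half of the segments
    are sampled from the source-category video, half from the
    target-category video, each tagged with a category-membership
    label (1 = target). A sketch; the shuffling is an assumption.
    """
    half = set_size // 2
    src = random.sample(src_segments, half)
    tgt = random.sample(tgt_segments, set_size - half)
    mixed = [(s, 0) for s in src] + [(t, 1) for t in tgt]
    random.shuffle(mixed)
    return mixed
```

The coarse-grained loss then compares the learner's predicted score distribution over this set against the softmax of these 0/1 labels.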
On the other hand, a fine-grained learner is introduced to acquire the knowledge about highlight moments on the source video category. This is attained through supervised learning on the source set, in which the fine-grained learner's prediction is aligned with the ground-truth highlight score distribution, again via the KL divergence between softmax-normalized distributions.
Now that two types of knowledge about highlight moments are acquired by two different learners, we aim to integrate them on the target video category. Inspired by the idea of knowledge distillation, we would like to distill the knowledge of each learner into the other without impairing its original knowledge. Specifically, the coarse-grained and fine-grained learners are both utilized to predict the highlight scores of the segments in the target set. We then generate a prediction reflecting both kinds of highlight knowledge by averaging the two learners' predictions. To perform knowledge distillation between the two learners, we constrain the individual prediction of either learner to approach the averaged prediction, which defines the distillation loss as the sum of the KL divergences from each learner's predicted distribution to the averaged one.
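A sketch of this mutual-distillation loss; detaching the averaged target is an assumption, since the text does not specify whether gradients flow through it:

```python
import torch
import torch.nn.functional as F

def distillation_loss(scores_coarse, scores_fine):
    """Mutual distillation between the two learners on the target
    set: each learner's set-level score distribution is pulled
    toward the averaged prediction. Inputs: (batch, set_size).
    """
    avg = 0.5 * (scores_coarse + scores_fine)
    p_avg = F.softmax(avg, dim=-1).detach()  # shared teacher signal
    loss = 0.0
    for s in (scores_coarse, scores_fine):
        loss = loss + F.kl_div(F.log_softmax(s, dim=-1), p_avg,
                               reduction="batchmean")
    return loss
```

When the two learners already agree, the average equals either prediction and the loss vanishes; disagreement pulls both toward a consensus.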
The overall learning objective can be summarized as
$$\mathcal{L} = \mathcal{L}_{c} + \mathcal{L}_{f} + \lambda \, \mathcal{L}_{\mathrm{KD}},$$
where $\mathcal{L}_{c}$ and $\mathcal{L}_{f}$ are the coarse- and fine-grained highlight prediction losses, $\mathcal{L}_{\mathrm{KD}}$ is the distillation loss, and $\lambda$ is the trade-off parameter balancing the highlight prediction and knowledge distillation losses.
Inference scheme. During inference, given a segment from a target-category video, we first extend it into a set with other context segments adjacent to it in the same video, following the selection scheme described in the inference part of Sec. 3.2. The highlight score of each segment in the set is inferred by the coarse-grained and the fine-grained learner respectively, and the two predictions are averaged. Finally, we pick out the averaged score corresponding to the evaluated segment as its highlight score.
[Table 1: Results (mAP) per category on YouTube Highlights, comparing weakly-supervised methods (RRAE, LIM-s, MINI-Net) with supervised methods (Video2GIF, LSVM, SL-module w/o Transformer encoder, SL-module); numerical entries lost in extraction.]
[Table 2: Results per category on TVSum, comparing weakly-supervised methods (SG, DSN, VESD, LIM-s, MINI-Net) with supervised methods (KVS, DPP, vsLSTM, SM, SL-module w/o Transformer encoder, SL-module); numerical entries lost in extraction.]
[Table 3: Results on ActivityNet, comparing LIM-s, MINI-Net, LSVM, SL-module w/o Transformer encoder and SL-module; numerical entries lost in extraction.]
In this section, we compare the proposed SL-module and the DL-VHD method with existing video highlight detection approaches under the category-specific and cross-category setting, respectively.
Model details. A C3D model pre-trained on the UCF101 dataset serves as the backbone for feature extraction, and its parameters are fixed during training. The Transformer encoder is constructed with 5 layers of self-attention and feed-forward blocks, and each multi-head self-attention module is equipped with 8 attention heads. The scoring model, the coarse-grained learner and the fine-grained learner are all instantiated as a multi-layer perceptron with architecture FC(4096,1024) - ReLU - FC(1024,256) - ReLU - FC(256,1), where FC is short for fully-connected layer.
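The scoring head described above maps directly to a small PyTorch module:

```python
import torch.nn as nn

def make_scorer(in_dim=4096):
    """The MLP head described in the text:
    FC(4096,1024) -> ReLU -> FC(1024,256) -> ReLU -> FC(256,1).
    One such head is instantiated for the scoring model and for each
    of the coarse- and fine-grained learners.
    """
    return nn.Sequential(
        nn.Linear(in_dim, 1024), nn.ReLU(),
        nn.Linear(1024, 256), nn.ReLU(),
        nn.Linear(256, 1))
```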
Training details. In all experiments, an SGD optimizer (initial learning rate: 0.001, momentum: 0.9, with weight decay) is employed to train the model for 50 epochs, and the learning rate is multiplied by 0.1 every 20 epochs. For each video segment, 16 frames are sampled at a uniform interval. Unless otherwise specified, the set size is set to 20, and the trade-off parameter is fixed to its default value (the sensitivity of both parameters is analyzed in Sec. 5.2). We use an NVIDIA Tesla V100 GPU for training. Our method is implemented with the PyTorch deep learning framework, and the source code will be released for reproducibility.
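The optimizer and schedule can be reproduced with standard PyTorch utilities; the model below is a stand-in, and the weight-decay value (elided in the text) is omitted:

```python
import torch
from torch import nn

model = nn.Linear(16, 1)  # stand-in for the full DL-VHD model
# SGD with initial learning rate 0.001 and momentum 0.9; the
# weight-decay value is elided in the text, so it is left out here
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
# multiply the learning rate by 0.1 every 20 epochs (50 epochs total)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                            step_size=20, gamma=0.1)
```

In the training loop, `scheduler.step()` is called once per epoch after the optimizer updates, so the learning rate becomes 0.0001 at epoch 20 and 0.00001 at epoch 40.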
Performance comparison. Under the category-specific setting, six supervised video highlight detection (or video summarization) methods, i.e. Video2GIF, LSVM, KVS, DPP, vsLSTM and SM, and six weakly-supervised approaches, i.e. RRAE, SG, DSN, VESD, LIM-s and MINI-Net, are introduced for comparison. For the cross-category setting, the SL-module trained on the source/target video category serves as the lower/upper bound of model performance. For the sake of fair comparison, five UDA algorithms, i.e. DAN, DeepCORAL, RevGrad, MCD and AFN, are combined with the SL-module to compare with the proposed DL-VHD method, and the detailed combination schemes are provided in the supplementary material.
When highlight annotations are available on the video category intended to be used, SL-module can be individually applied to train a highlight detector under the category-specific setting. We compare it with existing video highlight detection and video summarization methods in this section.
Datasets. YouTube Highlights is composed of six video categories, i.e. dog, gymnastics, parkour, skating, skiing and surfing, and each category has approximately 100 videos. Segment-level annotations indicate whether a segment is a highlight moment or not. We follow the standard training-test split for model evaluation.
TVSum is a video summarization dataset consisting of 10 categories of video events with 5 videos in each category, and frame-level importance scores are provided. Following previous works [41, 14], we average the frame-level importance scores to obtain segment-level highlight scores. For each video category, we select the two longest videos (about 10 minutes in total) for training and the remaining three for test.
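The frame-to-segment score conversion used for TVSum is a plain average over the frames of each segment; a minimal sketch (the segment length in frames is illustrative):

```python
def segment_scores(frame_scores, seg_len):
    """Average frame-level importance scores into segment-level
    highlight scores. The final segment may be shorter when the
    frame count is not divisible by seg_len.
    """
    return [sum(frame_scores[i:i + seg_len]) / len(frame_scores[i:i + seg_len])
            for i in range(0, len(frame_scores), seg_len)]
```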
ActivityNet is a large-scale database for human activity classification and detection. We employ the data of the temporal action localization track for highlight detection. Specifically, we split the video samples into five categories, i.e. eat&drink, personal care, household, sport and social, according to the first-level action label. The temporal Intersection over Union (tIoU) between a video segment and a ground-truth event of a specific category is used as the segment's highlight label for that video category. In total, we utilize 2520 videos for training and 1260 videos for test, and detailed dataset statistics for all video categories are provided in the supplementary material.
Results on YouTube Highlights. In Tab. 1, we compare our method with existing approaches on six video categories of YouTube Highlights. It can be observed that the proposed SL-module outperforms previous pair-learning-based algorithms, i.e. LIM-s, MINI-Net, Video2GIF and LSVM, on all six categories, and superior average mAP is still obtained when the Transformer encoder is removed from our model. This phenomenon illustrates the superiority of set-based learning over pair-based methods, in which the broader contextual information within a segment set enables more precise highlight prediction of each video segment.
Results on TVSum. Tab. 2 reports the performance of various video highlight detection and video summarization approaches on TVSum. On nine of ten video categories, the proposed SL-module achieves the best performance, and, when removing the Transformer encoder, it still outperforms the state-of-the-art MINI-Net on seven of ten categories. These results verify the effectiveness of set-based learning under the circumstances with limited training data, i.e. only two videos per category for training.
Results on ActivityNet. In Tab. 3, we evaluate the performance of three existing methods and two configurations of the proposed model. Since experiments on the ActivityNet dataset were not commonly included in previous works, we evaluate these methods using their released source code (for MINI-Net and LSVM) or our re-implementation (for LIM-s). The experimental results on this large-scale dataset further verify the superiority of the proposed set-learning method (i.e. obtaining the highest test mAP on all five video categories) when the training data is abundant.
Ablation results (mAP) of DL-VHD on the five YouTube Highlights cross-category tasks (source category: surfing; one column per target category):
|DL-VHD (coarse-grained learner only)||0.574||0.498||0.704||0.635||0.631|
|DL-VHD (fine-grained learner only)||0.485||0.505||0.547||0.568||0.545|
|DL-VHD (w/o knowledge distillation)||0.630||0.529||0.718||0.683||0.658|
|DL-VHD (full model)||0.649||0.540||0.748||0.713||0.686|
Under the cross-category highlight detection setting, we evaluate the effectiveness of DL-VHD and various UDA algorithms on transferring the highlight knowledge from source video category to the target one. In all experiments, the videos of source category possess segment-level annotations, while the videos of target category are unannotated.
Tasks. YouTube Highlights consists of six video categories, and we employ surfing as the source category and evaluate each of the cases that one of the other five categories serves as the target one. Also, we consider a more difficult setting where dog is used as the source category (i.e. adapting from dog activities to human ones), and the results of this setting are in the supplementary material.
ActivityNet contains five categories of human activities, and we utilize sport as the source category and aim at transferring the knowledge of sport highlights to other four video categories. The adaptation towards each target video category is separately examined.
Ablation results (mAP) of DL-VHD on the four ActivityNet cross-category tasks (source category: sport; one column per target category):
|DL-VHD (coarse-grained learner only)||0.689||0.694||0.742||0.741|
|DL-VHD (fine-grained learner only)||0.674||0.667||0.707||0.722|
|DL-VHD (w/o knowledge distillation)||0.713||0.715||0.778||0.754|
|DL-VHD (full model)||0.730||0.728||0.793||0.766|
Cross-category results on YouTube Highlights. Tab. 4 reports the performance of various approaches on five cross-category highlight detection tasks, in which surfing serves as the source category. The source-only (target-oracle) method denotes the SL-module trained on the source (target) video category in a supervised fashion, and an obvious performance gap exists between them. We can observe that the full DL-VHD model surpasses the five existing UDA algorithms on four of the five tasks, and it surprisingly outperforms the target-oracle model on two tasks, i.e. surfing → gymnastics and surfing → skiing. These results illustrate that cross-category video highlight detection cannot simply be deemed a variant of the UDA problem, and that more dedicated techniques (e.g. the proposed dual-learner and knowledge distillation schemes) can better discover the transferable highlight patterns across different video categories.
Cross-category results on ActivityNet. In Tab. 5, we compare the proposed DL-VHD model with five UDA methods on the cross-category highlight detection tasks of ActivityNet, with sport utilized as the source category in all these tasks. The full DL-VHD model achieves higher mAP than the UDA algorithms on all four tasks, and it even outperforms the target-oracle model on the sport → household task. These empirical results verify that the DL-VHD model succeeds in capturing the human-related action patterns of the target video category under the guidance of labeled source videos and unlabeled target videos.
In this section, we conduct more in-depth analysis of our approach to evaluate the effectiveness of major model components both quantitatively and qualitatively.
Effect of Transformer encoder. On all three video highlight detection datasets, we compare the performance of the SL-module with and without the Transformer encoder, as shown in Tabs. 1, 2 and 3. It can be observed that, after applying the Transformer encoder, the proposed set-based learning method obtains a clear performance gain on all tasks, which demonstrates the importance of interrelationship modeling when learning from a set of video segments.
Effect of dual learners and knowledge distillation. In Tabs. 4 and 5, we investigate the main components of DL-VHD through three additional model configurations: (1) coarse-grained only: only the coarse-grained learner is utilized to predict the highlight extent of a segment with respect to the target video category; (2) fine-grained only: only the fine-grained learner is employed for highlight prediction on the target category (this configuration is equivalent to the source-only baseline); (3) w/o distillation: both the coarse- and fine-grained learners are trained, but their knowledge is not integrated by the knowledge distillation loss. When the two learners are applied individually, the coarse-grained learner outperforms the fine-grained one, which, we think, is because the supervision for the coarse-grained learner is more relevant to the highlight patterns of the target video category than the supervision applied to the fine-grained learner. In the full model, the knowledge distillation scheme further improves performance over configuration (3) by integrating the knowledge of the two learners.
Sensitivity of set size. In this experiment, we analyze the sensitivity of the proposed SL-module to the set size. Fig. 5(a) shows the model performance on two highlight detection tasks under different set sizes. It can be observed that our set-based learning method achieves a stable performance gain once the size of each segment set is sufficiently large.
Sensitivity of trade-off parameter. In this part, we discuss the selection of the trade-off parameter that balances the highlight prediction and knowledge distillation objectives. In Fig. 5(b), we plot the performance of DL-VHD on two cross-category highlight detection tasks under various values of this parameter. The highest mAP on the target video category is obtained at an intermediate value, which indicates that an appropriate balance between the two distinct optimization objectives is attained under such a condition.
For the cross-category highlight detection task surfing → skiing, Fig. 4 visualizes the highlight prediction results of three methods, i.e. source-only, AFN and DL-VHD, on a target-category video. For each method, we select the segments whose highlight scores are closest to the corresponding coordinate values (0.2, 0.4, 0.6 or 0.8), and each segment is represented by its first and last frames. The source-only model fails to capture the highlight patterns of skiing, and the AFN algorithm performs better but still overvalues a non-highlight segment with a score near 0.6. By comparison, DL-VHD assigns highlight scores to the various video segments most appropriately. More visualization results on other tasks can be found in the supplementary material.
In this research, we explore the novel problem of cross-category video highlight detection with a Dual-Learner-based Video Highlight Detection (DL-VHD) framework. Under this framework, a Set-based Learning module (SL-module) is proposed to improve the commonly employed pair-based learning, and dual-learner and knowledge distillation schemes are further introduced for highlight knowledge transfer. Comprehensive experiments under both the category-specific and cross-category settings verify the superior performance of the proposed method.
Our future explorations will involve further improving the algorithm for cross-category highlight detection, applying the proposed approach to more sophisticated real-world applications, and studying the generalization capability of video highlight detection models.
This work was supported by the National Science Foundation of China (U20B2072, 61976137). The authors thank the Student Innovation Center of SJTU and the ByteDance AI Lab for providing GPUs, and Jie Zhou, Jiawen Li and Xuanyu Zhu for their valuable suggestions.
K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, and D. Krishnan. Unsupervised pixel-level domain adaptation with generative adversarial networks. In CVPR, 2017.
Y. Ganin and V. Lempitsky. Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning, 2015.
G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. CoRR abs/1503.02531, 2015.
S. J. Pan and Q. Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering 22 (10), pp. 1345–1359, 2010.
K. Saito, K. Watanabe, Y. Ushiku, and T. Harada. Maximum classifier discrepancy for unsupervised domain adaptation. In CVPR, 2018.
In AAAI Conference on Artificial Intelligence.
K. Zhang, W.-L. Chao, F. Sha, and K. Grauman. Video summarization with long short-term memory. In European Conference on Computer Vision, 2016.
Combining SL-module with UDA methods. For the sake of fair comparison, we combine five Unsupervised Domain Adaptation (UDA) algorithms, i.e. DAN, DeepCORAL, RevGrad, MCD and AFN, with the proposed SL-module and compare these combinations with the DL-VHD method. DAN, DeepCORAL and AFN align the feature distributions of the source and target domains by minimizing specific domain discrepancy metrics, and we apply these metric-induced alignment losses to the contextualized segment embeddings (i.e. the outputs of the Transformer encoder) to narrow the distributional gap between source and target category video segments in the latent space. For RevGrad, we append a domain discriminator on top of the contextualized segment embeddings to conduct adversarial domain adaptation. For MCD, we train two scoring models on the source video category in a supervised way, and a minimax game is performed between the Transformer encoder and the two scoring models to derive more reliable highlight predictions on the target video category.
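For the RevGrad combination, the domain discriminator is typically trained through a gradient reversal layer. Below is a minimal PyTorch sketch of such a layer (our illustration of the standard technique, not the authors' released code; the `domain_logits` helper and its discriminator argument are hypothetical):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in the
    backward pass, so the feature encoder learns to fool the domain
    discriminator while the discriminator learns to tell domains apart."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Negate and scale the gradient flowing back to the encoder.
        return grad_output.neg() * ctx.lambd, None

def domain_logits(embeddings, discriminator, lambd=1.0):
    """Hypothetical head: apply a discriminator on top of the
    contextualized segment embeddings through gradient reversal."""
    return discriminator(GradReverse.apply(embeddings, lambd))
```

Optimizing a domain classification loss on these logits then drives the contextualized segment embeddings of source and target categories toward an indistinguishable distribution.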
Dataset statistics of ActivityNet. In our experiments, we employ a subset of ActivityNet for model evaluation. The number of videos in the training and test splits for each video category is shown in Tab. 6. Note that each of these videos contains at least one highlight moment of the corresponding video category.
| Method | dog → gymnastics | dog → parkour | dog → skating | dog → skiing | dog → surfing |
|---|---|---|---|---|---|
| DL-VHD (coarse-grained learner only) | 0.489 | 0.495 | 0.571 | 0.608 | 0.559 |
| DL-VHD (fine-grained learner only) | 0.486 | 0.480 | 0.535 | 0.564 | 0.531 |
| DL-VHD (w/o knowledge distillation) | 0.525 | 0.686 | 0.654 | 0.630 | 0.649 |
| DL-VHD (full model) | 0.556 | 0.734 | 0.692 | 0.653 | 0.676 |
In Tab. 7, we evaluate different methods on five cross-category video highlight detection tasks of YouTube Highlights, in which dog serves as the source video category. This setting is more difficult than the one employing surfing as the source category, since it requires transferring the highlight patterns of dog-centric videos to human activities. The source-only (target-oracle) method denotes the SL-module trained on the source (target) video category in a supervised way. From the table, we can observe that the full model of DL-VHD outperforms the five UDA approaches by a clear margin, and it even surpasses the target-oracle model on the dog → gymnastics task. When the coarse-grained or fine-grained learner is applied individually (i.e. the coarse-grained-only and fine-grained-only configurations), its performance is clearly lower than their combination (i.e. the w/o knowledge distillation configuration and the full model). After integrating the knowledge of the two learners, the full model derives more precise highlight predictions on the target video category than the model without knowledge distillation.
Fig. 6 visualizes the highlight prediction results of three approaches on the target-category video of two more difficult tasks, i.e. dog → surfing and surfing → dog. Compared to source-only and AFN, the proposed DL-VHD method better acquires the concepts of highlight moments in the target video category, e.g. the segments showing an athlete surfing on a wave or the ones containing dogs.