Sentence Specified Dynamic Video Thumbnail Generation

08/12/2019 ∙ by Yitian Yuan, et al. ∙ Tsinghua University

With the tremendous growth of videos over the Internet, video thumbnails, providing video content previews, are becoming increasingly crucial to users' online searching experiences. Conventional video thumbnails are generated once, purely based on the visual characteristics of videos, and then displayed as requested. Hence, such video thumbnails, without considering the users' searching intentions, cannot provide a meaningful snapshot of the video content that users care about. In this paper, we define a distinctively new task, namely sentence specified dynamic video thumbnail generation, where the generated thumbnails not only provide a concise preview of the original video contents but also dynamically relate to the users' searching intentions with semantic correspondences to the users' query sentences. To tackle such a challenging task, we propose a novel graph convolved video thumbnail pointer (GTP). Specifically, GTP leverages a sentence specified video graph convolutional network to model both the sentence-video semantic interaction and the internal video relationships incorporated with the sentence information, based on which a temporal conditioned pointer network is then introduced to sequentially generate the sentence specified video thumbnails. Moreover, we annotate a new dataset based on ActivityNet Captions for the proposed new task, which consists of 10,000+ video-sentence pairs, each accompanied by an annotated sentence specified video thumbnail. We demonstrate that our proposed GTP outperforms several baseline methods on the created dataset, and thus believe that our initial results, along with the release of the new dataset, will inspire further research on sentence specified dynamic video thumbnail generation. Dataset and code are available at https://github.com/yytzsy/GTP.


1. Introduction

Figure 1. Comparison between the traditional static video thumbnail and our proposed sentence specified dynamic video thumbnails in online video searching scenarios.

Tremendous popularity of video websites and social networks has stimulated a massive growth of videos over the Internet. In face of this data deluge, the video thumbnail (Liu et al., 2015; Song et al., 2016), as a commonly used technology to provide viewers a condensed and straightforward preview of the video contents, is becoming increasingly crucial to users' online searching and browsing experiences. Traditionally, one single key frame is extracted from an original video as its thumbnail, which conveys only limited information and cannot provide a vivid preview of the video. Therefore, some popular video websites, like YouTube (https://www.youtube.com/), have started to trim a short segment from a video as the video thumbnail, which provides a snapshot of what the video is about.

From picking one single key frame to trimming one segment, video thumbnails are becoming more expressive. However, some problems have still been overlooked. Currently, most video thumbnails are generated purely based on their visual characteristics (e.g., visual quality, representativeness), without regard to the users' search intentions (Dirfaux, 2000; Kang and Hua, 2005; Luo et al., 2009; Hasebe et al., 2004; Song et al., 2016; Wang et al., 2011). For example, user A and user B in Figure 1(a) search online videos with two different queries, "Some horses are riding on the prairie" and "A shepherd dog works with sheep". It can be observed that one video exists in both returned video pools. However, the pre-determined video thumbnail, even in the form of a video segment, only presents the scene of sheep, which partially relates to the query of user B and is irrelevant to the search intention of user A. We regard such a video thumbnail as "static" to the users' queries. By browsing such video thumbnails, users still cannot decide whether the video contains the meaningful and desired information they need, which greatly degrades the efficiency and experience of online video searching.

Nowadays, a thread of works (Liu et al., 2011, 2015; Vasudevan et al., 2017) takes users' queries into consideration for generating video thumbnails. On the one hand, such methods limit video thumbnails to the form of a single key frame without considering video temporal characteristics, thus making the generated video thumbnails less expressive. On the other hand, users' queries employed in these methods are often confined to single words or short phrases, which cannot accommodate general and flexible users' searching intentions in the form of natural language sentences. Besides the above, another thread of works (Hendricks et al., 2017; Gao et al., 2017; Liu et al., 2018; Chen et al., 2018), which aim to trim a single consecutive video segment from a video according to the given natural language query, can also be applied to the video thumbnail generation task. However, such methods mainly focus on modeling video-sentence semantic correlation while ignoring global video contents and internal video relationships, making the trimmed segment not comprehensive enough as a video thumbnail to express the video contents.

Based on the above considerations, in this paper, we define a distinctively new task, namely sentence specified dynamic video thumbnail generation. First, a video is evenly split into a sequence of short video clips. Afterward, we exploit the semantic relationships between these video clips as well as their matching behaviors with the query sentence, and finally select and concatenate several video clips to compose the final video thumbnail. Different from the traditional video thumbnails which are pre-determined offline, as shown in Figure 1(b), our video thumbnails are dynamically generated concerning different sentence queries.

The sentence specified dynamic video thumbnail generation is a very challenging task. Firstly, the natural sentence query and the video are different kinds of sequential data with rich semantic meanings. Therefore, their matching relationships are quite complicated and need to be modeled in a fine-grained manner, so as to generate video thumbnails that conform to users' search intentions. Secondly, as a video thumbnail can be composed of several video clips, how to model the internal semantic relationships within videos and make the selected video clips semantically coherent with the overall video contents merits further consideration.

To address the aforementioned challenges, we propose a novel graph convolved video thumbnail pointer (GTP), which can generate a semantically meaningful and coherent video thumbnail from an input video and meanwhile make the yielded thumbnail semantically relevant to the natural sentence query. Specifically, GTP first establishes a word-by-clip attention interaction between the sentence query and video sequence, and then performs a fine-grained semantic coupling of these two modalities. Afterward, based on the yielded sentence-video interaction features, a graph convolutional network (GCN) (Kipf and Welling, 2016) is performed to model the sentence specified relationships between different video clips, and further supports the in-video reasoning under the sentence semantics. Finally, a novel temporal conditioned pointer network, which takes the graph convolved features as input, is proposed to sequentially generate the video thumbnail and meanwhile preserve its semantic coherence.

Another major obstacle for sentence specified dynamic video thumbnail generation is the lack of a dataset which contains pairs of videos and sentence descriptions, as well as the associated sentence specified video thumbnails. To this end, we create a new dataset by annotating thumbnails for videos in the ActivityNet Captions (Caba Heilbron et al., 2015; Krishna et al., 2017) dataset. We take one video segment in ActivityNet Captions and its associated caption as our required video and sentence pair, and annotate the video thumbnail for the video segment, making the thumbnail semantically relevant to the caption. In total, our dataset consists of 10,000+ video-sentence pairs collected from about 4,000 videos and their captions in the ActivityNet Captions dataset.

In summary, our contributions are four-fold:


  • We introduce a novel task, namely sentence specified dynamic video thumbnail generation, aiming at dynamically selecting and concatenating video clips from an original video to generate one video thumbnail, which not only provides a concise preview of the original video but also semantically corresponds to the given sentence description.

  • We propose a novel graph convolved video thumbnail pointer (GTP) to tackle the sentence specified dynamic video thumbnail generation problem. A sentence specified video graph convolutional network is designed to exploit the complicated semantic relationships within the sentence and video sequence, based on which a temporal conditioned pointer network is proposed to sequentially generate the video thumbnail and meanwhile preserve its semantic coherence.

  • We annotate video thumbnails for videos in the ActivityNet Captions dataset, and create a new dataset to facilitate the research on sentence specified dynamic video thumbnail generation.

  • We validate the effectiveness of our proposed GTP model on the newly created dataset and achieve superior performance against the competing methods.

2. Related Work

Text Independent Video Thumbnail Generation. Most conventional video thumbnail generation methods (Gao et al., 2009; Mei et al., 2009; Song et al., 2016; Dirfaux, 2000; Kang and Hua, 2005; Hasebe et al., 2004) have focused on learning the characteristics of video thumbnails purely from visual contents, regardless of the user input textual queries. Particularly, Gao et al. (Gao et al., 2009) proposed a thematic video thumbnail selection algorithm, which constructs a visual theme model to capture the visual commonalities shared between video key frames and an extra set of web images searched with keywords from the video. Key frames with the highest similarities to the visual theme are selected as the final video thumbnails. Song et al. (Song et al., 2016) presented an automatic thumbnail selection system which selects attractive thumbnails by analyzing various objective and subjective metrics (e.g., visual quality and aesthetics) of video frames. They performed clustering analysis to determine the relevance between the video thumbnail and the video content, and further found that the selection of a good thumbnail highly relies on objective visual quality metrics, such as frame texture and sharpness.

Recently, Gygli et al. (Gygli et al., 2016) further introduced the problem of automatically generating animated GIFs from videos. GIFs are short looping video segments without sound that can present expressive video contents to users, and can therefore be regarded as a new form of video thumbnails. To solve the GIF generation problem, they proposed a robust deep RankNet, which models video content popularity and quality and generates a ranking list of video segments according to their suitability as GIFs. While the above methods can select visually qualified key frames or segments from videos to represent video contents, they ignore the users' intentions for searching videos, which may be inadequate for satisfying users' online searching and browsing experiences.

Text Specified Video Thumbnail Generation. Recently, some researchers have started to investigate how to generate video thumbnails according to textual user queries (Liu et al., 2011, 2015; Vasudevan et al., 2017). Huang et al. (Liu et al., 2011) proposed a query-specific thumbnail selection algorithm that extracts a frame being both representative of the video contents and specific to the intent of the user's query. The matching relations between query words and frame contents are captured by a shallow dual cross-media relevance model (Liu et al., 2007) adapted from the image annotation problem. Liu et al. (Liu et al., 2015) employed a deep visual-semantic embedding model (VSEM) to measure the relevance between the query and video frames by embedding them into a latent semantic space. Hence, key frames in the video are ranked by their distances to the given query in the learned latent space, and the top-ranked frames are selected as the final video thumbnail. Based on VSEM, Vasudevan et al. (Vasudevan et al., 2017) further proposed a quality-aware relevance estimation model (QARE) which can capture the query-independent frame-quality properties in the visual semantic embedding procedure. The frame-quality properties are characterized separately by one dimension in the common latent semantic space. Thus, their video thumbnail selection is done by using both the query-dependent relevance scores and the query-independent quality scores of video frames.

Most of the above text specified video thumbnail generation methods are largely based on the multi-modal semantic matching framework (Frome et al., 2013; Pan et al., 2014), which is originally designed for image searching or tagging. Due to the lack of datasets customized for video thumbnail generation, these methods can only leverage other image annotation datasets such as Clickture (Hua et al., 2013) to train their models. With such image-based framework and dataset, a lot of important video specific characteristics such as video temporal relationships are not fully explored and leveraged, which inevitably hurts the effectiveness of the video thumbnail generation. Moreover, the user queries are often confined to single words or phrases, which also cannot accommodate the general and flexible user sentence queries.

Temporal Sentence Localization in Video. Given an untrimmed video and a natural language sentence query, temporal sentence localization in video aims to identify the start and end points of one video segment, which semantically matches the given sentence query (Hendricks et al., 2017; Gao et al., 2017; Liu et al., 2018; Chen et al., 2018; Yuan et al., 2019; Chen et al., 2019a, b). To solve this problem, Hendricks et al. first presented a Moment Context Network (MCN) (Hendricks et al., 2017) to match video segments with the sentence query in a multi-modal latent space, where the temporal endpoint features of video segments are also incorporated to enhance the localization performance. Gao et al. proposed a Cross-Modal Temporal Regression Localizer (CTRL) (Gao et al., 2017), which extended object detection methodologies (Girshick et al., 2014; Girshick, 2015) from the spatial dimensions to the temporal dimension. They first sampled several candidate video segments from the video and fused the sentence information with each of these segments. Then, based on the fused multimodal features, the temporal boundaries of these segments were adjusted to the target positions with a localization regression network. Liu et al. proposed an Attentive Cross-Modal Retrieval Network (ACRN) (Liu et al., 2018), which enhanced the CTRL architecture with a memory attention mechanism, in which the visual information mentioned in the query was emphasized and further incorporated into the context of each candidate segment.

Our proposed sentence specified dynamic video thumbnail generation task is different from the temporal sentence localization task. For temporal sentence localization, it is assumed that the given sentence query corresponds to only one single video segment, which consists of one or several consecutive video clips. However, for dynamic video thumbnail generation, the predicted thumbnails can be composed of several temporally inconsecutive but semantically coherent video clips. More importantly, the temporal sentence localization task mainly emphasizes modeling the semantic correlation between video and sentence, while for sentence specified video thumbnail generation, the generated video thumbnail not only should have close relationships with the sentence query, but also needs to provide a straightforward preview of the overall video contents. Therefore, the global video information, such as the semantic relationships between different video clips, needs to be considered for generating the dynamic video thumbnail.

3. Proposed Approach

Given a video $V$ and a sentence $S$, the task of sentence specified dynamic video thumbnail generation aims to select a set of video clips from $V$, which are semantically relevant to the sentence $S$ and will be concatenated together as the final video thumbnail. Each video is first represented as $V = \{v_t\}_{t=1}^{T}$, where $v_t$ denotes the representation of the $t$-th video clip, and $T$ is the total number of clips. Accordingly, each sentence is represented as $S = \{s_n\}_{n=1}^{N}$, where $s_n$ is the embedding of the $n$-th word in the sentence and $N$ denotes the total number of words.

We propose a novel graph convolved video thumbnail pointer (GTP) to tackle the sentence specified dynamic video thumbnail generation problem. As illustrated in Figure 2, GTP, which takes the video and sentence features $V$ and $S$ as inputs, consists of three modules: (1) video and sentence encoders, (2) sentence specified video graph convolutional network, and (3) temporal conditioned pointer network. Please note that the three modules are closely coordinated and can thus be trained in an end-to-end fashion.

3.1. Video and Sentence Encoders

Considering the sequential characteristics of the video and sentence representations, two bi-directional gated recurrent units (BiGRUs) (Cho et al., 2014) are used to encode these two modalities, respectively:

(1) $H^v = \{h_t^v\}_{t=1}^{T} = \mathrm{BiGRU}^v(V), \quad H^s = \{h_n^s\}_{n=1}^{N} = \mathrm{BiGRU}^s(S).$

Due to the behaviors of BiGRU, the output hidden states, namely $h_t^v$ and $h_n^s$, encode and aggregate the flexible contexts of the video and sentence, respectively.

3.2. Sentence Specified Video Graph Convolutional Network

Relying on the encoded video and sentence representations, as shown in the middle part of Figure 2, the sentence video interaction and the video graph convolution modules are stacked together to exploit the fine-grained sentence video semantic relationships and the sentence specified video clip relationships, respectively.

Sentence Video Interaction. To fully exploit the fine-grained interaction between sentence and video, we propose to attentively summarize and incorporate the sentence information regarding each video clip. Specifically, the soft attention mechanism (Xu et al., 2015) is used to generate the attention weights of one video clip with respect to all the words in the sentence:

(2) $e_{t,n} = \mathbf{w}^\top\tanh(\mathbf{W}^v h_t^v + \mathbf{W}^s h_n^s + \mathbf{b}), \quad \alpha_{t,n} = \frac{\exp(e_{t,n})}{\sum_{k=1}^{N}\exp(e_{t,k})},$

where $\mathbf{w}$, $\mathbf{W}^v$, $\mathbf{W}^s$, and $\mathbf{b}$ are the learnable parameters. The clip-specific sentence representation $q_t$ is subsequently computed by aggregating the word features with the yielded attention weights:

(3) $q_t = \sum_{n=1}^{N}\alpha_{t,n}\, h_n^s.$

Finally, we concatenate each video clip feature with its clip-specific sentence feature, and feed the concatenated vector to a fully-connected (FC) layer:

(4) $f_t = \sigma(\mathbf{W}^f[h_t^v; q_t] + \mathbf{b}^f),$

where $\sigma$ is the nonlinear activation function, and $\mathbf{W}^f$ and $\mathbf{b}^f$ are the parameters of the FC layer. The yielded $F = \{f_t\}_{t=1}^{T}$, denoted as the sentence-video interaction features, dynamically encodes the fine-grained word-by-clip matching relationships between the sentence and video.
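As a concrete illustration of the attention step in Eqs. (2)-(3), the sketch below implements per-clip soft attention over word features in pure Python. It is a simplification, not the paper's implementation: a plain dot product stands in for the learned scoring network, and the function name and list-based vectors are hypothetical.

```python
import math

def clip_word_attention(clip_vec, word_vecs):
    """Sketch of per-clip soft attention (Eqs. 2-3).

    Scores each word against one clip (dot product as a stand-in for the
    learned tanh scoring network), softmax-normalizes the scores, and
    returns the attention-weighted sum of word vectors, i.e. the
    clip-specific sentence representation q_t.
    """
    # Unnormalized relevance score of every word for this clip.
    scores = [sum(c * w for c, w in zip(clip_vec, wv)) for wv in word_vecs]
    # Softmax over words (Eq. 2), with max-subtraction for stability.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    alphas = [e / z for e in exps]
    # Weighted aggregation of word features (Eq. 3).
    dim = len(word_vecs[0])
    return [sum(a * wv[d] for a, wv in zip(alphas, word_vecs)) for d in range(dim)]
```

In the full model, the resulting vector would be concatenated with the clip feature and passed through the FC layer of Eq. (4).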

Figure 2. The architecture of our GTP model, which consists of three modules. First, the video and sentence encoders aggregate the contextual evidences from the video clip representations and word embeddings of the sentence query, respectively. Second, the sentence specified video graph convolutional network establishes the fine-grained word-by-clip interaction between the sentence and video, and leverages a GCN to further exploit the sentence specified video clip relationships. Finally, the temporal conditioned pointer network predicts and concatenates the video clips to yield the video thumbnail in a sequential manner.

Video Graph Convolution. In our sentence specified dynamic video thumbnail generation task, the generated video thumbnails should not only have close relationships with the sentence semantics, but also need to provide a content preview of the overall video. Therefore, with the sentence-video interaction features, we further model the sentence specified relationships between different video clips by a graph convolutional network (Kipf and Welling, 2016), so as to take the global video contents into consideration when generating video thumbnails. Specifically, we represent the video as a graph structure, where each node in the graph represents one video clip incorporated with sentence information, and the edge between each pair of nodes represents their sentence specified semantic similarity or affinity $A(i,j) = f_i^\top f_j$. After computing the affinity matrix $A$, we perform normalization on each row of the matrix to ensure that the sum of the edge values connected to one node be 1 (Vaswani et al., 2017; Wang and Gupta, 2018):

(5) $\tilde{A}(i,j) = \frac{\exp(A(i,j)/\lambda)}{\sum_{k=1}^{T}\exp(A(i,k)/\lambda)},$

where $\lambda$ is the scaling factor. $\tilde{A}$ is regarded as the adjacency matrix representing the constructed sentence specified video clip graph.

Based on the adjacency matrix $\tilde{A}$, the graph convolution operation is performed, which computes the response of a node based on its neighbors defined by the above sentence specified graph relationships:

(6) $G = \sigma\big((\tilde{A} + I)\,F\,\mathbf{W}^g\big),$

where $I$ is the identity matrix to emphasize the self-interaction of each node, $F = \{f_t\}_{t=1}^{T}$ is the representations of all the graph nodes, and $\mathbf{W}^g$ is the learnable weight matrix for performing the convolution operation. The output $G$ is of the same dimension as the input $F$. As such, the graph convolution operation can be stacked into multiple layers. After each layer of graph convolution, Layer Normalization (Ba et al., 2016) and a nonlinear activation are performed before the output is forwarded to the next layer. Thus, the graph convolution process can be regarded as performing information passing inside our built graph, or as linking the relevant video clips under the sentence semantics.

In our video graph convolution, the input of the first layer of convolution is the sentence-video interaction features $F$, and the output of the last layer of convolution is defined as the graph convolved video features $G$.
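The row-normalized affinity matrix and graph convolution of Eqs. (5)-(6) can be sketched as follows. This is a minimal illustration under simplifying assumptions: dot-product affinities, the learnable weight matrix and nonlinearity omitted, and hypothetical function names.

```python
import math

def row_softmax_adjacency(feats, lam=1.0):
    """Eq. (5): dot-product affinities, softmax-normalized per row so that
    each node's outgoing edge weights sum to 1. `lam` is the scaling factor."""
    T = len(feats)
    A = [[sum(a * b for a, b in zip(feats[i], feats[j])) / lam for j in range(T)]
         for i in range(T)]
    adj = []
    for row in A:
        m = max(row)
        exps = [math.exp(v - m) for v in row]
        z = sum(exps)
        adj.append([e / z for e in exps])
    return adj

def graph_convolve(feats, adj):
    """One graph-convolution step in the spirit of Eq. (6): each node's new
    feature is a neighbor-weighted sum plus a self loop, i.e. (A + I) F.
    The learnable weight matrix and nonlinearity are omitted for brevity."""
    T, D = len(feats), len(feats[0])
    out = []
    for i in range(T):
        row = []
        for d in range(D):
            neigh = sum(adj[i][j] * feats[j][d] for j in range(T))
            row.append(neigh + feats[i][d])  # "+ I" term (self-interaction)
        out.append(row)
    return out
```

Stacking several such steps, each followed by normalization and an activation, mirrors the multi-layer graph convolution described above.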

3.3. Temporal Conditioned Pointer Network

Based on the graph convolved video features, we design a novel temporal conditioned pointer network shown in Figure 3, which sequentially outputs a list of integers indicating the selected video clips to be concatenated as the desired video thumbnail.

Specifically, another BiGRU is used to aggregate the graph convolved video features as $H^g = \{h_1^g, \ldots, h_T^g, h_{T+1}^g\}$, where $h_{T+1}^g$ is a padding token used to indicate the end of the sequential video clip selection. To determine the video clip selected at each step $k$, a temporal conditioned attention mechanism is proposed to compute an attention vector $\beta^k = \{\beta_t^k\}_{t=1}^{T+1}$, where $\beta_t^k$ indicates the probability of selecting the $t$-th video clip as the $k$-th clip to compose the final video thumbnail:

(7) $u_t^k = \mathbf{w}^\top\tanh(\mathbf{W}^g h_t^g + \mathbf{W}^z z_k + \mathbf{b}), \quad \beta_t^k = \frac{m_t^k\exp(u_t^k)}{\sum_{j=1}^{T+1} m_j^k\exp(u_j^k)},$

where $z_k$ is the hidden state of the temporal conditioned pointer network, which is realized by a GRU:

(8) $z_k = \mathrm{GRU}(x_k, z_{k-1}).$

At each time-step, the input $x_k$ is yielded by attentively summarizing $H^g$ regarding the generated probabilities $\beta^{k-1}$. $z_0$ is initialized by the average pooling of the sentence representation.

Compared with the general pointer network (Vinyals et al., 2015), as denoted in Eq. (7), a temporal conditioned constraint, fulfilled via a binary attention mask $m_t^k$, is applied when generating the corresponding attention weight $\beta_t^k$. In this way, if the position of the previously selected video clip is $t'$, the video clips at or before $t'$ will not be considered and are deactivated by setting $m_t^k$ to 0 (as illustrated in the gray region of Figure 3). In contrast, the general pointer network may choose an already selected clip again, or a video clip before the already selected clips. Such disordered video clips would break the logical relationships in the video and inevitably hurt the performance of the pointer network in the following time-steps. The proposed temporal conditioned constraint naturally solves this problem by introducing the attention mask, which ensures the generated thumbnail to be temporally consistent with the original video, therefore providing users a semantically coherent preview of the video contents. Moreover, it is worth noting that our proposed temporal conditioned pointer network makes the video clip selection quite flexible: even inconsecutive video clips can be grouped together to compose the final video thumbnail. Besides, the lengths of the thumbnails need not be limited to a fixed value.
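The effect of the temporal conditioned constraint at inference time can be sketched with a greedy decoder: at each step, positions at or before the previous pick are masked out, so the selected clip indices are strictly increasing. The scores and function name here are hypothetical stand-ins for the attention logits of Eq. (7).

```python
def select_thumbnail_clips(scores_per_step, num_clips):
    """Sketch of the temporal conditioned selection in the pointer network.

    `scores_per_step[k][t]` is a hypothetical unnormalized score for picking
    clip t (0-based) at step k; index `num_clips` stands for the padding
    "end" token. At each step, clips at or before the previously selected
    position are masked out, so picks are strictly increasing in time.
    """
    selected = []
    last = -1  # position of the previously selected clip
    for scores in scores_per_step:
        # Binary temporal mask: deactivate clips up to the last pick.
        masked = [(s if t > last or t == num_clips else float("-inf"))
                  for t, s in enumerate(scores)]
        pick = max(range(len(masked)), key=lambda t: masked[t])
        if pick == num_clips:  # end token reached: stop selection
            break
        selected.append(pick)
        last = pick
    return selected
```

Note how the mask both forbids re-selecting a clip and preserves the temporal order, while still allowing inconsecutive clips (e.g. clip 2 followed by clip 5) to form the thumbnail.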

Figure 3. The detailed architecture of the proposed temporal conditioned pointer network. From top to bottom, each red arrow points out the selected video clip in the sequential video thumbnail generation procedure. The video clip selection stops when it points to the zero padding state at a certain time-step. Under the temporal conditioned constraint, the gray bar in each row indicates the video clips that will not be selected at each time-step.

3.4. Training and Inference

The training samples collected in the dataset $\mathcal{D}$ for sentence specified dynamic video thumbnail generation are video-sentence-annotation triples. Specifically, each video $V$ is associated with a sentence annotation $(S, Y)$, where $S$ is the sentence description used for video thumbnail generation, and $Y \in \{0,1\}^{T \times K}$ is a ground-truth annotation matrix with binary entries. $T$ is the number of video clips in $V$ and $K$ is the maximal number of video clips that can be contained in a video thumbnail. $y_{t,k}$ is set to 1 when the $t$-th video clip in video $V$ is selected as the $k$-th video clip in the video thumbnail. Otherwise, $y_{t,k}$ is set to 0.

For a training sample in $\mathcal{D}$, the objective for video thumbnail generation is given by:

(9) $\ell(V, S, Y) = -\sum_{k=1}^{K}\sum_{t=1}^{T} y_{t,k}\log\beta_t^k.$

Here $\beta_t^k$ is the predicted selection probability of the $t$-th video clip at the $k$-th step in our proposed temporal conditioned pointer network, as denoted in Section 3.3.

In training, the objective will back-propagate to all the fully-coupled three modules of GTP. For all the training samples in $\mathcal{D}$, the overall objective is defined as:

(10) $\mathcal{L} = \sum_{(V,S,Y)\in\mathcal{D}} \ell(V, S, Y).$

During the inference stage, we first pre-process the input video and sentence description to acquire the video clip and word embedding features, then feed the features into our proposed graph convolved video thumbnail pointer, and finally obtain the predicted positions of the selected video clips. These clips are sequentially concatenated together and constitute the dynamic video thumbnail.
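The per-sample objective of Eq. (9) is a step-wise negative log-likelihood over the pointer distributions. A minimal sketch (hypothetical names; the real model would compute the distributions from the network):

```python
import math

def thumbnail_loss(probs_per_step, gt_positions):
    """Sketch of the objective in Eq. (9): negative log-likelihood of the
    annotated clip position at every pointer step.

    `probs_per_step[k]` is the predicted selection distribution over clip
    positions at step k; `gt_positions[k]` is the index of the ground-truth
    clip chosen at that step (the position where y_{t,k} = 1).
    """
    return -sum(math.log(probs_per_step[k][t]) for k, t in enumerate(gt_positions))
```

Summing this quantity over all training triples gives the dataset-level objective of Eq. (10).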

4. Sentence Specified Video Thumbnail Dataset

A major challenge for sentence specified dynamic video thumbnail generation is the lack of a large-scale dataset consisting of video and sentence pairs, together with the corresponding sentence-related video thumbnails. To mitigate this issue, we annotate a new dataset based on the ActivityNet Captions (Krishna et al., 2017) dataset for our proposed new task.

Each video in ActivityNet Captions is annotated with several sentence captions, with each caption summarizing the content of a specific video segment with explicit starting and ending points in the video. We randomly choose 4,000 videos from ActivityNet Captions, and then trim the video segment for each caption from these chosen videos. The trimmed video segments of less than 20 seconds are dropped, and the remaining segments with their corresponding captions are collected to form our required video-sentence pairs. We further ask several participants to annotate the video thumbnails for these collected videos. For the convenience of annotation, we set up a website for annotating the video thumbnails. When annotating, participants are required to read the sentence and watch the video first, and then select no more than 5 clips from the video to constitute the final video thumbnail. To speed up the annotation, we split the original video into clips of 2-second length and place these clips on the website in chronological order. The participants only need to click the clips to indicate their selections.

Through the aforementioned data collection and annotation procedures, we finally acquire 10,204 video-sentence pairs in total, and ensure that each pair is accompanied by 4 video thumbnail annotations from different participants. We randomly choose 70% of the collected video-sentence pairs for training, 15% for validation, and the remaining 15% for testing. Since there are 4 video thumbnail annotations for each video-sentence pair, we take the annotated video thumbnail with the highest consistency among the 4 annotations as the ground-truth during the training stage. In the testing stage, the predicted video thumbnail is evaluated with respect to all the 4 annotations. For more details and analysis of our created dataset, please refer to the supplemental material (https://github.com/yytzsy/GTP/blob/master/ACM_MM19_Supplemental_Material.pdf).

5. Experiments

In this section, we begin by describing baseline methods and experimental settings, followed by the experimental results on the sentence specified dynamic video thumbnail generation task.

5.1. Baseline Methods

We compare our proposed GTP model against the following state-of-the-art video thumbnail generation methods, specifically BeautThumb (Song et al., 2016), RankNet (Gygli et al., 2016), VSEM (Liu et al., 2015), and QARE (Vasudevan et al., 2017). BeautThumb and RankNet are text independent models which generate video thumbnails purely relying on the visual characteristics of video frames. We directly run their released source codes (BeautThumb: https://github.com/yahoo/hecate; RankNet: https://github.com/gyglim/video2gif_code), and concatenate the top-5 ranked video clips as the video thumbnail. VSEM and QARE are text specified models, which learn a joint embedding of video clips and query sentences, and thereby select video thumbnails according to their distances to the sentences. Since both VSEM and QARE only focus on selecting key frames from videos as thumbnails, we adapt the selection unit of these two methods from video frames to video clips, and the top-5 ranked video clips are concatenated together as the final video thumbnail.

In addition, we also apply two temporal sentence localization methods CTRL (Gao et al., 2017) and ACRN (Liu et al., 2018) to the proposed sentence specified dynamic video thumbnail generation task, and evaluate their results on our created dataset. In the setting of temporal sentence localization in video, one sentence query only refers to one single video segment. However, the annotated video thumbnail in our created dataset may be composed of several inconsecutive video clips. In order to generate corresponding ground truth for temporal sentence localization in our created dataset, for each sentence query, we merge each group of continuous annotated video clips into a video segment, and take the longest video segment as the ground truth for temporal sentence localization.

5.2. Experimental Settings

Evaluation Metrics. We assess the quality of a generated video thumbnail by measuring the agreement between the video clips within it and the video clips within the ground-truth annotations. Specifically, for the $i$-th video-sentence sample in the testing set, we denote $C_{ij}$ as the set of selected video clips in the $j$-th ground-truth video thumbnail, and $\hat{C}_i$ as the set of video clips within the generated video thumbnail. The precision, recall, and IoU scores between $C_{ij}$ and $\hat{C}_i$ are computed as $p_{ij} = \frac{|C_{ij}\cap\hat{C}_i|}{|\hat{C}_i|}$, $r_{ij} = \frac{|C_{ij}\cap\hat{C}_i|}{|C_{ij}|}$, and $o_{ij} = \frac{|C_{ij}\cap\hat{C}_i|}{|C_{ij}\cup\hat{C}_i|}$. Finally, the overall video thumbnail generation results are evaluated by the average Precision, Recall, F1 and IoU scores among all the $M$ testing samples (each with 4 ground-truth annotations), as follows:

(11) $\mathrm{Precision} = \frac{1}{M}\sum_{i=1}^{M}\frac{1}{4}\sum_{j=1}^{4} p_{ij},$
(12) $\mathrm{Recall} = \frac{1}{M}\sum_{i=1}^{M}\frac{1}{4}\sum_{j=1}^{4} r_{ij},$
(13) $\mathrm{F1} = \frac{1}{M}\sum_{i=1}^{M}\frac{1}{4}\sum_{j=1}^{4}\frac{2\,p_{ij}\,r_{ij}}{p_{ij}+r_{ij}},$
(14) $\mathrm{IoU} = \frac{1}{M}\sum_{i=1}^{M}\frac{1}{4}\sum_{j=1}^{4} o_{ij}.$
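The per-annotation terms inside Eqs. (11)-(14) reduce to simple set arithmetic over clip indices. A minimal sketch (hypothetical function name):

```python
def clip_set_scores(pred_clips, gt_clips):
    """Precision, recall, F1, and IoU between a generated thumbnail and one
    ground-truth annotation, both given as sets of selected clip indices
    (the per-pair terms inside Eqs. (11)-(14))."""
    pred, gt = set(pred_clips), set(gt_clips)
    inter = len(pred & gt)
    p = inter / len(pred) if pred else 0.0
    r = inter / len(gt) if gt else 0.0
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    iou = inter / len(pred | gt) if pred | gt else 0.0
    return p, r, f1, iou
```

Averaging these scores over the 4 annotations of each sample and then over all M testing samples yields the reported Precision, Recall, F1 and IoU.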

Implementation Details. We evenly split each video into 2-second video clips, and encode each clip with the C3D (Tran et al., 2015) features released by ActivityNet Challenge 2016 (http://activity-net.org/challenges/2016/download.html). For sentences, we tokenize each sentence with Stanford CoreNLP (Manning et al., 2014), and use GloVe (Pennington et al., 2014) to initialize the word embeddings with dimension 300. The words not found in GloVe are randomly initialized. The hidden state dimensions of all GRUs are set to 256. As for the video graph convolution, we set the number of graph convolution layers to 2, and the scaling factor to 150. The initial learning rate is set to 0.001, and is gradually decayed over time.

5.3. Performance Comparisons

Method Precision Recall F1 IoU
Random 0.3409 0.3971 0.3604 0.2379
BeautThumb (Song et al., 2016) 0.3639 0.4217 0.3837 0.2544
RankNet (Gygli et al., 2016) 0.3790 0.4443 0.4013 0.2770
VSEM (Liu et al., 2015) 0.4142 0.4849 0.4386 0.3098
QARE (Vasudevan et al., 2017) 0.4050 0.4744 0.4285 0.2986
CTRL (Gao et al., 2017) 0.4933 0.4124 0.4303 0.3084
ACRN (Liu et al., 2018) 0.4967 0.4328 0.4456 0.3271
GTP 0.5055 0.5742 0.5285 0.3933
Table 1. Performance comparisons of different methods on our created dataset.

Table 1 illustrates the video thumbnail generation results of different methods on our created dataset. First, by randomly selecting 5 video clips to constitute the thumbnail, the Random setting performs the worst, while the other methods, including our proposed GTP, can indeed learn to produce meaningful video thumbnails. Second, the text specified methods VSEM, QARE and GTP achieve much better results than the text independent BeautThumb and RankNet. This verifies that incorporating sentence information is beneficial for choosing semantically meaningful video clips in the sentence specified video thumbnail generation task. Third, among the three text specified video thumbnail generation methods, our GTP performs substantially better than VSEM and QARE. Compared with separately matching the sentence against each video clip, as VSEM and QARE do, our GTP establishes a deeper semantic coupling between sentence and video, and captures the sentence specified relations between video clips with graph convolution. Moreover, the temporal conditioned pointer network further preserves the temporal ordering and semantic coherence of the selected video clips. As such, the video thumbnail generated by our proposed GTP is not only semantically related to the sentence description but also coherent with the overall video content, and thus achieves significantly better performance.

Moreover, as illustrated in Table 1, the two temporal sentence localization methods, namely CTRL and ACRN, achieve inferior results compared to our proposed GTP model. Both ACRN and CTRL mainly focus on modeling semantic correlations between videos and sentence queries, while neglecting the global video content and internal video relationships, and they can only localize one single segment per video. Even though the predicted video segment may be closely related to the given sentence query and yield a relatively high precision value, a single segment may not be representative enough to cover the other meaningful information within the overall video, thus resulting in a lower recall value. As such, temporal sentence localization methods cannot be directly applied to the video thumbnail generation task.

Method Precision Recall F1 IoU
GTP-G 0.5053 0.5384 0.5100 0.3756
GTP-P 0.4071 0.4780 0.4310 0.3043
GTP-C 0.4968 0.4475 0.4582 0.3237
GTP 0.5055 0.5742 0.5285 0.3933
Table 2. Ablation studies on the different components in GTP.

5.4. Analysis of the GTP Model

Ablation Studies on the GTP Components. To verify the contribution of each part of our proposed GTP model, we perform three ablation studies as follows.

(1) GTP-G: We drop the sentence specified video graph convolutional network, and directly feed the concatenation of the average feature of words and video clip feature into the temporal conditioned pointer network.

(2) GTP-P: We drop the temporal conditioned pointer network, and instead establish a 0-1 classifier on the graph convolved video features to predict the probability of selecting each video clip for the video thumbnail. The top-5 ranked clips with the highest probabilities are concatenated as the final video thumbnail.

(3) GTP-C: We remove the temporal conditioned constraint in the proposed temporal conditioned pointer network. In this case, the selected video clips will further be post-processed by dropping the repetitive ones to produce the final video thumbnail.

Table 2 lists the results of the aforementioned ablation studies. It can be observed that our full GTP model outperforms all its variants, which clearly verifies the effectiveness of our proposed sentence specified video graph convolutional network and temporal conditioned pointer network. Concretely, the graph convolution establishes sentence specified relationships between different video clips and links the semantically related ones, which thereby supports in-video reasoning when selecting video clips according to the given sentence semantics. The temporal conditioned pointer network learns the video thumbnail selection pattern from the training data, and can flexibly determine video clip selection and termination based on its former predictions. In contrast, GTP-P drops the pointer network and adopts a video clip ranking strategy. In this case, the temporal and contextual information within video thumbnails is not fully characterized and the video thumbnail length is fixed to a pre-defined value (5 clips), which inevitably leads to inferior results and makes the video thumbnail generation quite inflexible. Moreover, although the temporal conditioned constraint is simple, it naturally avoids disordered and repetitive video clips, and further preserves the logical relations and semantic coherence of the generated video thumbnails. Therefore, incorporating this constraint (from GTP-C to GTP) yields a significant performance improvement for the overall model.
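The temporal conditioned constraint can be illustrated with a small greedy decoding sketch. Here the pointer network's scoring model is abstracted into precomputed per-step scores, and all names are ours; the point is that masking clips at or before the previously selected clip forbids disordered and repeated selections by construction.

```python
def decode_with_temporal_constraint(scores_per_step, num_clips):
    """Greedy pointer decoding under a temporal conditioned constraint.

    scores_per_step: for each decoding step, a list of num_clips + 1 scores,
    where index num_clips stands for the END token. At every step, clips at
    or before the previously selected clip are masked out, so selections are
    strictly increasing in time; choosing END terminates decoding.
    """
    selected = []
    last = -1
    for scores in scores_per_step:
        best, best_score = None, float("-inf")
        for idx, s in enumerate(scores):
            if idx < num_clips and idx <= last:
                continue  # temporal constraint: only strictly later clips
            if s > best_score:
                best, best_score = idx, s
        if best == num_clips:  # END token chosen: stop decoding
            break
        selected.append(best)
        last = best
    return selected
```

Note that without the mask, the second step below would re-select clip 0; with it, the decoder moves forward in time and stops at the END token.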

Method Precision Recall F1 IoU
GTP-1 0.5028 0.5686 0.5245 0.3880
GTP-2 0.5055 0.5742 0.5285 0.3933
GTP-3 0.5036 0.5710 0.5257 0.3899
GTP-4 0.4985 0.5677 0.5216 0.3854
Table 3. Ablation studies on the graph convolution layers in GTP.

Ablation Studies on the Number of Graph Convolution Layers. Table 3 lists the results of our proposed GTP model with different numbers of graph convolution layers. It can be observed that GTP with two graph convolution layers achieves the best results. When more layers are added, the overall performance gradually decreases yet remains stable, with narrow margins relative to the best. The main reason may be that overfitting becomes an issue as the number of parameters increases with model depth (Kipf and Welling, 2016).

Figure 4. Qualitative examples for sentence specified dynamic video thumbnail generation. On the left, we use different color bars to show the video clip selection results for different methods, with the selected video clips highlighted in darker colors. Ground-truth video thumbnails are indicated by green color. On the right, we provide two kinds of heat maps (red and blue) to illustrate the word-by-clip attention matrix and the video clip adjacency matrix, respectively.
Figure 5. Evolution of the learned adjacency matrices during the sentence specified video graph convolution. The graph edges, representing video clip relationships, are learned more and more clearly over the course of model training.

5.5. Qualitative Results

Video Thumbnail Generation Examples. Several qualitative examples for sentence specified dynamic video thumbnail generation are shown in Figure 4. It can be observed that the video clips selected by our GTP model are more semantically consistent with the given sentence description. Even though in the second example the ground-truth thumbnail is divided into three separate parts, our GTP can still predict their positions accurately. This indicates that our GTP not only measures the semantic correlations between video clips and sentences, but also captures the long range dependencies and internal relationships within videos, and thus can generate video thumbnails that provide good content previews of the original videos.

To better demonstrate the word-by-clip interaction and the video graph convolution in the video thumbnail generation procedure, we also provide two kinds of heat maps (red and blue) in Figure 4 to illustrate the word-by-clip attention matrix and the video clip adjacency matrix, respectively. From the word-by-clip attention matrix, it can be observed that the words with higher attention weights match the video contents well. For example, in the first qualitative example, the action “man runs and jumps” appears in the 3rd to 7th video clips, and accordingly the concepts “man”, “runs” and “jumps” receive higher attention values in these clips. For stop words like “the” and “and”, the attention weights are very small and evenly distributed across the whole video.

For the video clip adjacency matrix, the values in the diagonal region are always higher than the others, which is consistent with the fact that video clips always have higher similarities with their adjacent clips. Additionally, in the second qualitative example, the last video clip is highly correlated with the first 5 clips under the sentence semantics, as shown by the high entry values in the adjacency matrix. Based on the adjacency matrix, our GTP performs reasoning on the video clip graph with the graph convolution operation, and thus can easily link the last video clip to the first 5. This also provides an interpretation of why our proposed GTP can accurately predict the position of the separated last video clip.

Video Clip Graph Learning. To investigate whether our GTP model can learn the sentence specified video clip graph structure during training, we select two samples from our training set and record the evolution of their corresponding video clip adjacency matrices over different training epochs, as illustrated in Figure 5. We can observe that the adjacency matrices tend toward a uniform distribution at Epoch 1. As training proceeds, block boundaries gradually emerge in the adjacency matrices, which means that the video graph structures are gradually learned. Meanwhile, by examining the video contents with respect to the learned adjacency matrices, we find that video clips linked with higher edge values also present strong semantic correlations. This indicates that our model can indeed learn the sentence specified semantic relationships between different video clips, which further facilitates the video thumbnail generation.

6. Conclusions

In this paper, we defined a distinctively new task, namely sentence specified dynamic video thumbnail generation, which aims at selecting and synthesizing several video clips from a video to constitute its thumbnail, such that the thumbnail semantically corresponds to the given sentence description. To facilitate the proposed task, we created a new dataset by re-annotating the videos in the ActivityNet Captions dataset. Furthermore, we proposed a novel GTP model, leveraging the graph convolution operation to explore the sentence specified semantic relationships between different video clips; the informative video thumbnail is thereafter sequentially predicted by a novel temporal conditioned pointer network. Extensive experimental results demonstrate the superiority of our proposed model, which outperforms the baseline methods by considerable margins.

7. Acknowledgments

This work was supported by National Program on Key Basic Research Project No. 2015CB352300, National Natural Science Foundation of China Major Project No. U1611461, and Shenzhen Nanshan District Ling-Hang Team Grant No. LHTD20170005.

References

  • Ba et al. (2016) Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450 (2016).
  • Caba Heilbron et al. (2015) Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. 2015. Activitynet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 961–970.
  • Chen et al. (2018) Jingyuan Chen, Xinpeng Chen, Lin Ma, Zequn Jie, and Tat-Seng Chua. 2018. Temporally Grounding Natural Sentence in Video. In EMNLP 2018: 2018 Conference on Empirical Methods in Natural Language Processing. 162–171.
  • Chen et al. (2019a) Jingyuan Chen, Lin Ma, Xinpeng Chen, Zequn Jie, and Jiebo Luo. 2019a. Localizing Natural Language in Videos. In AAAI.
  • Chen et al. (2019b) Zhenfang Chen, Lin Ma, Wenhan Luo, and Kwan-Yee K Wong. 2019b. Weakly-Supervised Spatio-Temporally Grounding Natural Sentence in Video. In ACL.
  • Cho et al. (2014) Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. Computer Science (2014).
  • Dirfaux (2000) F Dirfaux. 2000. Key frame selection to represent a video. In IEEE International Conference on Image Processing, Vol. 2. 275–278.
  • Frome et al. (2013) Andrea Frome, Greg S Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Tomas Mikolov, et al. 2013. Devise: A deep visual-semantic embedding model. In Advances in neural information processing systems. 2121–2129.
  • Gao et al. (2017) Jiyang Gao, Chen Sun, Zhenheng Yang, and Ram Nevatia. 2017. TALL: Temporal Activity Localization via Language Query. In Proceedings of the IEEE International Conference on Computer Vision.
  • Gao et al. (2009) Yuli Gao, Tong Zhang, and Jun Xiao. 2009. Thematic video thumbnail selection. In IEEE International Conference on Image Processing. 4333–4336.
  • Girshick (2015) Ross Girshick. 2015. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision. 1440–1448.
  • Girshick et al. (2014) Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. 2014. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer vision and Pattern Recognition. 580–587.
  • Gygli et al. (2016) Michael Gygli, Yale Song, and Liangliang Cao. 2016. Video2gif: Automatic generation of animated gifs from video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1001–1009.
  • Hasebe et al. (2004) Satoshi Hasebe, Makoto Nagumo, Shogo Muramatsu, and Hisakazu Kikuchi. 2004. Video key frame selection by clustering wavelet coefficients. In Signal Processing Conference, 2004 12th European. 2303–2306.
  • Hendricks et al. (2017) Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. 2017. Localizing Moments in Video with Natural Language. In Proceedings of the IEEE International Conference on Computer Vision.
  • Hua et al. (2013) Xian-Sheng Hua, Linjun Yang, Jingdong Wang, Jing Wang, Ming Ye, Kuansan Wang, Yong Rui, and Jin Li. 2013. Clickage: Towards bridging semantic and intent gaps via mining click logs of search engines. In Proceedings of the 21st ACM international conference on Multimedia. 243–252.
  • Kang and Hua (2005) Hong-Wen Kang and Xian-Sheng Hua. 2005. To learn representativeness of video frames. In Proceedings of the 13th annual ACM international conference on Multimedia. 423–426.
  • Kipf and Welling (2016) Thomas N Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016).
  • Krishna et al. (2017) Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. 2017. Dense-Captioning Events in Videos.. In Proceedings of the IEEE International Conference on Computer Vision. 706–715.
  • Liu et al. (2011) Chunxi Liu, Qingming Huang, and Shuqiang Jiang. 2011. Query sensitive dynamic web video thumbnail generation. In IEEE International Conference on Image Processing. 2449–2452.
  • Liu et al. (2007) Jing Liu, Bin Wang, Mingjing Li, Zhiwei Li, Weiying Ma, Hanqing Lu, and Songde Ma. 2007. Dual cross-media relevance model for image annotation. In Proceedings of the 15th ACM international conference on Multimedia. ACM, 605–614.
  • Liu et al. (2018) Meng Liu, Xiang Wang, Liqiang Nie, Xiangnan He, Baoquan Chen, and Tat-Seng Chua. 2018. Attentive Moment Retrieval in Videos. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval. 15–24.
  • Liu et al. (2015) Wu Liu, Tao Mei, Yongdong Zhang, Cherry Che, and Jiebo Luo. 2015. Multi-task deep visual-semantic embedding for video thumbnail selection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 3707–3715.
  • Luo et al. (2009) Jiebo Luo, Christophe Papin, and Kathleen Costello. 2009. Towards extracting semantically meaningful key frames from personal video clips: from humans to computers. IEEE Transactions on Circuits and Systems for Video Technology 19, 2 (2009), 289–301.
  • Manning et al. (2014) Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations. 55–60.
  • Mei et al. (2009) Tao Mei, Bo Yang, Shi-Qiang Yang, and Xian-Sheng Hua. 2009. Video collage: presenting a video sequence using a single image. The Visual Computer 25, 1 (2009), 39–51.
  • Pan et al. (2014) Yingwei Pan, Ting Yao, Tao Mei, Houqiang Li, Chong-Wah Ngo, and Yong Rui. 2014. Click-through-based cross-view learning for image search. In Proceedings of the 37th international ACM SIGIR conference on Research & development in information retrieval. 717–726.
  • Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP). 1532–1543.
  • Song et al. (2016) Yale Song, Miriam Redi, Jordi Vallmitjana, and Alejandro Jaimes. 2016. To click or not to click: Automatic selection of beautiful thumbnails from videos. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management. 659–668.
  • Tran et al. (2015) Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. 2015. Learning spatiotemporal features with 3d convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision. 4489–4497.
  • Vasudevan et al. (2017) Arun Balajee Vasudevan, Michael Gygli, Anna Volokitin, and Luc Van Gool. 2017. Query-adaptive Video Summarization via Quality-aware Relevance Estimation. In Proceedings of the 2017 ACM on Multimedia Conference. 582–590.
  • Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems. 5998–6008.
  • Vinyals et al. (2015) Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems. 2692–2700.
  • Wang and Gupta (2018) Xiaolong Wang and Abhinav Gupta. 2018. Videos as Space-Time Region Graphs. arXiv preprint arXiv:1806.01810 (2018).
  • Wang et al. (2011) Zheshen Wang, Mrityunjay Kumar, Jiebo Luo, and Baoxin Li. 2011. Extracting key frames from consumer videos using bi-layer group sparsity. In Proceedings of the 19th ACM international conference on Multimedia. 1505–1508.
  • Xu et al. (2015) Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. In Proceedings of the 32nd International Conference on Machine Learning. 2048–2057.
  • Yuan et al. (2019) Yitian Yuan, Tao Mei, and Wenwu Zhu. 2019. To Find Where You Talk: Temporal Sentence Localization in Video with Attention Based Location Regression. In Thirty-Third AAAI Conference on Artificial Intelligence.

Appendix A The Dataset Annotation Detail

Figure 6 illustrates our annotation website for the sentence specified dynamic video thumbnail generation task. For each video and its paired sentence description in our collected dataset, we place both on the website simultaneously for the convenience of the annotation participants. Moreover, in order to speed up the annotation, we evenly split each video into 2-second video clips (mainly because we find that the smallest video thumbnail GIFs on video websites like YouTube are 1 to 2 seconds long), and all these clips are displayed in chronological order. Participants are required to select no more than 5 video clips that semantically correspond to the sentence description to compose the video thumbnail. A video clip is highlighted with a red bounding box after being selected, and the selected clips are not required to be consecutive in time. When a participant finishes the video clip selection for the current video-sentence pair, he or she only needs to click the “submit” button to proceed to the next annotation task.

The annotations of different participants are completely independent, with the video-sentence pairs randomly presented on the website. There are 10,204 video-sentence pairs in our collected dataset, and we ensure that each pair has 4 video thumbnail annotations from 4 different participants. Therefore, we obtain 40,816 annotation results in total for our constructed dataset.

Figure 6. The annotation interface for the sentence specified dynamic video thumbnail generation task.

Some video thumbnail annotation examples are shown in Figure 7. For each example, we provide two video thumbnail annotations, with the selected video clips highlighted in orange and yellow bounding boxes, respectively. We can observe that in example (a) the two annotations are exactly the same, while in the other examples the annotations are only partially aligned with each other. This illustrates that different participants hold different opinions when annotating video thumbnails, leading to differences between the annotated thumbnails. However, the jointly selected video clips indicate that the participants still share a common cognition of the given sentence descriptions. In addition, examples (a) and (b) share the same video but have different sentence descriptions. We can see that the sentence descriptions strongly influence the resulting video thumbnails and cause great discrepancy, which further verifies the necessity of generating specific video thumbnails for different sentences.

Figure 7. Video thumbnail annotation examples. For each showing video-sentence pair, we provide two video thumbnail annotations, and the selected video clips in these two annotations are highlighted with orange and yellow bounding boxes, respectively.
Figure 8. The video thumbnail annotation consistency distribution over all the video-sentence pairs.

Appendix B Dataset Statistical Analysis

Video Length. The minimal, maximal, and average video lengths over all the videos in our constructed dataset are 20.0s, 238.4s and 60.7s, respectively. The average length of the annotated video thumbnails is 8.7s.

Video Thumbnail Annotation Consistency. As indicated in Figure 7, video thumbnail annotation is a very subjective task, with different participants holding different opinions. To measure the consistency of the selected video thumbnails between different participants, we define a metric as follows:

(15) $c_{i,j} = \frac{1}{3}\sum_{k \neq j}\frac{|A_{i,j} \cap A_{i,k}|}{|A_{i,j} \cup A_{i,k}|}, \qquad c_i = \frac{1}{4}\sum_{j=1}^{4} c_{i,j}$

Here $A_{i,j}$ denotes the set of selected video clips composing the $j$-th annotated video thumbnail for the $i$-th video-sentence pair. $c_{i,j}$ indicates the annotation consistency between the $j$-th annotated video thumbnail and all the other annotations for the $i$-th video-sentence pair, and $c_i$ is the average annotation consistency of the 4 video thumbnail annotations for the $i$-th pair. If the selected video clips of all the annotations are exactly the same, the value of $c_i$ equals 1. The annotation consistency distribution over all the video-sentence pairs is illustrated in Figure 8. It can be observed that for most video-sentence pairs, the selected video clips of different participants do not match exactly, but some clips are still jointly selected by several participants. This further demonstrates that video thumbnail generation is indeed a subjective task, yet people still reach a consensus on the thumbnail with respect to the given sentence descriptions.
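This consistency measure amounts to an average pairwise IoU between annotations, and can be sketched as follows (the function name and set-of-indices representation are our own):

```python
def annotation_consistency(annotations):
    """Per-annotation and mean pairwise-IoU consistency for one video-sentence pair.

    annotations: non-empty clip-index sets, one per participant (4 in the dataset).
    The mean equals 1.0 exactly when all annotations select the same clips.
    """
    n = len(annotations)
    per = []
    for j, a in enumerate(annotations):
        others = [annotations[k] for k in range(n) if k != j]
        # average IoU between annotation j and every other annotation
        per.append(sum(len(a & b) / len(a | b) for b in others) / (n - 1))
    return per, sum(per) / n
```

The per-annotation scores also make it easy to pick the single most consistent annotation of a pair, e.g. via argmax over the returned list.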

Ground Truth. Since there are 4 video thumbnail annotations for each video-sentence pair, we take the annotation with the highest consistency among the 4 as the ground truth during training. In the testing stage, the predicted video thumbnail is evaluated against all 4 annotations.

Appendix C Qualitative Results

Evolution of the Sentence Specified Video Clip Graph. Figure 9 shows the evolution of 4 groups of video clip adjacency matrices during GTP model training. We can observe that the first two qualitative examples (a) and (b) present an evolution process similar to the examples shown in the main paper: the adjacency matrices tend toward a uniform distribution at the initial training stage, and block boundaries gradually emerge as training proceeds. In contrast, in qualitative examples (c) and (d), the sentence specified video clip graph structures have already been roughly learned by Epoch 1, with the following training epochs only adjusting and emphasizing the learned video clip relationships. Overall, all of the above results verify that our GTP model can indeed learn the sentence specified video clip graph according to the sentence and video semantics.

Figure 9. Evolution of the learned video clip adjacency matrices during the sentence specified video graph convolution.
Figure 10. Qualitative results of our proposed GTP model for sentence specified dynamic video thumbnail generation. Blue bars show the video thumbnail generation results for our proposed GTP model, with the selected video clips highlighted in darker colors. Green bars show the ground-truth video thumbnails.

Video Thumbnail Generation Results of the GTP Model. Figure 10 illustrates some qualitative results of our proposed GTP model for sentence specified dynamic video thumbnail generation. We can observe that the video clips selected by GTP are consistent with the clips in the ground truths, which indicates the effectiveness of our proposed model. Meanwhile, the generated video thumbnails are quite flexible. As shown in cases (a) and (e), the video thumbnails are temporally inconsecutive and provide a good preview of the overall video content. Comparing case (c) to the others, we can find that the lengths of the video thumbnails are also not fixed: since most of the video content in case (c) is irrelevant to the “skateboarding” described by the sentence, GTP only selects the last clip, which presents the matching activity.

Besides, the predicted video thumbnail in case (d) does not exactly match the ground-truth annotation. The main reason lies in the indistinguishable video scenes: from the 8-th video clip in case (d) to the end of the video, all the clips present the same scene of “people rafting”. Therefore, it is hard not only for the GTP model but also for the annotators to decide which clips to choose. However, since all these clips match the sentence description, the video thumbnail generated by our proposed GTP is still reasonable and accurate.