Sentence Specified Dynamic Video Thumbnail Generation

08/12/2019
by Yitian Yuan, et al.

With the tremendous growth of videos over the Internet, video thumbnails, which provide previews of video content, are becoming increasingly crucial in shaping users' online search experiences. Conventional video thumbnails are generated once, purely from the visual characteristics of the videos, and then displayed as requested. Such thumbnails, ignoring users' search intentions, cannot provide a meaningful snapshot of the video content that users actually care about. In this paper, we define a distinctively new task, namely sentence specified dynamic video thumbnail generation, where the generated thumbnails not only provide a concise preview of the original video content but also dynamically relate to users' search intentions through semantic correspondences to their query sentences. To tackle this challenging task, we propose a novel graph convolved video thumbnail pointer (GTP). Specifically, GTP leverages a sentence specified video graph convolutional network to model both the sentence-video semantic interaction and the internal video relationships incorporated with the sentence information, based on which a temporal conditioned pointer network is then introduced to sequentially generate the sentence specified video thumbnails. Moreover, we annotate a new dataset based on ActivityNet Captions for the proposed task, consisting of 10,000+ video-sentence pairs, each accompanied by an annotated sentence specified video thumbnail. We demonstrate that our proposed GTP outperforms several baseline methods on the created dataset, and thus believe that our initial results, along with the release of the new dataset, will inspire further research on sentence specified dynamic video thumbnail generation. Dataset and code are available at https://github.com/yytzsy/GTP.
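The pipeline described above — fusing the query sentence into clip features, propagating them over a video graph, then pointing at clips in temporal order — can be illustrated with a minimal numpy sketch. This is not the authors' implementation (see the linked repository for that); all shapes, the similarity-based adjacency, and the greedy scoring are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sentence_conditioned_gcn(clip_feats, sent_feat, W):
    """One illustrative graph-convolution layer over video clips,
    with the sentence fused into every clip feature by gating."""
    fused = clip_feats * sent_feat                      # (T, d) broadcast gating
    sim = fused @ fused.T                               # pairwise clip affinities
    adj = np.exp(sim - sim.max(axis=1, keepdims=True))  # row-wise softmax ...
    adj /= adj.sum(axis=1, keepdims=True)               # ... gives the adjacency
    return np.tanh(adj @ fused @ W)                     # graph-convolved features

def pointer_select(clip_feats, k):
    """Greedy stand-in for the temporal conditioned pointer network:
    repeatedly pick the best-scoring clip after the previous pick."""
    scores = clip_feats.sum(axis=1)
    chosen, last = [], -1
    for _ in range(k):
        if last + 1 >= len(scores):                     # no clips left in order
            break
        mask = np.full(len(scores), -np.inf)
        mask[last + 1:] = 0.0                           # enforce temporal order
        last = int(np.argmax(scores + mask))
        chosen.append(last)
    return chosen

T, d = 12, 16                                           # 12 clips, 16-dim features
clips = rng.normal(size=(T, d))                         # stand-in clip features
sentence = rng.normal(size=(d,))                        # stand-in query embedding
W = rng.normal(size=(d, d)) / np.sqrt(d)

conv = sentence_conditioned_gcn(clips, sentence, W)
thumbnail = pointer_select(conv, k=3)
print(thumbnail)                                        # up to 3 clip indices, increasing
```

In the actual GTP model the adjacency, fusion, and pointer scores are all learned end to end; the sketch only shows how selected clip indices stay temporally ordered, which is what lets the concatenated clips play back as a coherent dynamic thumbnail.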


