Jointly Localizing and Describing Events for Dense Video Captioning

04/23/2018
by Yehao Li, et al.

Automatically describing a video with natural language is regarded as a fundamental challenge in computer vision. The problem is nevertheless not trivial, especially when a video contains multiple events worth mentioning, as often happens in real videos. A natural question is how to temporally localize and then describe events, a task known as "dense video captioning." In this paper, we present a novel framework for dense video captioning that unifies the localization of temporal event proposals and the sentence generation for each proposal by jointly training them in an end-to-end manner. To combine these two worlds, we integrate a new design, namely descriptiveness regression, into a single-shot detection structure to infer the descriptive complexity of each detected proposal via sentence generation. This in turn adjusts the temporal location of each event proposal. Our model differs from existing dense video captioning methods in that we propose a joint and global optimization of detection and captioning, and the framework uniquely capitalizes on an attribute-augmented video captioning architecture. Extensive experiments are conducted on the ActivityNet Captions dataset, and our framework shows clear improvements over state-of-the-art techniques. More remarkably, we obtain a new record: a METEOR score of 12.96 on the official test set.
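Conceptually, the descriptiveness regression described above feeds a captioner-derived score back into the detector so that proposal boundaries and ranking reflect how describable a segment is. The following is a minimal, hypothetical sketch of that idea; the field names, the linear boundary adjustment, and the `alpha`-weighted score fusion are illustrative assumptions, not the authors' implementation.

```python
def rank_and_adjust_proposals(proposals, alpha=0.5):
    """Adjust and rank temporal event proposals.

    proposals: list of dicts with keys
      'start', 'end'     -- initial temporal segment (seconds)
      'offset'           -- regressed (start, end) corrections
      'eventness'        -- detector confidence in [0, 1]
      'descriptiveness'  -- captioner-derived score in [0, 1]

    Returns proposals with adjusted boundaries, ranked by a fused score.
    """
    adjusted = []
    for p in proposals:
        ds, de = p['offset']
        # Apply the regressed corrections, clamping to a valid segment.
        start = max(0.0, p['start'] + ds)
        end = max(start, p['end'] + de)
        # Fuse detection confidence with descriptive complexity;
        # alpha weights how much the captioning feedback matters.
        score = (1 - alpha) * p['eventness'] + alpha * p['descriptiveness']
        adjusted.append({'start': start, 'end': end, 'score': score})
    return sorted(adjusted, key=lambda q: q['score'], reverse=True)
```

With `alpha > 0`, a segment that the detector scores highly but that yields a trivial caption can be outranked by a slightly weaker detection that supports a richer description, which is the intuition behind coupling the two objectives.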


Related research

02/28/2018  Joint Event Detection and Description in Continuous Video Streams
As a fine-grained video understanding task, dense video captioning invol...

11/24/2015  DenseCap: Fully Convolutional Localization Networks for Dense Captioning
We introduce the dense captioning task, which requires a computer vision...

01/15/2020  University of Amsterdam and Renmin University at TRECVID 2017: Searching Video, Detecting Events and Describing Video
In this paper, we summarize our TRECVID 2017 video recognition and retri...

05/02/2017  Dense-Captioning Events in Videos
Most natural videos contain numerous events. For example, in a video of ...

11/13/2019  Crowd Video Captioning
Describing a video automatically with natural language is a challenging ...

03/31/2018  Bidirectional Attentive Fusion with Context Gating for Dense Video Captioning
Dense video captioning is a newly emerging task that aims at both locali...

03/11/2023  Learning Grounded Vision-Language Representation for Versatile Understanding in Untrimmed Videos
Joint video-language learning has received increasing attention in recen...
