Video Captioning in Compressed Video

01/02/2021
by Mingjian Zhu, et al.

Existing approaches to video captioning concentrate on exploring global frame features extracted from uncompressed videos, while the free yet critical saliency information already encoded in compressed videos is generally neglected. We propose a video captioning method that operates directly on stored compressed videos. To learn a discriminative visual representation for video captioning, we design a residuals-assisted encoder (RAE), which spots regions of interest in I-frames with the assistance of the residual frames. First, we extract features from the residuals to obtain a saliency value for each location in the I-frame, yielding spatial attention weights, and design a spatial attention module to refine these weights. We further propose a temporal gate module that determines how much the attended features contribute to caption generation, enabling the model to resist the disturbance of noisy signals in the compressed videos. Finally, a Long Short-Term Memory network decodes the visual representations into descriptions. We evaluate our method on two benchmark datasets and demonstrate its effectiveness.
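The core idea above can be sketched in a few lines: residual frames supply a per-location saliency score, a softmax over those scores attends over I-frame features, and a sigmoid gate decides how much the attended feature contributes relative to a global frame feature. The following is a minimal NumPy sketch under stated assumptions; the function names, feature shapes, and the single-vector gate parameterization are illustrative and not the authors' actual implementation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array of scores."""
    e = np.exp(x - x.max())
    return e / e.sum()

def residual_spatial_attention(i_frame_feats, residual_saliency):
    """Attend over I-frame features using residual-derived saliency.

    i_frame_feats: (N, C) array, one C-dim feature per spatial location.
    residual_saliency: (N,) array, saliency score per location computed
        from the residual frames (hypothetical input here).
    Returns the attended (C,) feature and the (N,) attention weights.
    """
    weights = softmax(residual_saliency)   # spatial attention over locations
    attended = weights @ i_frame_feats     # weighted sum -> (C,)
    return attended, weights

def temporal_gate(attended, global_feat, w_gate, b_gate=0.0):
    """Scalar sigmoid gate blending attended and global frame features.

    A gate near 1 trusts the residual-attended feature; a gate near 0
    falls back on the global feature, resisting noisy residual signals.
    """
    z = w_gate @ np.concatenate([attended, global_feat]) + b_gate
    g = 1.0 / (1.0 + np.exp(-z))           # gate in (0, 1)
    return g * attended + (1.0 - g) * global_feat
```

In a full model these per-frame features would be fed as the input sequence to an LSTM decoder that emits the caption word by word; the sketch only covers the encoder-side attention and gating.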


Related research

06/15/2016 - Bidirectional Long-Short Term Memory for Video Description
Video captioning has been attracting broad research attention in multime...

03/13/2022 - Global2Local: A Joint-Hierarchical Attention for Video Captioning
Recently, automatic video captioning has attracted increasing attention,...

04/25/2023 - TCR: Short Video Title Generation and Cover Selection with Attention Refinement
With the widespread popularity of user-generated short videos, it become...

03/27/2016 - Recurrent Mixture Density Network for Spatiotemporal Visual Attention
In many computer vision tasks, the relevant information to solve the pro...

10/06/2022 - Compressed Vision for Efficient Video Understanding
Experience and reasoning occur across multiple temporal scales: millisec...

11/23/2016 - Adaptive Feature Abstraction for Translating Video to Text
Previous models for video captioning often use the output from a specifi...

11/20/2016 - Recurrent Memory Addressing for describing videos
In this paper, we introduce Key-Value Memory Networks to a multimodal se...
