Top-down Visual Saliency Guided by Captions

12/21/2016
by Vasili Ramanishka et al.

Neural image/video captioning models can generate accurate descriptions, but their internal process of mapping regions to words is a black box and therefore difficult to explain. Top-down neural saliency methods can find important regions given a high-level semantic task such as object classification, but cannot use a natural language sentence as the top-down input for the task. In this paper, we propose Caption-Guided Visual Saliency to expose the region-to-word mapping in modern encoder-decoder networks and demonstrate that it is learned implicitly from caption training data, without any pixel-level annotations. Our approach can produce spatial or spatiotemporal heatmaps for both predicted captions and arbitrary query sentences. It recovers saliency without the overhead of introducing explicit attention layers, and it can be used to analyze a variety of existing model architectures and improve their design. Evaluation on large-scale video and image datasets demonstrates that our approach achieves captioning performance comparable to that of existing methods while providing more accurate saliency heatmaps. Our code is available at visionlearninggroup.github.io/caption-guided-saliency/.
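The abstract does not spell out the mechanism, but the core idea, scoring how much each encoded region contributes to each emitted word without an explicit attention layer, can be illustrated with a masking probe. The sketch below is not the authors' implementation: the word_logprob_fn interface, the feature layout, and the zero-masking scheme are all assumptions made for illustration.

import numpy as np

def caption_guided_saliency(word_logprob_fn, features, caption_tokens):
    # Baseline log-probability of every caption word given the full
    # set of region/frame descriptors (assumed model interface).
    base = np.asarray(word_logprob_fn(features, caption_tokens))
    n_regions = features.shape[0]
    saliency = np.zeros((len(caption_tokens), n_regions))
    for r in range(n_regions):
        masked = features.copy()
        masked[r] = 0.0  # suppress descriptor r (assumed masking scheme)
        probe = np.asarray(word_logprob_fn(masked, caption_tokens))
        saliency[:, r] = base - probe  # how much each word relied on region r
    saliency = np.maximum(saliency, 0.0)  # keep only probability drops
    # Normalize per word to obtain a heatmap over regions.
    return saliency / (saliency.sum(axis=1, keepdims=True) + 1e-8)

# Toy stand-in for a pretrained encoder-decoder, for shape checking only.
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 512))  # e.g. 8 frames x 512-d descriptors
dummy = lambda f, t: [-0.01 * np.linalg.norm(f)] * len(t)
heatmap = caption_guided_saliency(dummy, feats, ["a", "dog", "runs"])
print(heatmap.shape)  # (3 words, 8 regions)

In practice, word_logprob_fn would wrap a trained encoder-decoder captioner and caption_tokens could be either the model's predicted caption or an arbitrary query sentence, matching the two use cases the abstract describes.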


Related research

11/02/2020 · Boost Image Captioning with Knowledge Reasoning
Automatically generating a human-like description for a given image is a...

08/09/2023 · Decoding Layer Saliency in Language Transformers
In this paper, we introduce a strategy for identifying textual saliency ...

06/26/2017 · Paying More Attention to Saliency: Image Captioning with Saliency and Context Attention
Image captioning has been recently gaining a lot of attention thanks to ...

08/08/2021 · Discriminative Latent Semantic Graph for Video Captioning
Video captioning aims to automatically generate natural language sentenc...

04/01/2019 · Multi-source weak supervision for saliency detection
The high cost of pixel-level annotations makes it appealing to train sal...

05/29/2019 · Vision-to-Language Tasks Based on Attributes and Attention Mechanism
Vision-to-language tasks aim to integrate computer vision and natural la...

07/22/2013 · Saliency-Guided Perceptual Grouping Using Motion Cues in Region-Based Artificial Visual Attention
Region-based artificial attention constitutes a framework for bio-inspir...
