Paying Attention to Descriptions Generated by Image Captioning Models

04/24/2017
by Hamed R. Tavakoli, et al.

To bridge the gap between humans and machines in image understanding and description, we need further insight into how people describe a perceived scene. In this paper, we study the agreement between bottom-up saliency-based visual attention and object referrals in scene descriptions. We investigate the properties of human-written descriptions and machine-generated ones, and then propose a saliency-boosted image captioning model to investigate the benefits of low-level cues for language models. We find that (1) humans mention more salient objects earlier than less salient ones in their descriptions; (2) the better a captioning model performs, the better its attention agrees with human descriptions; (3) the proposed saliency-boosted model does not improve significantly over its baseline on the MS COCO database, indicating that explicit bottom-up boosting does not help when the task is well learnt and tuned on a dataset; (4) the saliency-boosted model does, however, generalize better to unseen data.
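The abstract does not spell out how saliency is injected into the captioner. A minimal sketch of one common fusion scheme — mixing the decoder's soft attention over image regions with a normalized bottom-up saliency prior via a convex combination — is shown below; the function name, the `alpha` mixing weight, and the combination rule are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def saliency_boosted_attention(att_logits, saliency, alpha=0.5):
    """Fuse decoder attention with a bottom-up saliency prior (illustrative).

    att_logits: (N,) attention scores over N image regions from the decoder.
    saliency:   (N,) non-negative bottom-up saliency values for the same regions.
    alpha:      mixing weight; alpha=0 recovers the baseline soft attention.
    """
    # Normalize the saliency map into a probability distribution.
    sal = saliency / (saliency.sum() + 1e-8)
    # Baseline soft attention (numerically stable softmax over regions).
    exp = np.exp(att_logits - att_logits.max())
    att = exp / exp.sum()
    # Convex combination up-weights regions that are bottom-up salient.
    boosted = (1 - alpha) * att + alpha * sal
    return boosted / boosted.sum()
```

With `alpha=0` this reduces exactly to the baseline attention, which mirrors the paper's comparison between the boosted model and its baseline form.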


Related Research

09/25/2020

Are scene graphs good enough to improve Image Captioning?

Many top-performing image captioning models rely solely on object featur...
03/06/2019

A Synchronized Multi-Modal Attention-Caption Dataset and Analysis

In this work, we present a novel multi-modal dataset consisting of eye m...
06/26/2017

Paying More Attention to Saliency: Image Captioning with Saliency and Context Attention

Image captioning has been recently gaining a lot of attention thanks to ...
11/07/2015

Generation and Comprehension of Unambiguous Object Descriptions

We propose a method that can generate an unambiguous description (known ...
11/28/2018

Towards Task Understanding in Visual Settings

We consider the problem of understanding real world tasks depicted in vi...
03/24/2020

Learning Compact Reward for Image Captioning

Adversarial learning has shown its advances in generating natural and di...
07/26/2019

Cooperative image captioning

When describing images with natural language, the descriptions can be ma...

Code Repositories

captionGAN

Source code for the paper "Speaking the Same Language: Matching Machine to Human Captions by Adversarial Training"
