Paying Attention to Descriptions Generated by Image Captioning Models

04/24/2017
by Hamed R. Tavakoli, et al.

To bridge the gap between humans and machines in understanding and describing images, we need further insight into how people describe a perceived scene. In this paper, we study the agreement between bottom-up saliency-based visual attention and object referrals in scene descriptions. We investigate the properties of human-written descriptions and machine-generated ones. We then propose a saliency-boosted image captioning model to investigate whether low-level cues benefit the language model. We learn that (1) humans mention more salient objects earlier than less salient ones in their descriptions, (2) the better a captioning model performs, the better its attention agrees with human descriptions, (3) the proposed saliency-boosted model does not improve significantly over its baseline on the MS COCO database, indicating that explicit bottom-up boosting does not help when the task is well learned and tuned on a dataset, and (4) the saliency-boosted model nevertheless generalizes better to unseen data.
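
The abstract does not spell out how saliency is injected into the captioning model. As a rough, hypothetical sketch of what "saliency-boosted" attention could look like, the PyTorch snippet below biases a standard soft-attention step over image regions with a bottom-up saliency prior. The class name, layer names, and the log-prior fusion rule are assumptions made for illustration, not the authors' formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SaliencyBoostedAttention(nn.Module):
    """Soft attention over image regions, biased by a bottom-up saliency prior.

    Hypothetical sketch: layer names and the log-prior fusion rule are
    assumptions, not the paper's exact formulation.
    """

    def __init__(self, feat_dim, hidden_dim, attn_dim):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, attn_dim)
        self.hidden_proj = nn.Linear(hidden_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, region_feats, hidden, saliency):
        # region_feats: (B, R, feat_dim) CNN features for R image regions
        # hidden:       (B, hidden_dim)  current decoder (LSTM) state
        # saliency:     (B, R)           bottom-up saliency per region, in [0, 1]
        logits = self.score(torch.tanh(
            self.feat_proj(region_feats) + self.hidden_proj(hidden).unsqueeze(1)
        )).squeeze(-1)                                   # (B, R) attention logits
        # Boost the logits with the saliency prior before normalizing.
        alpha = F.softmax(logits + torch.log(saliency + 1e-6), dim=-1)
        context = (alpha.unsqueeze(-1) * region_feats).sum(dim=1)  # (B, feat_dim)
        return context, alpha


# Example usage with random tensors (batch of 2, 49 regions).
if __name__ == "__main__":
    attn = SaliencyBoostedAttention(feat_dim=512, hidden_dim=256, attn_dim=128)
    feats = torch.randn(2, 49, 512)
    h = torch.randn(2, 256)
    sal = torch.rand(2, 49)
    ctx, weights = attn(feats, h, sal)
    print(ctx.shape, weights.shape)  # torch.Size([2, 512]) torch.Size([2, 49])
```

Adding the log of the prior to the attention logits is one simple way to make salient regions more likely to be attended while still letting the learned attention override the prior; other fusion rules (e.g., multiplying the normalized weights) would serve the same illustrative purpose.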


