
Fine-Grained Image Captioning with Global-Local Discriminative Objective

by Jie Wu et al.

Significant progress has been made in recent years in image captioning, an active topic in the fields of vision and language. However, existing methods tend to yield overly general captions that consist of some of the most frequent words/phrases, resulting in inaccurate and indistinguishable descriptions (see Figure 1). This is primarily due to (i) the conservative characteristic of traditional training objectives, which drives the model to generate correct but hardly discriminative captions for similar images, and (ii) the uneven word distribution of the ground-truth captions, which encourages generating highly frequent words/phrases while suppressing the less frequent but more concrete ones. In this work, we propose a novel global-local discriminative objective, formulated on top of a reference model, to facilitate generating fine-grained descriptive captions. Specifically, from a global perspective, we design a novel global discriminative constraint that pulls the generated sentence to better discern the corresponding image from all others in the entire dataset. From a local perspective, a local discriminative constraint is proposed to increase the attention paid to less frequent but more concrete words/phrases, thus facilitating the generation of captions that better describe the visual details of the given images. We evaluate the proposed method on the widely used MS-COCO dataset, where it outperforms the baseline methods by a sizable margin and achieves competitive performance against existing leading approaches. We also conduct self-retrieval experiments to demonstrate the discriminability of the proposed method.
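The two constraints described above can be read as (i) a batch-level contrastive retrieval loss, in which each generated caption must identify its own image among all others, and (ii) a rarity-based reweighting that boosts infrequent, concrete words. The sketch below illustrates both ideas in a minimal form; the similarity matrix `sim`, the temperature, and the inverse-log weighting scheme are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def global_discriminative_loss(sim, temperature=1.0):
    """Contrastive-style retrieval loss (illustrative sketch).

    sim: [N, N] caption-image similarity matrix, where row i holds the
    similarities of generated caption i to every image in the batch and
    sim[i, i] is the similarity to its own (matching) image. The loss is
    the negative log-probability of retrieving the matching image under
    a softmax over all images.
    """
    logits = sim / temperature
    # Numerically stable softmax over images for each caption.
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)
    return float(-np.log(np.diag(probs) + 1e-12).mean())

def local_rarity_weights(word_counts, words):
    """Up-weight less frequent (more concrete) words.

    Illustrative weighting: weight ~ 1 / log(1 + corpus count), so a rare
    word like "dalmatian" receives a larger weight than a frequent word
    like "dog". Unseen words default to count 1 (maximum weight).
    """
    return np.array([1.0 / np.log(1.0 + word_counts.get(w, 1))
                     for w in words])
```

Under this reading, a caption whose similarity matrix is strongly diagonal-dominant (each caption retrieves its own image) incurs a small global loss, whereas an indiscriminate caption that matches every image equally well incurs a loss of log N.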




Show, Tell and Discriminate: Image Captioning by Self-retrieval with Partially Labeled Data

The aim of image captioning is to generate similar captions by machine a...

FOIL it! Find One mismatch between Image and Language caption

In this paper, we aim to understand whether current language and vision ...

Contrastive Learning for Image Captioning

Image captioning, a popular topic in computer vision, has achieved subst...

Context-aware Captions from Context-agnostic Supervision

We introduce an inference technique to produce discriminative context-aw...

A Deep Decoder Structure Based on Word Embedding Regression for an Encoder-Decoder Based Model for Image Captioning

Generating textual descriptions for images has been an attractive proble...

Switching to Discriminative Image Captioning by Relieving a Bottleneck of Reinforcement Learning

Discriminativeness is a desirable feature of image captions: captions sh...

Goal-driven text descriptions for images

A big part of achieving Artificial General Intelligence(AGI) is to build...