Enhanced Modality Transition for Image Captioning

02/23/2021
by Ziwei Wang, et al.

Image captioning is a cross-modality knowledge-discovery task that aims to automatically describe an image with an informative and coherent sentence. To generate captions, previous encoder-decoder frameworks forward the visual vectors directly to the recurrent language model, forcing the recurrent units to generate a sentence from visual features alone. Although the resulting sentences are generally readable, they still lack detail and highlights, because the substantial gap between the image and text modalities is not sufficiently addressed. In this work, we explicitly build a Modality Transition Module (MTM) that transfers visual features into semantic representations before forwarding them to the language model. During training, the modality transition network is optimised by the proposed modality loss, which compares the generated preliminary textual encodings with target sentence vectors from a pre-trained text auto-encoder. In this way, the visual vectors are transited into the textual subspace for more contextual and precise language generation. The novel MTM can be incorporated into most existing methods. Extensive experiments on the MS-COCO dataset demonstrate the effectiveness of the proposed framework, improving performance by 3.4% compared to the state-of-the-art.
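The abstract does not specify the module's internals, but the core idea, projecting a visual feature vector into the textual subspace and supervising that projection against a sentence vector from a pre-trained text auto-encoder, can be sketched in a few lines. The following is a minimal NumPy illustration under assumed dimensions and a single-layer projection; all names, sizes, and the random stand-in vectors are hypothetical, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: CNN visual feature size and textual embedding size.
D_VIS, D_TXT = 2048, 512

# Modality Transition Module sketched as one learned projection
# (the paper's module may be deeper; this is illustrative only).
W = rng.normal(scale=0.01, size=(D_VIS, D_TXT))
b = np.zeros(D_TXT)

def modality_transition(v):
    """Map a visual feature vector into the textual subspace."""
    return np.tanh(v @ W + b)

def modality_loss(t_pred, t_target):
    """Compare the transited visual encoding with the target sentence
    vector (here: mean-squared error as a simple stand-in)."""
    return float(np.mean((t_pred - t_target) ** 2))

v = rng.normal(size=D_VIS)         # stand-in for a CNN image feature
t_target = rng.normal(size=D_TXT)  # stand-in for a pre-trained
                                   # text auto-encoder sentence vector
loss = modality_loss(modality_transition(v), t_target)
print(loss)
```

In training, this loss would be minimised jointly with the usual captioning objective, so the language model receives text-like inputs rather than raw visual vectors.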

Related research

- 08/30/2019, Reflective Decoding Network for Image Captioning: State-of-the-art image captioning methods mostly focus on improving visu...
- 06/21/2021, TCIC: Theme Concepts Learning Cross Language and Vision for Image Captioning: Existing research for image captioning usually represents an image using...
- 10/28/2020, Fusion Models for Improved Visual Captioning: Visual captioning aims to generate textual descriptions given images. Tr...
- 12/01/2019, Integrate Image Representation to Text Model on Sentence Level: a Semi-supervised Framework: Integrating visual features has been proved useful in language represent...
- 03/06/2020, Show, Edit and Tell: A Framework for Editing Image Captions: Most image captioning frameworks generate captions directly from images,...
- 05/28/2021, New Image Captioning Encoder via Semantic Visual Feature Matching for Heavy Rain Images: Image captioning generates text that describes scenes from input images....
- 05/15/2019, Aligning Visual Regions and Textual Concepts: Learning Fine-Grained Image Representations for Image Captioning: In image-grounded text generation, fine-grained representations of the i...
