Improving Image Captioning by Leveraging Intra- and Inter-layer Global Representation in Transformer Network

12/13/2020 · by Jiayi Ji, et al.

Transformer-based architectures have shown great success in image captioning, where object regions are encoded and then attended to form vectorial representations that guide the caption decoding. However, such vectorial representations contain only region-level information and ignore the global information that reflects the entire image, which limits the model's capacity for complex multi-modal reasoning in image captioning. In this paper, we introduce a Global Enhanced Transformer (termed GET) that extracts a more comprehensive global representation and then adaptively uses it to guide the decoder to generate high-quality captions. In GET, a Global Enhanced Encoder is designed to embed the global feature, and a Global Adaptive Decoder is designed to guide the caption generation. The former models intra- and inter-layer global representations by means of the proposed Global Enhanced Attention and a layer-wise fusion module. The latter contains a Global Adaptive Controller that adaptively fuses the global information into the decoder to guide the caption generation. Extensive experiments on the MS COCO dataset demonstrate the superiority of our GET over many state-of-the-art methods.
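The abstract does not spell out the exact formulation, but the two ideas it names, fusing per-layer global vectors and gating that fused vector into the decoder, can be sketched in PyTorch. The module names GlobalAdaptiveGate and LayerwiseGlobalFusion, the tensor shapes, and the sigmoid-gate fusion below are our own assumptions for illustration, not the authors' implementation.

# Minimal PyTorch sketch of layer-wise global fusion plus a gated
# ("adaptive") injection of the global vector into decoder states.
# Module names, shapes, and the sigmoid gate are illustrative assumptions,
# not the authors' exact GET design.
import torch
import torch.nn as nn


class LayerwiseGlobalFusion(nn.Module):
    """Fuses the per-layer global vectors produced by the encoder stack."""

    def __init__(self, d_model: int):
        super().__init__()
        self.scorer = nn.Linear(d_model, 1)

    def forward(self, layer_globals: torch.Tensor) -> torch.Tensor:
        # layer_globals: (batch, num_layers, d_model), one global vector
        # (e.g. an attention-pooled summary of region features) per layer.
        weights = self.scorer(layer_globals).softmax(dim=1)   # (batch, L, 1)
        return (weights * layer_globals).sum(dim=1)           # (batch, d_model)


class GlobalAdaptiveGate(nn.Module):
    """Adaptively mixes the fused global vector into each decoder state."""

    def __init__(self, d_model: int):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, dec_state: torch.Tensor, global_feat: torch.Tensor) -> torch.Tensor:
        # dec_state:   (batch, seq_len, d_model) decoder hidden states
        # global_feat: (batch, d_model) inter-layer global representation
        g = global_feat.unsqueeze(1).expand_as(dec_state)
        # The sigmoid gate decides, per position and channel, how much
        # image-level context to add to the region-level signal.
        beta = torch.sigmoid(self.gate(torch.cat([dec_state, g], dim=-1)))
        return dec_state + beta * g


if __name__ == "__main__":
    B, L, T, D = 2, 6, 10, 512
    fuse, gate = LayerwiseGlobalFusion(D), GlobalAdaptiveGate(D)
    g = fuse(torch.randn(B, L, D))        # inter-layer global representation
    out = gate(torch.randn(B, T, D), g)   # globally guided decoder states
    print(out.shape)                      # torch.Size([2, 10, 512])

The gate lets the decoder decide at each time step how strongly the image-level context should influence word generation, which mirrors the "adaptive guidance" role described for the Global Adaptive Controller above.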


Related research

- 02/13/2023 · Towards Local Visual Modeling for Image Captioning: "In this paper, we study the local visual modeling with grid features for..."
- 01/26/2021 · CPTR: Full Transformer Network for Image Captioning: "In this paper, we consider the image captioning task from a new sequence..."
- 04/03/2018 · Learning to Guide Decoding for Image Captioning: "Recently, much advance has been made in image captioning, and an encoder..."
- 05/20/2019 · Multimodal Transformer with Multi-View Visual Representation for Image Captioning: "Image captioning aims to automatically generate a natural language descr..."
- 01/05/2023 · Adaptively Clustering Neighbor Elements for Image Captioning: "We design a novel global-local Transformer named Ada-ClustFormer (ACF) t..."
- 03/31/2020 · X-Linear Attention Networks for Image Captioning: "Recent progress on fine-grained visual recognition and visual question a..."
- 08/27/2019 · Controllable Video Captioning with POS Sequence Guidance Based on Gated Fusion Network: "In this paper, we propose to guide the video caption generation with Par..."
