Geometry Attention Transformer with Position-aware LSTMs for Image Captioning

10/01/2021
by   Chi Wang, et al.

In recent years, transformer architectures have been widely applied to image captioning with impressive performance. The geometry and position relations among visual objects are often regarded as crucial information for good captioning results. To further improve transformer-based image captioning, this paper proposes an improved Geometry Attention Transformer (GAT) model. To better exploit geometric information, two novel geometry-aware architectures are designed for the encoder and decoder of GAT, respectively: 1) a geometry gate-controlled self-attention refiner, which explicitly incorporates relative spatial information into image region representations during encoding, and 2) a group of position-LSTMs, which precisely inform the decoder of relative word positions while generating caption text. Experimental comparisons on the MS COCO and Flickr30K datasets show that GAT is efficient and often outperforms current state-of-the-art image captioning models.
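To make the first module concrete, here is a minimal sketch of geometry-gated self-attention over image regions. This is not the paper's implementation: the feature shapes, the single-head setup, and the gating function (a sigmoid over pairwise geometry features, folded into the attention logits) are all illustrative assumptions; the general idea is that content-based attention weights are modulated by a gate computed from relative spatial relations between region pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def geometry_gated_self_attention(X, G, Wq, Wk, Wv, wg):
    """Single-head self-attention with a geometry gate (illustrative sketch).

    X  : (N, d)    region features
    G  : (N, N, g) pairwise geometry features (e.g. relative box offsets; assumed)
    wg : (g,)      gate projection (hypothetical parameter)
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    content = (Q @ K.T) / np.sqrt(d)            # content-based attention logits
    gate = 1.0 / (1.0 + np.exp(-(G @ wg)))      # (N, N) geometry gate in (0, 1)
    logits = content + np.log(gate + 1e-9)      # gate scales attention multiplicatively
    A = softmax(logits, axis=-1)                # rows sum to 1
    return A @ V, A

# Toy example: 5 regions, 8-dim features, 4-dim pairwise geometry.
N, d, g = 5, 8, 4
X = rng.standard_normal((N, d))
G = rng.standard_normal((N, N, g))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
wg = rng.standard_normal(g)
refined, A = geometry_gated_self_attention(X, G, Wq, Wk, Wv, wg)
```

Adding the log of the gate to the logits makes the gate act as a multiplicative factor on the unnormalized attention weights, so region pairs with unfavorable spatial relations are down-weighted before normalization.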


