GRIT: Faster and Better Image captioning Transformer Using Dual Visual Features

07/20/2022
by Van-Quang Nguyen, et al.

Current state-of-the-art methods for image captioning employ region-based features, as they provide the object-level information essential for describing the content of images; such features are usually extracted by an object detector such as Faster R-CNN. However, region-based features have several drawbacks: a lack of contextual information, the risk of inaccurate detection, and high computational cost. The first two could be resolved by additionally using grid-based features, but how best to extract and fuse these two types of features remains largely unexplored. This paper proposes a Transformer-only neural architecture, dubbed GRIT (Grid- and Region-based Image captioning Transformer), that effectively utilizes the two visual features to generate better captions. GRIT replaces the CNN-based detector employed in previous methods with a DETR-based one, making it computationally faster. Moreover, its monolithic design, consisting only of Transformers, enables end-to-end training of the model. This design and the integration of the dual visual features bring about a significant performance improvement. Experimental results on several image captioning benchmarks show that GRIT outperforms previous methods in both inference accuracy and speed.
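
To make the dual-feature design concrete, the sketch below shows, in PyTorch-style Python, one plausible way a caption-decoder layer could cross-attend to both grid features and region features. This is a hypothetical illustration, not GRIT's released code: the module names, dimensions, and the particular two-cross-attention layout are assumptions for exposition.

```python
import torch
import torch.nn as nn

class DualCrossAttentionLayer(nn.Module):
    """Hypothetical decoder layer attending to grid AND region features."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.grid_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.region_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.ReLU(), nn.Linear(4 * d_model, d_model)
        )
        self.norms = nn.ModuleList(nn.LayerNorm(d_model) for _ in range(4))

    def forward(self, words, grid_feats, region_feats):
        # Self-attention over the partial caption (causal mask omitted for brevity).
        x = self.norms[0](words + self.self_attn(words, words, words)[0])
        # Cross-attend to grid features: global scene context from the backbone.
        x = self.norms[1](x + self.grid_attn(x, grid_feats, grid_feats)[0])
        # Cross-attend to region features: object-level queries from a DETR-style head.
        x = self.norms[2](x + self.region_attn(x, region_feats, region_feats)[0])
        return self.norms[3](x + self.ffn(x))

# Example shapes: 20 caption tokens, 49 grid cells (7x7), 100 object queries.
layer = DualCrossAttentionLayer()
out = layer(torch.randn(2, 20, 512),   # word embeddings
            torch.randn(2, 49, 512),   # grid features
            torch.randn(2, 100, 512))  # region features
print(out.shape)  # torch.Size([2, 20, 512])
```

In GRIT itself, the region features come from a DETR-based detection head that shares a Transformer backbone with the grid features, which is what removes the slow CNN-based detector and makes end-to-end training of the whole model possible.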

Related research

05/20/2019
Multimodal Transformer with Multi-View Visual Representation for Image Captioning
Image captioning aims to automatically generate a natural language descr...

12/09/2021
Injecting Semantic Concepts into End-to-End Image Captioning
Tremendous progress has been made in recent years in developing better i...

05/18/2021
Dependent Multi-Task Learning with Causal Intervention for Image Captioning
Recent work for image captioning mainly followed an extract-then-generat...

11/02/2020
Dual Attention on Pyramid Feature Maps for Image Captioning
Generating natural sentences from images is a fundamental learning task ...

03/29/2022
End-to-End Transformer Based Model for Image Captioning
CNN-LSTM based architectures have played an important role in image capt...

02/27/2020
Visual Commonsense R-CNN
We present a novel unsupervised feature representation learning method, ...

01/16/2021
Dual-Level Collaborative Transformer for Image Captioning
Descriptive region features extracted by object detection networks have ...
