Aligning Linguistic Words and Visual Semantic Units for Image Captioning

08/06/2019
by Longteng Guo, et al.
HUAWEI Technologies Co., Ltd. · Nanjing University · ia.ac.cn

Image captioning attempts to generate a sentence composed of several linguistic words that describe objects, attributes, and interactions in an image, which we denote as visual semantic units in this paper. Based on this view, we propose to explicitly model object interactions in semantics and geometry with Graph Convolutional Networks (GCNs), and to fully exploit the alignment between linguistic words and visual semantic units for image captioning. In particular, we construct a semantic graph and a geometry graph, where each node corresponds to a visual semantic unit, i.e., an object, an attribute, or a semantic (geometrical) interaction between two objects. The semantic (geometrical) context-aware embeddings of each unit are then obtained through the corresponding GCN learning procedure. At each time step, a context-gated attention module takes the embeddings of the visual semantic units as input and hierarchically aligns the current word with these units: it first decides which type of visual semantic unit (object, attribute, or interaction) the current word is about, and then finds the most correlated visual semantic units of that type. Extensive experiments are conducted on the challenging MS-COCO image captioning dataset, and superior results are reported in comparison with state-of-the-art approaches.
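The abstract's hierarchical alignment can be pictured with a small PyTorch sketch. This is not the authors' code (see the VSUA-Captioning repository below for that); the class name `ContextGatedAttention`, all layer sizes, and the choice of a single attention scorer shared across the three unit types are illustrative assumptions. It shows the two-level idea: attend within each unit type (objects, attributes, interactions), then gate across types with the decoder state.

```python
# Minimal sketch, assuming PyTorch and GCN-refined unit embeddings of equal size.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextGatedAttention(nn.Module):
    def __init__(self, hid_dim=512, unit_dim=512, att_dim=256):
        super().__init__()
        # Soft-attention scorer applied within each unit type (shared here for brevity).
        self.att_h = nn.Linear(hid_dim, att_dim)
        self.att_v = nn.Linear(unit_dim, att_dim)
        self.att_out = nn.Linear(att_dim, 1)
        # Gate deciding which type of unit (object / attribute / interaction)
        # the current word is about.
        self.type_gate = nn.Linear(hid_dim, 3)

    def attend(self, h, units):
        # h: (B, hid_dim); units: (B, N, unit_dim) -> attended vector (B, unit_dim)
        scores = self.att_out(torch.tanh(self.att_h(h).unsqueeze(1) + self.att_v(units)))
        weights = F.softmax(scores, dim=1)              # (B, N, 1)
        return (weights * units).sum(dim=1)             # (B, unit_dim)

    def forward(self, h, obj_units, attr_units, rel_units):
        # Intra-type attention: most correlated units of each type.
        ctx = torch.stack([self.attend(h, obj_units),
                           self.attend(h, attr_units),
                           self.attend(h, rel_units)], dim=1)     # (B, 3, unit_dim)
        # Inter-type gate: weight the three type-specific contexts.
        gate = F.softmax(self.type_gate(h), dim=-1).unsqueeze(-1)  # (B, 3, 1)
        return (gate * ctx).sum(dim=1)                  # (B, unit_dim)

if __name__ == "__main__":
    att = ContextGatedAttention()
    h = torch.randn(2, 512)            # decoder hidden state at one time step
    obj = torch.randn(2, 10, 512)      # object node embeddings
    attr = torch.randn(2, 10, 512)     # attribute node embeddings
    rel = torch.randn(2, 15, 512)      # interaction (relation) node embeddings
    print(att(h, obj, attr, rel).shape)  # torch.Size([2, 512])
```

In the paper the unit embeddings fed to such a module come from the semantic and geometry GCNs; here they are random tensors purely to make the sketch runnable.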

Related Research

09/19/2018 · Exploring Visual Relationship for Image Captioning
It is always well believed that modeling relationships between objects w...

09/14/2020 · GINet: Graph Interaction Network for Scene Parsing
Recently, context reasoning using image regions beyond local convolution...

06/14/2022 · Comprehending and Ordering Semantics for Image Captioning
Comprehending the rich semantics in an image and ordering them in lingui...

05/06/2021 · Exploring Explicit and Implicit Visual Relationships for Image Captioning
Image captioning is one of the most challenging tasks in AI, which aims ...

10/04/2022 · Learning to Collocate Visual-Linguistic Neural Modules for Image Captioning
Humans tend to decompose a sentence into different parts like sth do sth...

09/29/2021 · Geometry-Entangled Visual Semantic Transformer for Image Captioning
Recent advancements of image captioning have featured Visual-Semantic Fu...

10/12/2018 · Quantifying the amount of visual information used by neural caption generators
This paper addresses the sensitivity of neural image caption generators ...

Code Repositories

VSUA-Captioning

Code for "Aligning Linguistic Words and Visual Semantic Units for Image Captioning", ACM MM 2019

