Learning to Collocate Visual-Linguistic Neural Modules for Image Captioning

10/04/2022
by   Xu Yang, et al.

Humans tend to decompose a sentence into different parts, like "sth do sth at someplace", and then fill each part with certain content. Inspired by this, we follow the principle of modular design to propose a novel image captioner: learning to Collocate Visual-Linguistic Neural Modules (CVLNM). Unlike the widely used neural module networks in VQA, where the language (i.e., the question) is fully observable, the task of collocating visual-linguistic modules is more challenging: the language is only partially observable, so the modules must be dynamically collocated during the process of image captioning. To sum up, we make the following technical contributions to design and train our CVLNM: 1) a distinguishable module design, with four modules in the encoder, one linguistic module for function words and three visual modules for different content words (i.e., nouns, adjectives, and verbs), plus another linguistic module in the decoder for commonsense reasoning; 2) a self-attention based module controller for robustifying the visual reasoning; and 3) a part-of-speech based syntax loss imposed on the module controller to further regularize the training of our CVLNM. Extensive experiments on the MS-COCO dataset show that our CVLNM is more effective, e.g., achieving a new state-of-the-art 129.5 CIDEr-D, and more robust, e.g., being less likely to overfit to dataset bias and suffering less when fewer training samples are available. Code is available at <https://github.com/GCYZSL/CVLMN>
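To make the collocation mechanism concrete, below is a minimal PyTorch sketch of the two ideas named in the abstract: a self-attention based controller that predicts soft weights over the four modules from the partially generated caption, and a part-of-speech (POS) syntax loss that supervises those weights. All class names, tensor shapes, and the POS label scheme here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a soft module controller that
# collocates the outputs of four hypothetical modules, plus a POS-based
# syntax loss on the controller weights. Names and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModuleController(nn.Module):
    """Predicts soft collocation weights over K modules from the partially
    generated caption, via self-attention over the decoder history."""
    def __init__(self, d_model: int, num_modules: int = 4, num_heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.to_weights = nn.Linear(d_model, num_modules)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (B, T, d) hidden states of the words generated so far.
        attended, _ = self.self_attn(history, history, history)
        logits = self.to_weights(attended[:, -1])     # state at the latest step
        return F.softmax(logits, dim=-1)              # (B, K) soft weights

def collocate(weights, module_outputs):
    """Fuse the K module outputs with the predicted weights.
    weights: (B, K); module_outputs: list of K tensors, each (B, d)."""
    stacked = torch.stack(module_outputs, dim=1)      # (B, K, d)
    return (weights.unsqueeze(-1) * stacked).sum(1)   # (B, d)

def syntax_loss(weights, pos_labels):
    """Supervise the controller with the POS category of the target word,
    e.g. 0=function word, 1=noun, 2=adjective, 3=verb (assumed scheme)."""
    return F.nll_loss(torch.log(weights + 1e-12), pos_labels)
```

In this sketch, the softmax weights play the role of the dynamic collocation: at each decoding step the controller re-weights the function-word, noun, adjective, and verb modules based on the caption generated so far, and the syntax loss pushes the weight distribution toward the POS category of the ground-truth word.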


