Neural Twins Talk

09/26/2020
by Zanyar Zohourianshahzadi et al.

Inspired by how the human brain recruits additional neural pathways when focusing more intently on a subject, we introduce a novel twin cascaded attention model that outperforms a state-of-the-art image captioning model originally implemented with a single attention channel for the visual grounding task. Visual grounding ensures that words in the caption sentence are grounded in particular regions of the input image. Once a deep learning model is trained on the visual grounding task, it applies the learned patterns of grounding and of object order in caption sentences when generating new captions. We report results of our experiments on three image captioning tasks on the COCO dataset, using standard image captioning metrics to show the improvements our model achieves over the previous image captioning model. The results suggest that employing more parallel attention pathways in a deep neural network leads to higher performance. Our implementation of NTT is publicly available at: https://github.com/zanyarz/NeuralTwinsTalk.
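To make the idea of parallel attention pathways concrete, here is a minimal numpy sketch of two soft-attention channels over the same image region features whose attended outputs are fused before decoding. All names (`soft_attention`, `twin_attention`, the averaging fusion, and the random weight matrices) are illustrative assumptions, not the paper's actual architecture, which cascades its twin attention channels within a full captioning model.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def soft_attention(query, regions, W):
    # One attention pathway: score each image region against the
    # decoder query, then return the attention-weighted region feature.
    scores = regions @ (W @ query)        # one score per region
    weights = softmax(scores)             # weights sum to 1
    return weights @ regions

def twin_attention(query, regions, W1, W2):
    # Two parallel attention pathways over the same region features;
    # their context vectors are fused (here, simply averaged).
    c1 = soft_attention(query, regions, W1)
    c2 = soft_attention(query, regions, W2)
    return (c1 + c2) / 2

rng = np.random.default_rng(0)
d = 8                                     # feature dimension (illustrative)
regions = rng.normal(size=(5, d))         # 5 image region features
query = rng.normal(size=d)                # decoder hidden state
W1 = rng.normal(size=(d, d))              # pathway-1 projection
W2 = rng.normal(size=(d, d))              # pathway-2 projection
context = twin_attention(query, regions, W1, W2)
print(context.shape)                      # (8,)
```

Each pathway can learn a different grounding pattern over the regions; fusing their contexts is what gives the decoder "more pathways" to attend through.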

Related research

- 12/02/2021, Consensus Graph Representation Learning for Better Grounded Image Captioning: "The contemporary visual captioning models frequently hallucinate objects..."
- 03/27/2018, Neural Baby Talk: "We introduce a novel framework for image captioning that can produce nat..."
- 01/19/2019, Binary Image Selection (BISON): Interpretable Evaluation of Visual Grounding: "Providing systems the ability to relate linguistic and visual content is..."
- 04/01/2020, More Grounded Image Captioning by Distilling Image-Text Matching Model: "Visual attention not only improves the performance of image captioners, ..."
- 11/26/2018, Show, Control and Tell: A Framework for Generating Controllable and Grounded Captions: "Current captioning approaches can describe images using black-box archit..."
- 07/10/2020, Image Captioning with Compositional Neural Module Networks: "In image captioning where fluency is an important factor in evaluation, ..."
- 05/12/2021, Connecting What to Say With Where to Look by Modeling Human Attention Traces: "We introduce a unified framework to jointly model images, text, and huma..."
