Where to put the Image in an Image Caption Generator

03/27/2017
by Marc Tanti, et al.

When a neural language model is used for caption generation, the image information can be fed to the neural network either by incorporating it directly in the recurrent neural network -- conditioning the language model by injecting image features -- or in a layer following the recurrent neural network -- conditioning the language model by merging the image features. While merging implies that the visual features are bound at the end of the caption generation process, injecting can bind the visual features at a variety of stages. In this paper we show empirically that late binding is superior to early binding in terms of different evaluation metrics. This suggests that the different modalities (visual and linguistic) for caption generation should not be jointly encoded by the RNN; rather, the multimodal integration should be delayed to a subsequent stage. Furthermore, this suggests that recurrent neural networks should not be viewed as actually generating text, but only as encoding it for prediction in a subsequent layer.
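The inject/merge distinction in the abstract can be made concrete with a minimal sketch. This is not the paper's exact models: it uses a simple Elman RNN cell with untrained random weights and toy dimensions, purely to show where the image vector enters each architecture. In "inject", the image features are concatenated with the word embedding at every RNN step; in "merge", the RNN encodes the word sequence alone and the image features are combined with the final hidden state only at the output layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_step(h, x, W_h, W_x):
    # Simple Elman RNN cell: h' = tanh(W_h @ h + W_x @ x)
    return np.tanh(W_h @ h + W_x @ x)

# Toy dimensions (illustrative only)
d_img, d_word, d_hid, vocab = 4, 3, 5, 10
img = rng.normal(size=d_img)                       # CNN image features
words = [rng.normal(size=d_word) for _ in range(6)]  # word embeddings

# --- Inject: image features enter the RNN at every time step ---
W_h = rng.normal(size=(d_hid, d_hid))
W_x = rng.normal(size=(d_hid, d_word + d_img))     # input = word + image
h = np.zeros(d_hid)
for w in words:
    h = rnn_step(h, np.concatenate([w, img]), W_h, W_x)
logits_inject = rng.normal(size=(vocab, d_hid)) @ h

# --- Merge: the RNN encodes words only; the image joins afterwards ---
W_h2 = rng.normal(size=(d_hid, d_hid))
W_x2 = rng.normal(size=(d_hid, d_word))            # input = word only
h2 = np.zeros(d_hid)
for w in words:
    h2 = rnn_step(h2, w, W_h2, W_x2)
# Multimodal integration is delayed to the output layer
logits_merge = rng.normal(size=(vocab, d_hid + d_img)) @ np.concatenate([h2, img])
```

Note that in the merge architecture the RNN never sees the image at all, which is what motivates the paper's view of the RNN as a linguistic encoder rather than a generator.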


Related research

- 08/07/2017: What is the Role of Recurrent Neural Networks (RNNs) in an Image Caption Generator?
- 03/07/2019: Neural Language Modeling with Visual Features
- 01/01/2019: Transfer learning from language models to image caption generators: Better models may not transfer better
- 11/09/2019: On Architectures for Including Visual Information in Neural Language Models for Image Description
- 12/20/2014: Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN)
- 10/11/2016: From phonemes to images: levels of representation in a recurrent neural model of visually-grounded language learning
- 06/12/2017: Encoding of phonology in a recurrent neural model of grounded speech
