On Architectures for Including Visual Information in Neural Language Models for Image Description

11/09/2019
by   Marc Tanti, et al.

A neural language model can be conditioned to generate descriptions of images by providing it with visual information in addition to the sentence prefix. This visual information can enter the language model at different points, resulting in different neural architectures. We identify four main architectures, which we call init-inject, pre-inject, par-inject, and merge. We analyse these four architectures and conclude that the best-performing one is init-inject, in which the visual information is injected into the initial state of the recurrent neural network. We confirm this using both automatic evaluation measures and human annotation. We then analyse how much influence the image has on each architecture by measuring how different a model's output probabilities become when a partial sentence is combined with a completely different image from the one it is meant to be paired with. We find that init-inject tends to lose the image's influence quickly as more words are generated. A different architecture, called merge, in which the visual information is merged with the recurrent neural network's hidden state vector prior to output, loses visual influence much more slowly, suggesting that it would work better for generating longer sentences. We also observe that the merge architecture's recurrent neural network can be pre-trained as part of a text-only language model (transfer learning) rather than initialised randomly as usual. This results in even better performance than the other architectures, provided that the source language model is not too good at language modelling; otherwise it overspecialises and becomes less effective at image description generation. Our work opens up new avenues of research in neural architectures, explainable AI, and transfer learning.
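
To make the architectural distinction concrete, below is a minimal sketch, in PyTorch, of the two architectures the abstract contrasts. It is not the authors' implementation: the class names, layer dimensions, and the assumption of a single fixed CNN feature vector per image are all illustrative.

```python
# Minimal sketch (not the authors' code) contrasting init-inject and merge.
# Assumes a pretrained CNN has already produced a fixed image feature vector.
import torch
import torch.nn as nn

class InitInjectCaptioner(nn.Module):
    """Image features set the LSTM's initial state (init-inject)."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=256, image_dim=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.img_to_h = nn.Linear(image_dim, hidden_dim)   # image -> initial h
        self.img_to_c = nn.Linear(image_dim, hidden_dim)   # image -> initial c
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_feats, word_ids):
        h0 = torch.tanh(self.img_to_h(image_feats)).unsqueeze(0)
        c0 = torch.tanh(self.img_to_c(image_feats)).unsqueeze(0)
        hidden, _ = self.lstm(self.embed(word_ids), (h0, c0))
        return self.out(hidden)                            # logits per time step

class MergeCaptioner(nn.Module):
    """RNN sees only words; image is merged with the hidden state before output."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=256, image_dim=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.img_proj = nn.Linear(image_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim * 2, vocab_size)

    def forward(self, image_feats, word_ids):
        hidden, _ = self.lstm(self.embed(word_ids))        # purely linguistic state
        img = torch.tanh(self.img_proj(image_feats))
        img = img.unsqueeze(1).expand(-1, hidden.size(1), -1)
        return self.out(torch.cat([hidden, img], dim=-1))  # merge, then predict
```

The crucial difference is that in merge the recurrent layer never sees the image, so its hidden state remains a purely linguistic representation of the prefix.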

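The image-influence measurement described in the abstract can be sketched in the same way. The probe below feeds the same sentence prefix through a model twice, once with the matching image and once with a mismatched (foil) image, and measures how far apart the two next-word distributions are at each position. The use of Jensen-Shannon divergence here is an illustrative assumption; the paper's exact distance measure may differ.

```python
# Hedged sketch of an image-influence probe: higher divergence at a given
# prefix position means the image is influencing the next-word distribution
# more strongly at that point in the sentence.
import torch
import torch.nn.functional as F

@torch.no_grad()
def image_influence(model, prefix_ids, true_img, foil_img):
    """Return one divergence score per prefix position."""
    logits_true = model(true_img, prefix_ids)   # (batch, time, vocab)
    logits_foil = model(foil_img, prefix_ids)
    p = F.softmax(logits_true, dim=-1)
    q = F.softmax(logits_foil, dim=-1)
    m = 0.5 * (p + q)
    # Jensen-Shannon divergence: 0.5*KL(p||m) + 0.5*KL(q||m)
    js = 0.5 * (p * (p / m).log()).sum(-1) + 0.5 * (q * (q / m).log()).sum(-1)
    return js  # plot against position to see how visual influence decays
```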
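Finally, the transfer-learning observation about merge can be illustrated in the same vein. Because merge keeps the recurrent layer purely linguistic, the weights of an ordinary text-only language model can be transplanted into it directly. The sketch below is a plausible reading of the abstract, not the authors' code; function and class names are hypothetical.

```python
# Hedged sketch of the transfer-learning variant: pre-train the embedding and
# LSTM as a plain text-only language model, then copy those weights into a
# MergeCaptioner (defined above) instead of using random initialisation.
import torch.nn as nn

class TextOnlyLM(nn.Module):
    """Standard next-word language model with no visual input."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word_ids):
        hidden, _ = self.lstm(self.embed(word_ids))
        return self.out(hidden)

def transfer_into_merge(lm, captioner):
    """Copy the text-only LM's linguistic weights into a merge captioner."""
    captioner.embed.load_state_dict(lm.embed.state_dict())
    captioner.lstm.load_state_dict(lm.lstm.state_dict())
    # The output layer is NOT transferred: in merge it also consumes the image
    # projection, so its shape differs and it is trained from scratch.
    return captioner
```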
