Comprehension-guided referring expressions

01/12/2017 · by Ruotian Luo, et al. · Toyota Technological Institute at Chicago

We consider generation and comprehension of natural language referring expressions for objects in an image. Unlike generic "image captioning", which lacks natural standard evaluation criteria, the quality of a referring expression may be measured by the receiver's ability to correctly infer which object is being described. Following this intuition, we propose two approaches that utilize models trained for the comprehension task to generate better expressions. First, we use a comprehension module trained on human-generated expressions as a "critic" of the referring expression generator. The comprehension module serves as a differentiable proxy of human evaluation, providing a training signal to the generation module. Second, we use the comprehension module in a generate-and-rerank pipeline, which chooses from candidate expressions generated by a model according to their performance on the comprehension task. We show that both approaches lead to improved referring expression generation on multiple benchmark datasets.


1 Introduction

Image captioning, defined broadly as automatic generation of text describing images, has seen much recent attention. Deep learning, and in particular recurrent neural networks (RNNs), has led to a significant improvement in the state of the art. However, the metrics currently used to evaluate image captioning are mostly borrowed from machine translation; they miss the naturally multi-modal distribution of appropriate captions for many scenes.

Referring expressions are a special case of image captions. Such expressions describe an object or region in the image, with the goal of identifying it uniquely to a listener. Thus, in contrast to generic captioning, referring expression generation has a well-defined evaluation criterion: an expression should describe an object/region so that a human can easily comprehend the description and find the location of the object being described.

In this paper, we consider two related tasks. One is the comprehension task (called natural language object retrieval in [1]), namely localizing an object in an image given a referring expression. The other is the generation task: generating a discriminative referring expression for an object in an image. Most prior works address both tasks by building a sequence generation model. Such a model can be used discriminatively for the comprehension task, by inferring the region which maximizes the expression posterior.

Figure 1: These are two examples for referring expression generation. For each image, the top two expressions are generated by baseline models proposed in [2]; the bottom two expressions are generated by our methods.

We depart from this paradigm, and draw inspiration from the generator-discriminator structure in Generative Adversarial Networks (GANs) [3, 4]. In GANs, the generator module tries to generate a signal (e.g., a natural image), and the discriminator module tries to tell real images apart from generated ones. For our task, the generator produces referring expressions. We would like these expressions to be both intelligible/fluent and unambiguous to humans. Fluency can be encouraged by using the standard cross-entropy loss with respect to human-generated expressions. For discriminativity, we adopt a comprehension model as the "discriminator", which tells whether the expression can be correctly dereferenced. Note that we can also regard the comprehension model as a "critic" of the "actions" made by the generator, where each "action" is a generated word.

Instead of the adversarial relationship between the two modules in GANs, our architecture is explicitly collaborative: the comprehension module "tells" the generator how to improve the expressions it produces. Our method is also much simpler than GANs, as it avoids the alternating optimization strategy: the comprehension model is separately trained on ground truth data and then fixed. To achieve this, we adapt the comprehension model so that it becomes differentiable with respect to the expression input. Thus we turn it into a proxy for human understanding that can provide a training signal for the generator.

Thus, our main contribution is the first (to our knowledge) attempt to integrate automatic referring expression generation with a discriminative comprehension model in a collaborative framework.

Specifically, there are two ways in which we utilize the comprehension model. The generate-and-rerank method uses comprehension on the fly, similarly to [5], which tried to produce unambiguous captions for clip-art images. The generation model generates several candidate expressions and passes them through the comprehension model. The final output expression is the one with the highest generation-comprehension score, described below.

The training-by-proxy method is closer in spirit to GANs. The generation and comprehension models are connected, and the generation model is optimized to lower the discriminative comprehension loss (in addition to the cross-entropy loss). We investigate several training strategies for this method, as well as a trick that makes the proxy model trainable by standard back-propagation. Compared to the generate-and-rerank method, the training-by-proxy method does not require additional region proposals at test time.

2 Related work

The main approach in the modern image captioning literature [6, 7, 8] is to encode an image using a convolutional neural network (CNN), and then feed this as input to an RNN, which is able to generate an arbitrary-length sequence of words.

While captioning typically aims to describe an entire image, some work takes regions into consideration, by incorporating them in an attention mechanism [9, 10], by aligning words/phrases within sentences to regions [7], or by defining "dense" captioning on a per-region basis [11]. The latter includes a dataset of captions collected without the requirement to be unambiguous, so they cannot be regarded as referring expressions.

Text-based image retrieval has been considered as a task relying on image captioning [6, 7, 8, 9]. However, it can also be regarded as a multi-modal embedding task. In previous work [12, 13, 14], such embeddings have been trained separately for visual and textual input, with the objective of minimizing a matching loss, e.g., a hinge loss on cosine distance, or of enforcing a partial order on captions and images [15]. [16] retrieved fine-grained images given a text, exploring different network structures for text embedding.

Closer to the focus of this paper, referring expressions have attracted interest after the release of the standard datasets [17, 18, 2]. In [1], a caption generation model is appropriated for the comprehension task, by evaluating the probability of a sentence given an image region and using it as the matching score. Concurrently, [19] proposed a joint model, in which comprehension and generation aspects are trained using max-margin Maximum Mutual Information (MMI) training. Both papers used whole-image, region, and location/size features. Based on the model in [2], both [20] and [21] model context regions in their frameworks.

Our method combines simple models and replaces the max-margin loss with a surrogate closer to the eventual goal, namely human comprehension; this direction is orthogonal to modeling context. It requires a comprehension model which, given a referring expression, infers the appropriate region in the image.

Among comprehension models proposed in the literature, [22] uses a multi-modal embedding and sets up comprehension as multi-class classification. Later, [23] achieved a slight improvement by replacing the concatenation layer with a compact bilinear pooling layer. The comprehension model used in this paper belongs to this multi-modal embedding category.

Figure 2: Illustration of how the generation model describes region inside the blue bounding box. bos and eos stand for beginning and end of sentence.
Figure 3: Illustration of comprehension model using softmax loss. The blue bounding box is the target region, and the red ones are incorrect regions. The CNNs share the weights.

The “speaker-listener” model in [5] attempts to produce discriminative captions that can tell images apart. The speaker is trained to generate captions, and a listener to prefer the correct image over a wrong one, given the caption. At test time, the listener reranks the captions sampled from the speaker. Our generate-and-rerank method is based on translating this idea to referring expression generation.

3 Generation and comprehension models

We start by defining the two modules used in the collaborative architecture we propose. Each of these can be trained as a standalone machine for the task it solves, given a data set with ground truth regions/referring expressions.

3.1 Expression generation model

We use a simple expression generation model introduced in [18, 2]. The generation task takes as input an image $I$ and a region $R$ within it, and outputs an expression $w$:

$w^* = \operatorname{argmax}_w p(w \mid R, I)$   (1)

To address this task, we build a model $G(w \mid R, I)$ approximating this posterior. With this $G$, we have

$\hat{w} = \operatorname{argmax}_w G(w \mid R, I)$   (2)

To train $G$, we need a set of image, region and expression tuples, $\{(I_i, R_i, w_i)\}$. We can then train this model by maximizing the likelihood

$\theta^* = \operatorname{argmax}_\theta \sum_i \log G(w_i \mid R_i, I_i; \theta)$   (3)

Specifically, the generation model is an encoder-decoder network. First, we encode the visual information from $I$ and $R$. As in [1, 18, 20], we use the following representation: a target object feature $f_R$, a global context feature $f_I$, and a location/size feature $l_R$. In our experiments, $f_R$ is the activation of the last fully connected layer fc7 of VGG-16 [24] on the cropped region $R$; $f_I$ is the fc7 activation on the whole image $I$; $l_R$ is a 5-D vector encoding the opposite corners of the bounding box of $R$, as well as the bounding box size relative to the image size. The final visual feature of the region is an affine transformation of the concatenation of the three features:

$v = W_v \, [f_R; f_I; l_R] + b_v$   (4)
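As a concrete sketch of this feature encoding, the following assumes 4096-D fc7 activations and a 1024-D output; the function name, argument layout, and the exact normalization of the 5-D location vector are illustrative rather than taken from the paper.

```python
import numpy as np

def visual_feature(fc7_region, fc7_image, box, image_size, W, b):
    """Concatenate region fc7, global fc7, and a 5-D location/size
    vector, then apply an affine transformation (a sketch; dimensions
    and normalization are assumptions)."""
    x1, y1, x2, y2 = box
    iw, ih = image_size
    # Opposite corners normalized by image size, plus relative box area.
    loc = np.array([x1 / iw, y1 / ih, x2 / iw, y2 / ih,
                    (x2 - x1) * (y2 - y1) / (iw * ih)])
    feat = np.concatenate([fc7_region, fc7_image, loc])
    return W @ feat + b
```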

To generate a sequence, we use a uni-directional LSTM decoder [25]. The inputs to the LSTM at each time step include the visual feature and the embedding of the previous word. The output of the LSTM at each time step is a distribution over the predicted next word. The whole model is trained to minimize the cross-entropy loss, which is equivalent to maximizing the likelihood:

$L_{CE} = -\sum_{t=1}^{T} \log p(w_t \mid w_{1:t-1}, v)$   (5)

$G(w \mid R, I) = \prod_{t=1}^{T} p(w_t \mid w_{1:t-1}, v)$   (6)

where $w_t$ is the $t$-th word of the ground truth expression $w$, and $T$ is the length of $w$.

In practice, instead of precisely inferring the argmax in (2), one uses beam search, greedy search or sampling to produce the output.
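As an illustration of the simplest of these decoding strategies, here is a minimal greedy-search loop; the `step_fn` interface standing in for the LSTM decoder step is hypothetical.

```python
import numpy as np

def greedy_decode(step_fn, bos, eos, max_len=20):
    """Greedy search: at each step feed the previously chosen word and
    take the argmax of the predicted next-word distribution.
    `step_fn(prev_word) -> probs` is a stand-in for one LSTM decoder
    step conditioned on the visual feature."""
    words, prev = [], bos
    for _ in range(max_len):
        probs = step_fn(prev)          # distribution over the vocabulary
        prev = int(np.argmax(probs))
        if prev == eos:
            break
        words.append(prev)
    return words
```

Beam search would instead keep the top-k partial sequences at each step; sampling would draw from `probs` rather than taking the argmax.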

Figure 2 shows the structure of our generation model.

3.2 Comprehension

The comprehension task is to select a region (bounding box) $R$ from a set of regions $\mathcal{R}$, given a query expression $w$ and the image $I$:

$R^* = \operatorname{argmax}_{R \in \mathcal{R}} p(R \mid w, I)$   (7)

We also define the comprehension model as a posterior distribution $C(R \mid w, I)$. The estimated region given a comprehension model is $\hat{R} = \operatorname{argmax}_{R \in \mathcal{R}} C(R \mid w, I)$.

In general, our comprehension model is very similar to [22]. To build the model, we first define a similarity function $S(R, w)$. We use the same visual feature encoder structure as in the generation model, yielding a region feature $v$. For the query expression, we use a one-layer bi-directional LSTM [26] as the encoder. We average the hidden vectors over all time steps to obtain a fixed-length representation for a query of arbitrary length:

$h = \frac{1}{T} \sum_{t=1}^{T} h_t, \quad [h_1, \dots, h_T] = \mathrm{BiLSTM}(E W)$   (8)

where $E$ is the word embedding matrix, initialized from pretrained word2vec [27], and $W$ is a one-hot representation of the query expression (one column per word).

Unlike [22], which uses concatenation followed by an MLP to compute the similarity, we use a simple dot product, as in [28]:

$S(R, w) = v^\top h$   (9)
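A minimal sketch of this scoring step, taking the bi-LSTM hidden states as precomputed inputs (the encoder itself is omitted; array shapes are assumptions):

```python
import numpy as np

def query_feature(hidden_states):
    """Average the bi-LSTM hidden vectors over time steps to get a
    fixed-length query representation. hidden_states: (T, d) array."""
    return np.mean(hidden_states, axis=0)

def region_scores(region_feats, hidden_states):
    """Dot-product similarity between each region feature and the
    query feature. region_feats: (N, d) array of encoded regions."""
    q = query_feature(hidden_states)
    return region_feats @ q
```

The predicted region is then simply the argmax over the returned scores.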

We consider two formulations of the comprehension task as classification. The per-region logistic loss

$L_{log} = -\sum_{R \in \mathcal{R}} \left[ y_R \log \sigma(S(R, w)) + (1 - y_R) \log(1 - \sigma(S(R, w))) \right]$   (10)

$y_R = 1 \text{ if } R = R^*, \text{ and } 0 \text{ otherwise}$   (11)

where $R^*$ is the ground truth region, corresponds to a per-region binary classification: is this region the right match for the expression or not. The softmax loss

$L_{softmax} = -\log p(R^* \mid w, \mathcal{R})$   (12)

$p(R \mid w, \mathcal{R}) = \frac{\exp S(R, w)}{\sum_{R' \in \mathcal{R}} \exp S(R', w)}$   (13)

frames the task as a multi-class classification: which region in the set should be matched to the expression.

The model is trained to minimize the comprehension loss $L_{com}$, where $L_{com}$ is either $L_{log}$ or $L_{softmax}$.
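The two loss formulations can be sketched directly from the region scores; this is a generic implementation of per-region logistic and multi-class softmax losses, not code from the paper.

```python
import numpy as np

def softmax_loss(scores, gt):
    """Multi-class formulation: negative log-probability of the ground
    truth region under a softmax over all region scores."""
    z = scores - scores.max()              # stabilize the exponentials
    logp = z - np.log(np.exp(z).sum())
    return -logp[gt]

def logistic_loss(scores, gt):
    """Per-region formulation: independent binary match / no-match
    classification for every region in the set."""
    labels = np.zeros_like(scores)
    labels[gt] = 1.0
    p = 1.0 / (1.0 + np.exp(-scores))      # sigmoid of each score
    return -(labels * np.log(p) + (1 - labels) * np.log(1 - p)).sum()
```

Both losses decrease as the ground truth region's score rises relative to the distractors.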

Figure 3 shows the structure of our comprehension model under the multi-class classification formulation.

4 Comprehension-guided generation

Once we have trained the comprehension model, we can start using it as a proxy for human comprehension, to guide the expression generator. Below we describe two such approaches: one applied at training time, and the other at test time.

4.1 Training by proxy

Consider a referring expression generated by $G$ for a given training example of an image/region pair $(I, R)$. The generation loss informs the generator how to modify its model to maximize the probability of the ground truth expression $w$. The comprehension model can provide an alternative, complementary signal: how to modify $G$ to maximize the discriminativity of the generated expression, so that the comprehension model selects the correct region among the proposal set $\mathcal{R}$. Intuitively, this signal should push down the probability of a word if it is unhelpful for comprehension, and pull that probability up if it is helpful.

Ideally, we hope to minimize the comprehension loss of the output of the generation model, $L_{com}(W)$, where $W$ is the one-hot encoding of the generated expression, with $V$ rows (vocabulary size) and $T$ columns (sequence length). We hope to update the generation model according to the gradient of the loss with respect to the model parameters $\theta$. By the chain rule,

$\frac{\partial L_{com}}{\partial \theta} = \frac{\partial L_{com}}{\partial W} \frac{\partial W}{\partial \theta}$   (14)

However, the generated expression is inferred by a decoding algorithm which is not differentiable. To address this issue, [29, 30, 21] applied reinforcement learning methods. Here, instead, we use an approximate method borrowing the idea of the soft attention mechanism [9, 31].

We define a matrix $\tilde{W}$ which has the same size as $W$. The $i$-th column of $\tilde{W}$ is, instead of the one-hot vector of the generated word, the distribution over the $i$-th word produced by $G$:

$\tilde{W}_{\cdot,i} = p(w_i \mid w_{1:i-1}, v)$   (15)

$\tilde{W}$ has several good properties. First, $\tilde{W}$ has the same size as $W$, so we can still compute the query feature by replacing $W$ with $\tilde{W}$, i.e. computing $h$ from $E\tilde{W}$. Second, the sum of each column of $\tilde{W}$ is 1, just as in $W$. Third, $\tilde{W}$ is differentiable with respect to the generator's parameters.

Now, the gradient of $L_{com}$ is approximated by:

$\frac{\partial L_{com}}{\partial \tilde \theta} \approx \frac{\partial L_{com}}{\partial \tilde{W}} \frac{\partial \tilde{W}}{\partial \theta}$   (16)

We will use this approximate gradient in the following three methods.
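The key substitution can be illustrated with plain matrix products: with a one-hot $W$, the product $E W$ looks up embedding columns; replacing $W$ by soft per-step distributions keeps the same shapes while making the result a smooth function of the generator outputs. Function and variable names below are illustrative.

```python
import numpy as np

def soft_query_embedding(E, probs):
    """Compute query word embeddings E @ W~, where each column of
    `probs` is the generator's distribution over one word position
    (replacing the one-hot column of W). Because the product is linear
    in `probs`, it is differentiable w.r.t. the generator outputs.
    E: (d, V) embedding matrix; probs: (V, T) with columns summing to 1."""
    assert np.allclose(probs.sum(axis=0), 1.0)
    return E @ probs
```

With a strictly one-hot `probs`, this reduces exactly to the usual embedding lookup, which is why the comprehension model can consume either input.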

4.1.1 Compound loss

Here we describe how we integrate the comprehension model to guide the training of the generation model.

The cross-entropy loss (5) encourages fluency of the generated expression, but disregards its discriminativity. We address this by using the comprehension model as a source of an additional loss signal. Technically, we define a compound loss

$L = L_{CE} + \lambda L_{com}$   (17)

where the comprehension loss $L_{com}$ is either the logistic (10) or the softmax (12) loss; the balance term $\lambda$ determines the relative importance of fluency vs. discriminativity in $L$.

Both $L_{CE}$ and $L_{com}$ take as input the model's distribution over the $t$-th word, where the preceding words are taken from the ground truth expression.

Replacing $W$ with $\tilde{W}$ (Sec. 4.1) allows us to train the model by back-propagation from the compound loss (17).

4.1.2 Modified Scheduled sampling training

Our final goal is to generate comprehensible expressions at test time. However, the compound loss is calculated given ground truth inputs, while at test time each token is generated by the model, yielding a discrepancy between how the model is used during training and at test time. Inspired by a similar motivation, [32] proposed scheduled sampling, which allows the model to be trained on a mixture of ground truth data and predicted data. Here, we propose a modified scheduled sampling training for our model.

During training, at each iteration, before forwarding through the LSTMs, we draw a random variable $s$ from a Bernoulli distribution with probability $p$. If $s = 1$, we feed the ground truth expression to the LSTM and minimize the cross-entropy loss. If $s = 0$, we sample the whole sequence step by step according to the posterior, feed the corresponding distributions $\tilde{W}$ to the comprehension model, and update the model by minimizing the comprehension loss. Therefore, $s$ serves as a dispatch mechanism, randomly alternating between the sources of data for the LSTMs and the components of the compound loss.

We start the modified scheduled sampling training from a generation model pretrained with the cross-entropy loss on the ground truth sequences. As training progresses, we linearly decay $p$ until a preset minimum value $p_{min}$. The minimum probability prevents the model from degenerating: without it, as $p$ goes to 0, the model loses all ground truth information and is guided purely by the comprehension model. This would lead the generation model to discover the pathological optima that exist in neural classification models [33]; the generated expressions would do "well" on the comprehension model, but would no longer be intelligible to humans. See Algorithm 1 for the pseudo-code.

1: Train the generation model $G$ with cross-entropy loss.
2: Set the offset $o$, the slope of decay $k$, the minimum probability $p_{min}$, and the number of iterations $N$.
3: for $t = 1, \dots, N$ do
4:     $p \leftarrow \max(p_{min}, o - kt)$
5:     Get a sample from the training data, $(I, R, w)$
6:     Sample $s$ from a Bernoulli distribution with probability $p$
7:     if $s = 1$ then
8:         Minimize $L_{CE}$ with the ground truth input.
9:     else
10:         Sample a sequence from $G$
11:         Minimize $L_{com}$ with the input $\tilde{W}$ computed along the sampled sequence.
Algorithm 1: Modified scheduled sampling training
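The dispatch step of Algorithm 1 can be sketched in a few lines; parameter names (`offset`, `slope`, `min_prob`) are illustrative stand-ins for the decay schedule described above.

```python
import random

def dispatch(iteration, offset, slope, min_prob):
    """Linearly decay the probability of a cross-entropy step
    (ground-truth input) down to a floor `min_prob`; with the
    complementary probability, take a comprehension-loss step on a
    sampled sequence. Returns (p, use_ground_truth)."""
    p = max(min_prob, offset - slope * iteration)
    use_ground_truth = random.random() < p
    return p, use_ground_truth
```

The floor `min_prob` implements the safeguard above: some fraction of updates always comes from ground truth, keeping the generator anchored to human language.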

4.1.3 Stochastic mixed sampling

Since modified scheduled sampling samples a whole sentence at a time, it is hard to get a useful signal if there is an error at the beginning of the inference. We would like a method that deviates slowly from the original model while it explores.

Here we borrow the idea of mixed incremental cross-entropy reinforce (MIXER) [29]. Again, we start from a pretrained generator. We then introduce model predictions during training with an annealing schedule, so as to gradually teach the model to produce stable sequences. At each iteration, we feed the ground truth input for the first $s$ steps and sample the remaining words, where $s = \min(L, B + g)$, $L$ is the maximum length of expressions, $B$ is a base step size which gradually decreases during training, and $g$ is a random variable following a geometric distribution, $p(g = k) = (1 - q)^k q$. This randomness is the difference between our method and MIXER. We call this method stochastic mixed incremental cross-entropy comprehension (SMIXEC).

By introducing the random term $g$, we can control how much supervision we get from the ground truth by tuning the value of $q$; this also helps prevent the model from reaching pathological optima. Note that when $q$ is 0, $g$ will always be large enough that we recover pure cross-entropy training. When $q$ is 1, $g$ always equals 0, which is equivalent to the MIXER annealing schedule. See Algorithm 2 for the pseudo-code.

1: Train the generation model $G$ with cross-entropy loss.
2: Set the geometric distribution parameter $q$, the maximum sequence length $L$, the period of decay $D$, and the number of iterations $N$.
3: for $t = 1, \dots, N$ do
4:     Decrease the base step size $B$ according to the decay period $D$
5:     Sample $g$ from a geometric distribution with success probability $q$
6:     $s \leftarrow \min(L, B + g)$
7:     Get a sample from the training data, $(I, R, w)$
8:     Run $G$ with ground truth input in the first $s$ steps, and sampled input in the remaining steps
9:     Get $L_{CE}$ on the first $s$ steps, and $L_{com}$ on the whole sentence with input $\tilde{W}$
10:     Minimize $L_{CE} + \lambda L_{com}$. (No back-propagation through the sampling steps)
Algorithm 2: Stochastic mixed incremental cross-entropy comprehension (SMIXEC)
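The prefix-length draw $s = \min(L, B + g)$ can be sketched as below; the geometric support convention (starting at 0) and the argument names are assumptions, and the decay of $B$ itself is left to the caller.

```python
import numpy as np

def smixec_prefix_length(base, max_len, q, rng):
    """Number of ground-truth steps to feed at this iteration:
    s = min(L, B + g), with B the (externally decayed) base step size
    and g ~ Geometric(q) on support {0, 1, 2, ...}. A sketch of the
    schedule, not the paper's exact code."""
    g = rng.geometric(q) - 1   # numpy's geometric starts at 1; shift to 0
    return min(max_len, base + g)
```

Note the limiting behaviors discussed above: as `q` approaches 0, `g` is almost always large and `s == max_len` (pure cross-entropy training); at `q == 1`, `g` is always 0 and `s == base`, recovering the deterministic MIXER schedule.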

4.2 Generate-and-rerank

Here we propose a different strategy to generate better expressions. Instead of using the comprehension model to train the generation model, we apply it at test time. The pipeline is similar to [5].

Unlike in Sec. 3.1, we need not only the image $I$ and region $R$ as input, but also a region set $\mathcal{R}$. Suppose we have a pretrained generation model $G$ and a pretrained comprehension model $C$. The steps are as follows:

  1. Generate candidate expressions $w^{(1)}, \dots, w^{(n)}$ according to $G$.

  2. Select the candidate with the highest score, $\operatorname{argmax}_k s(w^{(k)})$.

Here, we do not use beam search, because we want the candidate set to be more diverse. We define the score function as a weighted combination of the log perplexity and the comprehension loss (we assume the softmax loss here):

$s(w) = \frac{1}{T} \sum_{k=1}^{T} \log p(w_k \mid w_{1:k-1}, v) - \lambda L_{com}(w)$   (18)

where $w_k$ is the $k$-th token of $w$, and $T$ is the length of $w$.

This can be viewed as a weighted joint log probability that an expression is both natural and unambiguous: the log perplexity term ensures fluency, and the comprehension loss ensures that the chosen expression is discriminative.
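The two steps above can be sketched as follows, with the per-token log-probabilities and comprehension losses assumed precomputed for each candidate; the function names and candidate tuple layout are illustrative.

```python
import numpy as np

def rerank_score(token_logprobs, comp_loss, weight):
    """Weighted combination of mean token log-probability (negative
    log perplexity, rewarding fluency) and negative comprehension
    loss (rewarding discriminativity)."""
    return np.mean(token_logprobs) - weight * comp_loss

def generate_and_rerank(candidates, weight):
    """candidates: list of (expression, token log-probs, comprehension
    loss) tuples, e.g. 100 samples from the generator. Returns the
    highest-scoring expression."""
    return max(candidates,
               key=lambda c: rerank_score(c[1], c[2], weight))[0]
```

Because sampling (rather than beam search) produces the candidates, the pool stays diverse, and the comprehension term can pick out a discriminative expression that plain likelihood would rank lower.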

RefCOCO RefCOCO+ RefCOCOg
Test A Test B Test A Test B Val
GT DET GT DET GT DET GT DET GT DET
MLE[18] 63.15% 58.32% 64.21% 48.48% 48.73% 46.86% 42.13% 34.04% 55.16% 40.75%
MMI[18] 71.72% 64.90% 71.09% 54.51% 52.44% 54.03% 47.51% 42.81% 62.14% 45.85%
visdif+MMI[18] 73.98% 67.64% 76.59% 55.16% 59.17% 55.81% 55.62% 43.43% 64.02% 46.86%
Neg Bag[20] 75.6% 58.6% 78.0% 56.4% - - - - 68.4% 39.5%
Ours 74.14% 68.11% 71.46% 54.65% 59.87% 56.61% 54.35% 43.74% 63.39% 47.60%
Ours(w2v) 74.04% 67.94% 73.43% 55.18% 60.26% 57.05% 55.03% 43.33% 65.36% 49.07%
Table 1: Comprehensions results on RefCOCO, RefCOCO+, RefCOCOg datasets. GT: the region set contains ground truth bounding boxes; DET: region set contains proposals generated from detectors. w2v means initializing the embedding layer using pretrained word2vec.
RefCLEF Test
SCRC[1] 17.93%
GroundR[22] 26.93%
MCB[23] 28.91%
Ours 31.25%
Ours(w2v) 31.85%
Table 2: Comprehension on RefClef (EdgeBox proposals)

5 Experiments

We base our experiments on the following data sets.

RefClef (ReferIt) [17] contains 20,000 images from the IAPR TC-12 dataset [34], together with segmented image regions from the SAIAPR-12 dataset [35]. The dataset is split into 10,000 images for training and validation and 10,000 for testing. There are 59,976 (image, bounding box, description) tuples in the trainval set and 60,105 in the test set.

RefCOCO (UNC RefExp) [18] consists of 142,209 referring expressions for 50,000 objects in 19,994 images from COCO [36], collected using the ReferitGame [17].

RefCOCO+ [18] has 141,564 expressions for 49,856 objects in 19,992 images from COCO. RefCOCO+ players were disallowed from using location words, so this dataset focuses more on purely appearance-based descriptions.

RefCOCOg (Google RefExp) [2] consists of 85,474 referring expressions for 54,822 objects in 26,711 images from COCO; it contains longer and more flowery expressions than RefCOCO and RefCOCO+.

5.1 Comprehension

We first evaluate our comprehension model on human-made expressions, to assess its ability to provide useful signal.

There are two comprehension experiment settings, as in [2, 18, 20]. In the first, the input region set contains only ground truth bounding boxes for objects, and a hit is defined by the model choosing the correct region the expression refers to. In the second setting, the region set contains proposal regions generated by a Fast R-CNN detector [37], or by other proposal generation methods [38]. Here a hit occurs when the model chooses a proposal with intersection over union (IoU) with the ground truth of 0.5 or higher. We used precomputed proposals from [18, 2, 1] for all four datasets.
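For reference, the standard IoU hit criterion used in the second setting can be computed as follows (boxes given as corner coordinates; this is the usual definition, not code from the paper):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union

def is_hit(proposal, gt, thresh=0.5):
    """A chosen proposal counts as a hit if IoU with the ground truth
    box is at least the threshold (0.5 in these experiments)."""
    return iou(proposal, gt) >= thresh
```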

In RefCOCO and RefCOCO+, we have two test sets: testA contains people and testB contains all other objects. For RefCOCOg, we evaluate on the validation set. For RefClef, we evaluate on the test set.

We train the model using the Adam optimizer [39]. The word embedding size is 300, and the hidden size of the bi-LSTM is 512. The length of the visual feature is 1024. For RefCOCO, RefCOCO+ and RefCOCOg, we train the model using the softmax loss, with ground truth regions as training data. For the RefClef dataset, we use the logistic loss; the training regions are composed of the ground truth regions and all proposals from Edge Box [38], and the binary classification task is to tell whether a proposal is a hit or not.

RefCOCO
Test A Test B
Acc BLEU 1 BLEU 2 ROUGE METEOR Acc BLEU 1 BLEU 2 ROUGE METEOR
MLE[18] 74.80% 0.477 0.290 0.413 0.173 72.81% 0.553 0.343 0.499 0.228
MMI[18] 78.78% 0.478 0.295 0.418 0.175 74.01% 0.547 0.341 0.497 0.228
CL 80.14% 0.4586 0.2552 0.4096 0.178 75.44% 0.5434 0.3266 0.5056 0.2326
MSS 79.94% 0.4574 0.2532 0.4126 0.1759 75.93% 0.5403 0.3232 0.5010 0.2297
SMIXEC 79.99% 0.4855 0.2800 0.4212 0.1848 75.60% 0.5536 0.3426 0.5012 0.2320
MLE+sample 78.38% 0.5201 0.3391 0.4484 0.1974 73.08% 0.5842 0.3686 0.5161 0.2425
Rerank 97.23% 0.5209 0.3391 0.4582 0.2049 94.96% 0.5935 0.3763 0.5259 0.2505
RefCOCO+
Test A Test B
Acc BLEU 1 BLEU 2 ROUGE METEOR Acc BLEU 1 BLEU 2 ROUGE METEOR
MLE[18] 62.10% 0.391 0.218 0.356 0.140 46.21% 0.331 0.174 0.322 0.135
MMI[18] 67.79% 0.370 0.203 0.346 0.136 55.21% 0.324 0.167 0.320 0.133
CL 68.54% 0.3683 0.2041 0.3386 0.1375 55.87% 0.3409 0.1829 0.3432 0.1455
MSS 69.41% 0.3763 0.2126 0.3425 0.1401 55.59% 0.3386 0.1823 0.3365 0.1424
SMIXEC 69.05% 0.3847 0.2125 0.3507 0.1436 54.71% 0.3275 0.1716 0.3194 0.1354
MLE+sample 62.45% 0.3925 0.2256 0.3581 0.1456 47.86% 0.3354 0.1819 0.3370 0.1470
Rerank 77.32% 0.3956 0.2284 0.3636 0.1484 67.65% 0.3368 0.1843 0.3441 0.1509
Table 3: Expression generation evaluated by automated metrics. Acc: accuracy of the trained comprehension model on generated expressions. We separately mark in bold the best results for single-output methods (top) and sample-based methods (bottom) that generate multiple expressions and select one.
Figure 4: Generation results. The images are from RefCOCO testA, RefCOCO testB, RefCOCO+ testA and RefCOCO+ testB (from left to right).

Table 1 shows our results on RefCOCO, RefCOCO+ and RefCOCOg compared to recent algorithms. Among these, MMI stands for Maximum Mutual Information, which uses a max-margin loss to make the generation model more discriminative. With the same visual feature encoder, our model obtains better results than MMI in [18]. Our model is also competitive with recent, more complex state-of-the-art models [18, 20]. Table 2 shows our results on RefClef, where we only test in the second setting to compare to existing results; our model, a modest modification of [22], obtains state-of-the-art accuracy in this experiment.

5.2 Generation

We evaluate our expression generation methods, along with baselines, on RefCOCO and RefCOCO+. Table 3 shows our evaluation of the different methods based on automatic caption generation metrics. We also add an 'Acc' column: the "comprehension accuracy" of the generated expressions, i.e., how often our comprehension model correctly resolves them.

The two baseline models are maximum likelihood (MLE) and maximum mutual information (MMI) from [18]. Our methods include the compound loss (CL), modified scheduled sampling (MSS), stochastic mixed incremental cross-entropy comprehension (SMIXEC), and generate-and-rerank (Rerank). MLE+sample is designed for better analysis of the rerank model.

For the two baseline models and our three training-by-proxy strategies, we use greedy search to generate an expression. The MLE+sample and Rerank methods generate an expression by choosing the best one among 100 sampled expressions.

Our generate-and-rerank model (Rerank in Table 3) gets consistently better results on automatic comprehension accuracy and on fluency-based metrics like BLEU. To see whether the improvement comes from sampling or from reranking, we also sampled 100 expressions from the MLE model and chose the one with the lowest perplexity (MLE+sample in Table 3). The generate-and-rerank method still obtains better results, showing the benefit of comprehension-guided reranking.

We can see that our training-by-proxy method achieves higher accuracy under the comprehension model. This confirms the effectiveness of our collaborative training-by-proxy method, despite the differentiable approximation (16).

Among the three training schedules for training by proxy, there is no clear winner. On RefCOCO, our SMIXEC method outperforms the basic MMI method, with higher comprehension accuracy and higher caption generation metrics; the compound loss and modified scheduled sampling appear to sacrifice fluency while optimizing for accuracy. On RefCOCO+, however, the three models perform very differently: the compound loss works better on Test B, SMIXEC works best on Test A, and MSS works reasonably well on both. We currently do not have a concrete explanation for this behavior.

Human evaluations

From [18], we know that human evaluations of expressions are not perfectly correlated with language-based caption metrics. Thus, we performed human evaluations of expression generation for 100 images randomly chosen from each split of RefCOCO and RefCOCO+. Subjects clicked on the object which they thought was the most probable match for a generated expression. Each image/expression example was presented to two subjects, with a hit recorded only when both subjects clicked inside the correct region.

RefCOCO RefCOCO+
Test A Test B Test A Test B
MMI[18] 53% 61% 39% 35%
SMIXEC 62% 68% 46% 25%
Rerank 66% 75% 43% 47%
Table 4: Human evaluations

The results of human evaluations of MMI, SMIXEC and our generate-and-rerank method are in Table 4. On RefCOCO, both of our comprehension-guided methods appear to generate better (more informative) referring expressions. On RefCOCO+, the results on Test A are similar to those on RefCOCO, but our training-by-proxy method performs less well on Test B.

Fig 4 shows some example generation results on test images.

6 Conclusion

In this paper, we propose to use learned comprehension models to guide the generation of better referring expressions. Comprehension guidance can be incorporated at training time, with a training-by-proxy method, where the discriminative comprehension loss (region retrieval based on generated referring expressions) is included in training the expression generator. Alternatively, comprehension guidance can be used at test time, with a generate-and-rerank method which uses the model comprehension score to select among multiple proposed expressions. Empirical evaluation shows both to be promising, with the generate-and-rerank method obtaining particularly good results across datasets.

Among directions for future work, we are interested in exploring alternative training regimes, in particular an adaptation of the GAN protocol to referring expression generation. We also plan to incorporate context objects (other regions in the image) into the representation of a reference region. Finally, while at the moment the generation and comprehension models are completely separate, it would be interesting to consider weight sharing.

Acknowledgements

We thank Licheng Yu for providing the baseline code. We also thank those who participated in evaluations. Finally, we gratefully acknowledge the support of NVIDIA Corporation with the donation of GPUs used for this research.

References

  • [1] Ronghang Hu, Huazhe Xu, Marcus Rohrbach, Jiashi Feng, Kate Saenko, and Trevor Darrell. Natural language object retrieval. arXiv preprint, pages 4555–4564, 2015.
  • [2] Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan Yuille, and Kevin Murphy. Generation and comprehension of unambiguous object descriptions. CVPR, pages 11–20, 2016.
  • [3] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
  • [4] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv, pages 1–15, 2015.
  • [5] Jacob Andreas and Dan Klein. Reasoning about pragmatics with neural listeners and speakers. arXiv:1604.00562, 2016.
  • [6] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3156–3164, 2015.
  • [7] Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. CVPR, pages 3128–3137, 2015.
  • [8] Lin Ma, Zhengdong Lu, Lifeng Shang, and Hang Li. Multimodal convolutional neural networks for matching image and sentence. Proceedings of the IEEE International Conference on Computer Vision, pages 2623–2631, 2016.
  • [9] Kelvin Xu, Jimmy Lei Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. ICML, 2015.
  • [10] Chenxi Liu, Junhua Mao, Fei Sha, and Alan Yuille. Attention correctness in neural image captioning. pages 1–11, 2016.
  • [11] Justin Johnson, Andrej Karpathy, and Li Fei-Fei. DenseCap: Fully convolutional localization networks for dense captioning. arXiv preprint, 2015.
  • [12] Andrea Frome, Greg S. Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Tomas Mikolov, et al. DeViSE: A deep visual-semantic embedding model. In Advances in Neural Information Processing Systems, 2013.
  • [13] Liwei Wang, Yin Li, and Svetlana Lazebnik. Learning deep structure-preserving image-text embeddings. CVPR, pages 5005–5013, 2016.
  • [14] Jason Weston, Samy Bengio, and Nicolas Usunier. WSABIE: Scaling up to large vocabulary image annotation. In Proceedings of the International Joint Conference on Artificial Intelligence, IJCAI, 2011.
  • [15] Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. Order-embeddings of images and language. arXiv preprint, pages 1–13, 2015.
  • [16] Scott Reed, Zeynep Akata, Honglak Lee, and Bernt Schiele. Learning deep representations of fine-grained visual descriptions. CVPR, pages 49–58, 2016.
  • [17] Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara L. Berg. ReferItGame: Referring to objects in photographs of natural scenes. EMNLP, pages 787–798, 2014.
  • [18] Licheng Yu, Patrick Poirson, Shan Yang, Alexander C. Berg, and Tamara L. Berg. Modeling context in referring expressions. In ECCV, 2016.
  • [19] Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan Yuille, and Kevin Murphy. Generation and comprehension of unambiguous object descriptions. CVPR, pages 11–20, 2016.
  • [20] Varun K Nagaraja, Vlad I Morariu, and Larry S Davis. Modeling Context Between Objects for Referring Expression Understanding. Eccv, 2016.
  • [21] Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient. 2016.
  • [22] Anna Rohrbach, Marcus Rohrbach, Ronghang Hu, Trevor Darrell, and Bernt Schiele. Grounding of Textual Phrases in Images by Reconstruction. 1511.03745V1, 1:1–10, 2015.
  • [23] Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding. Arxiv, 2016.
  • [24] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  • [25] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
  • [26] Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In 2013 IEEE international conference on acoustics, speech and signal processing, pages 6645–6649. IEEE, 2013.
  • [27] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
  • [28] Bhuwan Dhingra, Hanxiao Liu, William W. Cohen, and Ruslan Salakhutdinov. Gated-Attention Readers for Text Comprehension. ArXiV, 2016.
  • [29] Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence Level Training with Recurrent Neural Networks. Iclr, pages 1–15, 2016.
  • [30] Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. An Actor-Critic Algorithm for Sequence Prediction. arXiv:1607.07086v1 [cs.LG], 2016.
  • [31] Dzmitry Bahdana, Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural Machine Translation By Jointly Learning To Align and Translate. Iclr 2015, pages 1–15, 2014.
  • [32] Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems, pages 1171–1179, 2015.
  • [33] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
  • [34] M Grübinger, P Clough, H Müller, and T Deselaers. The IAPR TC-12 Benchmark: A New Evaluation Resource for Visual Information Systems. LREC Workshop OntoImage Language Resources for Content-Based Image Retrieval, pages 13–23, 2006.
  • [35] Hugo Jair Escalante, Carlos A. Hernández, Jesus A. Gonzalez, A. López-López, Manuel Montes, Eduardo F. Morales, L. Enrique Sucar, Luis Villaseñor, and Michael Grubinger. The segmented and annotated IAPR TC-12 benchmark. Computer Vision and Image Understanding, 114(4):419–428, 2010.
  • [36] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European Conference on Computer Vision, pages 740–755. Springer, 2014.
  • [37] Ross Girshick. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, pages 1440–1448, 2015.
  • [38] Piotr Dollar Larry Zitnick. Edge boxes: Locating object proposals from edges. In ECCV. European Conference on Computer Vision, September 2014.
  • [39] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.