Speaking the Same Language: Matching Machine to Human Captions by Adversarial Training

03/30/2017
by Rakshith Shetty, et al.

While strong progress has been made in image captioning in recent years, machine and human captions are still quite distinct. A closer look reveals that this is due to deficiencies in the generated word distribution and vocabulary size, and to a strong bias in the generators towards frequent captions. Furthermore, humans, rightfully so, generate multiple diverse captions, due to an inherent ambiguity in the captioning task which is not considered in today's systems. To address these challenges, we change the training objective of the caption generator from reproducing ground-truth captions to generating a set of captions that is indistinguishable from human-generated captions. Instead of handcrafting such a learning target, we employ adversarial training in combination with an approximate Gumbel sampler to implicitly match the generated distribution to the human one. While our method achieves performance comparable to the state-of-the-art in terms of the correctness of the captions, we generate a set of diverse captions that are significantly less biased and match the word statistics better in several aspects.



1 Introduction

Image captioning systems have a variety of applications ranging from media retrieval and tagging to assistance for the visually impaired. In particular, models which combine state-of-the-art image representations based on deep convolutional networks and deep recurrent language models have led to ever increasing performance on evaluation metrics such as CIDEr [39] and METEOR [8], as can be seen e.g. on the COCO image captioning challenge leaderboard [6].

Despite these advances, it is often easy for humans to differentiate between machine and human captions, particularly when observing multiple captions for a single image. As we analyze in this paper, this is likely due to artifacts and deficiencies in the statistics of the generated captions, which become more apparent when observing multiple samples. Specifically, we observe that state-of-the-art systems frequently "reveal themselves" by generating a different word distribution and using a smaller vocabulary. Further scrutiny reveals that generalization from the training set is still challenging and that generation is biased toward frequent fragments and captions.

Also, today’s systems are evaluated to produce a single caption. Yet, multiple potentially distinct captions are typically correct for a single image – a property that is reflected in human ground-truth. This diversity is not equally reproduced by state-of-the-art caption generators [40, 23].

Therefore, our goal is to make image captions less distinguishable from human ones – similar in the spirit to a Turing Test. We also embrace the ambiguity of the task and extend our investigation to predicting sets of captions for a single image and evaluating their quality, particularly in terms of the diversity in the generated set. In contrast, popular approaches to image captioning are trained with an objective to reproduce the captions as provided by the ground-truth.

Instead of relying on handcrafting loss-functions to achieve our goal, we propose an adversarial training mechanism for image captioning. For this we build on Generative Adversarial Networks (GANs) [14], which have been successfully used to generate mainly continuous data distributions such as images [9, 30], although exceptions exist [27]. In contrast to images, captions are discrete, which poses a challenge when trying to backpropagate through the generation step. To overcome this obstacle, we use a Gumbel sampler [20, 28] that allows for end-to-end training.

We address the problem of caption set generation for images and discuss metrics to measure the caption diversity and compare it to human ground-truth. We contribute a novel solution to this problem using an adversarial formulation. The evaluation of our model shows that the accuracy of generated captions is on par with the state-of-the-art, but we greatly increase the diversity of the caption sets and better match the ground-truth statistics in several measures. Qualitatively, our model produces more diverse captions across images containing similar content (Figure 1) and when sampling multiple captions for an image (see supplementary material: https://goo.gl/3yRVnq).

2 Related Work

Image Description. Early captioning models rely on first recognizing visual elements, such as objects, attributes, and activities, and then generating a sentence using language models such as a template model [13], an n-gram model [22], or statistical machine translation [34]. Advances in deep learning have led to end-to-end trainable models that combine deep convolutional networks to extract visual features and recurrent networks to generate sentences [11, 41, 21].

Though modern description models are capable of producing coherent sentences which accurately describe an image, they tend to produce generic sentences which are replicated from the training set [10]. Furthermore, an image can correspond to many valid descriptions, yet at test time sentences generated with methods such as beam search are generally very similar. [40, 23] focus on increasing sentence diversity by integrating a diversity-promoting heuristic into beam search. [42] attempts to increase the diversity in caption generation by training an ensemble of caption generators, each specializing in a different portion of the training set. In contrast, we focus on improving the diversity of captions generated by a single model, and we achieve this through a different training loss rather than through modifications applied after training has completed. We note that generating diverse sentences is also a challenge in visual question generation, see concurrent work [19], and in language-only dialogue generation studied in the linguistics community, see e.g. [23, 24].

When training recurrent description models, the most common method is to predict each word conditioned on the image and all previous ground-truth words. At test time, each word is instead conditioned on the image and previously predicted words, so predictions may be conditioned on words the model itself got wrong. By only training on ground-truth words, the model suffers from exposure bias [31] and cannot effectively learn to recover when it predicts an incorrect word. To avoid this, [4] proposes a scheduled sampling training scheme which begins by training with ground-truth words, but then slowly conditions generated words on words previously produced by the model. However, [17] shows that the scheduled sampling algorithm is inconsistent and that the optimal solution under this objective does not converge to the true data distribution. Taking a different direction, [31] proposes to address the exposure bias by gradually mixing a sequence-level loss (BLEU score), optimized with the REINFORCE rule, into the standard maximum likelihood training. Several other works have followed this up by using reinforcement-learning-based approaches to directly optimize evaluation metrics like BLEU, METEOR and CIDEr [33, 25]. However, optimizing the evaluation metrics does not directly address the diversity of the generated captions. Since all current evaluation metrics use n-gram matching to score captions, captions using more frequent n-grams are likely to achieve better scores than ones using rarer and more diverse n-grams.

In this work, we formulate our caption generator as a generative adversarial network. We design a discriminator that explicitly encourages generated captions to be diverse and indistinguishable from human captions. The generator is trained with an adversarial loss with this discriminator. Consequently, our model generates captions that better reflect the way humans describe images while maintaining similar correctness as determined by a human evaluation.

Generative Adversarial Networks. The Generative Adversarial Networks (GANs) [14] framework learns generative models without explicitly defining a loss against a target distribution. Instead, GANs learn a generator using the loss from a discriminator which tries to differentiate real samples from generated ones. When trained to generate real images, GANs have shown encouraging results [9, 30]. In all these works the target distribution is continuous; in contrast, our target, a sequence of words, is discrete. Applying GANs to discrete sequences is challenging as it is unclear how to best back-propagate the loss through the sampling mechanism.

A few works have looked at generating discrete distributions using GANs. [27] aims to generate semantic image segmentations with discrete semantic labels at each pixel. [46] uses the REINFORCE trick to train an unconditional text generator within the GAN framework, but the diversity of the generated text is not considered.

Most similar to our work are concurrent works which use GANs for dialogue generation [24] and image caption generation [7]. While [24, 46, 7] rely on the REINFORCE rule [43] to handle backpropagation through the discrete samples, we use the Gumbel-Softmax [20]; see Section 3.1 for further discussion. [24] aims to generate a diverse dialogue of multiple sentences while we aim to produce diverse sentences for a single image. Additionally, [24] uses both the adversarial and the maximum likelihood loss in each step of generator training, whereas we train the generator with only the adversarial loss after pre-training. Concurrent work [7] also applies GANs to diversify generated image captions. Apart from using the Gumbel-Softmax as discussed above, our work differs from [7] in the discriminator design and in the quantitative evaluation of generator diversity.

3 Adversarial Caption Generator

The image captioning task can be formulated as follows: given an input image, the generator produces a caption describing the contents of the image. There is an inherent ambiguity in the task, with multiple possible correct captions for an image, which is also reflected in the diverse captions written by human annotators (we quantify this in Table 4). However, most image captioning architectures ignore this diversity during training. The standard approach is to model the caption distribution with a recurrent language model conditioned on the input image [11, 41], and to train it using a maximum likelihood (ML) loss, treating every image-caption pair as an independent sample. This ignores the diversity in the human captions and results in models that tend to reproduce generic and commonly occurring captions from the training set, as we will show in Section 5.3.

We propose to address this by explicitly training the generator to produce multiple diverse captions for an input image using the adversarial framework [14]. In adversarial frameworks, a generative model is trained by pairing it with an adversarial discriminator which tries to distinguish generated samples from true data samples. The generator is trained with the objective of fooling the discriminator, which is optimal when the generated distribution exactly matches the data distribution. This is well-suited to our goal because, with an appropriate discriminator network, we can coax the generator to capture the diversity in human-written captions without having to explicitly design a loss function for it.

To enable adversarial training, we introduce a second network, the discriminator, which takes as input an image and a caption set and classifies the set as either real or fake. Providing a set of captions per image as input to the discriminator allows it to factor in the diversity of the caption set during classification. The discriminator can penalize the generator for producing very similar or repeated captions and thus encourage diversity in the generator.

Specifically, the discriminator is trained to classify caption sets drawn from the reference captions as real while classifying caption sets produced by the generator as fake. The generator can now be trained using an adversarial objective, i.e. it is trained to fool the discriminator into classifying its caption sets as real.

3.1 Caption generator

Figure 3: Caption generator model. Deep visual features are input to an LSTM to generate a sentence. A Gumbel sampler is used to obtain soft samples from the softmax distribution, allowing for backpropagation through the samples.

We use a near state-of-the-art caption generator model based on [36]. It uses the standard encoder-decoder framework with two stages: an encoder which extracts feature vectors from the input image and a decoder which translates these features into a word sequence.

Image features. Images are encoded as activations from a pre-trained convolutional neural network (CNN). Captioning models also benefit from augmenting the CNN features with explicit object detection features [36]. Accordingly, we extract a feature vector containing the probability of occurrence of each object category and provide it as input to the generator.

Language Model. Our decoder, shown in Figure 3, is adopted from the Long Short-Term Memory (LSTM) based language model architecture presented in [36] for image captioning. It consists of a three-layered LSTM network with residual connections between the layers. The LSTM network takes two features as input. The first is the object detection feature, which is input to the LSTM only at the first time step and shares the input matrix with the word vectors. The second is the global image CNN feature, which is input to the LSTM at all time-steps through its own input matrix.

The softmax layer at the output of the generator produces a probability distribution over the vocabulary at each step.

y_t = W_s c_t + b_s    (1)
p_t = softmax(β y_t)    (2)

where c_t is the LSTM cell state at time t and β is a scalar parameter which controls the peakiness of the distribution. The parameter β allows us to control how large a hypothesis space the generator explores during adversarial training. An additional uniform random noise vector is input to the LSTM in adversarial training to allow the generator to use the noise to produce diversity.
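The effect of the scalar peakiness parameter can be illustrated with a minimal NumPy sketch of a scaled softmax (the name `beta` for the scaling parameter is our choice, not notation from the paper):

```python
import numpy as np

def scaled_softmax(logits, beta=1.0):
    """Softmax with a scalar scaling factor controlling peakiness.

    Larger beta sharpens the distribution toward its argmax; smaller beta
    flattens it, letting a generator explore a larger hypothesis space.
    """
    z = beta * np.asarray(logits, dtype=np.float64)
    z -= z.max()              # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = [2.0, 1.0, 0.1]
sharp = scaled_softmax(logits, beta=5.0)   # close to one-hot
flat = scaled_softmax(logits, beta=0.2)    # close to uniform
```

Setting a lower scaling during adversarial training thus spreads probability mass over more candidate words at each step.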

Discreteness Problem. To produce captions from the generator we could simply sample from the output distribution, recursively feeding back the previously sampled word at each step, until we sample the END token. One can generate multiple sentences by sampling and pick the sentence with the highest probability, as done in [12]. Alternatively, we could use greedy search approaches like beam search. However, directly providing these discrete samples as input to the discriminator does not allow for backpropagation through them, as they are discontinuous. Alternatives to overcome this are the REINFORCE rule/trick [43], using the softmax distribution, or using the Gumbel-Softmax approximation [20, 28].

Using policy gradient algorithms with the REINFORCE rule/trick [43] allows estimation of gradients through discrete samples [16, 2, 46, 24]. However, learning with the REINFORCE trick can be unstable due to high variance [38], and mechanisms to make learning more stable, such as estimating the action-value of intermediate states by generating multiple possible sentence completions (e.g. as used in [46, 7]), can be computationally intensive.

Another option is to input the softmax distribution to the discriminator instead of samples. We experimented with this, but found that the discriminator easily distinguishes between the softmax distribution produced by the generator and the sharp reference samples, and the GAN training fails.

The last option, which we rely on in this work, is to use a continuous relaxation of the samples encoded as one-hot vectors using the Gumbel-Softmax approximation proposed in [20] and [28]. This continuous relaxation, combined with a re-parametrization of the sampling process, allows backpropagation through samples from a categorical distribution. The main benefit of this approach is that it plugs into the model as a differentiable node and does not need any additional steps to estimate the gradients. Whereas most previous methods applying GANs to discrete output generators use policy gradient algorithms, we show that the Gumbel-Softmax approximation can also be used successfully in this setting. An empirical comparison between the two approaches can be found in [20].

The Gumbel-Softmax approximation consists of two steps. First, the Gumbel-Max trick is used to re-parametrize sampling from a categorical distribution. Given a random variable y drawn from a categorical distribution with class probabilities p_1, …, p_V, y can be expressed as:

y = argmax_i (g_i + log p_i)    (3)

where the g_i are i.i.d. random variables from the standard Gumbel distribution. Next, the argmax in Equation (3) is replaced with a softmax to obtain a continuous relaxation y_soft of the discrete random variable y:

y_soft,i = exp((log p_i + g_i)/τ) / Σ_j exp((log p_j + g_j)/τ)    (4)

where τ is a temperature parameter which controls how close y_soft is to the discrete sample y, with y_soft → y as τ → 0.

We use the straight-through variation of the Gumbel-Softmax approximation [20] at the output of our generator to sample words during adversarial training. In the straight-through variation, the discrete one-hot sample is used in the forward pass and the soft approximation is used in the backward pass to allow backpropagation.
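The sampling steps above can be sketched in NumPy. This is a minimal illustration of the Gumbel-Softmax relaxation and its straight-through variant, not the paper's implementation; in an autodiff framework the straight-through step would be written so gradients flow through the soft sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_sample(log_probs, tau=0.5):
    """Soft sample from a categorical distribution via the Gumbel-Softmax.

    log_probs: log class probabilities, shape (V,).
    tau: temperature; as tau -> 0 the output approaches a one-hot sample.
    """
    # Gumbel(0, 1) noise via inverse transform of uniform samples
    g = -np.log(-np.log(rng.uniform(size=log_probs.shape)))
    z = (log_probs + g) / tau
    z -= z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def straight_through(y_soft):
    """Straight-through variant: the hard one-hot sample is used forward.

    In a framework like PyTorch this would be
    y_hard + (y_soft - y_soft.detach()), so the backward pass sees y_soft.
    """
    y_hard = np.zeros_like(y_soft)
    y_hard[np.argmax(y_soft)] = 1.0
    return y_hard

probs = np.array([0.7, 0.2, 0.1])
y = gumbel_softmax_sample(np.log(probs), tau=0.5)
y_st = straight_through(y)
```

At low temperature the soft sample concentrates its mass on one word, so the relaxation stays close to true categorical sampling while remaining differentiable.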

3.2 Discriminator model

Figure 4: Discriminator network. The caption set sampled from the generator is used to compute image-to-sentence and sentence-to-sentence distances, which are used to score the set as real/fake.

The discriminator network takes an image, represented by its CNN feature, and a set of captions as input and classifies the set as either real or fake. Ideally, we want to base this decision on two criteria: (a) do the captions describe the image correctly? (b) is the set diverse enough to match the diversity in human captions?

To enable this, we use two separate distance measuring kernels in our discriminator network, as shown in Figure 4. The first kernel computes the distances between the image and each sentence in the caption set. The second kernel computes the distances between the sentences in the set. The architecture of these distance measuring kernels is based on the minibatch discriminator presented in [35]. However, unlike [35], we only compute distances between captions corresponding to the same image and not over the entire minibatch.

Input captions are encoded into fixed-size sentence embedding vectors s_1, …, s_k using an LSTM encoder. The image feature is also embedded into a smaller image embedding vector v. The distances between v and the sentence embeddings are computed as

M_0 = T v,   M_i = T s_i    (5)
c_b(v, s_i) = exp(−‖M_{0,b} − M_{i,b}‖_1)    (6)
α_b = Σ_{i=1}^{k} c_b(v, s_i)    (7)
α = [α_1, …, α_B]    (8)

where T is a three-dimensional tensor projecting each embedding onto B different subspaces and B is the number of different distance kernels to use.

Distances between the sentence embeddings s_1, …, s_k themselves are obtained with a similar procedure, but using a different tensor, to yield a vector β. These two distance vectors capture the two aspects we want our discriminator to focus on: α captures how well the caption set matches the image and β captures the diversity in the caption set. The two distance vectors are concatenated and multiplied with an output matrix, followed by a softmax, to yield the discriminator output probability that the caption set is drawn from the reference captions.
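A minibatch-discrimination-style distance kernel of this kind can be sketched as follows. All shapes and the helper name below are illustrative assumptions, not the paper's actual dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)

def distance_kernel(embeddings, T):
    """Minibatch-discrimination-style kernel (after Salimans et al., 2016).

    embeddings: (k, N) set of embeddings for ONE image's caption set.
    T: (N, A, B) projection tensor; B is the number of distance kernels.
    Returns a (k, B) matrix: for each member, the negative-exponential L1
    distances to the other members, summed over the rest of the set.
    """
    M = np.einsum('kn,nab->kab', embeddings, T)   # (k, A, B) projections
    k = M.shape[0]
    out = np.zeros((k, M.shape[2]))
    for i in range(k):
        for j in range(k):
            if i != j:
                # L1 distance per kernel b, passed through exp(-x)
                out[i] += np.exp(-np.abs(M[i] - M[j]).sum(axis=0))
    return out

sents = rng.normal(size=(5, 16))    # 5 caption embeddings of dim 16
T = rng.normal(size=(16, 8, 4))     # 4 distance kernels, 8-dim projections
d = distance_kernel(sents, T)       # (5, 4) per-caption similarity features
```

A degenerate caption set (identical captions) yields maximal kernel values, which is exactly the signal that lets the discriminator penalize a generator producing repeated captions.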

3.3 Adversarial Training

In adversarial training, the generator and the discriminator are trained alternately, each for a fixed number of steps per round. The discriminator tries to classify reference caption sets as real and generated caption sets as fake. In addition, we found it important to also train the discriminator to classify reference captions drawn from a random other image as fake. This forces the discriminator to learn to match images and captions, and not just rely on diversity statistics of the caption set. The complete loss function of the discriminator is defined by

L(D) = −E[log D(x, S_r)] − E[log(1 − D(x, S_p))] − E[log(1 − D(x, S_w))]    (9)

where x is the input image, S_r a set of reference captions for x, S_p a caption set produced by the generator, and S_w a set of reference captions drawn from a random other image.

The training objective of the generator is to fool the discriminator into classifying its caption sets as real. We found it helpful to additionally use the feature matching loss [35]. This loss trains the generator to match the activations induced by the generated and true data at some intermediate layer of the discriminator; in our case we use an L2 loss to match the expected values of the distance vectors α and β between real and generated data. The generator loss function is given by

L(G) = −E[log D(x, S_p)] + ‖E[α_r] − E[α_p]‖₂² + ‖E[β_r] − E[β_p]‖₂²    (10)

where the subscripts r and p denote distance vectors computed on reference and generated caption sets respectively, and the expectations are over a training mini-batch.
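The two objectives can be sketched numerically. This is a simplified stand-in, assuming scalar discriminator probabilities per caption set and an unweighted feature-matching term; the function names are ours:

```python
import numpy as np

def discriminator_loss(d_real, d_fake, d_wrong):
    """Cross-entropy over the three caption-set types: reference sets for
    the right image (label real), generated sets (label fake), and
    reference sets paired with a random wrong image (label fake)."""
    eps = 1e-8  # avoid log(0)
    return float(-(np.log(d_real + eps)
                   + np.log(1.0 - d_fake + eps)
                   + np.log(1.0 - d_wrong + eps)).mean())

def generator_loss(d_fake, feat_real, feat_fake):
    """Adversarial loss plus an L2 feature-matching term on the expected
    discriminator distance features over a mini-batch."""
    eps = 1e-8
    adv = float(-np.log(d_fake + eps).mean())
    fm = float(np.sum((feat_real.mean(axis=0) - feat_fake.mean(axis=0)) ** 2))
    return adv + fm
```

A discriminator that correctly separates the three set types incurs a lower loss than one that outputs 0.5 everywhere, and the generator loss grows whenever the batch-averaged distance features of generated sets drift away from those of real sets.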

4 Experimental Setup

We conduct all our experiments on the MS-COCO dataset [5]. The training set consists of  83k images with five human captions each. We use the publicly available test split of 5000 images [21] for all our experiments.  Section 5.4 uses a validation split of 5000 images.

For image feature extraction, we use activations from the res5c layer of the 152-layer ResNet [15] convolutional neural network (CNN) pre-trained on ImageNet; input images are rescaled to the network's input dimensions for feature extraction. Additionally, we use features from the VGG network [37] in our ablation study in Section 5.4. Following [36], we also extract 80-dimensional object detection features using a Faster Region-Based Convolutional Neural Network (RCNN) [32] trained on the 80 object categories in the COCO dataset. The CNN features are input to both the generator and the discriminator. Object detection features are input only to the generator and are used in all the generator models reported here.

4.1 Insights in Training the GAN

As is well known [3], we found GAN training to be sensitive to hyper-parameters. Here we discuss some settings which helped stabilize the training of our models.

We found it necessary to pre-train the generator using standard maximum likelihood training. Without pre-training, the generator gets stuck producing incoherent sentences made of random word sequences. We also found pre-training the discriminator on classifying correct image-caption pairs against random image-caption pairs helpful to achieve stable GAN training. We train the discriminator for 5 iterations for every generator update. We also periodically monitor the classification accuracy of the discriminator and train it further if it drops below 75%. This prevents the generator from updating using a bad discriminator.

Without the feature matching term in the generator loss, GAN training was unstable and needed an additional maximum likelihood update to stabilize it; this was also reported in [24]. With the feature matching loss, however, training is stable and the ML update is not needed.

GAN training was stable only within a fairly narrow range of Gumbel temperatures; within this range the results were not sensitive to the exact value, and we use a single fixed temperature in all experiments reported here. The softmax scaling factor in (2) is likewise fixed during training of all the adversarial models reported here, and the sampling results use the same setting.

5 Results

We conduct experiments to evaluate our adversarial caption generator w.r.t. two aspects: how human-like the generated captions are and how accurately they describe the contents of the image. Using diversity statistics and word usage statistics as a proxy for how closely the generated captions mirror the distribution of the human reference captions, we show that the adversarial model is more human-like than the baseline. Using human evaluation and automatic metrics, we also show that the captions generated by the adversarial model are similar to those of the baseline model in terms of correctness.

Henceforth, Base and Adv refer to the baseline and adversarial models, respectively. Suffixes bs and samp indicate decoding using beamsearch and sampling respectively.

5.1 Measuring if captions are human-like

Diversity. We analyze n-gram usage statistics, vocabulary sizes, and the other diversity metrics presented below to understand and measure the gaps between human-written captions and automatic methods, and show that adversarial training helps bridge some of these gaps.

To measure the corpus level diversity of the generated captions we use:

  • Vocabulary Size - number of unique words used in all generated captions

  • % Novel Sentences - percentage of generated captions not seen in the training set.

To measure diversity in a set of captions corresponding to a single image we use:

  • Div-1 - ratio of the number of unique unigrams in the caption set to the total number of words in the set. Higher is more diverse.

  • Div-2 - ratio of the number of unique bigrams in the caption set to the total number of words in the set. Higher is more diverse.

  • mBleu - the Bleu score is computed between each caption in the set and the rest; the mean of these Bleu scores is the mBleu score. Lower values indicate more diversity.
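These per-image diversity metrics can be sketched as follows. The mBleu variant below is a simplified proxy (plain n-gram precision without the brevity penalty of full BLEU), and the function names are ours:

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def div_n(captions, n):
    """Div-n: unique n-grams in the caption set / total words in the set."""
    toks = [c.split() for c in captions]
    uniq = set(g for t in toks for g in ngrams(t, n))
    total_words = sum(len(t) for t in toks)
    return len(uniq) / total_words

def mbleu(captions, n=4):
    """Simplified mBleu: mean n-gram precision of each caption against the
    rest of the set (no brevity penalty). Lower means more diverse."""
    scores = []
    for i, c in enumerate(captions):
        cand = ngrams(c.split(), n)
        refs = Counter(g for j, cap in enumerate(captions) if j != i
                       for g in ngrams(cap.split(), n))
        if not cand:
            continue
        hits = sum(min(cnt, refs[g]) for g, cnt in Counter(cand).items())
        scores.append(hits / len(cand))
    return sum(scores) / len(scores) if scores else 0.0
```

A set of identical captions scores 1.0 on this mBleu proxy and low on Div-n, while a fully distinct set scores 0.0 on mBleu, matching the intended direction of each metric.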

Correctness. Generating diverse captions is not useful if they do not correctly describe the content of an image. To measure the correctness of the generated captions we use two automatic evaluation metrics, Meteor [8] and SPICE [1]. However, since automatic metrics do not always correlate well with human judgments of correctness, we also report results from human evaluations comparing the baseline model to our adversarial model.

5.2 Comparing caption accuracy

Table 1 presents the comparison of our adversarial model to the baseline model. Both the baseline and the adversarial models use ResNet features. The beamsearch results use beam size 5, and the sampling results take the best of 5 samples, where the best caption is the one assigned the highest probability by the model.
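The best-of-5 decoding can be written generically as a sample-and-rank step. The toy sampler and scores below are purely illustrative stand-ins for the model's sampler and its log-probabilities:

```python
def best_of_k(sample_fn, score_fn, k=5):
    """Best-of-k decoding: draw k caption samples from the model and keep
    the one the model itself scores highest (e.g. by log-probability)."""
    samples = [sample_fn() for _ in range(k)]
    return max(samples, key=score_fn)

# Hypothetical stand-ins: a cyclic "sampler" and fake model log-probs.
candidates = ["a cat on a mat", "a cat sitting on a mat", "cat mat"]
scores = {"a cat on a mat": -2.1,
          "a cat sitting on a mat": -2.6,
          "cat mat": -5.0}
it = iter(candidates * 2)
pick = best_of_k(lambda: next(it), scores.get, k=5)
```

Unlike beam search, this keeps the stochasticity of sampling while still returning a single high-probability caption.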

Table 1 also shows the metrics for some recent methods from the image captioning literature. The purpose of this comparison is to illustrate that our baseline is strong and competitive with recently published work, as seen from the Meteor and Spice metrics.

Comparing the baseline and adversarial models in Table 1, the adversarial model does worse in terms of Meteor and overall Spice metrics. However, looking at Spice scores on individual categories in Table 2, we see that the adversarial models excel relative to the baseline at counting and at describing the size of an object correctly.

It is well known, however, that automatic metrics do not always correlate with human judgments of caption correctness. A primary reason the adversarial models do poorly on automatic metrics is that they produce significantly more unique sentences, using a much larger vocabulary and rarer n-grams, as shown in Section 5.3. Thus, they are less likely to do well on metrics relying on n-gram matches.

To verify this claim, we conduct human evaluations comparing captions from the baseline and the adversarial model. Human evaluators from Amazon Mechanical Turk are shown an image and a caption each from the two models and are asked “Judge which of the two sentences is a better description of the image (w.r.t. correctness and relevance)!”. The choices were either of the two sentences or to report that they are the same. Results from this evaluation are presented in Table 3. We can see that both adversarial and baseline models perform similarly, with adversarial models doing slightly better. This shows that despite the poor performance in automatic evaluation metrics, the adversarial models produce captions that are similar, or even slightly better, in accuracy to the baseline model.

Method          Meteor  Spice
ATT-FCN [45]    0.243   -
MSM [44]        0.251   -
KWL [26]        0.266   0.194
Ours Base-bs    0.272   0.187
Ours Base-samp  0.265   0.186
Ours Adv-bs     0.239   0.167
Ours Adv-samp   0.236   0.166
Table 1: Meteor and Spice metrics comparing the performance of the baseline and adversarial models.
Spice category
Method     Color  Attribute  Object  Relation  Count  Size
Base-bs    0.101  0.085      0.345   0.049     0.025  0.034
Base-samp  0.059  0.069      0.352   0.052     0.032  0.033
Adv-bs     0.079  0.082      0.318   0.034     0.080  0.052
Adv-samp   0.078  0.082      0.316   0.033     0.076  0.053
Table 2: Comparing baseline and adversarial models on individual categories of the Spice metric.
Comparison  Adversarial Better  Adversarial Worse
Beamsearch  36.9                34.8
Sampling    35.7                33.2
Table 3: Human evaluation of caption correctness comparing the adversarial model to the baseline on 482 random samples. Numbers are percentages of images with agreement of at least 3 out of 5 judges. Judges agreed in 89.2% and 86.7% of images in the beamsearch and sampling cases respectively.

5.3 Comparing vocabulary statistics

Figure 5: Comparison of n-gram count ratios in the test-set captions generated by different models. The left side shows mean n-gram count ratios as a function of counts in the training set; the right side shows histograms of the count ratios.

To characterize how well the captions produced by the automatic methods match the statistics of the human-written captions, we look at n-gram usage statistics in the generated captions. Specifically, we compute the ratio of the actual count of an n-gram in the caption set produced by a model to the expected n-gram count based on the training data.

Given that an n-gram occurs c times in the training set, we can expect it to occur roughly c times, scaled by the relative corpus sizes, in the test set. Actual counts may vary depending on how different the test set is from the training set, so we also compute these ratios for the reference captions in the test set to get an estimate of the expected variance of the count ratios.
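The count-ratio statistic can be sketched as follows (a minimal version; tokenization by whitespace and the function name are our simplifying assumptions):

```python
from collections import Counter

def count_ratios(train_captions, test_captions, n=1):
    """Ratio of actual n-gram counts in a test/generated caption set to
    the counts expected from training-set frequencies (scaled by the
    relative corpus sizes). Ratios near 1.0 mean well-matched statistics;
    > 1.0 means an n-gram is overused, < 1.0 underused."""
    def counts(caps):
        c = Counter()
        for cap in caps:
            toks = cap.split()
            c.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
        return c
    tr, te = counts(train_captions), counts(test_captions)
    scale = sum(te.values()) / sum(tr.values())   # corpus-size scaling
    return {g: te[g] / (tr[g] * scale) for g in tr if g in te}
```

A generator biased toward frequent training n-grams shows ratios above 1.0 for common n-grams and below 1.0 (or missing entries) for rare ones, which is the pattern Figure 5 plots.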

Method          n       Div-1  Div-2  mBleu-4  Vocabulary  % Novel Sentences
Base-bs         1 of 5  -      -      -        756         34.18
Base-bs         5 of 5  0.28   0.38   0.78     1085        44.27
Base-samp       1 of 5  -      -      -        839         52.04
Base-samp       5 of 5  0.31   0.44   0.68     1460        55.24
Adv-bs          1 of 5  -      -      -        1508        68.62
Adv-bs          5 of 5  0.34   0.44   0.70     2176        72.53
Adv-samp        1 of 5  -      -      -        1616        73.92
Adv-samp        5 of 5  0.41   0.55   0.51     2671        79.84
Human captions  1 of 5  -      -      -        3347        92.80
Human captions  5 of 5  0.53   0.74   0.20     7253        95.05
Table 4: Diversity statistics described in Section 5.1. Higher values correspond to more diversity in all except mBleu-4, where lower is better.
Adv-bs: a group of friends enjoying a dinner at the restaurant / several cows in their pen at the farm / a dog is trying to get something out of the snow
Base-bs: a group of people sitting around a wooden table / a herd of cattle standing next to each other / a couple of dogs that are in the snow
Figure 6: Some qualitative examples comparing captions generated by our model to the baseline model.

The left side of Figure 5 shows the mean count ratios for uni-, bi- and tri-grams in the captions generated on the test set, plotted against occurrence counts in the training set. Histograms of these ratios are shown on the right side.

Count ratios for the reference captions from the test set are shown in green. We see that the n-gram counts match well between the training and test set human captions, and the count ratios are spread around 1.0 with a small variance.

The baseline model shows a clear bias towards more frequently occurring n-grams. It consistently overuses more frequent n-grams (ratio > 1.0) from the training set and under-uses less frequent ones (ratio < 1.0). This trend is seen in all three plots, with more frequent tri-grams particularly prone to overuse. The histogram plots of the count ratios also show that the baseline model does a poor job of matching the statistics of the test set.

Our adversarial model does a much better job of matching these statistics. The histogram of the uni-gram count ratios is clearly closer to that of the test reference captions. The model does not significantly over-use popular words, although there is still a trend of under-utilizing some of the rarer words. It is, however, clearly better than the baseline model in this aspect. The improvement is less pronounced for bi- and tri-grams, but still present.

Another clear benefit of adversarial training is the increased diversity in the captions produced by the model. Both the global and the per-image diversity statistics are much higher for captions produced by the adversarial models than for the baseline models. This result is presented in Table 4. We can see that the vocabulary size approximately doubles, from 1085 in the baseline model to 2176 in the adversarial model using beam search. A similar trend is seen when comparing the sampling variants. As expected, sampling from the adversarial model yields more diversity than beam search, with the vocabulary size increasing to 2671 in Adv-samp. The effect of this increased diversity can be seen in the qualitative examples shown in Figure 6. More qualitative samples are included in the supplementary material.

We can also see that the adversarial model learns to construct significantly more novel sentences than the baseline model, with Adv-bs producing novel captions 72.53% of the time compared to just 44.27% for Base-bs. All three per-image diversity statistics also improve in the adversarial models, indicating that they can produce a more diverse set of captions for any input image.
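The diversity statistics in Table 4 can be approximated with simple set arithmetic. A minimal sketch (hypothetical helper names; we assume Div-n is the ratio of distinct to total n-grams in an image's caption set, and we omit mBleu-4, which additionally requires a BLEU implementation):

```python
def div_n(caption_set, n):
    """Per-image Div-n: distinct n-grams divided by total n-grams in the
    set of captions generated for one image (higher = more diverse)."""
    grams = [tuple(t[i:i + n]) for t in caption_set
             for i in range(len(t) - n + 1)]
    return len(set(grams)) / max(len(grams), 1)

def global_stats(all_captions, train_sentences):
    """Global vocabulary size and % novel sentences over the test set.
    train_sentences: set of tokenized training captions (as tuples)."""
    vocab = {w for tokens in all_captions for w in tokens}
    novel = sum(tuple(t) not in train_sentences for t in all_captions)
    return len(vocab), 100.0 * novel / len(all_captions)
```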

Table 4 also shows the diversity statistics of the reference captions on the test set. Although the adversarial models do considerably better than the baseline, there is still a gap in diversity statistics compared to human-written captions, especially in vocabulary size.

Finally, Figure 7 plots the vocabulary size as a function of the word count threshold k. We see that the curve for the adversarial model matches the human-written captions better than the baseline for all values of k. This illustrates that the gains in vocabulary size in the adversarial models do not arise from using words of a specific frequency, but are instead distributed evenly across word frequencies.
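The curve in Figure 7 is obtained by counting, for each threshold k, how many words occur at least k times in the generated captions. A small sketch (illustrative names only, not the authors' code):

```python
from collections import Counter

def vocab_vs_threshold(captions, ks=(1, 2, 5, 10)):
    """Vocabulary size when only words used at least k times are counted."""
    counts = Counter(w for tokens in captions for w in tokens)
    return {k: sum(c >= k for c in counts.values()) for k in ks}
```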

Figure 7: Vocabulary size as a function of word counts.

5.4 Ablation Study

We conducted experiments to understand the importance of the different components of our architecture; the results are presented in Table 5. The baseline model for this experiment uses VGG [37] features as input and is trained with the maximum likelihood loss (first row of Table 5). The other four models use adversarial training.

Comparing rows 1 and 2 of Table 5, we see that adversarial training with a discriminator evaluating a single caption performs badly: both diversity and the Meteor score drop compared to the baseline. In this setting the generator can get away with producing one good caption per image (mode collapse), as the discriminator is unable to penalize the lack of diversity in the generator.

However, comparing rows 1 and 3, we see that adversarial training with a discriminator evaluating five captions simultaneously does much better in terms of Div-2 and vocabulary size. Adding the feature matching loss further improves diversity and also slightly improves accuracy in terms of Meteor score. Thus, simultaneously evaluating multiple captions and using the feature matching loss allows us to alleviate the mode collapse commonly observed in GANs.
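Feature matching [35] trains the generator to match the statistics of an intermediate discriminator layer on real and generated data, rather than directly maximizing the discriminator's confusion. A minimal NumPy sketch of the loss (the batch layout and function name are our illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def feature_matching_loss(f_real, f_fake):
    """Squared L2 distance between the mean discriminator features of a
    real and a generated batch; f_real, f_fake: shape (batch, feat_dim)."""
    return float(np.sum((f_real.mean(axis=0) - f_fake.mean(axis=0)) ** 2))
```

Because the loss compares only batch-level statistics, it provides a smoother training signal for the generator and discourages collapsing onto a single caption.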

Upgrading the image features to ResNet [15] greatly increases the Meteor score and slightly increases the vocabulary size. ResNet features provide richer visual information, which the generator uses to produce diverse but still correct captions.

We also notice that the generator learns to ignore the input noise. This is because there is already sufficient stochasticity in the generation process due to the sequential sampling of words, so the generator does not need the additional noise input to increase output diversity. A similar observation was reported for other conditional GAN architectures [18, 29].

| Image Feature  | Eval-set size (p) | Feature Matching | Meteor | Div-2 | Vocab. Size |
|----------------|-------------------|------------------|--------|-------|-------------|
| VGG (baseline) | –                 | –                | 0.247  | 0.44  | 1367        |
| VGG            | 1                 | No               | 0.179  | 0.40  | 812         |
| VGG            | 5                 | No               | 0.197  | 0.52  | 1810        |
| VGG            | 5                 | Yes              | 0.207  | 0.59  | 2547        |
| ResNet         | 5                 | Yes              | 0.236  | 0.55  | 2671        |

Table 5: Performance comparison of various configurations of the adversarial caption generator on the validation set.

6 Conclusions

We have presented an adversarial caption generator model which is explicitly trained to generate diverse captions for images. We achieve this by utilizing a discriminator network designed to promote diversity and by using the adversarial learning framework to train our generator. Results show that our adversarial model produces captions which are diverse and match the statistics of human-generated captions significantly better than the baseline model. The adversarial model also uses a larger vocabulary and is able to produce significantly more novel captions. The increased diversity is achieved while preserving the accuracy of the generated captions, as shown through a human evaluation.

Acknowledgements

This research was supported by the German Research Foundation (DFG CRC 1223) and by the Berkeley Artificial Intelligence Research (BAIR) Lab.

References

  • [1] P. Anderson, B. Fernando, M. Johnson, and S. Gould. SPICE: Semantic propositional image caption evaluation. In Proceedings of the European Conference on Computer Vision (ECCV), 2016.
  • [2] J. Andreas and D. Klein. Reasoning about pragmatics with neural listeners and speakers. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2016.
  • [3] M. Arjovsky and L. Bottou. Towards principled methods for training generative adversarial networks. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.
  • [4] S. Bengio, O. Vinyals, N. Jaitly, and N. Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems (NIPS), 2015.
  • [5] X. Chen, H. Fang, T.-Y. Lin, R. Vedantam, S. Gupta, P. Dollár, and C. L. Zitnick. Microsoft COCO captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325, 2015.
  • [6] COCO. Microsoft COCO Image Captioning Challenge. https://competitions.codalab.org/competitions/3221#results, 2017.
  • [7] B. Dai, D. Lin, R. Urtasun, and S. Fidler. Towards diverse and natural image descriptions via a conditional GAN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017.
  • [8] M. Denkowski and A. Lavie. Meteor universal: Language specific translation evaluation for any target language. ACL 2014, 2014.
  • [9] E. L. Denton, S. Chintala, R. Fergus, et al. Deep generative image models using a laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems (NIPS), 2015.
  • [10] J. Devlin, H. Cheng, H. Fang, S. Gupta, L. Deng, X. He, G. Zweig, and M. Mitchell. Language models for image captioning: The quirks and what works. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2015.
  • [11] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
  • [12] J. Donahue, L. A. Hendricks, M. Rohrbach, S. Venugopalan, S. Guadarrama, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016.
  • [13] A. Farhadi, M. Hejrati, M. Sadeghi, P. Young, C. Rashtchian, J. Hockenmaier, and D. Forsyth. Every picture tells a story: Generating sentences from images. In Proceedings of the European Conference on Computer Vision (ECCV), 2010.
  • [14] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), 2014.
  • [15] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • [16] L. A. Hendricks, Z. Akata, M. Rohrbach, J. Donahue, B. Schiele, and T. Darrell. Generating visual explanations. In Proceedings of the European Conference on Computer Vision (ECCV), 2016.
  • [17] F. Huszar. How (not) to train your generative model: Scheduled sampling, likelihood, adversary? arXiv preprint arXiv:1511.05101, 2015.
  • [18] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [19] U. Jain, Z. Zhang, and A. Schwing. Creativity: Generating diverse questions using variational autoencoders. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [20] E. Jang, S. Gu, and B. Poole. Categorical reparameterization with Gumbel-Softmax. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.
  • [21] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
  • [22] G. Kulkarni, V. Premraj, V. Ordonez, S. Dhar, S. Li, Y. Choi, A. C. Berg, and T. L. Berg. Babytalk: Understanding and generating simple image descriptions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(12), 2013.
  • [23] J. Li, M. Galley, C. Brockett, J. Gao, and B. Dolan. A diversity-promoting objective function for neural conversation models. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2016.
  • [24] J. Li, W. Monroe, T. Shi, A. Ritter, and D. Jurafsky. Adversarial learning for neural dialogue generation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017.
  • [25] S. Liu, Z. Zhu, N. Ye, S. Guadarrama, and K. Murphy. Optimization of image description metrics using policy gradient methods. arXiv preprint arXiv:1612.00370, 2016.
  • [26] J. Lu, C. Xiong, D. Parikh, and R. Socher. Knowing when to look: Adaptive attention via a visual sentinel for image captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [27] P. Luc, C. Couprie, S. Chintala, and J. Verbeek. Semantic segmentation using adversarial networks. In Advances in Neural Information Processing Systems Workshops (NIPS Workshops), 2016.
  • [28] C. J. Maddison, A. Mnih, and Y. W. Teh. The concrete distribution: A continuous relaxation of discrete random variables. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.
  • [29] M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error. Proceedings of the International Conference on Learning Representations (ICLR), 2016.
  • [30] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. Proceedings of the International Conference on Learning Representations (ICLR), 2016.
  • [31] M. Ranzato, S. Chopra, M. Auli, and W. Zaremba. Sequence level training with recurrent neural networks. In Proceedings of the International Conference on Learning Representations (ICLR), 2016.
  • [32] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems (NIPS), 2015.
  • [33] S. J. Rennie, E. Marcheret, Y. Mroueh, J. Ross, and V. Goel. Self-critical sequence training for image captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [34] M. Rohrbach, W. Qiu, I. Titov, S. Thater, M. Pinkal, and B. Schiele. Translating video content to natural language descriptions. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013.
  • [35] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. In Advances in Neural Information Processing Systems (NIPS), 2016.
  • [36] R. Shetty, H. R-Tavakoli, and J. Laaksonen. Exploiting scene context for image captioning. In ACMMM Vision and Language Integration Meets Multimedia Fusion Workshop, 2016.
  • [37] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations (ICLR), 2015.
  • [38] R. S. Sutton and A. G. Barto. Reinforcement learning: An introduction, volume 1. MIT press Cambridge, 1998.
  • [39] R. Vedantam, C. Lawrence Zitnick, and D. Parikh. CIDEr: Consensus-based image description evaluation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
  • [40] A. K. Vijayakumar, M. Cogswell, R. R. Selvaraju, Q. Sun, S. Lee, D. Crandall, and D. Batra. Diverse beam search: Decoding diverse solutions from neural sequence models. arXiv preprint arXiv:1610.02424, 2016.
  • [41] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
  • [42] Z. Wang, F. Wu, W. Lu, J. Xiao, X. Li, Z. Zhang, and Y. Zhuang. Diverse image captioning via grouptalk. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2016.
  • [43] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4), 1992.
  • [44] T. Yao, Y. Pan, Y. Li, Z. Qiu, and T. Mei. Boosting image captioning with attributes. arXiv preprint arXiv:1611.01646, 2016.
  • [45] Q. You, H. Jin, Z. Wang, C. Fang, and J. Luo. Image captioning with semantic attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • [46] L. Yu, W. Zhang, J. Wang, and Y. Yu. SeqGAN: Sequence generative adversarial nets with policy gradient. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2017.

7 Supplementary Material

We present several qualitative examples to illustrate the strengths of our adversarially trained caption generator. All the examples are from the sampled versions of the adversarial (adv-samp) and the baseline (base-samp) models presented above. We show qualitative examples to highlight two main merits of the adversarial caption generator. First, we demonstrate diversity when sampling multiple captions for each image in Section 7.1. Next, we illustrate diversity across images in Section 7.2.

7.1 Illustrating diversity in captions for each image

To qualitatively demonstrate the diverse captions produced by our adversarial model, we visualize three captions produced by the adversarial and the baseline model for each input image in Figures 8 and 9. The captions are obtained by retaining the top three caption samples out of five (ranked by the models' probability) from each model. Captions which are replicas of training-set captions are marked with a '•' symbol. We can see that the adversarial generator produces a more diverse set of captions for each image, relying less on frequent bi-grams (e.g., "a group" and "group of") and producing more novel sentences. For example, across the two figures, the baseline model produces 22 captions (out of 45) which are copies from the training set, whereas the adversarial model does so only six times.

Adv-samp: • a red motorcycle parked on the side of the road | a motor cycle parked outside a building with people nearby | a motorcycle parked in front of a group of people
• a motorcycle is parked on a city street | • a motor bike parked in front of a building | a police officer on a motorcycle in front of a crowd of people
a red motorcycle parked on a street in a city | a row of bicycles parked outside of a building | a police officer on his motorcycle in front of a crowd
Base-samp: • a man riding a motorcycle down a street | • a group of people riding bikes down a street | • a motorcycle parked on the side of a road
• a person riding a motorcycle on a city street | a group of people on a street with motorcycles | a motorcycle is parked on the side of a road
• a motorcycle is parked on the side of the road | a group of people on a street with motorcycles | a motorcycle parked on the side of a road with a person walking by
Adv-samp: a skier is jumping over a snow covered hill | a group of people watching a skateboarder do stunts | • a stop sign with graffiti written on it
a person on skis jumping over a hill | a group of skateboarders performing tricks at a skate park | a stop sign with a few stickers on it
a skier is in mid air after completing a jump | a group of skateboarders watch as others watch | a stop sign has graffiti written all over it
Base-samp: • a man riding skis down a snow covered slope | • a man doing a trick on a skateboard at a skate park | a stop sign with a street sign on top of it
• a man riding skis down a snow covered slope | • a man riding a skateboard down a rail | a stop sign with a street sign on top of it
a person on a snowboard jumping over a ramp | • a man doing a trick on a skateboard in a park | • a stop sign with a street sign above it
Adv-samp: a man standing next to a mail truck | a bouquet of flowers in a vase on a table | a plant in a vase on a wooden porch
a picture of people standing outside a business | a bouquet of flowers in a vase on a table | a small blue flower vase sitting on a wooden porch
a couple of people standing by a bus | a vase full of purple flowers sitting on a table | a large flower arrangement in a vase on a corner
Base-samp: a group of people standing around a white truck | • a vase filled with flowers sitting on top of a table | • a vase with flowers in it sitting on a table
a group of people standing around a white truck | • a vase filled with flowers sitting on top of a table | • a vase of flowers sitting on a table
a group of people standing on a street next to a white truck | • a vase of flowers sitting on a table | • a vase with flowers in it on a table
Figure 8: Comparing three captions sampled from the adversarial model and the baseline model for each image. Captions which are replicas of training-set captions are marked with a •.
Adv-samp: a long line of stairs leading to a church | a large church with a very tall tower | several stop signs in front of some buildings
• a large cathedral filled with lots of pews | a large tall brick building with a clock on it | a stop sign in front of some graffiti writing
a cathedral with stained glass windows and a few people | a church steeple with a clock on its side | several stop signs lined up in a row
Base-samp: a large building with a large window and a building | a large clock tower in a city | • a stop sign with graffiti on it
a row of benches in front of a building | a large clock tower in a city with a sky background | a stop sign is shown with a lot of graffiti on it
a church with a large window and a large building | • a large clock tower in the middle of a park | a stop sign and a stop sign in front of a building
Adv-samp: a family enjoying pizza at a restaurant party | a view of a city street at dusk | a white toilet sitting underneath a shower curtain
a group of friends enjoying pizza and drinking beer | a city street with many buildings and buildings | • a white toilet sitting underneath a bathroom window
a group of kids enjoying pizza and drinking beer | a view of a city intersection in the evening | a bathroom with a shower curtain open and a toilet in it
Base-samp: a group of people sitting at a table with pizza | a traffic light in a city with tall buildings | • a bathroom with a toilet and a shower
a group of people sitting at a table with pizza and drinks | a traffic light in a city with tall buildings | • a bathroom with a toilet and a shower
a group of people sitting at a table with pizza | • a traffic light and a street sign on a city street | a white toilet sitting next to a shower in a bathroom
Figure 9: Comparing three captions sampled from the adversarial model and the baseline model for each image. Captions which are replicas of training-set captions are marked with a •.

7.2 Illustrating diversity across images

The adversarial model produces diverse captions across different images containing similar content, whereas the baseline model tends to use common generic captions which are mostly correct but not descriptive. We quantify this by looking at the most frequently generated captions by the baseline model on the test set in Table 6. Note that here we consider only the most likely caption according to each model. Table 6 shows that the baseline model tends to repeatedly generate identical captions for different images. In comparison, the adversarial model is less prone to repeating generic captions, as seen in Table 7. This is visualized in Figures 10, 11 and 12. Here we show sets of five images for which the baseline model generates an identical generic caption. The five images are picked from among the images corresponding to captions in Table 6, starting from the most frequent caption. Some entries, for example the caption in the last row, are skipped to avoid repeated concepts. While the baseline model produces a fixed generic caption for these images, the adversarial model produces diverse captions which are often more specific to the contents of the image. For example, in the last row of Figure 10 the baseline model produces the generic caption "a man riding skis down a snow covered slope", whereas the captions produced by the adversarial model include more image-specific terms like "jumping", "turn", "steep" and "cross country skiing".
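The repetition counts in Tables 6 and 7 amount to counting identical generated strings across test images; a short sketch (hypothetical helper name):

```python
from collections import Counter

def most_repeated(captions, top=10):
    """Most frequently generated identical captions; `captions` holds one
    generated string per test image."""
    return Counter(captions).most_common(top)
```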

| Sentence                                           | # baseline | # adversarial |
|----------------------------------------------------|------------|---------------|
| a man riding a wave on top of a surfboard          | 54         | 20            |
| a bathroom with a toilet and a sink                | 44         | 1             |
| a baseball player swinging a bat at a ball         | 37         | 2             |
| a man riding skis down a snow covered slope        | 29         | 5             |
| a man holding a tennis racquet on a tennis court   | 26         | 3             |
| a bathroom with a sink and a mirror                | 25         | 3             |
| a man riding a snowboard down a snow covered slope | 24         | 4             |
| a baseball player holding a bat on a field         | 22         | 1             |
| a man riding a skateboard down a street            | 21         | 2             |
| a bus is parked on the side of the road            | 20         | 0             |

Table 6: Most frequently repeated captions generated by the baseline model on the test set of 5000 images. The caption "a man riding a wave on top of a surfboard" is also the most frequently generated caption by the adversarial model, albeit less than half as often as by the baseline model.
| Sentence                                               | # adversarial | # baseline |
|--------------------------------------------------------|---------------|------------|
| a man riding a wave on top of a surfboard              | 20            | 54         |
| a skateboarder is attempting to do a trick             | 16            | 0          |
| a female tennis player in action on the court          | 16            | 0          |
| a living room filled with furniture and a flat screen tv | 15          | 0          |
| a bus that is sitting in the street                    | 15            | 3          |
| a long long train on a steel track                     | 11            | 0          |
| a close up of a sandwich on a plate                    | 10            | 0          |
| a baseball player swinging at a pitched ball           | 10            | 0          |
| a bus that is driving down the street                  | 9             | 1          |
| a boat that is floating in the water                   | 8             | 3          |
Table 7: Most frequently repeated captions generated by the adversarial model on the test set of 5000 images.
Adv-samp: a surfer rides a large wave in the ocean | a surfer is falling off his board as he rides a wave | a person on a surfboard riding a wave | a man surfing on a surfboard in rough waters | a surfer rides a small wave in the ocean
Base-samp (all 5 images): a man riding a wave on top of a surfboard

Adv-samp: a bathroom with a walk in shower and a sink | a dirty bathroom with a broken toilet and sink | a view of a very nice looking rest room | a white toilet in a public restroom stall | a small bathroom has a broken toilet and a broken sink
Base-samp (all 5 images): a bathroom with a toilet and a sink

Adv-samp: a baseball player getting ready to swing at a pitch | a boy in a baseball uniform swinging a bat | a group of kids playing baseball on a field | a baseball game in progress with the batter up to swing | a crowd watches a baseball game being played
Base-samp (all 5 images): a baseball player swinging a bat at a ball

Adv-samp: a person on skis jumping over a ramp | a skier is making a turn on a course | a cross country skier makes his way through the snow | a skier is headed down a steep slope | a person cross country skiing on a trail
Base-samp (all 5 images): a man riding skis down a snow covered slope

Figure 10: Illustrating diversity across images.
Adv-samp: a tennis player gets ready to return a serve | two men dressed in costumes and holding tennis rackets | a tennis player hits the ball during a match | a male tennis player in action on the court | a man in white is about to serve a tennis ball
Base-samp (all 5 images): a man holding a tennis racquet on a tennis court

Adv-samp: a young boy riding a skateboard down a street | a skateboarder is attempting to do a trick | a boy wearing a helmet and knee pads riding a skateboard | a boy in white shirt doing a trick on a skateboard | a boy is skateboarding down a street in a neighborhood
Base-samp (all 5 images): a man riding a skateboard down a street

Adv-samp: a dish with noodles and vegetables in it | a plate of food that has some fried eggs on it | a meal consisting of rice meat and vegetables | a close up of some meat and potatoes | a plate with some meat and vegetables on it
Base-samp (all 5 images): a plate of food with meat and vegetables

Adv-samp: a group of people standing around a shop | a group of young people standing around talking on cell phones | a group of soldiers stand in front of microphones | a couple of women standing next to a man in front of a store | a group of people posing for a photo in formal wear
Base-samp (all 5 images): a group of people standing around a table

Figure 11: Illustrating diversity across images.
Adv-samp: a group of men sitting around a meeting room | a group of people sitting at a bar drinking wine | a group of friends enjoying lunch outdoors at a outdoor event | a group of people sitting at tables outside | a couple of men that are working on laptops
Base-samp (all 5 images): a group of people sitting around a table

Adv-samp: a laptop and a desktop computer sit on a desk | a person is working on a computer screen | a cup of coffee sitting next to a laptop | a laptop computer sitting on top of a desk next to a desktop computer | a picture of a computer on a desk
Base-samp (all 5 images): a laptop computer sitting on top of a desk

Figure 12: Illustrating diversity across images.