Automatically describing an image with a sentence is a long-standing challenge in computer vision and natural language processing. Due to recent progress in object detection, attribute classification, action recognition, etc., there is renewed interest in this area. However, evaluating the quality of descriptions has proven to be challenging. We propose a novel paradigm for evaluating image descriptions that uses human consensus. This paradigm consists of three main parts: a new triplet-based method of collecting human annotations to measure consensus, a new automated metric (CIDEr) that captures consensus, and two new datasets, PASCAL-50S and ABSTRACT-50S, that contain 50 sentences describing each image. Our simple metric captures human judgment of consensus better than existing metrics across sentences generated by various sources. We also evaluate five state-of-the-art image description approaches using this new protocol and provide a benchmark for future comparisons. A version of CIDEr named CIDEr-D is available as part of the MS COCO evaluation server to enable systematic evaluation and benchmarking.
Recent advances in object detection, attribute classification, and action recognition have increased the interest in solving higher-level scene understanding problems. One such problem is generating human-like descriptions of an image. In spite of the growing interest in this area, the evaluation of novel sentences generated by automatic approaches remains challenging. Evaluation is critical for measuring progress and spurring improvements in the state of the art. This has already been shown in various problems in computer vision, such as detection [13, 7], segmentation [13, 28], and stereo.
Existing evaluation metrics for image description attempt to measure several desirable properties. These include grammaticality, saliency (covering the main aspects), correctness/truthfulness, etc. Using human studies, these properties may be measured, e.g., on separate one-to-five [29, 37, 43, 11] or pairwise scales. Unfortunately, combining these various results into one measure of sentence quality is difficult. Alternatively, other works [22, 18] ask subjects to judge the overall quality of a sentence.
An important yet non-obvious property exists when image descriptions are judged by humans: what humans like often does not correspond to what is human-like. (This is a subtle but important distinction; we show qualitative examples of it in the appendix.) That is, the sentence that is most similar to a typical human-generated description is often not judged to be the "best" description. In this paper, we propose to directly measure the "human-likeness" of automatically generated sentences. We introduce a novel consensus-based evaluation protocol, which measures the similarity of a sentence to the majority, or consensus, of how most people describe the image (Fig. 1). One realization of this evaluation protocol uses human subjects to judge sentence similarity between a candidate sentence and human-provided ground truth sentences. The question "Which of two sentences is more similar to this other sentence?" is posed to the subjects. The resulting quality score is based on how often a sentence is labeled as being more similar to a human-generated sentence. The relative nature of the question helps make the task objective. A similar protocol has been used previously to capture human perception of image similarity. These annotation protocols for similarity may be understood as instantiations of 2AFC (two-alternative forced choice), a popular modality in psychophysics.
Since human studies are expensive, hard to reproduce, and slow to evaluate, automatic evaluation measures are commonly desired. To be useful in practice, automated metrics should agree well with human judgment. Some popular metrics used for image description evaluation are BLEU (precision-based) from the machine translation community and ROUGE (recall-based) from the summarization community. Unfortunately, these metrics have been shown to correlate weakly with human judgment [22, 11, 4, 18]. For the task of judging the overall quality of a description, the METEOR metric has shown better correlation with human subjects. Other metrics rely on the ranking of captions and cannot evaluate novel image descriptions.
We propose a new automatic consensus metric of image description quality, CIDEr (Consensus-based Image Description Evaluation). Our metric measures the similarity of a generated sentence to a set of ground truth sentences written by humans, and shows high agreement with consensus as assessed by humans. By using sentence similarity, the notions of grammaticality, saliency, importance, and accuracy (precision and recall) are inherently captured by our metric.
Existing datasets popularly used to evaluate image description approaches have a maximum of only five descriptions per image [35, 18, 32]. However, we find that five sentences are not sufficient for measuring how a "majority" of humans would describe an image. Thus, to accurately measure consensus, we collect two new evaluation datasets containing 50 descriptions per image, PASCAL-50S and ABSTRACT-50S. The PASCAL-50S dataset is based on the popular UIUC Pascal Sentence Dataset, which has 5 descriptions per image. This dataset has been used for both training and testing in numerous works [29, 22, 14, 37]. The ABSTRACT-50S dataset is based on the dataset of Zitnick and Parikh. While previous methods have only evaluated using 5 sentences, we explore the use of 1 to 50 reference sentences. Interestingly, we find that most metrics improve in performance with more sentences (except BLEU computed on unigrams). Inspired by this finding, the MS COCO testing dataset now contains 5K images with 40 reference sentences each to boost the accuracy of automatic measures.
Contributions: In this work, we propose a consensus-based evaluation protocol for image descriptions. We introduce a new annotation modality for human judgment, a new automated metric, and two new datasets. We compare the performance of five state-of-the-art machine generation approaches [29, 22, 14, 37]. Our code and datasets are available on the authors' webpages. Finally, to facilitate the adoption of this protocol, we have made CIDEr available as a metric on the newly released MS COCO caption evaluation server.
Vision and Language: Numerous papers have studied the relationship between language constructs and image content. Berg et al. characterize the relative importance of objects (nouns). Zitnick and Parikh study relationships between visual and textual features by creating a synthetic Abstract Scenes Dataset. Other works have modeled prepositional relationships, attributes (adjectives) [23, 34], and visual phrases (i.e., visual elements that co-occur). Recent works have utilized techniques in deep learning to learn joint embeddings of text and image fragments.
Image Description Generation: Various methods have been explored for generating full descriptions for images. Broadly, the techniques are either retrieval- [14, 32, 18] or generation-based [29, 22, 44, 37]. While some retrieval-based approaches use global retrieval, others retrieve text phrases and stitch them together in an approach inspired by extractive summarization. Recently, generative approaches based on combinations of Convolutional and Recurrent Neural Networks [19, 6, 10, 42, 27, 21] have created a lot of excitement. Other generative approaches have explored creating sentences by inference over image detections and text-based priors, or by exploiting word co-occurrences using syntactic trees. Rohrbach et al. propose a machine translation approach that goes from an intermediate semantic representation to sentences. Some other approaches include [17, 24, 43, 44]. Most of the approaches use the UIUC Pascal Sentence [14, 22, 29, 37, 17] and the MS COCO datasets [19, 6, 10, 42, 27, 21] for evaluation. In this work we focus on the problem of evaluating image captioning approaches.
Automated Evaluation Metrics: Automated evaluation metrics have been used in many domains within Artificial Intelligence (AI), such as statistical machine translation and text summarization. Some of the popular metrics in machine translation include those based on precision, such as BLEU, and those based on precision as well as recall, such as METEOR. While BLEU (BiLingual Evaluation Understudy) has been the most popular metric, its effectiveness has been repeatedly questioned [22, 11, 4, 18]. A popular metric in the summarization community is ROUGE (Recall-Oriented Understudy for Gisting Evaluation). This metric is primarily recall-based and thus has a tendency to reward long sentences with high recall. These metrics have been shown to have weak to moderate correlation with human judgment. Recently, METEOR has been used for image description evaluation with more promising results. Another metric, proposed by Hodosh et al., can only evaluate ranking-based approaches; it cannot evaluate novel sentences. We propose a consensus-based metric that rewards a sentence for being similar to the majority of human-written descriptions. Interestingly, similar ideas have been used previously to evaluate text summarization.
Datasets: Numerous datasets have been proposed for studying the problem of generating image descriptions. The most popular is the UIUC Pascal Sentence Dataset, which contains 5 human-written descriptions for each of 1,000 images and has been used by a number of approaches for both training and testing. The SBU captioned photo dataset contains one description per image for a million images mined from the web; these are commonly used for training image description approaches, which are then tested on a query set of 500 images with one sentence each. The Abstract Scenes dataset contains cartoon-like images with two descriptions. The recently released MS COCO dataset contains five sentences for a collection of over 100K images, and is gaining traction with recent image description approaches [19, 6, 10, 42, 27, 21]. Other datasets of images and associated descriptions include ImageClef and Flickr8K. In this work, we introduce two new datasets. The first is the PASCAL-50S dataset, where we collected 50 sentences per image for the 1,000 images from the UIUC Pascal Sentence Dataset. The second is the ABSTRACT-50S dataset, where we collected 50 sentences for a subset of 500 images from the Abstract Scenes Dataset. We demonstrate that more sentences per image are essential for reliable automatic evaluation.
The rest of this paper is organized as follows. We first give details of our triplet human annotation modality (Sec. 3). Then we provide the details of our consensus-based automated metric, CIDEr (Sec. 4). In Sec. 5 we provide the details of our two new image-sentence datasets, PASCAL-50S and ABSTRACT-50S. Our contributions of triplet annotation, metric and dataset make consensus-based image description evaluation feasible. Our results (Sec. 7) demonstrate that our automated metric and our proposed datasets capture consensus better than existing choices.
All our human studies are performed on Amazon Mechanical Turk (AMT). Subjects are restricted to the United States, and other qualification criteria are imposed based on worker history (approval rate greater than 95%, minimum 500 HITs approved).
Given an image and a collection of human-generated reference sentences describing it, the goal of our consensus-based protocol is to measure the similarity of a candidate sentence to the consensus of how most people describe the image (i.e., the reference sentences). In this section, we describe our human study protocol for generating ground truth consensus scores. In Sec. 7, these ground truth scores are used to evaluate several automatic metrics, including our proposed CIDEr metric.
An illustration of our human study interface is shown in Fig. 2. Subjects are shown three sentences: A, B and C. They are asked to pick which of two sentences (B or C) is more similar to sentence A. Sentences B and C are two candidate sentences, while sentence A is a reference sentence. For each choice of B and C, we form triplets using all the reference sentences for an image. We provide no explicit concept of "similarity". Interestingly, even though we do not say that the sentences are image descriptions, some workers commented that they were imagining the scene to make the choice. The relative nature of the task, "Which of the two sentences, B or C, is more similar to A?", helps make the assessment more objective. That is, it is easier to judge whether one sentence is more similar than another to a given sentence than to provide an absolute rating from 1 to 5 of the similarity between two sentences.
We collect three human judgments for each triplet. For every triplet, we take the majority vote of the three judgments. For each pair of candidate sentences (B, C), we assign B the winner if it is chosen as more similar by a majority of triplets, and similarly for C. These pairwise relative rankings are used to evaluate the performance of the automated metrics. That is, when automatic metrics give both sentences B and C a score, we check whether B received a higher score or C. Accuracy is computed as the proportion of candidate pairs on which humans and the automatic metric agree on which of the two sentences is the winner.
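The vote aggregation and metric agreement described above can be sketched in a few lines (a minimal illustration with our own hypothetical function names, not the authors' code):

```python
from collections import Counter

def pair_winner(triplet_votes):
    """Majority vote: first over the 3 judgments within each triplet,
    then over the triplets formed from all reference sentences."""
    per_triplet = [Counter(votes).most_common(1)[0][0] for votes in triplet_votes]
    return Counter(per_triplet).most_common(1)[0][0]

def metric_accuracy(pairs, human_winners, metric_scores):
    """Fraction of candidate pairs on which a metric picks the same
    winner as the human consensus."""
    agree = 0
    for (b, c), human_pick in zip(pairs, human_winners):
        metric_pick = b if metric_scores[b] > metric_scores[c] else c
        agree += (metric_pick == human_pick)
    return agree / len(pairs)
```

For example, a pair whose three triplet votes favor B twice out of three times would be scored as a win for B before comparing against the metric's ranking.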
Our goal is to automatically evaluate, for an image $I_i$, how well a candidate sentence $c_i$ matches the consensus of a set of image descriptions $S_i = \{s_{i1}, \ldots, s_{im}\}$. All words in the sentences (both candidate and references) are first mapped to their stem or root forms; that is, "fishes", "fishing" and "fished" all get reduced to "fish". We represent each sentence using the set of $n$-grams present in it, where an $n$-gram $\omega_k$ is a sequence of one or more ordered words. In this paper we use $n$-grams containing one to four words.
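The $n$-gram representation can be sketched as follows (a simplified illustration: we tokenize by whitespace and omit the stemming step the paper applies, e.g. via the Porter stemmer):

```python
from collections import Counter

def ngram_counts(sentence, n_max=4):
    """Return counts h_k of all 1- to n_max-grams in a sentence.
    NOTE: real CIDEr first stems each token; this sketch skips that."""
    tokens = sentence.lower().split()
    counts = Counter()
    for n in range(1, n_max + 1):
        for i in range(len(tokens) - n + 1):
            counts[tuple(tokens[i:i + n])] += 1
    return counts
```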
Intuitively, a measure of consensus would encode how often $n$-grams in the candidate sentence are present in the reference sentences. Similarly, $n$-grams not present in the reference sentences should not be in the candidate sentence. Finally, $n$-grams that commonly occur across all images in the dataset should be given lower weight, since they are likely to be less informative. To encode this intuition, we perform a Term Frequency Inverse Document Frequency (TF-IDF) weighting for each $n$-gram $\omega_k$. The number of times an $n$-gram $\omega_k$ occurs in a reference sentence $s_{ij}$ is denoted by $h_k(s_{ij})$, or $h_k(c_i)$ for the candidate sentence $c_i$. We compute the TF-IDF weighting $g_k(s_{ij})$ for each $n$-gram $\omega_k$ using:

$$g_k(s_{ij}) = \frac{h_k(s_{ij})}{\sum_{\omega_l \in \Omega} h_l(s_{ij})} \log\left(\frac{|I|}{\sum_{I_p \in I} \min\big(1, \sum_q h_k(s_{pq})\big)}\right)$$
where $\Omega$ is the vocabulary of all $n$-grams and $I$ is the set of all images in the dataset. The first term measures the TF of each $n$-gram $\omega_k$, and the second term measures the rarity of $\omega_k$ using its IDF. Intuitively, TF places higher weight on $n$-grams that frequently occur in the reference sentences describing an image, while IDF reduces the weight of $n$-grams that commonly occur across all images in the dataset. That is, the IDF provides a measure of word saliency by discounting popular words that are likely to be less visually informative. The IDF is computed using the logarithm of the number of images in the dataset divided by the number of images for which $\omega_k$ occurs in any of their reference sentences.
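Under the definitions above, the TF-IDF weight $g_k$ can be sketched as follows (our own illustrative code; the data layout of `all_images_refs` is an assumption):

```python
import math

def tfidf_weights(sentence_counts, all_images_refs):
    """Compute g_k for every n-gram in one sentence.
    sentence_counts: dict mapping n-gram -> count h_k in this sentence.
    all_images_refs: list over images; each entry is a list of n-gram
    count dicts, one per reference sentence of that image."""
    num_images = len(all_images_refs)
    total = sum(sentence_counts.values())  # normalizer for the TF term
    weights = {}
    for ngram, h_k in sentence_counts.items():
        # document frequency: images whose references mention the n-gram
        df = sum(1 for refs in all_images_refs
                 if any(ngram in ref for ref in refs))
        weights[ngram] = (h_k / total) * math.log(num_images / max(df, 1))
    return weights
```

Note that an $n$-gram appearing in the references of every image gets an IDF of $\log(1) = 0$, exactly the discounting of popular, uninformative words described above.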
Our CIDEr$_n$ score for $n$-grams of length $n$ is computed using the average cosine similarity between the candidate sentence and the reference sentences, which accounts for both precision and recall:

$$\mathrm{CIDEr}_n(c_i, S_i) = \frac{1}{m} \sum_j \frac{g^n(c_i) \cdot g^n(s_{ij})}{\|g^n(c_i)\| \, \|g^n(s_{ij})\|}$$

where $g^n(c_i)$ is a vector formed by the $g_k(c_i)$ values corresponding to all $n$-grams of length $n$, and $\|g^n(c_i)\|$ is the magnitude of that vector. Similarly for $g^n(s_{ij})$.
We use higher-order (longer) $n$-grams to capture grammatical properties as well as richer semantics. We combine the scores from $n$-grams of varying lengths as follows:

$$\mathrm{CIDEr}(c_i, S_i) = \sum_{n=1}^{N} w_n \, \mathrm{CIDEr}_n(c_i, S_i)$$

Empirically, we found that uniform weights $w_n = 1/N$ work best. We use $N = 4$.
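Given TF-IDF vectors for the candidate and references, the average cosine similarity and the weighted combination across $n$ can be sketched as follows (illustrative code with our own names; vectors are sparse dicts from $n$-gram to weight):

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse TF-IDF vectors (dicts)."""
    dot = sum(w * v.get(k, 0.0) for k, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def cider(cand_vecs, ref_vecs_list, weights=(0.25, 0.25, 0.25, 0.25)):
    """cand_vecs[n] is the candidate's TF-IDF vector over (n+1)-grams;
    ref_vecs_list[j][n] likewise for reference sentence j. Uniform
    weights w_n = 1/N with N = 4 by default."""
    score = 0.0
    for n, w_n in enumerate(weights):
        sims = [cosine(cand_vecs[n], refs[n]) for refs in ref_vecs_list]
        score += w_n * sum(sims) / len(sims)  # average over m references
    return score
```

Averaging (rather than taking a max) over the references is what lets the metric reward agreement with the consensus of many descriptions.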
We propose two new datasets, PASCAL-50S and ABSTRACT-50S, for evaluating image caption generation methods. Both datasets have 50 reference sentences per image, for 1,000 and 500 images respectively. These are intended as "testing" datasets, crafted to enable consensus-based evaluation. For a list of training datasets, we encourage the reader to explore [25, 32]. The PASCAL-50S dataset uses all 1,000 images from the UIUC Pascal Sentence Dataset, whereas the ABSTRACT-50S dataset uses 500 random images from the Abstract Scenes Dataset, which contains scenes made from clipart objects. Our two new datasets differ from each other both visually and in the type of image descriptions produced.
Our goal was to collect image descriptions that are objective and representative of the image content. Subjects were shown an image and a text box, and were asked to “Describe what is going on in the image”. We asked subjects to capture the main aspects of the scene and provide descriptions that others are also likely to provide. This includes writing descriptions rather than “dialogs” or overly descriptive sentences. Workers were told that a good description should help others recognize the image from a collection of similar images. Instructions also mentioned that work with poor grammar would be rejected. Snapshots of our interface can be found in the appendix. Overall, we had 465 subjects for ABSTRACT-50S and 683 subjects for PASCAL-50S datasets. We ensure that each sentence for an image is written by a different subject. The average sentence length for the ABSTRACT-50S dataset is 10.59 words compared to 8.8 words for PASCAL-50S.
The goals of our experiments are two-fold:
Evaluating how well our proposed metric CIDEr captures human judgment of consensus, as compared to existing metrics.
Comparing existing state-of-the-art automatic image description approaches in terms of how well the descriptions they produce match human consensus of image descriptions.
We first describe how we select candidate sentences for evaluation and the metrics we use for comparison to CIDEr. Finally, we list the various automatic image description approaches and our experimental set up.
On ABSTRACT-50S, we use 48 of our 50 sentences as reference sentences (sentence A in our triplet annotation). The remaining 2 sentences per image can be used as candidate sentences. We form 400 pairs of candidate sentences (B and C in our triplet annotation). These include two kinds of pairs. The first are 200 human–human correct pairs (HC), where we pick two human sentences describing the same image. The second kind are 200 human–human incorrect pairs (HI), where one of the sentences is a human description for the image and the other is also a human sentence but describing some other image from the dataset picked at random.
For PASCAL-50S, our candidate sentences come from a diverse set of sources: human sentences from the UIUC Pascal Sentence Dataset as well as machine-generated sentences from five automatic image description methods. These span both retrieval-based and generation-based methods: Midge, Babytalk, Story, and two versions of Translating Video Content to Natural Language Descriptions (Video and Video+). (We thank the authors of these approaches for making their outputs available to us.) We form 4,000 pairs of candidate sentences (again, B and C for our triplet annotation). These include four types of pairs (1,000 each). The first two are human–human correct (HC) and human–human incorrect (HI), similar to ABSTRACT-50S. The third are human–machine (HM) pairs, formed by pairing a human sentence describing an image with a machine-generated sentence describing the same image. Finally, the fourth are machine–machine (MM) pairs, where we compare two machine-generated sentences describing the same image. We pick the machine-generated sentences randomly, so that each method participates in a roughly equal number of pairs on a diverse set of images. Ours is the first work to perform a comprehensive evaluation across these different kinds of sentences.
For consistency, we drop two reference sentences for the PASCAL-50S evaluations so that we evaluate on both datasets (ABSTRACT-50S and PASCAL-50S) with a maximum of 48 reference sentences.
The existing metrics used in the community for evaluating image description approaches are BLEU, ROUGE and METEOR. BLEU is precision-based and ROUGE is recall-based. More specifically, image description methods have used unigram and 4-gram versions of BLEU (BLEU1 and BLEU4) and a unigram version of ROUGE. A recent survey paper has used a skip-bigram version of ROUGE, as well as the machine translation metric METEOR. We now briefly describe these metrics; more details can be found in the appendix.

BLEU (BiLingual Evaluation Understudy) is a popular metric for Machine Translation (MT) evaluation. It computes an $n$-gram based precision for the candidate sentence with respect to the references. The key idea of BLEU is to compute precision by clipping: precision for a word is computed based on the maximum number of times it occurs in any single reference sentence. Thus, a candidate sentence saying "The The The" would get credit for saying only one "The" if the word occurs at most once across individual references. BLEU computes the geometric mean of the $n$-gram precisions and adds a "brevity penalty" to discourage overly short sentences. The most common formulation of BLEU is BLEU4, which uses 1-grams up to 4-grams, though lower-order variations such as BLEU1 (unigram BLEU) and BLEU2 (unigram and bigram BLEU) are also used. Similar to [12, 18], we compute BLEU at the sentence level for evaluating image descriptions. For machine translation, BLEU is most often computed at the corpus level, where correlation with human judgment is high; the correlation is poor at the level of individual sentences. In this paper we are specifically interested in the evaluation of accuracies on individual sentences.

ROUGE (Recall-Oriented Understudy for Gisting Evaluation) computes an $n$-gram based recall for the candidate sentence with respect to the references, and is a popular metric for summarization evaluation. Similar to BLEU, versions of ROUGE can be computed by varying the $n$-gram count. Two other versions of ROUGE compute an F-measure with a recall bias using skip-bigrams and the longest common subsequence, respectively, between the candidate and each reference sentence. Skip-bigrams are all pairs of ordered words in a sentence, sampled non-consecutively. Given these scores, ROUGE returns the maximum score across the set of references as its judgment of quality.

METEOR (Metric for Evaluation of Translation with Explicit ORdering), similar to the latter versions of ROUGE, also computes an F-measure based on matches and returns the maximum score over a set of references as its judgment of quality.
However, it resolves word-level correspondences in a more sophisticated manner, using exact matches, stemming and semantic similarity. It optimizes over matches, minimizing "chunkiness": matches should be consecutive wherever possible. It also sets parameters favoring recall over precision in its F-measure computation. We implement all the metrics except METEOR, for which we use the publicly available implementation (version 1.5). Similar to BLEU, we also aggregate METEOR scores at the sentence level.
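BLEU's clipped $n$-gram precision described above can be sketched as follows (a simplified single-$n$ illustration of clipping only, not the full BLEU with geometric mean and brevity penalty):

```python
from collections import Counter

def clipped_precision(candidate, references, n=1):
    """Modified n-gram precision: each candidate n-gram is credited at
    most as many times as it appears in any single reference."""
    def counts(text):
        toks = text.split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    cand = counts(candidate)
    max_ref = Counter()
    for ref in references:
        for gram, c in counts(ref).items():
            max_ref[gram] = max(max_ref[gram], c)
    clipped = sum(min(c, max_ref[gram]) for gram, c in cand.items())
    return clipped / max(sum(cand.values()), 1)
```

For instance, the candidate "the the the" scored against the reference "the cat sat" is credited for only one "the", giving a unigram precision of 1/3.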
We comprehensively evaluate which machine generation methods are best at matching consensus sentences. For this experiment, we select a subset of 100 images from the UIUC Pascal Sentence Dataset for which we have outputs for all five machine description methods used in our evaluation: Midge, Babytalk, Story, and two versions of Translating Video Content to Natural Language Descriptions (Video and Video+). For each image, we form all pairs of machine–machine sentences. This ensures that each machine approach gets compared to all other machine approaches on each image, giving us 1,000 pairs. We form triplets by "tripling" each pair with 20 random reference sentences. We collect human judgment of consensus using our triplet annotation modality, and evaluate our proposed automatic consensus metric CIDEr using the same reference sentences. In both cases, we count the fraction of times a machine description method beats another method in terms of being more similar to the reference sentences. To the best of our knowledge, we are the first work to perform an exhaustive evaluation of automated image captioning across retrieval- and generation-based methods.
In this section we evaluate the effectiveness of our consensus-based metric CIDEr on the PASCAL-50S and ABSTRACT-50S datasets. We begin by exploring how many sentences are sufficient for reliably evaluating our consensus metric. Next, we compare our metric against several other commonly used metrics on the task of matching human consensus. Then, using CIDEr we evaluate several existing automatic image description approaches. Finally, we compare performance of humans and CIDEr at predicting consensus.
We begin by analyzing how the number of reference sentences affects the accuracy of automated metrics. To quantify this, we collect 120 sentences for a subset of 50 randomly sampled images from the UIUC Pascal Sentence Dataset. We then pool human–human correct, human–machine, machine–machine and human–human incorrect sentence pairs (179 in total) and get triplet annotations. This gives us the ground truth consensus score for all pairs. We evaluate BLEU, ROUGE and CIDEr with up to 100 reference sentences used to score the candidate sentences. We find that the accuracy improves for the first 10 sentences (Fig. 7) for all metrics. From 1 to 5 sentences, the agreement for ROUGE improves from 0.63 to 0.77. Both ROUGE and CIDEr continue to improve until reaching 50 sentences, after which the results begin to saturate somewhat. Curiously, BLEU shows a decrease in performance with more sentences. BLEU does a max operation over sentence level matches, and thus as more sentences are used, the likelihood of matching a lower quality reference sentence increases. Based on this pilot, we collect 50 sentences per image for our ABSTRACT-50S and PASCAL-50S datasets. For the remaining experiments we report results using 1 to 50 sentences.
We evaluate the performance of CIDEr, BLEU, ROUGE and METEOR at matching the human consensus scores in Fig. 11. That is, for each metric we compute the scores for two candidate sentences. The metric is correct if the sentence with higher score is the same as the sentence chosen by our human studies as being more similar to the reference sentences. The candidate sentences are both human and machine generated. For BLEU and ROUGE we show both their popular versions and the version we found to give best performance. We sample METEOR at fewer points due to high run-time. For a more comprehensive evaluation across different versions of each metric, please see the appendix.
At 48 sentences, we find that CIDEr is the best performing metric on both ABSTRACT-50S and PASCAL-50S, followed by METEOR on each dataset. Even using only 5 sentences, both CIDEr and METEOR perform well in comparison to BLEU and ROUGE. CIDEr beats METEOR at 5 sentences on ABSTRACT-50S, whereas METEOR does better at 5 sentences on PASCAL-50S. This is because METEOR incorporates soft similarity, which helps when using fewer sentences. However, despite its sophistication, METEOR takes a max across reference scores, which limits its ability to utilize larger numbers of reference sentences. Popular metrics like ROUGE and BLEU are not as good at capturing consensus. CIDEr provides consistent performance across both datasets, giving 84% accuracy on PASCAL-50S and 84% on ABSTRACT-50S.
Considering that previous papers used only 5 reference sentences per image for evaluation, the relative boost in performance is substantial. Using BLEU or ROUGE at 5 sentences, we obtained 76% and 74% accuracy on PASCAL-50S; with CIDEr at 48 sentences, we achieve 84% accuracy. This brings automated evaluation much closer to human performance (90%; details in Sec. 7.4). On the Flickr8K dataset with human judgments on 1-5 ratings, METEOR has a correlation (Spearman's ρ) of 0.56, whereas CIDEr achieves a correlation of 0.58 with human judgments (we thank Desmond Elliot for this result).
We next show the best performing versions of the metrics CIDEr, BLEU, ROUGE and METEOR on PASCAL-50S and ABSTRACT-50S for different kinds of candidate pairs (Table 1). As discussed in Sec. 5, we have four kinds of pairs: human–human correct (HC), human–human incorrect (HI), human–machine (HM), and machine–machine (MM). We find that out of six cases, our proposed automated metric is best in five. We show significant gains on the challenging MM and HC tasks, which require making fine-grained distinctions between sentences (two machine-generated or two human-generated sentences). This result is encouraging because it indicates that the CIDEr metric will continue to perform well as image description methods improve. On the easier tasks of judging consensus on HI and HM pairs, all methods perform well.
We have shown that CIDEr, together with our new datasets containing 50 sentences per image, provides more accurate evaluation than previous approaches. We now use it to evaluate some existing automatic image description approaches. Our methodology for conducting this experiment is described in Sec. 6, and our results are shown in Fig. 12, with the fraction of times an approach is rated better than the other approaches on the y-axis. We note that Midge is rated as having the best consensus by both humans and CIDEr, followed by Babytalk. Story is ranked lowest by both humans and CIDEr. Humans and CIDEr differ on the ranking of the two video approaches (Video and Video+). We calculate the Pearson's correlation between the fraction of wins for a method on human annotations and using CIDEr, and find that humans and CIDEr agree with a high correlation (0.98).
In our final set of experiments we measure human performance at predicting which of two candidate sentences better matches the consensus. Human performance puts into context how clearly consensus is defined, and provides a loose bound on how well we can expect automated metrics to perform. We evaluate both human and machine performance at predicting consensus on all 4,000 pairs from PASCAL-50S dataset and 400 pairs from the ABSTRACT-50S dataset described in Sec. 6. To create the same experimental set up for both humans and machines, we obtain ground truth consensus for each of the pairs using our triplet annotation on 24 references out of 48. For predicting consensus, humans (via triplet annotations) and machines both use the remaining 24 sentences as reference sentences. We find that the best machine performance is 82% on PASCAL-50S using CIDEr, in contrast to human performance which is at 90%. On the ABSTRACT-50S dataset, CIDEr is at 82% accuracy, whereas human performance is at 83%.
When an algorithm is optimized for a specific metric, undesirable results may be achieved. The "gaming" of a metric may produce sentences that receive high scores yet are judged poor by humans. To help defend against future gaming of the CIDEr metric, we propose several modifications to the basic metric, called CIDEr-D.
First, we propose the removal of stemming. When performing stemming the singular and plural forms of nouns and different tenses of verbs are mapped to the same token. The removal of stemming ensures the correct forms of words are used. Second, in some cases the basic CIDEr metric produces higher scores when words of higher confidence are repeated over long sentences. To reduce this effect, we introduce a Gaussian penalty based on the difference between candidate and reference sentence lengths. Finally, the sentence length penalty may be gamed by repeating confident words or phrases until the desired sentence length is achieved. We combat this by adding clipping to the -gram counts in the CIDEr numerator. That is, for a specific -gram we clip the number of candidate occurrences to the number of reference occurrences. This penalizes the repetition of specific -grams beyond the number of times they occur in the reference sentence. These changes result in the following equation (analogous to Equation 2):
$$\text{CIDEr-D}_n(c_i, S_i) = \frac{10}{m} \sum_j e^{\frac{-\left(l(c_i) - l(s_{ij})\right)^2}{2\sigma^2}} \cdot \frac{\min\left(g^n(c_i),\, g^n(s_{ij})\right) \cdot g^n(s_{ij})}{\left\| g^n(c_i) \right\| \left\| g^n(s_{ij}) \right\|}$$

where $l(c_i)$ and $l(s_{ij})$ denote the lengths of the candidate and reference sentences respectively. We use $\sigma = 6$. A factor of 10 is added to make the CIDEr-D scores numerically similar to other metrics.
The final CIDEr-D metric is computed in a similar manner to CIDEr (analogous to Equation 3):

$$\text{CIDEr-D}(c_i, S_i) = \sum_{n=1}^{N} w_n\, \text{CIDEr-D}_n(c_i, S_i)$$
Similar to CIDEr, uniform weights $w_n$ are used. We found that this version of the metric has a rank correlation (Spearman's $\rho$) of 0.94 with the original CIDEr metric while being more robust to gaming. Qualitative examples of ranking can be found in the appendix.
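The length penalty and count clipping described above can be sketched in a few lines. This is an illustrative re-implementation, not the official coco-caption code: it operates on raw n-gram counts with uniform weights, whereas the real metric uses TF-IDF weighted vectors, and the helper names are our own.

```python
import math
from collections import Counter

def ngram_counts(sentence, n):
    """Count the n-grams of a whitespace-tokenized sentence."""
    words = sentence.split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def cider_d_n(candidate, refs, n, sigma=6.0):
    """Per-n CIDEr-D term (sketch): cosine similarity over n-gram counts,
    with candidate counts clipped to reference counts, scaled by a Gaussian
    penalty on the sentence-length difference, averaged over references."""
    g_c = ngram_counts(candidate, n)
    norm_c = math.sqrt(sum(v * v for v in g_c.values()))
    total = 0.0
    for ref in refs:
        g_r = ngram_counts(ref, n)
        norm_r = math.sqrt(sum(v * v for v in g_r.values()))
        if norm_c == 0 or norm_r == 0:
            continue
        # clipping: a candidate n-gram counts at most as often as in the reference
        dot = sum(min(c, g_r[k]) * g_r[k] for k, c in g_c.items())
        delta = len(candidate.split()) - len(ref.split())
        penalty = math.exp(-delta ** 2 / (2.0 * sigma ** 2))
        total += penalty * dot / (norm_c * norm_r)
    return 10.0 * total / len(refs)
```

On this sketch, a candidate that games the score by repeating a well-matched phrase is penalized twice: the clipped dot product stops growing while the candidate norm keeps growing, and the Gaussian term shrinks as the length difference increases.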
To enable systematic evaluation and benchmarking of image description approaches based on consensus, we have made CIDEr-D available as a metric in the MS COCO caption evaluation server.
In this work we proposed a consensus-based protocol for image description evaluation. Our protocol enables an objective comparison of machine generation approaches based on their "human-likeness", without having to make arbitrary calls on weighing content, grammar, saliency, etc. against each other. We introduced an annotation modality for measuring consensus, a metric (CIDEr) for automatically computing consensus, and two datasets, PASCAL-50S and ABSTRACT-50S, with 50 sentences per image. We demonstrated that CIDEr has improved accuracy over existing metrics for measuring consensus.
We thank Chris Quirk, Margaret Mitchell and Michel Galley for helpful discussions in formulating CIDEr-D. This work was supported in part by The Paul G. Allen Family Foundation Allen Distinguished Investigator award to D.P.
Beyond nouns: Exploiting prepositions and comparative adjectives for learning visual classifiers. In D. A. Forsyth, P. H. S. Torr, and A. Zisserman, editors, ECCV (1), volume 5302 of Lecture Notes in Computer Science, pages 16–29. Springer, 2008.
Action recognition from a distributed representation of pose and appearance. In IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
In Proceedings of the Seventh International Natural Language Generation Conference, INLG '12, pages 131–133, Stroudsburg, PA, USA, 2012. Association for Computational Linguistics.
List of items:
Comparison of metrics on triplet annotations to pairwise annotations: Compares the accuracy of CIDEr on triplet annotation to existing choices of metrics on pairwise annotations
Ranking of reference sentences for various automated metrics: Qualitative examples of the kind of sentences preferred by each metric
Comparison of rankings from CIDEr and CIDEr-D: Establishes that both CIDEr and CIDEr-D are similar qualitatively, in terms of how they rank reference sentences
Difference between human-like and what humans like: Shows examples of differences between pairwise and triplet annotations. Pairwise annotations often favor longer sentences
Sentence collection interface for PASCAL-50S and ABSTRACT-50S: Shows a snapshot of the interface used to collect our datasets, and explains the instructions
Equations for BLEU, ROUGE, and METEOR: Formulates some existing metrics in terms of the notation used in the rest of the paper
Qualitative examples of outputs of image description methods evaluated in the paper: Gives a sense for the kind of outputs produced by each of the image description methods evaluated in the paper
Performance of different versions of metrics on consensus: Benchmarks the performance of different versions of metrics discussed in the paper at matching human consensus
We consider some alternate annotation modalities and compare the performance of existing metrics on them with that of CIDEr on consensus. The first such modality is a pairwise interface: subjects on Amazon Mechanical Turk (AMT) are shown just the two candidate sentences (B and C) with the image (instead of sentence A), and asked to pick the better description of the two. Eleven such human judgments are collected for each pair. These annotations are collected for the same PASCAL-50S candidate sentences as those used for the triplet experiments in the paper. We compare the accuracy of CIDEr on consensus to the accuracy of other metrics on picking the better candidate sentence. We find that ROUGE at 5 sentences performs at 75.6%, whereas the BLEU version performs at 74.75%; ROUGE and BLEU perform at 73.15% and 73.4% respectively at 5 sentences, and METEOR at 5 sentences performs at 79.5%. In contrast, CIDEr at 48 sentences reaches an accuracy of 84% on consensus. Thus the consensus-based protocol, comprising our proposed metric, dataset and human annotation modality, provides more accurate automated evaluation.
We now show a ranking of the 48 sentences collected for a particular image according to the CIDEr, BLEU, BLEU without brevity penalty, and ROUGE scores (Fig. 5). Each reference sentence is considered in turn as a candidate and scored against the remaining 47 reference sentences using the corresponding metric. Note how the top-ranked CIDEr sentences show high consensus. The top-ranked ROUGE sentences are typically more detailed, whereas the top-ranked BLEU sentences are not as consistent as those of CIDEr. If BLEU were used without the brevity penalty, as in some previous works [22, 32], very short sentences would receive high scores. Intuitively, the ranking produced by CIDEr is more meaningful.
In our experiments, we found that there can often be a difference in the sentence that is rated as “better” (measured via pairwise annotation) by subjects versus the kind of sentences written by subjects when asked to describe the image (measured via consensus annotation). We refer to this distinction as human-like vs what humans like. Some qualitative examples are shown in Fig. 7. Candidate sentences shown in bold are those that the consensus-based measure picks and those shown in thin font are those picked by the pairwise evaluation based on “better”. Reference sentences rated similar to the winning candidate sentence using the triplet annotation are shown in bold.
As we report in Sec. 8, we find that CIDEr and CIDEr-D agree with a high rank correlation (Spearman's $\rho = 0.94$) on the ranking of sentences. We now compare CIDEr and CIDEr-D rankings for the unigram case, since its results are easier to interpret. An example ranking can be found in Fig. 6. Notice that the rankings of CIDEr and CIDEr-D are qualitatively very similar. However, the formulation of CIDEr-D avoids gaming effects, as explained in Sec. 8.
In the paper, we compared the relative performance of five image description methods: Midge, Babytalk, Story, and two versions of Translating Video Content to Natural Language Descriptions (Video and Video+). Here, we show a sample image with the descriptions generated by the five methods compared in the paper (Fig. 10). We can see that Midge and Babytalk produce the better descriptions on this image, consistent with our findings in the paper.
Our goal is to automatically evaluate, for an image $I_i$, how well a candidate sentence $c_i$ matches the consensus of a set of image descriptions $S_i = \{s_{i1}, \ldots, s_{im}\}$. The sentences are represented using sets of n-grams, where an n-gram $\omega_k$ is a set of one or more ordered words. In this paper we explore n-grams with one to four words. Each word in an n-gram is reduced to its stemmed or root form; that is, "fishes", "fishing" and "fished" all get reduced to "fish". The number of times an n-gram $\omega_k$ occurs in a sentence $s_{ij}$ is denoted $h_k(s_{ij})$, or $h_k(c_i)$ for the candidate sentence $c_i$.
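The representation above can be sketched as follows. Here `crude_stem` is a toy suffix stripper standing in for the Porter-style stemmer actually used; it is purely illustrative and handles only the example words from the text.

```python
from collections import Counter

def crude_stem(word):
    """Toy suffix stripper (illustrative stand-in for a real stemmer):
    maps 'fishes', 'fishing' and 'fished' all to 'fish'."""
    for suffix in ("ing", "es", "ed", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: len(word) - len(suffix)]
    return word

def stemmed_ngram_counts(sentence, n):
    """h_k(s): how many times each stemmed n-gram k occurs in sentence s."""
    tokens = [crude_stem(w) for w in sentence.lower().split()]
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
```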
BLEU is a popular machine translation metric that analyzes the co-occurrences of n-grams between the candidate and reference sentences. As we explain in Sec. 6, we compute the sentence-level BLEU scores between a candidate sentence and a set of reference sentences. The clipped n-gram precision is computed as follows:

$$P_n(c_i, S_i) = \frac{\sum_k \min\left(h_k(c_i), \max_j h_k(s_{ij})\right)}{\sum_k h_k(c_i)}$$

where $k$ indexes the set of possible n-grams of length $n$. The clipped precision metric limits the number of times an n-gram may be counted to the maximum number of times it is observed in a single reference sentence. Note that $P_n$ is a precision score and it favors short sentences, so a brevity penalty is also used:

$$b(C, S) = \begin{cases} 1 & \text{if } l_C > l_S \\ e^{1 - l_S / l_C} & \text{if } l_C \le l_S \end{cases}$$

where $l_C$ is the total length of the candidate sentences $c_i$ and $l_S$ is the corpus-level effective reference length. When there are multiple references for a candidate sentence, we choose to use the closest reference length for the brevity penalty. The overall BLEU score is computed using a weighted geometric mean of the individual n-gram precisions:

$$\mathrm{BLEU}_N(C, S) = b(C, S)\, \exp\left(\sum_{n=1}^{N} w_n \log P_n(C, S)\right)$$

where $N = 4$ and $w_n$ is typically held constant for all $n$.
BLEU has shown good performance for corpus-level comparisons, over which a high number of n-gram matches exist. However, at the sentence level, n-gram matches for higher $n$ rarely occur. As a result, BLEU performs poorly when comparing individual sentences.
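The clipped precision, brevity penalty, and geometric mean above can be combined in a short sentence-level sketch. This is unsmoothed and illustrative, not the official BLEU script: with no n-gram match at some order $n$ it simply returns zero, which is exactly the sentence-level brittleness just described.

```python
import math
from collections import Counter

def ngram_counts(sentence, n):
    """Count the n-grams of a whitespace-tokenized sentence."""
    words = sentence.split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def bleu(candidate, refs, N=4):
    """Sentence-level BLEU sketch: clipped n-gram precisions combined by a
    geometric mean with uniform weights w_n = 1/N, times a brevity penalty
    against the closest reference length."""
    log_p = 0.0
    for n in range(1, N + 1):
        h_c = ngram_counts(candidate, n)
        # clip each n-gram count to its maximum count in any single reference
        clipped = sum(min(c, max(ngram_counts(r, n)[k] for r in refs))
                      for k, c in h_c.items())
        total = sum(h_c.values())
        if clipped == 0 or total == 0:
            return 0.0  # a zero precision at any n zeroes the geometric mean
        log_p += math.log(clipped / total) / N
    l_c = len(candidate.split())
    l_r = min((abs(len(r.split()) - l_c), len(r.split())) for r in refs)[1]
    bp = 1.0 if l_c > l_r else math.exp(1.0 - l_r / l_c)
    return bp * math.exp(log_p)
```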
ROUGE is a set of evaluation metrics designed to evaluate text summarization algorithms.
ROUGE$_N$: The first ROUGE metric computes a simple n-gram recall over all reference summaries given a candidate sentence:

$$\mathrm{ROUGE}_N(c_i, S_i) = \frac{\sum_j \sum_k \min\left(h_k(c_i), h_k(s_{ij})\right)}{\sum_j \sum_k h_k(s_{ij})}$$
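This recall can be sketched directly from the formula (an illustrative implementation, assuming the same whitespace tokenization as above):

```python
from collections import Counter

def ngram_counts(sentence, n):
    """Count the n-grams of a whitespace-tokenized sentence."""
    words = sentence.split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def rouge_n(candidate, refs, n):
    """Sketch of ROUGE_N: matched n-grams over the total number of
    n-grams appearing in all reference sentences."""
    h_c = ngram_counts(candidate, n)
    matched = total = 0
    for ref in refs:
        h_r = ngram_counts(ref, n)
        matched += sum(min(h_c[k], h_r[k]) for k in h_r)
        total += sum(h_r.values())
    return matched / total
```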
ROUGE$_L$: ROUGE$_L$ uses a measure based on the Longest Common Subsequence (LCS). An LCS is a set of words shared by two sentences which occur in the same order; however, unlike n-grams, there may be intervening words between the words that form the LCS. Given the length $l(c_i, s_{ij})$ of the LCS between a pair of sentences, ROUGE$_L$ is found by computing an F-measure:

$$R_l = \frac{l(c_i, s_{ij})}{|s_{ij}|}, \qquad P_l = \frac{l(c_i, s_{ij})}{|c_i|}, \qquad \mathrm{ROUGE}_L = \frac{(1 + \beta^2)\, R_l P_l}{R_l + \beta^2 P_l}$$

$R_l$ and $P_l$ are the recall and precision of the LCS. $\beta$ is usually set to favor recall ($\beta = 1.2$). Since n-grams are implicit in this measure due to the use of the LCS, they need not be specified.
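The LCS-based F-measure can be sketched with the standard dynamic-programming LCS (illustrative, single-reference form):

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists,
    via the standard O(|a||b|) dynamic program."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def rouge_l(candidate, reference, beta=1.2):
    """Sketch of ROUGE_L for one candidate/reference pair: F-measure of
    LCS recall and precision, with beta favoring recall."""
    c, r = candidate.split(), reference.split()
    l = lcs_len(c, r)
    if l == 0:
        return 0.0
    rec, prec = l / len(r), l / len(c)
    return (1 + beta ** 2) * rec * prec / (rec + beta ** 2 * prec)
```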
ROUGE$_S$: The final ROUGE metric uses skip bi-grams instead of the LCS or n-grams. Skip bi-grams are pairs of ordered words in a sentence; however, similar to the LCS, words may be skipped between the pairs of words. Thus, a sentence with 4 words would have $\binom{4}{2} = 6$ skip bi-grams. Precision and recall are again incorporated to compute an F-measure score. If $f_k(s_{ij})$ is the skip bi-gram count for sentence $s_{ij}$, ROUGE$_S$ is computed as:

$$R_s = \frac{\sum_k \min\left(f_k(c_i), f_k(s_{ij})\right)}{\sum_k f_k(s_{ij})}, \qquad P_s = \frac{\sum_k \min\left(f_k(c_i), f_k(s_{ij})\right)}{\sum_k f_k(c_i)}, \qquad \mathrm{ROUGE}_S = \frac{(1 + \beta^2)\, R_s P_s}{R_s + \beta^2 P_s}$$
Skip bi-grams are capable of capturing long range sentence structure. In practice, skip bi-grams are computed so that the component words occur at a distance of at most 4 from each other.
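A sketch of skip bi-gram extraction with the distance cap of 4, and the resulting F-measure (illustrative, single-reference form; `max_gap` is our name for the distance cap):

```python
from collections import Counter

def skip_bigrams(sentence, max_gap=4):
    """f_k(s): counts of ordered word pairs at most max_gap positions apart."""
    w = sentence.split()
    return Counter((w[i], w[j]) for i in range(len(w))
                   for j in range(i + 1, min(i + 1 + max_gap, len(w))))

def rouge_s(candidate, reference, beta=1.2):
    """Sketch of ROUGE_S for one candidate/reference pair: F-measure of
    skip bi-gram recall and precision."""
    f_c, f_r = skip_bigrams(candidate), skip_bigrams(reference)
    overlap = sum(min(f_c[k], f_r[k]) for k in f_c)
    if overlap == 0:
        return 0.0
    rec, prec = overlap / sum(f_r.values()), overlap / sum(f_c.values())
    return (1 + beta ** 2) * rec * prec / (rec + beta ** 2 * prec)
```

Note that for a 4-word sentence the cap never binds, so all 6 pairs are produced, matching the count in the text.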
METEOR is calculated by generating an alignment between the words in the candidate and reference sentences, with an aim of 1:1 correspondence. The alignment is computed while minimizing the number of chunks, $ch$, of contiguous and identically ordered tokens in the sentence pair. It is based on exact token matching, followed by WordNet synonyms and then stemmed tokens. Given the set of alignments $m$, the METEOR score is the harmonic mean of precision and recall between the best-scoring reference and the candidate:

$$P_m = \frac{|m|}{\sum_k h_k(c_i)}, \qquad R_m = \frac{|m|}{\sum_k h_k(s_{ij})}, \qquad F_{mean} = \frac{P_m R_m}{\alpha P_m + (1 - \alpha) R_m}$$

$$Pen = \gamma \left(\frac{ch}{|m|}\right)^{\theta}, \qquad \mathrm{METEOR} = (1 - Pen)\, F_{mean}$$
Thus, the final METEOR score includes a penalty based on chunkiness of resolved matches and a harmonic mean term that gives the quality of the resolved matches.
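A heavily simplified sketch of this computation follows. It uses only exact token matches with a greedy in-order 1:1 alignment (no WordNet or stemming stages, and no global chunk minimization), and the parameter values are illustrative defaults, not METEOR's tuned settings.

```python
def meteor_sketch(candidate, reference, alpha=0.9, gamma=0.5, theta=3.0):
    """METEOR-style score: F-mean of alignment precision/recall times a
    fragmentation penalty over chunks. Exact-match greedy alignment only."""
    c, r = candidate.split(), reference.split()
    # greedy 1:1 alignment: each candidate token takes the first unused
    # identical reference position
    used, align = set(), []
    for i, w in enumerate(c):
        for j, v in enumerate(r):
            if v == w and j not in used:
                used.add(j)
                align.append((i, j))
                break
    m = len(align)
    if m == 0:
        return 0.0
    # chunks: maximal runs of matches contiguous in both sentences
    ch = 1
    for (i1, j1), (i2, j2) in zip(align, align[1:]):
        if i2 != i1 + 1 or j2 != j1 + 1:
            ch += 1
    prec, rec = m / len(c), m / len(r)
    f_mean = prec * rec / (alpha * prec + (1 - alpha) * rec)
    penalty = gamma * (ch / m) ** theta
    return (1 - penalty) * f_mean
```

Even with identical token sets, a reordered candidate fragments into more chunks and is penalized, which is the behavior the chunk term is designed to capture.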
We now show the results for different versions of each metric in the BLEU and ROUGE families, along with some variations of CIDEr. We use only one (the latest) version of METEOR, so it is not part of this evaluation. The versions of CIDEr shown here are as follows: CIDEr exp refers to combining the scores obtained at different n-gram lengths exponentially instead of taking a mean, as described in Sec. 4; CIDEr max refers to taking a max across the scores computed with different reference sentences, instead of the mean discussed in the paper; and CIDEr no idf sets uniform IDF weights in CIDEr. The remaining versions of the other metrics are explained in the previous section. The results on PASCAL-50S are shown in Fig. 11 and those on ABSTRACT-50S in Fig. 12. We find that removing the IDF weights in the CIDEr no idf version hurts performance significantly. CIDEr max and CIDEr exp perform slightly worse than CIDEr. The best-performing version of each of these metrics was discussed in Sec. 7.