deltaBLEU: A Discriminative Metric for Generation Tasks with Intrinsically Diverse Targets

by Michel Galley, et al.

We introduce Discriminative BLEU (deltaBLEU), a novel metric for intrinsic evaluation of generated text in tasks that admit a diverse range of possible outputs. Reference strings are scored for quality by human raters on a scale of [-1, +1] to weight multi-reference BLEU. In tasks involving generation of conversational responses, deltaBLEU correlates reasonably with human judgments and outperforms sentence-level and IBM BLEU in terms of both Spearman's rho and Kendall's tau.





1 Introduction

Many natural language processing tasks involve the generation of texts where a variety of outputs are acceptable or even desirable. Tasks with intrinsically diverse targets range from machine translation, summarization, sentence compression, paraphrase generation, and image-to-text to the generation of conversational interactions. A major hurdle for these tasks is the automation of evaluation: the space of plausible outputs can be enormous, and it is impractical to run a new human evaluation every time a new model is built or parameters are modified.

In Statistical Machine Translation (SMT), the automation problem has to a large extent been ameliorated by metrics such as Bleu [Papineni et al.2002] and Meteor [Banerjee and Lavie2005]. Although Bleu is not immune from criticism (e.g., [Callison-Burch et al.2006]), its properties are well understood, Bleu scores have been shown to correlate well with human judgments in SMT [Doddington2002, Coughlin2003, Graham and Baldwin2014, Graham et al.2015], and it has allowed the field to proceed.

Bleu has been less successfully applied to non-SMT generation tasks owing to the larger space of plausible outputs. As a result, attempts have been made to adapt the metric. To foster diversity in paraphrase generation, Sun and Zhou [Sun and Zhou2012] propose a metric called iBleu in which the Bleu score is discounted by a Bleu score computed between the source and the paraphrase. This solution, in addition to depending on a tunable parameter, is specific to paraphrase generation. In image captioning tasks, Vedantam et al. [Vedantam et al.2015] employ a variant of Bleu in which n-grams are weighted by tf-idf, which assumes the availability of a corpus with which to compute tf-idf statistics. Both of the above can be seen as attempts to capture a notion of target goodness that is not captured by Bleu.

In this paper, we introduce Discriminative Bleu (ΔBleu), a new metric that embeds human judgments concerning the quality of reference sentences directly into the computation of corpus-level multiple-reference Bleu. In effect, we push part of the burden of human evaluation into the automated metric, where it can be repeatedly utilized.

Our testbed for this metric is data-driven conversation, a field that has begun to attract interest [Ritter et al.2011, Sordoni et al.2015] as an alternative to conventional rule-driven or scripted dialog systems. Intrinsic evaluation in this field is exceptionally challenging because the semantic space of possible responses resists definition and is only weakly constrained by conversational inputs.

Below, we describe ΔBleu and investigate its characteristics in comparison to standard Bleu in the context of conversational response generation. We demonstrate that ΔBleu correlates well with human evaluation scores on this task and thus can provide a basis for automated training and evaluation of data-driven conversation systems and, we ultimately believe, other text generation tasks with inherently diverse targets.

2 Evaluating Conversational Responses

Given an input message m_i and a prior conversation history c_i, the goal of a response generation system is to produce a hypothesis h_i that is both well formed and a pertinent response to the message (example in Fig. 1). We assume that a set of references {r_{i,j}}_{j=1..J_i} is available for the context c_i and message m_i, where i is an index over the test set. In the case of Bleu (unless mentioned otherwise, Bleu refers to the original IBM Bleu as first described in [Papineni et al.2002]), the automatic score of the system outputs {h_i} is defined as:

    \mathrm{BLEU} = \mathrm{BP} \cdot \exp\Big( \sum_{n} w_n \log p_n \Big)    (1)

    \mathrm{BP} = \min\big( 1,\; e^{1 - \rho/\ell} \big)    (2)

where \ell and \rho are respectively the hypothesis and reference lengths (in the case of multiple references, Bleu selects the reference whose length is closest to that of the hypothesis). Corpus-level n-gram precision is then defined as:

    p_n = \frac{\sum_i \sum_{g \in \mathrm{ngrams}(h_i)} \max_j \#_g(h_i, r_{i,j})}{\sum_i \sum_{g \in \mathrm{ngrams}(h_i)} \#_g(h_i)}    (3)

where \#_g(u) is the number of occurrences of n-gram g in sentence u, and \#_g(u, v) is a shorthand for \min\{\#_g(u), \#_g(v)\}.
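To make the notation concrete, the following minimal Python sketch computes the multi-reference clipped precision of Equation 3. It is an illustration only; the function and variable names are ours, not those of any released implementation.

```python
from collections import Counter

def ngrams(tokens, n):
    """All n-grams of a token list, as tuples."""
    return [tuple(tokens[k:k + n]) for k in range(len(tokens) - n + 1)]

def corpus_precision(hypotheses, references, n):
    """Multi-reference clipped n-gram precision p_n (Eq. 3).

    hypotheses: list of token lists, one per test sentence i.
    references: list of lists of token lists, the references r_{i,j} for each i.
    """
    numerator, denominator = 0.0, 0.0
    for hyp, refs in zip(hypotheses, references):
        hyp_counts = Counter(ngrams(hyp, n))
        ref_counts = [Counter(ngrams(r, n)) for r in refs]
        for g, count in hyp_counts.items():
            # clip each hypothesis n-gram count against the best-matching reference
            numerator += max(min(count, rc[g]) for rc in ref_counts)
            denominator += count
    return numerator / denominator if denominator else 0.0
```

The full Bleu score then combines p_1, ..., p_N with the brevity penalty of Equation 2, as in Equation 1.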

Figure 1: Example of consecutive utterances of a dialog.

It has been demonstrated that metrics such as Bleu show increased correlation with human judgment as the number of references increases [Przybocki et al.2008, Dreyer and Marcu2012]. Unfortunately, gathering multiple references is difficult in the case of conversations. Data gathered from naturally occurring conversations offer only one response per message. One could search for context-message pairs that occur multiple times in conversational data in the hope of finding distinct responses, but this solution is not feasible: the larger the context, the less likely we are to find pairs that match exactly. Furthermore, while it is feasible to have writers create additional references when the downstream task is relatively unambiguous (e.g., MT), this approach is more questionable for more subjective tasks such as conversational response generation. Our solution is to mine candidate responses from conversational data and have judges rate the quality of these responses. Our new metric thus naturally incorporates qualitative weights associated with the references.

3 Discriminative Bleu

Discriminative Bleu, or ΔBleu, extends Bleu by exploiting human qualitative judgments w_{i,j} in [-1, +1] associated with the references r_{i,j}. It is discriminative in that it both rewards matches with "good" reference responses (w_{i,j} > 0) and penalizes matches with "bad" reference responses (w_{i,j} < 0). Formally, ΔBleu is defined as in Equations 1 and 2, except that p_n is instead defined as:

    p_n = \frac{\sum_i \sum_{g \in \mathrm{ngrams}(h_i)} \max_{j:\, g \in r_{i,j}} \{ w_{i,j} \cdot \#_g(h_i, r_{i,j}) \}}{\sum_i \sum_{g \in \mathrm{ngrams}(h_i)} \max_j \{ w_{i,j} \cdot \#_g(h_i) \}}    (4)

In a nutshell, each n-gram match is weighted by the highest-scoring reference in which it occurs, and this weight can sometimes be negative. To ensure that the denominator is never zero, we assume that, for each i, there exists at least one reference r_{i,j} whose weight w_{i,j} is strictly positive. In addition to its discriminative nature, this metric has two interesting properties. First, if all weights are equal to 1, then the metric score is identical to Bleu; as such, ΔBleu admits Bleu as a special case. Second, as with IBM Bleu, the maximum theoretical score is 1: if the hypothesis matches the highest-weighted reference for each sentence, the numerator equals the denominator and the metric score becomes 1. While we find this metric particularly appropriate for response generation, it makes no assumption about the task and is applicable to other text generation tasks such as MT and image captioning.
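A corresponding sketch of the ΔBleu precision of Equation 4 may help clarify the weighting. This is again an illustrative Python rendering of the formula under our own naming, not the released code.

```python
from collections import Counter

def ngrams(tokens, n):
    """All n-grams of a token list, as tuples."""
    return [tuple(tokens[k:k + n]) for k in range(len(tokens) - n + 1)]

def delta_corpus_precision(hypotheses, references, weights, n):
    """Weighted n-gram precision p_n used by Delta-BLEU (Eq. 4).

    weights[i][j] is the human rating w_{i,j} in [-1, +1] for reference j of
    sentence i; every sentence is assumed to have at least one positive weight.
    """
    numerator, denominator = 0.0, 0.0
    for hyp, refs, ws in zip(hypotheses, references, weights):
        hyp_counts = Counter(ngrams(hyp, n))
        ref_counts = [Counter(ngrams(r, n)) for r in refs]
        max_w = max(ws)
        for g, count in hyp_counts.items():
            # numerator: highest-weighted reference in which this n-gram occurs
            matches = [w * min(count, rc[g])
                       for w, rc in zip(ws, ref_counts) if rc[g] > 0]
            if matches:
                numerator += max(matches)
            # denominator: max_j { w_{i,j} * #_g(h_i) }
            denominator += max_w * count
    return numerator / denominator if denominator else 0.0
```

Setting all weights to 1 reduces this to the standard clipped precision, matching the special-case property noted above.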

4 Data

4.1 Multi-reference Datasets

Context Message Response Score
i was about to text you and my two cousins got excited cause they thought you were “rihanna” aww, i can imagine their disappointment they were very disappointed!!! 0.6
yes. my ex-boyfriend, killed my cat. like i say, it was the start of a bad time… i can imagine! yes. luckily, the whole thing feels very much of the past now. 0.8
its good.. for some reason i can’t name stand out tracks but i’ve been playing it since it dropped i can imagine, banks doesn’t disappoint na this is anything but a disappointment.. 0.6
at my lil cousins dancing to “dance for you”. these kids are a mess. lmaoo i can imagine. they were belly rolling, filarious. 0.4
what’s sick about it?? do you know how long it is?? no so how is it sick? i can imagine it your imagination is wrong, very wrong at that. -0.1
it’s saad oh yeah the snow is very beautiful yeah i can imagine the weather in russia is very cool. -0.7
walking outside with lightning and thunder sounds 10x scarier than when you ’re inside play in the rain haha i love listening to thunder! i would def be out there if i wasn’t so sleepy 0.9
also rain(s) no play, rain(s) i’m scared of the thunder 0.5
no beach or running today, stupid weather. lets play in the rain. “yolo” isnt that the vibe?? i would but its thundering to 0.2
then its raining… go outside and play in the rain lol1 if i had a buddy to go play in it with , i would . it’s no fun alone 0.1
my only official day off this week… now what to do play in the rain! yeah right haha. i wan na go swimming 0.1
now playing silence nice music i ’m playing rain ohh i love that song -0.2
yeah it is, i just dont see the point of the first 2 laps me either it would be sick if there was like 5 laps exactly and if there were more riders so they drafted and crashed each other 0.3
yippee how many 711 are there like 5! sweaarr i thought there were more 0.8
i’ve got that muck off dry shower stuff to try out, hope it’s muddy now. i think if we go for it we cud get 5 laps in an hour! i’ve heard its a 30 min lap. but that was from a dh rider! 0.6
how much are they ? like $5 i thought they were more then that but ok 0.4
igot you, wen iroll up ill pass that shit. iaint stingy. me either!! they more the marrier. -0.3
i dont eat gravy on biscuits. me either. well then! why were the biscuits needed? -0.8
Table 1: Sample reference sets created by our multi-reference extraction algorithm, along with the weights used in ΔBleu. Triples from which additional references are extracted are in italics. Boxed sentences are in our multi-reference dev set.

To create the multi-reference ΔBleu dev and test sets used in this study, we adapted and extended the methodology of Sordoni et al. [Sordoni et al.2015]. From a corpus of 29M Twitter context-message-response conversational triples, we randomly extracted approximately 33K candidate triples that were then judged for conversational quality on a 5-point Likert-type scale by 3 crowdsourced annotators. Of these, 4232 triples scored an average of 4 or higher; these were randomly binned to create seed dev and test sets of 2118 and 2114 triples, respectively. Note that the dev set is not used in the experiments of this paper, since ΔBleu and IBM Bleu are metrics that do not require training. However, the dev set is released along with the test set in the dataset release accompanying this paper.

We then sought to identify candidate triples in the 29M corpus for which both message and response are similar to the original messages and responses in these seed sets. To this end, we employed an information retrieval algorithm with a bag-of-words BM25 similarity function [Robertson et al.1995], as detailed in [Sordoni et al.2015], to extract the top 15 responses for each message-response pair. Unlike [Sordoni et al.2015], we further appended the original messages (as if parroted back). The new triples were then scored for quality of the response in light of both context and message by 5 crowdsourced raters each, on a 5-point Likert-type scale. (For this work, we sought 2 additional annotations of the seed responses for consistency with the mined responses; as a result, scores for some seed responses slipped below our initial threshold of 4. Nonetheless, these responses were retained.) Crucially, and again in contradistinction to [Sordoni et al.2015], we did not impose a score cutoff on these synthetic multi-reference sets. Instead, we retained all candidate responses and scaled their scores into [-1, +1].
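As a rough illustration of this last step, a linear rescaling of the averaged 5-point ratings might look as follows. The exact mapping used to produce the released weights is not specified above, so the endpoint choices here (1 maps to -1, 5 maps to +1) are an assumption.

```python
def likert_to_weight(ratings, low=1.0, high=5.0):
    """Map the mean of 5-point Likert ratings onto [-1, +1].

    Assumes a simple linear rescaling (1 -> -1, 3 -> 0, 5 -> +1); the actual
    mapping used to produce the released weights may differ.
    """
    mean = sum(ratings) / len(ratings)
    return 2.0 * (mean - low) / (high - low) - 1.0

# Example: five raters giving [4, 5, 4, 4, 5] would yield a weight of 0.7.
```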

Table 1 presents representative multi-reference examples (from the dev set) together with their converted scores. The context and messages associated with the supplementary mined responses are also shown for illustrative purposes to demonstrate the range of conversations from which they were taken. In the table, negative-weighted mined responses are semantically orthogonal to the intent of their newly assigned context and message. Strongly negatively weighted responses are completely out of the ballpark (“the weather in Russia is very cool”, “well then! Why were the biscuits needed?”); others are a little more plausible, but irrelevant or possibly topic changing (“ohh I love that song”). Higher-valued positive-weighted mined responses are typically reasonably appropriate and relevant (even though extracted from a completely unrelated conversation), and in some cases can outscore the original response, as can be seen in the third set of examples.

4.2 Human Evaluation of System Outputs

Responses generated by the 7 systems used in this study on the 2114-triple test set were hand evaluated by 5 crowdsourced raters each on a 5-point Likert-type scale. From these 7 systems, 12 system pairs were evaluated, for a total of about 126K pairwise ratings (12 system pairs × 2114 triples × 5 raters). Here too, raters were asked to evaluate responses in terms of their relevance to both context and message. Outputs from different systems were randomly interleaved for presentation to the raters. We obtained human ratings on the following systems:
  • Phrase-based MT: A phrase-based MT system similar to [Ritter et al.2011], whose weights have been manually tuned. We also included four variants of that system, which we tuned with MERT [Och2003]. These variants differ in their number of features, and augment [Ritter et al.2011] with the following phrase-level features: edit distance between source and target, cosine similarity, Jaccard index and distance, length ratio, and DSSM score [Huang et al.2013].
  • RNN-based MT: the log-probability according to the RNN model of [Sordoni et al.2015].
  • Baseline: a random baseline.

While ΔBleu relies on human qualitative judgments, it is important to note that the human judgments on multi-references (§4.1) and those on system outputs (§4.2) were collected completely independently. We also note that the set of systems listed above specifically does not include a retrieval-based model, as this might have introduced spurious correlation between the two datasets (§4.1 and §4.2).

5 Setup

We use two rank correlation coefficients, Kendall's τ and Spearman's ρ, to assess the level of correlation between human qualitative ratings (§4.2) and automated metric scores. More formally, we compute each correlation coefficient on a series of paired observations, where each observation consists of the difference in automatic metric scores and the difference in qualitative ratings between two given systems on a given subset of the test set. (For each observation, we randomize the order in which the two systems' outputs are presented to the raters in order to avoid any positional bias.) While much prior work assesses automatic metrics for MT and other tasks [Lavie and Agarwal2007, Hodosh et al.2013] by computing correlations on observations consisting of single-sentence system outputs, it has been shown (e.g., in the MetricsMATR evaluation [Przybocki et al.2008]) that correlation coefficients increase significantly as observation units become larger. For instance, corpus-level or system-level correlations tend to be much higher than sentence-level correlations, and [Graham and Baldwin2014] show that Bleu is competitive with more recent and advanced metrics when assessed at the system level. (We do not intend to minimize the benefit of a metric that is competitive at the sentence level, which would be particularly useful for detailed error analyses. However, our main goal is to reliably evaluate generation systems on test sets of thousands of sentences, in which case any metric with good corpus-level correlation, such as Bleu [Graham and Baldwin2014], is sufficient.)

Therefore, unless stated otherwise, we define our observation unit size to be 100 sentences (responses). (Enumerating all possible ways of assigning sentences to observations would cause a combinatorial explosion; instead, for all our results we sample 1K assignments and average correlation coefficients over them, using the same 1K assignments across all metrics. These assignments are done in such a way that all sentences within an observation belong to the same system pair.) We evaluate the human side by averaging human ratings over the sentences of a unit, and the metric side by computing metric scores on the same set of sentences. (We refrained from using larger units, as larger observation units reduce the total number of units; this would have caused confidence intervals to be so wide as to make this study inconclusive.)

We compare three different metrics: Bleu, ΔBleu, and sentence-level Bleu (sBleu). The last computes sentence-level Bleu scores [Nakov et al.2012] and averages them over the sentences of each unit (akin to macro-averaging). Finally, unless otherwise noted, all versions of Bleu use n-gram order up to 2 (Bleu-2), as this achieves better correlation for all metrics on this data.
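To make the setup concrete, here is a rough Python sketch of the unit-level correlation computation, simplified to a single system pair and using SciPy's rank-correlation routines. All function and parameter names are ours, and the actual experimental code may differ.

```python
import random
from scipy.stats import spearmanr, kendalltau

def unit_correlations(metric_unit_score_a, metric_unit_score_b, human_delta,
                      n_sentences, unit_size=100, n_samples=1000, seed=0):
    """Average Spearman/Kendall correlation over randomly sampled observation units.

    metric_unit_score_a / _b: callables mapping a list of sentence indices to the
    metric score (e.g. corpus-level Bleu or Delta-BLEU) of system A / B on that unit.
    human_delta: per-sentence difference in mean human rating between A and B.
    """
    rng = random.Random(seed)
    idx = list(range(n_sentences))
    rhos, taus = [], []
    for _ in range(n_samples):
        rng.shuffle(idx)
        xs, ys = [], []
        for start in range(0, n_sentences - unit_size + 1, unit_size):
            unit = idx[start:start + unit_size]
            xs.append(metric_unit_score_a(unit) - metric_unit_score_b(unit))
            ys.append(sum(human_delta[i] for i in unit) / unit_size)
        rho, _ = spearmanr(xs, ys)
        tau, _ = kendalltau(xs, ys)
        rhos.append(rho)
        taus.append(tau)
    return sum(rhos) / len(rhos), sum(taus) / len(taus)
```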

6 Results

Metric   refs.          Spearman's ρ          Kendall's τ
Bleu     single         .260 (.178, .337)     .171 (.087, .252)
Bleu     high-scoring   .343 (.265, .416)     .232 (.150, .312)
Bleu     all            .318 (.239, .392)     .212 (.129, .292)
sBleu    single         .265 (.183, .342)     .175 (.091, .256)
sBleu    high-scoring   .330 (.252, .404)     .222 (.140, .302)
sBleu    all            .258 (.177, .336)     .167 (.083, .249)
ΔBleu    single         .280 (.199, .357)     .187 (.103, .268)
ΔBleu    high-scoring   .405 (.331, .474)     .281 (.200, .357)
ΔBleu    all            .484 (.415, .546)     .342 (.265, .415)
Table 2: Correlations with human ratings for IBM Bleu, sentence-level Bleu (sBleu), and ΔBleu, with 95% confidence intervals. Three types of reference sets are compared: the single original reference only, high-scoring references, and all references.
Figure 2: A comparison of Bleu, sentence-level Bleu, and ΔBleu along three dimensions: (A) decreasing the threshold on the reference scores w_{i,j}; (B) increasing the unit size for the correlation study from a single sentence to a size of 100; (C) going from Bleu-1 to Bleu-4 for the different versions of Bleu.

The main results of our study are shown in Table 2. ΔBleu achieves better correlation with human judgments than Bleu when comparing the best configuration of each metric. (This is also the case with a single reference: while ΔBleu and Bleu would have the same correlation if the original references all had a score of 1, it is not unusual for original references to receive ratings below 1.) In the case of Spearman's ρ, the confidence intervals of ΔBleu and Bleu barely overlap, while the interval overlap is larger in the case of Kendall's τ. Correlation coefficients degrade for Bleu as we go from high-scoring references to using all references. This is expected, since Bleu treats all references as equal and has no way of discriminating between them. On the other hand, correlation coefficients increase for ΔBleu after adding lower-scoring references. It is also worth noticing that Bleu and sBleu obtain roughly comparable correlation coefficients. This may come as a surprise, because it has been suggested elsewhere that sBleu correlates much worse with human judgments than corpus-level Bleu [Przybocki et al.2008]. We surmise that, at least for this task and data, the differences in correlation between Bleu and sBleu observed in prior work may be less the result of a difference between micro- and macro-averaging than an effect of different observation unit sizes (as discussed in §5).

Finally, Figure 2 shows how Spearman's ρ is affected along three dimensions of study. In particular, we see that ΔBleu actually benefits from the references with negative ratings. While the improvement is not pronounced, we note that most references have positive ratings; negatively weighted references could have a greater effect if, for example, randomly extracted responses had also been annotated.

7 Conclusions

ΔBleu correlates well with human quality judgments of generated conversational responses, outperforming both IBM Bleu and sentence-level Bleu on this task and demonstrating that it can serve as a plausible intrinsic metric for system development. (An implementation of ΔBleu, the multi-reference dev and test sets, and the human-rated system outputs are released with this paper.) An upfront cost is paid for human evaluation of the reference set, but thereafter the need for further human evaluation during system development is minimized. ΔBleu may also help other tasks that use multiple references for intrinsic evaluation, including image-to-text, sentence compression, and paraphrase generation, and even statistical machine translation. Evaluation of ΔBleu on these tasks awaits future work.


Acknowledgments

We thank the anonymous reviewers, Jian-Yun Nie, and Alan Ritter for their helpful comments and suggestions.


References

  • [Banerjee and Lavie2005] Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for MT evaluation with improved correlation with human judgments. In Proc. of ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72.
  • [Callison-Burch et al.2006] Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Re-evaluating the role of Bleu in machine translation research. In EACL, pages 249–256.
  • [Coughlin2003] Deborah Coughlin. 2003. Correlating automated and human assessments of machine translation quality. In Proc. of MT Summit IX, pages 63–70.
  • [Doddington2002] George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In Proc. of HLT, pages 138–145.
  • [Dreyer and Marcu2012] Markus Dreyer and Daniel Marcu. 2012. HyTER: Meaning-equivalent semantics for translation evaluation. In Proc. of HLT-NAACL, pages 162–171.
  • [Graham and Baldwin2014] Yvette Graham and Timothy Baldwin. 2014. Testing for significance of increased correlation with human judgment. In Proc. of EMNLP, pages 172–176.
  • [Graham et al.2015] Yvette Graham, Timothy Baldwin, and Nitika Mathur. 2015. Accurate evaluation of segment-level machine translation metrics. In Proc. of NAACL-HLT, pages 1183–1191.
  • [Hodosh et al.2013] Micah Hodosh, Peter Young, and Julia Hockenmaier. 2013. Framing image description as a ranking task: Data, models and evaluation metrics. J. Artif. Int. Res., 47(1):853–899.
  • [Huang et al.2013] Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proc. of the 22nd ACM International Conference on Information & Knowledge Management, pages 2333–2338.
  • [Lavie and Agarwal2007] Alon Lavie and Abhaya Agarwal. 2007. Meteor: An automatic metric for MT evaluation with high levels of correlation with human judgments. In Proc. of the Workshop on Statistical Machine Translation (StatMT), pages 228–231.
  • [Nakov et al.2012] Preslav Nakov, Francisco Guzman, and Stephan Vogel. 2012. Optimizing for sentence-level BLEU+1 yields short translations. In Proc. of COLING.
  • [Och2003] Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proc. of ACL, pages 160–167.
  • [Papineni et al.2002] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proc. of ACL, pages 311–318.
  • [Przybocki et al.2008] M. Przybocki, K. Peterson, and S. Bronsart. 2008. Official results of the NIST 2008 ”Metrics for MAchine TRanslation” challenge (MetricsMATR08).
  • [Ritter et al.2011] Alan Ritter, Colin Cherry, and William B. Dolan. 2011. Data-driven response generation in social media. In Proc. of EMNLP, pages 583–593.
  • [Robertson et al.1995] Stephen E Robertson, Steve Walker, Susan Jones, et al. 1995. Okapi at TREC-3. In TREC.
  • [Sordoni et al.2015] Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Meg Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proc. of NAACL-HLT.
  • [Sun and Zhou2012] Hong Sun and Ming Zhou. 2012. Joint learning of a dual SMT system for paraphrase generation. In ACL, pages 38–42.
  • [Vedantam et al.2015] Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. CIDEr: Consensus-based image description evaluation. In CVPR.