Generating text is a core part of many NLP tasks such as image captioning (Lin et al., 2014), open-domain dialogue (Sordoni et al., 2015), story generation (Roemmele, 2016), and summarization (Nallapati et al., 2016). However, proper evaluation of natural language generation has proven difficult (Liu et al., 2016; Novikova et al., 2017; Chaganty et al., 2018). A good evaluation metric should not only capture the quality of generation, but also the diversity of generation, which is especially crucial for creative, open-ended tasks like dialogue or story generation.
Human evaluation, which is often viewed as the gold standard, captures quality but fails to capture diversity. As an example, for language modeling, a model that directly plagiarizes sentences from the training set would pass the human quality bar but would have zero generalization ability and thus inadequate diversity. On the other hand, statistical evaluation—i.e., perplexity on a reference test set—captures diversity, as it ensures a model must assign reasonable probability to novel sentences, but perplexity provides an inadequate measure of quality (Theis et al., 2015). For example, modifying a perfect model by removing its ability to generate even a single test sentence results in infinite perplexity even though the model is still near-perfect. Automatic metrics such as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) capture quality better than perplexity but still correlate poorly with human evaluation and fail to capture diversity (Novikova et al., 2017; Chaganty et al., 2018).
Existing approaches to combining statistical and human evaluation have been ad-hoc, leading to misleading performance measures. A common approach is to measure diversity through the perplexity of a probabilistic model and quality through human evaluation on beam-searched outputs. This gives the illusion that a single model is high-quality and diverse, while the reality is that it shows we can have either a diverse model (when sampling from the distribution used to compute perplexity) or a high-quality model (when beam-searching).
In this paper, we define the idealized evaluation metric as twice the error of the optimal discriminator for classifying sentences as coming from the reference distribution or the model (Section 2). If a model generates gibberish (low quality), the optimal discriminator can classify these accurately as coming from the model. If the reference distribution contains sentences the model cannot generate (low diversity), the optimal discriminator can classify these accurately as coming from the reference.
Unfortunately, the optimal discriminator is unavailable. Human discriminators cannot capture diversity effectively, and learned discriminators—e.g., from a Generative Adversarial Network (Goodfellow et al., 2014) or one trained on human judgments (Lowe et al., 2017)—are too unreliable to use for rigorous evaluation.
Our key result (Section 3) is based on the observation that the optimal classifier depends only on two numbers: the probability of a sentence under the model and the probability under the reference distribution. The former can be computed directly from the model, and we show that the latter can be well-approximated by human judgment scores. The resulting two-dimensional space is illustrated in Figure 1. We apply a simple k-nearest neighbor classifier in this space and define Human Unified with Statistical Evaluation (HUSE) as twice the leave-one-out error of this classifier.
We apply HUSE to four natural language generation tasks (Section 5): language modeling, chitchat dialogue, story generation, and summarization. First, we show that human evaluation alone is insufficient to discriminate model generations from the references, leading to inflated estimates of model performance. In contrast, HUSE is able to reveal deficiencies of current models. We also show that common techniques for improving sample quality such as annealing actually increase distinguishability between the model and reference due to losses in diversity.
2 Optimal Discriminator
Consider a natural language generation task where the model is given a context x (e.g., a dialogue history) drawn from some prior p(x) and must output a distribution over possible sentences p_model(y | x). We define an idealized evaluation metric based on whether p_model is close to a reference distribution p_ref(y | x), which is generally human-generated.[1] While some tasks only care about quality and thus only require p_model to place mass on some high quality y, we demand that p_model places mass on all high quality y as given by p_ref. This diversity is important for open-ended tasks such as dialogue or story generation. Also note that p_ref need not be the human distribution, or match the training distribution. It can be defined as the distribution given by experts.
Specifically, consider a random variable y drawn from either the reference or the model based on an indicator z ~ Bernoulli(1/2):

y | x, z = 1 ~ p_ref(y | x),    y | x, z = 0 ~ p_model(y | x).    (1)

Define L* to be twice the lowest possible error over any discriminator f that attempts to determine z based on x and y:

L* := 2 inf_f P[f(x, y) ≠ z].    (2)
L* measures the similarity between p_model and p_ref; it is 0 if p_model and p_ref are disjoint and 1 if they are identical.[2] Note that L* is a linear function of the total variation distance: L* = 1 - TV(p_model, p_ref). See Appendix A.1 for details.
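For discrete distributions, the relationship between L* and total variation can be sketched in a few lines. The snippet below is illustrative only (plain probability dictionaries standing in for p_model and p_ref; the function names are ours, not from the paper's code):

```python
def total_variation(p, q):
    """Total variation distance between two discrete distributions,
    each given as a dict mapping outcomes to probabilities."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

def l_star(p_model, p_ref):
    """Twice the optimal discriminator error: L* = 1 - TV(p_model, p_ref)."""
    return 1.0 - total_variation(p_model, p_ref)

# Identical distributions are indistinguishable (L* = 1);
# disjoint distributions are perfectly distinguishable (L* = 0).
p_ref = {"a": 0.5, "b": 0.5}
p_same = {"a": 0.5, "b": 0.5}
p_disjoint = {"c": 0.5, "d": 0.5}
```

A partially overlapping pair of distributions lands strictly between the two extremes.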
Unfortunately, L* is unattainable because it requires computing the optimal discriminator. In the spirit of the Turing Test, we could consider using the error rate of a human discriminator instead, as human judgment is often considered the gold standard for evaluation. However, while humans might have knowledge of p_ref, they do not have full knowledge of p_model and thus would have difficulty determining which sentences a model cannot generate.
As a concrete example, suppose p_ref placed a uniform distribution over some set S. Without knowledge of p_model, the most sensible discriminator is to predict z = 1 (reference) when y ∈ S. This discriminator achieves the same classification error of 0.5 for both the perfect model p_model = p_ref and a degenerate model which can only return a single y ∈ S. We could try to reveal p_model to humans by showing multiple samples simultaneously, but this is expensive and, as we will later see, unnecessary.
Another option is to learn f over an expressive class of functions such as neural networks on data sampled from p_ref and p_model. This is analogous to learning the discriminator in a Generative Adversarial Network (GAN) (Goodfellow et al., 2014) or learning an evaluation metric from human judgments (Lowe et al., 2017). However, because (x, y) are high-dimensional objects, training a good classifier is extremely difficult (and perhaps not significantly easier than solving the original generation problem). Indeed, learned evaluation metrics do not generalize very well (Lowe et al., 2017; Chaganty et al., 2018). Unlike these approaches, which seek to replace human evaluation, our focus will instead be on combining human and automatic statistical evaluation to estimate the optimal classifier error.
3 Human Unified with Statistical Evaluation (HUSE)
For any feature map φ that maps (x, y) to a feature vector, define the evaluation score L(φ) to be twice the error rate of the optimal discriminator that depends on (x, y) only through φ:

L(φ) := 2 inf_f P[f(φ(x, y)) ≠ z].    (3)
Note that the evaluation score given by a feature map φ optimizes over all functions that depend on φ (Equation 3). Thus, the more information φ contains, the lower L(φ) is. This has two implications: First, any feature map φ yields an (optimistic) upper bound on L* (Equation 2), meaning that L(φ) might be able to detect when a model is poor but cannot certify that it is good. Second, adding features to φ can only improve this bound.
3.1 Two features suffice
Let us consider the following two-dimensional feature map:

φ_opt(x, y) := [p_model(y | x), p_ref(y | x)].    (4)

From the arguments above, it is clear that L(φ_opt) ≥ L*, but perhaps more surprisingly, we actually have equality: the two-dimensional feature map φ_opt achieves the optimal discriminator score, L(φ_opt) = L*.
Proof: We compute the true posterior over z given (x, y). Since P(z = 1) = P(z = 0) = 1/2, y | x, z = 1 ~ p_ref, and y | x, z = 0 ~ p_model, by Bayes' rule:

P(z = 1 | x, y) = p_ref(y | x) / (p_ref(y | x) + p_model(y | x)).

The optimal discriminator simply predicts z = 1 if p_ref(y | x) > p_model(y | x) and z = 0 otherwise. In other words, the decision boundary is given by p_ref(y | x) = p_model(y | x). ∎
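This Bayes decision rule can be checked numerically for discrete distributions. The sketch below (illustrative names, not from the paper's code) computes the error of the rule "predict reference exactly when p_ref(y) > p_model(y)" under a 50/50 prior on z; doubling it recovers L* = 1 - TV:

```python
def bayes_error(p_model, p_ref):
    """Error of the rule 'predict z = 1 iff p_ref(y) > p_model(y)' when
    z ~ Bernoulli(1/2). Each outcome contributes half the smaller of the
    two probabilities, since that mass is necessarily misclassified."""
    support = set(p_model) | set(p_ref)
    return 0.5 * sum(min(p_model.get(y, 0.0), p_ref.get(y, 0.0))
                     for y in support)

# Made-up distributions for illustration.
p_ref = {"a": 0.7, "b": 0.2, "c": 0.1}
p_model = {"a": 0.1, "b": 0.2, "d": 0.7}
# Here 2 * bayes_error(p_model, p_ref) equals 1 - TV(p_model, p_ref) = 0.3.
```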
More generally, we can obtain this equality with a wider class of φ. The equality holds exactly for any invertible transformation of φ_opt (Appendix Corollary A.2), and approximately for any φ which has high mutual information with z (Appendix Theorem 1). This means that we can substitute the exact probabilities with noisy, possibly un-normalized estimates and still obtain accurate estimates of L*.
3.2 HUSE features
While we can directly compute p_model(y | x) for many probabilistic models, p_ref(y | x) is unattainable, so L(φ_opt) is not computable. However, the wisdom of the crowds (Surowiecki, 2004; Ungar et al., 2012) suggests that pooling together the judgments of many humans can often produce surprisingly reliable estimates of real-world probabilities such as p_ref(y | x), even if no individual human is particularly reliable. With this motivation, we ask Amazon Mechanical Turk workers to rate a sentence from 1–5 based on how "typical" it is, as a way to estimate p_ref(y | x) (see Appendix A.3 for more details). We define HJ(x, y) to be the average response over 20 crowdworkers. Figure 2 shows that for a language modeling task on the Reddit corpus,[3] HJ(x, y) strongly correlates with the actual log-frequency of y in the corpus. (We used the Reddit corpus due to crowdworker familiarity, corpus size, and short average sentence length, which results in a wide range of sentence frequencies.) The high correlation suggests that human judgments HJ are a good surrogate for log p_ref.
In addition, we found that rather than using the model probability p_model(y | x) directly as a feature, normalizing by sentence length yielded lower (tighter) scores. We therefore define the HUSE features as follows:

φ_huse(x, y) := [ (log p_model(y | x)) / len(y),  HJ(x, y) ],

and define the (population) HUSE score as HUSE := L(φ_huse).
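As a concrete sketch, the two HUSE features for a single (context, sentence) pair might be computed as follows. The function name and inputs are hypothetical: `log_p_model` is the model's total log-probability of the sentence, `tokens` its tokenization, and `judgments` the raw 1–5 crowdworker ratings:

```python
def huse_features(log_p_model, tokens, judgments):
    """phi_huse = (length-normalized model log-probability,
    mean human typicality judgment HJ)."""
    norm_log_p = log_p_model / len(tokens)   # per-token log-probability
    hj = sum(judgments) / len(judgments)     # average crowdworker rating
    return (norm_log_p, hj)
```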
3.3 Guarantees derived from HUSE
We now show that the HUSE score satisfies two nice properties: (i) HUSE does at least as well as human evaluation and (ii) a low HUSE score is sufficient to show that a model is far from the reference distribution.
To show (i), consider a feature map that only includes human evaluation: φ_hj(x, y) := [HJ(x, y)]. Because φ_huse also incorporates human evaluation, L(φ_huse) is always tighter (lower) than the human discriminator error L(φ_hj):

Proposition 1 (Relationship between HUSE, human evaluation, and optimal scores).

L* ≤ L(φ_huse) ≤ L(φ_hj).
4 Evaluating models with HUSE
In this section, we show how to estimate the error rate L(φ) from finite data (Section 4.1). We then show how the HUSE estimate can be decomposed into a score that measures quality (HUSE-Q) and a score that measures diversity (HUSE-D), which allows us to study quality-diversity tradeoffs (Section 4.2).
4.1 Learning a discriminator
For any feature map φ, we show how to produce an estimate of L(φ). Fix n contexts x_1, ..., x_n. First, we draw n examples y_1, ..., y_n from the reference distribution p_ref, which are usually human-generated sentences from a test set. We also draw n examples y'_1, ..., y'_n from the model p_model we wish to evaluate. Next, for each of the 2n examples, we compute the feature map φ, which might involve evaluating the model probability p_model(y | x) as well as collecting human judgments HJ(x, y) from crowdworkers.

Finally, we compute the leave-one-out error of a classifier that tries to predict whether a given example comes from the reference distribution (z = 1) or the model (z = 0).
The classification problem for HUSE is two-dimensional, which allows us to accurately estimate error rates using a k-nearest neighbors classifier. We opt for nearest neighbors classifiers as they are simple, require no training, and can asymptotically capture arbitrary continuous decision boundaries. Specifically, we set k = 16 and define neighbors using L2 distances over the feature vectors φ(x, y), scaled componentwise to have unit variance. The overall procedure for computing the estimate L̂(φ) is formally defined in Algorithm 1.
4.2 Quality-diversity decomposition
We now define the (empirical) HUSE score using the feature map φ_huse:

HUSE := L̂(φ_huse).

We define the quality component of HUSE (HUSE-Q) similarly, using human judgments alone:

HUSE-Q := L̂(φ_hj).

Since humans can detect quality defects in a model, any increase in discriminator error from removing the model probability feature must come from a model's lack of diversity. Therefore, we define the diversity component (HUSE-D) as follows:

HUSE-D := 1 + HUSE - HUSE-Q,

which implies the decomposition HUSE-D + HUSE-Q = 1 + HUSE. As long as the discriminators are non-degenerate (obtaining better performance than chance, with HUSE ≤ HUSE-Q), all scores are contained in [0, 1]. Here, HUSE-D = 1 implies that the model suffers no diversity defects, while HUSE-D = 0 indicates that the examples could be discriminated perfectly due to a lack of diversity.
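The bookkeeping is simple enough to state directly; below is a hypothetical check of the decomposition identity under made-up scores:

```python
def huse_diversity(huse, huse_q):
    """HUSE-D = 1 + HUSE - HUSE-Q, so that HUSE-D + HUSE-Q = 1 + HUSE."""
    # Non-degeneracy: adding the model-probability feature can only
    # lower the discriminator error, so HUSE <= HUSE-Q.
    assert huse <= huse_q, "degenerate discriminators"
    return 1.0 + huse - huse_q
```

For example, a model with HUSE = 0.5 and HUSE-Q = 0.8 has HUSE-D = 0.7: its samples look fairly human (quality near 0.8) but it covers the reference distribution poorly.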
[Table 1: scores for summarization, story generation, chit-chat dialogue, and language modeling (LM).]
5.1 Experimental setup
We use HUSE to evaluate three different types of single-sentence natural language generation tasks: (i) unconditional and high entropy (language modeling); (ii) conditional and high entropy (story generation, chit-chat dialogue); and (iii) conditional and low entropy (summarization). We show that HUSE provides a direct and interpretable measure of diversity on high-entropy tasks, while also serving as a useful model diagnostic on low-entropy ones.
The four tasks along with the datasets and models are as follows:
Story generation: Last-sentence generation for ROC stories (Mostafazadeh et al., 2016), consisting of 96,198 examples of partially written four-sentence stories as input, and a single sentence which completes the story as the target. We use a standard OpenNMT model with global attention (Klein et al., 2017).

Language modeling: One billion word benchmark pre-trained language model from Jozefowicz et al. (2016). The task consists of generating a single sentence from the one billion word newswire text distribution.

Chit-chat dialogue: Two-turn chit-chat dialogue dataset consisting of 37.3 million comment-response pairs from Reddit (Appendix A.4). Comments are generally short (5–15 tokens) and cover a single topic (e.g., given "wow how did i not notice that", the response is "you were focusing on other things its understandable"). We train a convolutional model using fairseq (Gehring et al., 2017).
For all tasks, we train neural models and evaluate their diversity-quality tradeoffs as we change the decoding scheme for generation. Our primary evaluation concerns tradeoffs involving temperature annealing, a generation technique applicable to any probabilistic model that generates words sequentially. In temperature-annealed models, we sample a word w proportional to p(w)^(1/t), where p(w) is the model probability of w given the previous words and t is the temperature parameter. We excluded beam search since it qualitatively behaves like temperature annealing with very low temperatures and is extremely under-diverse.
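Temperature annealing of a next-word distribution can be sketched as follows. This is a minimal illustration over a made-up vocabulary; real decoders apply the same rescaling to the softmax output at each step:

```python
import random

def anneal(probs, t):
    """Rescale next-word probabilities p -> p**(1/t) and renormalize.
    t = 1 recovers direct sampling; t -> 0 approaches greedy decoding."""
    weights = {w: p ** (1.0 / t) for w, p in probs.items()}
    total = sum(weights.values())
    return {w: v / total for w, v in weights.items()}

def sample_word(probs, t, rng=random.Random(0)):
    """Draw one word from the temperature-annealed distribution."""
    annealed = anneal(probs, t)
    words = list(annealed)
    return rng.choices(words, weights=[annealed[w] for w in words], k=1)[0]

next_word = {"the": 0.6, "a": 0.3, "cat": 0.1}
```

Lowering t concentrates mass on the most probable words, which is exactly the quality-for-diversity trade discussed above.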
As a non-neural baseline, we also consider retrieval-based models built on Apache Solr for a few tasks. For this approach, we retrieve the single most relevant response from the training set using the BM25 similarity metric on the inputs. Such models are known to perform well on tasks with complex outputs, such as program generation (Hayati et al., 2018; Hashimoto et al., 2018) and style transfer (Li et al., 2018).
For cost reasons, we did not measure certain combinations of task and generation mechanisms. We did not measure retrieval for chit-chat dialogue, as we observed its outputs were lower quality than a low-temperature neural model. We also did not anneal language models, as the generation quality from the language model was already high, and our goal was to show that they achieved high HUSE. Our set of measurements, while not comprehensive, generally covers the available quality-diversity tradeoffs for conditional tasks.
Finally, we collect human judgments as per Section 4.1 where we query 20 Amazon Mechanical Turk crowdworkers for typicality ratings on 100 reference and 100 model sentences. Since our models generate UNK (unknown and out-of-vocabulary) tokens, we instructed crowdworkers to treat UNK tokens as rare, but appropriate words for the context.
5.2 Overall results
The HUSE scores across the four tasks vary widely. Table 1 shows that single-sentence language models are nearly indistinguishable from the reference, with a HUSE score close to one and thus an implied discriminator error close to chance.
In contrast, both summarization and dialogue are highly distinguishable (low HUSE), with relatively low quality when sampled directly at temperature t = 1.0. Human evaluation alone (HUSE-Q) would suggest that using temperature annealing to emphasize high-probability outputs substantially improves the model, as HUSE-Q rises for both summarization and dialogue. However, we find that this increase in sample quality comes at the cost of diversity, with HUSE-D dropping correspondingly. Examining the achievable HUSE and diversity tradeoffs in Figure 3 shows that mechanisms such as annealing which improve sample quality actually degrade HUSE due to severe losses in diversity.
We find that all generation schemes and models are inadequate for story generation on ROC stories. The directly sampled model is very easily distinguishable by a human, giving a low HUSE-Q and a correspondingly low discriminator error. The retrieval model improves overall distinguishability, but this comes at the expense of diversity.
Finally, we observe that directly sampling from the model is always diverse. This suggests that human evaluation is an appropriate evaluation for generation systems that are directly sampled (rather than beam-searched).
5.3 Model error analysis with HUSE
Since HUSE is estimated from a two-dimensional classification problem, we can directly visualize the classification problem to understand defects in both model quality and diversity.
Figure 4 shows both reference points (blue squares) and model points (red circles) for the summarization task. The shaded areas indicate the decision boundary of the k-nearest neighbor classifier.
At temperature t = 1.0, we find that the classification boundary is mostly horizontal, implying that human judgment alone can distinguish model outputs from references. There is a cluster of sentences with high HJ and high p_model which are essentially indistinguishable. Examining the samples in this top-right region reveals that they are news stories with short headlines, such as "Nadal pulls out of Sydney International", which can be reliably generated even at t = 1.0. However, the model frequently generates low-quality samples that can easily be distinguished, such as "two new vaccines in the poor countries were effective against go-it-alone study says" (Table 2).
At lower temperatures, the boundary shifts towards becoming diagonal. Although the distribution is no longer directly separable on human judgment alone, the two distributions are clearly separable with the inclusion of the model probability p_model.
Using Figure 4, we can identify individual examples which were correctly and incorrectly classified based on p_model and HJ. Table 2 shows examples of both quality failures and diversity failures identified by HUSE. For example, the diversity-failure rows show that the summarization model has an extremely low probability of generating some reference sentences ("NFL's Bills shake up front office") and is thus under-diverse. Closer examination of the model shows that the probability of generating "front office" is low, since it is an unusual way to refer to the president and general manager. Improving these models on the diversity failures will require that the model understand more subtle paraphrases. We can also identify model successes, where the model outputs are indistinguishable from the reference in terms of quality ("Agassi bows out of Australian Open after injury") and the model assigns high probability to the reference ("Agassi withdraws from Australian Open").
|Quality failure|
|Context:||Two new vaccines have been shown effective against rotavirus, which is responsible for a half-million infant deaths in poor countries each year, research studies published Wednesday said.|
|Model||Two new vaccines in the poor countries were effective against go-it-alone study says||log p: -2.3||HJ: 2.6|
|Reference||New vaccines for key UNK virus shown effective||log p: -4.0||HJ: 4.3|
|Diversity failure|
|Context:||The Buffalo Bills sacked Tom Donahoe as president and general manager on Wednesday, fulfilling expectations of a shake-up after another failure to make the National Football League playoffs.|
|Model||Bills sack UNK as president GM and general manager||log p: -0.9||HJ: 4.3|
|Reference||NFL's Bills shake up front office.||log p: -5.1||HJ: 4.3|
|Model is indistinguishable|
|Context:||US veteran and eight-time Grand Slam winner Andre Agassi has withdrawn from this month's Australian Open due to a nagging ankle injury, his management team announced Thursday.|
|Model||Agassi bows out of Australian Open after injury.||log p: -1.4||HJ: 5.3|
|Reference||Agassi withdraws from Australian Open.||log p: -0.3||HJ: 4.9|
5.4 HUSE stability
Since HUSE depends on human crowdworker annotations, one might ask whether it is possible to reduce either the number of annotated examples or the number of distinct crowdworkers per example. We show that substantially fewer annotations are needed for low-quality models.
Figure 5 shows the result of subsampling our original data of 200 sentences and 20 crowdworkers and estimating HUSE. First, we find that using 50 test set examples (Figure 5, left) is often sufficient to give accurate estimates of HUSE. Next, we find that the necessary number of crowdworkers per example depends heavily on the task. Easily distinguishable tasks (story generation), require only 10 crowdworkers, while less distinguishable tasks (summarization) require more than 20 crowdworkers to obtain accurate estimates.
6 Related work
The current state of NLG evaluation.
Existing approaches to NLG evaluation use a hodgepodge of quality and diversity measures. Out of the 26 NLG papers at ACL 2018, six perform only human evaluation, fourteen measure human evaluation and a diversity metric such as perplexity or n-gram diversity, and six do not evaluate using human judgments.
While perplexity and n-gram counts can in principle evaluate diversity, their practical use suffers from serious drawbacks. When human evaluation and perplexity are both reported, they are almost always computed on different decoders: human evaluations are done on beam-searched output, while perplexity is computed on the softmax outputs. This makes it appear as if the models can simultaneously generate high-quality outputs while also being diverse, when in fact they can only do one at a time, depending on whether they sample or run beam search.
On the other hand, n-gram diversity was proposed by Li et al. (2016) to identify models with the generic utterance problem, where models repeat phrases such as "I don't know". Unfortunately, n-gram diversity is computed across contexts by counting the number of unique n-grams generated, and so does not measure a model's ability to generate multiple valid utterances for any single context. In particular, a model which only outputs a single utterance per context (e.g., via memorization or retrieval) can still have high n-gram diversity as long as the memorized sentences differ across contexts.
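For reference, a corpus-level distinct-n metric in the style of Li et al. (2016) can be sketched as below (our illustrative implementation); note that it rewards variation across contexts, not within a context:

```python
def distinct_n(sentences, n):
    """Fraction of n-grams that are unique across all generated
    sentences, each given as a list of tokens."""
    grams = [tuple(toks[i:i + n])
             for toks in sentences
             for i in range(len(toks) - n + 1)]
    return len(set(grams)) / len(grams) if grams else 0.0

# A retrieval system returning one fixed sentence per context scores
# perfectly, despite zero within-context diversity; a model that always
# says "i dont know" scores near zero.
one_per_context = [["good", "morning"], ["see", "you"], ["nice", "try"]]
generic = [["i", "dont", "know"]] * 10
```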
Finally, all existing diversity measures are computed separately from human evaluation. This results in two incomparable evaluation metrics, which prevent us from reasoning about tradeoffs between diversity and quality. In contrast, HUSE allows us to make precise statements about the tradeoffs between model quality and diversity because it is a single metric which decomposes into diversity and quality terms.
Related evaluations of diversity.
The importance of diverse responses has previously been acknowledged for summarization (Nenkova et al., 2007) and information retrieval (Clarke et al., 2008). Our work differs in considering a single evaluation measure that captures both quality and diversity and is applicable to any generation task.
Automated metrics based on n-gram overlap, such as BLEU, METEOR, and ROUGE (Papineni et al., 2002; Lavie and Denkowski, 2009; Lin, 2004), work well for machine translation but do not generalize well to domains with a diverse spectrum of correct responses. While variants (Sun and Zhou, 2012; Galley et al., 2015; Shima and Mitamura, 2011) have adapted such metrics to high-entropy generative environments, they are still significantly inferior to the human judgments they attempt to mimic.
Caccia et al. (2018) recently examined the diversity and quality tradeoffs for different language model architectures on synthetic datasets. However, as their approach relies on measuring log-likelihoods under both the model and reference distributions, it cannot be applied to real data where p_ref is unavailable. Our main conceptual contribution overcomes this by showing that HJ is an acceptable proxy for p_ref.
Sajjadi et al. (2018) also examine diversity and quality (which they call precision and recall) in the context of generative image models. However, they rely on assuming that p_model and p_ref can be estimated accurately using the Fréchet Inception Distance (FID) (Heusel et al., 2017). HUSE avoids such assumptions and instead directly leverages human judgments, resulting in a simple and reliable metric more suitable for use as a gold standard.
Estimating optimal classification error.
Evaluating a model by estimating its optimal classification error has been considered by several earlier works (Olsson et al., 2018; Kannan and Vinyals, 2016; Li et al., 2017; Bruni and Fernandez, 2017; Bowman et al., 2016). However, these methods have focused on classifying sentences directly, which is quite challenging to do reliably. Existing adversarial evaluation methods do not yet reliably outperform human classification (Kannan and Vinyals, 2016; Bruni and Fernandez, 2017). We propose using both human evaluation and model probabilities as part of the adversarial evaluation framework, and demonstrate that the resulting classifier reliably outperforms humans and captures both the sample quality and diversity of a model.
Distributional divergence estimation.
Our proposed evaluation metric is closely related to the total variation distance, which has been studied extensively in the distribution testing literature. It is known that total variation distance estimates have pessimistic minimax estimation rates in high dimensions (Balakrishnan and Wasserman, 2017). Our work overcomes this by utilizing p_model and an estimate of p_ref. Other approaches to distributional testing include the maximum mean discrepancy (MMD) and Wasserstein distances, but these approaches require knowledge of a ground-truth metric or kernel space (Tolstikhin et al., 2016; Singh et al., 2018). Although such divergences are easier to estimate than the total variation distance from samples, the implied convergence rates are still too slow to be practically useful.
In this paper, we demonstrate that the current gold standard of human evaluation does not penalize under-diverse models. To remedy this, we propose HUSE, a general-purpose evaluation strategy which can be applied to any model whose sampling probabilities we can calculate. HUSE is an upper bound on the optimal classification error of distinguishing reference and model-generated text, and never does worse than human classification. HUSE leverages both model probabilities and human judgments, ensuring that models which do well on the metric are both high-quality and diverse.

Our work can be viewed as a "superhuman version" of the classic Turing Test (Turing, 1950). Instead of relying on just a human classifier, we approximate the optimal classifier, which can utilize information about the model in addition to the reference. We also modify the classification problem and seek to identify whether a sample comes from a (potentially superhuman) reference distribution, rather than the human distribution. These two changes lead to tractable, rigorous estimators which can quantify tradeoffs between model quality and diversity on a wide range of generation tasks.
Acknowledgements. We would like to thank Arun Chaganty, Robin Jia, and Peng Qi for extensive comments and feedback on the paper. This work was funded by DARPA CwC program under ARO prime contract no. W911NF-15-1-0462.
Reproducibility. All code, data, and experiments are available on the CodaLab platform at https://worksheets.codalab.org/worksheets/0x88644b5ee189402eb19d39d721d1005c.
- Balakrishnan and Wasserman (2017) S. Balakrishnan and L. Wasserman. 2017. Hypothesis testing for high-dimensional multinomials: A selective review. arXiv preprint arXiv:1712.06120.
- Bowman et al. (2016) S. R. Bowman, L. Vilnis, O. Vinyals, A. M. Dai, R. Jozefowicz, and S. Bengio. 2016. Generating sentences from a continuous space. In Computational Natural Language Learning (CoNLL), pages 10–21.
- Bruni and Fernandez (2017) E. Bruni and R. Fernandez. 2017. Adversarial evaluation for open-domain dialogue generation. In Proceedings of the SIGDIAL 2017 Conference.
- Caccia et al. (2018) M. Caccia, L. Caccia, W. Fedus, H. Larochelle, J. Pineau, and L. Charlin. 2018. Language GANs falling short. arXiv preprint arXiv:1811.02549.
- Chaganty et al. (2018) A. Chaganty, S. Mussmann, and P. Liang. 2018. The price of debiasing automatic metrics in natural language evaluation. In Association for Computational Linguistics (ACL).
- Clarke et al. (2008) C. L. A. Clarke, M. Kolla, G. V. Cormack, O. Vechtomova, A. Ashkan, S. Büttcher, and I. MacKinnon. 2008. Novelty and diversity in information retrieval evaluation. In ACM SIGIR.
- Feder and Merhav (1994) M. Feder and N. Merhav. 1994. Relations between entropy and error probability. IEEE Transactions on Information Theory, 40:259–266.
- Galley et al. (2015) M. Galley, C. Brockett, A. Sordoni, Y. Ji, M. Auli, C. Quirk, M. Mitchell, J. Gao, and B. Dolan. 2015. deltableu: A discriminative metric for generation tasks with intrinsically diverse targets. arXiv preprint arXiv:1506.06863.
- Gehring et al. (2017) J. Gehring, M. Auli, D. Grangier, D. Yarats, and Y. N. Dauphin. 2017. Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122.
- Gehrmann et al. (2018) S. Gehrmann, Y. Deng, and A. M. Rush. 2018. Bottom-up abstractive summarization. In Empirical Methods in Natural Language Processing (EMNLP).
- Goodfellow et al. (2014) I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems (NeurIPS).
- Hashimoto et al. (2018) T. Hashimoto, K. Guu, Y. Oren, and P. Liang. 2018. A retrieve-and-edit framework for predicting structured outputs. In Advances in Neural Information Processing Systems (NeurIPS).
- Hayati et al. (2018) S. A. Hayati, R. Olivier, P. Avvaru, P. Yin, A. Tomasic, and G. Neubig. 2018. Retrieval-based neural code generation. In Empirical Methods in Natural Language Processing (EMNLP).
- Heusel et al. (2017) M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. 2017. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems (NeurIPS).
- Jozefowicz et al. (2016) R. Jozefowicz, O. Vinyals, M. Schuster, N. Shazeer, and Y. Wu. 2016. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410.
- Kannan and Vinyals (2016) A. Kannan and O. Vinyals. 2016. Adversarial evaluation of dialogue models. In NIPS 2016 Workshop on Adversarial Training.
- Klein et al. (2017) G. Klein, Y. Kim, Y. Deng, J. Senellart, and A. M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. arXiv preprint arXiv:1701.02810.
- Lavie and Denkowski (2009) A. Lavie and M. Denkowski. 2009. The meteor metric for automatic evaluation of machine translation. Machine Translation, 23.
- Li et al. (2016) J. Li, M. Galley, C. Brockett, J. Gao, and W. B. Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Human Language Technology and North American Association for Computational Linguistics (HLT/NAACL), pages 110–119.
- Li et al. (2018) J. Li, R. Jia, H. He, and P. Liang. 2018. Delete, retrieve, generate: A simple approach to sentiment and style transfer. In North American Association for Computational Linguistics (NAACL).
- Li et al. (2017) J. Li, W. Monroe, T. Shi, A. Ritter, and D. Jurafsky. 2017. Adversarial learning for neural dialogue generation. arXiv preprint arXiv:1701.06547.
- Lin (2004) C. Lin. 2004. Looking for a few good metrics: ROUGE and its evaluation. In NTCIR Workshop.
- Lin et al. (2014) T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. 2014. Microsoft COCO: Common objects in context. In European Conference on Computer Vision (ECCV), pages 740–755.
- Liu et al. (2016) C. Liu, R. Lowe, I. V. Serban, M. Noseworthy, L. Charlin, and J. Pineau. 2016. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Empirical Methods in Natural Language Processing (EMNLP).
- Lowe et al. (2017) R. Lowe, M. Noseworthy, I. V. Serban, N. Angelard-Gontier, Y. Bengio, and J. Pineau. 2017. Towards an automatic turing test: Learning to evaluate dialogue responses. In Association for Computational Linguistics (ACL).
- Mostafazadeh et al. (2016) N. Mostafazadeh, N. Chambers, X. He, D. Parikh, D. Batra, L. Vanderwende, P. Kohli, and J. Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In North American Association for Computational Linguistics (NAACL).
- Nallapati et al. (2016) R. Nallapati, B. Zhou, C. Gulcehre, B. Xiang, et al. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. arXiv preprint arXiv:1602.06023.
- Nenkova et al. (2007) A. Nenkova, R. J. Passonneau, and K. McKeown. 2007. The pyramid method: Incorporating human content selection variation in summarization evaluation. ACM Transactions on Speech and Language Processing.
- Novikova et al. (2017) J. Novikova, O. Dušek, A. C. Curry, and V. Rieser. 2017. Why we need new evaluation metrics for NLG. In Empirical Methods in Natural Language Processing (EMNLP).
- Olsson et al. (2018) C. Olsson, S. Bhupatiraju, T. Brown, A. Odena, and I. Goodfellow. 2018. Skill rating for generative models. arXiv preprint arXiv:1808.04888.
- Papineni et al. (2002) K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Association for Computational Linguistics (ACL).
- Roemmele (2016) M. Roemmele. 2016. Writing stories with help from recurrent neural networks. In Association for the Advancement of Artificial Intelligence (AAAI).
- Sajjadi et al. (2018) M. S. M. Sajjadi, O. Bachem, M. Lucic, O. Bousquet, and S. Gelly. 2018. Assessing generative models via precision and recall. arXiv preprint arXiv:1806.00035.
- Shima and Mitamura (2011) H. Shima and T. Mitamura. 2011. Diversity-aware evaluation for paraphrase patterns. In Empirical Methods in Natural Language Processing (EMNLP).
- Singh et al. (2018) S. Singh, A. Uppal, B. Li, C. Li, M. Zaheer, and B. Poczos. 2018. Nonparametric density estimation under adversarial losses. In Advances in Neural Information Processing Systems (NeurIPS), pages 246–257.
- Sordoni et al. (2015) A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell, J. Nie, J. Gao, and B. Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In North American Association for Computational Linguistics (NAACL).
- Sun and Zhou (2012) H. Sun and M. Zhou. 2012. Joint learning of a dual SMT system for paraphrase generation. In Association for Computational Linguistics (ACL).
- Surowiecki (2004) J. Surowiecki. 2004. The wisdom of crowds: Why the many are smarter than the few and how collective wisdom shapes business, economies, societies, and nations. Doubleday and Co.
- Theis et al. (2015) L. Theis, A. van den Oord, and M. Bethge. 2015. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844.
- Tolstikhin et al. (2016) I. Tolstikhin, B. K. Sriperumbudur, and B. Scholkopf. 2016. Minimax estimation of maximum mean discrepancy with radial kernels. In Advances in Neural Information Processing Systems (NeurIPS), pages 1930–1938.
- Turing (1950) A. M. Turing. 1950. Computing machinery and intelligence. Mind, 59:433–460.
- Ungar et al. (2012) L. Ungar, B. Mellors, V. Satopää, J. Baron, P. Tetlock, J. Ramos, and S. Swift. 2012. The good judgment project: A large scale test of different methods of combining expert predictions. In Association for the Advancement of Artificial Intelligence (AAAI).
Appendix A
A.1 Relationship between total variation distance and optimal discriminator error
This is a standard result, replicated here for completeness: the total variation distance is related to the optimal discriminator error $L^*$ as follows: $\mathrm{TV}(P, Q) = 1 - 2L^*$.

Fix any pair of distributions $P$ (model) and $Q$ (reference), with samples drawn from each with equal probability $\frac{1}{2}$. Let $S$ be the set of $x$ where $P$ assigns higher probability than $Q$, and define $P_S = P(S)$ and $Q_S = Q(S)$ to be the aggregated probabilities. On $S$, the optimal discriminator should return $y = \text{model}$. This is an error when $y = \text{reference}$, which occurs with probability $\frac{1}{2}Q_S$. Analogously, on the complement of $S$, the error probability (when $y = \text{model}$) is $\frac{1}{2}(1 - P_S)$. The total contribution to $L^*$ is thus $\frac{1}{2}Q_S + \frac{1}{2}(1 - P_S)$. The rest follows from algebra: since $\mathrm{TV}(P, Q) = P_S - Q_S$,
$$L^* = \frac{1}{2}\bigl(1 - (P_S - Q_S)\bigr) = \frac{1}{2}\bigl(1 - \mathrm{TV}(P, Q)\bigr). \qquad \blacksquare$$
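The relationship between total variation distance and optimal discriminator error can be checked numerically on a pair of toy discrete distributions (the distributions below are arbitrary illustrations, not from the paper):

```python
import numpy as np

# Two arbitrary discrete distributions over four outcomes.
P = np.array([0.5, 0.3, 0.1, 0.1])  # "model" distribution
Q = np.array([0.2, 0.2, 0.3, 0.3])  # "reference" distribution

# Total variation distance: max_S P(S) - Q(S) = (1/2) * sum |P - Q|.
tv = 0.5 * np.abs(P - Q).sum()

# Optimal discriminator with balanced classes: at each x, predict the
# class with higher probability; its error is (1/2) * sum min(P(x), Q(x)).
err = 0.5 * np.minimum(P, Q).sum()

assert np.isclose(tv, 1 - 2 * err)  # TV(P, Q) = 1 - 2 L*
```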
A.2 Approximation error from features
Let $L^*$ and $L^*_\phi$ be the optimal classification error and the optimal error under feature map $\phi$, respectively. Then,
$$L^* \leq L^*_\phi \leq L^* + \tfrac{1}{2} I(\hat{y}; x \mid \phi(x)),$$
where $I(\hat{y}; x \mid \phi(x))$ is the conditional mutual information in bits and $\hat{y}$ is the prediction of the optimal classifier.
The lower bound falls out of the definition of $L^*$, since any classifier over $\phi(x)$ is also a classifier over $x$. To prove the upper bound, a variant of the entropy lower bound of Feder and Merhav (1994) shows that the error rate $\epsilon$ for predicting $\hat{y}$ from $\phi(x)$, via the optimal $f(\phi(x))$, follows
$$\epsilon \leq \tfrac{1}{2} H(\hat{y} \mid \phi(x)).$$
Now expand the mutual information using the chain rule:
$$H(\hat{y} \mid \phi(x)) = I(\hat{y}; x \mid \phi(x)) + H(\hat{y} \mid x, \phi(x)) = I(\hat{y}; x \mid \phi(x)).$$
The last line follows from the fact that $\hat{y}$ is a deterministic function of $x$ (Proposition 3.1), so $H(\hat{y} \mid x, \phi(x)) = 0$. Substituting this into the inequality gives the bound
$$\epsilon \leq \tfrac{1}{2} I(\hat{y}; x \mid \phi(x)).$$
Finally, note that $\hat{y}$ incurs $L^*$ error, and $f(\phi(x))$ disagrees with $\hat{y}$ at most a fraction $\epsilon$ of the time. Assuming that we get every one of these disagreements wrong gives an upper bound of $L^* + \tfrac{1}{2} I(\hat{y}; x \mid \phi(x))$ on $L^*_\phi$. ∎
A straightforward corollary is that whenever $\phi(x)$ is an invertible function of $x$, the conditional mutual information $I(\hat{y}; x \mid \phi(x))$ is zero, and therefore the above inequalities become equalities: whenever $\phi(x)$ is an invertible function of $x$, $L^*_\phi = L^*$.
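The feature-map bound can likewise be checked numerically. The toy joint distribution and feature map below are our own construction for illustration, not from the paper's experiments:

```python
import numpy as np

# Toy check of L* <= L*_phi <= L* + (1/2) I(yhat; x | phi(x)) for a lossy feature map.
p_x = np.full(4, 0.25)                 # uniform p(x) over x in {0, 1, 2, 3}
p_y1 = np.array([0.9, 0.2, 0.6, 0.4])  # p(y = 1 | x)
phi = np.array([0, 0, 1, 1])           # lossy feature map phi(x) = x // 2

# Optimal error from x: predict the more likely label at each x.
L_star = np.sum(p_x * np.minimum(p_y1, 1 - p_y1))

# Optimal error from phi(x): labels get pooled within each feature cell.
L_phi = 0.0
for cell in np.unique(phi):
    mask = phi == cell
    w = p_x[mask].sum()
    q = np.sum(p_x[mask] * p_y1[mask]) / w  # p(y = 1 | phi(x) = cell)
    L_phi += w * min(q, 1 - q)

# yhat is deterministic given x, so I(yhat; x | phi(x)) = H(yhat | phi(x)).
yhat = (p_y1 > 0.5).astype(int)
I_bits = 0.0
for cell in np.unique(phi):
    mask = phi == cell
    w = p_x[mask].sum()
    q = np.sum(p_x[mask] * (yhat[mask] == 1)) / w
    for pr in (q, 1 - q):
        if pr > 0:
            I_bits -= w * pr * np.log2(pr)

assert L_star <= L_phi <= L_star + 0.5 * I_bits
```

Here the lossy feature map merges pairs of inputs, so the error rises from $L^*$ to $L^*_\phi$, but stays within the mutual-information slack.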
A.3 Amazon Mechanical Turk for human judgments
In order to show that HUSE can be reliably estimated even with simple crowdsourcing techniques, we used a single uniform task design in which we asked Amazon Mechanical Turk workers to rate the typicality of a sentence from 0 to 5. We defined 0 as invalid (grammatically or factually incorrect) and 5 as 'very typical'. The human judgment score HJ is defined as the average score that crowdworkers assign to a response given its context. We did not perform substantial filtering or qualification checks beyond HIT acceptance criteria (approval rate above 95 percent, more than 50 approved HITs, and location in the USA). Each HIT consisted of 25 examples and paid one dollar.
We observe that collecting many replicates is sufficient to obtain low-variance estimates of HJ. For tasks where model outputs are straightforward to distinguish from references (such as story generation), five to ten replicates suffice, while harder tasks such as summarization need at least twenty replicates (Section 5.4). Manual inspection suggests that up to 20% of the collected data is low-quality, but this noise is uncorrelated with the sentence being rated and is outweighed by the larger majority of honest and reasonably accurate ratings. Even if data quality is low, HUSE remains a valid upper bound (i.e., models with low HUSE are guaranteed to be distinguishable from humans). Thus the models we identify as low-HUSE are reliably distinguishable regardless of crowdworker quality.
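A minimal simulation illustrates why more replicates yield lower-variance HJ estimates even with a noisy-rater fraction; the score distributions, noise rate, and "true" typicality below are hypothetical, not the collected data:

```python
import numpy as np

rng = np.random.default_rng(0)
true_hj = 3.6  # assumed "true" typicality of one response (hypothetical)

def estimate_hj(k, noise_frac=0.2):
    """Average of k replicate 0-5 ratings; noise_frac of raters answer randomly."""
    honest = np.clip(rng.normal(true_hj, 0.7, size=k), 0, 5)
    spam = rng.uniform(0, 5, size=k)       # low-quality raters: uniform noise
    is_spam = rng.random(k) < noise_frac
    return np.where(is_spam, spam, honest).mean()

# Spread of the HJ estimate across simulated HITs, for few vs. many replicates.
sd_5 = np.std([estimate_hj(5) for _ in range(2000)])
sd_20 = np.std([estimate_hj(20) for _ in range(2000)])
assert sd_20 < sd_5  # more replicates -> lower-variance HJ estimate
```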
A.4 Reddit Dataset
We use a subset of Reddit comments from 2006–2018 scraped from https://pushshift.io/. We construct a vocabulary containing the 10,000 most frequent words and preprocess the dataset by removing deleted posts, comments containing out-of-vocabulary tokens, profanity, comments with fewer than 10 upvotes, and comments with over 400 tokens.
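The filtering rules above can be sketched as a single predicate over pushshift-style comment records; the field names follow the pushshift JSON schema ("body", "score"), but the tokenizer, vocabulary, and profanity list here are hypothetical placeholders:

```python
PROFANITY = {"badword"}  # placeholder; the actual profanity list is not specified here

def keep_comment(comment, vocab):
    """Apply the filters: not deleted, in-vocab, clean, >=10 upvotes, <=400 tokens."""
    body = comment.get("body", "")
    if body in ("[deleted]", "[removed]"):
        return False
    tokens = body.lower().split()  # placeholder whitespace tokenizer
    if len(tokens) > 400 or comment.get("score", 0) < 10:
        return False
    if any(t in PROFANITY for t in tokens):
        return False
    return all(t in vocab for t in tokens)

vocab = {"this", "is", "a", "fine", "comment"}
assert keep_comment({"body": "this is a fine comment", "score": 12}, vocab)
assert not keep_comment({"body": "[deleted]", "score": 50}, vocab)
```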