“Are they ready to go live?” Zuko asked. “Yup, in just a minute” Tomi replied as he leaned back in his reclining chair. This was a moment that the entire galaxy would see. After 10 years…
Greedy: … of study, the scientists had finally come to the conclusion that the universe was not a place for living. The scientists had been studying the universe for over a century. They had been studying the universe for over a century.
Top-$k$: … of studying this phenomenon, the team had finally come up with a plan to study what the aliens were doing. The team would be looking for a way to create a space station. “And what would you call it?” “What would you call the space station?”
Nucleus: … of diligent testing, and some test run after that, the results were well received. “they’re waiting for you, because there’s no way they would.” Stowedly at a local wedding place, the two archaeologists spoke together.
Entmax: … of hibernation, Tomi was back. These last days had been a significant step forward in his mission. This time, Tomi was not alone. All the empires had aligned together and the world’s leadership began to openly support his mission.
The last few years have seen tremendous progress in neural language modeling, with a strong impact on key NLP tasks such as machine translation (Sutskever et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017), speech recognition (Chorowski et al., 2015), summarization (Rush et al., 2015), and open-ended text generation (Radford et al., 2019). Benefiting from large amounts of data, models such as Transformer-XL (Dai et al., 2019) and GPT-2 (Radford et al., 2019) have achieved impressive perplexity scores on language modeling. However, the generated text is still often repetitive and incoherent (Table 1).
A conundrum with current approaches is the mismatch between training and testing conditions: models are usually trained to maximize the likelihood of observed text, but, when generating, state-of-the-art models sample from a truncated and renormalized softmax distribution (Fan et al., 2018; Holtzman et al., 2019). They do so as a compromise to avoid two extremes: a deterministic search for the most probable sentence (via greedy decoding or beam search) usually results in dull and repetitive “degenerate text” (Li et al., 2016a, 2017; Holtzman et al., 2019), while stochastically sampling from the full softmax distribution often generates many implausible words from the tail of the distribution (Fan et al., 2018). The recently proposed nucleus sampling approach (Holtzman et al., 2019) sets the truncation point based on the cumulative distribution function, i.e., it considers the smallest set of top words whose cumulative probability exceeds a threshold $p$. This permits better control of the number of words considered than previous work, such as top-$k$ sampling (Fan et al., 2018), which always samples from the $k$ most probable words. However, the “sparsity” introduced by both methods is artificially imposed at run time, not learned during training.
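To make the two truncation strategies concrete, here is a minimal numpy sketch (the function names are ours, not taken from any of the cited implementations):

```python
import numpy as np

def top_k_filter(probs, k):
    """Top-k sampling distribution: keep the k most probable words,
    zero out the rest, and renormalize."""
    order = np.argsort(probs)[::-1][:k]
    out = np.zeros_like(probs)
    out[order] = probs[order]
    return out / out.sum()

def nucleus_filter(probs, p):
    """Nucleus (top-p) sampling distribution: keep the smallest set of
    most probable words whose cumulative probability reaches p."""
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    keep = order[:np.searchsorted(cumulative, p) + 1]
    out = np.zeros_like(probs)
    out[keep] = probs[keep]
    return out / out.sum()
```

For example, with `probs = [0.5, 0.3, 0.15, 0.05]`, `top_k_filter(probs, 2)` always keeps exactly two words, while `nucleus_filter(probs, 0.9)` keeps three, because 0.5 + 0.3 is still below the threshold; the size of the nucleus adapts to how peaked the distribution is.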
A second problem is that it is hard to compare different truncation strategies—for example, we cannot easily evaluate how the resulting truncated distributions behave as language models, because the most widely used metric for language modeling—perplexity—does not handle sparse distributions. For this reason, previous works (Welleck et al., 2019; Li et al., 2019) generate from a truncated softmax, but report the perplexity of the full softmax.
In this paper, we propose a new approach—entmax sampling (§3)—that eliminates the mismatch between training and test conditions, sparsity-wise. Key to our approach is the recently proposed entmax transformation (Peters et al., 2019). Entmax transforms a vector of scores into a sparse probability distribution, preventing implausible words from receiving any probability mass. Moreover, it does so natively: it comes with a well-defined loss function that allows it to learn its sparsity automatically from the data, during training. This results in a new stochastic text generator where the number of possible word types varies with the context (like nucleus sampling), but that generates by sampling directly from its output distribution (like softmax sampling), and where the sparsity of this distribution is present during training (unlike any existing method).
As a second contribution, we propose three new metrics to support the evaluation of sparse language models (§4): $\epsilon$-perplexity, sparsemax score, and Jensen-Shannon divergence. We show that these metrics are well supported theoretically and can be used to compare our method with various truncation and temperature techniques.
Experiments in language modeling, story completion, and dialogue generation (§5) show that entmax sampling generates more diverse text and fewer repetitions than nucleus and top-$k$ sampling.
1.1 Related work
While greedy decoding and beam search are popular strategies for sequence-to-sequence tasks, such as machine translation, Knowles et al. (2016) and Stahlberg and Byrne (2019) showed that searching for the most probable sentence in a model trained with likelihood maximization has a bias for short sentences. In open-ended generation, Fan et al. (2018) and Holtzman et al. (2018, 2019) have shown that these methods lead to repetitions and dull text. To overcome this, several authors proposed ways of altering the beam search method in order to promote word diversity (Li et al., 2016b; Vijayakumar et al., 2018; Kulikov et al., 2018).
An alternative to deterministic text generation is to sample directly from the softmax distribution. However, softmax is a strictly positive (dense) transformation. Since probability mass tends to accumulate in a long tail, this procedure generates very unlikely words too often, leading to degenerate text (Fan et al., 2018; Holtzman et al., 2019). This can be mitigated by lowering the softmax temperature (Ficler and Goldberg, 2017), by sampling from the top-$k$ most probable words only (Fan et al., 2018; Radford et al., 2019), and through nucleus sampling (Holtzman et al., 2019).
In addition to new decoding methods, models that aim to increase word diversity and diminish repetition have also been introduced. Xu et al. (2018) proposed a diversity-promoting generative adversarial network, which rewards novel and fluent text. Holtzman et al. (2018) proposed augmenting the language model with several discriminators. More recently, Welleck et al. (2019) proposed augmenting log-likelihood loss with an unlikelihood loss that penalizes the generation of tokens that are present in the context. These alternatives can be applied jointly with entmax sampling.
Sparse transformations and losses.
At the core of our work are sparse alternatives to the softmax transformation. Martins and Astudillo (2016) proposed sparsemax and applied it to multi-label classification. This was generalized by Peters et al. (2019) via their $\alpha$-entmax transformation, which was applied to sequence-to-sequence models for morphological inflection and machine translation. In contrast to our work, they performed deterministic decoding with beam search.
The most common metrics to evaluate text generation models are perplexity (Jelinek et al., 1977) and BLEU (Papineni et al., 2002). For open-ended generation, Zhu et al. (2018) observed that “no single metric is comprehensive enough.” Other evaluations include corpus $n$-gram overlap (Yu et al., 2017; Press et al., 2017), the Fréchet distance (Cífka et al., 2018), and sweeping the softmax temperatures to assess model robustness (Caccia et al., 2018). These approaches are aimed at the (harder) problem of evaluating the quality of generated text. By contrast, our paper proposes new metrics for evaluating language models on the task of predicting the next word conditioned on the ground truth context (as perplexity does), but supporting sparse probability distributions (which perplexity does not).
2 Language Modeling
Language models assign probability to word sequences $x = \langle x_1, \dots, x_T \rangle$, where each $x_t$ is in a vocabulary $\mathcal{V}$, and $x_T = \mathrm{eos}$ marks the end of the sequence. This probability can be written as $p_\theta(x) = \prod_{t=1}^{T} p_\theta(x_t \mid x_{<t})$. We would like the model to assign high probability to real sentences, i.e., each distribution $p_\theta(\cdot \mid x_{<t})$ should assign a large probability value to the ground truth $x_t$.
Given a set of training sentences, the usual strategy for learning the language model parameters $\theta$ is to minimize the negative log-likelihood:

$$\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta(x_t \mid x_{<t}). \quad (1)$$

The standard choice to model $p_\theta(x_t \mid x_{<t})$ in Eq. 1 is to compute a score vector $z_t := f(x_{<t}; \theta)$ by conditioning on the context $x_{<t}$, and then applying a softmax transformation, $p_\theta(x_t \mid x_{<t}) = [\operatorname{softmax}(z_t)]_{x_t}$, where

$$[\operatorname{softmax}(z)]_v = \frac{\exp(z_v)}{\sum_{v'} \exp(z_{v'})}. \quad (2)$$
At decoding time, the language model generates sentences one word at a time, by sampling from the learned probability distribution. However, softmax yields a dense distribution, i.e., some probability mass (even if small) is assigned to every word in the vocabulary. Holtzman et al. (2019, §4) have shown that, if we sample from this distribution directly, the resulting text becomes degenerate, with common incoherences arising due to the unreliability of the tail of the distribution. This motivated a line of work proposing “ad-hoc” modifications to the softmax distribution, to reduce the effect of the tail. Two of the most successful techniques, top-$k$ and nucleus sampling (Fan et al., 2018; Holtzman et al., 2019), do so by truncating and renormalizing the distribution $p_\theta(\cdot \mid x_{<t})$. Note that these techniques are applied only at decoding time—during training the original softmax distribution is left untouched, being used as part of the optimization of the cross-entropy loss.
Our alternative to these ad-hoc modifications builds on sparse transformations, such as sparsemax (Martins and Astudillo, 2016) and, more generally, $\alpha$-entmax (Peters et al., 2019). These transformations inherently produce sparse probability distributions (i.e., their tails are short). Therefore, sampling from these distributions directly is a natural way to prevent degenerate text.
3 Entmax Sampling
3.1 Sparse Transformations
Our approach replaces softmax by the $\alpha$-entmax transformation (Peters et al., 2019):

$$\alpha\text{-entmax}(z) := \arg\max_{p \in \triangle^{|\mathcal{V}|}} \; p^\top z + \mathsf{H}_\alpha(p), \quad (3)$$

where $\triangle^{|\mathcal{V}|} := \{p \in \mathbb{R}^{|\mathcal{V}|} : p \geq 0, \; \sum_v p_v = 1\}$ is the probability simplex, and $\mathsf{H}_\alpha$ is the Tsallis $\alpha$-entropy (Tsallis, 1988):

$$\mathsf{H}_\alpha(p) := \begin{cases} \frac{1}{\alpha(\alpha-1)} \sum_v \left(p_v - p_v^\alpha\right), & \alpha \neq 1, \\ -\sum_v p_v \log p_v, & \alpha = 1. \end{cases} \quad (4)$$

When $\alpha \to 1$, this becomes the commonly used Shannon entropy, and when $\alpha = 2$, we recover the Gini entropy, $\mathsf{H}_2(p) = \frac{1}{2}\sum_v p_v (1 - p_v)$. Thus, $\alpha = 1$, $\alpha = 2$, and $\alpha \to \infty$ recover softmax, sparsemax, and argmax, respectively. Blondel et al. (2019) have shown that, for $\alpha > 1$, entmax is able to output sparse probability distributions, where some words get exactly zero probability, whereas softmax ($\alpha = 1$) does not have this capability.
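The Tsallis $\alpha$-entropy is simple to compute; the sketch below (our own helper, not part of any released entmax code) shows how $\alpha = 1$ and $\alpha = 2$ specialize to the Shannon and Gini entropies:

```python
import math

def tsallis_entropy(p, alpha):
    """Tsallis alpha-entropy: 1/(alpha*(alpha-1)) * sum_v (p_v - p_v**alpha).
    alpha -> 1 gives the Shannon entropy; alpha = 2 gives the Gini entropy."""
    if alpha == 1.0:
        return -sum(x * math.log(x) for x in p if x > 0.0)
    return sum(x - x ** alpha for x in p) / (alpha * (alpha - 1.0))
```

For the uniform distribution over two words, `tsallis_entropy([0.5, 0.5], 1.0)` is log 2 (Shannon), while `tsallis_entropy([0.5, 0.5], 2.0)` is 0.25, matching the Gini entropy $\frac{1}{2}\sum_v p_v(1-p_v)$; a one-hot distribution has zero entropy for every $\alpha$.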
How can we learn this output sparsity during training?
To do so, we replace the negative log-likelihood objective in Eq. 1 by:

$$\mathcal{L}_\alpha(\theta) = \sum_{t=1}^{T} \ell_\alpha(z_t, x_t), \quad (5)$$

where $\ell_\alpha$ is the $\alpha$-entmax loss:

$$\ell_\alpha(z_t, x_t) := (p^\star - e_{x_t})^\top z_t + \mathsf{H}_\alpha(p^\star), \quad (6)$$

where $p^\star = \alpha\text{-entmax}(z_t)$, and $e_{x_t}$ is the indicator one-hot vector that corresponds to the ground truth word $x_t$. When $\alpha = 1$, we still recover the negative log-likelihood, $\ell_1(z_t, x_t) = -\log p_\theta(x_t \mid x_{<t})$, and, when $\alpha = 2$, this corresponds to the sparsemax loss (Martins and Astudillo, 2016), which we will revisit in §4. When using the $\alpha$-entmax loss with $\alpha > 1$, we can eliminate the mismatch between training and run time conditions, since the probability distributions evaluated by the loss are also sparse.
The entmax loss belongs to the wider class of Fenchel-Young losses (Blondel et al., 2019) and, consequently, is convex on $z_t$, differentiable (with gradient $\nabla_{z_t}\ell_\alpha(z_t, x_t) = p^\star - e_{x_t}$), and, for $\alpha > 1$, has a separation margin: the loss becomes zero when the score of the correct word exceeds the scores of all other words by a margin of $(\alpha - 1)^{-1}$. Separation is achieved if and only if $p^\star = e_{x_t}$, i.e., when the model puts all its probability mass on the correct word. This allows the model to be adaptive to the degree of uncertainty present: in some cases there are few plausible words, so most words should have probability zero, while in other cases a higher number of words are plausible and should be given probability mass.
3.2 Entmax Sampling
We propose entmax sampling as an alternative to top-$k$ sampling and nucleus sampling (Holtzman et al., 2019). We sample from the categorical distribution obtained by applying the entmax transformation to the scores given by the model:

$$x_t \sim p_\theta(\cdot \mid x_{<t}) = \alpha\text{-entmax}(z_t). \quad (7)$$
We sample directly from the learned sparse probability distribution over the words, without any ad-hoc modification. Therefore, this sparsity is not artificially imposed at run time; rather, it is native to the entmax transformation and learned during training. As in nucleus sampling and in opposition to top-$k$ sampling, entmax sampling considers a varying number of tokens depending on the context. Moreover, as we show in Table 3, with entmax sampling this variability is higher.
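A minimal sketch of the whole decoding loop, shown for the $\alpha = 2$ case (sparsemax), whose transformation has a simple sort-and-threshold closed form (Martins and Astudillo, 2016); `score_fn` is a hypothetical stand-in for the trained model, and general $\alpha$ (e.g., 1.5) requires the exact algorithms of Peters et al. (2019):

```python
import numpy as np

def sparsemax(z):
    """2-entmax: Euclidean projection of the scores onto the probability
    simplex; words scoring below a data-dependent threshold tau get
    exactly zero probability."""
    z_sorted = np.sort(z)[::-1]
    cumulative = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    k_star = k[1 + k * z_sorted > cumulative][-1]   # support size
    tau = (cumulative[k_star - 1] - 1.0) / k_star
    return np.maximum(z - tau, 0.0)

def entmax_sample(score_fn, bos, eos, max_len, rng):
    """Generate a sequence by sampling each word directly from the sparse
    distribution; no truncation or renormalization is applied."""
    context = [bos]
    while len(context) < max_len and context[-1] != eos:
        p = sparsemax(score_fn(context))
        context.append(int(rng.choice(len(p), p=p)))
    return context
```

Note that zero-probability words can never be drawn, so the tail of the distribution is excluded by the transformation itself rather than by a decoding-time heuristic.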
4 Evaluation Metrics
Language models are commonly evaluated by computing their perplexity ($\mathrm{ppl}$) on held-out data. Perplexity assesses the ability of a language model to predict the next word given the context:

$$\mathrm{ppl} = \exp\!\left(-\frac{1}{T}\sum_{t=1}^{T} \log p_\theta(x_t \mid x_{<t})\right).$$

However, its computation involves the logarithm of the predicted probability distribution over the words. This poses a problem when we are using sparse or truncated probability distributions, since we may have $p_\theta(x_t \mid x_{<t}) = 0$, in which case the perplexity is infinite. Usually, authors report values for perplexity computed on the original probability distribution, before truncation. However, this metric does not allow different sparse decoding strategies to be compared. (This is important not only when we have sparse or truncated probability distributions, but also to compare language models using different vocabularies: with perplexity, if the ground truth word is not contained in the vocabulary, one usually considers the probability attributed to an unk token instead of a zero probability, which leads to an unfair comparison between models with different vocabularies.) As an alternative, we propose three different metrics (to better understand these metrics, comparative plots are shown in Fig. 2, App. B).
To be able to compute the perplexity for sparse distributions, the simplest approach is to smooth them by adding a small value $\epsilon$ to all terms, followed by renormalization, as in additive (Laplace) smoothing (Chen and Goodman, 1999):

$$\epsilon\text{-}\mathrm{ppl} = \exp\!\left(-\frac{1}{T}\sum_{t=1}^{T} \log \frac{p_\theta(x_t \mid x_{<t}) + \epsilon}{1 + \epsilon|\mathcal{V}|}\right).$$

The value of $\epsilon$ can be tuned for each method. A disadvantage of $\epsilon$-perplexity is that it still does not evaluate the original sparse distribution, but rather a modified version of it. However, when applied to variants of truncated softmax, by collapsing all the truncated probabilities to the same value, it is useful to measure how much truncation deteriorates the model's ability to rank words, compared to softmax.
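Under this additive-smoothing reading of the metric (the $1 + \epsilon|\mathcal{V}|$ normalizer is our assumption about the exact form), a sketch is:

```python
import math

def eps_perplexity(gold_probs, eps, vocab_size):
    """epsilon-perplexity: add eps to the probability the model assigns
    to each ground-truth word, renormalize by 1 + eps * |V|, and
    exponentiate the average negative log. Finite even when some of the
    gold_probs are exactly zero."""
    avg_nll = -sum(math.log((p + eps) / (1.0 + eps * vocab_size))
                   for p in gold_probs) / len(gold_probs)
    return math.exp(avg_nll)
```

With `eps = 0` and strictly positive probabilities this reduces to ordinary perplexity; a truncated distribution that zeroes out the gold word receives a large but finite penalty instead of an infinite one.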
We can derive a more interesting metric that handles sparse distributions directly. By setting $\alpha = 2$ in Eq. 6 (if we set $\alpha = 1$ instead, we revert to perplexity), we obtain the sparsemax loss proposed by Martins and Astudillo (2016), $\ell_2(z_t, x_t)$. We define the sparsemax score ($sp$) as:

$$sp := 1 - \ell_2(z_t, x_t) = p_\theta(x_t \mid x_{<t}) + \mathsf{H}_2(p_\theta(\cdot \mid x_{<t})),$$

averaged over the words in the held-out data, where $\mathsf{H}_2$ is the Gini entropy (see §3.1). Unlike perplexity, this score is bounded. In fact, it is always between 0 (when $p_\theta(\cdot \mid x_{<t}) = e_{x'}$ with $x' \neq x_t$) and 1 (when $p_\theta(\cdot \mid x_{<t}) = e_{x_t}$). We prove this fact in App. A. Interestingly, when the model is deterministic (e.g., when it comes from greedy search), we have $\mathsf{H}_2(p_\theta(\cdot \mid x_{<t})) = 0$, and the sparsemax score simply becomes the word accuracy. In the opposite case, when $p_\theta(\cdot \mid x_{<t})$ is uniform, we obtain $sp = \frac{1}{2} + \frac{1}{2|\mathcal{V}|} \approx \frac{1}{2}$ for large $|\mathcal{V}|$.
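The per-word sparsemax score (probability of the gold word plus the Gini entropy of the prediction) can be sketched as follows; in practice it would be averaged over the corpus:

```python
def sparsemax_score(p, gold):
    """Per-word sparsemax score: probability assigned to the gold word
    plus the Gini entropy of the predicted distribution. Bounded in [0, 1]."""
    gini = 0.5 * (1.0 - sum(q * q for q in p))
    return p[gold] + gini
```

A one-hot prediction on the gold word scores 1, a one-hot on any other word scores 0, and the uniform distribution over $|\mathcal{V}|$ words scores $\frac{1}{2} + \frac{1}{2|\mathcal{V}|}$, matching the boundary cases above.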
Given two discrete probability distributions $p$ and $q$, denoting their mixture (arithmetic mean) as $m = \frac{1}{2}(p + q)$, and the Kullback-Leibler divergence as $\mathrm{KL}(p \,\|\, q) = \sum_v p_v \log\frac{p_v}{q_v}$, the Jensen-Shannon divergence is defined as:

$$\mathrm{JS}(p, q) := \tfrac{1}{2}\,\mathrm{KL}(p \,\|\, m) + \tfrac{1}{2}\,\mathrm{KL}(q \,\|\, m).$$

The Jensen-Shannon divergence can be interpreted as a mutual information as follows (Grosse et al., 2002; Banerjee et al., 2005): consider a two-step process where we first toss a fair coin $Y$. If the outcome is heads, we sample the next word $X$ according to the model $p$; if it is tails, we sample $X \sim q$. A word generated according to this process is governed by the mixture, $X \sim m = \frac{1}{2}(p + q)$. The Jensen-Shannon divergence between $p$ and $q$ is the mutual information between the random variables $X$ and $Y$, which equals $\mathsf{H}(Y) - \mathsf{H}(Y \mid X)$, where $\mathsf{H}(Y)$ is the Shannon entropy and $\mathsf{H}(Y \mid X)$ is the conditional entropy. Hence, the Jensen-Shannon divergence can be seen as the reduction of uncertainty about the source $Y$ when we observe a sample $X$ from the mixture $m$. The more similar the two distributions $p$ and $q$ are, the smaller this reduction is.

In our experiments, we report the $\mathrm{JS}$ as an evaluation metric for language models, setting $p = p_\theta(\cdot \mid x_{<t})$ and $q = e_{x_t}$ (i.e., a one-hot distribution placed on the ground truth word $x_t$) and averaging the $\mathrm{JS}$ over the words. Like the sparsemax score described above, the $\mathrm{JS}$ is bounded: it is zero if $p_\theta(\cdot \mid x_{<t}) = e_{x_t}$, and maximal ($\log 2$) when $p_\theta(\cdot \mid x_{<t})$ is a one-hot distribution placed on a different word.
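A direct sketch of the metric as used here, with the second argument a one-hot distribution on the gold word (natural-log base, so the maximum value is log 2):

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence between two distributions, with the
    convention 0 * log 0 = 0 so that sparse inputs are handled."""
    m = [(a + b) / 2.0 for a, b in zip(p, q)]
    def kl(a, b):
        return sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0.0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def js_to_gold(p, gold):
    """JS between the model distribution and a one-hot on the gold word."""
    one_hot = [1.0 if i == gold else 0.0 for i in range(len(p))]
    return js_divergence(p, one_hot)
```

Because every word in the mixture with nonzero mass has nonzero probability under $m$, the divergence stays finite even when $p$ is sparse, which is exactly why it is usable where plain perplexity is not.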
Comparing multiple models.
The generalized $\mathrm{JS}$ allows us to compare two or more trained models:

$$\mathrm{JS}(p_1, \dots, p_K) := \mathsf{H}\!\left(\frac{1}{K}\sum_{k=1}^{K} p_k\right) - \frac{1}{K}\sum_{k=1}^{K} \mathsf{H}(p_k),$$

where $p_1, \dots, p_K$ are the probability distributions of the $K$ different models and $\frac{1}{K}\sum_k p_k$ is their mixture. This property can be useful for measuring the diversity between multiple models (e.g., when used in an ensemble system). We use this metric in App. D to rank the sentences on which the different models we compare disagree the most.
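The multi-distribution version can be sketched as the Shannon entropy of the mixture minus the mean entropy of the components (uniform mixture weights assumed):

```python
import math

def shannon(p):
    """Shannon entropy with the 0 * log 0 = 0 convention."""
    return -sum(x * math.log(x) for x in p if x > 0.0)

def generalized_js(dists):
    """Generalized Jensen-Shannon divergence of K distributions:
    H(mean of the distributions) minus the mean of their entropies."""
    k = len(dists)
    mixture = [sum(d[i] for d in dists) / k for i in range(len(dists[0]))]
    return shannon(mixture) - sum(shannon(d) for d in dists) / k
```

For $K$ models that fully disagree (one-hot distributions on $K$ different words) the value is $\log K$; for identical models it is 0, so higher values indicate more diverse model predictions.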
5 Experiments

We compare the different decoding methods in three NLP tasks: language modeling (§5.1), story completion (§5.2), and dialogue generation (§5.3). In language modeling, we evaluate the model’s fluency, while in story completion we also evaluate whether the methods generate coherent and “interesting” text. In dialogue generation, we evaluate the methods’ performance in an interactive task.
5.1 Language Modeling
Datasets and metrics.
We performed experiments on three widely used language modeling datasets: WikiText-2 and WikiText-103 (Merity et al., 2016), and BookCorpus (Zhu et al., 2015). WikiText-2 and WikiText-103 are composed of Wikipedia articles, comprising around 2 and 100 million tokens for training, respectively. Their validation and test sets have 217,000 and 245,000 tokens, respectively. BookCorpus is composed of 11,038 freely available books. We used the standard split: 800 million tokens for training, 260,000 for validation, and 280,000 for testing.
We report the sparsemax score, Jensen-Shannon divergence, and $\epsilon$-perplexity (§4) to evaluate the methods’ fluency, and the rep and wrep metrics (Welleck et al., 2019) to evaluate the methods’ tendency to generate repetitions (rep measures the number of times that a word from the previous $\ell$ words is repeated when generating the following word; wrep does the same, discarding words that are also repeated in the ground truth; we report the average of rep and wrep over several context sizes $\ell$). All metrics are computed at the BPE level (Sennrich et al., 2016). We treat $\epsilon$ as a hyperparameter for each model, tuned on the validation sets (we report the $\epsilon$ for each model in Table 8, App. C). For softmax, with and without decreased temperature, we set $\epsilon = 0$.
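The repetition metric can be sketched as follows (rep only; wrep additionally discards positions where the ground truth also repeats, and the exact normalization is our assumption rather than a claim about the reference implementation of Welleck et al. (2019)):

```python
def rep_rate(tokens, window):
    """Fraction of positions whose token already appears among the
    previous `window` tokens."""
    hits = sum(1 for t in range(1, len(tokens))
               if tokens[t] in tokens[max(0, t - window):t])
    return hits / (len(tokens) - 1)
```

For example, in the sequence a b a c with a window of 2, only the second "a" counts as a repetition, giving a rate of 1/3.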
Fine-tuning OpenAI GPT-2.
We fine-tuned GPT-2 (Radford et al., 2019) in two training settings: with the negative log-likelihood loss and with the entmax loss. (We use the PyTorch re-implementation at https://github.com/huggingface/transformers.) We used the GPT-2 medium model, which consists of a 24-layer transformer with 345 million parameters. The models were fine-tuned for up to 5 epochs on WikiText-2 and up to 1 epoch on WikiText-103 and BookCorpus, using the Adam optimizer (Kingma and Ba, 2015) with a learning rate that was linearly decayed to zero over the course of training. We report results of the models that have the smallest JS divergence on the validation set. For entmax, $\alpha$ was selected by grid search on the validation set; the same procedure was used to choose $k$ for top-$k$ sampling, the cumulative probability $p$ for nucleus sampling, and the temperature $\tau$ for softmax with decreased temperature, with the selected $k$ and $p$ matching the values used by Fan et al. (2018) and Holtzman et al. (2019), respectively.
Automatic Metrics Results.
As can be seen in Table 2, entmax sampling consistently outperforms every other decoding method in sparsemax score and number of repetitions. Additionally, entmax sampling leads to better $\epsilon$-perplexity scores than greedy decoding, top-$k$ sampling, nucleus sampling, and softmax with decreased temperature, while having scores similar to softmax sampling. Concerning the Jensen-Shannon divergence, top-$k$ sampling has the best scores. All this is achieved with a model that copes with sparsity inherently, instead of relying on truncation at run time.
In order to understand why entmax leads to better sparsemax scores and fewer repetitions, we show in Table 3 the mean, median, standard deviation, minimum, and maximum number of tokens each decoding strategy considers when predicting the next word given a context on the WikiText-103 test set. We can see that entmax sampling and nucleus sampling consider many more tokens than greedy decoding and top-$k$ sampling, which may explain the smaller number of repetitions. This is possible because the number of considered tokens is variable, going from 1 to 12,305 for nucleus sampling and from 1 to 18,399 for entmax sampling. Moreover, a possible explanation for entmax sampling outperforming nucleus sampling is that entmax’s support size has higher variance: it can consider a higher number of tokens while still being able to consider only one word, making it more adaptive to the context. (By using a larger cumulative probability $p$, nucleus sampling could also consider more tokens; however, a larger $p$ led to lower scores in the automatic metrics.)
5.2 Story completion
Next, we analyze the model’s ability to generate long sequences of text using different sampling methods (softmax sampling is not considered, since it has been shown to generate degenerate text; Holtzman et al., 2019). We performed completion of stories from the WritingPrompts dataset (Fan et al., 2018), using the models fine-tuned on the BookCorpus dataset. WritingPrompts consists of a collection of human-written stories paired with writing prompts. We randomly selected 1,000 stories which were at least 200 words long and used the first 50 words as context for the models. Examples of stories generated with each method (Table 1 and Table 11 of App. E) show that entmax sampling leads to more interesting stories while preventing degenerate text.
To measure the stories’ word diversity, we show in Figure 1 the distinct-$n$ metric (Li et al., 2016a), i.e., the number of distinct $n$-grams divided by the total number of generated words, for the stories generated by each model. It can be seen that entmax sampling leads to more diverse unique $n$-grams, closer to human-generated text. We also measured the number of unique words in the generated stories: entmax sampling generated 12,767 different words, while softmax with decreased temperature, greedy decoding, top-$k$, and nucleus sampling generated 9,973, 3,464, 7,852, and 11,929 words, respectively. As expected, entmax leads to higher word diversity, on par with human stories, which contain 15,166 different words.
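The distinct-$n$ metric can be sketched as:

```python
def distinct_n(tokens, n):
    """distinct-n: number of distinct n-grams divided by the total
    number of generated words (Li et al., 2016a)."""
    ngrams = {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
    return len(ngrams) / len(tokens)
```

A fully repetitive sequence such as a b a b scores 0.5 for distinct-2 (two distinct bigrams over four words), while a sequence with no repeated words scores 1.0 for distinct-1; higher values indicate more diverse text.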
We performed human evaluation of greedy decoding, top-$k$, nucleus, and entmax sampling on completion of stories from the WritingPrompts dataset. To perform the human evaluation, we divided 100 stories into 2 sets of 50 stories. For each set of stories, 2 judges rated each story on 3 metrics: fluency (whether the text is syntactically and semantically correct), coherence (whether the story continuation is related to the context and is consistent), and engagement (whether the annotator felt interested in the story). Ratings were given on a 5-point scale, and the mean for each metric is reported in Table 4.
Entmax sampling outperforms greedy decoding, top-$k$, and nucleus sampling on all three metrics. This confirms the better generation quality of entmax sampling.
5.3 Dialogue Generation
To evaluate the sampling methods in an interactive setting, we experiment with dialogue generation. The goal is to generate an utterance given a context consisting of the previous utterances in the dialogue and, in some cases, initial context sentences with related information describing personas, knowledge, or scenarios.
Datasets and metrics.
We performed experiments with the PersonaChat dataset (Zhang et al., 2018). It is a crowd-sourced dialogue dataset in which speakers were asked to condition their utterances on predefined personas. It contains 164,356 utterances over 10,981 dialogues. As there is no public test set, we report results on the validation set. We evaluate the word $F_1$ score, $\epsilon$-perplexity, sparsemax score, and Jensen-Shannon divergence. As in the language modeling experiments, $\epsilon$-perplexity, sparsemax score, and Jensen-Shannon are computed at the BPE level. We also report the distinct-$n$ metrics (Li et al., 2016a) and analyze how the models behave in dialogue simulations between two agents (Li et al., 2016c).
Fine-tuning OpenAI GPT-2.
In order to apply entmax decoding to the dialogue generation task, we follow Wolf et al. (2019) and Budzianowski and Vulić (2019): the input given to the language model consists of the sentences describing the persona the model should impersonate, and the history utterances. In order for the model to adapt to dialogue, the word and position embeddings are augmented with dialogue-state embeddings that indicate whether tokens are from a persona sentence, speaker 1, or speaker 2. These embeddings are learned during fine-tuning.
We fine-tune the GPT-2 medium model (Radford et al., 2019) for a maximum of 3 epochs with a learning rate that linearly decays to zero. The hyperparameters $\alpha$, $k$, $p$, and $\tau$ are chosen with a grid search on the validation set.
Automatic Metrics Results.
We report the results in Table 5. Entmax again outperforms all the other sampling methods in sparsemax score and $\epsilon$-perplexity, and has the lowest JS (tied with top-$k$). Entmax also leads to fewer repetitions, having higher distinct-1 and distinct-2 scores. However, its $F_1$ score is lower (similar findings have been reported by Li et al. (2019)). This may be because dialogue generation is an open-ended generation task that can have multiple correct answers.
Additionally, we simulated a conversation between two agents of the same model (Li et al., 2016c). We chose different personas randomly for the two agents. Then a first utterance from the PersonaChat dataset was given as context. We measured the average length of conversations, considering a conversation finished when consecutive utterances have an overlap of 80% or higher, when an agent gives no response, or when it reaches 20 utterances (a procedure similar to the one proposed by Li et al. (2016c)). We also measured the number of unique words used and the distinct-$n$ metrics.
As can be seen in Table 6, entmax sampling leads to longer conversations with higher word diversity and higher number of distinct 1-grams and 2-grams.
Finally, we performed human evaluation following the ConvAI2 challenge: 6 volunteers had 30 conversations each with models using the different sampling methods. The volunteers scored the conversations from 1 to 5 in terms of fluency, consistency (whether the model’s utterances are coherent with their persona and the model does not contradict itself), and engagement. The model’s personas were randomly selected from the PersonaChat validation set. Results are reported in Table 7.
Entmax sampling outperforms the other methods in consistency and engagement, with similar scores in fluency. This means entmax sampling not only generates the most interesting conversation utterances, but also improves conversation consistency.
6 Conclusions

We proposed entmax sampling as a new strategy for generating text from a sparse probability distribution. It provides three main advantages: (i) it offers a natural way of sampling directly from the output probability distribution; (ii) the distribution’s sparsity is present during training, and, consequently, there is no sparsity mismatch between training and run time; (iii) when sampling with entmax, the number of words to be considered varies with the context, as in nucleus sampling and in contrast to top-$k$ sampling. Additionally, we proposed new metrics for evaluating language models that produce sparse and truncated probability distributions: $\epsilon$-perplexity, sparsemax score, and Jensen-Shannon divergence.
Experiments show that entmax sampling leads to higher word diversity, fewer repetitions, and similar or improved results in automated metrics. Human evaluation confirms that entmax outperforms greedy decoding, top-$k$, and nucleus sampling in coherence/consistency and engagement, and is similar or better in terms of fluency.
- Bahdanau et al. (2015) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proc. ICLR.
- Banerjee et al. (2005) Arindam Banerjee, Srujana Merugu, Inderjit S Dhillon, and Joydeep Ghosh. 2005. Clustering with Bregman divergences. Journal of Machine Learning Research.
- Blondel et al. (2019) Mathieu Blondel, Andre Martins, and Vlad Niculae. 2019. Learning Classifiers with Fenchel-Young Losses: Generalized Entropies, Margins, and Algorithms. In Proc. AISTATS.
- Budzianowski and Vulić (2019) Paweł Budzianowski and Ivan Vulić. 2019. Hello, It’s GPT-2-How Can I Help You? Towards the Use of Pretrained Language Models for Task-Oriented Dialogue Systems. In Proc. Workshop on Neural Generation and Translation.
- Caccia et al. (2018) Massimo Caccia, Lucas Caccia, William Fedus, Hugo Larochelle, Joelle Pineau, and Laurent Charlin. 2018. Language GANs Falling Short. arXiv preprint arXiv:1811.02549.
- Chen and Goodman (1999) Stanley F Chen and Joshua Goodman. 1999. An empirical study of smoothing techniques for language modeling. Computer Speech & Language.
- Chorowski et al. (2015) Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. 2015. Attention-based models for speech recognition. In Proc. NIPS.
- Cífka et al. (2018) Ondřej Cífka, Aliaksei Severyn, Enrique Alfonseca, and Katja Filippova. 2018. Eval all, trust a few, do wrong to none: Comparing sentence generation models. arXiv preprint arXiv:1804.07972.
- Dai et al. (2019) Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proc. ACL.
- Fan et al. (2018) Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical Neural Story Generation. In Proc. ACL.
- Ficler and Goldberg (2017) Jessica Ficler and Yoav Goldberg. 2017. Controlling Linguistic Style Aspects in Neural Language Generation. In Proc. of the Workshop on Stylistic Variation.
- Grosse et al. (2002) Ivo Grosse, Pedro Bernaola-Galván, Pedro Carpena, Ramón Román-Roldán, Jose Oliver, and H Eugene Stanley. 2002. Analysis of symbolic sequences using the Jensen-Shannon divergence. Physical Review E.
- Holtzman et al. (2018) Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, and Yejin Choi. 2018. Learning to Write with Cooperative Discriminators. In Proc. ACL.
- Holtzman et al. (2019) Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751.
- Jelinek et al. (1977) Fred Jelinek, Robert L Mercer, Lalit R Bahl, and James K Baker. 1977. Perplexity-a measure of the difficulty of speech recognition tasks. The Journal of the Acoustical Society of America.
- Kingma and Ba (2015) Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proc. ICLR.
- Knowles et al. (2016) Rebecca Knowles, Adithya Renduchintala, Philipp Koehn, and Jason Eisner. 2016. Analyzing Learner Understanding of Novel L2 Vocabulary. In Proc. SIGNLL.
- Kulikov et al. (2018) Ilya Kulikov, Alexander H Miller, Kyunghyun Cho, and Jason Weston. 2018. Importance of a search strategy in neural dialogue modelling. arXiv preprint arXiv:1811.00907.
- Li et al. (2016a) Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A Diversity-Promoting Objective Function for Neural Conversation Models. In Proc. NAACL.
- Li et al. (2016b) Jiwei Li, Will Monroe, and Dan Jurafsky. 2016b. A simple, fast diverse decoding algorithm for neural generation. arXiv preprint arXiv:1611.08562.
- Li et al. (2016c) Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016c. Deep Reinforcement Learning for Dialogue Generation. In Proc. EMNLP.
- Li et al. (2017) Jiwei Li, Will Monroe, Tianlin Shi, Sébastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial Learning for Neural Dialogue Generation. In Proc. EMNLP.
- Li et al. (2019) Margaret Li, Stephen Roller, Ilia Kulikov, Sean Welleck, Y-Lan Boureau, Kyunghyun Cho, and Jason Weston. 2019. Don’t Say That! Making Inconsistent Dialogue Unlikely with Unlikelihood Training. arXiv preprint arXiv:1911.03860.
- Martins and Astudillo (2016) Andre Martins and Ramon Astudillo. 2016. From softmax to sparsemax: A sparse model of attention and multi-label classification. In Proc. ICML.
- Merity et al. (2016) Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer Sentinel Mixture Models. In Proc. ICLR.
- Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proc. ACL.
- Peters et al. (2019) Ben Peters, Vlad Niculae, and André FT Martins. 2019. Sparse Sequence-to-Sequence Models. In Proc. ACL.
- Press et al. (2017) Ofir Press, Amir Bar, Ben Bogin, Jonathan Berant, and Lior Wolf. 2017. Language generation with recurrent generative adversarial networks without pre-training. arXiv preprint arXiv:1706.01399.
- Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog.
- Rush et al. (2015) Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A Neural Attention Model for Abstractive Sentence Summarization. In Proc. EMNLP.
- Sennrich et al. (2016) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In Proc. ACL.
- Stahlberg and Byrne (2019) Felix Stahlberg and Bill Byrne. 2019. On NMT Search Errors and Model Errors: Cat Got Your Tongue? In Proc. EMNLP.
- Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Proc. NIPS.
- Tsallis (1988) Constantino Tsallis. 1988. Possible generalization of Boltzmann-Gibbs statistics. Journal of Statistical Physics.
- Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. NIPS.
- Vijayakumar et al. (2018) Ashwin K Vijayakumar, Michael Cogswell, Ramprasaath R Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2018. Diverse beam search for improved description of complex scenes. In Proc. AAAI.
- Welleck et al. (2019) Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2019. Neural text generation with unlikelihood training. arXiv preprint arXiv:1908.04319.
- Wolf et al. (2019) Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. Transfertransfo: A transfer learning approach for neural network based conversational agents. arXiv preprint arXiv:1901.08149.
- Xu et al. (2018) Jingjing Xu, Xuancheng Ren, Junyang Lin, and Xu Sun. 2018. Diversity-promoting gan: A cross-entropy based generative adversarial network for diversified text generation. In Proc. EMNLP.
- Yu et al. (2017) Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In Proc. AAAI.
- Zhang et al. (2018) Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing Dialogue Agents: I have a dog, do you have pets too? In Proc. ACL.
- Zhu et al. (2018) Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A benchmarking platform for text generation models. In Proc. SIGIR.
- Zhu et al. (2015) Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books. In Proc. ICCV.
Appendix A Proof of boundedness of the sparsemax score
We show here that the sparsemax score in Eq. 4 is always bounded between 0 and 1. The upper bound $\mathrm{sp}(x, p) \le 1$ follows from the fact (Blondel et al., 2019, Prop. 2) that any Fenchel-Young loss (which includes the sparsemax loss $\ell_2$) is non-negative: since $\mathrm{sp}(x, p) = 1 - \ell_2(p, x)$, it follows that $\mathrm{sp}(x, p) \le 1$. Let us see when the maximal value 1 is attained. We have:
$$\mathrm{sp}(x, p) = p_x + H_2(p) = p_x + \tfrac{1}{2}\bigl(1 - \|p\|_2^2\bigr). \tag{A}$$
Since the Gini entropy is maximized by the uniform distribution, for a fixed value $p_x = t$ the distribution maximizing Eq. A spreads the remaining mass uniformly, i.e., $p_j = \frac{1-t}{|\mathcal{V}|-1}$ for $j \neq x$, with $t \in [0, 1]$. Replacing in Eq. A, we obtain
$$\mathrm{sp}(x, p) = t + \frac{1}{2}\left(1 - t^2 - \frac{(1-t)^2}{|\mathcal{V}|-1}\right).$$
The derivative of this expression with respect to $t$ is $(1-t)\frac{|\mathcal{V}|}{|\mathcal{V}|-1} \ge 0$, so it is maximized by $t = 1$, which corresponds to the one-hot distribution $p = e_x$.
To see that we always have $\mathrm{sp}(x, p) \ge 0$, we use the fact that the Gini entropy $H_2(p)$ is always non-negative (zero if and only if $p$ is a one-hot distribution), which is clear from the definition in footnote 2, and that $p_x \ge 0$; therefore, the sum of these two terms is also non-negative, and zero if and only if $p = e_{x'}$ with $x' \neq x$.
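As a quick numerical sanity check of this bound, the sketch below evaluates the closed form $\mathrm{sp}(x, p) = p_x + \frac{1}{2}(1 - \|p\|^2)$ used in the proof above on one-hot and random distributions; the function names are illustrative, not taken from the paper's code.

```python
import numpy as np

def gini_entropy(p):
    """Gini (Tsallis alpha=2) entropy: H2(p) = 1/2 * (1 - ||p||^2)."""
    return 0.5 * (1.0 - np.sum(p ** 2))

def sparsemax_score(p, x):
    """Sparsemax score sp(x, p) = p_x + H2(p), bounded in [0, 1]."""
    return p[x] + gini_entropy(p)

V = 100
one_hot = np.eye(V)[3]

# Score is 1 iff p is one-hot on x, and 0 iff p is one-hot on some x' != x.
assert np.isclose(sparsemax_score(one_hot, 3), 1.0)
assert np.isclose(sparsemax_score(one_hot, 5), 0.0)

# Random distributions stay strictly inside [0, 1].
rng = np.random.default_rng(0)
for _ in range(1000):
    p = rng.dirichlet(np.ones(V))
    assert 0.0 <= sparsemax_score(p, rng.integers(V)) <= 1.0
```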
Appendix B Comparative plots of evaluation metrics
Figure 2 shows comparative plots of the ε-perplexity, sparsemax score, and Jensen-Shannon divergence for a parametrized family of distributions over a vocabulary of 100 words, as the parameter of the family varies.
Appendix C Values of ε used for ε-perplexity
The values of ε used for the computation of the ε-perplexity are reported in Table 8.
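A minimal sketch of how such a smoothed perplexity can be computed. It assumes ε-perplexity adds ε to the probability assigned to each ground-truth word and renormalizes by 1 + ε|V|, so that sparse decoders that assign exactly zero probability remain evaluable; the function name is illustrative.

```python
import math

def eps_perplexity(probs, epsilon, vocab_size):
    """Smooth each ground-truth probability p as (p + eps) / (1 + eps * V),
    then exponentiate the average negative log-likelihood. With eps = 0
    this reduces to ordinary perplexity (and diverges if any p is 0)."""
    nll = 0.0
    for p in probs:
        nll -= math.log((p + epsilon) / (1.0 + epsilon * vocab_size))
    return math.exp(nll / len(probs))

# With epsilon = 0 and uniform 0.5 probabilities, perplexity is exactly 2;
# with a small epsilon, a zero-probability word yields a finite value.
print(eps_perplexity([0.5, 0.5], 0.0, 10))     # ordinary perplexity: 2.0
print(eps_perplexity([0.0, 1.0], 1e-3, 100))   # finite despite the zero
```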
Appendix D Comparison of models with the Jensen-Shannon divergence
We compared the distributions given by the different decoding methods when generating the sentences of the BookCorpus validation set with the Jensen-Shannon divergence, as described in §4. In Tables 9 and 10 we show some of the sentences with the highest Jensen-Shannon divergence, together with the probability each model assigns to the ground-truth word.
Appendix E Story completion examples
Examples of story completion with context from the WritingPrompts dataset using the different models fine-tuned on BookCorpus are presented in Table 11.
|Context:||I wake up, battered and scarred from the beat I was given behind the burger joint. I think the name was Ronny’s? My memory of yesterday, better yet the last week has been completely faded. It was looking into the fog of a …|
|Greedy:||… memory, and was beginning to wonder if the man was still alive. the fog was beginning to swirl, and the fog was beginning to swirl again. the fog was beginning to swirl again, and the fog was beginning to swirl again.|
|Top-k:||… moment of silence. “how ’d you know?” “i’m a professor, but i can’t find it.” “and you’re just a professor, i guess.” “and you ’re just a professor, too.” “i ’m a professor?” “yes. and you are.” “i didn’t mean to scare you.”|
|Nucleus:||… match to find out how far its web is. ain’t a pretty good option, huh? paul shrugged. wow, that’s how much access i’ve got, i’m sure. ill check with them if i need to go outside in the cold. i’m sure they’ll do it. fine by me.|
|Entmax:||… memory. my brain has become trapped in a cycle of confusion that can’t be understood with common knowledge. things become warped and fake, the knowledge getting heavier and more complex. move. do something. i can’t.|
|Context:||“We are gathered here, in the sight of the Gods and in the presence of friends and family, to observe the passing of Mr. John Carlyle.” The soft woosh of the beginning flames punctuates the Speaker’s introduction. John was …|
|Greedy:||… a former professor of physics at the university of texas. he was a graduate student at the university of texas. he was a member of the faculty of the university of texas. he was a member of the faculty of the university of texas.|
|Top-:||… a very nice guy, who always liked to make fun of herself, even after her husband had died and she found him dead. she never wanted to be his wife, but he did have a strong reputation as a good friend, so she liked to make|
|Nucleus:||… a good man. he’d built a home with plans for much larger quantities and quickly noticed that consuming supplies did not always lead to immediate success. on each occasion he filled his stomach with an island of straw he had|
|Entmax:||… very interesting. he was quite quintessential romantic: blond, warm, driven; amazingly handsome; amazing with his mother; and calm; always entertaining. his childhood spent in the neighborhood of narrow north kennedy|
Appendix F Dialogue simulation examples
In order to evaluate the different decoding methods in dialogue generation, we simulated a conversation between two agents using the same decoding method. For that, we assigned each agent a randomly selected persona from the PersonaChat dataset and gave a first utterance of the dataset as context. Examples of dialogue simulations are presented in Tables 12-13 and Tables 14-15.
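The simulation procedure just described can be sketched as follows; `generate` is a hypothetical stand-in for decoding one utterance with the fine-tuned model, and is not the paper's actual API.

```python
import random

def simulate_dialogue(personas, first_utterances, generate, n_turns=10, seed=0):
    """Self-play sketch: two agents, each with a randomly selected persona,
    alternate turns using the same decoding method. The conversation is
    seeded with a first utterance taken from the dataset."""
    rng = random.Random(seed)
    agents = [rng.choice(personas), rng.choice(personas)]
    history = [rng.choice(first_utterances)]  # context from the dataset
    for turn in range(n_turns):
        speaker = turn % 2  # agents alternate
        history.append(generate(agents[speaker], history))
    return history
```

With a toy `generate` stub, `simulate_dialogue(["persona"], ["hello"], lambda persona, hist: f"reply {len(hist)}", n_turns=3)` returns the seeded utterance followed by three alternating replies.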
|Context:||hello how are you today?|
|Softmax:||- i’m great i love to dance to oldies music was is fun|
|- very fluent in all the vernacular spoken in pa|
|- that’s awesome i teach elementary school|
|- i love teaching, tap dancing to bob segar, all illustration type stuff|
|- that sounds like a lot of fun|
|- it is, but this harry potter series just made me feel tingly|
|- that’s really hurtful to do|
|- it can be, but i am working on other things|
|- that you are fine i like james patterson|
|- very nice of you to comment|
|- yes my dog is my son that he chews candy to be funny for my dogs birthday|
|- i have no kids i moved to virginia|
|- that is crazy did you like the hurricane ?|
|- yeah it was so devastating|
|- i see|
|- its so sad i’m sorry you are stuck in pennywise|
|- wow that’s very sad|
|- it is he?|
|- it will be my big dream to leave and go to florida|
|Softmax-τ:||- i am doing good, how about yourself?|
|- i am good, just getting in from work|
|- oh, what do you do?|
|- i sell cars, what do you do?|
|- i work in a hospital as a nurse.|
|- oh that is very nice of you|
|- what do you sell cars ?|
|- i sell cars as a hobby|
|- oh, i see. do you have any hobbies?|
|- i like to paint and hang out in my living room.|
|- oh, i enjoy listening to music. i love rolling stones !|
|- i like rolling stones too do you?|
|- yeah, i really like the rolling stones, they’re my favorite!|
|- cool, meat is amazing. i also like to like painting|
|- i think it is for the best. i like to know that i love my job.|
|- yes beater and i love meat. i also love to eat meat.|
|- i’m sure you are a vegetarian! what do you do for a living?|
|- i sell cars, i’m an electrician|
|- oh i see. are you a car salesman?|
|- i just started selling cars|
|Greedy:||- i’m good, how are you?|
|- i’m good thanks for asking|
|- what do you do for a living?|
|- i’m studying to be a dentist|
|- oh, i’m a teacher.|
|- what city are you from?|
|- i live in the city.|
|- what do you do for a living?|
|- i teach math and english.|
|- what city are you from?|
|- i live in the city.|
|- what city are you from?|
|Context:||hello how are you today?|
|Top-k:||- hello! i am well ,yourself ?|
|- i am great. i just got off of work|
|- i just got a job at a grocery store.|
|- oh i am sorry to hear that|
|- i’m okay. do you like reading ?|
|- yes i love reading and i also love to swim|
|- oh that’s awesome! i really like reading cupcakes.|
|- that sounds delicious. i’m in the south so i like to eat cupcakes|
|- i love to make my own cupcakes. i love it!|
|- do you have any hobbies?|
|- i love reading, but my favorite hobby is painting.|
|- that is cool. i’ve been to canada twice , but i’ve never been there|
|- i love to paint too! what color eyes do you have?|
|- mine are blue i’m not sure|
|- i have brown hair.|
|- i like to make my own cupcakes|
|- cool. do you work? i work at walmart and i work at the grocery market.|
|- i am actually a reporter for the local news|
|- cool! do you like to read?|
|- i’ve a newspaper reporter in the news|
|Nucleus:||- i’m great, just got home from a long walk with my two dogs.|
|- do you play any instruments?|
|- not really. what kind of music do you like?|
|- i play piano sometimes and listen to metal music|
|- oh cool. i’m just trying to write an ebook about animals.|
|- i like cats and dogs. do you play any instruments?|
|- i do not, i prefer listening to music i like.|
|- what kind of music do you like?|
|- classical music and video game soundtrack.|
|- do you go to concerts?|
|- i go with my husband and listen to a lot of different music genres.|
|- do you have any kids?|
|- nope, how about you?|
|- i’ve a boyfriend. are you married?|
|- yes, recently divorced. we are planing on getting a corvette.|
|- how exciting! are you close to your husband?|
|- i am. i am looking to buy one when i retire.|
|- that is awesome! are you married?|
|- nope. we were looking to move to ireland when i got my divorce.|
|- that’s so sad. you should i ask if he’s ok?|
|Entmax:||- good, how are you doing?|
|- great just watching a movie|
|- oh really what movie?|
|- it’s a story about robots getting into human stuff.|
|- wow! well that strikes me as not science fiction at all|
|- why do you say that then?|
|- because humans are fragile and vulnerable lol|
|- yeah they would be|
|- very true. what kind of movies u like?|
|- i love sci fi and ghost in the shell|
|- i actually have never watched that. maybe ghost in the shell|
|- it is such a beautiful movie|
|- yeah i love stories about lost ones|
|- that ’s cool|
|- my favourite movie though is gamera how about you|
|- i love it too, did you see it?|
|- yes that was one of my favorite films|
|- that’s cool. do you work?|
|- i am too lazy to try difficult things now|
|Context:||greetings from the big apple, kim here, how are you?|
|Softmax:||- hi! great thanks, how are you ? do you have any hobbies?|
|- i ride my black mbu on my nightly walks, and listen to rock music.|
|- whats your favorite kind of music?|
|- guns and roses is my bedrock, my favorite artist.|
|- that’s great, my favorite genre is joleg tonne or tontera band.|
|Softmax-τ:||- i’m doing great, how are you?|
|- i’m good, just playing with my son today.|
|- sounds fun, what are you playing with?|
|- I’m playing soccer right now, he’s 4.|
|- do you have any pets?|
|- no, just the baby. are you married?|
|- no, no i don’t have any kids|
|- i’m single, but i have been in this profession for a while.|
|- interesting, tell me more about yourself.|
|- i’m a professional soccer player. what about you ?|
|- that’s cool, i’m just a training dummy|
|- i also play soccer professionally, i love it.|
|- i like to play sports too. definitely a great sport|
|- i have been to stanford for training, trying to perfect my skills.|
|- i bet that’s great training|
|- its a lot of work!|
|- yeah but i like it|
|- yeah. i’m actually a soccer mom, we try to keep on top of our kids.|
|- do you have kids?|
|- i do.|
|Greedy:||- i’m doing well. how are you?|
|- i’m good, just finished cooking dinner.|
|- what do you like to cook?|
|- i love pasta, and i love to cook.|
|- i like to make pasta too. i’m a huge fan of pasta.|
|- i love to cook, i’m a student, so i’m always busy.|
|- what are you studying?|
|- i’m studying to be a chef.|
|- what city are you from?|
|- i’m from cali, what about you?|
|- i’m from cali. i’m from the midwest.|
|- i’m from the midwest too , but i’m from the midwest.|
|Context:||greetings from the big apple, kim here, how are you?|
|Top-k:||- i’m very good, thank you, i hope you have a great day|
|- thanks. have a good day too.|
|- same to you, have a good day too!|
|- you do the same. have a great day too.|
|- the same to you and have a great day also!|
|- i do have a very nice day, i love running.|
|- that’s nice to enjoy, it s a great day here today|
|- i am a political junkie.|
|- you must be very busy!|
|- what do you do for a living?|
|- i’m actually a business major.|
|- that must be hard to do, do you like politics?|
|- not a big fan of politics, i like to go skiing and listen to rock|
|- do you play sports or something?|
|- i am in school for business|
|- oh that is awesome. i used to play football in college, but i hated it!|
|- oh that’s why i play football!!|
|- wow. what is your name?|
|- my name is jed. i am from russia. how about you?|
|- i played in ny and played at boston.|
|Nucleus:||- hi, i’m fantastic, how are you?|
|- great. making some barbies for our barbie girls.|
|- awesome, do you have any hobbies?|
|- i really enjoy barbies. i live a long time away.|
|- i am looking forward to going to school in the fall.|
|- school is cool. do you sing? i love barbie songs.|
|- i’ve a pet pean i sing while eating tacos.|
|- barbie! that’s cute.|
|- oh yea, i have a small heart lol|
|- are you looking forward to winter? i would like to live in the big apple.|
|- winter is fun! no thanks, me neither.|
|- you must really like summer, too lol|
|- thanks. it is too cold to sing in the winter. its just not for me.|
|- do you like tacos?|
|- oh my gosh. they’re good. my favorite is barbie. lol|
|- i know, but spaghetti is my fave lol|
|- haha. lol|
|- oh , i do enjoy spaghetti , for my college graduation i was in last year of high school|
|- how are you ?|
|Entmax:||- good. i just rang someone on the other side.|
|- good, what did you get excited about today?|
|- i love paris. how was your day?|
|- its been crazy, i’m always feeling excited! lol|
|- what is your favorite thing to do?|
|- oh i love going hiking in the mountains, myself.|
|- that sound amazing. i like travelling.|
|- love trips, but i cannot stand staying in one place all day. lol|
|- do you have any hobbies ? i always want to find cool new things.|
|- i really like going out and nature itself, i prefer hiking|
|- yes, exploring parks and all that jazz when i can.|
|- that is awesome fun, whats your fav color?|
|- i love grey. roses and the mountains signify my youth.|
|- mine is blue, it makes me think of blueberries though|
|- grey denotes youth well or openness and transparency. love the kale chips.|
|- mmm i love chocolate . lol|
|- oh i am sold on chocolate. eating it off the cob|
|- i miss the crazy curly hair fries and crackers . haha|