Sparse Text Generation

04/06/2020 ∙ by Pedro Henrique Martins, et al. ∙ Priberam ∙ Unbabel Inc.

Current state-of-the-art text generators build on powerful language models such as GPT-2, which have impressive performance. However, to avoid degenerate text, they require sampling from a modified softmax, via temperature parameters or ad-hoc truncation techniques, as in top-k or nucleus sampling. This creates a mismatch between training and testing conditions. In this paper, we use the recently introduced entmax transformation to train and sample from a natively sparse language model, avoiding this mismatch. The result is a text generator with favorable performance in terms of fluency and consistency, fewer repetitions, and n-gram diversity closer to human text. In order to evaluate our model, we propose three new metrics that are tailored for comparing sparse or truncated distributions: ϵ-perplexity, sparsemax score, and Jensen-Shannon divergence. Human-evaluated experiments in story completion and dialogue generation show that entmax sampling leads to more engaging and coherent stories and conversations.


1 Introduction

Context:

“Are they ready to go live?” Zuko asked. “Yup, in just a minute” Tomi replied as he leaned back in his reclining chair. This was a moment that the entire galaxy would see. After 10 years…

Greedy: … of study, the scientists had finally come to the conclusion that the universe was not a place for living. The scientists had been studying the universe for over a century. They had been studying the universe for over a century.
Top-k: … of studying this phenomenon, the team had finally come up with a plan to study what the aliens were doing. The team would be looking for a way to create a space station. “And what would you call it?” “What would you call the space station?”
Nucleus: … of diligent testing, and some test run after that, the results were well received. “they’re waiting for you, because there’s no way they would.Stowedly at a local wedding place, the two archaeologists spoke together.
Entmax: … of hibernation, Tomi was back. These last days had been a significant step forward in his mission. This time, Tomi was not alone. All the empires had aligned together and the world’s leadership began to openly support his mission.
Table 1: Example of story completion on the WritingPrompts dataset, using greedy decoding, top-k sampling, nucleus sampling, and entmax sampling (our proposal). Repetitions and off-topic text are highlighted.

The last few years have seen tremendous progress in neural language modeling, with a strong impact on key NLP tasks such as machine translation (Sutskever et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017), speech recognition (Chorowski et al., 2015), summarization (Rush et al., 2015), and open-ended text generation (Radford et al., 2019). Benefitting from large amounts of data, models such as Transformer-XL (Dai et al., 2019) and GPT-2 (Radford et al., 2019) have achieved impressive perplexity scores on language modeling. However, the generated text is still often repetitive and incoherent (Table 1).

A conundrum with current approaches is the mismatch between training and testing conditions: models are usually trained to maximize the likelihood of observed text, but, when generating, state-of-the-art models sample from a truncated and renormalized softmax distribution (Fan et al., 2018; Holtzman et al., 2019). They do so as a compromise to avoid two extremes: a deterministic search for the most probable sentence (via greedy decoding or beam search) usually results in dull and repetitive “degenerate text” (Li et al., 2016a, 2017; Holtzman et al., 2019), while stochastically sampling from the full softmax distribution often generates implausible words from the tail of the distribution (Fan et al., 2018). The recently proposed nucleus sampling approach (Holtzman et al., 2019) sets the truncation point based on the cumulative distribution function, i.e., it considers the smallest set of top words whose cumulative probability exceeds a threshold p. This approach permits better control of the number of words being generated than previous work, such as top-k sampling (Fan et al., 2018), which samples from the k most probable words. However, the “sparsity” introduced by both methods is artificially imposed at run time, not learned during training.

A second problem is that it is hard to compare different truncation strategies—for example, we cannot easily evaluate how the resulting truncated distributions behave as language models, because the most widely used metric for language modeling—perplexity—does not handle sparse distributions. For this reason, previous works (Welleck et al., 2019; Li et al., 2019) generate from a truncated softmax, but report the perplexity of the full softmax.

In this paper, we propose a new approach—entmax sampling (§3)—that eliminates the mismatch between training and test conditions, sparsity-wise. Key to our approach is the recently proposed entmax transformation (Peters et al., 2019). Entmax transforms a vector of scores into a sparse probability distribution, preventing implausible words from receiving any probability mass. Moreover, it does so natively: it comes with a well-defined loss function that allows it to learn its sparsity automatically from the data, during training. This results in a new stochastic text generator where the number of possible word types varies with the context (like nucleus sampling), but that generates by sampling directly from its output distribution (like softmax), and where the sparsity of this distribution is present during training (unlike any existing method).

As a second contribution, we propose three new metrics to support the evaluation of sparse language models (§4): ϵ-perplexity, sparsemax score, and Jensen-Shannon divergence. We show that these metrics are well supported theoretically and can be used to compare our method with various truncation and temperature techniques.

Experiments in language modeling, story completion, and dialogue generation (§5) show that entmax sampling generates more diverse text and fewer repetitions than nucleus and top-k sampling.

1.1 Related work

Decoding methods.

While greedy decoding and beam search are popular strategies for sequence-to-sequence tasks, such as machine translation, Knowles et al. (2016) and Stahlberg and Byrne (2019) showed that searching for the most probable sentence in a model trained with likelihood maximization has a bias for short sentences. In open-ended generation, Fan et al. (2018) and Holtzman et al. (2018, 2019) have shown that these methods lead to repetitions and dull text. To overcome this, several authors proposed ways of altering the beam search method in order to promote word diversity (Li et al., 2016b; Vijayakumar et al., 2018; Kulikov et al., 2018).

An alternative to deterministic text generation is to sample directly from the softmax distribution. However, softmax is a strictly positive (dense) transformation. Since the probability mass tends to accumulate in a long tail, this procedure tends to generate very unlikely words too often, leading to degenerate text (Fan et al., 2018; Holtzman et al., 2019). This can be mitigated by lowering the softmax temperature (Ficler and Goldberg, 2017), by sampling from the top- most probable words only (Fan et al., 2018; Radford et al., 2019), and through nucleus sampling (Holtzman et al., 2019).

Diversity-promoting models.

In addition to new decoding methods, models that aim to increase word diversity and diminish repetition have also been introduced. Xu et al. (2018) proposed a diversity-promoting generative adversarial network, which rewards novel and fluent text. Holtzman et al. (2018) proposed augmenting the language model with several discriminators. More recently, Welleck et al. (2019) proposed augmenting log-likelihood loss with an unlikelihood loss that penalizes the generation of tokens that are present in the context. These alternatives can be applied jointly with entmax sampling.

Sparse transformations and losses.

At the core of our work are sparse alternatives to the softmax transformation. Martins and Astudillo (2016) proposed sparsemax and applied it to multi-label classification. This was generalized by Peters et al. (2019) via their -entmax transformation, which was applied to sequence-to-sequence models for morphological inflection and machine translation. In contrast to our work, they performed deterministic decoding with beam search.

Evaluation metrics.

The most common metrics to evaluate text generation models are perplexity (Jelinek et al., 1977) and BLEU (Papineni et al., 2002). For open generation, Zhu et al. (2018) observed that “no single metric is comprehensive enough.” Other evaluations include corpus n-gram overlap (Yu et al., 2017; Press et al., 2017), Fréchet distance (Cífka et al., 2018), and sweeping the softmax temperatures to assess model robustness (Caccia et al., 2018). These approaches are aimed at the (harder) problem of evaluating the quality of generated text. By contrast, our paper proposes new metrics for evaluating language models in the task of predicting the next word conditioned on ground truth context (like perplexity does), but supporting sparse probability distributions (which perplexity does not).

2 Language Modeling

Language models assign probability to word sequences $x = \langle x_1, \dots, x_L \rangle$, where each $x_t$ is in a vocabulary $V$ and $L$ is the sequence length. This probability can be written as $p_\theta(x) = \prod_{t=1}^{L} p_\theta(x_t \mid x_{<t})$. We would like the model to assign high probability to real sentences, i.e., each distribution $p_\theta(\cdot \mid x_{<t})$ should assign a large probability value to the ground truth word $x_t$.

Given a set of training sentences, the usual strategy for learning the language model parameters θ is to minimize the negative log-likelihood:

$\mathcal{L}(\theta) = -\sum_{i=1}^{N} \sum_{t=1}^{L_i} \log p_\theta\big(x_t^{(i)} \mid x_{<t}^{(i)}\big),$   (1)

where $N$ is the number of training sentences and $L_i$ is the length of the $i$-th sentence.

The standard choice to model $p_\theta(x_t \mid x_{<t})$ in Eq. 1 is to compute a score vector $z_t \in \mathbb{R}^{|V|}$ by conditioning on the context $x_{<t}$, and then applying a softmax transformation, $p_\theta(x_t \mid x_{<t}) = [\mathrm{softmax}(z_t)]_{x_t}$, where

$[\mathrm{softmax}(z)]_j = \frac{\exp(z_j)}{\sum_{j'} \exp(z_{j'})}.$   (2)

At decoding time, the language model generates sentences one word at a time by sampling from the learned probability distribution. However, softmax yields a dense distribution, i.e., some probability mass (even if small) is assigned to every word in the vocabulary. Holtzman et al. (2019, §4) have shown that, if we sample from this distribution directly, the resulting text becomes degenerate, with common incoherences arising due to the unreliability of the tail of the distribution. This motivated a line of work proposing “ad-hoc” modifications to the softmax distribution to reduce the effect of the tail. Two of the most successful techniques, top-k and nucleus sampling (Fan et al., 2018; Holtzman et al., 2019), do so by truncating and renormalizing the distribution $p_\theta(\cdot \mid x_{<t})$. Note that these techniques are applied only at decoding time—during training the original softmax distribution is left untouched, being used as part of the optimization of the cross-entropy loss.
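To make the two truncation schemes concrete, the following is a minimal PyTorch sketch (not the authors' code) of how a single step's softmax distribution is truncated and renormalized; the helper names and toy logits are illustrative assumptions.

```python
import torch

def top_k_probs(logits: torch.Tensor, k: int) -> torch.Tensor:
    """Keep the k highest-probability words, zero out the rest, and renormalize."""
    probs = torch.softmax(logits, dim=-1)
    topk = torch.topk(probs, k)
    truncated = torch.zeros_like(probs)
    truncated[topk.indices] = topk.values
    return truncated / truncated.sum()

def nucleus_probs(logits: torch.Tensor, p: float) -> torch.Tensor:
    """Keep the smallest set of top words whose cumulative probability reaches p."""
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    cutoff = int(torch.searchsorted(cumulative, torch.tensor(p))) + 1  # keep at least one word
    truncated = torch.zeros_like(probs)
    truncated[sorted_idx[:cutoff]] = probs[sorted_idx[:cutoff]]
    return truncated / truncated.sum()

z_t = torch.tensor([4.0, 3.5, 2.0, 0.5, -1.0])               # toy scores for a 5-word vocabulary
next_word = torch.multinomial(nucleus_probs(z_t, p=0.9), 1)  # sample from the truncated distribution
```

Both helpers modify the distribution only at decoding time, which is precisely the training/decoding mismatch addressed in the next section.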

Our alternative to these ad-hoc modifications builds on sparse transformations, such as sparsemax (Martins and Astudillo, 2016) and, more generally, α-entmax (Peters et al., 2019). These transformations have the ability to inherently produce sparse probability distributions (i.e., their tails are short). Therefore, sampling from these distributions directly is a natural way to prevent degenerate text.

3 Entmax Sampling

3.1 Sparse Transformations

The α-entmax transformation¹ (Peters et al., 2019) is a family of transformations, parametrized by a scalar parameter α ≥ 1, defined as:

$\alpha\text{-entmax}(z) := \underset{p \in \triangle^{|V|}}{\arg\max}\; p^\top z + H_\alpha^{\mathrm{T}}(p),$   (3)

where $\triangle^{|V|} = \{p \in \mathbb{R}^{|V|} : p \ge 0,\ \sum_j p_j = 1\}$ is the probability simplex, and $H_\alpha^{\mathrm{T}}$ is the Tsallis α-entropy (Tsallis, 1988):

$H_\alpha^{\mathrm{T}}(p) := \begin{cases} \frac{1}{\alpha(\alpha-1)} \sum_j \big(p_j - p_j^\alpha\big), & \alpha \neq 1 \\ -\sum_j p_j \log p_j, & \alpha = 1. \end{cases}$   (4)

When α = 1, this becomes the commonly used Shannon entropy, and when α = 2, we recover the Gini entropy.² When α → ∞, $H_\alpha^{\mathrm{T}}(p) \to 0$. Thus, α = 1, α = 2, and α → ∞ recover softmax, sparsemax, and argmax, respectively. Blondel et al. (2019) have shown that, for α > 1, entmax is able to output sparse probability distributions, where some words get exactly zero probability, whereas softmax (α = 1) does not have this capability.

¹ https://github.com/deep-spin/entmax
² The Gini entropy is $H_2^{\mathrm{T}}(p) = \frac{1}{2}\sum_j p_j(1 - p_j)$.
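The official implementation is the entmax package linked in footnote 1; the sketch below is only meant to illustrate how α > 1 yields exact zeros, using the threshold characterization of the solution of Eq. 3 given by Peters et al. (2019), $p_j = [(\alpha-1)(z_j - \tau)]_+^{1/(\alpha-1)}$. The function name and iteration count are our own choices.

```python
import torch

def entmax_by_bisection(z: torch.Tensor, alpha: float = 1.5, n_iter: int = 50) -> torch.Tensor:
    """Find tau such that p_j = [(alpha - 1)(z_j - tau)]_+^(1/(alpha - 1)) sums to 1."""
    assert alpha > 1.0
    lo = z.max() - 1.0 / (alpha - 1.0)   # at this tau, the sum of p is >= 1
    hi = z.max()                         # at this tau, the sum of p is 0
    for _ in range(n_iter):
        tau = (lo + hi) / 2
        p = torch.clamp((alpha - 1.0) * (z - tau), min=0.0) ** (1.0 / (alpha - 1.0))
        lo, hi = (tau, hi) if p.sum() >= 1.0 else (lo, tau)
    p = torch.clamp((alpha - 1.0) * (z - lo), min=0.0) ** (1.0 / (alpha - 1.0))
    return p / p.sum()

z = torch.tensor([4.0, 3.5, 2.0, 0.5, -1.0])
print(torch.softmax(z, dim=-1))           # dense: every word keeps some probability
print(entmax_by_bisection(z, alpha=1.5))  # sparse: low-scoring words get exactly zero
```

The entmax package additionally provides exact, batched implementations for the common cases α = 1.5 and α = 2, alongside a bisection variant for general α.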

How can we learn this output sparsity during training?

Following Peters et al. (2019), we replace the negative log-likelihood loss in Eq. 1 by

$\mathcal{L}(\theta) = \sum_{i=1}^{N} \sum_{t=1}^{L_i} \ell_\alpha\big(z_t^{(i)}(\theta),\, x_t^{(i)}\big),$   (5)

where $\ell_\alpha$ is the α-entmax loss:

$\ell_\alpha(z_t, x_t) := \big(p^\star - e_{x_t}\big)^\top z_t + H_\alpha^{\mathrm{T}}(p^\star),$   (6)

where $p^\star = \alpha\text{-entmax}(z_t)$, and $e_{x_t}$ is the one-hot indicator vector that corresponds to the ground truth word $x_t$. When α = 1, we still recover the negative log-likelihood, $\ell_1(z_t, x_t) = -\log p_\theta(x_t \mid x_{<t})$, and, when α = 2, this corresponds to the sparsemax loss (Martins and Astudillo, 2016), which we will revisit in §4. When using the α-entmax loss with α > 1, we can eliminate the mismatch between training and run time conditions, since the probability distributions evaluated by the loss are also sparse.

The entmax loss belongs to the wider class of Fenchel-Young losses (Blondel et al., 2019) and, consequently, is convex in $z_t$, differentiable (with gradient $\nabla_{z_t}\ell_\alpha(z_t, x_t) = p^\star - e_{x_t}$), and, for α > 1, has a separation margin: the loss becomes zero when the score of the correct word exceeds the scores of all other words by a margin of $(\alpha - 1)^{-1}$. Separation is achieved if and only if $p^\star = e_{x_t}$, i.e., when the model puts all its probability mass on the correct word. This allows the model to be adaptive to the degree of uncertainty present: in some cases there are few plausible words, so most words should have probability zero, while in other cases a higher number of words are plausible and should be given probability mass.
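As an illustration of Eq. 6 (a sketch in our notation, not the training code used in the paper), the loss and its gradient can be written out directly; the import below assumes the entmax package from footnote 1 exposes an entmax_bisect function, as its documentation suggests.

```python
import torch
from entmax import entmax_bisect   # package from footnote 1 (import path is an assumption)

def tsallis_entropy(p: torch.Tensor, alpha: float) -> torch.Tensor:
    """Eq. 4 for alpha != 1."""
    return (p - p ** alpha).sum(dim=-1) / (alpha * (alpha - 1.0))

def entmax_loss(z: torch.Tensor, gold: int, alpha: float = 1.5) -> torch.Tensor:
    """Eq. 6: (p* - e_gold)^T z + H_alpha(p*), with p* = alpha-entmax(z)."""
    p_star = entmax_bisect(z, alpha=alpha, dim=-1)
    e_gold = torch.zeros_like(z)
    e_gold[gold] = 1.0
    return (p_star - e_gold) @ z + tsallis_entropy(p_star, alpha)

z = torch.tensor([4.0, 3.5, 2.0, 0.5, -1.0], requires_grad=True)
entmax_loss(z, gold=0).backward()
print(z.grad)   # numerically close to the analytic gradient p* - e_gold stated above
```

In practice, one would plug a ready-made entmax loss module into the fine-tuning loop rather than this explicit formula.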

3.2 Entmax Sampling

We propose entmax sampling as an alternative to top-k sampling and nucleus sampling (Holtzman et al., 2019). We sample from the categorical distribution obtained by applying the entmax transformation to the scores given by the model:

$x_t \sim p_\theta(\cdot \mid x_{<t}) = \alpha\text{-entmax}(z_t).$   (7)

We sample directly from the learned sparse probability distribution over the words, without any ad-hoc modification. Therefore, this sparsity is not artificially imposed at run time; rather, it is native to the entmax transformation and learned during training. As in nucleus sampling, and in contrast to top-k sampling, entmax sampling considers a varying number of tokens depending on the context. Moreover, as we show in Table 3, with entmax sampling this variability is higher.
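A minimal sketch of the resulting decoding loop is shown below, using an off-the-shelf GPT-2 from the transformers library (footnote 6) purely to illustrate Eq. 7; in the paper the model is first fine-tuned with the entmax loss, and α = 1.5 here is an arbitrary illustrative choice rather than the tuned value. The sketch assumes a recent transformers version whose model outputs expose .logits.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer
from entmax import entmax_bisect   # see footnote 1 (import path is an assumption)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium").eval()

ids = tokenizer.encode("This was a moment that the entire galaxy would see.", return_tensors="pt")
with torch.no_grad():
    for _ in range(50):
        z_t = model(ids).logits[:, -1, :]              # scores for the next word
        p = entmax_bisect(z_t, alpha=1.5, dim=-1)      # sparse distribution of Eq. 7
        next_id = torch.multinomial(p, num_samples=1)  # sample directly, no truncation
        ids = torch.cat([ids, next_id], dim=-1)
print(tokenizer.decode(ids[0]))
```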

4 Evaluation Metrics

Language models are commonly evaluated by computing their perplexity (PPL) on held-out data. Perplexity assesses the ability of a language model to predict the next word given the context:

$\mathrm{PPL} = \exp\!\Big(-\frac{1}{N}\sum_{t=1}^{N} \log p_\theta(x_t \mid x_{<t})\Big).$   (8)

However, its computation involves the logarithm of the predicted probability of the ground truth word. This poses a problem when we are using sparse or truncated probability distributions, since we may have $p_\theta(x_t \mid x_{<t}) = 0$, making the perplexity infinite. Usually, authors report the values for perplexity computed on the original probability distribution, before truncation. However, this metric does not allow different sparse decoding strategies to be compared.³ As an alternative, we propose three different metrics (to facilitate a better understanding of these metrics, comparative plots are shown in Fig. 2, App. B).

³ This is important not only when we have sparse or truncated probability distributions, but also to compare language models using different vocabularies: when using perplexity, if the ground truth word is not contained in the vocabulary, one usually considers the probability attributed to an unk token instead of a zero probability, which leads to an unfair comparison between models with different vocabularies.

ϵ-perplexity.

To be able to compute the perplexity for sparse distributions, the simplest approach is to smooth them by adding a small value ϵ to all terms, followed by renormalization, as in additive (Laplace) smoothing (Chen and Goodman, 1999):

$\epsilon\text{-}\mathrm{PPL} = \exp\!\Big(-\frac{1}{N}\sum_{t=1}^{N} \log \frac{p_\theta(x_t \mid x_{<t}) + \epsilon}{1 + \epsilon|V|}\Big).$   (9)

The value of ϵ can be tuned for each method. A disadvantage of ϵ-perplexity is that it still does not evaluate the original sparse distribution, but rather a modified version of it. However, when applied to variants of truncated softmax, by collapsing all the truncated probabilities to the same value ϵ, it is useful to measure how much truncation deteriorates the ability to rank words, compared to softmax.
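A short sketch of Eq. 9 (our helper, with toy inputs): given the probabilities a decoding method assigns to the ground truth words, some of which may be exactly zero,

```python
import torch

def eps_perplexity(gold_probs: torch.Tensor, eps: float, vocab_size: int) -> float:
    """Eq. 9: add eps to every probability, renormalize, and exponentiate the average NLL."""
    smoothed = (gold_probs + eps) / (1.0 + eps * vocab_size)
    return torch.exp(-smoothed.log().mean()).item()

gold_probs = torch.tensor([0.4, 0.0, 0.2, 0.0, 0.7])   # zeros come from truncated/sparse predictions
print(eps_perplexity(gold_probs, eps=1e-4, vocab_size=50257))
```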

Sparsemax score.

We can derive a more interesting metric that handles sparse distributions directly. By setting α = 2 in Eq. 6,⁴ we obtain the sparsemax loss proposed by Martins and Astudillo (2016), $\ell_2(z_t, x_t)$. We define the sparsemax score (sp) as:

$\mathrm{sp}\big(x_t, p_\theta(\cdot \mid x_{<t})\big) := 1 - \ell_2(z_t, x_t) = p_\theta(x_t \mid x_{<t}) + H_2^{\mathrm{T}}\big(p_\theta(\cdot \mid x_{<t})\big),$   (10)

where $H_2^{\mathrm{T}}$ is the Gini entropy (see footnote 2). Unlike perplexity, this score is bounded. In fact, it is always between 0 (when $p_\theta(\cdot \mid x_{<t})$ is a one-hot distribution placed on a word other than $x_t$) and 1 (when $p_\theta(\cdot \mid x_{<t}) = e_{x_t}$). We prove this fact in App. A. Interestingly, when the model is deterministic (e.g., when it comes from greedy search), $p_\theta(\cdot \mid x_{<t})$ is a one-hot distribution, and the sparsemax score simply becomes the word accuracy. In the opposite case, when $p_\theta(\cdot \mid x_{<t})$ is uniform, we obtain $\mathrm{sp} = \tfrac{1}{2} + \tfrac{1}{2|V|} \approx \tfrac{1}{2}$ for large $|V|$.

⁴ If we set α = 1 instead, we revert to perplexity.
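A sketch of Eq. 10 (the helper is ours), together with a quick check of the limit cases discussed above:

```python
import torch

def sparsemax_score(p: torch.Tensor, gold: int) -> float:
    """Eq. 10: probability of the gold word plus the Gini entropy of the prediction."""
    gini = 0.5 * (1.0 - (p ** 2).sum())
    return (p[gold] + gini).item()

one_hot_right = torch.tensor([1.0, 0.0, 0.0, 0.0])
one_hot_wrong = torch.tensor([0.0, 1.0, 0.0, 0.0])
uniform = torch.full((4,), 0.25)
print(sparsemax_score(one_hot_right, gold=0))   # 1.0
print(sparsemax_score(one_hot_wrong, gold=0))   # 0.0
print(sparsemax_score(uniform, gold=0))         # 0.5 + 1/(2|V|) = 0.625 for |V| = 4
```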

Jensen-Shannon Divergence.

Given two discrete probability distributions $p$ and $q$, and denoting their mixture (arithmetic mean) as $m = \frac{1}{2}(p + q)$ and the Kullback-Leibler divergence as $\mathrm{KL}(p \,\|\, q)$, the Jensen-Shannon divergence is defined as:

$\mathrm{JS}(p, q) := \tfrac{1}{2}\,\mathrm{KL}(p \,\|\, m) + \tfrac{1}{2}\,\mathrm{KL}(q \,\|\, m).$   (11)

The Jensen-Shannon divergence can be interpreted as a mutual information as follows (Grosse et al., 2002; Banerjee et al., 2005): consider a two-step process where we first toss a fair coin $Y$. If the outcome is heads, we sample the next word $X$ according to $p$; if it is tails, we sample $X$ according to $q$. A word generated according to this process is governed by the mixture $m = \frac{1}{2}(p + q)$. The Jensen-Shannon divergence between $p$ and $q$ is the mutual information between the random variables $X$ and $Y$, which equals $I(X; Y) = H(Y) - H(Y \mid X)$, where $H(Y)$ is the Shannon entropy and $H(Y \mid X)$ is the conditional entropy. Hence, the Jensen-Shannon divergence can be seen as the reduction of uncertainty about the source $Y$ when we observe a sample $X$ from the mixture $m$. The more similar the two distributions $p$ and $q$ are, the smaller this reduction is.

In our experiments, we report the JS divergence as an evaluation metric for language models, setting $p = p_\theta(\cdot \mid x_{<t})$ and $q = e_{x_t}$ (i.e., a one-hot distribution placed on the ground truth word $x_t$), and averaging the JS divergence over the words. Like the sparsemax score described above, the JS divergence is bounded: it is zero if $p_\theta(\cdot \mid x_{<t}) = e_{x_t}$, and maximal ($\log 2$) when $p_\theta(\cdot \mid x_{<t})$ is a one-hot distribution placed on a different word.
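The sketch below (our helper) computes Eq. 11 for this setting, using the convention 0 log 0 = 0 so that sparse predictions are handled exactly:

```python
import torch

def js_divergence(p: torch.Tensor, q: torch.Tensor) -> float:
    """Eq. 11 with natural logarithms; zero-probability terms contribute nothing."""
    m = 0.5 * (p + q)
    def kl(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        mask = a > 0
        return (a[mask] * (a[mask] / b[mask]).log()).sum()
    return (0.5 * kl(p, m) + 0.5 * kl(q, m)).item()

e_gold = torch.tensor([0.0, 0.0, 1.0, 0.0, 0.0])    # one-hot on the ground truth word
p_model = torch.tensor([0.1, 0.0, 0.6, 0.3, 0.0])   # a sparse model prediction
print(js_divergence(p_model, e_gold))               # 0 iff p_model == e_gold, at most log 2
```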

Comparing multiple models.

The generalized JS divergence allows us to compare two or more trained models:

$\mathrm{JS}(p_1, \dots, p_K) := \frac{1}{K}\sum_{k=1}^{K} \mathrm{KL}\big(p_k \,\big\|\, m\big), \qquad m = \frac{1}{K}\sum_{k=1}^{K} p_k,$   (12)

where $p_1, \dots, p_K$ are the probability distributions of the different models and $m$ is their mixture. This quantity can be useful for measuring the diversity between multiple models (e.g., when used in an ensemble system). We use this metric in App. D to rank the sentences on which the models we compare disagree the most.
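Eq. 12 extends the same computation to several models at once; a short sketch (our helper, with toy distributions):

```python
import torch
from typing import List

def generalized_js(ps: List[torch.Tensor]) -> float:
    """Eq. 12: average KL divergence of each model's distribution to their mixture."""
    m = torch.stack(ps).mean(dim=0)
    kls = []
    for p in ps:
        mask = p > 0
        kls.append((p[mask] * (p[mask] / m[mask]).log()).sum())
    return torch.stack(kls).mean().item()

greedy  = torch.tensor([0.0, 1.0, 0.0, 0.0])
nucleus = torch.tensor([0.1, 0.7, 0.2, 0.0])
entmax  = torch.tensor([0.2, 0.5, 0.3, 0.0])
print(generalized_js([greedy, nucleus, entmax]))   # larger when the models disagree more
```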

5 Experiments

We compare the different decoding methods on three NLP tasks: language modeling (§5.1), story completion (§5.2), and dialogue generation (§5.3). In language modeling, we evaluate the methods' fluency, while in story completion we also evaluate whether they generate coherent and “interesting” text. In dialogue generation, we evaluate the methods' performance in an interactive task.

5.1 Language Modeling

            WikiText-2                          WikiText-103                        BookCorpus
            sp    JS    ϵ-ppl   rep   wrep      sp    JS    ϵ-ppl   rep   wrep      sp    JS    ϵ-ppl   rep   wrep
Softmax     .649  .415  20.19   .352  .177      .673  .413  15.36   .374  .180      .646  .412  17.02   .386  .215
Softmax-t   .632  .403  27.98   .396  .201      .663  .389  16.56   .426  .208      .635  .403  19.26   .434  .239
Greedy      .431  .406  936.49  .460  .232      .472  .366  607.80  .470  .228      .426  .398  986.28  .506  .250
Top-k       .633  .395  92.04   .409  .219      .661  .365  55.43   .420  .218      .633  .396  69.26   .448  .260
Nucleus     .645  .406  40.45   .374  .189      .672  .379  20.87   .377  .183      .644  .409  25.48   .410  .229
Entmax      .652  .406  22.82   .349  .170      .680  .387  15.31   .365  .177      .651  .405  16.90   .380  .206
Table 2: Language model evaluation on the WikiText-2, WikiText-103, and BookCorpus test sets: sparsemax score (sp), Jensen-Shannon divergence (JS), ϵ-perplexity (ϵ-ppl), rep, and wrep. For all metrics except sp, lower is better.

Datasets and metrics.

We performed experiments on three widely used language modeling datasets: WikiText-2 and WikiText-103 Merity et al. (2016), and BookCorpus Zhu et al. (2015). WikiText-2 and WikiText-103 are composed of Wikipedia articles, comprising around 2 and 100 million tokens for training, respectively. Their validation and test sets have 217,000 and 245,000 tokens, respectively. BookCorpus is composed of 11,038 freely available books. We used the standard split: 800 million tokens for training, 260,000 for validation, and 280,000 for testing.

We report the sparsemax score, Jensen-Shannon divergence, and ϵ-perplexity (§4) to evaluate the methods' fluency, and rep and wrep⁵ (Welleck et al., 2019) to evaluate the methods' tendency to generate repetitions. All metrics are computed at the BPE level (Sennrich et al., 2016). We treat ϵ as a hyperparameter for each method, tuned on the validation sets (we report the ϵ used for each method in Table 8, App. C). For softmax, with and without decreased temperature, the distribution is dense, so we set ϵ = 0.

⁵ rep measures the number of times that a word from the previous ℓ words is repeated when generating the following word; wrep does the same, discarding words that are also repeated in the ground truth. We report the average of rep and wrep over a range of context sizes ℓ.
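As a concrete reading of footnote 5, the sketch below (our helpers, operating on hypothetical token lists) counts a repetition whenever the predicted next word already occurs in the previous ℓ words, and wrep additionally discards positions where the ground truth word itself is such a repeat; Welleck et al. (2019) give the precise definitions.

```python
from typing import List

def rep(predictions: List[str], contexts: List[List[str]], window: int) -> float:
    """Fraction of predicted next words that already occur in the previous `window` words."""
    hits = sum(pred in ctx[-window:] for pred, ctx in zip(predictions, contexts))
    return hits / len(predictions)

def wrep(predictions: List[str], golds: List[str], contexts: List[List[str]], window: int) -> float:
    """Like rep, but discards positions where the ground truth word is itself in the window."""
    kept = [(p, c) for p, g, c in zip(predictions, golds, contexts) if g not in c[-window:]]
    return sum(p in c[-window:] for p, c in kept) / max(len(kept), 1)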

Fine-tuning OpenAI GPT-2.

We fine-tuned GPT-2 (Radford et al., 2019) in two training settings: with the negative log-likelihood loss and with the entmax loss.⁶ We used the GPT-2 medium model, which consists of a 24-layer transformer with 345 million parameters. The models were fine-tuned for up to 5 epochs on WikiText-2 and up to 1 epoch on WikiText-103 and BookCorpus, using the Adam optimizer (Kingma and Ba, 2015) with a learning rate that was linearly decayed to zero over the course of training. We report results of the models that have the smallest JS divergence on the validation set. The entmax parameter α was chosen with a grid search on the validation set; the same was done for the k of top-k sampling, the cumulative probability p of nucleus sampling, and the softmax temperature. The selected k matches the value used by Fan et al. (2018) and the selected p matches the value used by Holtzman et al. (2019).

⁶ We use the PyTorch re-implementation at https://github.com/huggingface/transformers.

Automatic Metrics Results.

As can be seen in Table 2, entmax sampling consistently outperforms every other decoding method in sparsemax score and number of repetitions. Additionally, entmax sampling leads to better ϵ-perplexity scores than greedy decoding, top-k sampling, nucleus sampling, and softmax-t, while having scores similar to softmax sampling. Concerning the Jensen-Shannon divergence, top-k sampling has the best scores. All this is achieved with a model that copes with sparsity inherently, instead of relying on truncation at run time.

In order to understand why entmax leads to better sparsemax scores and fewer repetitions, Table 3 reports the mean, median, standard deviation, minimum, and maximum number of tokens each decoding strategy considers when predicting the next word given a context, on the WikiText-103 test set.

            Mean     Median   SD      Min      Max
Softmax     50,257   50,257   0       50,257   50,257
Softmax-t   50,257   50,257   0       50,257   50,257
Greedy      1        1        0       1        1
Top-k       10       10       0       10       10
Nucleus     4,983    3,554    2,710   1        12,305
Entmax      11,224   9,672    4,166   1        18,399
Table 3: Mean, median, standard deviation, minimum, and maximum number of tokens considered by each decoding method on the WikiText-103 test set.

We can see that entmax sampling and nucleus sampling consider far more tokens than greedy decoding and top-k sampling, which may explain the smaller number of repetitions. This is possible because the number of tokens considered is variable, ranging from 1 to 12,305 for nucleus sampling and from 1 to 18,399 for entmax sampling.⁷ Moreover, a possible explanation for entmax sampling outperforming nucleus sampling is that entmax's support size has higher variance: it is able to consider a higher number of tokens, while still being able to consider only one word. It is more adaptive to the context.

⁷ By using a larger cumulative probability p, nucleus sampling could also consider more tokens. However, a larger p led to lower scores in the automatic metrics.

5.2 Story completion

Next, we analyze the models' ability to generate long sequences of text using the different sampling methods.⁸ We performed completion of stories from the WritingPrompts dataset (Fan et al., 2018), using the models fine-tuned on BookCorpus. WritingPrompts is a collection of human-written stories paired with writing prompts. We randomly selected 1,000 stories that were at least 200 words long and used the first 50 words as context for the models. Examples of stories generated with each method (Table 1 and Table 11 of App. E) show that entmax sampling leads to more interesting stories while preventing degenerate text.

⁸ Softmax sampling is not considered since it has been shown to generate degenerate text (Holtzman et al., 2019).

Figure 1: Distinct-n metric for each sampling method on story completion.

To measure the stories' word diversity, we show in Figure 1 the distinct-n metric⁹ (Li et al., 2016a) for the stories generated by each model. It can be seen that entmax sampling leads to more diverse unique n-grams, closer to human-generated text. We also measured the number of unique words in the generated stories: entmax sampling generated 12,767 different words, while softmax with decreased temperature, greedy decoding, top-k, and nucleus sampling generated 9,973, 3,464, 7,852, and 11,929 words, respectively. As expected, entmax leads to higher word diversity, on par with human stories, which contain 15,166 different words.

⁹ Distinct-n corresponds to the number of distinct n-grams divided by the total number of generated words.
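Footnote 9's metric is straightforward to compute; a small sketch (our helper) over a generated story's token list:

```python
from typing import List

def distinct_n(tokens: List[str], n: int) -> float:
    """Number of distinct n-grams divided by the total number of generated words (footnote 9)."""
    ngrams = {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
    return len(ngrams) / len(tokens)

story = "the fog was beginning to swirl and the fog was beginning to swirl again".split()
print(distinct_n(story, 1), distinct_n(story, 2))   # repetition lowers both values
```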

Human evaluation.

We performed human evaluation of greedy decoding, top-k, nucleus, and entmax sampling on completion of stories from the WritingPrompts dataset. To perform the human evaluation, we divided 100 stories into 2 sets of 50 stories. For each set of stories, 2 judges evaluated each story on 3 metrics: fluency (whether the text is syntactically and semantically correct), coherence (whether the story continuation is related to the context and is consistent), and engagement (whether the annotator felt interested in the story). Ratings were given on a 5-point scale, and the mean for each metric is reported in Table 4.

          Fluency   Coherence   Engagement
Greedy    1.2       1.2         1.1
Top-k     2.8       2.2         2.1
Nucleus   3.0       2.0         2.2
Entmax    3.1       2.3         2.6
Table 4: Human evaluation of story completion. All scores marked in bold at each column outperform the others with statistical significance, according to the Wilcoxon test.

Entmax sampling outperforms greedy decoding, top-k, and nucleus sampling on all three metrics. This confirms the better generation quality of entmax sampling.

5.3 Dialogue Generation

To evaluate the sampling methods in an interactive setting, we experiment with dialogue generation. The goal is to generate an utterance given a context consisting of the previous utterances in the dialogue and, in some cases, initial context sentences with related information, which can describe personas, knowledge, or scenarios.

Datasets and metrics.

We performed experiments with the PersonaChat dataset (Zhang et al., 2018), a crowd-sourced dialogue dataset in which speakers were asked to condition their utterances on predefined personas. It contains 164,356 utterances over 10,981 dialogues. As there is no public test set, we report results on the validation set. We evaluate the word F1-score, ϵ-perplexity, sparsemax score, and Jensen-Shannon divergence. As in the language modeling experiments, ϵ-perplexity, sparsemax score, and Jensen-Shannon divergence are computed at the BPE level. We also report the distinct-n metric for n ∈ {1, 2} (Li et al., 2016a) and analyze how the models behave in dialogue simulations between two agents (Li et al., 2016c).

Fine-tuning OpenAI GPT-2.

In order to apply entmax decoding to the dialogue generation task, we follow Wolf et al. (2019) and Budzianowski and Vulić (2019): the input given to the language model consists of the sentences describing the persona the model should impersonate, and the history utterances. In order for the model to adapt to dialogue, the word and position embeddings are augmented with dialogue-state embeddings that indicate whether tokens are from a persona sentence, speaker 1, or speaker 2. These embeddings are learned during fine-tuning.
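A schematic sketch of this input layout follows (not the authors' preprocessing; the segment ids and the tokenizer call are illustrative assumptions): the token sequence concatenates persona sentences and history utterances, while a parallel sequence of dialogue-state ids marks which segment each token belongs to.

```python
from typing import List, Tuple

PERSONA, SPEAKER1, SPEAKER2 = 0, 1, 2   # indices of the extra learned dialogue-state embeddings

def build_dialogue_input(persona: List[str], history: List[str], tokenizer) -> Tuple[List[int], List[int]]:
    """Concatenate persona sentences and utterances; tag every token with its segment."""
    tokens: List[int] = []
    segments: List[int] = []
    for sentence in persona:
        ids = tokenizer.encode(sentence)
        tokens += ids
        segments += [PERSONA] * len(ids)
    for turn, utterance in enumerate(history):
        ids = tokenizer.encode(utterance)
        speaker = SPEAKER2 if turn % 2 else SPEAKER1
        tokens += ids
        segments += [speaker] * len(ids)
    return tokens, segments
```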

We fine-tune the GPT-2 medium model (Radford et al., 2019) for a maximum of 3 epochs, with a learning rate that linearly decays to zero. The entmax parameter α, the k of top-k sampling, the cumulative probability p of nucleus sampling, and the softmax temperature are chosen with a grid search on the validation set.

Automatic Metrics Results.

We report the results in Table 5. Entmax again outperforms all the other sampling methods in sparsemax score and ϵ-perplexity, while having the lowest JS divergence (same as top-k sampling). Entmax also leads to fewer repetitions, having higher distinct-1 and distinct-2 scores. However, its F1 score is lower (similar findings have been reported by Li et al. (2019)). This can be due to dialogue generation being an open-ended generation task that can have multiple correct answers.

            sp      JS      ϵ-ppl     F1      dist-1   dist-2
Softmax     0.636   0.412   17.21     14.21   0.4325   0.8422
Softmax-t   0.621   0.393   17.18     16.31   0.4312   0.8289
Greedy      0.422   0.401   1031.79   21.79   0.4305   0.7958
Top-k       0.626   0.393   47.79     17.34   0.4378   0.8344
Nucleus     0.632   0.399   26.98     15.98   0.4334   0.8436
Entmax      0.642   0.393   17.10     15.02   0.4532   0.8494
Table 5: Automatic evaluation of dialogue generation on the PersonaChat validation set.

Additionally, we simulated a conversation between two agents using the same model (Li et al., 2016c). We chose different personas randomly for the two agents. Then, a first utterance from the PersonaChat dataset was given as context. We measured the average length of the conversations, considering a conversation finished when its utterances have a word overlap of 80% or higher, when an agent gives no response, or when it reaches 20 utterances (a procedure similar to the one proposed by Li et al. (2016c)). We also measured the number of unique words used and the distinct-n metric for n ∈ {1, 2}.

            Length   Unique words   dist-1   dist-2
Softmax     13.98    11,242         0.6084   0.8824
Softmax-t   14.82    5,495          0.5384   0.6936
Greedy      7.83     4,229          0.4853   0.6732
Top-k       14.72    8,833          0.5623   0.8461
Nucleus     15.56    10,098         0.5836   0.8728
Entmax      15.83    13,020         0.6546   0.9211
Table 6: Evaluation of dialogue simulations between two agents using the different sampling methods.

As can be seen in Table 6, entmax sampling leads to longer conversations with higher word diversity and a higher number of distinct 1-grams and 2-grams.

Human evaluation.

Finally, we performed human evaluation following the ConvAI2 challenge: 6 volunteers had 30 conversations each with models using the different sampling methods. The volunteers scored the conversations from 1 to 5 in terms of fluency, consistency (whether the model’s utterances are coherent with their persona and the model does not contradict itself), and engagement. The model’s personas were randomly selected from the PersonaChat validation set. Results are reported in Table 7.

          Fluency   Consistency   Engagement
Greedy    4.6       3.3           2.5
Top-k     4.5       3.6           3.4
Nucleus   4.6       3.6           3.3
Entmax    4.6       3.8           3.9
Table 7: Human evaluation of dialogue generation. All scores marked in bold at each column outperform the non-bold ones with statistical significance, according to the Wilcoxon test.

Entmax sampling outperforms the other methods in consistency and engagement, while having similar scores in fluency. This means entmax sampling not only generates the most interesting conversation utterances, but also improves the consistency of the conversation.

6 Conclusions

We proposed entmax sampling as a new strategy for generating text from a sparse probability distribution. It provides three main advantages: (i) it offers a natural way of sampling directly from the output probability distribution; (ii) the distribution's sparsity is present during training and, consequently, there is no sparsity mismatch between training and run time; (iii) when sampling with entmax, the number of words to be considered varies with the context, as in nucleus sampling and in contrast to top-k sampling. Additionally, we proposed new metrics for evaluating language models that produce sparse and truncated probability distributions: ϵ-perplexity, sparsemax score, and Jensen-Shannon divergence.

Experiments show that entmax sampling leads to higher word diversity, fewer repetitions, and similar or improved results in automatic metrics. Human evaluation confirms that entmax outperforms greedy decoding, top-k, and nucleus sampling in coherence/consistency and engagement, and is similar or better in terms of fluency.

References

Appendix A Proof of boundedness of the sparsemax score

We show here that the sparsemax score in Eq. 10 is always bounded between 0 and 1. The fact that $\mathrm{sp} \le 1$ simply follows from the fact (Blondel et al., 2019, Prop. 2) that any Fenchel-Young loss (which includes the sparsemax loss $\ell_2$) is non-negative. Since $\mathrm{sp}(x_t, p_\theta(\cdot \mid x_{<t})) = 1 - \ell_2(z_t, x_t)$, it follows that $\mathrm{sp} \le 1$. Let us see when the maximal value 1 is attained. We have:

$\max_{p \in \triangle^{|V|}} \mathrm{sp}(x_t, p) = \max_{p \in \triangle^{|V|}} \; p_{x_t} + \tfrac{1}{2}\Big(1 - \sum_j p_j^2\Big).$   (13)

Since the Gini entropy is maximized by the uniform distribution, the maximizing distribution in Eq. 13 is of the form $p_{x_t} = t$ and $p_j = \frac{1-t}{|V|-1}$ for $j \neq x_t$, with $t \in [0, 1]$. Replacing in Eq. 13, we obtain

$\mathrm{sp}(x_t, p) = t + \tfrac{1}{2}\Big(1 - t^2 - \tfrac{(1-t)^2}{|V|-1}\Big).$   (14)

This is maximized by $t = 1$, which corresponds to $p = e_{x_t}$.

To see that we always have $\mathrm{sp} \ge 0$, we use the fact that the Gini entropy is always non-negative (zero if and only if $p$ is a one-hot distribution), which is clear from the definition in footnote 2, and that $p_{x_t} \ge 0$; therefore, the sum of these two terms is also non-negative, and zero if and only if $p$ is a one-hot distribution with $p_{x_t} = 0$, i.e., a one-hot distribution placed on a word other than $x_t$.

Appendix B Comparative plots of evaluation metrics

Figure 2 shows comparative plots of the ϵ-perplexity, sparsemax score, and Jensen-Shannon divergence, for a distribution over a vocabulary of 100 words in which the probability assigned to the ground truth word is varied.

Figure 2: Comparative plots of ϵ-perplexity for two values of ϵ (top), and of sparsemax score and JS divergence (bottom). In both cases, the x-axis is the probability assigned to the ground truth word.

Appendix C Values of ϵ used for ϵ-perplexity

The values of ϵ used for the computation of the ϵ-perplexity are reported in Table 8.

          WikiText-2   WikiText-103   BookCorpus   PersonaChat
Greedy
Top-k
Nucleus
Entmax
Table 8: Values of ϵ used for ϵ-perplexity.

Appendix D Comparison of models with the Jensen-Shannon divergence

We compared the distributions given by the different decoding methods when generating the sentences of the BookCorpus validation set with the Jensen-Shannon divergence, as described in §4. In Tables 9 and 10 we show some of the sentences with the highest Jensen-Shannon divergence, as well as the probability given by each model to each ground truth word.

besides , i enjoyed having her with me
Softmax 0.011 0.0002 0.808 0.1479 0.0002 0.0141 0.0228 0.0179 0.9114
Softmax-t 0.0131 0.0001 0.8855 0.1864 0.0001 0.0137 0.0211 0.0179 0.9467
Greedy 0 0 1 1 0 0 0 0 1
Top-k 0.0439 0 0.8814 0.2543 0 0 0.0311 0 0.9267
Nucleus 0.037 0 1 0.3042 0 0 0 0 1
Entmax 0.0159 0 0.9943 0.3311 0 0.044 0.0073 0.0185 1
Table 9: Probabilities given by each model to “besides, i enjoyed having her with me”
i miss my parents and my friends
Softmax 0.0299 0.0006 0.0294 0.0104 0.1031 0.0611 0.0225
Softmax-t 0.0399 0.0005 0.0279 0.0121 0.1067 0.0763 0.0240
Greedy 0 0 0 0 0 1 0
Top-k 0.1193 0 0.0391 0 0.01303 0.1410 0
Nucleus 0.1005 0 0 0.0289 0.1035 0.1012 0
Entmax 0.1047 0 0.0320 0.0127 0.1453 0.1509 0.0469
Table 10: Probabilities given by each model to “i miss my parents and my friends”.

Appendix E Story completion examples

Examples of story completion with context from the WritingPrompts dataset using the different models fine-tuned on BookCorpus are presented in Table 11.

Context: I wake up, battered and scarred from the beat I was given behind the burger joint. I think the name was Ronny’s? My memory of yesterday, better yet the last week has been completely faded. It was looking into the fog of a …
Greedy: … memory, and was beginning to wonder if the man was still alive. the fog was beginning to swirl, and the fog was beginning to swirl again. the fog was beginning to swirl again, and the fog was beginning to swirl again.
Top-k: … moment of silence. “how ’d you know?” “i’m a professor, but i can’t find it.” “and you’re just a professor, i guess.” “and you ’re just a professor, too.” “i ’m a professor?” “yes. and you are.” “i didn’t mean to scare you.”
Nucleus: … match to find out how far its web is. ain’t a pretty good option, huh? paul shrugged. wow, that’s how much access i’ve got, i’m sure. ill check with them if i need to go outside in the cold. i’m sure they’ll do it. fine by me.
Entmax: … memory. my brain has become trapped in a cycle of confusion that can’t be understood with common knowledge. things become warped and fake, the knowledge getting heavier and more complex. move. do something. i can’t.
Context: “We are gathered here, in the sight of the Gods and in the presence of friends and family, to observe the passing of Mr. John Carlyle.” The soft woosh of the beginning flames punctuates the Speaker’s introduction. John was …
Greedy: … a former professor of physics at the university of texas. he was a graduate student at the university of texas. he was a member of the faculty of the university of texas. he was a member of the faculty of the university of texas.
Top-k: … a very nice guy, who always liked to make fun of herself, even after her husband had died and she found him dead. she never wanted to be his wife, but he did have a strong reputation as a good friend, so she liked to make
Nucleus: … a good man. he’d built a home with plans for much larger quantities and quickly noticed that consuming supplies did not always lead to immediate success. on each occasion he filled his stomach with an island of straw he had
Entmax: … very interesting. he was quite quintessential romantic: blond, warm, driven; amazingly handsome; amazing with his mother; and calm; always entertaining. his childhood spent in the neighborhood of narrow north kennedy
Table 11: Examples of story completion using the different decoding methods.

Appendix F Dialogue simulation examples

In order to evaluate the different decoding methods in dialogue generation, we simulated a conversation between two agents using the same decoding method. For that, we assigned each agent a randomly selected persona from the PersonaChat dataset and gave a first utterance of the dataset as context. Examples of dialogue simulations are presented in Tables 12-13 and Tables 14-15.

Context: hello how are you today?
Softmax: - i’m great i love to dance to oldies music was is fun
- very fluent in all the vernacular spoken in pa
- that’s awesome i teach elementary school
- i love teaching, tap dancing to bob segar, all illustration type stuff
- that sounds like a lot of fun
- it is, but this harry potter series just made me feel tingly
- that’s really hurtful to do
- it can be, but i am working on other things
- that you are fine i like james patterson
- very nice of you to comment
- yes my dog is my son that he chews candy to be funny for my dogs birthday
- i have no kids i moved to virginia
- that is crazy did you like the hurricane ?
- yeah it was so devastating
- i see
- its so sad i’m sorry you are stuck in pennywise
- wow that’s very sad
- it is he?
- seriously
- it will be my big dream to leave and go to florida
Softmax-t: - i am doing good, how about yourself?
- i am good, just getting in from work
- oh, what do you do?
- i sell cars, what do you do?
- i work in a hospital as a nurse.
- oh that is very nice of you
- what do you sell cars ?
- i sell cars as a hobby
- oh, i see. do you have any hobbies?
- i like to paint and hang out in my living room.
- oh, i enjoy listening to music. i love rolling stones !
- i like rolling stones too do you?
- yeah, i really like the rolling stones, they’re my favorite!
- cool, meat is amazing. i also like to like painting
- i think it is for the best. i like to know that i love my job.
- yes beater and i love meat. i also love to eat meat.
- i’m sure you are a vegetarian! what do you do for a living?
- i sell cars, i’m an electrician
- oh i see. are you a car salesman?
- i just started selling cars
Greedy: - i’m good, how are you?
- i’m good thanks for asking
- what do you do for a living?
- i’m studying to be a dentist
- oh, i’m a teacher.
- what city are you from?
- i live in the city.
- what do you do for a living?
- i teach math and english.
- what city are you from?
- i live in the city.
- what city are you from?
Table 12: Example of dialogue simulation between two agents using the different decoding methods.
Context: hello how are you today?
Top-k: - hello! i am well ,yourself ?
- i am great. i just got off of work
- i just got a job at a grocery store.
- oh i am sorry to hear that
- i’m okay. do you like reading ?
- yes i love reading and i also love to swim
- oh that’s awesome! i really like reading cupcakes.
- that sounds delicious. i’m in the south so i like to eat cupcakes
- i love to make my own cupcakes. i love it!
- do you have any hobbies?
- i love reading, but my favorite hobby is painting.
- that is cool. i’ve been to canada twice , but i’ve never been there
- i love to paint too! what color eyes do you have?
- mine are blue i’m not sure
- i have brown hair.
- i like to make my own cupcakes
- cool. do you work? i work at walmart and i work at the grocery market.
- i am actually a reporter for the local news
- cool! do you like to read?
- i’ve a newspaper reporter in the news
Nucleus: - i’m great, just got home from a long walk with my two dogs.
- do you play any instruments?
- not really. what kind of music do you like?
- i play piano sometimes and listen to metal music
- oh cool. i’m just trying to write an ebook about animals.
- i like cats and dogs. do you play any instruments?
- i do not, i prefer listening to music i like.
- what kind of music do you like?
- classical music and video game soundtrack.
- do you go to concerts?
- i go with my husband and listen to a lot of different music genres.
- do you have any kids?
- nope, how about you?
- i’ve a boyfriend. are you married?
- yes, recently divorced. we are planing on getting a corvette.
- how exciting! are you close to your husband?
- i am. i am looking to buy one when i retire.
- that is awesome! are you married?
- nope. we were looking to move to ireland when i got my divorce.
- that’s so sad. you should i ask if he’s ok?
Entmax: - good, how are you doing?
- great just watching a movie
- oh really what movie?
- it’s a story about robots getting into human stuff.
- wow! well that strikes me as not science fiction at all
- why do you say that then?
- because humans are fragile and vulnerable lol
- yeah they would be
- very true. what kind of movies u like?
- i love sci fi and ghost in the shell
- i actually have never watched that. maybe ghost in the shell
- it is such a beautiful movie
- yeah i love stories about lost ones
- that ’s cool
- my favourite movie though is gamera how about you
- i love it too, did you see it?
- yes that was one of my favorite films
- that’s cool. do you work?
- i am too lazy to try difficult things now
Table 13: Example of dialogue simulation between two agents using the different decoding methods.
Context: greetings from the big apple, kim here, how are you?
Softmax: - hi! great thanks, how are you ? do you have any hobbies?
- i ride my black mbu on my nightly walks, and listen to rock music.
- whats your favorite kind of music?
- guns and roses is my bedrock, my favorite artist.
- that’s great, my favorite genre is joleg tonne or tontera band.
Softmax-t: - i’m doing great, how are you?
- i’m good, just playing with my son today.
- sounds fun, what are you playing with?
- I’m playing soccer right now, he’s 4.
- do you have any pets?
- no, just the baby. are you married?
- no, no i don’t have any kids
- i’m single, but i have been in this profession for a while.
- interesting, tell me more about yourself.
- i’m a professional soccer player. what about you ?
- that’s cool, i’m just a training dummy
- i also play soccer professionally, i love it.
- i like to play sports too. definitely a great sport
- i have been to stanford for training, trying to perfect my skills.
- i bet that’s great training
- its a lot of work!
- yeah but i like it
- yeah. i’m actually a soccer mom, we try to keep on top of our kids.
- do you have kids?
- i do.
Greedy: - i’m doing well. how are you?
- i’m good, just finished cooking dinner.
- what do you like to cook?
- i love pasta, and i love to cook.
- i like to make pasta too. i’m a huge fan of pasta.
- i love to cook, i’m a student, so i’m always busy.
- what are you studying?
- i’m studying to be a chef.
- what city are you from?
- i’m from cali, what about you?
- i’m from cali. i’m from the midwest.
- i’m from the midwest too , but i’m from the midwest.
Table 14: Example of dialogue simulation between two agents using the different decoding methods.
Context: greetings from the big apple, kim here, how are you?
Top-k: - i’m very good, thank you, i hope you have a great day
- thanks. have a good day too.
- same to you, have a good day too!
- you do the same. have a great day too.
- the same to you and have a great day also!
- i do have a very nice day, i love running.
- that’s nice to enjoy, it s a great day here today
- i am a political junkie.
- you must be very busy!
- what do you do for a living?
- i’m actually a business major.
- that must be hard to do, do you like politics?
- not a big fan of politics, i like to go skiing and listen to rock
- do you play sports or something?
- i am in school for business
- oh that is awesome. i used to play football in college, but i hated it!
- oh that’s why i play football!!
- wow. what is your name?
- my name is jed. i am from russia. how about you?
- i played in ny and played at boston.
Nucleus: - hi, i’m fantastic, how are you?
- great. making some barbies for our barbie girls.
- awesome, do you have any hobbies?
- i really enjoy barbies. i live a long time away.
- i am looking forward to going to school in the fall.
- school is cool. do you sing? i love barbie songs.
- i’ve a pet pean i sing while eating tacos.
- barbie! that’s cute.
- oh yea, i have a small heart lol
- are you looking forward to winter? i would like to live in the big apple.
- definitely
- winter is fun! no thanks, me neither.
- you must really like summer, too lol
- thanks. it is too cold to sing in the winter. its just not for me.
- do you like tacos?
- oh my gosh. they’re good. my favorite is barbie. lol
- i know, but spaghetti is my fave lol
- haha. lol
- oh , i do enjoy spaghetti , for my college graduation i was in last year of high school
- how are you ?
Entmax: - good. i just rang someone on the other side.
- good, what did you get excited about today?
- i love paris. how was your day?
- its been crazy, i’m always feeling excited! lol
- what is your favorite thing to do?
- oh i love going hiking in the mountains, myself.
- that sound amazing. i like travelling.
- love trips, but i cannot stand staying in one place all day. lol
- do you have any hobbies ? i always want to find cool new things.
- i really like going out and nature itself, i prefer hiking
- yes, exploring parks and all that jazz when i can.
- that is awesome fun, whats your fav color?
- i love grey. roses and the mountains signify my youth.
- mine is blue, it makes me think of blueberries though
- grey denotes youth well or openness and transparency. love the kale chips.
- mmm i love chocolate . lol
- oh i am sold on chocolate. eating it off the cob
- haha
- i miss the crazy curly hair fries and crackers . haha
Table 15: Example of dialogue simulation between two agents using the different decoding methods.