1 Introduction
Attention-based sequence-to-sequence (seq2seq) models have proven useful for a variety of NLP applications, including machine translation (Bahdanau et al., 2015; Vaswani et al., 2017), speech recognition (Chorowski et al., 2015), abstractive summarization (Chopra et al., 2016), and morphological inflection generation (Kann and Schütze, 2016), among others. In part, their strength comes from their flexibility: many tasks can be formulated as transducing a source sequence into a target sequence of possibly different length.
However, conventional seq2seq models are dense: they compute both attention weights and output probabilities with the softmax function (Bridle, 1990), which always returns positive values. This results in dense attention alignments, in which each source position is attended to at each target position, and in dense output probabilities, in which each vocabulary type always has nonzero probability of being generated. This contrasts with traditional statistical machine translation systems, which are based on sparse, hard alignments, and decode by navigating through a sparse lattice of phrase hypotheses. Can we transfer such notions of sparsity to modern neural architectures? And if so, do they improve performance?
In this paper, we provide an affirmative answer to both questions by proposing neural sparse seq2seq models that replace the softmax transformations (both in the attention and in the output) with sparse transformations. Our innovations are rooted in the recently proposed sparsemax transformation (Martins and Astudillo, 2016) and Fenchel-Young losses (Blondel et al., 2019). Concretely, we consider a family of transformations (dubbed α-entmax), parametrized by a scalar α, based on the Tsallis entropies (Tsallis, 1988). This family includes softmax (α = 1) and sparsemax (α = 2) as particular cases. Crucially, entmax transforms are sparse for all α > 1.
Our models are able to produce both sparse attention, a form of inductive bias that increases focus on relevant source words and makes alignments more interpretable, and sparse output probabilities, which, together with auto-regressive models, can lead to probability distributions that are nonzero only for a finite subset of all possible strings. In certain cases, a short list of plausible outputs can be enumerated without ever exhausting the beam (Figure 1), rendering beam search exact. Sparse-output seq2seq models can also be used for adaptive, sparse next-word suggestion (Figure 2). Overall, our contributions are as follows:

We propose an entmax sparse output layer, together with a natural loss function. In large-vocabulary settings, sparse outputs avoid wasting probability mass on unlikely outputs, substantially improving accuracy. For tasks with little output ambiguity, entmax losses, coupled with beam search, can often produce exact finite sets with only one or a few sequences. To our knowledge, this is the first study of sparse output probabilities in seq2seq problems.
We construct entmax sparse attention, improving interpretability at no cost in accuracy. We show that the entmax gradient has a simple form (Proposition 2), revealing an insightful missing link between softmax and sparsemax.

We derive a novel exact algorithm for the case of 1.5-entmax, achieving processing speed close to softmax on the GPU, even with large vocabulary sizes. For arbitrary α, we investigate a GPU-friendly approximate algorithm.¹
¹ Our PyTorch code is available at https://github.com/deep-spin/OpenNMT-entmax.
We experiment on two tasks: one character-level with little ambiguity (morphological inflection generation) and one word-level, with more ambiguity (neural machine translation). The results show clear benefits of our approach, both in terms of accuracy and interpretability.
2 Background
The underlying architecture we focus on is an RNN-based seq2seq model with global attention and input-feeding (Luong et al., 2015). We provide a brief description of this architecture, with an emphasis on the attention mapping and the loss function.
Notation.
Scalars, vectors, and matrices are denoted respectively as a (lowercase), 𝐚 (bold lowercase), and 𝐀 (bold uppercase). We denote the d-probability simplex (the set of vectors representing probability distributions over d choices) by △^d := {p ∈ ℝ^d : p ≥ 0, ‖p‖₁ = 1}. We denote the positive part as [a]₊ := max{a, 0}, and by [𝐚]₊ its elementwise application to vectors. We denote the indicator vector e_k, whose k-th entry is 1 and all other entries are 0.
Encoder.
Given an input sequence of tokens x := [x₁, …, x_J], the encoder applies an embedding lookup followed by layered bidirectional LSTMs (Hochreiter and Schmidhuber, 1997), resulting in encoder states [h₁, …, h_J].
Decoder.
The decoder generates output tokens y₁, …, y_T, one at a time, terminated by a stop symbol. At each time step t, it computes a probability distribution for the next generated word y_t, as follows. Given the current state s_t of the decoder LSTM, an attention mechanism (Bahdanau et al., 2015) computes a focused, fixed-size summary of the encodings [h₁, …, h_J], using s_t as a query vector. This is done by computing token-level scores z_t ∈ ℝ^J, then taking a weighted average
p_t := π(z_t),   c_t := ∑_{j=1}^{J} p_{t,j} h_j.   (1)
The contextual output is the nonlinear combination o_t := tanh(W[c_t; s_t] + b), yielding the predictive distribution of the next word
p(y_t = · | x, y_{<t}) := π(V o_t + v).   (2)
The output o_t, together with the embedding of the predicted y_t, feeds into the decoder LSTM for the next step, in an auto-regressive manner. The model is trained to maximize the likelihood of the correct target sentences, or equivalently, to minimize the negative log likelihood
ℒ := −∑_{t=1}^{T} log p(y_t | x, y_{<t}).   (3)
A central building block in the architecture is the transformation π : ℝ^d → △^d,
π(z)_i = softmax(z)_i := exp(z_i) / ∑_j exp(z_j),   (4)
which maps a vector of scores z into a probability distribution (i.e., a vector in △^d). As seen above, the π mapping plays two crucial roles in the decoder: first, in computing normalized attention weights (Eq. 1); second, in computing the predictive probability distribution (Eq. 2). Since exp is strictly positive, softmax never assigns a probability of zero to any word, so we may never fully rule out unimportant input tokens from attention, nor unlikely words from the generation vocabulary. While this may be advantageous for dealing with uncertainty, it may be preferable to avoid dedicating model resources to irrelevant words. In the next section, we present a strategy for differentiable sparse probability mappings. We show that our approach can be used to learn powerful seq2seq models with sparse outputs and sparse attention mechanisms.
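As a concrete illustration of this density, the following minimal NumPy sketch (our own, not the paper's released code) evaluates Eq. 4 and shows that even a very low-scoring entry keeps a strictly positive probability:

```python
import numpy as np

def softmax(z):
    """Eq. 4, computed stably by shifting scores by their maximum."""
    e = np.exp(z - z.max())
    return e / e.sum()

z = np.array([5.0, 1.0, -2.0, -10.0])  # arbitrary example scores
p = softmax(z)

# softmax is dense: every entry is strictly positive, however low its score
assert np.all(p > 0) and np.isclose(p.sum(), 1.0)
```

Even the score of −10 receives nonzero mass, which is exactly the behavior the sparse mappings of the next section avoid.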
3 Sparse Attention and Outputs
3.1 The sparsemax mapping and loss
To pave the way to a more general family of sparse attention mappings and losses, we point out that softmax (Eq. 4) is only one of many possible mappings from ℝ^d to △^d. Martins and Astudillo (2016) introduce sparsemax: an alternative to softmax which tends to yield sparse probability distributions:
sparsemax(z) := argmin_{p ∈ △^d} ‖p − z‖².   (5)
Since Eq. 5 is a projection onto △^d, which tends to yield sparse solutions, the predictive distribution p* := sparsemax(z) is likely to assign exactly zero probability to low-scoring choices. They also propose a corresponding loss function to replace the negative log likelihood loss (Eq. 3):
ℓ₂(z, e_k) := ½‖e_k − z‖² − ½‖p* − z‖².   (6)
This loss is smooth and convex on z and has a margin: it is zero if and only if z_k ≥ z_j + 1 for any j ≠ k (Martins and Astudillo, 2016, Proposition 3). Training models with the sparsemax loss requires its gradient (cf. Appendix A.2):
∇_z ℓ₂(z, e_k) = −e_k + p*.
For using the sparsemax mapping in an attention mechanism, Martins and Astudillo (2016) show that it is differentiable almost everywhere, with
∂ sparsemax(z)/∂z = diag(s) − (1/‖s‖₁) s s⊤,
where s_j = 1 if p*_j > 0, otherwise s_j = 0.
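The projection in Eq. 5 admits a well-known sorting-based solution; the sketch below (a NumPy illustration under our own naming, not the authors' implementation) computes sparsemax and the Jacobian-vector product built from the support indicator s:

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of z onto the simplex, via sorting (Eq. 5)."""
    z_sorted = np.sort(z)[::-1]
    k = np.arange(1, len(z) + 1)
    cumsum = np.cumsum(z_sorted)
    # support size: largest k with 1 + k * z_sorted[k-1] > cumsum of the top-k scores
    k_star = k[1 + k * z_sorted > cumsum][-1]
    tau = (cumsum[k_star - 1] - 1) / k_star
    return np.maximum(z - tau, 0.0)

def sparsemax_jvp(p, v):
    """Jacobian-vector product (diag(s) - s s^T / |S|) v, with s the support indicator."""
    s = (p > 0).astype(float)
    return s * (v - (s @ v) / s.sum())

z = np.array([1.5, 0.8, -1.0, -2.0])
p = sparsemax(z)   # -> [0.85, 0.15, 0.0, 0.0]: exact zeros on the low scores
```

Rows of the Jacobian sum to zero, so adding a constant to all scores leaves the distribution unchanged, just as with softmax.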
Entropy interpretation.
At first glance, sparsemax appears very different from softmax, and a strategy for producing other sparse probability mappings is not obvious. However, the connection becomes clear when considering the variational form of softmax (Wainwright and Jordan, 2008):
softmax(z) = argmax_{p ∈ △^d} p⊤z + Hˢ(p),   (7)
where Hˢ(p) := −∑_j p_j log p_j is the well-known Gibbs-Boltzmann-Shannon entropy with base e.
Likewise, letting Hᴳ(p) := ½ ∑_j p_j (1 − p_j) be the Gini entropy, we can rearrange Eq. 5 as
sparsemax(z) = argmax_{p ∈ △^d} p⊤z + Hᴳ(p),   (8)
crystallizing the connection between softmax and sparsemax: they only differ in the choice of entropic regularizer.
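This variational characterization can be checked empirically: softmax(z) should attain a higher entropy-regularized objective p⊤z + Hˢ(p) than any other point of the simplex. A small NumPy sanity check of ours, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def shannon(p):
    q = p[p > 0]
    return -(q * np.log(q)).sum()

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

z = rng.normal(size=6)
best = softmax(z)
objective = lambda p: p @ z + shannon(p)

# no random point of the simplex beats softmax on the regularized objective (Eq. 7)
for _ in range(1000):
    p = rng.dirichlet(np.ones(6))
    assert objective(p) <= objective(best) + 1e-9
```

The analogous check with the Gini entropy and sparsemax (Eq. 8) works the same way; only the regularizer changes.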
3.2 A new entmax mapping and loss family
The parallel above raises a question: can we find
interesting interpolations between softmax and sparsemax?
We answer affirmatively, by considering a generalization of the Shannon and Gini entropies proposed by Tsallis (1988): a family of entropies parametrized by a scalar α ≥ 1, which we call Tsallis α-entropies:
Hᵀ_α(p) := (1/(α(α−1))) ∑_j (p_j − p_j^α)  if α ≠ 1,  and  Hᵀ₁(p) := Hˢ(p).   (9)
This family is continuous, i.e., lim_{α→1} Hᵀ_α(p) = Hˢ(p) for any p ∈ △^d (cf. Appendix A.1). Moreover, Hᵀ₂ = Hᴳ. Thus, Tsallis entropies interpolate between the Shannon and Gini entropies. Starting from the Tsallis entropies, we construct a probability mapping, which we dub entmax:
α-entmax(z) := argmax_{p ∈ △^d} p⊤z + Hᵀ_α(p),   (10)
and, denoting p* := α-entmax(z), a loss function
ℓ_α(z, e_k) := (p* − e_k)⊤z + Hᵀ_α(p*).   (11)
The motivation for this loss function resides in the fact that it is a Fenchel-Young loss (Blondel et al., 2019), as we briefly explain in Appendix A.2. Then, 1-entmax is softmax and 2-entmax is sparsemax. Similarly, ℓ₁ is the negative log likelihood, and ℓ₂ is the sparsemax loss. For all α > 1, entmax tends to produce sparse probability distributions, yielding a function family continuously interpolating between softmax and sparsemax, cf. Figure 3. The gradient of the entmax loss is
∇_z ℓ_α(z, e_k) = −e_k + p*.   (12)
Tsallis entmax losses have useful properties including convexity, differentiability, and a hinge-like separation margin property: the loss incurred becomes zero when the score of the correct class is separated from the rest by a margin of 1/(α−1). When separation is achieved, p* = e_k (Blondel et al., 2019). This allows entmax seq2seq models to adapt to the degree of uncertainty present: decoders may make fully confident predictions at "easy" time steps, while preserving sparse uncertainty when a few choices are possible (as exemplified in Figure 2).
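The margin property is easy to observe for sparsemax (α = 2, margin 1/(α−1) = 1): once the correct score beats every other score by at least 1, the mapping returns exactly the one-hot vector. A NumPy illustration (our own sketch):

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection onto the simplex, via sorting."""
    zs = np.sort(z)[::-1]
    k = np.arange(1, len(z) + 1)
    cs = np.cumsum(zs)
    k_star = k[1 + k * zs > cs][-1]
    tau = (cs[k_star - 1] - 1) / k_star
    return np.maximum(z - tau, 0.0)

# class 0 beats every competitor by at least the margin 1/(alpha-1) = 1
z = np.array([2.0, 1.0, 0.5, -3.0])
p = sparsemax(z)
assert np.allclose(p, [1.0, 0.0, 0.0, 0.0])  # fully confident prediction, zero loss
```

With a smaller score gap, probability mass would be shared among the top few classes, matching the adaptive behavior described above.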
Tsallis entmax probability mappings have not, to our knowledge, been used in attention mechanisms. They inherit the desirable sparsity of sparsemax, while exhibiting smoother, differentiable curvature, whereas sparsemax is piecewise linear.
3.3 Computing the entmax mapping
Whether we want to use α-entmax as an attention mapping, or ℓ_α as a loss function, we must be able to efficiently compute p* = α-entmax(z), i.e., to solve the maximization in Eq. 10. For α = 1, the closed-form solution is given by Eq. 4. For α > 1, given z, we show that there is a unique threshold τ such that (Appendix C.1, Lemma 2):
α-entmax(z) = [(α − 1)z − τ1]₊^{1/(α−1)},   (13)
i.e., entries with score z_i ≤ τ/(α−1) get zero probability. For sparsemax (α = 2), the problem amounts to Euclidean projection onto △^d, for which two types of algorithms are well studied:

(i) exact algorithms based on sorting (Held et al., 1974; Michelot, 1986);
(ii) iterative, bisection-based algorithms (Liu and Ye, 2009).
The bisection approach searches for the optimal threshold τ numerically. Blondel et al. (2019) generalize this approach in a way applicable to any α > 1. The resulting procedure is Algorithm 1 (cf. Appendix C.1 for details).
Algorithm 1 works by iteratively narrowing the interval containing the exact solution by exactly half. Line 7 ensures that approximate solutions are valid probability distributions, i.e., that p* ∈ △^d.
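In outline, the bisection strategy can be sketched as follows (a NumPy illustration of the idea under our own naming; the paper's Algorithm 1 and the released code are the authoritative versions). It narrows an interval known to contain τ, then renormalizes so that the approximate solution lies on the simplex:

```python
import numpy as np

def entmax_bisect(z, alpha=1.5, n_iter=50):
    """Approximate alpha-entmax (alpha > 1) by bisection on the threshold tau (Eq. 13)."""
    t = (alpha - 1) * z
    # the top entry alone contributes mass 1 at tau = max(t) - 1, and 0 at tau = max(t),
    # so the optimal threshold lies in [max(t) - 1, max(t)]
    lo, hi = t.max() - 1.0, t.max()
    for _ in range(n_iter):
        tau = (lo + hi) / 2
        mass = (np.maximum(t - tau, 0.0) ** (1 / (alpha - 1))).sum()
        if mass < 1.0:
            hi = tau   # too little mass: the threshold must be lower
        else:
            lo = tau
    p = np.maximum(t - (lo + hi) / 2, 0.0) ** (1 / (alpha - 1))
    return p / p.sum()   # renormalize: a valid distribution even though tau is approximate

z = np.array([2.0, 1.0, 0.2, -1.0])
p = entmax_bisect(z, alpha=1.5)
assert np.isclose(p.sum(), 1.0) and np.all(p >= 0)
assert entmax_bisect(z, alpha=2.0)[-1] == 0.0   # at alpha = 2 it behaves like sparsemax
```

Each iteration halves the interval, so a few dozen iterations already reach floating-point precision for the threshold.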
Although bisection is simple and effective, an exact sorting-based algorithm, like the one for sparsemax, has the potential to be faster and more accurate. Moreover, as pointed out by Condat (2016), when exact solutions are required, it is possible to construct inputs for which bisection requires arbitrarily many iterations. To address these issues, we propose a novel, exact algorithm for 1.5-entmax, halfway between softmax and sparsemax.
We give a full derivation in Appendix C.2. As written, Algorithm 2 is O(d log d) because of the sort; however, in practice, when the solution has no more than k nonzeros, we do not need to fully sort z, just to find its k largest values. Our experiments in §4.2 reveal that a partial sorting approach can be very efficient and competitive with softmax on the GPU, even for large vocabulary sizes. Further speedups might be available following the strategy of Condat (2016), but our simple incremental method is very easy to implement on the GPU using primitives available in popular libraries (Paszke et al., 2017).
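The key computation can be sketched as follows (a NumPy sketch of the sorting-based idea; variable names are ours and details may differ from the paper's Algorithm 2). For α = 1.5, Eq. 13 gives p_i = [z_i/2 − τ]₊², and enforcing ∑_{i≤k} (z_i/2 − τ)² = 1 over the sorted top-k entries yields a quadratic in τ with a closed-form root:

```python
import numpy as np

def entmax15(z):
    """Exact 1.5-entmax via sorting: solve the quadratic threshold equation."""
    s = np.sort(z / 2)[::-1]                 # for alpha = 1.5, p_i = [z_i/2 - tau]_+^2
    k = np.arange(1, len(z) + 1)
    mean = np.cumsum(s) / k
    mean_sq = np.cumsum(s ** 2) / k
    # from sum_{i<=k} (s_i - tau)^2 = 1:  (tau - mean_k)^2 = (1 - k * var_k) / k
    delta = (1 - k * (mean_sq - mean ** 2)) / k
    tau = mean - np.sqrt(np.maximum(delta, 0.0))   # take the root below the support
    k_star = (tau <= s).sum()                # support size: entries above the threshold
    return np.maximum(z / 2 - tau[k_star - 1], 0.0) ** 2

z = np.array([2.0, 1.0, 0.2, -1.0])
p = entmax15(z)
assert np.isclose(p.sum(), 1.0) and p[-1] == 0.0   # exact zero on the lowest score
```

Only the top-k_star entries of z actually enter the solution, which is what makes the partial-sort variant mentioned above effective.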
Our algorithm resembles the aforementioned sorting-based algorithm for projecting onto the simplex (Michelot, 1986). Both algorithms rely on the optimality conditions implying an analytically-solvable equation in τ: for sparsemax (α = 2), this equation is linear; for α = 1.5, it is quadratic (Eq. 36 in Appendix C.2). Thus, exact algorithms may not be available for general values of α.
3.4 Gradient of the entmax mapping
The following result shows how to compute the backward pass through α-entmax, a requirement when using it as an attention mechanism.
Proposition 1.
Let p* := α-entmax(z). Assume we have computed p*, and define the vector s with
s_i := (p*_i)^{2−α} if p*_i > 0, and s_i := 0 otherwise.
Then,
∂ α-entmax(z)/∂z = diag(s) − (1/∑_j s_j) s s⊤.
Proof: The result follows directly from the more general Proposition 2, which we state and prove in Appendix B.
The gradient expression recovers the softmax and sparsemax Jacobians with α = 1 and α = 2, respectively (Martins and Astudillo, 2016, Eqs. 8 and 12), thereby providing another relationship between the two mappings. Perhaps more interestingly, Proposition 1 shows why the sparsemax Jacobian depends only on the support and not on the actual values of p*: the sparsemax Jacobian is identical for any two distributions with the same support. This is not the case for α < 2, suggesting that the gradients obtained with other values of α may be more informative. Finally, we point out that the gradient of entmax losses involves the entmax mapping (Eq. 12), and therefore Proposition 1 also gives the Hessian of the entmax loss.
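Proposition 1 is straightforward to check numerically. The sketch below (ours; it pairs the Jacobian formula with a simple bisection solver for α-entmax) compares the closed-form Jacobian against central finite differences:

```python
import numpy as np

def entmax_bisect(z, alpha=1.5, n_iter=60):
    """Approximate alpha-entmax (alpha > 1) by bisection on the threshold."""
    t = (alpha - 1) * z
    lo, hi = t.max() - 1.0, t.max()
    for _ in range(n_iter):
        tau = (lo + hi) / 2
        if (np.maximum(t - tau, 0.0) ** (1 / (alpha - 1))).sum() < 1.0:
            hi = tau
        else:
            lo = tau
    p = np.maximum(t - (lo + hi) / 2, 0.0) ** (1 / (alpha - 1))
    return p / p.sum()

def entmax_jacobian(p, alpha):
    """Proposition 1: J = diag(s) - s s^T / sum(s), with s_i = p_i^(2-alpha) on the support."""
    s = np.where(p > 0, p ** (2 - alpha), 0.0)
    return np.diag(s) - np.outer(s, s) / s.sum()

alpha = 1.5
z = np.array([1.0, 0.5, 0.1, -2.0])
p = entmax_bisect(z, alpha)
J = entmax_jacobian(p, alpha)

# central finite differences, at a point where the support is stable
eps = 1e-5
J_fd = np.stack(
    [(entmax_bisect(z + eps * e, alpha) - entmax_bisect(z - eps * e, alpha)) / (2 * eps)
     for e in np.eye(len(z))], axis=1)
assert np.abs(J - J_fd).max() < 1e-3
```

Note that the column for the zero-probability entry vanishes in both the formula and the finite differences, reflecting the sparse support.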
4 Experiments
The previous section establishes the computational building blocks required to train models with entmax sparse attention and loss functions. We now put them to use for two important NLP tasks, morphological inflection and machine translation. These two tasks highlight the characteristics of our innovations in different ways. Morphological inflection is a character-level task with mostly monotonic alignments, but the evaluation demands exactness: the predicted sequence must match the gold standard. On the other hand, machine translation uses a word-level vocabulary orders of magnitude larger and forces a sparse output layer to confront more ambiguity: any sentence has several valid translations, and it is not clear beforehand that entmax will manage this well.
Despite the differences between the tasks, we keep the architecture and training procedure as similar as possible. We use two layers for both the encoder and decoder LSTMs and apply dropout with probability 0.3. We train with Adam (Kingma and Ba, 2015), with a base learning rate of 0.001, halved whenever the loss increases on the validation set. At test time, we select the model with the best validation accuracy and decode with a beam size of 5. We implemented all models with OpenNMT-py (Klein et al., 2017).
In our primary experiments, we use three values of α for the attention and loss functions: α = 1 (softmax), α = 1.5 (to which our novel Algorithm 2 applies), and α = 2 (sparsemax). We also investigate the effect of tuning α with increased granularity.
4.1 Morphological Inflection
The goal of morphological inflection is to produce an inflected word form (such as “drawn”) given a lemma (“draw”) and a set of morphological tags ({verb, past, participle}). We use the data from task 1 of the CoNLL–SIGMORPHON 2018 shared task (Cotterell et al., 2018).
Training.
We train models under two data settings: high (approximately 10,000 samples per language in 86 languages) and medium (approximately 1,000 training samples per language in 102 languages). We depart from previous work by using multilingual training: each model is trained on the data from all languages in its data setting. This allows parameters to be shared between languages, eliminates the need to train languagespecific models, and may provide benefits similar to other forms of data augmentation (Bergmanis et al., 2017). Each sample is presented as a pair: the source contains the lemma concatenated to the morphological tags and a special language identification token (Johnson et al., 2017; Peters et al., 2017), and the target contains the inflected form. As an example, the source sequence for Figure 1 is english ␣ verb ␣ participle ␣ past ␣ d ␣ r ␣ a ␣ w. Although the set of inflectional tags is not sequential, treating it as such is simple to implement and works well in practice (Kann and Schütze, 2016)
. All models use embedding and hidden state sizes of 300. We validate at the end of every epoch in the high setting and only once every ten epochs in medium because of its smaller size.

output α | attention α | high (avg.) | high (ens.) | medium (avg.) | medium (ens.)
1 | 1 | 93.15 | 94.20 | 82.55 | 85.68
1 | 1.5 | 92.32 | 93.50 | 83.20 | 85.63
1 | 2 | 90.98 | 92.60 | 83.13 | 85.65
1.5 | 1 | 94.36 | 94.96 | 84.88 | 86.38
1.5 | 1.5 | 94.44 | 95.00 | 84.93 | 86.55
1.5 | 2 | 94.05 | 94.74 | 84.93 | 86.59
2 | 1 | 94.59 | 95.10 | 84.95 | 86.41
2 | 1.5 | 94.47 | 95.01 | 85.03 | 86.61
2 | 2 | 94.32 | 94.89 | 84.96 | 86.47
UZH (2018) | | 96.00 | | 86.64 |
Accuracy.
Results are shown in Table 1. We report the official metric of the shared task, word accuracy averaged across languages. In addition to the average results of three individual model runs, we use an ensemble of those models, in which we decode by averaging the raw probabilities at each time step. Our best sparse loss models beat the softmax baseline by nearly a full percentage point with ensembling, and by up to two and a half points in the medium setting without ensembling. The choice of attention has a smaller impact. In both data settings, our best model on the validation set outperforms all submissions from the 2018 shared task except for UZH (Makarov and Clematide, 2018), which uses a more involved imitation learning approach and larger ensembles. In contrast, our only departure from standard seq2seq training is the drop-in replacement of softmax by entmax.
Sparsity.
Besides their accuracy, we observed that entmax models made very sparse predictions: the best configuration in Table 1 concentrates all probability mass into a single predicted sequence in 81% of validation samples in the high data setting, and in 66% in the more difficult medium setting. When the model does give probability mass to more than one sequence, the predictions reflect reasonable ambiguity, as shown in Figure 1. Besides enhancing interpretability, sparsity in the output also has attractive properties for beam search decoding: when the beam covers all nonzero-probability hypotheses, we have a certificate of globally optimal decoding, rendering beam search exact. This is the case for 87% of validation set sequences in the high setting, and 79% in medium. To our knowledge, this is the first instance of a neural seq2seq model that can offer optimality guarantees.
4.2 Machine Translation
method | de-en | en-de | ja-en | en-ja | ro-en | en-ro
softmax (α = 1) | 25.70 ± 0.15 | 21.86 ± 0.09 | 20.22 ± 0.08 | 25.21 ± 0.29 | 29.12 ± 0.18 | 28.12 ± 0.18
1.5-entmax | 26.17 ± 0.13 | 22.42 ± 0.08 | 20.55 ± 0.30 | 26.00 ± 0.31 | 30.15 ± 0.06 | 28.84 ± 0.10
sparsemax (α = 2) | 24.69 ± 0.22 | 20.82 ± 0.19 | 18.54 ± 0.11 | 23.84 ± 0.37 | 29.20 ± 0.16 | 28.03 ± 0.16
We now turn to a highly different seq2seq regime in which the vocabulary size is much larger, there is a great deal of ambiguity, and sequences can generally be translated in several ways. We train models for three language pairs in both directions: German-English (de-en; IWSLT 2017, Cettolo et al., 2017), Japanese-English (ja-en; KFTT, Neubig, 2011), and Romanian-English (ro-en; WMT 2016, Bojar et al., 2016).
Training.
We use byte pair encoding (BPE; Sennrich et al., 2016a) to ensure an open vocabulary. We use separate segmentations with 25k merge operations per language for ro-en and a joint segmentation with 32k merges for the other language pairs. de-en is validated once every 5k steps because of its smaller size, while the other datasets are validated once every 10k steps. We set the maximum number of training steps at 120k for ro-en and 100k for the other language pairs. We use 500 dimensions for word vectors and hidden states.
Evaluation.
Table 2 shows BLEU scores (Papineni et al., 2002) for the three models with α ∈ {1, 1.5, 2}, using the same value of α for the attention mechanism and the loss function. We observe that the 1.5-entmax configuration consistently performs best across all six choices of language pair and direction. These results support the notion that the optimal α is somewhere between softmax and sparsemax, which motivates a more fine-grained search for α; we explore this next.
Fine-grained impact of α.
Algorithm 1 allows us to further investigate the marginal effect of varying the attention α and the loss α, while keeping the other fixed. We report de-en validation accuracy on a fine-grained grid of α values in Figure 4. On this dataset, moving from softmax toward sparser attention (left) has a very small positive effect on accuracy, suggesting that the gain in interpretability does not hurt accuracy. The impact of the loss function α (right) is much more visible: there is a distinct optimal value around α = 1.5, with performance decreasing for values that are too large. Interpolating between softmax and sparsemax thus inherits the benefits of both, and our novel Algorithm 2 for α = 1.5 is confirmed to strike a good middle ground. This experiment also confirms that bisection is effective in practice, despite being inexact. Extrapolating beyond the sparsemax loss (α > 2) does not seem to perform well.
Sparsity.
In order to form a clearer idea of how sparse entmax becomes, we measure the average number of nonzero indices on the de-en validation set and show it in Table 3. As expected, 1.5-entmax is less sparse than sparsemax as both an attention mechanism and an output layer. In the attention mechanism, 1.5-entmax's increased support size does not come at the cost of much interpretability, as Figure 5 demonstrates. In the output layer, 1.5-entmax assigns positive probability to only 16.13 target types on average, out of a vocabulary of 17,993, meaning that the supported set of words often has an intuitive interpretation. Figure 2 shows the sparsity of the 1.5-entmax output layer in practice: the support becomes completely concentrated when generating a phrase like "the tree of life", but grows when presenting a list of synonyms ("view", "look", "glimpse", and so on). This has potential practical applications as a predictive translation system (Green et al., 2014), in which the model's support set serves as a list of candidate auto-completions at each time step.
method | # attended | # target words
softmax (α = 1) | 24.25 | 17,993
1.5-entmax | 5.55 | 16.13
sparsemax (α = 2) | 3.75 | 7.55
Training time.
Importantly, the benefits of sparsity do not come at a high computational cost. Our proposed Algorithm 2 for 1.5-entmax runs on the GPU at near-softmax speeds (Figure 6). For other values of α, bisection (Algorithm 1) is slightly more costly, but practical even for large vocabulary sizes. On de-en, bisection is capable of processing about 10,500 target words per second on a single Nvidia GeForce GTX 1080 GPU, compared to 13,000 words per second for 1.5-entmax with Algorithm 2 and 14,500 words per second with softmax. On the smaller-vocabulary morphology datasets, Algorithm 2 is nearly as fast as softmax.
5 Related Work
Sparse attention.
Sparsity in the attention and in the output have different, but related, motivations. Sparse attention can be justified as a form of inductive bias, since for tasks such as machine translation one expects only a few source words to be relevant for each translated word. Dense attention probabilities are particularly harmful for long sequences, as shown by Luong et al. (2015), who propose "local attention" to mitigate this problem. Combining sparse attention with fertility constraints has recently been proposed by Malaviya et al. (2018). Hard attention (Xu et al., 2015; Aharoni and Goldberg, 2017; Wu et al., 2018) selects exactly one source token. Its discrete, non-differentiable nature requires imitation learning or Monte Carlo policy gradient approximations, which drastically complicate training. In contrast, entmax is a differentiable, easy-to-use, drop-in softmax replacement. A recent study by Jain and Wallace (2019) tackles the limitations of attention probabilities for providing interpretability. They only study dense attention in classification tasks, where attention is less crucial for the final predictions. In their conclusions, the authors defer the exploration of sparse attention mechanisms and seq2seq models to future work. We believe our paper can foster interesting investigation in this area.
Losses for seq2seq models.
Mostly motivated by the challenges of large vocabulary sizes in seq2seq, an important research direction tackles replacing the cross-entropy loss with other losses or approximations (Bengio and Senécal, 2008; Morin and Bengio, 2005; Kumar and Tsvetkov, 2019). While differently motivated, some of the above strategies (e.g., hierarchical prediction) could be combined with our proposed sparse losses. Niculae et al. (2018) use sparsity to predict interpretable sets of structures. Since autoregressive seq2seq makes no factorization assumptions, their strategy cannot be applied without approximations, such as in Edunov et al. (2018).
6 Conclusion and Future Work
We proposed sparse sequence-to-sequence models and provided fast algorithms to compute their attention and output transformations. Our approach yielded consistent improvements over dense models on morphological inflection and machine translation, while inducing interpretability in both attention and output distributions. Sparse output layers also provide exactness when the number of possible hypotheses does not exhaust beam search.
Given the ubiquity of softmax in NLP, entmax has many potential applications. A natural next step is to apply entmax to selfattention (Vaswani et al., 2017). In a different vein, the strong morphological inflection results point to usefulness in other tasks where probability is concentrated in a small number of hypotheses, such as speech recognition.
Acknowledgments
This work was supported by the European Research Council (ERC StG DeepSPIN 758969), and by the Fundação para a Ciência e Tecnologia through contract UID/EEA/50008/2019. We thank Mathieu Blondel, Nikolay Bogoychev, Gonçalo Correia, Erick Fonseca, Pedro Martins, Tsvetomila Mihaylova, Miguel Rios, Marcos Treviso, and the anonymous reviewers, for helpful discussion and feedback.
References
 Aharoni and Goldberg (2017) Roee Aharoni and Yoav Goldberg. 2017. Morphological inflection generation with hard monotonic attention. In Proc. ACL.
 Bahdanau et al. (2015) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. ICLR.

 Bengio and Senécal (2008) Yoshua Bengio and Jean-Sébastien Senécal. 2008. Adaptive importance sampling to accelerate training of a neural probabilistic language model. IEEE Transactions on Neural Networks, 19(4):713–722.
 Bergmanis et al. (2017) Toms Bergmanis, Katharina Kann, Hinrich Schütze, and Sharon Goldwater. 2017. Training data augmentation for low-resource morphological inflection. In Proc. CoNLL–SIGMORPHON.
 Blondel et al. (2019) Mathieu Blondel, André FT Martins, and Vlad Niculae. 2019. Learning classifiers with Fenchel-Young losses: Generalized entropies, margins, and algorithms. In Proc. AISTATS.
 Bojar et al. (2016) Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, et al. 2016. Findings of the 2016 conference on machine translation. In ACL WMT.
 Bridle (1990) John S Bridle. 1990. Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. In Neurocomputing, pages 227–236. Springer.
 Cettolo et al. (2017) M Cettolo, M Federico, L Bentivogli, J Niehues, S Stüker, K Sudoh, K Yoshino, and C Federmann. 2017. Overview of the IWSLT 2017 evaluation campaign. In Proc. IWSLT.
 Chopra et al. (2016) Sumit Chopra, Michael Auli, and Alexander M Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In Proc. NAACLHLT.
 Chorowski et al. (2015) Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. 2015. Attentionbased models for speech recognition. In Proc. NeurIPS.
 Condat (2016) Laurent Condat. 2016. Fast projection onto the simplex and the ℓ₁ ball. Mathematical Programming, 158(1–2):575–585.
 Cotterell et al. (2018) Ryan Cotterell, Christo Kirov, John SylakGlassman, Gėraldine Walther, Ekaterina Vylomova, Arya D McCarthy, Katharina Kann, Sebastian Mielke, Garrett Nicolai, Miikka Silfverberg, et al. 2018. The CoNLL–SIGMORPHON 2018 shared task: Universal morphological reinflection. Proc. CoNLL–SIGMORPHON.
 Danskin (1966) John M Danskin. 1966. The theory of maxmin, with applications. SIAM Journal on Applied Mathematics, 14(4):641–664.
 Edunov et al. (2018) Sergey Edunov, Myle Ott, Michael Auli, David Grangier, et al. 2018. Classical structured prediction losses for sequence to sequence learning. In Proc. NAACLHLT.
 Green et al. (2014) Spence Green, Sida I Wang, Jason Chuang, Jeffrey Heer, Sebastian Schuster, and Christopher D Manning. 2014. Human effort and machine learnability in computer aided translation. In Proc. EMNLP.
 Held et al. (1974) Michael Held, Philip Wolfe, and Harlan P Crowder. 1974. Validation of subgradient optimization. Mathematical Programming, 6(1):62–88.
 Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long shortterm memory. Neural Computation, 9(8):1735–1780.
 Jain and Wallace (2019) Sarthak Jain and Byron C. Wallace. 2019. Attention is not explanation. In Proc. NAACLHLT.
 Johnson et al. (2017) Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, et al. 2017. Google’s multilingual neural machine translation system: Enabling zeroshot translation. Transactions of the Association for Computational Linguistics, 5:339–351.
 Kann and Schütze (2016) Katharina Kann and Hinrich Schütze. 2016. MED: The LMU system for the SIGMORPHON 2016 shared task on morphological reinflection. In Proc. SIGMORPHON.
 Kingma and Ba (2015) Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proc. ICLR.
 Klein et al. (2017) Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M Rush. 2017. OpenNMT: Opensource toolkit for neural machine translation. arXiv eprints.
 Kumar and Tsvetkov (2019) Sachin Kumar and Yulia Tsvetkov. 2019. Von MisesFisher loss for training sequence to sequence models with continuous outputs. In Proc. ICLR.
 Liu and Ye (2009) Jun Liu and Jieping Ye. 2009. Efficient Euclidean projections in linear time. In Proc. ICML.
 Luong et al. (2015) MinhThang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. In Proc. EMNLP.
 Makarov and Clematide (2018) Peter Makarov and Simon Clematide. 2018. UZH at CoNLL–SIGMORPHON 2018 shared task on universal morphological reinflection. Proc. CoNLL–SIGMORPHON.
 Malaviya et al. (2018) Chaitanya Malaviya, Pedro Ferreira, and André FT Martins. 2018. Sparse and constrained attention for neural machine translation. In Proc. ACL.
 Martins and Astudillo (2016) André FT Martins and Ramón Fernandez Astudillo. 2016. From softmax to sparsemax: A sparse model of attention and multilabel classification. In Proc. of ICML.
 Michelot (1986) Christian Michelot. 1986. A finite algorithm for finding the projection of a point onto the canonical simplex of ℝⁿ. Journal of Optimization Theory and Applications, 50(1):195–200.
 Morin and Bengio (2005) Frederic Morin and Yoshua Bengio. 2005. Hierarchical probabilistic neural network language model. In Proc. AISTATS.
 Neubig (2011) Graham Neubig. 2011. The Kyoto free translation task. http://www.phontron.com/kftt.
 Niculae and Blondel (2017) Vlad Niculae and Mathieu Blondel. 2017. A regularized framework for sparse and structured neural attention. In Proc. NeurIPS.
 Niculae et al. (2018) Vlad Niculae, André FT Martins, Mathieu Blondel, and Claire Cardie. 2018. SparseMAP: Differentiable sparse structured inference. In Proc. ICML.
 Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proc. ACL.
 Paszke et al. (2017) Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In Proc. NeurIPS Autodiff Workshop.
 Peters et al. (2017) Ben Peters, Jon Dehdari, and Josef van Genabith. 2017. Massively multilingual neural graphemetophoneme conversion. In Proc. Workshop on Building Linguistically Generalizable NLP Systems.
 Rockafellar (1970) R Tyrrell Rockafellar. 1970. Convex Analysis. Princeton University Press.
 Sennrich et al. (2016a) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Neural machine translation of rare words with subword units. In Proc. ACL.
 Sennrich et al. (2016b) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Edinburgh neural machine translation systems for WMT 16. In Proc. WMT.
 Tsallis (1988) Constantino Tsallis. 1988. Possible generalization of Boltzmann-Gibbs statistics. Journal of Statistical Physics, 52:479–487.
 Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. NeurIPS.

 Wainwright and Jordan (2008) Martin J Wainwright and Michael I Jordan. 2008. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305.
 Wu et al. (2018) Shijie Wu, Pamela Shapiro, and Ryan Cotterell. 2018. Hard non-monotonic attention for character-level transduction. In Proc. EMNLP.
 Xu et al. (2015) Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proc. ICML.
Appendix A Background
A.1 Tsallis entropies
Recall the definition of the Tsallis family of entropies in Eq. 9 for α ≥ 1:

(14)  H^T_α(p) := 1/(α(α−1)) ∑_j (p_j − p_j^α)  if α ≠ 1,  and  H^T_1(p) := H^S(p), the Shannon entropy.

This family is continuous in α, i.e., lim_{α→1} H^T_α(p) = H^S(p) for any p ∈ Δ. Proof: For simplicity, we rewrite H^T_α in separable form:

(15)  H^T_α(p) = ∑_j h_α(p_j),  with  h_α(t) := (t − t^α) / (α(α−1)).

It suffices to show that lim_{α→1} h_α(t) = −t log t for t ∈ (0, 1]. Let f(α) := t − t^α, and g(α) := α(α−1). Observe that f(1) = g(1) = 0, so we are in an indeterminate 0/0 case. We take the derivatives of f and g:

f′(α) = −t^α log t,  g′(α) = 2α − 1.

From l’Hôpital’s rule,

lim_{α→1} h_α(t) = lim_{α→1} f′(α)/g′(α) = f′(1)/g′(1) = −t log t.

Note also that, as α → ∞, the denominator α(α−1) grows unbounded, so H^T_α(p) → 0.
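The continuity claim is easy to sanity-check numerically. Below is a minimal NumPy sketch (the function name `tsallis_entropy` is ours, not from any released codebase) that evaluates the separable form of Eq. 15 near α = 1 and for large α:

```python
import numpy as np

def tsallis_entropy(p, alpha):
    # Tsallis entropy of Eq. 14, with the Shannon entropy at alpha = 1.
    p = np.asarray(p, dtype=float)
    if alpha == 1.0:
        nz = p[p > 0]
        return -np.sum(nz * np.log(nz))
    return np.sum(p - p ** alpha) / (alpha * (alpha - 1.0))

p = np.array([0.6, 0.3, 0.1])
shannon = tsallis_entropy(p, 1.0)

# Continuity at alpha = 1: the gap to the Shannon entropy shrinks as alpha -> 1.
gaps = [abs(tsallis_entropy(p, a) - shannon) for a in (1.1, 1.01, 1.001)]
print(gaps[0] > gaps[1] > gaps[2])

# As alpha grows, the alpha*(alpha - 1) denominator dominates and the entropy vanishes.
print(tsallis_entropy(p, 50.0) < 1e-3)
```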
A.2 Fenchel-Young losses
In this section, we recall the definitions and properties essential for our construction of the entmax losses. The concepts below were formalized by Blondel et al. (2019) in more generality; we present here a less general version, sufficient for our needs.
Definition 1 (Probabilistic prediction function regularized by Ω).
Let Ω : Δ^d → ℝ be a strictly convex regularization function. We define the prediction function π_Ω as

(16)  π_Ω(z) := argmax_{p ∈ Δ^d} ( p⊤z − Ω(p) ).

Definition 2 (Fenchel-Young loss generated by Ω).
Let Ω : Δ^d → ℝ be a strictly convex regularization function. Let q ∈ Δ^d denote a ground-truth label (for example, q = e_k if there is a unique correct class k). Denote by z ∈ ℝ^d the prediction scores produced by some model, and by π_Ω(z) the probabilistic predictions. The Fenchel-Young loss L_Ω generated by Ω is

(17)  L_Ω(z; q) := Ω*(z) + Ω(q) − z⊤q,

where Ω* denotes the convex conjugate of Ω.
Properties of Fenchel-Young losses.

Non-negativity. L_Ω(z; q) ≥ 0 for any z ∈ ℝ^d and q ∈ Δ^d.

Zero loss. L_Ω(z; q) = 0 if and only if q = π_Ω(z), i.e., the prediction is exactly correct.

Convexity. L_Ω is convex in z.

Differentiability. L_Ω is differentiable with ∇_z L_Ω(z; q) = π_Ω(z) − q.

Smoothness. If Ω is strongly convex, then L_Ω is smooth.

Temperature scaling. For any constant t > 0, L_{tΩ}(z; q) = t L_Ω(z/t; q).
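To make Definitions 1–2 and the properties above concrete, here is a small NumPy sketch for the sparsemax case, Ω(p) = ½‖p‖². It is an illustrative instance of the framework, not the paper's released code: π_Ω is computed with the standard sort-based simplex projection (cf. Michelot, 1986), and Ω*(z) is evaluated as z⊤p* − Ω(p*) with p* = π_Ω(z).

```python
import numpy as np

def sparsemax(z):
    # pi_Omega for Omega(p) = 0.5*||p||^2: Euclidean projection onto the simplex,
    # via the standard sort-and-threshold algorithm (cf. Michelot, 1986).
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]
    cssv = np.cumsum(z_sorted) - 1.0
    k = np.arange(1, len(z) + 1)
    supp = z_sorted - cssv / k > 0
    tau = cssv[supp][-1] / k[supp][-1]
    return np.maximum(z - tau, 0.0)

def fy_loss(z, q):
    # Fenchel-Young loss of Eq. 17 for Omega(p) = 0.5*||p||^2, using
    # Omega*(z) = z.p* - Omega(p*) with p* = sparsemax(z).
    p = sparsemax(z)
    conj = z @ p - 0.5 * (p @ p)
    return conj + 0.5 * (q @ q) - z @ q

z = np.array([1.2, 0.9, -0.5])
q = np.array([1.0, 0.0, 0.0])      # one-hot ground truth
p = sparsemax(z)

print(fy_loss(z, q) >= 0.0)        # non-negativity
print(abs(fy_loss(z, p)) < 1e-12)  # zero loss exactly when q = pi_Omega(z)

# Differentiability: grad_z L = pi_Omega(z) - q, checked by central differences.
eps = 1e-6
grad = np.array([(fy_loss(z + eps * e, q) - fy_loss(z - eps * e, q)) / (2 * eps)
                 for e in np.eye(3)])
print(np.allclose(grad, p - q, atol=1e-5))
```

The final check illustrates the differentiability property above, ∇_z L_Ω(z; q) = π_Ω(z) − q, without any autodiff machinery.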
Characterizing the solution of π_Ω.
To shed light on the generic probability mapping in Eq. 16, we derive below the optimality conditions characterizing its solution. The optimality conditions are essential not only for constructing algorithms for computing π_Ω (Appendix C), but also for deriving the Jacobian of the mapping (Appendix B). The Lagrangian of the maximization in Eq. 16 is

(18)  L(p, ν, τ) := Ω(p) − p⊤z − ν⊤p + τ(1⊤p − 1),

with subgradient

(19)  ∂_p L(p, ν, τ) = ∂Ω(p) − z − ν + τ1.

The subgradient KKT conditions are therefore:

(20)  z + ν − τ1 ∈ ∂Ω(p),

(21)  1⊤p = 1,  p ≥ 0,

(22)  ν ≥ 0,

(23)  ν_j p_j = 0 for all j.
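The conditions in Eqs. 20–23 can be verified numerically for a concrete Ω. For Ω(p) = ½‖p‖² (sparsemax), ∂Ω(p) = {p}, so Eq. 20 becomes the equality z + ν − τ1 = p*, which pins down the multipliers in closed form. The sketch below is our own illustration in plain NumPy:

```python
import numpy as np

def sparsemax_with_multipliers(z):
    # For Omega(p) = 0.5*||p||^2, Eq. 20 reads z + nu - tau*1 = p*, so
    # p*_j = max(z_j - tau, 0) and nu_j = max(tau - z_j, 0).
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]
    cssv = np.cumsum(z_sorted) - 1.0
    k = np.arange(1, len(z) + 1)
    supp = z_sorted - cssv / k > 0
    tau = cssv[supp][-1] / k[supp][-1]
    return np.maximum(z - tau, 0.0), np.maximum(tau - z, 0.0), tau

z = np.array([0.7, 0.1, -1.3, 0.4])
p, nu, tau = sparsemax_with_multipliers(z)
print(np.allclose(z + nu - tau, p))                  # Eq. 20 (stationarity)
print(np.isclose(p.sum(), 1.0) and np.all(p >= 0))   # Eq. 21 (primal feasibility)
print(np.all(nu >= 0))                               # Eq. 22 (dual feasibility)
print(np.allclose(nu * p, 0.0))                      # Eq. 23 (complementary slackness)
```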
Connection to softmax and sparsemax.
With Ω = −H^S (the negative Shannon entropy), Eq. 16 recovers the softmax, π_Ω(z)_j ∝ exp(z_j); with Ω(p) = ½‖p‖², π_Ω(z) is the Euclidean projection of z onto the simplex, i.e., sparsemax(z) (Martins and Astudillo, 2016).
Appendix B Backward pass for generalized sparse attention mappings
When a mapping π_Ω is used inside the computation graph of a neural network, the Jacobian of the mapping has the important role of showing how to propagate error information, necessary when training with gradient methods. In this section, we derive a new, simple expression for the Jacobian of the generalized sparse mapping π_Ω. We apply this result to obtain a simple form for the Jacobian of α-entmax mappings.
The proof is in two steps. First, we prove a lemma that shows that Jacobians are zero outside of the support of the solution. Then, completing the result, we characterize the Jacobian at the nonzero indices.
Lemma 1 (Sparse attention mechanisms have sparse Jacobians).
Let Ω be strongly convex. The attention mapping π_Ω is differentiable almost everywhere, with Jacobian ∂π_Ω/∂z symmetric and satisfying

(∂π_Ω(z)/∂z_j)_i = 0  whenever  π_Ω(z)_i = 0 or π_Ω(z)_j = 0.

Proof: Since Ω is strictly convex, the argmax in Eq. 16 is unique. Using Danskin’s theorem (Danskin, 1966), we may write π_Ω(z) = ∇Ω*(z). Since Ω is strongly convex, the gradient of its conjugate ∇Ω* is differentiable almost everywhere (Rockafellar, 1970). Moreover, ∂π_Ω/∂z is the Hessian of Ω*, therefore it is symmetric, proving the first two claims.

Recall the definition of a partial derivative,

(∂π_Ω(z)/∂z_j)_i = lim_{ε→0} ( π_Ω(z + ε e_j)_i − π_Ω(z)_i ) / ε.

Denote by p* := π_Ω(z). We will show that for any j such that p*_j = 0, and any ε > 0,

π_Ω(z − ε e_j) = π_Ω(z).

In other words, we consider only one side of the limit, namely subtracting a small non-negative ε. A vector p* solves the optimization problem in Eq. 16 if and only if there exist ν* ∈ ℝ^d and τ* ∈ ℝ satisfying Eqs. 20–23. Let ν′ := ν* + ε e_j. We verify that (p*, ν′, τ*) satisfies the optimality conditions for π_Ω(z − ε e_j), which implies that π_Ω(z − ε e_j) = p*. Since we add a non-negative quantity to ν*, which is non-negative to begin with, we satisfy ν′ ≥ 0 (Eq. 22), and since p*_j = 0, we also satisfy ν′_j p*_j = 0 (Eq. 23). Finally,

(z − ε e_j) + ν′ − τ*1 = z + ν* − τ*1 ∈ ∂Ω(p*),

so Eq. 20 holds as well. It follows that the one-sided limit is zero. If π_Ω is differentiable at z, this one-sided limit must agree with the derivative. Otherwise, the sparse one-sided limit is a generalized Jacobian.
Proposition 2.
Let p* := π_Ω(z), with Ω strongly convex and differentiable. Denote the support of p* by S := { j : p*_j > 0 }, and by H_S the restriction of the Hessian ∇²Ω(p*) to the rows and columns indexed by S. If the second derivatives exist for all i, j ∈ S, then the Jacobian restricted to the support is

∂p̄/∂z̄ = H_S^{−1} − (1 / (1⊤H_S^{−1}1)) H_S^{−1} 1 1⊤ H_S^{−1},

with all other entries zero. In particular, if Ω is separable, Ω(p) = ∑_j g(p_j) with g twice differentiable on (0, 1], we have

∂π_Ω(z)/∂z = diag(s) − (1 / (1⊤s)) s s⊤,  where  s_j := 1/g″(p*_j) if j ∈ S, and s_j := 0 otherwise.

Proof: Lemma 1 verifies that (∂π_Ω(z)/∂z_j)_i = 0 for j ∉ S. It remains to find the derivatives with respect to z_j for j ∈ S. Denote by z̄, p̄, ν̄ the restrictions of the corresponding vectors to the indices in the support S. The optimality conditions on the support are

(24)  ∇Ω(p̄) − z̄ + τ1 = 0,  1⊤p̄ = 1,

where ν̄ = 0 by complementary slackness, so it vanishes. Differentiating w.r.t. z̄ at p* yields

(25)  [ H_S  1 ; 1⊤  0 ] [ ∂p̄/∂z̄ ; ∂τ/∂z̄ ] = [ I ; 0 ].

Since Ω is strictly convex, H_S is invertible. From block Gaussian elimination (i.e., the Schur complement),

∂τ/∂z̄ = (1⊤H_S^{−1}1)^{−1} 1⊤H_S^{−1},

which can then be used to solve for ∂p̄/∂z̄, giving

∂p̄/∂z̄ = H_S^{−1} − (1 / (1⊤H_S^{−1}1)) H_S^{−1} 1 1⊤ H_S^{−1},

yielding the desired result. When Ω is separable, H_S is diagonal, with (H_S)_{jj} = g″(p̄_j), yielding the simplified expression with s̄_j = 1/g″(p̄_j), which completes the proof.
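As a numerical sanity check on the separable case, the sketch below instantiates Proposition 2 for sparsemax, where g(t) = t²/2, so g″ ≡ 1 and s is simply the indicator of the support, and compares the closed form diag(s) − ss⊤/(1⊤s) against finite differences (illustrative code, not from the paper's implementation):

```python
import numpy as np

def sparsemax(z):
    # Euclidean projection onto the simplex via sorting.
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]
    cssv = np.cumsum(z_sorted) - 1.0
    k = np.arange(1, len(z) + 1)
    supp = z_sorted - cssv / k > 0
    tau = cssv[supp][-1] / k[supp][-1]
    return np.maximum(z - tau, 0.0)

def sparsemax_jacobian(z):
    # Proposition 2 with g(t) = t^2/2: s_j = 1/g''(p*_j) = 1 on the support.
    s = (sparsemax(z) > 0).astype(float)
    return np.diag(s) - np.outer(s, s) / s.sum()

z = np.array([1.2, 0.9, -0.5])
J = sparsemax_jacobian(z)

# Finite-difference Jacobian, column by column.
eps = 1e-6
J_fd = np.column_stack([(sparsemax(z + eps * e) - sparsemax(z - eps * e)) / (2 * eps)
                        for e in np.eye(3)])
print(np.allclose(J, J_fd, atol=1e-5))
```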
Connection to other differentiable attention results.
Our result is similar to, but simpler than, Niculae and Blondel (2017, Proposition 1), especially in the case of separable Ω. Crucially, our result does not require that the second derivative exist outside of the support. As such, unlike the cited work, our result is applicable in the case of α-entmax, where g(t) = (t^α − t)/(α(α−1)) gives g″(t) = t^{α−2}, so either g″ or its reciprocal may not exist at t = 0.
Appendix C Algorithms for entmax
C.1 General thresholded form for bisection algorithms.
The following lemma provides a simplified form for the solution of α-entmax.
Lemma 2.
For any z ∈ ℝ^d and α > 1, there exists a unique τ* ∈ ℝ such that

(26)  α-entmax(z) = [(α − 1)z − τ*1]₊^{1/(α−1)}.

Proof: We use the regularized prediction functions defined in Appendix A.2. From both definitions, α-entmax(z) = π_Ω(z) with Ω(p) = −H^T_α(p).
We first note that, for all p ∈ Δ^d,

(27)  −H^T_α(p) = 1/(α(α−1)) ( ∑_j p_j^α − 1 ),

since ∑_j p_j = 1.
From the constant invariance and scaling properties of
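Lemma 2 suggests a simple numerical recipe: the map τ ↦ ∑_j [(α−1)z_j − τ]₊^{1/(α−1)} is decreasing in τ, so the unique τ* giving unit total mass can be found by bisection. The sketch below is our own minimal implementation for α > 1, not the paper's exact algorithm; it brackets τ* in [max_j (α−1)z_j − 1, max_j (α−1)z_j]:

```python
import numpy as np

def entmax_bisect(z, alpha=1.5, n_iter=60):
    # alpha-entmax via bisection on the threshold tau of Eq. 26 (requires alpha > 1).
    zs = (alpha - 1.0) * np.asarray(z, dtype=float)
    exponent = 1.0 / (alpha - 1.0)

    def p_of(tau):
        return np.maximum(zs - tau, 0.0) ** exponent

    # sum_j p_of(tau)_j is decreasing in tau: >= 1 at max(zs) - 1, and 0 at max(zs).
    tau_lo, tau_hi = zs.max() - 1.0, zs.max()
    for _ in range(n_iter):
        tau = 0.5 * (tau_lo + tau_hi)
        if p_of(tau).sum() >= 1.0:
            tau_lo = tau        # too much mass: raise the threshold
        else:
            tau_hi = tau
    p = p_of(0.5 * (tau_lo + tau_hi))
    return p / p.sum()          # tiny renormalization guards against bisection tolerance

z = np.array([2.0, 1.0, 0.2, -0.5])
print(entmax_bisect(z, alpha=1.5))   # sparse: low-scoring coordinates hit exactly zero
print(entmax_bisect(z, alpha=2.0))   # alpha = 2 coincides with sparsemax
```

Each bisection step halves the bracket, so n_iter = 60 drives τ to machine precision; α = 2 makes the exponent 1 and recovers the sparsemax thresholding.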