1 Introduction
Calibration of supervised learning models is a topic of continued interest in machine learning and statistics (Niculescu-Mizil and Caruana, 2005; Candela et al., 2005; Crowson et al., 2016; Guo et al., 2017). Calibration requires that the probability a model assigns to a prediction equals the true chance of correctness of that prediction. For example, if a calibrated model
makes 1000 predictions with probability values around 0.99, we expect 990 of these to be correct. If it makes another 100 predictions with probability 0.8, we expect around 80 of these to be correct. Calibration is important in real-life deployments of a model since it ensures interpretable probabilities, and it plays a crucial role in reducing prediction bias (Pleiss et al., 2017). In this paper we show that for structured prediction models, calibration is also important for the sound working of the inference algorithm that generates structured outputs.

Much recent work has studied the calibration of modern neural networks for scalar predictions
(Guo et al., 2017; Lakshminarayanan et al., 2017; Hendrycks and Gimpel, 2017; Louizos and Welling, 2017; Pereyra et al., 2017; Kumar et al., 2018; Kuleshov et al., 2018). A surprising outcome is that modern neural networks have been found to be miscalibrated in the direction of over-confidence, in spite of a statistically sound log-likelihood based training objective.

We investigate the calibration of attention-based encoder-decoder models for sequence-to-sequence (seq2seq) learning as applied to neural machine translation. We measure the calibration of token probabilities of three modern neural architectures for translation, namely NMT (Bahdanau et al., 2015), GNMT (Wu et al., 2016), and the Transformer model (Vaswani et al., 2017), on six different benchmarks. We find the output token probabilities of these models to be poorly calibrated. This is surprising because the output distribution is conditioned on true previous tokens (teacher forcing), where there is no train-test mismatch, unlike when we condition on predicted tokens and risk exposure bias (Bengio et al., 2015; Ranzato et al., 2016; Norouzi et al., 2016; Wiseman and Rush, 2016). We show that such lack of calibration can explain the counter-intuitive BLEU drop with increasing beam size (Koehn and Knowles, 2017).
We dig into the root causes of this lack of calibration and pinpoint two primary ones: poor calibration of the EOS token and attention uncertainty. Instead of a generic temperature-based fix as in Guo et al. (2017), we propose a parametric model that recalibrates the output distribution as a function of input coverage, attention uncertainty, and token probability. We show that our approach leads to improved token-level calibration. We then demonstrate three advantages of a better calibrated model. First, we show that the calibrated model better correlates probability with BLEU, and that this leads to a BLEU increase of up to 0.4 points merely by recalibrating a pre-trained model. Second, we show that the calibrated model has better calibration on the per-sequence BLEU metric, which we refer to as sequence-level calibration, achieved just by recalibrating token-level probabilities. Third, we show that improved calibration diminishes the drop in BLEU with increasing beam size. Unlike patches like coverage and length penalty (Wu et al., 2016; He et al., 2016; Yang et al., 2018a), inference on calibrated models also yields reliable probabilities.

2 Background and Motivation
We review the model underlying modern NMT systems (see Koehn (2017) for a more detailed review), then discuss measures of calibration.
2.1 Attention-based NMT
State-of-the-art NMT systems use an attention-based encoder-decoder neural network to model a distribution P(y|x, θ) over the space of discrete output translations y of an input sentence x, where θ denotes the network parameters. Let x_1, …, x_k denote the tokens in the sequence x and y_1, …, y_n denote the tokens in y. Let V denote the output vocabulary. A special token EOS marks the end of a sequence in both x and y. First, an encoder (e.g. a bidirectional LSTM) transforms each x_j into a real vector h_j. The Encoder-Decoder (ED) network factorizes P(y|x, θ) as

  P(y|x, θ) = ∏_{t=1}^{n} P(y_t | y_{<t}, x, θ)    (1)

where y_{<t} = y_1, …, y_{t−1}. The decoder computes each P(y_t | y_{<t}, x, θ) as

  P(y_t = w | y_{<t}, x, θ) ∝ exp(θ_w · [H_t, c_t])    (2)

where H_t is a decoder state summarizing y_{<t} and c_t is the attention-weighted input:

  c_t = ∑_{j=1}^{k} α_{jt} h_j,   α_{jt} = A(h_j, H_t)    (3)

where A(·) is the attention unit.
During training, given a corpus D of sentence pairs, we find θ to minimize the negative log-likelihood (NLL):

  min_θ  − ∑_{(x,y) ∈ D} ∑_{t=1}^{n} log P(y_t | y_{<t}, x, θ)    (4)

During inference, given an input x, we need to find the y that maximizes P(y|x, θ). This is intractable given the full dependency structure (Eq. 1). Approximations like beam search with a beam-width parameter B (typically between 4 and 12) maintain the B highest-probability prefixes, which are grown one token at a time. At each step, beam search finds the top-B highest-probability tokens from P(y_t | prefix, x, θ) for each prefix, until an EOS is encountered.
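The beam-search procedure above can be sketched as follows. The toy next-token model and its probabilities are purely hypothetical; a real decoder would score log-probabilities from the network instead:

```python
import math

# Toy next-token model: maps a prefix (tuple of tokens) to a distribution
# over the vocabulary {"a", "b", "<eos>"}. Illustrative numbers only.
TOY_MODEL = {
    (): {"a": 0.6, "b": 0.3, "<eos>": 0.1},
    ("a",): {"a": 0.1, "b": 0.5, "<eos>": 0.4},
    ("b",): {"a": 0.4, "b": 0.1, "<eos>": 0.5},
}

def next_token_logprobs(prefix):
    dist = TOY_MODEL.get(prefix, {"<eos>": 1.0})  # unknown prefix: force EOS
    return {tok: math.log(p) for tok, p in dist.items() if p > 0}

def beam_search(beam_width, max_len=10):
    # Each hypothesis is (log_prob, prefix); hypotheses ending in <eos>
    # move to the `completed` list.
    beams = [(0.0, ())]
    completed = []
    for _ in range(max_len):
        candidates = []
        for lp, prefix in beams:
            for tok, tok_lp in next_token_logprobs(prefix).items():
                hyp = (lp + tok_lp, prefix + (tok,))
                (completed if tok == "<eos>" else candidates).append(hyp)
        # Keep only the beam_width highest-probability open prefixes.
        beams = sorted(candidates, reverse=True)[:beam_width]
        if not beams:
            break
    return max(completed)  # highest-scoring completed hypothesis

best_lp, best_seq = beam_search(beam_width=2)
```

Note that the winning sequence here ("a", "b", EOS) beats the greedy one-step continuation ("a", EOS): beam search trades off per-step and whole-sequence scores, which is exactly why per-step calibration matters.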
2.2 Calibration: Definition and Measures
Our goal is to study, analyze, and fix the calibration of the next-token distribution P(y_t | y_{<t}, x, θ) that is used at each inference step. We first define calibration and how it is measured. Then, we motivate the importance of calibration in beam-search-like inference for sequence prediction.

We use the shortform P_t(y) for P(y_t = y | y_{<t}, x, θ). A prediction model is well-calibrated if, for any value β ∈ [0, 1], of all predictions made with probability β, the fraction correct is β. That is, the model-assigned probability represents the chance of correctness of the prediction.
Calibration error measures the mismatch between the model-assigned probability (also called confidence) and the fraction correct. To measure such mismatch on finite test data, we bin the range [0, 1] into equal-sized bins (e.g. [0, 0.1), [0.1, 0.2), etc.) and sum up the mismatch in each bin. Say we are given test data with distributions P_t(y) spanning different sequence and step combinations. Let y* denote the correct output and ŷ = argmax_y P_t(y) denote the prediction; its prediction confidence is then P_t(ŷ). Within each bin b, let S_b denote all instances whose confidence value falls within that bin. Over each S_b we measure: (1) the fraction correct, or accuracy A_b, that is, the fraction of cases in S_b where ŷ = y*; (2) the average confidence C_b; and (3) the total mass w_b on the bin, i.e. the fraction of all cases that fall in that bin. A graphical way to assess calibration error is a reliability plot that shows average confidence C_b on the x-axis against average accuracy A_b on the y-axis. In a well-calibrated model, where confidence matches correctness, the plot lies on the diagonal. Figure 1 shows several calibration plots of two models with bins each of size 0.05 (the bins have been smoothed over in these figures). The absolute difference between the diagonal and the observed plot, scaled by the bin weight, is called expected calibration error (ECE). ECE considers only the highest-scoring prediction from each P_t(y), but since beam search reasons over the probabilities of multiple high-scoring tokens, we extended ECE to a weighted version that measures calibration of the entire distribution. We describe ECE and weighted ECE more formally below, and also provide an example to motivate the use of our weighted ECE metric for structured prediction tasks.
2.2.1 Expected Calibration Error (ECE)
ECE is defined when a model makes a single prediction with an associated confidence. In the case of scalar prediction, or when considering just the top-most token in structured prediction tasks, the prediction is ŷ_t = argmax_y P_t(y), with P_t(ŷ_t) as its confidence. Let δ_t denote whether ŷ_t matches the correct label at step t.

First partition the confidence interval [0, 1] into M equal bins I_1, …, I_M. Then, in each bin, measure the absolute difference between the accuracy and the confidence of the predictions in that bin. This gives the expected calibration error (ECE) as:

  ECE = (1/N) ∑_{b=1}^{M} | ∑_{t ∈ S_b} (δ_t − P_t(ŷ_t)) |    (5)

where N is the total number of output tokens (i.e. the total number of scalar predictions made) and S_b is the set of predictions whose confidence falls in bin I_b. Since beam search reasons over the probabilities of multiple high-scoring tokens, we wish to calibrate the entire distribution. If V is the vocabulary size, we care to calibrate all V predicted probabilities at each step. A straightforward use of ECE that treats these as independent scalar predictions is incorrect and uninformative.
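A minimal sketch of the standard (top-1) ECE computation, assuming predictions are summarized as a list of confidences and correctness flags (the function and variable names are ours):

```python
def expected_calibration_error(confidences, correct, num_bins=20):
    """Standard (top-1) ECE: bin predictions by confidence, then sum the
    |accuracy - avg confidence| gaps weighted by each bin's share of data."""
    n = len(confidences)
    bins = [[] for _ in range(num_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * num_bins), num_bins - 1)  # clamp conf == 1.0
        bins[idx].append((conf, ok))
    ece = 0.0
    for members in bins:
        if not members:
            continue
        avg_conf = sum(c for c, _ in members) / len(members)
        accuracy = sum(ok for _, ok in members) / len(members)
        ece += (len(members) / n) * abs(accuracy - avg_conf)
    return ece
```

For example, two predictions at confidence 0.95 of which one is wrong, plus one correct prediction at confidence 0.55, give an ECE of 0.45 with bins of size 0.05.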
2.2.2 Weighted Expected Calibration Error (Weighted ECE)
Weighted ECE is given by the following formula (symbols have the same meanings as in the rest of the paper):

  weighted ECE = (1/N) ∑_{b=1}^{M} | ∑_{t} ∑_{y : P_t(y) ∈ I_b} P_t(y) (δ(y = y*_t) − P_t(y)) |
We motivate this definition as applying ECE to a classifier that predicts label y with probability proportional to its confidence P_t(y), instead of deterministically predicting the highest-scoring label.

Example.
This example highlights how weighted ECE calibrates the full distribution. Consider two distributions P_1 and P_2 over a vocabulary V of size 3, and for both let the first label be the correct one. Clearly P_1, which assigns the correct label a probability of 0.4, is better calibrated than P_2. But the ECE of both is the same, since for both the highest-scoring prediction (label 3) is incorrect. In contrast, with bins of size 0.1, the weighted ECE of P_1 is lower than the 0.5 of P_2. Such fine-grained distinctions are important for beam search and any other structured search algorithm with a large search space. In the rest of the paper we use ECE to denote weighted ECE.
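The following sketch implements weighted ECE as motivated above: every vocabulary entry contributes a "prediction" whose weight equals its probability, as if the classifier sampled its output. The binning details and the toy distributions below are our own illustration, not the paper's exact numbers:

```python
def weighted_ece(distributions, correct_labels, num_bins=10):
    """Weighted ECE: each token y contributes a prediction with confidence
    P(y), weighted by P(y) itself. Per bin we compare mass-weighted
    accuracy against mass-weighted confidence. (A sketch of the metric
    described in the text; exact binning details are our assumption.)"""
    n = len(distributions)
    # Per bin: [total mass, mass on correct labels, mass-weighted confidence]
    bins = [[0.0, 0.0, 0.0] for _ in range(num_bins)]
    for dist, y_star in zip(distributions, correct_labels):
        for y, p in enumerate(dist):
            if p <= 0.0:
                continue
            b = min(int(p * num_bins), num_bins - 1)
            bins[b][0] += p                      # bin mass
            bins[b][1] += p * (y == y_star)      # mass that was correct
            bins[b][2] += p * p                  # mass-weighted confidence
    ece = 0.0
    for mass, corr, conf in bins:
        if mass > 0.0:
            # |weighted accuracy - weighted confidence|, weighted by mass
            ece += mass * abs(corr / mass - conf / mass)
    return ece / n

# Two illustrative steps over a vocabulary of size 3; the correct label
# differs per step, so the whole distribution matters, not just the argmax.
toy_ece = weighted_ece([[0.62, 0.23, 0.15], [0.62, 0.23, 0.15]], [0, 1])
```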
2.3 Importance of Calibration
In scalar classification models, calibration as a goal distinct from accuracy maximization is motivated primarily by the interpretability of the confidence scores. In fact, the widely adopted fix for miscalibration, called temperature scaling, which scales the entire distribution by a constant temperature parameter T as softmax(z/T), does not change the relative ordering of the probabilities, and thus leaves classification accuracy unchanged. For sequence prediction models, we show that calibration of the token distribution is also important for the sound working of the beam-search inference algorithm. Consider an example: say we have an input sequence for which the correct two-token output is "That's awesome", and the model outputs a miscalibrated first-token distribution that is over-confident on the competing token "It's", while still ranking "That's" first. Assume that at step t = 2 the model is calibrated: conditioned on "It's" it places high probability on "ok", while conditioned on "That's" the probability of "awesome" is more moderate. The highest-probability two-token prediction from the miscalibrated model is then "It's ok", whereas from the calibrated model it is "That's awesome". Thus the accuracy of the miscalibrated model is 0 while that of the calibrated model is 1, even though the relative ordering of token probabilities is the same in both. This example also shows that with beam size 1 we would get the correct prediction, although with a lower score, whereas the higher-scoring prediction obtained at beam size 2 is wrong.
More generally, increasing the beam size almost always outputs a higher-scoring prefix, but if the score is not calibrated that does not guarantee more correct outputs. The more prefixes there are with over-confident (miscalibrated) scores, the higher the chance of overshadowing the true best prefix at the next step, causing accuracy to drop with increasing beam size. This observation is validated on real data too, where we observe that improving calibration curtails the BLEU drop with increasing beam size.
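The two-token example can be made concrete with a small simulation; the probability values below are our own illustrative choices, not taken from the paper:

```python
from itertools import product

# Correct output: ("That's", "awesome"). First-token probabilities need not
# sum to 1 (remaining mass sits on other, irrelevant tokens).
SECOND = {"That's": {"awesome": 0.6, "ok": 0.4},
          "It's":   {"awesome": 0.1, "ok": 0.9}}

def seq_prob(first_probs, y1, y2):
    return first_probs[y1] * SECOND[y1][y2]

def best_seq(first_probs, beam):
    # Two-step beam search: keep `beam` first tokens, expand, take the max.
    prefixes = sorted(first_probs, key=first_probs.get, reverse=True)[:beam]
    cands = [(seq_prob(first_probs, y1, y2), (y1, y2))
             for y1, y2 in product(prefixes, ["awesome", "ok"])]
    return max(cands)[1]

miscalibrated = {"That's": 0.40, "It's": 0.35}   # over-confident on "It's"
calibrated    = {"That's": 0.55, "It's": 0.20}   # same ordering, wider gap
```

With beam size 1 the miscalibrated model still gets "That's awesome", but widening the beam to 2 lets the over-confident "It's ok" (0.35 × 0.9 = 0.315) overtake it (0.40 × 0.6 = 0.24), while the calibrated model is unaffected.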
3 Calibration of existing models
We next study the calibration of six state-of-the-art publicly available pre-trained NMT models on various WMT+IWSLT benchmarks. The first five are from Tensorflow's NMT codebase (Luong et al., 2017; https://github.com/tensorflow/nmt#benchmarks): En-De GNMT (4 layers), En-De GNMT (8 layers), De-En GNMT, De-En NMT, and En-Vi NMT. They all use multi-layered LSTMs arranged either in the GNMT architecture (Wu et al., 2016) or the standard NMT architecture (Bahdanau et al., 2015). The sixth, En-De T2T, is the pre-trained Transformer model (https://github.com/tensorflow/tensor2tensor/tree/master/tensor2tensor; pre-trained model at https://goo.gl/wkHexj). We use T2T and Transformer interchangeably. The T2T replaces LSTMs with self-attention (Vaswani et al., 2017) and uses multiple attention heads, each with its own attention vector.
Figure 1 shows calibration as reliability plots, where the x-axis is average weighted confidence and the y-axis is average weighted accuracy. The blue lines are for the original models and the red lines are after our fixes (to be ignored in this section). The figure also shows the calibration error (ECE). We observe that all six models are miscalibrated to various degrees, with ECE ranging from 2.9 to 9.8. For example, in the last bin of En-Vi NMT the average predicted confidence is 0.98 whereas its true accuracy is only 0.82. Five of the six models are overly confident. The Transformer model attempts to counter over-confidence by using a soft cross-entropy loss that assigns a probability of 1 − ε to the correct label and ε/(|V| − 1) to all others:

  min_θ  − ∑_{t} ∑_{y ∈ V} q(y) log P(y_t = y | y_{<t}, x, θ),  where q(y) = 1 − ε if y = y*_t, else ε/(|V| − 1)    (6)
With this loss, the overconfidence changes to slight underconfidence. While an improvement over the pure NLL training, we will show how to enhance its calibration even further.
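A sketch of the smoothed cross-entropy loss of Eq. 6; the value ε = 0.1 is an assumed setting for illustration:

```python
import math

def smoothed_cross_entropy(pred_probs, correct_idx, eps=0.1):
    """Soft cross-entropy with the smoothed target of Eq. 6:
    1 - eps on the correct label, eps/(|V|-1) on every other label.
    (eps = 0.1 is an assumed illustrative setting.)"""
    V = len(pred_probs)
    loss = 0.0
    for y, p in enumerate(pred_probs):
        q = (1.0 - eps) if y == correct_idx else eps / (V - 1)
        loss -= q * math.log(p)
    return loss
```

Because the target itself is a distribution, the loss is minimized by matching it rather than by driving the correct-label probability to 1, which is why this loss discourages over-confidence.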
This observed miscalibration was surprising, given that the tokens are conditioned on the true previous tokens (teacher forcing). We were expecting biases when conditioning on predicted previous tokens, because that leads to what is called "exposure bias" (Bengio et al., 2015; Ranzato et al., 2016; Norouzi et al., 2016; Wiseman and Rush, 2016). In teacher forcing, the test scenario matches the training scenario, where the NLL training objective (Eq. 4) is statistically sound: it is minimized when the fitted distribution matches the true data distribution (Hastie et al., 2001).
3.1 Reasons for Miscalibration
In this section we seek out reasons for the observed miscalibration of modern NMT models. For scalar classification, Guo et al. (2017) discuss reasons for the poor calibration of modern neural networks (NNs). A primary reason is that their high capacity causes the negative log-likelihood (NLL) to overfit without overfitting the 0/1 error (Zhang et al., 2017). We show that for sequence-to-sequence learning models based on attention and with large vocabularies, a different set of reasons comes into play. We identify three of these. While they are not the only reasons, we show that correcting them improves calibration and partly fixes other symptoms of miscalibrated models.
3.2 Poor calibration of EOS token
To investigate further, we drill down to token-wise calibration. Figure 2 shows the plots for EOS, three other frequent tokens, and the rest, for four models. Surprisingly, EOS is calibrated very poorly, much worse than the overall calibration plots in Figure 1 and than the other frequent tokens. For the NMT and GNMT models the EOS probability is over-estimated, and for T2T it is under-estimated. For instance, for the En-De GNMT(4) model (top row, first column in Figure 2), of all EOS predictions with confidence in the [0.9, 0.95] bin, only 60% are correct. Perhaps these encoder-decoder style models do not harness enough signals to reliably model the end of a sequence. One such important signal is coverage of the input sequence. While coverage has been used heuristically in beam-search inference (Wu et al., 2016), we propose a more holistic fix of the entire distribution using coverage as one of the features in Section 4.

3.3 Uncertainty of Attention
We conjectured that a second reason for over-confidence could be the uncertainty of attention. A well-calibrated model must express all sources of prediction uncertainty in its output distribution. Existing attention models average out the attention uncertainty of α_t in the input context c_t (Eq. 3). Thereafter, the uncertainty of α_t has no influence on the output distribution. We had conjectured that this would manifest as worse calibration for high-entropy attentions α_t, and this is what we observed empirically. In Table 1 we show ECE partitioned by whether the entropy of α_t is high or low, on five models (we drop the T2T model since measuring attention uncertainty is unclear in the context of multiple attention heads). Observe that ECE is generally higher for high-entropy attention.

Table 1: ECE partitioned by attention entropy.
Model Name      | Low  | High
En-Vi NMT       | 9.0  | 13.0
En-De GNMT(4)   | 4.5  | 5.3
En-De GNMT(8)   | 4.8  | 5.4
De-En GNMT      | 3.8  | 2.3
De-En GNMT      | 3.9  | 5.9
De-En NMT       | 2.3  | 4.1
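The entropy split used in Table 1 can be sketched as follows; the threshold separating "low" from "high" entropy is our own illustrative choice, as the paper's split criterion is not given here:

```python
import math

def attention_entropy(alpha):
    """Shannon entropy of an attention distribution over input positions."""
    return -sum(a * math.log(a) for a in alpha if a > 0)

def split_by_entropy(attentions, threshold):
    """Partition decoding-step indices into low/high attention entropy.
    The threshold is an assumed illustration."""
    low  = [i for i, a in enumerate(attentions) if attention_entropy(a) <= threshold]
    high = [i for i, a in enumerate(attentions) if attention_entropy(a) > threshold]
    return low, high
```

Uniform attention over k positions has the maximum entropy log k, while attention peaked on a single position has entropy near 0.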
3.4 Head versus Tail tokens
The large vocabulary and the softmax bottleneck (Yang et al., 2018b) was another candidate reason we investigated. We studied the calibration of tail predictions (those made with low probability), in contrast to the head, within a given softmax distribution. In Figure 3(a), for different thresholds of log probability (x-axis), we show the total true accuracy (red) and total predicted confidence (blue) for all predictions with confidence below the threshold. In Figure 3(b) we show the same for head predictions, i.e. those with confidence above the threshold. The first two models (GNMT/NMT) under-estimate the tail (low) probabilities while over-estimating the head. The T2T model shows the opposite trend. This shows that miscalibration manifests across the entire softmax output, and it motivates a method of recalibration that is sensitive to the output token probability.
4 Reducing Calibration Errors
For modern neural classifiers, Guo et al. (2017) compare several post-training fixes and find temperature scaling to provide the best calibration without dropping accuracy. This method chooses a positive temperature value T and transforms the output distribution as softmax(z/T), where z are the logits. The optimal T is obtained by minimizing NLL on a held-out validation dataset.
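A minimal sketch of single-temperature scaling, with the gradient-based NLL fit replaced by a grid search (the grid and the toy data are our choices):

```python
import math

def softmax(logits, T=1.0):
    """softmax(z/T); subtracting the max first is only for stability."""
    m = max(logits)
    exps = [math.exp((z - m) / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fit_temperature(logit_sets, correct_idxs, grid=None):
    """Pick the single temperature minimizing validation NLL.
    A grid search stands in for the usual gradient-based fit."""
    grid = grid or [0.5 + 0.05 * i for i in range(51)]  # T in [0.5, 3.0]
    def nll(T):
        return -sum(math.log(softmax(z, T)[i])
                    for z, i in zip(logit_sets, correct_idxs))
    return min(grid, key=nll)
```

On a toy validation set where an over-confident model (confidence about 0.96) is right only 3 times out of 4, the fitted temperature comes out well above 1, flattening the distribution; T = 1 would leave it unchanged.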
Our investigation in Section 3.1 showed that the calibration of different tokens in different input contexts varies significantly. We propose an alternative method in which the temperature value is not constant but varies based on the entropy of the attention, the log probability (logit) of the token, the token's identity (EOS or not), and the input coverage. At the t-th decoding step, let H_t denote the entropy of the attention vector α_t and let l_{ty} denote the logit for token y at step t. We measure coverage c_t as the fraction of input tokens whose cumulative attention until step t exceeds a threshold δ. Using these signals, we compute the (inverse of the) temperature for scaling token y at step t in two stages. We first correct the extreme miscalibration of EOS by learning a multiplicative correction as a function of the input coverage c_t; this term helps dampen the EOS probability when input coverage is low, and its parameters a and b are learned. Next, we correct for overall miscalibration by using a neural network to learn variable temperature values as a product of two functions g_1 and g_2 with parameters φ. For each of g_1 and g_2 we use a 2-layered feed-forward network with hidden ReLU activations, three units per hidden layer, and a sigmoid output activation producing values in the range (0, 1). Since the T2T model under-estimates probabilities, we found that learning was easier if we added 1 to the sigmoid outputs of g_1 and g_2 before multiplying them to compute the temperature. We learn the parameters (including a and b) by minimizing NLL on the temperature-adjusted logits using a held-out validation set. This validation set was created as a 1:1 mixture of 2000 examples sampled from the train and dev sets. The dev and test distributions are quite different for the WMT+IWSLT datasets, so we used a mixture of the dev and train sets for temperature calibration rather than the dev set alone, for generalizability: calibrating only on the dev set risks overfitting the temperature model to that particular distribution, whereas a mixture of dev and train prevents this overfitting and yields a model more likely to generalize to a third (test) distribution.
5 Experimental Results
We first show that our method significantly reduces calibration error on the test set. Then we present two outcomes of a better calibrated model: (1) higher accuracy, and (2) a reduced BLEU drop with increasing beam size.
5.1 Reduction in Calibration Error
Figure 1 shows that our method (shown in red) reduces miscalibration of all six models — in all cases our model is closer to the diagonal than the original. We manage to both reduce the underestimation of the T2T model and the overconfidence of the NMT and GNMT models.
We compare the ECE of our recalibration method to the single-temperature method in Table 2 (column ECE). Note that the single temperature is selected using the same validation dataset as ours. Our ECE is lower, particularly for the T2T model. We show next that our more informed recalibration has several other benefits.
Table 2: ECE and BLEU of the base models, our recalibration, and single-temperature scaling (T).

Model Name               | ECE: Base | Our | T   | BLEU: Base | Our | T
En-Vi NMT                | 9.8 | 3.5 | 3.8 | 26.2 | 26.6 | 26.0
En-De GNMT4              | 4.8 | 2.4 | 2.7 | 26.8 | 26.8 | 26.7
En-De GNMT8              | 5.0 | 2.2 | 2.1 | 27.6 | 27.5 | 27.4
De-En GNMT               | 3.3 | 2.2 | 2.3 | 29.6 | 29.9 | 29.6
De-En GNMT (length norm) |  -  |  -  |  -  | 29.9 | 30.1 | 30.1
De-En NMT                | 3.5 | 2.0 | 2.2 | 28.8 | 29.0 | 28.7
T2T En-De                | 2.9 | 1.7 | 5.4 | 27.9 | 28.1 | 28.1
T2T En-De (B=4)          |  -  |  -  |  -  | 28.3 | 28.3 | 28.2
5.2 An interpretable measure of whole sequence calibration
For structured outputs as in translation, the probability of a whole sequence is often very small and is an uninterpretable function of the output length and the source sentence difficulty. In general, designing a good calibration measure for structured outputs is challenging. Nguyen and O'Connor (2015) propose to circumvent the problem by reducing structured calibration to the calibration of marginal probabilities over single variables. This works for tractable joint distributions like chain CRFs and HMMs. For modern NMT systems that assume full dependency, such marginalization is neither tractable nor useful. We instead propose a measure of calibration in terms of the BLEU score rather than structured probabilities. We define this measure using BLEU, but any other scoring function, including gBLEU and Jaccard, is easily substitutable.

We define the model-expected BLEU of a prediction ŷ as the value of BLEU expected if true label sequences were sampled from the predicted distribution:

  E-BLEU(ŷ) = E_{y ~ P(y|x, θ)} [BLEU(ŷ, y)] ≈ (1/m) ∑_{i=1}^{m} BLEU(ŷ, y^(i))    (7)

where y^(1), …, y^(m) denote samples from P(y|x, θ). (We could also treat the sequences obtained from beam search with a large beam width as samples, unless they are very similar, and adjust the estimator by importance weights. We observed that explicit sampling and re-weighted estimates with beam-searched sequences give similar results.)

It is easy to see that if P(y|x, θ) is perfectly calibrated, the model-expected BLEU will match the actual BLEU on the true label sequence in expectation. That is, if we consider all predictions with a given model-expected BLEU, then the actual BLEU over them will, on average, equal that value when the model is well-calibrated. This is much like ECE for scalar classification, except that instead of matching 0/1 accuracy with confidence, we match actual BLEU with expected BLEU. We refer to this as Structured ECE in our results (Table 3).
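The Monte-Carlo estimate of Eq. 7 can be sketched as follows, with a toy unigram-F1 scorer standing in for sentence-level BLEU (the sampler and scorer here are illustrative, not the paper's implementation):

```python
import random

def unigram_f1(ref, hyp):
    """Toy stand-in for sentence BLEU: unigram F1 overlap. Any sequence
    scorer (BLEU, gBLEU, Jaccard) can be substituted."""
    if not ref or not hyp:
        return 0.0
    common = 0
    pool = list(ref)
    for tok in hyp:
        if tok in pool:
            pool.remove(tok)
            common += 1
    p = common / len(hyp)
    r = common / len(ref)
    return 2 * p * r / (p + r) if p + r else 0.0

def expected_score(prediction, sampler, num_samples=2000, seed=0):
    """Monte-Carlo estimate of E_{y ~ P(y|x)}[score(y, prediction)]:
    the model-expected 'BLEU' of Eq. 7."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_samples):
        total += unigram_f1(sampler(rng), prediction)
    return total / num_samples
```

For a toy model that outputs ["a", "b"] with probability 0.5 and ["a", "c"] otherwise, the expected score of the prediction ["a", "b"] is 0.5 · 1.0 + 0.5 · 0.5 = 0.75, and the Monte-Carlo estimate converges to that value.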
Figure 4 shows the binned values of model-expected BLEU (x-axis) against average actual BLEU (y-axis) for the WMT+IWSLT tasks, for the baseline model and after recalibration (solid lines). In the same plot we show the density (fraction of all points) in each bin for each method. We use Monte-Carlo samples to estimate expected BLEU. Table 3 shows the aggregated difference over these bins. We can make a number of observations from these results.

The calibrated model's BLEU plot is closer to the diagonal than the baseline's. Thus, for a calibrated model, the expected-BLEU values provide an interpretable notion of the quality of a prediction. The only exception is the T2T model. That model has very low entropy over token probabilities; its top 100 sequences are only slight variants of each other, and the samples are roughly identical. An interesting topic for future work is investigating why the T2T model is so sharply peaked compared to the other models.

The baseline and calibrated models' densities (shown dotted) are very different, with the calibrated model showing a remarkable shift to the low end. The trend in density agrees with the observed BLEU scores: higher density is observed towards the lower end.
Table 3: ECE, BLEU, and Structured ECE of the base models, our recalibration, and single-temperature scaling (T).

Model Name           | ECE: Base | Our | T   | BLEU: Base | Our | T   | Struct. ECE: Base | Our
En-Vi NMT            | 9.8 | 3.5 | 3.8 | 26.2 | 26.6 | 26.0 | 7.3  | 0.9
En-De GNMT4          | 4.8 | 2.4 | 2.7 | 26.8 | 26.8 | 26.7 | 5.8  | 3.4
En-De GNMT8          | 5.0 | 2.2 | 2.1 | 27.6 | 27.5 | 27.4 | 6.4  | 3.3
De-En GNMT           | 3.3 | 2.2 | 2.3 | 29.6 | 29.9 | 29.6 | 2.5  | 1.3
De-En GNMT (L-norm)  | 3.3 | 2.2 | 2.3 | 29.9 | 30.1 | 30.1 | 2.5  | 1.3
De-En NMT            | 3.5 | 2.0 | 2.2 | 28.8 | 29.0 | 28.7 | 4.0  | 2.4
T2T En-De            | 2.9 | 1.7 | 5.4 | 27.9 | 28.1 | 28.1 | 98.8 | 98.8
T2T En-De (B=4)      | 2.9 | 1.7 | 5.4 | 28.3 | 28.3 | 28.2 | 98.8 | 98.8
5.3 More Accurate Predictions
Unlike in scalar classification, where temperature scaling does not change accuracy, for structured outputs with beam-search-like inference, temperature scaling can lead to different MAP solutions. In Table 2 we show the BLEU scores of the different methods. These are with beam size 10, the default in the NMT codebase; for the T2T model we report BLEU with beam size 4, the default in the T2T codebase. In all models except De-En GNMT, using length normalization reduces test BLEU (on the dev set, length normalization improves BLEU, indicating a general difference between the test and dev sets in the WMT benchmarks). So we report BLEU without length normalization by default, and for De-En GNMT we report it with length normalization. The table shows that in almost all cases our informed recalibration improves inference accuracy. The gain from calibration is more than 0.3 BLEU on three models: En-Vi, De-En GNMT, and En-De T2T. Even with length normalization on De-En GNMT, calibration improves BLEU by 0.2. The increases are modest but significant, given that they come purely from tweaking the token calibration of an existing trained model using a small validation dataset.

Recalibration with a single fixed temperature, in contrast, actually hurts accuracy: in five of the six models, BLEU drops after temperature recalibration, even though the ECE reduction is comparable to ours. This highlights the importance of accounting for factors like coverage and attention entropy in achieving sound recalibration.
5.4 BLEU drop with increasing beamsize
One idiosyncrasy of modern NMT systems is the drop in BLEU score as inference is made more accurate through increasing beam size. In Table 4 we show the BLEU scores of the original models and our calibrated versions as the beam size increases from 10 to 80. These experiments are on the dev set, since the calibration was done on the dev set and we want to isolate the effect of calibration. BLEU drops much more with the original models than with the calibrated ones. For example, for En-De GNMT4, BLEU drops from 23.9 to 23.8 to 23.7 to 23.5 as the beam width is increased from 10 to 20 to 40 to 80, whereas after calibration it is more stable, going from 23.9 to 23.9 to 23.9 to 23.8. The BLEU drop is reduced but not totally eliminated, since we have not achieved perfect calibration. Length normalization can sometimes help stabilize this drop, but test BLEU is higher without length normalization on five of the six models. Also, length normalization is arguably a patch, since it is not used during training. Recalibration is a more principled fix that also provides interpretable scores as a byproduct.
Table 4: Dev-set BLEU at beam size 10 and the drop in BLEU (relative to B=10) at larger beam sizes.

Model          | B=10 | drop at B=20 | B=40 | B=80
En-Vi NMT      | 23.8 | 0.2 | 0.4 | 0.7
+ calibrated   | 24.1 | 0.2 | 0.2 | 0.4
En-De GNMT4    | 23.9 | 0.1 | 0.2 | 0.4
+ calibrated   | 23.9 | 0.0 | 0.0 | 0.1
En-De GNMT8    | 24.6 | 0.1 | 0.3 | 0.5
+ calibrated   | 24.7 | 0.1 | 0.4 | 0.6
De-En GNMT     | 28.8 | 0.2 | 0.3 | 0.5
+ calibrated   | 28.9 | 0.1 | 0.2 | 0.3
De-En NMT      | 28.0 | 0.1 | 0.4 | 0.6
+ calibrated   | 28.2 | 0.0 | 0.2 | 0.2
En-De T2T*     | 26.5 | 0.2 | 0.7 | 1.2
+ calibrated   | 26.6 | 0.1 | 0.3 | 0.4
6 Related Work
Calibration of scalar classification and regression models has been extensively studied. Niculescu-Mizil and Caruana (2005) systematically evaluated many classical models and found models trained on conditional likelihood, like logistic regression and the neural networks of the time, to be well-calibrated, whereas SVMs and naive Bayes were poorly calibrated. Nguyen and O'Connor (2015) corroborated this for NLP tasks. Many methods have been proposed for fixing calibration, including Platt scaling (Platt, 1999), isotonic regression (Zadrozny and Elkan, 2002), Bayesian binning (Naeini et al., 2015), and training regularizers like MMCE (Kumar et al., 2018). A principled option is to capture parameter uncertainty using Bayesian methods; recently these have been applied to DNNs using variational methods (Louizos and Welling, 2017), ensemble methods (Lakshminarayanan et al., 2017), and weight-perturbation-based training (Khan et al., 2018).

For modern neural networks, a recent systematic study (Guo et al., 2017) finds them to be poorly calibrated and finds temperature scaling to provide the best fix. We find that temperature scaling is inadequate for more complicated structured models where different tokens have very different dynamics. We propose a more precise fix derived from a detailed investigation of the root causes of the lack of calibration.
Going from scalar to structured outputs, Nguyen and O’Connor (2015) investigates calibration for NLP tasks like NER and CoRef on loglinear structured models like CRFs and HMMs. They define calibration on tokenlevel and edgelevel marginal probabilities of the model. Kuleshov and Liang (2015) generalizes this to structured predictions. But these techniques do not apply to modern NMT networks since each node’s probability is conditioned on all previous tokens making nodelevel marginals both intractable and useless.
Concurrently with our work, Ott et al. (2018) studied the uncertainty of neural translation models, with the main conclusion that existing models "spread too much probability mass across sequences". However, they do not provide a fix for the problem. Another concern is that their observations are based only on FairSeq's CNN-based model, whereas we experiment on a much larger set of architectures. Our initial measurements on a pre-trained En-Fr FairSeq model (downloaded from https://github.com/pytorch/fairseq, commit f6ac1aecb3329d2cbf3f1f17106b74ac51971e8a) found the model to be well-calibrated (also corroborated in their paper), unlike the six architectures we present here (which they did not evaluate). An interesting area of future work is to explore the reasons for this difference.
The problem of the drop in accuracy with increasing beam size, and the related length bias, has long puzzled researchers (Bahdanau et al., 2014; Sountsov and Sarawagi, 2016; Koehn and Knowles, 2017), and many heuristic fixes have been proposed, including the popular length normalization/coverage penalty (Wu et al., 2016), word reward (He et al., 2016), and bounded penalty (Yang et al., 2018a). These heuristics fix the symptoms by delaying the placement of the EOS token, whereas ours is the first paper to attribute this phenomenon to the lack of calibration. Our experiments showed that miscalibration is most severe for the EOS token, but it affects several other tokens too. Also, by fixing calibration we obtain more useful output probabilities, which is not possible with fixes that target only the BLEU-drop symptom.
7 Conclusion and Future work
Calibration is an important property to improve interpretability and reduce bias in any prediction model. For sequence prediction it is additionally important for sound functioning of beamsearch or any approximate inference method. We measured the calibration of six stateoftheart neural machine translation systems built on attentionbased encoderdecoder models using our proposed weighted ECE measure to quantify calibration of an entire multinomial distribution and not just the highest confidence token.
The token probabilities of all six NMT models were found to be surprisingly miscalibrated even when conditioned on true previous tokens. On digging into the reasons, we found the EOS token to be the worst calibrated. Also, positions with higher attention entropy had worse calibration.
We designed a parametric model to recalibrate as a function of input coverage, attention uncertainty, and token probability. We achieve a significant reduction in ECE and show that translation accuracy improves by as much as 0.4 BLEU when the right models are used to fix calibration, whereas existing single-temperature recalibration actually worsens accuracy. We show that improved calibration leads to greater correlation between probability and error, and this manifests as a reduced BLEU drop with increasing beam size. We further show that in our calibrated models the predicted BLEU is closer to the actual BLEU.
We have reduced, but not entirely eliminated, the miscalibration of modern NMT models. Perhaps the next round of fixes will emerge from a better training mechanism that achieves calibration at training time. The insights we have obtained about the relationship between coverage, attention uncertainty, and EOS calibration should also be useful for better training.
Acknowledgements
We thank all anonymous reviewers for their comments, and the members of our group at IIT Bombay at the time this research was carried out for discussions. Sunita Sarawagi gratefully acknowledges the support of NVIDIA Corporation for Titan X GPUs.
References
 Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.
 Bahdanau et al. (2015) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. ICLR.
 Bengio et al. (2015) Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In NIPS.
 Candela et al. (2005) Joaquin Quiñonero Candela, Carl Edward Rasmussen, Fabian H. Sinz, Olivier Bousquet, and Bernhard Schölkopf. 2005. Evaluating predictive uncertainty challenge. In Machine Learning Challenges, Evaluating Predictive Uncertainty, Visual Object Classification and Recognizing Textual Entailment, First PASCAL Machine Learning Challenges Workshop, MLCW 2005, Southampton, UK, April 11–13, 2005, Revised Selected Papers, pages 1–27.
 Crowson et al. (2016) Cynthia S Crowson, Elizabeth J Atkinson, and Terry M Therneau. 2016. Assessing calibration of prognostic risk scores. Statistical Methods in Medical Research, 25(4):1692–1706.
 Guo et al. (2017) Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6–11 August 2017, pages 1321–1330.
 Hastie et al. (2001) Trevor Hastie, Robert Tibshirani, and Jerome Friedman. 2001. The Elements of Statistical Learning. Springer Series in Statistics. Springer New York Inc., New York, NY, USA.
 He et al. (2016) Wei He, Zhongjun He, Hua Wu, and Haifeng Wang. 2016. Improved neural machine translation with SMT features. In AAAI.
 Hendrycks and Gimpel (2017) Dan Hendrycks and Kevin Gimpel. 2017. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR.
 Khan et al. (2018) Mohammad Emtiyaz Khan, Didrik Nielsen, Voot Tangkaratt, Wu Lin, Yarin Gal, and Akash Srivastava. 2018. Fast and scalable Bayesian deep learning by weight-perturbation in Adam. In Proceedings of the 35th International Conference on Machine Learning, ICML.
 Koehn (2017) Philipp Koehn. 2017. Neural machine translation. CoRR, abs/1709.07809.
 Koehn and Knowles (2017) Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation. Association for Computational Linguistics.
 Kuleshov et al. (2018) Volodymyr Kuleshov, Nathan Fenner, and Stefano Ermon. 2018. Accurate uncertainties for deep learning using calibrated regression. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10–15, 2018, pages 2801–2809.
 Kuleshov and Liang (2015) Volodymyr Kuleshov and Percy S Liang. 2015. Calibrated structured prediction. In NIPS, pages 3474–3482.
 Kumar et al. (2018) Aviral Kumar, Sunita Sarawagi, and Ujjwal Jain. 2018. Trainable calibration measures from kernel mean embeddings. In ICML.
 Lakshminarayanan et al. (2017) Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. 2017. Simple and scalable predictive uncertainty estimation using deep ensembles. In NIPS, pages 6405–6416.
 Louizos and Welling (2017) Christos Louizos and Max Welling. 2017. Multiplicative normalizing flows for variational Bayesian neural networks. In ICML, volume 70, pages 2218–2227.
 Luong et al. (2017) Minh-Thang Luong, Eugene Brevdo, and Rui Zhao. 2017. Neural machine translation (seq2seq) tutorial. https://github.com/tensorflow/nmt.
 Naeini et al. (2015) Mahdi Pakdaman Naeini, Gregory F. Cooper, and Milos Hauskrecht. 2015. Obtaining well calibrated probabilities using Bayesian binning. In AAAI.
 Nguyen and O’Connor (2015) Khanh Nguyen and Brendan O’Connor. 2015. Posterior calibration and exploratory analysis for natural language processing models. In EMNLP, pages 1587–1598.
 Niculescu-Mizil and Caruana (2005) Alexandru Niculescu-Mizil and Rich Caruana. 2005. Predicting good probabilities with supervised learning. In ICML.
 Norouzi et al. (2016) Mohammad Norouzi, Samy Bengio, Zhifeng Chen, Navdeep Jaitly, Mike Schuster, Yonghui Wu, and Dale Schuurmans. 2016. Reward augmented maximum likelihood for neural structured prediction. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 1723–1731. Curran Associates, Inc.
 Ott et al. (2018) Myle Ott, Michael Auli, David Grangier, and Marc’Aurelio Ranzato. 2018. Analyzing uncertainty in neural machine translation. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 3956–3965.
 Pereyra et al. (2017) Gabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, and Geoffrey E. Hinton. 2017. Regularizing neural networks by penalizing confident output distributions. ICLR workshop.
 Platt (1999) John C. Platt. 1999. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In Advances in Large Margin Classifiers, pages 61–74. MIT Press.
 Pleiss et al. (2017) Geoff Pleiss, Manish Raghavan, Felix Wu, Jon M. Kleinberg, and Kilian Q. Weinberger. 2017. On fairness and calibration. In NIPS, pages 5684–5693.
 Ranzato et al. (2016) M Ranzato, S Chopra, M Auli, and W Zaremba. 2016. Sequence level training with recurrent neural networks. ICLR.
 Sountsov and Sarawagi (2016) Pavel Sountsov and Sunita Sarawagi. 2016. Length bias in encoder decoder models and a case for global conditioning. In EMNLP.
 Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.
 Wiseman and Rush (2016) Sam Wiseman and Alexander M. Rush. 2016. Sequence-to-sequence learning as beam-search optimization. In EMNLP.
 Wu et al. (2016) Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.
 Yang et al. (2018a) Yilin Yang, Liang Huang, and Mingbo Ma. 2018a. Breaking the beam search curse: A study of (re)scoring methods and stopping criteria for neural machine translation. In EMNLP.
 Yang et al. (2018b) Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W. Cohen. 2018b. Breaking the softmax bottleneck: A highrank RNN language model. In International Conference on Learning Representations.
 Zadrozny and Elkan (2002) Bianca Zadrozny and Charles Elkan. 2002. Transforming classifier scores into accurate multiclass probability estimates. In ACM SIGKDD.
 Zhang et al. (2017) Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. 2017. Understanding deep learning requires rethinking generalization. ICLR.