CIFT: Crowd-Informed Fine-Tuning to Improve Machine Learning Ability

02/27/2017 ∙ by John P. Lalor, et al. ∙ University of Massachusetts Amherst ∙ Vanderbilt University ∙ University of Massachusetts Medical School

Item Response Theory (IRT) allows for measuring the ability of Machine Learning models as compared to a human population. However, it is difficult to create a large dataset to train the ability of deep neural network models (DNNs). We propose Crowd-Informed Fine-Tuning (CIFT) as a new training process, where a pre-trained model is fine-tuned with a specialized supplemental training set obtained via IRT model-fitting on a large set of crowdsourced response patterns. With CIFT we can leverage the specialized set of data obtained through IRT to inform parameter tuning in DNNs. We experiment with two loss functions in CIFT to represent (i) memorization of fine-tuning items and (ii) learning a probability distribution over potential labels that is similar to the crowdsourced distribution over labels to simulate crowd knowledge. Our results show that CIFT improves ability for a state-of-the-art DNN model for Recognizing Textual Entailment (RTE) tasks and is generalizable to a large-scale RTE test set.




1 Introduction

In Machine Learning (ML) classification tasks a model is trained on a set of labeled data and optimized based on some loss function. The training data consists of a feature set X and associated labels y, where y is a vector of integers corresponding to the classes of the problem. Typically we assume that each training example is labeled correctly, and that each is equally appropriate for a single class. There is no way to quantify the uncertainty of the examples, nor a way to exploit such uncertainty during training. Particularly for NLP tasks with sentence- or phrase-based classification such as Natural Language Inference (NLI), it is not common to model ambiguity in language in training data labels.

For example, consider the following two premise-hypothesis pairs, both taken from the Stanford Natural Language Inference (SNLI) corpus for NLI [Bowman et al.2015]:

  1. Premise: Two men and a woman are inspecting the front tire of a bicycle.
    Hypothesis: There are a group of people near a bike.

  2. Premise: A young boy in a beige jacket laughs as he reaches for a teal balloon.
    Hypothesis: The boy plays with the balloon.

In both cases the gold-standard label in the SNLI data set is entailment, which is to say that if we assume that the premise is true, one can infer that the hypothesis is also true. However, looking at the two sentence pairs one could argue that they do not both equally describe entailment. The first example is a clear case: people inspecting a front tire of a bike are almost certainly standing near it. The second example is less clear. Is the child laughing because he is playing? Or is he laughing for some other reason, and simply grabbing for the balloon to hold it (or give it to someone else)? There is ambiguity associated with the two examples that is not captured in the data. To a machine learning model trained on SNLI, both examples are to be classified as entailment, and incorrect classifications should be penalized equally during learning.

Previous work has shown that leveraging crowd disagreements can improve the performance of named entity recognition (NER) models by treating disagreement not as noise but as signal [Inel and Aroyo2017]. We use the same assumption here and encode crowd disagreements directly into the model training data in the form of a distribution over labels (“soft labels”). These soft labels model uncertainty in training by representing human ambiguity in the class labels. Ideally we would have soft labels for all of our training data; however, when training large deep learning models it is prohibitively expensive to collect many annotations for every example in the huge datasets required for training. In this work we show that even a small amount of soft labeled data can improve generalization. This is the first work to fine-tune a deep neural network with soft labels from crowd annotations for a natural language processing (NLP) task.

With this in mind we propose soft label memorization-generalization (SLMG), a fine-tuning approach to training that uses distributions over labels for a subset of data as a supplemental training set for a learning model. Ideally a model could be trained with soft labels for all training examples, but because of the costs involved, in this work we explore using a small number of examples for fine-tuning on top of a larger data set. We seek to understand the effect of including more informative labels as part of training.

Our hypothesis is that using labels that incorporate language ambiguity can improve model generalization in terms of test set accuracy, even for a small subset of the training data. By using a distribution over labels we hope to reduce overfitting by not pushing probabilities to 1 for items where the empirical distribution is more spread out. Our results show that SLMG is a simple and effective way to improve generalization without much additional training data.

We evaluate our approach on NLI (also known as Recognizing Textual Entailment or RTE) [Dagan, Glickman, and Magnini2006] using the SNLI data set [Bowman et al.2015]. Prior work has shown that lexical phenomena in the SNLI dataset can be exploited by classifiers without learning the task, and performance on difficult examples in the data set is still relatively poor, making NLI a still-open problem [Gururangan et al.2018, Poliak et al.2018, Lalor et al.2018]. For soft labeled data we use the IRT evaluation scales for NLI data [Lalor, Wu, and Yu2016], where each premise-hypothesis pair was labeled by 1000 AMT workers. This way we are able to leverage an existing source of soft labeled data without additional annotation costs. We find that SLMG can improve generalization under certain circumstances, even though the amount of soft labeled data used is tiny compared to the total training sets (0.03% of the SNLI training data set). SLMG outperforms the obvious but strong baseline of simply gathering more unseen data for labeling and training. Our results suggest that there are diminishing returns for simply adding more data past a certain point [Halevy, Norvig, and Pereira2009], and indicate that representing data uncertainty in the form of soft labels can have a positive impact on model generalization.

Our contributions are as follows: (i) We propose the SLMG framework for incorporating soft labels in machine learning training, (ii) We use previously-collected human annotated data to estimate soft label distributions for NLI and show that replacing less than 0.1% of training data with soft labeled data can improve generalization for three DNN models, and (iii) We demonstrate for the first time that soft labels can encode ambiguity in training data that can improve model generalization in terms of test set accuracy.

We will release our code upon publication.

2 Soft Label Memorization-Generalization

2.1 Overview

In a traditional supervised learning single-label classification problem, a model is trained on some data set D_train and tested on some test set D_test. In this setting, learning is done by minimizing some loss function L. We assume that the labels associated with instances in D_train are correct. That is, for each pair (x_i, y_i) in D_train we assume that y_i is the correct class for the i-th example, where x_i is the set of features associated with the i-th training example and y_i is the corresponding class. However it is often the case, particularly in NLP, that examples vary in terms of difficulty, ambiguity, and other characteristics that are not captured by the single correct class to which the example belongs. The traditional single-label classification task does not take this into account.

For example, a popular loss function for classification tasks is Categorical Cross-Entropy (CCE). For a single training example x with target distribution y over the set of possible classes C, CCE loss is defined as L = -sum_{c in C} y_c log p(c | x). In the single-class classification case, where a single class c* has probability 1, CCE loss reduces to L = -log p(c* | x), and each example loss is summed over all of the training examples. With this loss function a learning model is encouraged to update its parameters in order to maximize the probability of the correct class for each training example. Without some stopping criterion, parameter updates will continue for a given example until p(c* | x) = 1. This may not always be ideal: by pushing the model output probability to 1, the learner is encouraged to overfit on an example that may not be representative of the particular class.
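To make the loss concrete, the following sketch compares CCE under a one-hot target and under a crowd-style soft target. The probability values are illustrative, not taken from the paper:

```python
import math

def cce(target, predicted, eps=1e-12):
    """Categorical cross-entropy between a target distribution and model output."""
    return -sum(t * math.log(max(p, eps)) for t, p in zip(target, predicted))

# Model output for a single 3-class example (E, C, N).
pred = [0.70, 0.05, 0.25]

# Hard (one-hot) target: loss is minimized only when p(correct) -> 1.
hard = [1.0, 0.0, 0.0]
# Soft target estimated from crowd responses: loss is minimized when the
# model matches the crowd distribution, not when it saturates at 1.
soft = [0.66, 0.03, 0.31]

print(cce(hard, pred))  # -log(0.70) ≈ 0.357
print(cce(soft, pred))  # ≈ 0.755
```

Under the one-hot target the loss keeps decreasing all the way to p = 1; under the soft target the optimum is the crowd distribution itself.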

Premise | Hypothesis | E | C | N
A little boy is opening gifts surrounded by a group of children and adults. | The boy is being punished | 0.005 | 0.839 | 0.156
A man and woman walking away from a crowded street fair. | There are a group of men walking together. | 0.045 | 0.542 | 0.412
Two men and a woman are inspecting the front tire of a bicycle. | There are a group of people near a bike. | 0.861 | 0.032 | 0.108
A young boy in a beige jacket laughs as he reaches for a teal balloon. | The boy plays with the balloon. | 0.659 | 0.026 | 0.316
A man wearing a gray shirt waving in the middle of a plant nursery | The man does not have a way to get home. | 0.011 | 0.174 | 0.815
A wielder works on wielding a beam into place while other workers set beams. | The wielder is working on a building. | 0.486 | 0.013 | 0.501
Table 1: Examples of premise-hypothesis pairs from the SNLI data set and the AMT-estimated probability that the correct label is Entailment (E), Contradiction (C), or Neutral (N). The original gold-standard label from SNLI is bolded. In some cases, the gold label provided originally has a low probability based on AMT-population estimates (i.e. less than 75%).

With SLMG we want to take advantage of the fact that differences between examples in the same class can be useful during training. Instead of treating each training example as having a single correct class, SLMG uses a distribution over labels for the gold standard. This way examples with varying degrees of uncertainty are reflected during training.

We make a different assumption regarding noise in human generated labels than previous work [Dawid and Skene1979, Bachrach et al.2012]. The presence of noise when multiple labels are obtained is often attributed to labeler error, lack of expertise, adversarial actions, or other negative causes. However, we believe that the noise in the labels can be considered a signal [Inel et al.2014, Aroyo and Welty2015]. Examples with less uncertainty about the label (in the form of a label distribution with a single high peak) should be associated with similarly high model confidence.

2.2 Training with SLMG

In our experiments we investigated two ways to incorporate the soft labeled data into model training, which we define below. Let D_train be the original training set, let D_test be the test set, and let D_soft be the soft labeled training data with class probabilities. We investigate two ways to incorporate the D_soft data into a learning task: (i) at each training epoch, training with D_train and D_soft interspersed (SLMG-I), and (ii) training a model on D_train for a predefined number of epochs, followed by training on D_soft for a predefined number of epochs, repeated some number of times (meta-epochs) in a sequential fashion (SLMG-S). Algorithms 1 and 2 define the two training sequences, respectively. In our experiments we tested two loss functions for the soft labeled data: CCE (§2.1) and Mean Squared Error (MSE), L = sum_{c in C} (y_c - p(c | x))^2.

Interspersed Fine-Tuning

The motivation for interspersing fine-tuning with soft labels is to prevent overfitting as the model learns. After each epoch in the training cycle, the learning model will have made updates to the model weights according to the outputs on the full training set. By interspersing the fine-tuning after each epoch, our expectation is that we can account for and correct overfitting earlier in the process by making smaller updates to the model weights according to the soft label distributions. This method encourages generalization early in the process, before the model can memorize the training data and possibly overfit.

Sequential Fine-Tuning

In contrast with the interspersed fine-tuning, the motivation for sequential fine-tuning is to adjust a well-trained model to improve generalization. After a full training cycle of some number of epochs, the learning model is then fine-tuned using the soft-labeled data. This way the fine-tuning takes place after the model has learned a set of weights that perform well on the training data. Fine-tuning here can improve generalization by updating the model weights to be less extreme when dealing with examples that are more ambiguous than others. Since these updates happen on a trained model, there is less risk of the model performance drastically reducing. By repeating this process over a number of meta-epochs, the learning model can memorize, generalize, and repeat the cycle.

  Input: Model M, NumEpochs E, D_train, D_soft
  for e = 1 to E do
     Train M on D_train
     Train M on D_soft
  end for
Algorithm 1 SLMG-I Algorithm
  Input: Model M, NumMetaEpochs K, NumEpochs E_train and E_soft, D_train, D_soft
  for k = 1 to K do
     for e = 1 to E_train do
        Train M on D_train
     end for
     for e = 1 to E_soft do
        Train M on D_soft
     end for
  end for
Algorithm 2 SLMG-S Algorithm
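The two procedures can be sketched as follows. This is a minimal illustration, not the authors' implementation: TinySoftmax is a stand-in for the DNN models, and since its targets are distributions, hard labels enter as one-hot vectors while crowd-estimated soft labels are used unchanged:

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

class TinySoftmax:
    """Minimal 3-class linear classifier trained with cross-entropy on
    target distributions (one-hot for hard labels, crowd estimates for soft)."""
    def __init__(self, n_features, n_classes=3, lr=0.1):
        self.w = [[0.0] * n_features for _ in range(n_classes)]
        self.lr = lr

    def predict_proba(self, x):
        return softmax([sum(wi * xi for wi, xi in zip(row, x)) for row in self.w])

    def train_epoch(self, data):
        for x, target in data:
            p = self.predict_proba(x)
            # Gradient of cross-entropy w.r.t. the logits is (p - target).
            for c in range(len(self.w)):
                g = p[c] - target[c]
                for j in range(len(x)):
                    self.w[c][j] -= self.lr * g * x[j]

def slmg_i(model, d_train, d_soft, num_epochs):
    # Algorithm 1: alternate hard-label and soft-label training each epoch.
    for _ in range(num_epochs):
        model.train_epoch(d_train)
        model.train_epoch(d_soft)

def slmg_s(model, d_train, d_soft, meta_epochs, train_epochs, soft_epochs):
    # Algorithm 2: full training cycles followed by soft-label fine-tuning,
    # repeated over a number of meta-epochs.
    for _ in range(meta_epochs):
        for _ in range(train_epochs):
            model.train_epoch(d_train)
        for _ in range(soft_epochs):
            model.train_epoch(d_soft)

# Toy usage: two hard-labeled examples and one crowd soft-labeled example.
d_train = [([1.0, 0.0], [1.0, 0.0, 0.0]), ([0.0, 1.0], [0.0, 1.0, 0.0])]
d_soft = [([1.0, 1.0], [0.5, 0.3, 0.2])]
m = TinySoftmax(n_features=2)
slmg_i(m, d_train, d_soft, num_epochs=50)
```

The only structural difference between the two variants is when the soft-label updates happen relative to the hard-label epochs.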

2.3 Collecting Soft Labeled Data

For our NLI soft labeled data, we use data collected by [Lalor, Wu, and Yu2016]. 180 SNLI training examples, split evenly between the three labels, were randomly selected and given to Amazon Mechanical Turk (AMT) workers (Turkers) for additional labeling. For each example 1000 additional labels were collected. To estimate a distribution over labels for these examples we calculate the probability of a label as the proportion of humans that selected it: p(y = c) = n_c / N, where n_c is the number of times c was selected by the crowd and N is the total number of responses obtained.
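The estimate is a simple proportion. In this sketch the counts are hypothetical, chosen only to illustrate the computation:

```python
def soft_label(counts):
    """Estimate p(y = c) = n_c / N from crowd response counts."""
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# Hypothetical response counts for one premise-hypothesis pair (1000 annotators).
counts = {"entailment": 861, "contradiction": 32, "neutral": 107}
print(soft_label(counts))
# {'entailment': 0.861, 'contradiction': 0.032, 'neutral': 0.107}
```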

Table 1 shows example premise-hypothesis pairs taken from the SNLI data set for NLI [Bowman et al.2015]. Table 1 includes the premise and hypothesis sentences, the gold standard class as included in the data set, as well as soft labels estimated from the human responses obtained by [Lalor, Wu, and Yu2016]. There are premise-hypothesis pairs that share a class label (e.g. the first two examples) yet are very different in terms of how they are perceived by a crowd of human labelers. In a traditional setup both examples would have the same single class label, contradiction. Certain training examples have much less uncertainty associated with them, which is reflected in the high probability weight on the correct label. In other cases, there is a more evenly spread distribution, which can be interpreted as a higher degree of uncertainty. In a learning scenario, one may want to treat these examples differently according to their uncertainty, as opposed to the common practice of weighting each equally.

Consider calculating the entropy, H(p) = -sum_c p(c) log p(c), of the first two training examples from Table 1. If we assume that the probability of the correct label (in this case, contradiction) is 1, and the probability of all other labels is 0, then entropy in both cases is 0 (taking 0 log 0 = 0). However, if we use the distributions from Table 1, the entropy of the second example is substantially higher than that of the first. There is much more uncertainty in the second example than the first, which is not reflected if we assume that both examples are labeled contradiction with probability 1. This uncertainty may be important when learning for classification.
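The entropy comparison can be reproduced directly from the Table 1 distributions. The paper does not state the base of the logarithm, so this sketch assumes the natural log:

```python
import math

def entropy(p):
    """Shannon entropy with the convention 0 * log 0 = 0 (natural log assumed)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

one_hot = [0.0, 1.0, 0.0]        # hard label: no uncertainty
first = [0.005, 0.839, 0.156]    # first example from Table 1
second = [0.045, 0.542, 0.412]   # second example from Table 1

print(entropy(one_hot))  # 0.0
print(entropy(first))    # ≈ 0.464
print(entropy(second))   # ≈ 0.837
```

The second distribution is nearly twice as entropic as the first, even though both share the gold label contradiction.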

2.4 Learning from the Crowd

In this work we take advantage of the fact that we have a distribution over labels provided by the human labelers. We can train using CCE or MSE as our loss function, minimizing the difference between the probabilities estimated by the model and the empirical distributions obtained from AMT over the training examples. With SLMG we are attempting to move the model predictions closer to the soft label distribution of responses. We are not necessarily trying to push predicted probability values to 1, which is a departure from the standard understanding of single label classification in ML. Here we hypothesize that updating weights according to differences in the observed probability distributions will improve the model by preventing it from updating too much for more uncertain items (that is, examples where the empirical distribution is more evenly spread across the three labels).

This scenario assumes that the crowdsourced distribution of responses is a better measure of correctness than a single gold-standard label. We hypothesize that the crowd distribution over labels gives a fuller understanding of the items being used for training. SLMG can update parameters to move closer to this distribution without making large parameter updates under the assumption that a single correct label should have probability 1.

If we assume that ML performance is not at the level of an average human (which is reasonable in many cases), then SLMG can help pull models towards average human behavior when we use human annotations to generate the soft labels. If the model updates parameters to minimize the difference between predictions and the distribution of responses provided by AMT workers, then the model predictions should look like those of the crowd. When ML model performance is better than the average AMT user, there is a risk that performance may suffer: the model may have learned a set of parameters that models the data better than the human population does, and updating parameters to reflect the human distribution could lead to a drop in performance. However, since we only use SLMG as a fine-tuning mechanism, this risk is mitigated by the larger training set that we use alongside the soft labeled data.

3 Experiments

Our hypothesis is that soft labeled data, even in very small amounts, can improve model generalization by capturing ambiguity of language data in the form of distributions over labels. In this section we describe our experiments to test this hypothesis, as well as the data sets and models used in the experiments.

3.1 Models

For our experiments we tested three deep learning models: an LSTM RNN [Hochreiter and Schmidhuber1997, Bowman et al.2015] that was released with the original SNLI data set, a memory-augmented LSTM network [Munkhdalai and Yu2017], and a recently released hierarchical network with very strong performance on the SNLI task [Chen et al.2017]. Each model was trained according to the original parameters provided in the respective papers (due to space constraints, please refer to the original papers for descriptions of the model architectures). Word embeddings for all models were initialized with GloVe 840B 300D word embeddings [Pennington, Socher, and Manning2014].

Our first model is a re-implementation of the 100D LSTM model that was released with the original SNLI data set [Bowman et al.2015]. For the NLI task, the premise and hypothesis sentences were both passed through a 100D LSTM sequence embedding [Hochreiter and Schmidhuber1997]. The output embeddings were concatenated and fed through three 200D tanh layers, followed by a final softmax layer for classification. We implemented this model in DyNet [Neubig et al.2017].

Neural Semantic Encoder (NSE) [Munkhdalai and Yu2017] is a memory-augmented neural network. NSE uses read, compute, and write operations to maintain and update an external memory during training and outputs an encoding that is used for downstream classification tasks. We used the publicly available version of the NSE model released by the authors, implemented in Chainer [Tokui et al.2015], and followed the original NSE training parameters and hyperparameters [Munkhdalai and Yu2017].

The Enhanced Sequential Inference Model (ESIM) [Chen et al.2017] consists of three stages: (i) input premise and hypothesis encoding with BiLSTMs, (ii) local inference modeling with attention, and (iii) inference composition with a second BiLSTM encoding over the local inference information. We used the publicly available ESIM model released by the authors, implemented in Theano [Theano Development Team2016], and kept all of the hyperparameters the same as in the original paper.

3.2 Data

For NLI data we used the SNLI corpus [Bowman et al.2015]. SNLI is an order of magnitude larger than previously available NLI data sets (550k train/10k dev/10k test), and consists entirely of human-generated P-H pairs. SNLI is evenly split across three labels: entailment, contradiction, and neutral. SNLI is large, well-studied, and often used as a benchmark for new NLP models for NLI.

Premise | Hypothesis | Model | E | C | N
This church choir sings to the masses as they sing joyous songs from the book at a church. | The church is filled with song | B1 | 0.191 | 0.021 | 0.788
 | | SLMG-I-CCE | 0.520 | 0.028 | 0.452
A land rover is being driven across a river. | A sedan is stuck in the middle of a river. | B1 | 0.014 | 0.561 | 0.435
 | | SLMG-I-CCE | 0.011 | 0.241 | 0.749
Table 2: Examples of premise-hypothesis pairs from the SNLI data set and output probabilities from the LSTM model. For both examples the probabilities associated with the gold label are bolded.
Experiment | LSTM | NSE | ESIM
B1: Traditional | 76.7 | 84.6 | 87.7
B2: CLE | 76.9 | 84.8 | 87.1
B3: AOC | 75.7 | 84.0 | 87.7
SLMG-S-MSE | 76.5 | 84.1 | 87.7
SLMG-S-CCE | 77.4 | 85.1 | 87.6
SLMG-I-MSE | 76.9 | 84.3 | 87.8
SLMG-I-CCE | 76.7 | 84.4 | 87.9
Table 3: Test accuracy results for incorporating SLMG for NLI. Refer to §3.3 for descriptions of the baselines. Highest accuracy result for each model is bolded (one per column).

3.3 Baselines

We evaluate SLMG against three baselines: (i) B1, Traditional: we train the DNN models (§3.1) in a traditional supervised learning setup, where the soft labeled training data (D_soft) is incorporated into the hard labeled training data (D_train) with its original gold-standard labels; (ii) B2, Comparable Label Effort (CLE): because each of the 180 examples has 1000 human annotations, our second baseline adds new single-label training data to B1, to evaluate against a comparable data labeling effort. To that end, we randomly selected 180,000 additional training data points from the Multi-NLI data set [Williams, Nangia, and Bowman] for additional training data; (iii) B3, AOC: the third baseline is the All in one Classifier (AOC) approach proposed by [Kajino, Tsuboi, and Kashima2012], where for each example in D_soft, every label obtained from the crowd is used as a unique example in the training data. This baseline also has 180,000 additional training data points as in B2, but the additional pairs all come from D_soft and have varying labels depending on the crowd responses.
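The AOC construction can be sketched in a few lines. The example IDs and responses here are hypothetical; the point is only that each individual crowd response becomes its own training instance:

```python
# Hypothetical crowd responses for two examples.
crowd = [
    ("x1", ["entailment", "entailment", "neutral"]),
    ("x2", ["contradiction", "neutral"]),
]

def aoc_training_set(crowd_responses):
    """All in one Classifier: one (features, label) pair per crowd response."""
    return [(x, label) for x, labels in crowd_responses for label in labels]

data = aoc_training_set(crowd)
print(len(data))  # 5 -- one training instance per annotation
```

With 180 examples and 1000 annotations each, this yields the 180,000 additional training pairs described above.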

4 Results and Analysis

Table 3 reports results on the SNLI test set. For each model on the NLI task, we are able to improve generalization performance (i.e. test set accuracy) by injecting soft labeled data at some point. Note that the best performance with SLMG varies according to the model, but for each model there is some configuration that does improve performance. As with all model training, the effect of SLMG requires experimentation according to the use case. In all cases, using CCE as the loss function performs better than using MSE. We suspect that this is due to the fact that small differences are penalized less with CCE than with MSE.

Table 2 shows two example premise-hypothesis pairs from the SNLI test set, along with the model output probabilities from the B1 baseline and the SLMG-I model trained with CCE as the soft label loss function. In the first example, using SLMG flips the output from incorrect (neutral) to correct (entailment). However, this pair seems to be a weak case of entailment, and could be argued to be neutral. The SLMG model reflects this and assigns a reasonably high probability to the neutral class. In the second case, training with SLMG results in the wrong label, but again it could be argued that this is a case where neutral is appropriate. The “sedan” that is stuck may not be the Land Rover (Land Rovers are SUVs), so neutral is a reasonable output here.

4.1 Changes in Outputs from SLMG

             | | Predicted E | Predicted C | Predicted N
Baseline     | E | 2739 | 191 | 438
             | C | 333 | 2360 | 544
             | N | 441 | 332 | 2446
SLMG-S (CCE) | E | 2828 | 157 | 383
             | C | 375 | 2401 | 461
             | N | 520 | 328 | 2371
SLMG-S (MSE) | E | 2967 | 158 | 243
             | C | 466 | 2415 | 356
             | N | 677 | 422 | 2120
Table 4: Confusion matrices for the LSTM model, trained according to the baseline (first block), using SLMG-S with CCE (second block), and using SLMG-S with MSE (third block). Gold standard labels run down the left hand side, while predicted labels are across the top. The highest count of true positives for each label across the three training setups is bolded.

To better understand the effects of SLMG on generalization, we look at the changes in test set performance when SLMG is used, compared to the baseline case. Table 4 shows three confusion matrices: the test-set output for the baseline LSTM model on the NLI task; the same model trained with SLMG-S and CCE as the soft label loss function, which improved test set performance; and SLMG-S with MSE as the soft label loss function, which did not. In both cases of training with SLMG, the number of correctly classified entailment and contradiction examples increased, while the number of correctly classified neutral examples decreased. However, when MSE is used as the soft label loss function, the increase in misclassified neutral examples was enough to offset the gains in correctly classified entailment and contradiction examples. Depending on the use case, this trade-off could still be useful: fewer false negatives for entailment and contradiction examples may matter more than fewer true positives for the neutral class.

If we consider SNLI as a binary classification task, with two possible labels “entailment” and “not entailment” (combining contradiction and neutral), then from Table 4 we see that SLMG outperforms the baseline in both cases. In fact, the SLMG-MSE method outperforms SLMG-CCE in the binary task, because its performance on the entailment label is much higher.

4.2 Comparing the Crowd to the Gold Standard

We also looked at the soft labeled data itself to understand how well the crowd label distributions align with the accepted gold-standard labels in the original data set. Figure 1 reports on how well the crowd distributions align with the gold standard labels included in the original SNLI data set. We see that there are quite a few examples where the gold standard class label does not have a high degree of probability weight as estimated from the crowd.

For NLI, there is a high percentage of examples where the gold label has an estimated probability of less than 75%. This may be because individuals have different understandings of what constitutes entailment. This uncertainty among humans is useful for understanding outputs from ML models, and is consistent with the inter-rater reliability (IRR) scores originally reported by [Lalor, Wu, and Yu2016] for the data set. IRR scores (Fleiss' κ) for the data fell in the range considered moderate agreement [Landis and Koch1977]. The moderate agreement indicates that there is a general consensus about which label is correct (consistent with Figure 1), but there is enough disagreement among the annotators that the disagreements should be incorporated into the training data rather than discarded in favor of majority vote or another single-label selection criterion.

Figure 1: Relative frequency histograms for the crowd-estimated probability of the original gold-standard label.
Figure 2: Average KL-Divergence between sub-sampled crowd distributions and the estimated soft label distribution from the entire crowd data. By sampling 20 crowd workers we achieve a good estimate of the label distributions without the cost of using the full 1000 worker population.

4.3 How Many Labels do we Need?

Of course, collecting 1000 labels per example to estimate soft labels becomes prohibitively expensive very quickly. However, it may not be necessary to collect that many labels in practice. To determine how many labels are needed to arrive at a reasonable estimate of the soft label distributions, we randomly sampled crowd workers from our dataset one at a time. At each step, we used the sampled workers' responses to estimate the soft labels for each example and calculated the Kullback-Leibler divergence (KL-Divergence) between the true and sampled soft label distributions: KL(p || q) = sum_c p(c) log(p(c) / q(c)), where p is the true soft label distribution estimated from the full data set and q is the sampled soft label distribution. Figure 2 plots the KL-Divergence averaged over the 180 data set examples as a function of the number of crowd workers selected (the x-axis is truncated to focus on the lower values). We plot results for 5 runs of the random sampling procedure. As the figure shows, the average KL-Divergence approaches 0 well before all 1000 labels are used.

When sampling randomly, the average difference drops very quickly, and is very low with as few as 15 or 20 labels per example. Active learning techniques could reduce this number further, either by selecting “good annotators” or identifying examples for which more labels are needed. This is left for future work.
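This subsampling behavior can be simulated. The sketch below is not the authors' code: it draws hypothetical crowd responses for a single example from an assumed true distribution, then measures how the KL-Divergence of small-sample estimates shrinks as more annotators are sampled:

```python
import math
import random
from collections import Counter

def kl(p, q, eps=1e-12):
    """KL(p || q); labels unseen in q get a tiny floor probability."""
    return sum(pc * math.log(pc / max(q.get(c, 0.0), eps))
               for c, pc in p.items() if pc > 0)

def estimate(labels):
    """Soft label estimate p(y = c) = n_c / N from a list of responses."""
    n = len(labels)
    return {c: k / n for c, k in Counter(labels).items()}

random.seed(0)
classes = ["E", "C", "N"]
true_weights = [0.66, 0.03, 0.31]  # assumed "true" label distribution
# Simulate 1000 crowd responses for one example.
responses = random.choices(classes, weights=true_weights, k=1000)
full = estimate(responses)

# Average KL between the full-crowd estimate and small-sample estimates.
avg_kl = {}
for k in (5, 20, 100):
    runs = [kl(full, estimate(random.sample(responses, k))) for _ in range(50)]
    avg_kl[k] = sum(runs) / len(runs)
    print(k, round(avg_kl[k], 3))
```

Even in this toy setting, the average divergence falls sharply between 5 and 20 sampled annotators, consistent with the observation that roughly 20 labels suffice.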

To confirm the observation that significantly fewer labels are necessary, we randomly sampled 20 annotators from the dataset, used their responses to estimate the soft label distributions, and re-trained the LSTM model with SLMG-I using CCE as the soft label loss function. We ran this training 10 times, each time sampling a new selection of 20 annotators for estimating the soft label distributions. On average, these models perform as well as the model using the distributions learned from 1000 annotators, with significantly less annotation cost.

5 Related Work

Another line of work on modeling uncertainty in labels is Knowledge Distillation [Hinton, Vinyals, and Dean2015]. In Knowledge Distillation, output probabilities of a complex expert model are used as input to a simpler model so the simpler model can learn to generalize based on the output weights of the expert model. A key distinction between Knowledge Distillation and our work is that the expert model distilling its knowledge was still trained with a single class label as the gold standard; the expert then passes its uncertainty to the simpler model. In our work we capture uncertainty in the original training data, in order to induce generalization as part of the original training.

This work is related to the idea of “crowd truth” and collecting and using annotations from the crowd [Kajino, Tsuboi, and Kashima2012, Inel et al.2014]. We use the CrowdTruth assumption that disagreement between annotators provides signal about data ambiguity and should be used in the learning process. In addition this work is closely related to the idea of Label Distribution Learning (LDL) from Computer Vision (CV) [Geng2016]. For training and testing, LDL assumes that each label is a probability distribution over classes, and the goal is to learn a distribution over labels. In our case, however, we would still like to learn a classifier that outputs a single class, while using the distribution over training labels as a measure of uncertainty in the data. We use the distribution over labels to represent the uncertainty associated with different examples in order to improve model training.

There are several other areas of study regarding how best to use training data that are related to this work. Re-weighting or re-ordering training examples is a well-studied and related area of supervised learning. Often examples are re-weighted according to some notion of difficulty or model uncertainty [Bengio et al.2009, Chang, Learned-Miller, and McCallum2017]. In particular, the internal uncertainty of the model is used as the basis for selecting how training examples are weighted. However, model uncertainty is dependent upon the original training data the model was trained on, while here we use an external human measure of uncertainty. Curriculum learning (CL) is a training procedure where models are trained to learn simple concepts before more complex concepts are introduced [Bengio et al.2009]. CL training for neural networks can improve generalization and speed up convergence; Bengio et al. demonstrate the effectiveness of curriculum learning on several tasks and draw a comparison with boosting and active learning [Freund and Schapire1997]. Our representation of uncertainty via soft labels can be thought of as a measure of difficulty (i.e. more uncertainty is associated with more difficult examples).

Finally, this work is related to transfer learning and domain adaptation [Caruana1995, Bengio et al.2011, Bengio2012], but with an important distinction. Transfer learning and domain adaptation repurpose representations learned for a source domain to facilitate learning in a target domain. In this paper we want to improve performance in the source domain itself by fine-tuning with source-domain data labeled with distributions over class labels. This work differs from domain adaptation and transfer learning in that we are not adding data from a different domain or applying a learned model to a new task. Instead, we are augmenting a single classification task by using a richer representation of where the data lies within the class labels to inform training. The goal is that by fine-tuning with a distribution over labels, a model will be less likely to overfit on a training set. To the best of our knowledge this is the first work to use a subset of soft-labeled data for fine-tuning, whereas previous work used an all-or-none approach (all hard or all soft labels).

6 Discussion

In this paper we have introduced SLMG, a fine-tuning approach to training that can improve classification performance by leveraging uncertainty in data. In the NLI task, incorporating the more informative class distribution labels leads to improved performance under certain training setups. By introducing specialized supplemental data the model is able to update its representations to boost performance. With SLMG, a learning model can update parameters according to a gold-standard that allows for uncertainty in predictions, as opposed to the classic case where each training example should be equally important during parameter updates. Training examples with higher degrees of uncertainty within a human population have less of an effect on gradient updates than those examples where confidence in the label is very high as measured by the crowd.
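This behavior follows directly from the softmax cross-entropy gradient: with respect to the logits it is simply the model output minus the target distribution, so a target that already spreads mass the way the model does yields a small update. A minimal sketch with made-up numbers:

```python
def softmax_ce_grad(target, predicted):
    """Gradient of cross-entropy w.r.t. the logits under a softmax
    output layer: element-wise (predicted - target)."""
    return [q - p for q, p in zip(predicted, target)]

def l1_norm(grad):
    return sum(abs(g) for g in grad)

predicted = [0.40, 0.35, 0.25]        # model output
confident_crowd = [1.0, 0.0, 0.0]     # annotators agree on one class
uncertain_crowd = [0.45, 0.35, 0.20]  # annotators are split

g_conf = softmax_ce_grad(confident_crowd, predicted)
g_unc = softmax_ce_grad(uncertain_crowd, predicted)
# The confidently-labeled example drives a much larger parameter update.
```

In this sketch the confidently labeled example produces a gradient an order of magnitude larger than the ambiguous one, which is exactly the damping effect described above.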

SLMG is simple to apply, but it is not a silver bullet for improving generalization. In our experiments we found that SLMG can improve performance for the different models under different training settings. It is worthwhile to experiment with SLMG to see if and how it improves performance on other NLP tasks. NLI is a particularly good use case for SLMG because of the ambiguity inherent in language and the disagreements that can arise from different interpretations of text. In addition, further experimentation with the way soft labels are generated may lead to further generalization improvements.

There are limitations to this work. One bottleneck is the requirement for a large number of human labels for a small number of examples, which runs counter to the traditional crowdsourcing strategy for label generation. However, one can likely estimate a reasonable distribution over labels with significantly fewer labels per example than obtained here (Figure 2). Identifying a suitable number using active learning techniques is left for future work.

While SLMG requires soft labels, it does not necessarily require human-annotated soft labels. Rather, SLMG only requires some measure of uncertainty between training examples as part of the generalization step. This can come from human annotators, an ensemble of machine learning models, or some other pre-defined uncertainty metric. In our experiments we demonstrate the validity of SLMG using an existing data set from which we can extract soft labels, and leave experimentation with different soft label generation methods to future work.
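As one illustrative way to derive soft labels from human annotators (a sketch, not the exact procedure used in this paper; the `soft_label` helper and smoothing constant are assumptions), raw annotation counts can be normalized with a small pseudo-count so that a valid distribution is defined even with few labels per example:

```python
from collections import Counter

def soft_label(annotations, classes, alpha=1.0):
    """Convert raw crowd annotations into a smoothed label distribution.

    alpha is a Laplace pseudo-count: with few annotations per example it
    keeps every class's probability non-zero."""
    counts = Counter(annotations)
    total = len(annotations) + alpha * len(classes)
    return [(counts[c] + alpha) / total for c in classes]

classes = ["entailment", "neutral", "contradiction"]
votes = ["entailment"] * 3 + ["neutral"] * 2
dist = soft_label(votes, classes)  # [0.5, 0.375, 0.125]
```

The same normalization applies unchanged if the "votes" instead come from an ensemble of model predictions rather than human annotators.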

Future work includes investigation into data sets that can be used with SLMG and why certain fine-tuning sets lead to better performance in certain scenarios. Experiments with different loss functions (e.g. KL-Divergence) and different data can help to understand how SLMG affects the representations learned by a model. Our results suggest that future work training DNNs to learn a distribution over labels can lead to further improvements.


  • [Aroyo and Welty2015] Aroyo, L., and Welty, C. 2015. Truth is a lie: Crowd truth and the seven myths of human annotation. AI Magazine 36(1):15–24.
  • [Bachrach et al.2012] Bachrach, Y.; Graepel, T.; Minka, T.; and Guiver, J. 2012. How to grade a test without knowing the answers — a bayesian graphical model for adaptive crowdsourcing and aptitude testing. In Proceedings of the 29th International Conference on Machine Learning, 1183–1190. New York, NY, USA: Omnipress.
  • [Bengio et al.2009] Bengio, Y.; Louradour, J.; Collobert, R.; and Weston, J. 2009. Curriculum learning. In Proceedings of the 26th International Conference on Machine Learning, 41–48. ACM.
  • [Bengio et al.2011] Bengio, Y.; Bastien, F.; Bergeron, A.; Boulanger-Lewandowski, N.; Breuel, T.; Chherawala, Y.; Cisse, M.; Côté, M.; Erhan, D.; Eustache, J.; Glorot, X.; Muller, X.; Lebeuf, S. P.; Pascanu, R.; Rifai, S.; Savard, F.; and Sicard, G. 2011. Deep learners benefit more from out-of-distribution examples. In AISTATS, 164–172.
  • [Bengio2012] Bengio, Y. 2012. Deep learning of representations for unsupervised and transfer learning. ICML Unsupervised and Transfer Learning 27:17–36.
  • [Bowman et al.2015] Bowman, S. R.; Angeli, G.; Potts, C.; and Manning, C. D. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 632–642. Association for Computational Linguistics.
  • [Caruana1995] Caruana, R. 1995. Learning many related tasks at the same time with backpropagation. Advances in Neural Information Processing Systems 657–664.
  • [Chang, Learned-Miller, and McCallum2017] Chang, H.-S.; Learned-Miller, E.; and McCallum, A. 2017. Active bias: Training a more accurate neural network by emphasizing high variance samples. In Advances in Neural Information Processing Systems.
  • [Chen et al.2017] Chen, Q.; Zhu, X.; Ling, Z.; Wei, S.; Jiang, H.; and Inkpen, D. 2017. Enhanced lstm for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017). Vancouver: ACL.
  • [Dagan, Glickman, and Magnini2006] Dagan, I.; Glickman, O.; and Magnini, B. 2006. The PASCAL Recognising Textual Entailment Challenge. In Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Textual Entailment. Springer. 177–190.
  • [Dawid and Skene1979] Dawid, A. P., and Skene, A. M. 1979. Maximum likelihood estimation of observer error-rates using the em algorithm. Journal of the Royal Statistical Society. Series C (Applied Statistics) 28(1):20–28.
  • [Druck and McCallum2011] Druck, G., and McCallum, A. 2011. Toward interactive training and evaluation. In Proceedings of the 20th ACM International Conference on Information and Knowledge Management, CIKM ’11, 947–956. New York, NY, USA: ACM.
  • [Freund and Schapire1997] Freund, Y., and Schapire, R. E. 1997. A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. J. Comput. Syst. Sci. 55(1):119–139.
  • [Geng2016] Geng, X. 2016. Label distribution learning. IEEE Transactions on Knowledge and Data Engineering 28(7):1734–1748.
  • [Gururangan et al.2018] Gururangan, S.; Swayamdipta, S.; Levy, O.; Schwartz, R.; Bowman, S.; and Smith, N. A. 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, 107–112.
  • [Halevy, Norvig, and Pereira2009] Halevy, A.; Norvig, P.; and Pereira, F. 2009. The unreasonable effectiveness of data. IEEE Intelligent Systems 24(2):8–12.
  • [Hinton, Vinyals, and Dean2015] Hinton, G.; Vinyals, O.; and Dean, J. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
  • [Hochreiter and Schmidhuber1997] Hochreiter, S., and Schmidhuber, J. 1997. Long Short-Term Memory. Neural Computation 9(8):1735–1780.
  • [Inel and Aroyo2017] Inel, O., and Aroyo, L. 2017. Harnessing diversity in crowds and machines for better ner performance. In European Semantic Web Conference, 289–304. Springer.
  • [Inel et al.2014] Inel, O.; Khamkham, K.; Cristea, T.; Dumitrache, A.; Rutjes, A.; van der Ploeg, J.; Romaszko, L.; Aroyo, L.; and Sips, R.-J. 2014. Crowdtruth: Machine-human computation framework for harnessing disagreement in gathering annotated data. In International Semantic Web Conference, 486–504. Springer.
  • [Kajino, Tsuboi, and Kashima2012] Kajino, H.; Tsuboi, Y.; and Kashima, H. 2012. A convex formulation for learning from crowds. In Twenty-Sixth AAAI Conference on Artificial Intelligence.
  • [Kamar, Kapoor, and Horvitz2015] Kamar, E.; Kapoor, A.; and Horvitz, E. 2015. Identifying and accounting for task-dependent bias in crowdsourcing. In Third AAAI Conference on Human Computation and Crowdsourcing.
  • [Lalor et al.2018] Lalor, J. P.; Wu, H.; Munkhdalai, T.; and Yu, H. 2018. Understanding deep learning performance through an examination of test set difficulty: A psychometric case study. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
  • [Lalor, Wu, and Yu2016] Lalor, J. P.; Wu, H.; and Yu, H. 2016. Building an evaluation scale using item response theory. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 648–657. Association for Computational Linguistics.
  • [Landis and Koch1977] Landis, J. R., and Koch, G. G. 1977. The measurement of observer agreement for categorical data. Biometrics 159–174.
  • [Munkhdalai and Yu2017] Munkhdalai, T., and Yu, H. 2017. Neural semantic encoders. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics.
  • [Neubig et al.2017] Neubig, G.; Dyer, C.; Goldberg, Y.; Matthews, A.; Ammar, W.; Anastasopoulos, A.; Ballesteros, M.; Chiang, D.; Clothiaux, D.; Cohn, T.; Duh, K.; Faruqui, M.; Gan, C.; Garrette, D.; Ji, Y.; Kong, L.; Kuncoro, A.; Kumar, G.; Malaviya, C.; Michel, P.; Oda, Y.; Richardson, M.; Saphra, N.; Swayamdipta, S.; and Yin, P. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980.
  • [Pennington, Socher, and Manning2014] Pennington, J.; Socher, R.; and Manning, C. D. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), 1532–1543.
  • [Poliak et al.2018] Poliak, A.; Naradowsky, J.; Haldar, A.; Rudinger, R.; and Van Durme, B. 2018. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, 180–191.
  • [Theano Development Team2016] Theano Development Team. 2016. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints abs/1605.02688.
  • [Tokui et al.2015] Tokui, S.; Oono, K.; Hido, S.; and Clayton, J. 2015. Chainer: a next-generation open source framework for deep learning. In Proceedings of Workshop on Machine Learning Systems (LearningSys) in The Twenty-ninth Annual Conference on Neural Information Processing Systems (NIPS).
  • [Williams, Nangia, and Bowman2018] Williams, A.; Nangia, N.; and Bowman, S. R. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL).