Dice Loss for Data-imbalanced NLP Tasks

11/07/2019
by Xiaoya Li, et al.

Many NLP tasks such as tagging and machine reading comprehension face a severe data-imbalance issue: negative examples significantly outnumber positive examples, and the huge number of background examples (or easy-negative examples) overwhelms training. The most commonly used cross-entropy (CE) criterion is in fact an accuracy-oriented objective, and thus creates a discrepancy between training and test: at training time, each training instance contributes equally to the objective function, whereas at test time the F1 score weights positive examples more heavily. In this paper, we propose to use dice loss in place of the standard cross-entropy objective for data-imbalanced NLP tasks. Dice loss is based on the Sørensen–Dice coefficient or the Tversky index, which attaches similar importance to false positives and false negatives and is therefore more immune to the data-imbalance issue. To further alleviate the dominating influence of easy-negative examples in training, we propose to associate training examples with dynamically adjusted weights that de-emphasize easy-negative examples. Theoretical analysis shows that this strategy narrows the gap between the F1 score used in evaluation and the dice loss used in training. With the proposed training objective, we observe significant performance gains on a wide range of data-imbalanced NLP tasks. Notably, we achieve SOTA results on CTB5, CTB6 and UD1.4 for the part-of-speech tagging task; SOTA results on CoNLL03, OntoNotes5.0, MSRA and OntoNotes4.0 for the named entity recognition task; along with competitive results on the tasks of machine reading comprehension and paraphrase identification.
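To make the idea concrete, the following is a minimal sketch of a per-example soft dice loss with a dynamic `(1 - p)^alpha * p` modulating factor that down-weights confident, easy examples. The function name, parameter names, and the exact smoothing term `gamma` are illustrative assumptions in the spirit of the abstract, not the paper's exact formulation:

```python
def self_adjusting_dice_loss(probs, targets, alpha=1.0, gamma=1.0):
    """Sketch of a self-adjusting soft dice loss (hypothetical
    names; illustrates the abstract's idea, not the paper's code).

    probs:   predicted probabilities of the positive class
    targets: binary gold labels in {0, 1}
    alpha:   strength of the (1 - p)^alpha down-weighting of
             easy (confident) examples; alpha=0 disables it
    gamma:   smoothing term that keeps the all-negative case
             (p = y = 0) from producing a degenerate 0/0
    """
    losses = []
    for p, y in zip(probs, targets):
        # Dynamically adjusted weight: confident predictions
        # (p near 0 or 1) contribute less to the objective.
        w = ((1.0 - p) ** alpha) * p
        # Per-example soft dice coefficient in [0, 1].
        dice = (2.0 * w * y + gamma) / (w + y + gamma)
        losses.append(1.0 - dice)
    return sum(losses) / len(losses)
```

With `alpha=0` the weight reduces to the plain probability and the loss behaves like an ordinary soft dice loss; increasing `alpha` shrinks the contribution of easy-negative examples, which is the mechanism the abstract credits for narrowing the gap to F1.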

