Avoiding catastrophic forgetting in mitigating model biases in sentence-pair classification with elastic weight consolidation

04/29/2020 ∙ by James Thorne, et al.

The biases present in training datasets have been shown to affect models for a number of tasks, such as natural language inference (NLI) and fact verification. While fine-tuning models on additional data has been used to mitigate such biases, a common issue is catastrophic forgetting of the original task. In this paper, we show that elastic weight consolidation (EWC) allows fine-tuning of models to mitigate biases for NLI and fact verification while being less susceptible to catastrophic forgetting. In our evaluation of fact verification systems, we show that fine-tuning with EWC Pareto dominates standard fine-tuning, yielding models with lower levels of forgetting on the original task for equivalent gains in accuracy on the fine-tuned task. Additionally, we show that systems trained on NLI can be fine-tuned to improve their accuracy on stress-test challenge tasks with minimal loss in accuracy on the MultiNLI dataset, despite greater domain shift.
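As a rough illustration of the idea (not the authors' implementation), the sketch below shows how an EWC-style quadratic penalty can be added to a standard fine-tuning loss in PyTorch. The names `fisher`, `theta_star`, and `lam` are placeholder assumptions for per-parameter Fisher information estimates, the original-task parameter values, and the regularisation strength.

```python
# Hedged sketch of EWC-regularised fine-tuning; assumes a PyTorch model
# previously trained on the original task, plus precomputed `fisher`
# (dict of Fisher information tensors) and `theta_star` (dict of the
# original-task parameter values), both keyed by parameter name.
import torch


def ewc_penalty(model, fisher, theta_star, lam=1.0):
    """Quadratic penalty that keeps parameters near the original-task optimum."""
    penalty = torch.tensor(0.0)
    for name, param in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (param - theta_star[name]) ** 2).sum()
    return 0.5 * lam * penalty


def fine_tune_step(model, batch, loss_fn, optimizer, fisher, theta_star, lam):
    """One fine-tuning step on debiasing data with the EWC penalty added."""
    optimizer.zero_grad()
    task_loss = loss_fn(model(batch["inputs"]), batch["labels"])  # hypothetical batch layout
    loss = task_loss + ewc_penalty(model, fisher, theta_star, lam)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Larger values of `lam` keep the model closer to its original-task behaviour (less forgetting), while smaller values behave more like standard fine-tuning; the paper's Pareto comparison is over this trade-off.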
