Which Shortcut Solution Do Question Answering Models Prefer to Learn?

11/29/2022
by Kazutoshi Shinoda, et al.

Question answering (QA) models for reading comprehension tend to learn shortcut solutions rather than the solutions intended by QA datasets. QA models that have learned shortcut solutions can achieve human-level performance on shortcut examples, where the shortcuts are valid, but their performance degrades sharply on anti-shortcut examples, where the shortcuts are invalid. Various methods have been proposed to mitigate this problem, but they do not fully take the characteristics of the shortcuts themselves into account. We assume that the learnability of a shortcut, i.e., how easily it is learned, is useful for mitigating the problem. We therefore first examine the learnability of representative shortcuts on extractive and multiple-choice QA datasets. Behavioral tests using biased training sets reveal that shortcuts exploiting answer positions and word-label correlations are preferentially learned in extractive and multiple-choice QA, respectively. We find that the more learnable a shortcut is, the flatter and deeper the loss landscape becomes around the shortcut solution in parameter space. We also find that the availability of the preferred shortcuts tends to make the task easier from an information-theoretic viewpoint. Lastly, we show experimentally that the learnability of shortcuts can be exploited to construct an effective QA training set: the more learnable a shortcut is, the smaller the proportion of anti-shortcut examples needed to achieve comparable performance on shortcut and anti-shortcut examples. We argue that the learnability of shortcuts should be considered when designing mitigation methods.
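To make the biased-training-set setup concrete, the following is a minimal sketch (not the authors' code) of how one might partition an extractive QA dataset by an answer-position shortcut and mix in a chosen proportion of anti-shortcut examples. The shortcut criterion used here (answer span inside the first sentence of the context) and the field names `context` / `answer_start` are illustrative assumptions, not taken from the paper.

```python
import random

def has_position_shortcut(example):
    """Hypothetical answer-position shortcut for extractive QA:
    the answer span starts inside the first sentence of the context."""
    first_sentence_end = example["context"].find(".") + 1
    return 0 <= example["answer_start"] < first_sentence_end

def build_biased_train_set(examples, anti_shortcut_ratio, seed=0):
    """Construct a training set in which roughly `anti_shortcut_ratio`
    of the examples contradict the shortcut; varying this ratio is one
    way to probe how readily a model adopts the shortcut solution."""
    shortcut = [e for e in examples if has_position_shortcut(e)]
    anti = [e for e in examples if not has_position_shortcut(e)]
    if anti_shortcut_ratio >= 1.0:
        n_anti = len(anti)
    else:
        n_anti = int(len(shortcut) * anti_shortcut_ratio / (1 - anti_shortcut_ratio))
    rng = random.Random(seed)
    mixed = shortcut + rng.sample(anti, min(n_anti, len(anti)))
    rng.shuffle(mixed)
    return mixed
```

A model trained on such mixtures can then be evaluated separately on held-out shortcut and anti-shortcut examples; the gap between the two scores indicates how strongly the shortcut was learned.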

Related research

10/26/2022
Look to the Right: Mitigating Relative Position Bias in Extractive Question Answering
Extractive question answering (QA) models tend to exploit spurious corre...

01/12/2019
HAS-QA: Hierarchical Answer Spans Model for Open-domain Question Answering
This paper is concerned with open-domain question answering (i.e., OpenQ...

11/01/2021
Introspective Distillation for Robust Question Answering
Question answering (QA) models are well-known to exploit data bias, e.g....

09/23/2021
Can Question Generation Debias Question Answering Models? A Case Study on Question-Context Lexical Overlap
Question answering (QA) models for reading comprehension have been demon...

10/29/2017
Simple and Effective Multi-Paragraph Reading Comprehension
We consider the problem of adapting neural paragraph-level question answ...

10/09/2021
A Framework for Rationale Extraction for Deep QA models
As neural-network-based QA models become deeper and more complex, there ...
