Penalizing Confident Predictions on Largely Perturbed Inputs Does Not Improve Out-of-Distribution Generalization in Question Answering

11/29/2022
by Kazutoshi Shinoda, et al.

Question answering (QA) models have been shown to be insensitive to large perturbations of their inputs: they make correct and confident predictions even on inputs so heavily perturbed that humans can no longer derive the answers from them. At the same time, QA models fail to generalize to other domains and to adversarial test sets, whereas humans maintain high accuracy. Based on these observations, we hypothesize that QA models do not rely on the features humans need for reading comprehension but instead exploit spurious features, which causes their lack of generalization. We therefore ask: if the overconfident predictions of QA models on various types of perturbations are penalized, does out-of-distribution (OOD) generalization improve? To prevent models from making confident predictions on perturbed inputs, we first follow existing studies and maximize the entropy of the output probability for perturbed inputs. However, we find that QA models trained to be sensitive to one perturbation type often remain insensitive to unseen types of perturbations. We therefore maximize the entropy for four perturbation types simultaneously (word- and sentence-level shuffling and deletion) to further close the gap between models and humans. Contrary to our expectations, although the models become sensitive to all four perturbation types, OOD generalization does not improve; in some cases it is even degraded after entropy maximization. Making unconfident predictions on largely perturbed inputs may by itself help gain human trust, but our negative results suggest that researchers should pay attention to the side effects of entropy maximization.
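To make the training objective concrete, below is a minimal sketch of the entropy-maximization regularizer described in the abstract, written for a Hugging Face-style extractive QA model that returns start/end logits. The names (model, clean_batch, perturbed_batches, lam) and the exact weighting are illustrative assumptions, not the authors' released implementation.

```python
import torch.nn.functional as F

def prediction_entropy(logits):
    """Mean entropy of the softmax distribution over answer positions."""
    log_p = F.log_softmax(logits, dim=-1)
    return -(log_p.exp() * log_p).sum(dim=-1).mean()

def loss_with_entropy_maximization(model, clean_batch, perturbed_batches, lam=1.0):
    """Span-extraction loss on clean inputs minus an entropy bonus on perturbed
    inputs, so that confident predictions on perturbed inputs are penalized."""
    # Supervised QA loss on the original input (clean_batch contains gold spans).
    out = model(**clean_batch)
    loss = out.loss

    # Average entropy of the start/end distributions over all perturbation types
    # (e.g., word- and sentence-level shuffling and deletion of the context).
    entropy = 0.0
    for batch in perturbed_batches:  # each batch holds model inputs only, no labels
        p_out = model(**batch)
        entropy = entropy + prediction_entropy(p_out.start_logits) \
                          + prediction_entropy(p_out.end_logits)
    entropy = entropy / max(len(perturbed_batches), 1)

    # Maximizing entropy on perturbed inputs == subtracting it from the loss.
    return loss - lam * entropy
```

In this sketch, passing several perturbed copies of each example corresponds to penalizing all four perturbation types simultaneously rather than training against a single type.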


