Introspective Distillation for Robust Question Answering

11/01/2021
by Yulei Niu, et al.

Question answering (QA) models are well-known to exploit data bias, e.g., the language prior in visual QA and the position bias in reading comprehension. Recent debiasing methods achieve good out-of-distribution (OOD) generalizability at a considerable sacrifice of in-distribution (ID) performance, so they are only applicable in domains where the test distribution is known in advance. In this paper, we present a novel debiasing method called Introspective Distillation (IntroD) to make the best of both worlds for QA. Our key technical contribution is to blend the inductive biases of OOD and ID by introspecting whether a training sample fits the factual ID world or the counterfactual OOD one. Experiments on the visual QA datasets VQA v2 and VQA-CP and the reading comprehension dataset SQuAD demonstrate that our proposed IntroD maintains competitive OOD performance compared with other debiasing methods, while sacrificing little ID performance, or even improving it, compared with non-debiasing ones.
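The introspective blending described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the introspection is done by inverse-confidence weighting of an ID teacher and an OOD teacher on the ground-truth answer (if the ID teacher is over-confident on a sample, the sample likely "fits" the biased ID world, so the blended distillation target leans on the OOD teacher, and vice versa). The function name `introspect_blend` is hypothetical.

```python
def introspect_blend(p_id, p_ood, gt_index):
    """Blend two teachers' answer distributions into one distillation target.

    p_id     -- ID (factual) teacher's probabilities over answers
    p_ood    -- OOD (counterfactual/debiased) teacher's probabilities
    gt_index -- index of the ground-truth answer
    """
    # Each teacher's confidence on the ground-truth answer.
    s_id = p_id[gt_index]
    s_ood = p_ood[gt_index]
    # Inverse weighting: the more confident the ID teacher is on this
    # sample (i.e., the more the sample fits the biased ID world), the
    # larger the weight given to the OOD teacher, and vice versa.
    w_id = s_ood / (s_id + s_ood)
    w_ood = s_id / (s_id + s_ood)
    # Convex combination of the two distributions: a valid probability
    # vector that serves as the soft target for distilling the student.
    return [w_id * a + w_ood * b for a, b in zip(p_id, p_ood)]


# Example: the ID teacher is over-confident on the ground-truth answer 0,
# so the blended target shifts toward the OOD teacher's distribution.
blended = introspect_blend([0.8, 0.1, 0.1], [0.2, 0.4, 0.4], gt_index=0)
```

The student model would then be trained to match `blended` (e.g., with a KL-divergence distillation loss); that training loop is omitted here.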


