Robust Reading Comprehension with Linguistic Constraints via Posterior Regularization

11/16/2019
by Mantong Zhou, et al.

Despite great advances in machine reading comprehension (RC), existing RC models are still vulnerable to different types of adversarial examples: neural models over-confidently predict wrong answers on semantically different adversarial examples, and over-sensitively predict wrong answers on semantically equivalent ones. Existing methods for improving the robustness of such models mitigate only one of these two issues while ignoring the other. In this paper, we address the over-confidence issue and the over-sensitivity issue in current RC models simultaneously with the help of external linguistic knowledge. We first incorporate external knowledge to impose different linguistic constraints (an entity constraint, a lexical constraint, and a predicate constraint), and then regularize RC models through posterior regularization. The linguistic constraints induce more reasonable predictions for both semantically different and semantically equivalent adversarial examples, and posterior regularization provides an effective mechanism for incorporating these constraints. Our method can be applied to any existing neural RC model, including state-of-the-art BERT models. Extensive experiments show that our method remarkably improves the robustness of base RC models and copes with both issues simultaneously better than prior approaches.
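For readers unfamiliar with posterior regularization, the sketch below shows the general recipe in its simplest closed form, q(y) ∝ p(y)·exp(λᵀφ(y)) (Ganchev et al., 2010), applied to an RC model's answer distribution. This is an illustrative assumption, not the paper's actual formulation: the function name, the three constraint feature scores, and the toy spans and weights are all hypothetical.

    import math

    def posterior_regularize(base_probs, constraint_feats, lambdas):
        """Reweight a base RC model's answer distribution with constraint features.

        base_probs:       {answer: p(answer)} from the base RC model.
        constraint_feats: {answer: [f_entity, f_lexical, f_predicate]}, scores in
                          [0, 1] indicating how well the answer satisfies each
                          linguistic constraint (hypothetical feature design).
        lambdas:          one weight per constraint, e.g. tuned on held-out data.

        Returns the regularized posterior q(answer) ∝ p(answer) * exp(λ·f).
        """
        unnorm = {
            a: p * math.exp(sum(l * f for l, f in zip(lambdas, constraint_feats[a])))
            for a, p in base_probs.items()
        }
        z = sum(unnorm.values())
        return {a: v / z for a, v in unnorm.items()}

    # Toy example: the base model slightly prefers a distractor span, but the
    # entity and predicate constraints favor the other candidate, so the
    # regularized posterior flips the prediction.
    base_probs = {"Tesla": 0.55, "Edison": 0.45}
    feats = {"Tesla": [0.2, 0.5, 0.3], "Edison": [0.9, 0.8, 0.9]}
    lambdas = [1.0, 0.5, 1.0]
    print(posterior_regularize(base_probs, feats, lambdas))

The key design point is that the constraints act only on the model's output distribution at inference (or as a training-time regularizer), so the recipe is architecture-agnostic and can wrap any base RC model, including BERT.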

