Evaluating Paraphrastic Robustness in Textual Entailment Models

06/29/2023
by Dhruv Verma, et al.
We present PaRTE, a collection of 1,126 pairs of Recognizing Textual Entailment (RTE) examples to evaluate whether models are robust to paraphrasing. We posit that if RTE models understand language, their predictions should be consistent across inputs that share the same meaning. We use the evaluation set to determine if RTE models' predictions change when examples are paraphrased. In our experiments, contemporary models change their predictions on 8-16% of paraphrased examples, indicating that there is still room for improvement.
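Below is a minimal sketch (not the authors' released code) of the consistency check the abstract describes: run an RTE model on each original premise/hypothesis pair and on its paraphrased counterpart, then report the fraction of examples whose predicted label changes. The example records, field names, and the `predict_label` callable are hypothetical placeholders for a real classifier and the PaRTE data.

```python
from typing import Callable, Dict, List

# Hypothetical record layout: premise, hypothesis, and their paraphrased versions.
Example = Dict[str, str]


def prediction_change_rate(
    examples: List[Example],
    predict_label: Callable[[str, str], str],
) -> float:
    """Fraction of examples whose predicted label flips under paraphrasing."""
    changed = 0
    for ex in examples:
        original = predict_label(ex["premise"], ex["hypothesis"])
        paraphrased = predict_label(
            ex["paraphrased_premise"], ex["paraphrased_hypothesis"]
        )
        if original != paraphrased:
            changed += 1
    return changed / len(examples) if examples else 0.0


if __name__ == "__main__":
    # Toy stand-ins; a real evaluation would load the 1,126 PaRTE pairs
    # and a trained RTE model in place of this keyword-match heuristic.
    def toy_model(premise: str, hypothesis: str) -> str:
        return "entailment" if hypothesis.lower() in premise.lower() else "not_entailment"

    data = [
        {
            "premise": "A man is playing a guitar on stage.",
            "hypothesis": "a man is playing a guitar",
            "paraphrased_premise": "On stage, a man performs with a guitar.",
            "paraphrased_hypothesis": "a man is playing a guitar",
        },
    ]
    print(f"Prediction change rate: {prediction_change_rate(data, toy_model):.1%}")
```

A change rate of zero would indicate a model fully consistent under paraphrasing; the paper reports contemporary models flipping on 8-16% of examples by this kind of measure.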
