EQUATE: A Benchmark Evaluation Framework for Quantitative Reasoning in Natural Language Inference

01/11/2019
by   Abhilasha Ravichander, et al.

Quantitative reasoning is an important capability that any intelligent natural language understanding system can reasonably be expected to handle. We present EQUATE (Evaluating Quantitative Understanding Aptitude in Textual Entailment), a new dataset for evaluating the ability of models to reason with quantities in textual entailment, covering not only arithmetic and algebraic computation but also other phenomena such as range comparisons and verbal reasoning with quantities. The average performance of 7 published textual entailment models on EQUATE does not exceed a majority-class baseline, indicating that current models do not implicitly learn to reason with quantities. We propose a new baseline, Q-REAS, that manipulates quantities symbolically, achieving some success on numerical reasoning but struggling with the more verbal aspects of the task. We hope our evaluation framework will support the development of new models of quantitative reasoning in language understanding.
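To make the task concrete, here is a deliberately crude sketch of what symbolic quantity matching in textual entailment might look like. This is an illustrative toy heuristic, not the Q-REAS system described in the paper; the function names and the entailment rule (every quantity in the hypothesis must also appear in the premise) are assumptions made for exposition.

```python
import re

def extract_quantities(text):
    """Pull numeric quantities out of a sentence (toy heuristic:
    matches bare integers and decimals only)."""
    return [float(m) for m in re.findall(r"\d+(?:\.\d+)?", text)]

def entails_quantity(premise, hypothesis):
    """Label the pair 'entailment' when every quantity mentioned in
    the hypothesis also appears in the premise, else 'neutral'.
    Hypothetical rule for illustration; real systems must also handle
    ranges, units, approximation, and verbal quantity expressions."""
    premise_qs = extract_quantities(premise)
    hypothesis_qs = extract_quantities(hypothesis)
    return "entailment" if all(q in premise_qs for q in hypothesis_qs) else "neutral"

# A pair where simple symbolic matching succeeds:
print(entails_quantity("The company hired 120 engineers in 2018.",
                       "120 engineers joined the company."))
# A pair where the quantities conflict:
print(entails_quantity("She ran 5 kilometers on Monday.",
                       "She ran 7 kilometers on Monday."))
```

Even this simple matcher highlights why purely verbal models struggle: the decision hinges on comparing extracted numeric values, not on lexical overlap.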
