
A Survey on Recognizing Textual Entailment as an NLP Evaluation

10/06/2020
by Adam Poliak, et al.

Recognizing Textual Entailment (RTE) was proposed as a unified evaluation framework to compare the semantic understanding of different NLP systems. In this survey paper, we provide an overview of different approaches for evaluating and understanding the reasoning capabilities of NLP systems. We then focus our discussion on RTE by highlighting prominent RTE datasets as well as advances in RTE dataset construction that target specific linguistic phenomena and can be used to evaluate NLP systems on a fine-grained level. We conclude by arguing that when evaluating NLP systems, the community should utilize newly introduced RTE datasets that focus on specific linguistic phenomena.
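To make the task concrete (this example is illustrative and not drawn from the survey): an RTE system receives a premise and a hypothesis and predicts whether the premise entails the hypothesis. The minimal Python sketch below runs an off-the-shelf natural language inference model from the Hugging Face Transformers library; the checkpoint name roberta-large-mnli and its three-way label scheme (entailment / neutral / contradiction) are assumptions about one convenient setup, not something the survey prescribes.

    # Minimal sketch of the RTE task: classify whether a premise
    # entails a hypothesis. Assumes the Hugging Face `transformers`
    # library and the public `roberta-large-mnli` checkpoint, which
    # uses three MNLI labels (CONTRADICTION / NEUTRAL / ENTAILMENT);
    # neither choice is prescribed by the survey.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
    model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

    premise = "A dog is chasing a ball in the park."
    hypothesis = "An animal is playing outside."

    # The tokenizer joins the sentence pair with the model's separator
    # token, which is how NLI models expect premise/hypothesis input.
    inputs = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits

    prediction = logits.argmax(dim=-1).item()
    print(model.config.id2label[prediction])  # e.g. "ENTAILMENT"

Phenomenon-specific RTE datasets of the kind the survey advocates keep this same classification interface but supply premise-hypothesis pairs that isolate a single linguistic phenomenon, such as negation or quantifiers, so a system's errors can be traced to that phenomenon.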


Related research

06/02/2021 - Figurative Language in Recognizing Textual Entailment
10/06/2020 - Universal Natural Language Processing with Limited Annotations: Try Few-shot Textual Entailment as a Start
05/28/2020 - What is SemEval evaluating? A Systematic Analysis of Evaluation Campaigns in NLP
11/10/2020 - Medical Knowledge-enriched Textual Entailment Framework
05/31/2018 - Incremental Natural Language Processing: Challenges, Strategies, and Evaluation
02/26/2021 - Methods for the Design and Evaluation of HCI+NLP Systems
07/25/2018 - Evaluating Creativity in Computational Co-Creative Systems