Universal Natural Language Processing with Limited Annotations: Try Few-shot Textual Entailment as a Start

10/06/2020
by Wenpeng Yin, et al.

A standard way to address different NLP problems is to first construct a problem-specific dataset and then build a model to fit that dataset. To build the ultimate artificial intelligence, we desire a single machine that can handle diverse new problems for which task-specific annotations are limited. We propose textual entailment as a unified solver for such NLP problems. However, existing research on textual entailment has paid little attention to the following questions: (i) How well does a pretrained textual entailment system generalize across domains with only a handful of domain-specific examples? and (ii) When is it worth transforming an NLP task into textual entailment? We argue that the transformation is unnecessary if rich annotations are available for the task; textual entailment matters particularly when the target NLP task has insufficient annotations. Universal NLP can probably be achieved through different routes. In this work, we introduce Universal Few-shot textual Entailment (UFO-Entail). We demonstrate that this framework enables a pretrained entailment model to work well on new entailment domains in a few-shot setting, and we show its effectiveness as a unified solver for several downstream NLP tasks, such as question answering and coreference resolution, when end-task annotations are limited. Code: https://github.com/salesforce/UniversalFewShotNLP
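To make the reformulation concrete, the sketch below illustrates how a multiple-choice question-answering instance can be cast as textual entailment: the passage serves as the premise and each question-answer pair forms a hypothesis, scored by a pretrained entailment model. This is only an illustrative sketch, not the UFO-Entail training procedure from the repository; the off-the-shelf roberta-large-mnli checkpoint, the hypothesis template, and the example texts are assumptions made for this example.

# Minimal sketch: casting multiple-choice QA as textual entailment.
# Assumes the Hugging Face transformers and torch packages; roberta-large-mnli
# is an off-the-shelf entailment model used only for illustration.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

premise = "The Amazon is the largest rainforest in the world."
question = "What is the largest rainforest?"
candidates = ["the Amazon", "the Congo rainforest"]

scores = []
for answer in candidates:
    # A simple concatenation template; the templates used in the paper may differ.
    hypothesis = f"{question} {answer}"
    inputs = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)[0]
    # roberta-large-mnli label order: 0 = contradiction, 1 = neutral, 2 = entailment
    scores.append(probs[2].item())

# Pick the candidate whose hypothesis is most strongly entailed by the premise.
print(candidates[scores.index(max(scores))])

As the abstract notes, UFO-Entail targets the setting where only a handful of such target-task examples, expressed in this premise-hypothesis form, are available for adapting the pretrained entailment model.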


