Natural Language Inference from Multiple Premises

10/09/2017
by Alice Lai, et al.

We define a novel textual entailment task that requires inference over multiple premise sentences. We present a new dataset for this task that minimizes trivial lexical inferences, emphasizes knowledge of everyday events, and offers a more challenging setting for textual entailment. We evaluate several strong neural baselines and analyze how the multiple-premise task differs from standard textual entailment.
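To make the task format concrete, the sketch below shows a plausible record structure for a multi-premise example and a naive baseline that reduces the problem to standard sentence-pair NLI by concatenating the premises. The field names, example sentences, and label set are assumptions for illustration only, not the paper's actual dataset schema or model.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical record structure for illustration; the actual dataset
# schema and label set in the paper may differ.
@dataclass
class MultiPremiseExample:
    premises: List[str]   # several independent sentences about one scene
    hypothesis: str       # a candidate inference to verify
    label: str            # e.g. "entailment", "neutral", or "contradiction"

def concat_premises(example: MultiPremiseExample) -> Tuple[str, str]:
    """Reduce the multi-premise problem to ordinary sentence-pair NLI
    by concatenating the premises into one string. This is a naive
    baseline, not the paper's method: it discards the sentence
    boundaries that make aggregating evidence across premises hard."""
    return " ".join(example.premises), example.hypothesis

if __name__ == "__main__":
    ex = MultiPremiseExample(
        premises=[
            "A man is chopping vegetables at a kitchen counter.",
            "A pot is boiling on the stove.",
            "Plates are laid out on the table.",
        ],
        hypothesis="Someone is preparing a meal.",
        label="entailment",  # invented example, not drawn from the dataset
    )
    premise_text, hypothesis = concat_premises(ex)
    print(premise_text)
    print(hypothesis)
```

The interesting cases for this task are those where no single premise entails the hypothesis on its own but the set of premises does together, which is exactly what a flat concatenation baseline can fail to capture.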


