AdvEntuRe: Adversarial Training for Textual Entailment with Knowledge-Guided Examples

05/12/2018
by Dongyeop Kang, et al.

We consider the problem of learning textual entailment models with limited supervision (5K-10K training examples), and present two complementary approaches for it. First, we propose knowledge-guided adversarial example generators for incorporating large lexical resources in entailment models via only a handful of rule templates. Second, to make the entailment model - a discriminator - more robust, we propose the first GAN-style approach for training it using a natural language example generator that iteratively adjusts based on the discriminator's performance. We demonstrate effectiveness using two entailment datasets, where the proposed methods increase accuracy by 4.7% on SciTail and by 2.8% on a 1% sub-sample of SNLI. Notably, even a single hand-written rule, negate, improves the accuracy on the negation examples in SNLI by 6.1%.
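As a concrete illustration of the two ideas in the abstract, the sketch below shows how a single rule template such as negate can cheaply generate adversarial entailment examples, and how a GAN-style loop can shift sampling weight toward whichever rules currently fool the discriminator. This is a minimal reconstruction from the abstract alone: the toy_discriminator, the rule-weighting scheme, and all names are illustrative assumptions, not the paper's actual models or API.

```python
import random

# Seed entailment pairs: (premise, hypothesis, gold label). Illustrative data.
SEED = [
    ("A man is playing a guitar on stage.",
     "A man is playing a guitar.", "entailment"),
    ("Two dogs run across the snowy field.",
     "Two dogs are running in the snow.", "entailment"),
]

def negate(premise, hypothesis, label):
    """Rule template 'negate': negating an entailed hypothesis yields a
    contradiction, producing a hard adversarial example for free."""
    assert label == "entailment"  # this sketch only applies the rule to entailed pairs
    negated = "It is not true that " + hypothesis[0].lower() + hypothesis[1:]
    return premise, negated, "contradiction"

RULES = {"negate": negate}

def toy_discriminator(premise, hypothesis):
    """Stand-in for the trained entailment model (the 'discriminator').
    It naively predicts from word overlap, so negated hypotheses fool it."""
    overlap = set(premise.lower().split()) & set(hypothesis.lower().split())
    return "entailment" if len(overlap) >= 4 else "contradiction"

def gan_style_round(seed, rules, rule_weights, k=10):
    """One generator round: sample rules in proportion to their weights,
    create k adversarial examples, and upweight the rules whose outputs
    the discriminator misclassifies."""
    names = list(rules)
    fooled = {name: 0 for name in names}
    generated = []
    for _ in range(k):
        name = random.choices(names, weights=[rule_weights[n] for n in names])[0]
        p, h, y = rules[name](*random.choice(seed))
        generated.append((p, h, y))
        if toy_discriminator(p, h) != y:
            fooled[name] += 1
    for name in names:
        rule_weights[name] += fooled[name]  # favor adversarial rules next round
    return generated, rule_weights

weights = {"negate": 1.0}
batch, weights = gan_style_round(SEED, RULES, weights)
print(weights)
for example in batch[:2]:
    print(example)
```

In the paper's setting the discriminator is a full neural entailment model and the generated examples are mixed into its training data each round; the overlap-based classifier above merely stands in to keep the sketch self-contained and runnable.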

Related research

10/09/2017 · Natural Language Inference from Multiple Premises
We define a novel textual entailment task that requires inference over m...

11/09/2015 · Enacting textual entailment and ontologies for automated essay grading in chemical domain
We propose a system for automated essay grading using ontologies and tex...

06/10/2018 · What Knowledge is Needed to Solve the RTE5 Textual Entailment Challenge?
This document gives a knowledge-oriented analysis of about 20 interestin...

06/29/2023 · Evaluating Paraphrastic Robustness in Textual Entailment Models
We present PaRTE, a collection of 1,126 pairs of Recognizing Textual Ent...

05/13/2023 · SCENE: Self-Labeled Counterfactuals for Extrapolating to Negative Examples
Detecting negatives (such as non-entailment relationships, unanswerable ...

12/13/2022 · Modelling Stance Detection as Textual Entailment Recognition and Leveraging Measurement Knowledge from Social Sciences
Stance detection (SD) can be considered a special case of textual entail...

04/24/2020 · Collecting Entailment Data for Pretraining: New Protocols and Negative Results
Textual entailment (or NLI) data has proven useful as pretraining data f...
