Generating Contradictory, Neutral, and Entailing Sentences

03/07/2018
by   Yikang Shen, et al.

Learning distributed sentence representations remains an interesting problem in the field of Natural Language Processing (NLP). We want to learn a model that approximates the conditional latent space over the representations of a logical antecedent of the given statement. In this paper, we propose an approach to generating sentences conditioned on an input sentence and a logical inference label. We do this by modeling the different possibilities for the output sentence as a distribution over the latent representation, which we train using an adversarial objective. We evaluate the model using two state-of-the-art models for the Recognizing Textual Entailment (RTE) task, and measure the BLEU scores against the actual sentences as a probe for the diversity of the sentences produced by our model. The experimental results show that, within our framework, there are clear ways to improve the quality and diversity of the generated sentences.
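The pipeline the abstract describes (encode the input sentence, condition on an inference label, sample from a latent distribution, decode an output sentence) can be sketched minimally as below. All dimensions, weights, and function names are illustrative stand-ins, not the paper's architecture, and the adversarial training objective that shapes the latent distribution is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen only for the sketch.
HIDDEN, LATENT, VOCAB = 16, 8, 50
LABELS = {"entailment": 0, "neutral": 1, "contradiction": 2}

def encode(token_ids, emb):
    # Stand-in sentence encoder: mean of token embeddings.
    return np.tanh(emb[token_ids].mean(axis=0))

def sample_latent(premise_vec, label_id, W, label_emb):
    # Condition the latent on both the premise encoding and the
    # inference label; sampling gives different output sentences
    # for the same input. An adversarial objective (omitted here)
    # would train this distribution.
    cond = np.concatenate([premise_vec, label_emb[label_id]])
    mu = W @ cond
    return mu + 0.1 * rng.standard_normal(LATENT)

def decode_step(z, U):
    # One greedy decoding step: project the latent to vocabulary logits.
    logits = U @ z
    return int(np.argmax(logits))

# Randomly initialized parameters (untrained, for shape illustration only).
emb = rng.standard_normal((VOCAB, HIDDEN))
label_emb = rng.standard_normal((len(LABELS), HIDDEN))
W = rng.standard_normal((LATENT, 2 * HIDDEN))
U = rng.standard_normal((VOCAB, LATENT))

premise = [4, 7, 9]                      # token ids of the input sentence
h = encode(premise, emb)
z = sample_latent(h, LABELS["contradiction"], W, label_emb)
first_token = decode_step(z, U)
```

Because the latent is sampled rather than deterministic, repeated calls with the same premise and label yield different hypotheses, which is what the BLEU-based diversity probe in the abstract measures.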


research
06/04/2016

Generating Natural Language Inference Chains

The ability to reason with natural language is a fundamental prerequisit...
research
07/23/2019

Discourse Marker Augmented Network with Reinforcement Learning for Natural Language Inference

Natural Language Inference (NLI), also known as Recognizing Textual Enta...
research
08/19/2019

Polly Want a Cracker: Analyzing Performance of Parroting on Paraphrase Generation Datasets

Paraphrase generation is an interesting and challenging NLP task which h...
research
11/08/2016

Sentence Ordering and Coherence Modeling using Recurrent Neural Networks

Modeling the structure of coherent texts is a key NLP problem. The task ...
research
03/19/2017

Combinatorial Optimization Methods Applied to the Multi-Sentence Compression Problem (Métodos de Otimização Combinatória Aplicados ao Problema de Compressão MultiFrases)

The Internet has led to a dramatic increase in the amount of available i...
research
07/11/2017

Refining Raw Sentence Representations for Textual Entailment Recognition via Attention

In this paper we present the model used by the team Rivercorners for the...
research
01/18/2018

Natural Language Multitasking: Analyzing and Improving Syntactic Saliency of Hidden Representations

We train multi-task autoencoders on linguistic tasks and analyze the lea...
