Reasoning about Entailment with Neural Attention

09/22/2015 ∙ by Tim Rocktäschel, et al.

While most approaches to automatically recognizing entailment relations have used classifiers with hand-engineered features derived from complex natural language processing pipelines, in practice their performance has been only slightly better than bag-of-word-pair classifiers using only lexical similarity. The only attempt so far to build an end-to-end differentiable neural network for entailment failed to outperform such a simple similarity classifier. In this paper, we propose a neural model that reads two sentences to determine entailment using long short-term memory units. We extend this model with a word-by-word neural attention mechanism that encourages reasoning over entailments of pairs of words and phrases. Furthermore, we present a qualitative analysis of the attention weights produced by this model, demonstrating such reasoning capabilities. On a large entailment dataset this model outperforms the previous best neural model and a classifier with engineered features by a substantial margin. It is the first generic end-to-end differentiable system that achieves state-of-the-art accuracy on a textual entailment dataset.
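
To make the mechanism concrete, here is a minimal PyTorch-style sketch of an LSTM pair with word-by-word attention. It is an illustration under assumed names and dimensions (e.g. EntailmentLSTMWithAttention, hidden_dim=100), not the authors' implementation, and it omits the recurrent attention-memory term of the paper's full formulation:

# Minimal sketch (assumptions, not the authors' code) of an LSTM
# entailment model with word-by-word attention over the premise.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EntailmentLSTMWithAttention(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=100, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # One LSTM reads the premise; a second reads the hypothesis,
        # initialised with the premise LSTM's final state
        # ("conditional encoding").
        self.premise_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.hypothesis_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Attention: project premise and hypothesis states into a shared
        # space, then score each premise position with a vector w.
        self.W_y = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.W_h = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.w = nn.Linear(hidden_dim, 1, bias=False)
        self.classifier = nn.Linear(2 * hidden_dim, n_classes)

    def forward(self, premise, hypothesis):
        # premise: (B, Lp) and hypothesis: (B, Lh) batches of token ids.
        Y, state = self.premise_lstm(self.embed(premise))           # (B, Lp, H)
        H, _ = self.hypothesis_lstm(self.embed(hypothesis), state)  # (B, Lh, H)
        proj_Y = self.W_y(Y)
        for t in range(H.size(1)):                                  # word by word
            h_t = H[:, t, :]                                        # (B, H)
            # Score every premise position against the current hypothesis word.
            M = torch.tanh(proj_Y + self.W_h(h_t).unsqueeze(1))     # (B, Lp, H)
            alpha = F.softmax(self.w(M).squeeze(-1), dim=1)         # (B, Lp)
            r = torch.bmm(alpha.unsqueeze(1), Y).squeeze(1)         # (B, H)
        # Classify from the last attention context and final hypothesis state.
        return self.classifier(torch.cat([r, H[:, -1, :]], dim=1))

On SNLI-style data the three output logits would correspond to entailment, neutral, and contradiction; the per-word softmax over premise positions is the word-by-word attention the abstract refers to, and inspecting alpha yields the attention weights analysed qualitatively in the paper.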

Code Repositories

reasoning_attention

Unofficial implementation of the attention models from this paper on the SNLI dataset.

entailment-neural-attention-lstm-tf

Implementation of "Reasoning about Entailment with Neural Attention" (arXiv:1509.06664).