Attention Boosted Sequential Inference Model

12/05/2018
by Guanyu Li, et al.

Attention mechanisms have proven effective in natural language processing. This paper proposes an attention-boosted natural language inference model named aESIM, which adds word attention and adaptive direction-oriented attention mechanisms to the traditional Bi-LSTM layer of natural language inference models such as ESIM. These additions enable aESIM to learn word representations more effectively and to model local subsentential inference between premise-hypothesis pairs. Empirical studies on the SNLI, MultiNLI and Quora benchmarks show that aESIM outperforms the original ESIM model.
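The word-attention component described above can be illustrated as additive attention over Bi-LSTM hidden states: each word's hidden state is scored, the scores are normalized with a softmax, and the states are reweighted accordingly. The following is a minimal NumPy sketch under stated assumptions, not the paper's exact formulation; the parameter names `W` and `v` stand in for hypothetical learned attention parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def word_attention(H, W, v):
    """Additive word attention over Bi-LSTM hidden states.

    H: (T, 2d) hidden states for T words (forward+backward concatenated).
    W: (2d, a) and v: (a,) are hypothetical learned attention parameters.
    Returns the per-word attention weights and the reweighted states.
    """
    scores = np.tanh(H @ W) @ v          # one scalar score per word, shape (T,)
    alpha = softmax(scores)              # attention weights sum to 1 over words
    return alpha, alpha[:, None] * H     # each hidden state scaled by its weight

# Toy example: 5 words, Bi-LSTM output size 2d = 8, attention size a = 4.
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))
W = rng.normal(size=(8, 4))
v = rng.normal(size=(4,))
alpha, H_att = word_attention(H, W, v)
```

In the full model, the reweighted states would feed into ESIM's subsequent local-inference and composition layers; this sketch only shows the attention reweighting step itself.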

