Distance-based Self-Attention Network for Natural Language Inference

12/06/2017
by Jinbae Im, et al.

The attention mechanism has been used as an ancillary means to help RNNs or CNNs. However, the Transformer (Vaswani et al., 2017) recently achieved state-of-the-art performance in machine translation, with a dramatic reduction in training time, by using attention alone. Motivated by the Transformer, the Directional Self-Attention Network (Shen et al., 2017), a fully attention-based sentence encoder, was proposed. It showed good performance on various datasets by using forward and backward directional information within a sentence. However, that study did not consider the distance between words, an important feature for learning local dependency, which helps in understanding the context of the input text. We propose the Distance-based Self-Attention Network, which takes word distance into account through a simple distance mask, modeling local dependency without losing the ability to model global dependency that attention inherently has. Our model performs well on NLI data and sets a new state-of-the-art result on SNLI. Additionally, we show that our model is particularly strong on long sentences and documents.
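To make the idea of a distance mask concrete, the sketch below adds a distance-dependent penalty to standard scaled dot-product self-attention logits, so that distant token pairs are down-weighted unless their content match is strong. This is a minimal NumPy illustration under stated assumptions, not the authors' exact formulation: the penalty shape (a scalar alpha times the clipped token distance), the clip_dist threshold, and all names are hypothetical and chosen only for illustration.

import numpy as np

def distance_masked_self_attention(x, w_q, w_k, w_v, alpha=1.0, clip_dist=10):
    """Scaled dot-product self-attention with a distance-based penalty.

    Illustrative sketch: the logit for a query/key pair (i, j) is reduced by
    alpha * min(|i - j|, clip_dist), so nearby tokens are penalized less than
    distant ones. alpha and clip_dist are assumed hyperparameters, not values
    from the paper.
    """
    n, _ = x.shape
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]

    # Content-based logits (standard scaled dot-product attention).
    logits = q @ k.T / np.sqrt(d_k)

    # Distance mask: clipped absolute position difference, scaled by alpha
    # and subtracted from the logits to favor local dependencies.
    idx = np.arange(n)
    dist = np.abs(idx[:, None] - idx[None, :])
    logits = logits - alpha * np.minimum(dist, clip_dist)

    # Softmax over keys, then weighted sum of values.
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy usage: 5 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = distance_masked_self_attention(x, w_q, w_k, w_v, alpha=0.5)
print(out.shape)  # (5, 8)

Because the penalty is added to the logits rather than hard-masking them, global dependencies remain reachable: a sufficiently strong content match can still outweigh the distance penalty.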

Related research

- DiSAN: Directional Self-Attention Network for RNN/CNN-Free Language Understanding (09/14/2017)
- Self-Attention Networks for Intent Detection (06/28/2020)
- Reinforced Self-Attention Network: a Hybrid of Hard and Soft Attention for Sequence Modeling (01/31/2018)
- Integrating Dependency Tree Into Self-attention for Sentence Representation (03/11/2022)
- Mask Attention Networks: Rethinking and Strengthen Transformer (03/25/2021)
- Multiple Structural Priors Guided Self Attention Network for Language Understanding (12/29/2020)
- Feature Importance Estimation with Self-Attention Networks (02/11/2020)
