Dynamic Self-Attention: Computing Attention over Words Dynamically for Sentence Embedding

08/22/2018
by Deunsol Yoon, et al.

In this paper, we propose Dynamic Self-Attention (DSA), a new self-attention mechanism for sentence embedding. We design DSA by adapting the dynamic routing of capsule networks (Sabour et al., 2017) to natural language processing. DSA attends to informative words with a dynamically computed weight vector. We achieve new state-of-the-art results among sentence encoding methods on the Stanford Natural Language Inference (SNLI) dataset with the fewest parameters, while showing competitive results on the Stanford Sentiment Treebank (SST) dataset.
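The abstract does not include code, but the core idea can be illustrated with a small sketch. The following PyTorch snippet (all function names, shapes, and hyperparameters are assumptions, not the authors' implementation) shows how attention over words can be computed with a weight vector that is updated iteratively by a routing-style agreement step, so that informative words accumulate attention across iterations.

import torch
import torch.nn.functional as F

def dynamic_self_attention(H, num_iters=3):
    """Sketch of dynamic routing-style attention over words.
    H: word representations of shape (batch, seq_len, dim).
    Returns a fixed-size sentence embedding of shape (batch, dim)."""
    batch, seq_len, dim = H.shape
    # Attention logits start at zero, as in dynamic routing.
    logits = H.new_zeros(batch, seq_len)
    z = H.new_zeros(batch, dim)
    for _ in range(num_iters):
        # Normalize logits into attention weights over words.
        alpha = F.softmax(logits, dim=1)                  # (batch, seq_len)
        # Weighted sum of word vectors gives a candidate sentence vector.
        s = torch.einsum('bs,bsd->bd', alpha, H)          # (batch, dim)
        # Squashing keeps the vector's direction but bounds its norm.
        norm = s.norm(dim=1, keepdim=True)
        z = (norm ** 2 / (1.0 + norm ** 2)) * s / (norm + 1e-8)
        # Agreement between each word and the sentence vector updates the
        # logits, so informative words get more attention next iteration.
        logits = logits + torch.einsum('bsd,bd->bs', H, z)
    return z

# Usage example with random word vectors.
H = torch.randn(2, 10, 300)
sentence_emb = dynamic_self_attention(H)
print(sentence_emb.shape)  # torch.Size([2, 300])

The dynamic weight vector here is the iteratively refined sentence vector z; in the paper this mechanism replaces a fixed, learned attention query, which is what lets the attention distribution adapt to each input sentence.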

Related research:

- A Structured Self-attentive Sentence Embedding (03/09/2017): This paper proposes a new model for extracting an interpretable sentence...
- Text Information Aggregation with Centrality Attention (11/16/2020): A lot of natural language processing problems need to encode the text se...
- Parallel Scheduling Self-attention Mechanism: Generalization and Optimization (12/02/2020): Over the past few years, self-attention is shining in the field of deep ...
- Is It Worth the Attention? A Comparative Evaluation of Attention Layers for Argument Unit Segmentation (06/24/2019): Attention mechanisms have seen some success for natural language process...
- Construction and Evaluation of a Self-Attention Model for Semantic Understanding of Sentence-Final Particles (10/01/2022): Sentence-final particles serve an essential role in spoken Japanese beca...
- Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention (05/30/2016): In this paper, we proposed a sentence encoding-based model for recognizi...
- Variational Self-attention Model for Sentence Representation (12/30/2018): This paper proposes a variational self-attention model (VSAM) that emplo...
