Rethinking Self-Attention: An Interpretable Self-Attentive Encoder-Decoder Parser

11/10/2019
by Khalil Mrini, et al.

Attention mechanisms have improved the performance of NLP tasks while providing a measure of model interpretability. Self-attention is now widely used in NLP models; however, it is difficult to interpret because of the numerous attention distributions involved. We hypothesize that model representations can benefit from label-specific information while also facilitating interpretation of predictions. We introduce the Label Attention Layer: a new form of self-attention in which attention heads represent labels. We validate our hypothesis with experiments in constituency and dependency parsing, and show that our new model obtains state-of-the-art results for both tasks on the English Penn Treebank. Our neural parser achieves a 96.34 F1 score for constituency parsing, and 97.33 UAS and 96.29 LAS for dependency parsing. Additionally, our model requires fewer layers, and therefore fewer parameters, than existing work.
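The abstract describes the Label Attention Layer only at a high level. Below is a minimal PyTorch sketch of one plausible reading of "attention heads represent labels": each head owns a learned query vector tied to a single label, so that head's softmax distribution over the words can be read off directly as that label's attention. All names here (LabelAttention, d_head, the einsum layout) are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelAttention(nn.Module):
    """Hypothetical sketch: one attention head per label.

    Instead of projecting queries from the input, each head i owns a
    single learned query vector q_i, so the softmax weights of head i
    can be interpreted as how much each word contributes to label i.
    """

    def __init__(self, num_labels: int, d_model: int, d_head: int):
        super().__init__()
        # One learned query vector per label (num_labels heads in total).
        self.queries = nn.Parameter(torch.randn(num_labels, d_head))
        self.key_proj = nn.Linear(d_model, d_head, bias=False)
        self.value_proj = nn.Linear(d_model, d_head, bias=False)
        self.scale = d_head ** -0.5

    def forward(self, x: torch.Tensor):
        # x: (batch, seq_len, d_model) word representations from the encoder.
        k = self.key_proj(x)    # (batch, seq, d_head)
        v = self.value_proj(x)  # (batch, seq, d_head)
        # Scaled dot-product of each label query against every word's key:
        # (labels, d_head) x (batch, seq, d_head) -> (batch, labels, seq)
        scores = torch.einsum('ld,bsd->bls', self.queries, k) * self.scale
        attn = F.softmax(scores, dim=-1)  # one distribution per label head
        # Each head summarizes the sentence from its label's viewpoint.
        head_out = torch.einsum('bls,bsd->bld', attn, v)  # (batch, labels, d_head)
        return head_out, attn

# Example (hypothetical sizes): 112 labels over a batch of encoder outputs.
layer = LabelAttention(num_labels=112, d_model=512, d_head=64)
x = torch.randn(2, 10, 512)   # (batch=2, words=10, d_model=512)
out, attn = layer(x)          # out: (2, 112, 64), attn: (2, 112, 10)

Under this reading, interpretability comes from the fixed head-to-label mapping: inspecting attn[b, l] shows directly which words head l, and hence label l, attends to, with no need to aggregate over anonymous heads.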


research
05/02/2018

Constituency Parsing with a Self-Attentive Encoder

We demonstrate that replacing an LSTM encoder with a self-attentive arch...
research
12/25/2021

Combining Improvements for Exploiting Dependency Trees in Neural Semantic Parsing

The dependency tree of a natural language sentence can capture the inter...
research
12/31/2018

Multilingual Constituency Parsing with Self-Attention and Pre-Training

We extend our previous work on constituency parsing (Kitaev and Klein, 2...
research
06/24/2017

Encoder-Decoder Shift-Reduce Syntactic Parsing

Starting from NMT, encoder-decoder neural networks have been used for ...
research
07/09/2021

Levi Graph AMR Parser using Heterogeneous Attention

Coupled with biaffine decoders, transformers have been effectively adapt...
research
03/24/2022

Probing for Labeled Dependency Trees

Probing has become an important tool for analyzing representations in Na...
research
10/18/2018

Reduction of Parameter Redundancy in Biaffine Classifiers with Symmetric and Circulant Weight Matrices

Currently, the biaffine classifier has been attracting attention as a me...
