Multi-Head Self-Attention with Role-Guided Masks

12/22/2020
by Dongsheng Wang, et al.

The state of the art in learning meaningful semantic representations of words is the Transformer model and its attention mechanisms. Simply put, the attention mechanisms learn to attend to specific parts of the input, dispensing with recurrence and convolutions. While some of the learned attention heads have been found to play linguistically interpretable roles, they can be redundant or prone to errors. We propose a method to guide the attention heads towards roles identified in prior work as important. We do this by defining role-specific masks that constrain the heads to attend to specific parts of the input, so that different heads are designed to play different roles. Experiments on text classification and machine translation using seven different datasets show that our method outperforms competitive attention-based, CNN, and RNN baselines.
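
The core idea can be illustrated with a short, self-contained sketch. The code below is not the authors' released implementation; it is a minimal PyTorch reading of the abstract in which each head's attention scores are hard-masked before the softmax by a role-specific binary mask. The role set (previous token, next token, self, unconstrained "global"), the cycling assignment of roles to heads, and the names RoleGuidedSelfAttention and build_role_masks are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of role-guided masked self-attention in PyTorch.
# NOT the authors' released code; role definitions and the head-to-role
# assignment below are illustrative assumptions.
import math

import torch
import torch.nn as nn


def build_role_masks(seq_len: int) -> torch.Tensor:
    """Return a (num_roles, seq_len, seq_len) stack of binary role masks.
    The diagonal is kept in every mask so no query row is fully blocked."""
    idx = torch.arange(seq_len)
    diff = idx.unsqueeze(1) - idx.unsqueeze(0)      # diff[i, j] = i - j
    eye = torch.eye(seq_len)
    prev_mask = (diff == 1).float() + eye           # attend to the previous token (+ self)
    next_mask = (diff == -1).float() + eye          # attend to the next token (+ self)
    self_mask = eye.clone()                         # attend to the token itself
    global_mask = torch.ones(seq_len, seq_len)      # unconstrained head
    return torch.stack([prev_mask, next_mask, self_mask, global_mask]).clamp(max=1.0)


class RoleGuidedSelfAttention(nn.Module):
    """Multi-head self-attention whose heads are constrained by role masks."""

    def __init__(self, d_model: int, num_heads: int):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads = num_heads
        self.d_head = d_model // num_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch, seq_len, d_model = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        def split(t: torch.Tensor) -> torch.Tensor:
            # (batch, seq, d_model) -> (batch, heads, seq, d_head)
            return t.reshape(batch, seq_len, self.num_heads, self.d_head).transpose(1, 2)

        q, k, v = split(q), split(k), split(v)
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)   # (batch, heads, seq, seq)

        # One role mask per head (cycling if there are more heads than roles);
        # disallowed positions are blocked before the softmax.
        roles = build_role_masks(seq_len).to(x.device)
        head_idx = torch.arange(self.num_heads, device=x.device) % roles.size(0)
        head_masks = roles[head_idx]                                # (heads, seq, seq)
        scores = scores.masked_fill(head_masks.unsqueeze(0) == 0, float("-inf"))

        attn = torch.softmax(scores, dim=-1)
        ctx = (attn @ v).transpose(1, 2).reshape(batch, seq_len, d_model)
        return self.out(ctx)


if __name__ == "__main__":
    layer = RoleGuidedSelfAttention(d_model=512, num_heads=8)
    out = layer(torch.randn(2, 16, 512))
    print(out.shape)  # torch.Size([2, 16, 512])
```

Hard masking with -inf forces each constrained head to put zero attention weight outside its role's positions; a softer variant (e.g., adding the mask as a bias to the scores) would be an equally plausible reading of the abstract.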
