
Multi-Head Self-Attention with Role-Guided Masks

by Dongsheng Wang et al.

The state of the art in learning meaningful semantic representations of words is the Transformer model and its attention mechanisms. Simply put, the attention mechanisms learn to attend to specific parts of the input, dispensing with recurrence and convolutions. While some of the learned attention heads have been found to play linguistically interpretable roles, they can be redundant or prone to errors. We propose a method to guide the attention heads towards roles identified in prior work as important. We do this by defining role-specific masks that constrain the heads to attend to specific parts of the input, such that different heads are designed to play different roles. Experiments on text classification and machine translation using 7 different datasets show that our method outperforms competitive attention-based, CNN, and RNN baselines.
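The core idea of role-specific masks can be illustrated with a minimal sketch: each head is given a boolean mask that restricts which positions it may attend to, and attention scores at disallowed positions are set to a large negative value before the softmax. The specific roles below (previous token, next token, local window, global) and all function names are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def role_masks(seq_len):
    """Hypothetical role-specific masks (True = position may be attended to).
    Self-attention is always allowed so every row has at least one valid slot."""
    idx = np.arange(seq_len)
    self_mask = np.eye(seq_len, dtype=bool)
    prev_tok = np.eye(seq_len, k=-1, dtype=bool) | self_mask  # previous token
    next_tok = np.eye(seq_len, k=1, dtype=bool) | self_mask   # next token
    local = np.abs(idx[:, None] - idx[None, :]) <= 2          # small local window
    global_ = np.ones((seq_len, seq_len), dtype=bool)         # unconstrained head
    return [prev_tok, next_tok, local, global_]

def masked_attention(Q, K, V, mask):
    """Scaled dot-product attention with a boolean role mask."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores = np.where(mask, scores, -1e9)  # block disallowed positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def role_guided_mha(X, masks, rng):
    """Multi-head attention where each head carries its own role mask.
    Random projections stand in for learned parameters in this sketch."""
    d = X.shape[-1]
    head_outputs = []
    for mask in masks:
        Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
        head_outputs.append(masked_attention(X @ Wq, X @ Wk, X @ Wv, mask))
    return np.concatenate(head_outputs, axis=-1)  # concat heads, as in Transformer
```

In this sketch the masks are fixed by design rather than learned, which is what distinguishes role-guided heads from ordinary multi-head attention, where every head is free to attend anywhere.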


Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned

Multi-head self-attention is a key component of the Transformer, a state...

A Dynamic Head Importance Computation Mechanism for Neural Machine Translation

Multiple parallel attention mechanisms that use multiple attention heads...

Other Roles Matter! Enhancing Role-Oriented Dialogue Summarization via Role Interactions

Role-oriented dialogue summarization is to generate summaries for differ...

Hard-Coded Gaussian Attention for Neural Machine Translation

Recent work has questioned the importance of the Transformer's multi-hea...

Character-Level Translation with Self-attention

We explore the suitability of self-attention models for character-level ...

Font Shape-to-Impression Translation

Different fonts have different impressions, such as elegant, scary, and ...

From Community to Role-based Graph Embeddings

Roles are sets of structurally similar nodes that are more similar to no...