
Multi-Head Self-Attention with Role-Guided Masks

12/22/2020
by Dongsheng Wang et al.

The state of the art in learning meaningful semantic representations of words is the Transformer model and its attention mechanisms. Simply put, the attention mechanisms learn to attend to specific parts of the input, dispensing with recurrence and convolutions. While some of the learned attention heads have been found to play linguistically interpretable roles, they can be redundant or prone to errors. We propose a method to guide the attention heads toward roles identified in prior work as important. We do this by defining role-specific masks that constrain each head to attend to specific parts of the input, so that different heads are designed to play different roles. Experiments on text classification and machine translation across seven datasets show that our method outperforms competitive attention-based, CNN, and RNN baselines.
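The abstract does not specify which roles or mask shapes the paper uses, but the core mechanism, a fixed role-specific mask applied per head before the attention softmax, can be sketched as follows. The roles below (previous token, next token, local window, global) are illustrative assumptions inspired by prior analyses of attention heads, not the paper's actual definitions:

```python
import torch
import torch.nn.functional as F


def role_masks(seq_len: int) -> dict:
    """Boolean masks per role; True marks positions a head may attend to.

    These role definitions are illustrative assumptions; the paper's
    actual mask set may differ.
    """
    idx = torch.arange(seq_len)
    rel = idx[None, :] - idx[:, None]  # rel[i, j] = j - i
    eye = rel == 0                     # keep the diagonal so no row is empty
    return {
        "prev": (rel == -1) | eye,     # token i attends to i-1 (and itself)
        "next": (rel == 1) | eye,      # token i attends to i+1 (and itself)
        "local": rel.abs() <= 2,       # a small window around each token
        "global": torch.ones(seq_len, seq_len, dtype=torch.bool),
    }


def masked_head_attention(q, k, v, mask):
    """Scaled dot-product attention for one head under a role mask.

    q, k, v: (seq_len, d_head); mask: (seq_len, seq_len), boolean.
    Disallowed positions are set to -inf before the softmax, so the head
    distributes its attention only over positions its role permits.
    """
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v


# Toy usage: give each of four heads a different role-specific mask.
seq_len, d_head = 6, 8
masks = role_masks(seq_len)
heads = [
    masked_head_attention(torch.randn(seq_len, d_head),
                          torch.randn(seq_len, d_head),
                          torch.randn(seq_len, d_head),
                          mask)
    for mask in masks.values()
]
output = torch.cat(heads, dim=-1)  # concatenate heads as in standard MHA
```

Because each mask keeps at least the diagonal, every softmax row stays well defined; in a full model the concatenated head outputs would then pass through the usual output projection of multi-head attention.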

