Transformer Dissection: An Unified Understanding for Transformer's Attention via the Lens of Kernel

by Yao-Hung Hubert Tsai, et al. (Kyoto University and Carnegie Mellon University)

Transformer is a powerful architecture that achieves superior performance on various sequence learning tasks, including neural machine translation, language understanding, and sequence prediction. At the core of the Transformer is the attention mechanism, which concurrently processes all inputs in a stream. In this paper, we present a new formulation of attention via the lens of the kernel. More precisely, we observe that attention can be seen as applying a kernel smoother over the inputs, with the kernel scores being the similarities between inputs. This new formulation gives us a better way to understand individual components of the Transformer's attention, such as a better way to integrate positional embeddings. Another important advantage of our kernel-based formulation is that it opens up a larger space for composing Transformer's attention. As an example, we propose a new variant of Transformer's attention that models the input as a product of symmetric kernels. This approach achieves performance competitive with the current state-of-the-art model at a lower computational cost. In our experiments, we empirically study different kernel construction strategies on two widely used tasks: neural machine translation and sequence prediction.
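
To make the kernel-smoother reading concrete, here is a minimal NumPy sketch (not the authors' code; the function names are illustrative): attention is written as a kernel-weighted average over the values, with the weights normalized over the keys. Choosing the exponentiated scaled dot product as the kernel recovers standard softmax attention. The product kernel at the end shows, under an assumed RBF positional kernel rather than the paper's specific choices, how this view lets content and positional similarities be composed multiplicatively.

```python
import numpy as np

def kernel_smoother_attention(queries, keys, values, kernel):
    """Attention as a kernel smoother (Nadaraya-Watson form):
    each output is a kernel-weighted average of the values,
    with the weights normalized over the keys."""
    scores = kernel(queries, keys)                       # (n_q, n_k) similarities
    weights = scores / scores.sum(axis=-1, keepdims=True)
    return weights @ values                              # (n_q, d_v)

def exp_dot_kernel(q, k):
    """Exponentiated scaled dot product; normalizing these scores
    over the keys reproduces softmax attention exactly."""
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)
    logits -= logits.max(axis=-1, keepdims=True)         # numerical stability
    return np.exp(logits)

def product_kernel(q, k, pos_q, pos_k, sigma=1.0):
    """Product of a content kernel and a positional kernel, one way the
    kernel view suggests integrating positional information. The RBF
    positional kernel here is an illustrative assumption."""
    content = exp_dot_kernel(q, k)
    dist2 = (pos_q[:, None] - pos_k[None, :]) ** 2
    positional = np.exp(-dist2 / (2 * sigma ** 2))
    return content * positional

rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 16))   # 5 queries
K = rng.normal(size=(7, 16))   # 7 keys
V = rng.normal(size=(7, 32))   # 7 values
out = kernel_smoother_attention(Q, K, V, exp_dot_kernel)
out_pos = kernel_smoother_attention(
    Q, K, V,
    lambda q, k: product_kernel(q, k, np.arange(5.0), np.arange(7.0)))
```

In this formulation, changing the attention mechanism reduces to swapping in a different kernel function, which is exactly the larger design space the abstract refers to.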

