RoFormer: Enhanced Transformer with Rotary Position Embedding

04/20/2021
by Jianlin Su, et al.

Position encoding in the transformer architecture provides supervision for dependency modeling between elements at different positions in the sequence. We investigate various methods of encoding positional information in transformer-based language models and propose a novel implementation named Rotary Position Embedding (RoPE). The proposed RoPE encodes absolute positional information with a rotation matrix and naturally incorporates explicit relative position dependency in the self-attention formulation. Notably, RoPE has valuable properties: it extends flexibly to any sequence length, its inter-token dependency decays with increasing relative distance, and it can equip linear self-attention with relative position encoding. As a result, the enhanced transformer with rotary position embedding, or RoFormer, achieves superior performance on tasks with long texts. We release the theoretical analysis along with some preliminary experimental results on Chinese data. Ongoing experiments on English benchmarks will be updated soon.
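The abstract only states the idea; the sketch below is a rough, illustrative NumPy implementation of rotary position embedding as described above: feature dimensions are paired up and each pair at position m is rotated by an angle proportional to m. The function name apply_rope, the base of 10000, and the tensor shapes are assumptions for illustration, not taken from the RoFormer codebase.

```python
import numpy as np

def apply_rope(x, base=10000.0):
    """Apply a rotary position embedding to x of shape (seq_len, dim).

    Each consecutive feature pair (x_{2i}, x_{2i+1}) at position m is rotated
    by the angle m * theta_i, where theta_i = base**(-2i/dim).
    """
    seq_len, dim = x.shape
    assert dim % 2 == 0, "feature dimension must be even"
    # Per-pair rotation frequencies theta_i.
    theta = base ** (-np.arange(0, dim, 2) / dim)        # (dim/2,)
    # Rotation angle for every (position, pair) combination: m * theta_i.
    angles = np.outer(np.arange(seq_len), theta)         # (seq_len, dim/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]                      # split into pairs
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin                   # 2-D rotation of each pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Illustrative usage with assumed shapes: 8 positions, head dimension 64.
rng = np.random.default_rng(0)
q = rng.standard_normal((8, 64))                         # queries
k = rng.standard_normal((8, 64))                         # keys
scores = apply_rope(q) @ apply_rope(k).T                 # attention logits with RoPE
```

Because both queries and keys are rotated by angles tied to their absolute positions before the dot product, each score depends only on the content vectors and the position difference between query and key, which is the relative-position property the abstract refers to.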

Related research

07/13/2021
Conformer-based End-to-end Speech Recognition With Rotary Position Embedding
Transformer-based end-to-end speech recognition models have received con...

09/06/2021
PermuteFormer: Efficient Relative Position Encoding for Long Sequences
A recent variation of Transformer, Performer, scales Transformer to long...

09/27/2021
Multiplicative Position-aware Transformer Models for Language Understanding
Transformer models, which leverage architectural improvements like self-...

03/30/2022
Transformer Language Models without Positional Encodings Still Learn Positional Information
Transformers typically require some form of positional encoding, such as...

06/03/2021
The Case for Translation-Invariant Self-Attention in Transformer-Based Language Models
Mechanisms for encoding positional information are central for transform...

09/01/2019
Self-Attention with Structural Position Representations
Although self-attention networks (SANs) have advanced the state-of-the-a...

08/30/2021
Shatter: An Efficient Transformer Encoder with Single-Headed Self-Attention and Relative Sequence Partitioning
The highly popular Transformer architecture, based on self-attention, is...