SHAPE: Shifted Absolute Position Embedding for Transformers

09/13/2021
by Shun Kiyono, et al.

Position representation is crucial for building position-aware representations in Transformers. Existing position representations suffer either from a lack of generalization to test data with unseen lengths or from high computational cost. We investigate shifted absolute position embedding (SHAPE) to address both issues. The basic idea of SHAPE is to achieve shift invariance, which is a key property of recent successful position representations, by randomly shifting absolute positions during training. We demonstrate that SHAPE is empirically comparable to its counterpart while being simpler and faster.
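The mechanism described in the abstract, shifting the absolute position indices by a random offset during training and leaving them unshifted at inference, can be illustrated with a short sketch. The PyTorch module below is not the authors' implementation: the sinusoidal embedding table, the per-sequence sampling of the offset, and the `max_shift` hyperparameter are assumptions made for illustration only.

```python
import torch
import torch.nn as nn


class ShiftedAbsolutePositionEmbedding(nn.Module):
    """Sketch of SHAPE: absolute position embeddings whose indices are
    offset by a random integer k during training (k = 0 at inference).
    The offset range and the sinusoidal table are illustrative choices."""

    def __init__(self, d_model: int, max_len: int = 4096, max_shift: int = 100):
        super().__init__()
        self.max_shift = max_shift
        # Standard sinusoidal table covering positions 0 .. max_len + max_shift - 1,
        # so a shifted index never falls outside the table.
        position = torch.arange(max_len + max_shift).unsqueeze(1)
        div_term = torch.exp(
            torch.arange(0, d_model, 2) * (-torch.log(torch.tensor(10000.0)) / d_model)
        )
        pe = torch.zeros(max_len + max_shift, d_model)
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        self.register_buffer("pe", pe)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        batch, seq_len, _ = x.shape
        if self.training:
            # One random offset per sequence in the batch (assumed sampling scheme).
            k = torch.randint(0, self.max_shift + 1, (batch,), device=x.device)
        else:
            # At test time the offset is zero, i.e. plain absolute positions.
            k = torch.zeros(batch, dtype=torch.long, device=x.device)
        positions = torch.arange(seq_len, device=x.device).unsqueeze(0) + k.unsqueeze(1)
        return x + self.pe[positions]
```

Because the model only ever sees positions up to a random shift, it is discouraged from relying on the absolute value of an index, which is how the random shift encourages the shift-invariance property the abstract refers to.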

Related research

09/28/2020  Improve Transformer Models with Better Relative Position Embeddings
Transformer architectures rely on explicit position encodings in order t...

05/26/2023  Improving Position Encoding of Transformers for Multivariate Time Series Classification
Transformers have demonstrated outstanding performance in many applicati...

05/31/2023  The Impact of Positional Encoding on Length Generalization in Transformers
Length generalization, the ability to generalize from small training con...

03/07/2016  Position paper: Towards an observer-oriented theory of shape comparison
In this position paper we suggest a possible metric approach to shape co...

05/07/2020  How Can CNNs Use Image Position for Segmentation?
Convolution is an equivariant operation, and image position does not aff...

12/27/2019  Encoding word order in complex embeddings
Sequential word order is important when processing text. Currently, neur...

12/31/2020  Shortformer: Better Language Modeling using Shorter Inputs
We explore the benefits of decreasing the input length of transformers. ...
