The Curious Case of Absolute Position Embeddings

10/23/2022
by Koustuv Sinha et al.

Transformer language models encode the notion of word order using positional information. Most commonly, this positional information is represented by absolute position embeddings (APEs), which are learned from the pretraining data. However, in natural language it is not absolute but relative position that matters, and the extent to which APEs can capture this type of information has not been investigated. In this work, we observe that models trained with APEs over-rely on positional information, to the point that they break down when subjected to sentences with shifted position information. Specifically, when models are given sentences that start from a non-zero position (excluding the effect of priming), they exhibit noticeably degraded performance on zero- to full-shot tasks, across a range of model families and model sizes. Our findings raise questions about the efficacy of APEs in modeling the relativity of position information, and invite further introspection into the sentence and word order processing strategies employed by these models.
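The position-shift manipulation described in the abstract can be sketched in a few lines. The snippet below is illustrative only and is not the authors' code: it assumes a Hugging Face GPT-2 checkpoint (which uses learned absolute position embeddings), and the perplexity helper, the example sentence, and the offset of 300 are arbitrary choices made for this sketch. It simply scores the same sentence once with positions starting at 0 and once starting at a non-zero offset.

    # Illustrative sketch (not the paper's code): probe sensitivity to shifted
    # absolute positions by offsetting position_ids for an otherwise identical input.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def perplexity(text: str, offset: int = 0) -> float:
        """Perplexity of `text` when its first token sits at absolute position `offset`."""
        input_ids = tokenizer(text, return_tensors="pt")["input_ids"]
        # Positions [offset, offset+1, ...] instead of the default [0, 1, ...]
        position_ids = torch.arange(offset, offset + input_ids.size(1)).unsqueeze(0)
        with torch.no_grad():
            out = model(input_ids, position_ids=position_ids, labels=input_ids)
        return torch.exp(out.loss).item()

    sentence = "The quick brown fox jumps over the lazy dog."
    print("start at position 0:  ", perplexity(sentence, offset=0))
    print("start at position 300:", perplexity(sentence, offset=300))  # shifted start

If APEs captured only relative order, the two scores would be close; the paper's finding is that performance degrades noticeably under such shifts.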


Related research

03/21/2022 · Word Order Does Matter (And Shuffled Language Models Know It)
Recent studies have shown that language models pretrained and/or fine-tu...

11/08/2022 · Word Order Matters when you Increase Masking
Word order, an essential property of natural languages, is injected in T...

03/30/2022 · Transformer Language Models without Positional Encodings Still Learn Positional Information
Transformers typically require some form of positional encoding, such as...

01/22/2020 · How Much Position Information Do Convolutional Neural Networks Encode?
In contrast to fully connected networks, Convolutional Neural Networks (...

05/07/2020 · How Can CNNs Use Image Position for Segmentation?
Convolution is an equivariant operation, and image position does not aff...

01/28/2021 · Position, Padding and Predictions: A Deeper Look at Position Information in CNNs
In contrast to fully connected networks, Convolutional Neural Networks (...

09/28/2020 · Improve Transformer Models with Better Relative Position Embeddings
Transformer architectures rely on explicit position encodings in order t...
