Position Prediction as an Effective Pretraining Strategy

07/15/2022
by Shuangfei Zhai, et al.

Transformers have gained increasing popularity in a wide range of applications, including Natural Language Processing (NLP), Computer Vision, and Speech Recognition, because of their powerful representational capacity. However, harnessing this representational capacity effectively requires a large amount of data, strong regularization, or both, to mitigate overfitting. Recently, the power of the Transformer has been unlocked by self-supervised pretraining strategies based on masked autoencoders, which reconstruct masked inputs either directly or contrastively from the unmasked content. This pretraining strategy, used in BERT models in NLP, Wav2Vec models in Speech, and, more recently, MAE models in Vision, forces the model to learn relationships between the content in different parts of the input through autoencoding-related objectives. In this paper, we propose a novel but surprisingly simple alternative to content reconstruction: predicting locations from content, without providing positional information for it. Doing so requires the Transformer to understand the positional relationships between different parts of the input from their content alone. This admits an efficient implementation in which the pretext task is a classification problem over all possible positions for each input token. We experiment on both Vision and Speech benchmarks, where our approach brings improvements over strong supervised training baselines and is comparable to modern unsupervised/self-supervised pretraining methods. Our method also enables Transformers trained without position embeddings to outperform ones trained with full position information.
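The pretext task described above can be sketched in a few lines: feed content embeddings to a Transformer encoder that receives no position embeddings, and train a per-token classifier over all possible positions with a standard cross-entropy loss. The PyTorch snippet below is a minimal illustration of this idea, not the paper's exact implementation; the class name PositionPredictor and all hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

class PositionPredictor(nn.Module):
    def __init__(self, dim=256, num_tokens=196, depth=4, heads=8):
        super().__init__()
        # The encoder receives no position embeddings, so positional
        # relationships must be inferred from content alone.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # Per-token classification head over all possible positions.
        self.head = nn.Linear(dim, num_tokens)

    def forward(self, tokens):
        # tokens: (batch, num_tokens, dim) content embeddings, e.g. image
        # patch embeddings or speech-frame embeddings.
        return self.head(self.encoder(tokens))  # (batch, num_tokens, num_tokens)

# One pretraining step: each token's target is its true position index.
batch, num_tokens, dim = 8, 196, 256
model = PositionPredictor(dim=dim, num_tokens=num_tokens)
tokens = torch.randn(batch, num_tokens, dim)
logits = model(tokens)
targets = torch.arange(num_tokens).expand(batch, -1)  # true positions
loss = nn.CrossEntropyLoss()(logits.reshape(-1, num_tokens),
                             targets.reshape(-1))
loss.backward()
```

Because the encoder sees no position embeddings it is permutation-equivariant, so feeding tokens in their natural order with index targets is equivalent to shuffling them and predicting the original positions.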

