Rethinking the Positional Encoding in Language Pre-training

06/28/2020
by Guolin Ke, et al.

How to explicitly encode positional information into neural networks is an important problem in natural language processing. In the Transformer model, positional information is simply encoded as embedding vectors added in the input layer, or as a bias term in the self-attention module. In this work, we investigate the problems with these previous formulations and propose a new positional encoding method for BERT called Transformer with Untied Positional Encoding (TUPE). Unlike other approaches, TUPE uses only the word embeddings as input. In the self-attention module, the word correlation and the positional correlation are computed separately with different parameterizations and then added together. This design removes the noisy word-position correlation and gives the model more expressiveness to characterize the relationships between words and between positions by using different projection matrices. Furthermore, TUPE unties the [CLS] symbol from the other positions, giving it a more specific role: capturing the global representation of the sentence. Extensive experiments and ablation studies on the GLUE benchmark demonstrate the effectiveness and efficiency of the proposed method: TUPE outperforms several baselines on almost all tasks by a large margin. In particular, it achieves a higher score than the baselines while using only 30% of the pre-training computational cost. We release our code at https://github.com/guolinke/TUPE.
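
The following is a minimal, single-head PyTorch sketch of the untied attention idea described in the abstract: word-to-word scores and position-to-position scores are computed with separate projection matrices and then summed, with no word-position cross terms. It is an illustration under stated assumptions, not the authors' released implementation (see the linked repository for that); class and parameter names are hypothetical, and multi-head attention as well as the [CLS] untying are omitted for brevity.

```python
import math
import torch
import torch.nn as nn


class UntiedSelfAttention(nn.Module):
    """Illustrative single-head attention with untied positional scores."""

    def __init__(self, d_model: int, max_len: int = 512):
        super().__init__()
        # Projections for the word (content) correlation.
        self.q_word = nn.Linear(d_model, d_model)
        self.k_word = nn.Linear(d_model, d_model)
        self.v_word = nn.Linear(d_model, d_model)
        # Separate projections for the positional correlation,
        # untied from the word embeddings.
        self.q_pos = nn.Linear(d_model, d_model)
        self.k_pos = nn.Linear(d_model, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        # Joint scaling of the two added score terms (assumed here to be
        # 1 / sqrt(2 * d_model); see the paper for the exact normalization).
        self.scale = 1.0 / math.sqrt(2 * d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) word embeddings only -- positions
        # never enter the input; they only contribute attention scores.
        b, n, _ = x.shape
        pos = self.pos_emb(torch.arange(n, device=x.device))  # (n, d_model)

        # Word-word correlation, as in standard self-attention.
        word_scores = self.q_word(x) @ self.k_word(x).transpose(-1, -2)

        # Position-position correlation, shared across the batch; computed
        # with its own projections, so there is no word-position cross term.
        pos_scores = self.q_pos(pos) @ self.k_pos(pos).transpose(-1, -2)

        attn = torch.softmax((word_scores + pos_scores) * self.scale, dim=-1)
        return attn @ self.v_word(x)
```

In this sketch the positional score matrix depends only on the layer's positional projections, so it can be computed once per layer and broadcast over the batch, which is part of the efficiency argument made in the abstract.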
