Rethinking embedding coupling in pre-trained language models

10/24/2020
by Hyung Won Chung, et al.

We re-evaluate the standard practice of sharing weights between input and output embeddings in state-of-the-art pre-trained language models. We show that decoupled embeddings provide increased modeling flexibility, allowing us to significantly improve the efficiency of parameter allocation in the input embedding of multilingual models. By reallocating the input embedding parameters in the Transformer layers, we achieve dramatically better performance on standard natural language understanding tasks with the same number of parameters during fine-tuning. We also show that allocating additional capacity to the output embedding provides benefits to the model that persist through the fine-tuning stage even though the output embedding is discarded after pre-training. Our analysis shows that larger output embeddings prevent the model's last layers from overspecializing to the pre-training task and encourage Transformer representations to be more general and more transferable to other tasks and languages. Harnessing these findings, we are able to train models that achieve strong performance on the XTREME benchmark without increasing the number of parameters at the fine-tuning stage.
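
The following is a minimal PyTorch sketch, not code from the paper: the class names, the factorized input projection, and all sizes are illustrative assumptions. It contrasts the standard tied setup, where the output projection reuses the input embedding matrix, with a decoupled setup, where a small input embedding frees parameters for the Transformer layers and a separate output embedding can be made large during pre-training and discarded before fine-tuning.

```python
import torch
import torch.nn as nn


class TiedLMHead(nn.Module):
    """Standard practice: the output projection shares weights with the input embedding."""

    def __init__(self, vocab_size, d_model):
        super().__init__()
        self.input_embedding = nn.Embedding(vocab_size, d_model)

    def embed(self, token_ids):
        return self.input_embedding(token_ids)

    def logits(self, hidden_states):
        # Weight sharing: logits = hidden @ E^T, so input and output capacity are coupled.
        return hidden_states @ self.input_embedding.weight.T


class DecoupledLMHead(nn.Module):
    """Decoupled embeddings (illustrative): a small input embedding projected up to the
    model width, plus an independent output embedding that exists only for pre-training."""

    def __init__(self, vocab_size, d_model, d_input_emb):
        super().__init__()
        self.input_embedding = nn.Embedding(vocab_size, d_input_emb)
        self.input_proj = nn.Linear(d_input_emb, d_model, bias=False)
        self.output_embedding = nn.Linear(d_model, vocab_size, bias=False)

    def embed(self, token_ids):
        return self.input_proj(self.input_embedding(token_ids))

    def logits(self, hidden_states):
        # The output embedding can be dropped after pre-training; only the
        # Transformer body and the small input embedding are fine-tuned.
        return self.output_embedding(hidden_states)


if __name__ == "__main__":
    vocab, d_model, d_in = 250_000, 768, 128  # illustrative multilingual-scale sizes
    tied = TiedLMHead(vocab, d_model)
    decoupled = DecoupledLMHead(vocab, d_model, d_in)
    tokens = torch.randint(0, vocab, (2, 16))
    print(tied.logits(tied.embed(tokens)).shape)         # torch.Size([2, 16, 250000])
    print(decoupled.logits(decoupled.embed(tokens)).shape)  # torch.Size([2, 16, 250000])
```

In this sketch the parameters saved by shrinking the input embedding (vocab x 128 instead of vocab x 768) could be reallocated to additional Transformer layers, which is the kind of reallocation the abstract describes.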


Related research

03/18/2023
SPDF: Sparse Pre-training and Dense Fine-tuning for Large Language Models
The pre-training and fine-tuning paradigm has contributed to a number of...

08/07/2023
Towards General Text Embeddings with Multi-stage Contrastive Learning
We present GTE, a general-purpose text embedding model trained with mult...

05/08/2023
A Frustratingly Easy Improvement for Position Embeddings via Random Padding
Position embeddings, encoding the positional relationships among tokens ...

01/21/2021
Distilling Large Language Models into Tiny and Effective Students using pQRNN
Large pre-trained multilingual models like mBERT, XLM-R achieve state of...

05/23/2022
Simple Recurrence Improves Masked Language Models
In this work, we explore whether modeling recurrence into the Transforme...

06/25/2022
Adversarial Self-Attention for Language Understanding
An ultimate language system aims at the high generalization and robustne...

09/15/2023
Frustratingly Simple Memory Efficiency for Pre-trained Language Models via Dynamic Embedding Pruning
The extensive memory footprint of pre-trained language models (PLMs) can...
