How BPE Affects Memorization in Transformers

10/06/2021
by Eugene Kharitonov, et al.

Training data memorization in NLP can be both beneficial (e.g., closed-book QA) and undesirable (personal data extraction). In any case, successful model training requires a non-trivial amount of memorization to store word spellings, various linguistic idiosyncrasies, and common knowledge. However, little is known about what affects the memorization behavior of NLP models, as the field tends to focus on the equally important question of generalization. In this work, we demonstrate that the size of the subword vocabulary learned by Byte-Pair Encoding (BPE) greatly affects both the ability and the tendency of standard Transformer models to memorize training data, even when we control for the number of learned parameters. We find that with a large subword vocabulary size, Transformer models fit random mappings more easily and are more vulnerable to membership inference attacks. Similarly, given a prompt, Transformer-based language models with large subword vocabularies reproduce their training data more often. We conjecture that this effect is caused by the reduction in sequence length that occurs as the BPE vocabulary grows. Our findings allow a more informed choice of hyper-parameters that is better tailored to a particular use case.
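
To make the conjectured mechanism concrete, below is a minimal BPE sketch in pure Python. It is an illustrative re-implementation of standard BPE, not the authors' code, and the toy corpus, test sentence, and merge budgets are assumptions chosen only for illustration. It shows that as the number of learned merges (i.e., the subword vocabulary size) grows, the same text is encoded into fewer, longer tokens.

```python
from collections import Counter

def get_pair_counts(words):
    """Count adjacent symbol pairs, weighted by word frequency."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(words, pair):
    """Replace every occurrence of `pair` with its concatenation."""
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        key = tuple(out)
        merged[key] = merged.get(key, 0) + freq
    return merged

def train_bpe(corpus, num_merges):
    """Learn up to `num_merges` merge operations from a whitespace-split corpus."""
    words = Counter(tuple(w) for line in corpus for w in line.split())
    merges = []
    for _ in range(num_merges):
        pairs = get_pair_counts(words)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        words = merge_pair(words, best)
        merges.append(best)
    return merges

def encode(text, merges):
    """Tokenize `text` by applying the learned merges in training order."""
    tokens = []
    for w in text.split():
        word = {tuple(w): 1}  # a single word, fully character-split
        for pair in merges:
            word = merge_pair(word, pair)
        tokens.extend(next(iter(word)))
    return tokens

# Toy corpus and merge budgets are assumptions, chosen only to show the trend.
corpus = ["the lower the new tower the wider the view"] * 20
text = "the newest tower is wider than the lowest tower"
for num_merges in (0, 10, 50):
    merges = train_bpe(corpus, num_merges)
    print(f"{num_merges:>3} merges -> {len(encode(text, merges)):>2} tokens")
# A larger merge budget (i.e., a larger BPE vocabulary) encodes the same text
# into fewer tokens; this sequence-length reduction is the effect the paper
# conjectures drives the increased memorization.
```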


Related research

Ouroboros: On Accelerating Training of Transformer-Based Language Models (09/14/2019)
Language models are essential for natural language processing (NLP) task...

Understanding How Model Size Affects Few-shot Instruction Prompting (12/04/2022)
Large Language Models are affected by the phenomena of memorizing and fo...

Training Compute-Optimal Large Language Models (03/29/2022)
We investigate the optimal model size and number of tokens for training ...

Amnesiac Machine Learning (10/21/2020)
The Right to be Forgotten is part of the recently enacted General Data P...

Sequence Length is a Domain: Length-based Overfitting in Transformer Models (09/15/2021)
Transformer-based sequence-to-sequence architectures, while achieving st...

CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation (03/11/2021)
Pipelined NLP systems have largely been superseded by end-to-end neural ...

Updater-Extractor Architecture for Inductive World State Representations (04/12/2021)
Developing NLP models traditionally involves two stages - training and a...
