
Training a Tokenizer for Free with Private Federated Learning

by Eugene Bagdasaryan et al.

Federated learning with differential privacy, i.e. private federated learning (PFL), makes it possible to train models on private data distributed across users' devices without harming privacy. PFL is efficient for models, such as neural networks, that have a fixed number of parameters, and thus a fixed-dimensional gradient vector. Such models include neural-net language models, but not tokenizers, the topic of this work. Training a tokenizer requires frequencies of words from an unlimited vocabulary, and existing methods for finding an unlimited vocabulary need a separate privacy budget. A workaround is to train the tokenizer on publicly available data. However, in this paper we first show that a tokenizer trained on mismatched data results in worse model performance compared to a privacy-violating "oracle" tokenizer that accesses user data, with perplexity increasing by 20%. We also show that sub-word tokenizers are better suited to the federated context than word-level ones, since they can encode new words, though with more tokens per word. Second, we propose a novel method to obtain a tokenizer without using any additional privacy budget. During private federated learning of the language model, we sample from the model, train a new tokenizer on the sampled sequences, and update the model embeddings. We then continue private federated learning, and obtain performance within 1% of the "oracle" tokenizer. Since this process trains the tokenizer only indirectly on private data, we can use the "postprocessing guarantee" of differential privacy and thus use no additional privacy budget.
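The mid-training tokenizer swap described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `sample_from_model` stands in for sampling from the partially trained neural language model, the frequency-based tokenizer is a toy substitute for a real sub-word tokenizer, and the embedding remap is a simplified version of updating the model's embedding table.

```python
import random
from collections import Counter

def sample_from_model(n_sequences, rng):
    """Stand-in for sampling text from the partially trained LM.
    In the real method, sequences are generated by the model itself,
    so the DP post-processing guarantee applies and no extra privacy
    budget is spent on the tokenizer."""
    words = ["hello", "world", "private", "federated", "tokenizer"]
    return [" ".join(rng.choices(words, k=5)) for _ in range(n_sequences)]

def train_tokenizer(samples, vocab_size):
    """Toy frequency-based tokenizer trained only on model samples."""
    counts = Counter(w for s in samples for w in s.split())
    vocab = ["<unk>"] + [w for w, _ in counts.most_common(vocab_size - 1)]
    return {w: i for i, w in enumerate(vocab)}

def remap_embeddings(old_emb, old_vocab, new_vocab, dim):
    """Carry over embedding vectors for tokens shared between the old and
    new vocabularies; initialize newly introduced tokens to zeros."""
    return {
        idx: old_emb[old_vocab[tok]] if tok in old_vocab else [0.0] * dim
        for tok, idx in new_vocab.items()
    }

# 1. (PFL rounds of language-model training happen here, omitted.)
# 2. Sample from the model and train a new tokenizer on the samples.
rng = random.Random(0)
samples = sample_from_model(100, rng)
new_vocab = train_tokenizer(samples, vocab_size=6)

# 3. Update the model's embeddings to match the new vocabulary,
#    then resume private federated learning.
old_vocab = {"hello": 0, "world": 1}          # illustrative old vocabulary
old_emb = {0: [0.1] * 4, 1: [0.2] * 4}        # illustrative old embeddings
new_emb = remap_embeddings(old_emb, old_vocab, new_vocab, dim=4)
```

Because the tokenizer sees only model-generated text, never user data directly, it inherits the privacy guarantee of the already-trained model at no additional privacy cost.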


