Training a Tokenizer for Free with Private Federated Learning

03/15/2022
by Eugene Bagdasaryan, et al.

Federated learning with differential privacy, i.e. private federated learning (PFL), makes it possible to train models on private data distributed across users' devices without harming privacy. PFL is efficient for models, such as neural networks, that have a fixed number of parameters, and thus a fixed-dimensional gradient vector. Such models include neural-net language models, but not tokenizers, the topic of this work. Training a tokenizer requires frequencies of words from an unlimited vocabulary, and existing methods for finding an unlimited vocabulary need a separate privacy budget. A workaround is to train the tokenizer on publicly available data. However, in this paper we first show that a tokenizer trained on mismatched data results in worse model performance compared to a privacy-violating "oracle" tokenizer that accesses user data, with perplexity increasing by 20%. We also show that sub-word tokenizers are better suited to the federated context than word-level ones, since they can encode new words, though with more tokens per word. Second, we propose a novel method to obtain a tokenizer without using any additional privacy budget. During private federated learning of the language model, we sample from the model, train a new tokenizer on the sampled sequences, and update the model embeddings. We then continue private federated learning, and obtain performance within 1% of the "oracle" tokenizer. Since this process trains the tokenizer only indirectly on private data, we can use the "postprocessing guarantee" of differential privacy and thus use no additional privacy budget.
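
As a rough illustration of the tokenizer-refresh step described above (a sketch, not the authors' implementation), the Python code below samples text from the partially trained language model, fits a fresh sub-word (BPE) tokenizer on those samples using the Hugging Face tokenizers library, and updates the model embeddings before private federated learning resumes. The functions sample_from_model and reinitialize_embeddings are hypothetical placeholders for whatever model and PFL framework is in use.

# Sketch of the tokenizer-refresh step, assuming a partially trained LM
# produced by private federated learning and a framework that can resize
# the model's embedding tables.

from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer


def sample_from_model(model, num_samples: int = 10_000) -> list[str]:
    """Hypothetical: autoregressively sample text sequences from the language
    model trained so far with private federated learning."""
    raise NotImplementedError


def reinitialize_embeddings(model, tokenizer: Tokenizer) -> None:
    """Hypothetical: resize and re-map the model's input and output embeddings
    to the new tokenizer's vocabulary before PFL continues."""
    raise NotImplementedError


def refresh_tokenizer(model, vocab_size: int = 8_000) -> Tokenizer:
    # 1. Sample sequences from the model. Because these samples are derived
    #    from the DP-trained model, the post-processing guarantee means no
    #    additional privacy budget is spent.
    sampled_texts = sample_from_model(model)

    # 2. Train a new sub-word (BPE) tokenizer on the sampled sequences only.
    tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
    tokenizer.pre_tokenizer = Whitespace()
    trainer = BpeTrainer(vocab_size=vocab_size, special_tokens=["[UNK]", "[PAD]"])
    tokenizer.train_from_iterator(sampled_texts, trainer=trainer)

    # 3. Update the model embeddings to match the new vocabulary, then resume
    #    private federated learning with the refreshed tokenizer.
    reinitialize_embeddings(model, tokenizer)
    return tokenizer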


