Training Production Language Models without Memorizing User Data

09/21/2020
by Swaroop Ramaswamy, et al.

This paper presents the first consumer-scale next-word prediction (NWP) model trained with Federated Learning (FL) using the Differentially Private Federated Averaging (DP-FedAvg) technique. Prior work has built practical FL infrastructure, including demonstrations that language models can be trained on mobile devices with such infrastructure. It has also been shown, in simulations on a public corpus, that NWP models can be trained with user-level differential privacy using the DP-FedAvg algorithm. Nevertheless, training production-quality NWP models with DP-FedAvg in a real-world production environment on a heterogeneous fleet of mobile phones requires addressing numerous challenges: for instance, the coordinating central server must track the devices available at the start of each round, sample devices uniformly at random from them, and keep the sampled set secret. Unlike all prior privacy-focused FL work of which we are aware, we demonstrate for the first time the deployment of a differentially private mechanism for training a production neural network in FL, as well as the instrumentation of the production training infrastructure to perform an end-to-end empirical measurement of unintended memorization.
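At its core, DP-FedAvg bounds each user's influence by clipping their model update and adding Gaussian noise to the aggregate before updating the global model. The sketch below illustrates a single round under stated assumptions; the names `client_update`, `clip_norm`, and `noise_multiplier` are illustrative and not taken from the paper's production implementation.

```python
# Minimal sketch of one DP-FedAvg round (illustrative, not the paper's code).
# Assumes `client_update(client, weights)` returns that user's model delta as
# a NumPy array of the same shape as the global weights.
import numpy as np

def dp_fedavg_round(global_weights, sampled_clients, client_update,
                    clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Aggregate clipped per-user deltas and add Gaussian noise (user-level DP)."""
    rng = rng or np.random.default_rng()
    total = np.zeros_like(global_weights)
    for client in sampled_clients:
        delta = client_update(client, global_weights)            # local training delta
        norm = np.linalg.norm(delta)
        delta = delta * min(1.0, clip_norm / max(norm, 1e-12))   # clip to bound sensitivity
        total += delta
    # Noise calibrated to the clipping norm, so each user's contribution is masked.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    avg_delta = (total + noise) / len(sampled_clients)
    return global_weights + avg_delta
```

In the DP-FedAvg literature, the overall (epsilon, delta) guarantee also depends on the per-round sampling fraction and the total number of rounds, tracked across training with a privacy accountant; the sketch above shows only the per-round clipping and noising step.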
