Federated Learning for Emoji Prediction in a Mobile Keyboard

by Swaroop Ramaswamy et al.

We show that a word-level recurrent neural network can predict emoji from text typed on a mobile keyboard. We demonstrate the usefulness of transfer learning for predicting emoji by pretraining the model using a language modeling task. We also propose mechanisms to trigger emoji and tune the diversity of candidates. The model is trained using a distributed on-device learning framework called federated learning. The federated model is shown to achieve better performance than a server-trained model. This work demonstrates the feasibility of using federated learning to train production-quality models for natural language understanding tasks while keeping users' data on their devices.
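The paper's training setup rests on federated averaging: each device computes a local model update on its own keyboard data, and a server aggregates the updates weighted by how many examples each client contributed. The sketch below is a minimal illustration of that aggregation rule, not the paper's implementation; the model is reduced to a plain weight vector (the actual system trains a word-level recurrent network on-device), and all function names are illustrative.

```python
# Minimal federated averaging (FedAvg) sketch. A real deployment trains
# an RNN on each device; here the "model" is just a NumPy weight vector.
import numpy as np

def local_update(weights, grad, lr=0.1):
    # One step of local SGD on a client; `grad` stands in for the
    # gradient computed from that client's on-device data.
    return weights - lr * grad

def federated_average(client_weights, client_sizes):
    # Server-side aggregation: average the client models, weighting
    # each by the number of local training examples it saw.
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)
    return (stacked * sizes[:, None]).sum(axis=0) / sizes.sum()

# One simulated round with three clients of unequal data sizes.
global_w = np.zeros(4)
grads = [np.ones(4), 2 * np.ones(4), 4 * np.ones(4)]
sizes = [10, 20, 10]
local_models = [local_update(global_w, g) for g in grads]
new_global = federated_average(local_models, sizes)
print(new_global)  # weighted mean of the local models
```

Because only model updates leave the device, the raw typed text stays local, which is the privacy property the abstract emphasizes.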



Related research
Federated Learning for Mobile Keyboard Prediction

We train a recurrent neural network language model using a distributed, ...

Pretraining Federated Text Models for Next Word Prediction

Federated learning is a decentralized approach for training models on di...

Two Models are Better than One: Federated Learning Is Not Private For Google GBoard Next Word Prediction

In this paper we present new attacks against federated learning when use...

Turn Signal Prediction: A Federated Learning Case Study

Driving etiquette takes a different flavor for each locality as drivers ...

Federated Learning of N-gram Language Models

We propose algorithms to train production-quality n-gram language models...

Training Keyword Spotting Models on Non-IID Data with Federated Learning

We demonstrate that a production-quality keyword-spotting model can be t...

Federated Learning Of Out-Of-Vocabulary Words

We demonstrate that a character-level recurrent neural network is able t...
