Federated Learning for Emoji Prediction in a Mobile Keyboard

06/11/2019
by Swaroop Ramaswamy, et al.

We show that a word-level recurrent neural network can predict emoji from text typed on a mobile keyboard. We demonstrate the usefulness of transfer learning for predicting emoji by pretraining the model using a language modeling task. We also propose mechanisms to trigger emoji and tune the diversity of candidates. The model is trained using a distributed on-device learning framework called federated learning. The federated model is shown to achieve better performance than a server-trained model. This work demonstrates the feasibility of using federated learning to train production-quality models for natural language understanding tasks while keeping users' data on their devices.


Related research:

- Federated Learning for Mobile Keyboard Prediction (11/08/2018) — We train a recurrent neural network language model using a distributed, ...
- Pretraining Federated Text Models for Next Word Prediction (05/11/2020) — Federated learning is a decentralized approach for training models on di...
- Two Models are Better than One: Federated Learning Is Not Private For Google GBoard Next Word Prediction (10/30/2022) — In this paper we present new attacks against federated learning when use...
- Turn Signal Prediction: A Federated Learning Case Study (12/22/2020) — Driving etiquette takes a different flavor for each locality as drivers ...
- Federated Learning of N-gram Language Models (10/08/2019) — We propose algorithms to train production-quality n-gram language models...
- Training Keyword Spotting Models on Non-IID Data with Federated Learning (05/21/2020) — We demonstrate that a production-quality keyword-spotting model can be t...
- Federated Learning Of Out-Of-Vocabulary Words (03/26/2019) — We demonstrate that a character-level recurrent neural network is able t...
