Efficient Natural Language Response Suggestion for Smart Reply

05/01/2017
by Matthew Henderson, et al.

This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
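The core idea, a dual-encoder that scores candidate responses by the dot product between a message vector and response vectors built from averaged n-gram embeddings, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the hashing scheme, embedding dimension, and bucket count are assumptions, the embeddings here are random rather than trained, and the real system precomputes response vectors and uses an optimized search instead of scoring every candidate.

```python
import numpy as np

EMBED_DIM = 64       # illustrative; the paper's dimensions differ
NUM_BUCKETS = 10_000 # hashed n-gram vocabulary size (assumed)

# Random embeddings stand in for trained ones; in the actual system these
# weights are optimized so that true message-response pairs score high.
rng = np.random.default_rng(0)
embedding_table = rng.standard_normal((NUM_BUCKETS, EMBED_DIM))

def ngrams(text, n=2):
    """Unigrams plus word bigrams of the lowercased text."""
    words = text.lower().split()
    return words + [" ".join(words[i:i + n])
                    for i in range(len(words) - n + 1)]

def encode(text):
    """Encode text as the mean embedding of its hashed n-grams."""
    # Builtin hash() is fine for a sketch; a production system would use
    # a stable feature-hashing function.
    ids = [hash(g) % NUM_BUCKETS for g in ngrams(text)]
    return embedding_table[ids].mean(axis=0)

def suggest(message, candidate_responses):
    """Rank candidates by dot-product score against the message vector."""
    m = encode(message)
    scores = [float(np.dot(m, encode(r))) for r in candidate_responses]
    order = np.argsort(scores)[::-1]
    return [candidate_responses[i] for i in order]

print(suggest("are you free for lunch today",
              ["Sure, see you then!", "Happy birthday!", "I can make it."]))
```

Because response vectors do not depend on the incoming message, they can be computed once offline; at serving time only the message is encoded and the top responses are retrieved by (approximate) maximum dot product, which is what makes the approach so much cheaper than running a sequence-to-sequence decoder per message.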


Related research

- 08/01/2017, "Natural Language Processing with Small Feed-Forward Networks": We show that small and shallow feed-forward neural networks can achieve ...
- 07/09/2019, "Sequence-to-Sequence Natural Language to Humanoid Robot Sign Language": This paper presents a study on natural language to sign language transla...
- 06/29/2017, "Talking Drums: Generating drum grooves with neural networks": Presented is a method of generating a full drum kit part for a provided ...
- 09/18/2020, "Hardware Accelerator for Multi-Head Attention and Position-Wise Feed-Forward in the Transformer": Designing hardware accelerators for deep neural networks (DNNs) has been...
- 06/17/2016, "Sequence-to-Sequence Generation for Spoken Dialogue via Deep Syntax Trees and Strings": We present a natural language generator based on the sequence-to-sequenc...
- 11/08/2020, "Best Practices for Data-Efficient Modeling in NLG: How to Train Production-Ready Neural Models with Less Data": Natural language generation (NLG) is a critical component in conversatio...
- 02/12/2020, "GLU Variants Improve Transformer": Gated Linear Units (arXiv:1612.08083) consist of the component-wise prod...
