RNN Approaches to Text Normalization: A Challenge

10/31/2016 ∙ by Richard Sproat, et al.

This paper presents a challenge to the community: given a large corpus of written text aligned to its normalized spoken form, train an RNN to learn the correct normalization function. We present a data set of general text where the normalizations were generated using an existing text normalization component of a text-to-speech system. This data set will be released open-source in the near future. We also present our own experiments with this data set using a variety of RNN architectures. While some of the architectures do in fact produce very good results when measured in terms of overall accuracy, the errors that are produced are problematic, since they would convey completely the wrong message if such a system were deployed in a speech application. On the other hand, we show that a simple FST-based filter can mitigate those errors, and achieve a level of accuracy not achievable by the RNN alone. Though our conclusions are largely negative on this point, we are not arguing that the text normalization problem is intractable using a pure RNN approach, only that it is not something that can be solved merely by feeding huge amounts of annotated text data to a general RNN model. When we open-source our data, we will be providing a novel data set for sequence-to-sequence modeling in the hope that the community can find better solutions. The data used in this work have been released and are available at: https://github.com/rwsproat/text-normalization-data
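For readers who want to inspect the released data, the Python sketch below shows one way to read it. It is based on assumptions drawn from the repository's description rather than on the paper itself: that each line is tab-separated into semiotic class, written token, and spoken normalization; that lines beginning with "<eos>" mark sentence boundaries; and that "<self>" denotes a pass-through token while "sil" denotes silence (e.g. punctuation). The file name used in the example is a placeholder.

# Minimal sketch for reading the released data under the assumptions stated above.
def read_sentences(path):
    """Yield (written_tokens, spoken_tokens) pairs, one sentence at a time."""
    written, spoken = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            fields = line.rstrip("\n").split("\t")
            if fields[0] == "<eos>":          # assumed sentence-boundary marker
                if written:
                    yield written, spoken
                written, spoken = [], []
                continue
            if len(fields) < 3:
                continue                      # skip malformed lines
            _cls, token, norm = fields[0], fields[1], fields[2]
            written.append(token)
            if norm == "<self>":              # token read out as written
                spoken.append(token)
            elif norm == "sil":               # silence, e.g. punctuation
                continue
            else:
                spoken.append(norm)
    if written:
        yield written, spoken

if __name__ == "__main__":
    # Placeholder file name; substitute one of the released data shards.
    for written, spoken in read_sentences("output-00000-of-00100"):
        print(" ".join(written), "->", " ".join(spoken))
        break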

Code Repositories

text-normalization-data

Links to data used in Sproat & Jaitly (https://arxiv.org/abs/1611.00068) experiments.


Keras-LSTM-Text-Normalization

Deep learning for text has advanced considerably over the last couple of years. This repository grew out of the ongoing Kaggle contest hosted by Google researchers and attempts text normalization using recurrent neural networks, mostly LSTMs.
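For orientation, a minimal character-level encoder-decoder LSTM in Keras might look like the sketch below. It is illustrative only and is not taken from this repository; the vocabulary sizes, dimensions, and the absence of a data pipeline are all placeholders.

# Minimal encoder-decoder sketch; shapes and sizes are assumptions, not the repo's actual code.
from tensorflow import keras
from tensorflow.keras import layers

num_input_chars = 128      # assumed written-side character vocabulary size
num_output_chars = 128     # assumed spoken-side character vocabulary size
latent_dim = 256

# Encoder: reads the written (one-hot encoded) character sequence.
encoder_inputs = keras.Input(shape=(None, num_input_chars))
_, state_h, state_c = layers.LSTM(latent_dim, return_state=True)(encoder_inputs)

# Decoder: generates the spoken form, conditioned on the encoder's final state.
decoder_inputs = keras.Input(shape=(None, num_output_chars))
decoder_outputs = layers.LSTM(latent_dim, return_sequences=True)(
    decoder_inputs, initial_state=[state_h, state_c])
decoder_outputs = layers.Dense(num_output_chars, activation="softmax")(decoder_outputs)

model = keras.Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()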


text-normalization-token-classes

An example project for the Cloud Training tool.

