Sequence-to-sequence neural network models for transliteration

10/29/2016
by Mihaela Rosca et al.

Transliteration is a key component of machine translation systems and software internationalization. This paper demonstrates that neural sequence-to-sequence models achieve state-of-the-art or near-state-of-the-art results on existing datasets. In an effort to make machine transliteration accessible, we open-source a new Arabic-to-English transliteration dataset and our trained models.
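The abstract above does not show the model itself, so the following is only a rough sketch of the kind of character-level sequence-to-sequence model it describes. The class name, the GRU encoder-decoder choice, and all layer sizes are illustrative assumptions, not the authors' actual architecture.

import torch
import torch.nn as nn

class Seq2SeqTransliterator(nn.Module):
    # Character-level encoder-decoder: reads source-script characters
    # (e.g. Arabic) and emits target-script characters (e.g. Latin).
    # Hypothetical name and hyperparameters, for illustration only.
    def __init__(self, src_vocab, tgt_vocab, emb_dim=64, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Encode the source word into a final hidden state.
        _, h = self.encoder(self.src_emb(src_ids))
        # Decode with teacher forcing, conditioned on that state.
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), h)
        return self.proj(dec_out)  # logits over target characters

# Dummy batch: 8 source words of 12 characters, targets of 10 characters.
model = Seq2SeqTransliterator(src_vocab=60, tgt_vocab=30)
src = torch.randint(0, 60, (8, 12))
tgt = torch.randint(0, 30, (8, 10))
logits = model(src, tgt[:, :-1])  # predict each next target character
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 30), tgt[:, 1:].reshape(-1))

Training would minimize this cross-entropy over (source word, transliteration) pairs; at inference the decoder would instead be run greedily or with beam search, one character at a time.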


Related research

03/19/2016 · Tree-to-Sequence Attentional Neural Machine Translation
Most of the existing Neural Machine Translation (NMT) models focus on th...

04/25/2018 · Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models
Neural Sequence-to-Sequence models have proven to be accurate and robust...

11/14/2017 · Classical Structured Prediction Losses for Sequence to Sequence Learning
There has been much recent work on training neural attention models at t...

11/14/2018 · Predicting the time-evolution of multi-physics systems with sequence-to-sequence models
In this work, sequence-to-sequence (seq2seq) models, originally develope...

05/30/2019 · Interactive-predictive neural multimodal systems
Despite the advances achieved by neural models in sequence to sequence l...

06/22/2015 · A Deep Memory-based Architecture for Sequence-to-Sequence Learning
We propose DEEPMEMORY, a novel deep architecture for sequence-to-sequenc...

05/29/2018 · Can DNNs Learn to Lipread Full Sentences?
Finding visual features and suitable models for lipreading tasks that ar...
