How Much Does Tokenization Affect Neural Machine Translation?

12/20/2018
by   Miguel Domingo, et al.

Tokenization, or segmentation, is a broad concept that covers simple processes, such as separating punctuation from words, as well as more sophisticated ones, such as applying morphological knowledge. Neural Machine Translation (NMT) requires a limited-size vocabulary to keep computational costs manageable, and enough examples of each word to estimate its embedding. Separating punctuation and splitting tokens into subwords has proven helpful for reducing vocabulary size and increasing the number of examples per word, thereby improving translation quality. Tokenization is more challenging for languages that do not use a separator between words. To assess the impact of tokenization on the quality of the final NMT translation, we experimented with five tokenizers over ten language pairs. We concluded that tokenization significantly affects the final translation quality and that the best tokenizer differs across language pairs.
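The subword splitting the abstract refers to is typically done with a frequency-based scheme such as byte-pair encoding (BPE). As a rough illustration of the idea (a toy sketch, not the tokenizers evaluated in the paper), the following Python snippet learns merge operations by repeatedly fusing the most frequent adjacent symbol pair, so frequent character sequences become single vocabulary units:

```python
from collections import Counter

def merge_word(symbols, pair):
    """Fuse every occurrence of `pair` in a symbol sequence."""
    out, i = [], 0
    while i < len(symbols):
        if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == pair:
            out.append(symbols[i] + symbols[i + 1])
            i += 2
        else:
            out.append(symbols[i])
            i += 1
    return out

def learn_bpe(words, num_merges):
    """Toy BPE: `words` are space-separated symbol sequences,
    e.g. 'l o w'. Returns the learned merges and final vocab."""
    vocab = Counter(words)
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in vocab.items():
            symbols = word.split()
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        vocab = Counter({
            " ".join(merge_word(word.split(), best)): freq
            for word, freq in vocab.items()
        })
    return merges, vocab

merges, vocab = learn_bpe(["l o w", "l o w e r", "l o w e s t"], 2)
```

With two merges over this toy corpus, the learner first fuses `l`+`o` and then `lo`+`w`, so `low` becomes a single subword shared by all three surface forms. Real BPE implementations (e.g. subword-nmt or SentencePiece) work the same way at scale, which is how they shrink the vocabulary while keeping per-unit frequency high.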


