Local Byte Fusion for Neural Machine Translation

Subword tokenization schemes are the dominant technique used in current NLP models. However, such schemes can be rigid, and tokenizers built on one corpus do not adapt well to other parallel corpora. It has also been observed that in multilingual corpora, subword tokenization schemes over-segment low-resource languages, leading to a drop in translation performance. A simple alternative to subword tokenizers is byte-based methods, i.e., tokenization into byte sequences using encoding schemes such as UTF-8. Byte tokens often represent inputs at a sub-character granularity, i.e., one character can be represented by a sequence of multiple byte tokens. This results in byte sequences that are significantly longer than character sequences. Enforcing aggregation of local information in the lower layers can guide the model to build higher-level semantic information. We propose a Local Byte Fusion (LOBEF) method for byte-based machine translation that utilizes byte n-gram and word boundaries to aggregate local semantic information. Extensive experiments on multilingual translation, zero-shot cross-lingual transfer, and domain adaptation reveal a consistent improvement over traditional byte-based models and even over subword techniques. Further analysis also indicates that our byte-based models are parameter-efficient and can be trained faster than subword models.
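The length blow-up described above is easy to see in practice. The following minimal sketch (plain Python, not the paper's code) contrasts character tokenization with UTF-8 byte tokenization on a non-Latin script, where each character expands to several byte tokens:

```python
# Compare character-level and UTF-8 byte-level tokenization.
# Non-Latin scripts expand the most: each Devanagari character
# occupies 3 bytes in UTF-8, so the byte sequence is 3x longer.
text = "नमस्ते"  # Hindi greeting, 6 Unicode code points

char_tokens = list(text)                 # one token per character
byte_tokens = list(text.encode("utf-8"))  # one token per UTF-8 byte

print(len(char_tokens))  # 6 character tokens
print(len(byte_tokens))  # 18 byte tokens (3 bytes per character)
```

A byte-based model must therefore process sequences several times longer than a character-based one, which is why fusing local byte spans (n-grams or word-delimited groups) in the lower layers is attractive.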
