
Towards Character-Level Transformer NMT by Finetuning Subword Systems

by Jindřich Libovický et al.

Applying the Transformer architecture at the character level usually requires very deep models that are difficult and slow to train. A few approaches have been proposed that partially overcome this problem by using an explicit segmentation into tokens. We show that by first training a subword model based on such a segmentation and then finetuning it on characters, we can obtain a neural machine translation model that works at the character level without requiring any segmentation. Without changing the vanilla 6-layer Transformer Base architecture, we train purely character-level models. Our character-level models better capture morphological phenomena and show much higher robustness to source-side noise, at the expense of somewhat lower overall translation quality. Our study is a significant step towards high-performance character-based models that are not extremely large.
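The key ingredient of the approach described above is switching the same model from subword tokens to character tokens. A minimal, illustrative sketch of that switch (not the authors' code; the boundary symbol and special tokens are assumptions, loosely following SentencePiece conventions) might look like this:

```python
# Illustrative sketch: re-tokenizing a corpus at the character level so that
# a model pretrained on subwords can be finetuned on characters.
# The "▁" word-boundary marker and special tokens are assumptions here.

def to_characters(sentence, space_symbol="▁"):
    """Split a sentence into character tokens, marking each word
    boundary with an explicit symbol."""
    tokens = []
    for word in sentence.split():
        tokens.append(space_symbol)  # word-boundary marker
        tokens.extend(word)          # one token per character
    return tokens

def build_char_vocab(corpus, specials=("<pad>", "<s>", "</s>", "<unk>")):
    """Collect the (small) character vocabulary from a corpus,
    reserving the first indices for special tokens."""
    chars = sorted({c for sent in corpus for c in to_characters(sent)})
    return {tok: i for i, tok in enumerate(list(specials) + chars)}

corpus = ["hello world", "translate this"]
vocab = build_char_vocab(corpus)
print(to_characters("hello world"))
# a character vocabulary is tiny compared to a typical 32k subword vocabulary
print(len(vocab))
```

Because the character vocabulary is a small subset of what a subword vocabulary covers, the rest of the Transformer (its layers and dimensions) can stay unchanged during finetuning; only the token granularity of the input and output sequences differs.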


Character-level Transformer-based Neural Machine Translation

Neural machine translation (NMT) is nowadays commonly applied at the sub...

Character-based NMT with Transformer

Character-based translation has several appealing advantages, but its pe...

SCIMAT: Science and Mathematics Dataset

In this work, we announce a comprehensive, well-curated, and open-source da...

On Target Segmentation for Direct Speech Translation

Recent studies on direct speech translation show continuous improvements...

Patching Leaks in the Charformer for Efficient Character-Level Generation

Character-based representations have important advantages over subword-b...

Why don't people use character-level machine translation?

We present a literature and empirical survey that critically assesses th...