Multi-Unit Transformers for Neural Machine Translation

10/21/2020
by Jianhao Yan, et al.

Transformer models achieve remarkable success in Neural Machine Translation. Many efforts have been devoted to deepening the Transformer by stacking several units (i.e., combinations of Multi-Head Attention and FFN) in a cascade, while the investigation of multiple parallel units has drawn little attention. In this paper, we propose the Multi-Unit Transformers (MUTE), which aim to promote the expressiveness of the Transformer by introducing diverse and complementary units. Specifically, we use several parallel units and show that modeling with multiple units improves model performance and introduces diversity. Further, to better leverage the advantage of the multi-unit setting, we design a biased module and a sequential dependency that guide and encourage complementariness among the different units. Experimental results on three machine translation tasks, the NIST Chinese-to-English, WMT'14 English-to-German and WMT'18 Chinese-to-English, show that the MUTE models significantly outperform the Transformer-Base, by up to +1.52, +1.90 and +1.10 BLEU points, with only a mild drop in inference speed (about 3.1%), and also surpass the Transformer-Big model with only 54% of its parameters. These results demonstrate the effectiveness of the MUTE, as well as its efficiency in both the inference process and parameter usage.
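To make the multi-unit idea concrete, below is a minimal sketch (not the authors' implementation) of a layer built from several parallel units, each a multi-head attention plus FFN block, whose outputs are merged. The use of PyTorch, the simple output averaging, and all class and parameter names here are illustrative assumptions; the paper's biased module and sequential dependency are not reproduced.

# Minimal illustrative sketch of parallel multi-unit modeling (assumed PyTorch API,
# averaging as the merge; not the paper's exact architecture).
import torch
import torch.nn as nn


class TransformerUnit(nn.Module):
    """One unit: multi-head self-attention followed by a feed-forward network."""

    def __init__(self, d_model: int = 512, n_heads: int = 8, d_ff: int = 2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + attn_out)         # residual connection + layer norm
        return self.norm2(x + self.ffn(x))   # residual connection + layer norm


class MultiUnitLayer(nn.Module):
    """Runs several units in parallel on the same input and averages their outputs."""

    def __init__(self, n_units: int = 3, d_model: int = 512):
        super().__init__()
        self.units = nn.ModuleList([TransformerUnit(d_model) for _ in range(n_units)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each unit processes the same input; averaging is one simple way to combine
        # them (the paper's guided combination is more elaborate).
        return torch.stack([unit(x) for unit in self.units], dim=0).mean(dim=0)


if __name__ == "__main__":
    layer = MultiUnitLayer(n_units=3, d_model=512)
    src = torch.randn(2, 10, 512)   # (batch, sequence length, d_model)
    print(layer(src).shape)         # torch.Size([2, 10, 512])

Because the units run in parallel on the same input rather than being stacked in a cascade, widening the model this way adds parameters without adding depth, which is consistent with the mild inference-speed cost reported in the abstract.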

Related research

02/16/2020  Neural Machine Translation with Joint Representation
Though early successes of Statistical Machine Translation (SMT) systems ...

06/04/2021  Scalable Transformers for Neural Machine Translation
Transformer has been widely adopted in Neural Machine Translation (NMT) ...

05/10/2023  Multi-Path Transformer is Better: A Case Study on Neural Machine Translation
For years the model performance in machine learning obeyed a power-law r...

09/11/2023  Combinative Cumulative Knowledge Processes
We analyze Cumulative Knowledge Processes, introduced by Ben-Eliezer, Mi...

06/05/2019  Learning Deep Transformer Models for Machine Translation
Transformer is the state-of-the-art model in recent machine translation ...

02/16/2020  Multi-layer Representation Fusion for Neural Machine Translation
Neural machine translation systems require a number of stacked layers fo...

10/30/2018  Simplifying Neural Machine Translation with Addition-Subtraction Twin-Gated Recurrent Networks
In this paper, we propose an addition-subtraction twin-gated recurrent ne...
