On Compositional Generalization of Neural Machine Translation

05/31/2021
by Yafu Li, et al.

Modern neural machine translation (NMT) models achieve competitive performance on standard benchmarks such as WMT. However, significant issues remain, including limited robustness and poor domain generalization. In this paper, we study NMT models from the perspective of compositional generalization by building a benchmark dataset, CoGnition, consisting of 216k clean and consistent sentence pairs. Using the compound translation error rate, we quantitatively analyze the effects of various factors and show that the NMT model fails badly at compositional generalization, even though it scores remarkably well under traditional metrics.
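The compound translation error rate is defined over novel compounds embedded in multiple contexts. As a rough illustration of how such a metric could be computed, here is a minimal Python sketch; the function name compound_error_rates, the exact-match correctness criterion, and the toy examples are all assumptions made for illustration, not the paper's actual implementation.

```python
# Illustrative sketch of a compound-translation-error-rate style metric.
# Assumption: each sample carries a gold translation of its compound, and a
# sample counts as an error when that span is absent from the model output.
from collections import defaultdict

def compound_error_rates(samples):
    """samples: iterable of (compound_id, gold_compound_translation, hypothesis).

    Returns (instance_rate, aggregate_rate): the instance rate is the fraction
    of individual samples whose compound is mistranslated; the aggregate rate
    is the fraction of compounds mistranslated in at least one context.
    """
    errors_by_compound = defaultdict(list)
    for compound_id, gold_span, hypothesis in samples:
        # Hypothetical criterion: the compound is correct only if its gold
        # translation appears verbatim in the hypothesis.
        errors_by_compound[compound_id].append(gold_span not in hypothesis)

    all_errors = [e for errs in errors_by_compound.values() for e in errs]
    instance_rate = sum(all_errors) / len(all_errors)
    aggregate_rate = (sum(any(errs) for errs in errors_by_compound.values())
                      / len(errors_by_compound))
    return instance_rate, aggregate_rate

if __name__ == "__main__":
    # Toy English-to-German samples: the same compound in different contexts.
    toy = [
        ("red_car", "rotes Auto", "Er sah ein rotes Auto."),
        ("red_car", "rotes Auto", "Er kaufte einen roten Wagen."),  # error
        ("lost_key", "verlorenen Schluessel", "Sie fand den Schluessel."),  # error
    ]
    inst, agg = compound_error_rates(toy)
    print(f"instance error rate: {inst:.2f}, aggregate error rate: {agg:.2f}")
```

On the toy data this prints an instance error rate of 0.67 (two of three samples mistranslated) and an aggregate error rate of 1.00 (both compounds fail in at least one context), illustrating why aggregate rates are the stricter measure.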

Related research

07/29/2022 · Benchmarking Azerbaijani Neural Machine Translation
Little research has been done on Neural Machine Translation (NMT) for Az...

10/13/2022 · Categorizing Semantic Representations for Neural Machine Translation
Modern neural machine translation (NMT) models have achieved competitive...

12/08/2020 · Revisiting Iterative Back-Translation from the Perspective of Compositional Generalization
Human intelligence exhibits compositional generalization (i.e., the capa...

04/05/2020 · Detecting and Understanding Generalization Barriers for Neural Machine Translation
Generalization to unseen instances is our eternal pursuit for all data-d...

05/25/2020 · The Unreasonable Volatility of Neural Machine Translation Models
Recent works have shown that Neural Machine Translation (NMT) models ach...

05/04/2020 · Evaluating Explanation Methods for Neural Machine Translation
Recently many efforts have been devoted to interpreting the black-box NM...

11/04/2020 · PheMT: A Phenomenon-wise Dataset for Machine Translation Robustness on User-Generated Contents
Neural Machine Translation (NMT) has shown drastic improvement in its qu...
