Benchmarking Azerbaijani Neural Machine Translation

07/29/2022
by   Chih-Chen Chen, et al.

Little research has been done on Neural Machine Translation (NMT) for Azerbaijani. In this paper, we benchmark the performance of Azerbaijani-English NMT systems across a range of techniques and datasets. We evaluate which segmentation techniques work best for Azerbaijani translation and benchmark the performance of Azerbaijani NMT models across several domains of text. Our results show that while Unigram segmentation improves NMT performance and Azerbaijani translation models scale better with dataset quality than quantity, cross-domain generalization remains a challenge.
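Unigram segmentation scores each candidate split of a word under a unigram language model over subword pieces and keeps the most probable one; the usual decoding step is a Viterbi search. The sketch below illustrates that decoding step in plain Python, using a tiny hand-set vocabulary with made-up probabilities (not a trained model, and not the paper's actual implementation):

```python
import math

def unigram_segment(word, vocab):
    """Viterbi search for the most probable segmentation of `word`
    under a unigram model, where `vocab` maps subword pieces to
    their probabilities."""
    n = len(word)
    # best[i] = (best log-prob of segmenting word[:i], split point)
    best = [(-math.inf, -1)] * (n + 1)
    best[0] = (0.0, -1)
    for i in range(1, n + 1):
        for j in range(i):
            piece = word[j:i]
            if piece in vocab and best[j][0] > -math.inf:
                score = best[j][0] + math.log(vocab[piece])
                if score > best[i][0]:
                    best[i] = (score, j)
    if best[n][0] == -math.inf:
        return None  # word is not segmentable with this vocabulary
    # Backtrack from the end of the word to recover the pieces.
    pieces, i = [], n
    while i > 0:
        j = best[i][1]
        pieces.append(word[j:i])
        i = j
    return pieces[::-1]

# Toy vocabulary; probabilities are illustrative only.
vocab = {"kitab": 0.05, "lar": 0.04, "da": 0.06,
         "k": 0.01, "i": 0.02, "t": 0.02, "a": 0.03, "b": 0.01}

# Azerbaijani "kitablarda" ("in the books") splits into stem + suffixes.
print(unigram_segment("kitablarda", vocab))  # → ['kitab', 'lar', 'da']
```

In practice this is what SentencePiece's unigram model performs at scale, with piece probabilities learned by EM rather than set by hand.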


Related research

- Neural Machine Translation for South Africa's Official Languages (05/08/2020): Recent advances in neural machine translation (NMT) have led to state-of...
- Multi-Domain Neural Machine Translation (05/06/2018): We present an approach to neural machine translation (NMT) that supports...
- Depth Growing for Neural Machine Translation (07/03/2019): While very deep neural networks have shown effectiveness for computer vi...
- Adversarial Subword Regularization for Robust Neural Machine Translation (04/29/2020): Exposing diverse subword segmentations to neural machine translation (NM...
- On Compositional Generalization of Neural Machine Translation (05/31/2021): Modern neural machine translation (NMT) models have achieved competitive...
- Domain Robustness in Neural Machine Translation (11/08/2019): Translating text that diverges from the training domain is a key challen...
- ℰ KÚ [MASK]: Integrating Yorùbá cultural greetings into machine translation (03/31/2023): This paper investigates the performance of massively multilingual neural...
