Quantitative Approaches to the Analysis of Predictions in Neural Machine Translation (NMT) [original title: Approches quantitatives de l'analyse des prédictions en traduction automatique neuronale (TAN)]

12/10/2020
by Maria Zimina-Poirot et al.

As part of a larger project on optimal learning conditions in neural machine translation, we investigate the characteristic training phases of translation engines. All experiments are carried out with OpenNMT-py: preprocessing and training use the Europarl corpus, and the INTERSECT corpus serves for validation. Longitudinal analysis of the training phases suggests that translation quality does not always progress linearly. Building on the results of textometric explorations, we highlight the importance of phenomena tied to chronological progression in order to map the different processes at work in neural machine translation (NMT).
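The pipeline described above can be sketched with the OpenNMT-py 1.x command-line tools. This is a minimal illustration, not the authors' exact configuration: the file paths, checkpoint interval, and step counts are hypothetical placeholders, and it assumes sentence-aligned plain-text files for the training (Europarl) and validation (INTERSECT) sets.

```shell
# Build the vocabulary and binarized dataset from the parallel corpora.
# (Hypothetical paths: Europarl for training, INTERSECT for validation.)
onmt_preprocess \
    -train_src europarl.src -train_tgt europarl.tgt \
    -valid_src intersect.src -valid_tgt intersect.tgt \
    -save_data data/engine

# Train, saving a checkpoint at regular intervals so that successive
# training phases can later be compared longitudinally.
onmt_train -data data/engine -save_model models/engine \
    -save_checkpoint_steps 5000 -train_steps 100000

# Translate the same test set with each checkpoint to trace how the
# engine's predictions evolve over the course of training.
onmt_translate -model models/engine_step_5000.pt \
    -src test.src -output pred_step_5000.txt
```

Comparing the `pred_step_*.txt` outputs across checkpoints is one way to operationalize the longitudinal analysis of training phases that the abstract describes.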
