Debugging Neural Machine Translations

08/08/2018
by Matīss Rikters, et al.

In this paper, we describe a tool for debugging the output and attention weights of neural machine translation (NMT) systems, and for improved estimation of confidence in the output based on those attention weights. The purpose of the tool is to help researchers and developers find weak and faulty example translations that their NMT systems produce without the need for reference translations. Our tool also includes an option to directly compare translation outputs from two different NMT engines or experiments. In addition, we present a demo website of our tool with examples of good and bad translations: http://attention.lielakeda.lv
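The idea of scoring translation confidence from attention alone can be illustrated with a short sketch. Below is a minimal NumPy illustration of two such scores, a coverage deviation penalty and a mean attention entropy, in the spirit of the related "Confidence through Attention" work; the function names and exact normalizations here are illustrative assumptions, not the tool's actual API.

```python
import numpy as np

def coverage_deviation_penalty(attn):
    """Penalize source tokens that receive far more or far less than
    one unit of total attention over the whole translation.
    attn: (target_len, source_len) matrix; each row sums to ~1.
    Returns 0 for balanced coverage, increasingly negative otherwise."""
    col_sums = attn.sum(axis=0)  # total attention mass per source token
    return float(-np.mean(np.log(1.0 + (1.0 - col_sums) ** 2)))

def attention_entropy(attn):
    """Mean entropy of the per-target-token attention distributions.
    Dispersed (high-entropy) attention often accompanies weak output."""
    eps = 1e-12  # avoid log(0)
    return float(-np.mean(np.sum(attn * np.log(attn + eps), axis=1)))

# Sharp one-to-one alignment: balanced coverage, sharp distributions.
sharp = np.eye(4)
# Degenerate alignment: every target token attends to source token 0.
skipped = np.zeros((4, 4))
skipped[:, 0] = 1.0
# Fully dispersed attention: uniform over all source tokens.
fuzzy = np.full((4, 4), 0.25)

print(coverage_deviation_penalty(sharp))    # near 0: balanced coverage
print(coverage_deviation_penalty(skipped))  # strongly negative: 3 tokens ignored
print(attention_entropy(sharp))             # near 0: sharp alignments
print(attention_entropy(fuzzy))             # high: dispersed attention
```

Sentences whose attention matrices score poorly on measures like these can be surfaced as likely weak translations without consulting a reference.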


Related research

12/24/2020 · Why Neural Machine Translation Prefers Empty Outputs
We investigate why neural machine translation (NMT) systems assign high ...

06/25/2022 · Probing Causes of Hallucinations in Neural Machine Translations
Hallucination, one kind of pathological translations that bothers Neural...

10/10/2017 · Confidence through Attention
Attention distributions of the generated translations are a useful bi-pr...

12/19/2022 · Optimal Transport for Unsupervised Hallucination Detection in Neural Machine Translation
Neural machine translation (NMT) has become the de-facto standard in rea...

12/10/2020 · Approches quantitatives de l'analyse des prédictions en traduction automatique neuronale (TAN) [Quantitative approaches to analyzing predictions in neural machine translation]
As part of a larger project on optimal learning conditions in neural mac...

04/13/2023 · Bidirectional UML Visualisation of VDM Models
The VDM-PlantUML Plugin enables translations between the text based UML ...

07/06/2018 · Testing Untestable Neural Machine Translation: An Industrial Case
Neural Machine Translation (NMT) has been widely adopted recently due to...
