An Empirical Accuracy Law for Sequential Machine Translation: the Case of Google Translate

03/05/2020
by Lucas Nunes Sequeira, et al.

We have established, through empirical testing, a law that relates the number of translating hops to translation accuracy in sequential machine translation with Google Translate. Both translation accuracy and text size decrease with the number of hops; the accuracy decrease closely follows a power law. Such a law allows one to predict the behavior of translation chains that may be built as society increasingly depends on automated devices.
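As a rough illustration of how such a law can be used, the sketch below fits a power-law decay to per-hop accuracy measurements of a translation chain and then extrapolates to longer chains. The specific functional form a(n) = a0 · n^(−γ), the `fit_power_law` helper, and the placeholder numbers in the example are assumptions for illustration only; the paper states only that accuracy decreases closely following a power law, and reports its own fitted parameters.

```python
# Sketch: fitting a power-law decay to sequential-translation accuracy.
# Assumption (not taken from the paper): the decay has the form
#   accuracy(n) = a0 * n**(-gamma)
# where n is the number of translation hops and accuracy is any similarity
# score in [0, 1] between the original text and the text after n hops.

import numpy as np
from scipy.optimize import curve_fit


def power_law(n, a0, gamma):
    """Assumed model: accuracy after n hops, a(n) = a0 * n**(-gamma)."""
    return a0 * np.power(n, -gamma)


def fit_power_law(hops, accuracies):
    """Fit (a0, gamma) to measured (hops, accuracy) pairs."""
    params, _ = curve_fit(
        power_law,
        np.asarray(hops, dtype=float),
        np.asarray(accuracies, dtype=float),
        p0=(1.0, 0.5),
        maxfev=10_000,
    )
    return params  # (a0, gamma)


if __name__ == "__main__":
    # Placeholder measurements for demonstration only: replace with accuracies
    # you actually measure on a translation chain (e.g. BLEU or another
    # similarity score computed after each hop).
    hops = [1, 2, 3, 4, 5, 6]
    accuracies = [0.95, 0.81, 0.72, 0.66, 0.61, 0.58]

    a0, gamma = fit_power_law(hops, accuracies)
    print(f"fitted model: accuracy(n) ~ {a0:.2f} * n^(-{gamma:.2f})")

    # The fitted curve then gives the kind of prediction the abstract mentions:
    # the expected accuracy after a chain of a given length.
    print(f"predicted accuracy after 10 hops: {power_law(10, a0, gamma):.2f}")
```

Fitting in log-log space (a straight line in log accuracy vs. log hops) is an equivalent alternative; the curve_fit version above is used only because it keeps the assumed model explicit.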
