Machine translation (MT) is one of the most successful applications in natural language processing, as exemplified by its numerous practical applications and the number of contributions on this topic at major machine learning and natural language processing venues. Despite recent advances in translation quality for a handful of language pairs and domains, MT systems still perform poorly on low-resource languages, i.e. languages without a lot of training data. In fact, many low-resource languages are not even supported by most popular translation engines. Yet, the majority of the world's population speaks low-resource languages and would benefit from improvements in translation quality in their native languages. As a result, the field has been shifting its focus towards low-resource languages.
Over the past decade, the research community has made substantial progress on models for low-resource machine translation. Approaches like iterative backtranslation (sennrich2015improving), multilingual machine translation (gmt17; tang2020ml50; fan2020beyond), and even unsupervised machine translation (lample_emnlp2018; artetxe_emnlp2018) have shown promising results. Beyond modeling, a major challenge for research in low-resource machine translation is evaluation. Low-resource evaluation is critical to the scientific progress of the field, because evaluation enables proper comparison of approaches and, ultimately, a better understanding of what needs further investigation and improvement. Unfortunately, finding high-quality data suitable for evaluation is even more difficult in low-resource scenarios.
At present, there are very few benchmarks on low-resource languages, and these often cover only a small number of them (riza2016introduction; thu-etal-2016-introducing; guzman-etal-2019-flores; barrault-EtAl:2020:WMT1; nekoto2020participatory; ebrahimi2021americasnli; kuwanto2021low), limiting our understanding of how well methods generalize and scale to a larger number of languages with a diversity of linguistic features. Some benchmarks do have high coverage, but they are restricted to specific domains, like COVID-19 (tico19) or religious texts (christodouloupoulos2015massively; malaviya2017learning; tiedemann2018emerging; agic-vulic-2019-jw300), or have low quality because they are built using automatic approaches (zhang-etal-2020-improving; schwenk2019ccmatrix; schwenk-etal-2021-wikimatrix). As a result, it is difficult to draw firm conclusions about research efforts on low-resource MT. Even fewer benchmarks are suitable for evaluating many-to-many multilingual translation, as this requires multilingual alignment (i.e. having the translation of the same sentence in multiple languages); this hampers the progress of the field despite all the recent excitement around this research direction. As an additional challenge, there are no established practices for how to build such benchmarks. Working with professional translators in low-resource languages is difficult because of their scarce availability, and because it is non-trivial to check the quality of their work (guzman-etal-2019-flores).
We present the Flores-101 benchmark, consisting of 3001 sentences sampled from English Wikipedia and professionally translated into 101 languages. With this dataset, we make several contributions. First, we provide the community with a high-quality benchmark that has a much larger breadth of topics and coverage of low-resource languages than any other existing dataset (§4). Second, Flores-101 is suitable for many-to-many evaluation, meaning that it enables seamless evaluation of all 10,100 language pairs. This enables the evaluation of popular multilingual MT systems as well as the evaluation of regionally relevant language pairs like Spanish-Aymara and Vietnamese-Thai, for example. Third, we thoroughly document the annotation process we followed (§3), helping the community build institutional knowledge about how to construct MT datasets. Fourth, we release not only sentences with their translations but also rich metadata that will support other kinds of evaluations and tasks, such as document-level translation, multimodal translation, and text classification. Fifth, we propose a BLEU metric based on SentencePiece tokenization (kudo2018sentencepiece) (§5) that enables evaluation of all languages in the set in a unified and extensible framework. Finally, we publicly release both the data and the baselines used in our experiments (§6), to foster research in low-resource machine translation and related areas.
This paper is organized as follows: In Section 2, we describe related work to construct evaluation benchmarks in machine translation. In Section 3, we detail the construction process of Flores-101, from sourcing sentences to translate to defining the translation workflow. Section 4 gives a detailed overview of the sentences, languages, and quality of Flores-101. In Section 5, we describe our proposed SentencePiece BLEU metric which unifies and simplifies evaluation. Section 6 uses Flores-101 to evaluate various public translation models, and breaks down model performance by amount of training data, domain, sentence length, and language family. We present our conclusions in Section 7.
2 Related Work
A major challenge in machine translation, particularly as the field shifts its focus to low-resource languages, is the lack of availability of evaluation benchmarks. Much recent work has focused on the creation of training corpora auguste2021domain; ali2021towards; adelani2021menyo; gezmu2021extended; nyoni2021low; chauhan2021monolingual and development of models koneru2021unsupervised; nagoudi2021indt5; aulamo2021boosting, but evaluation is critical to being able to assess and improve translation quality.
Traditionally, the yearly Workshop on Machine Translation (WMT) and its associated shared tasks have provided standardized benchmarks and metrics to the community, fostering progress by providing means of fair comparison among various approaches. Over recent years, the main translation task at WMT has challenged participants with low-resource languages, but the evaluation has been limited to a handful of languages — for example, Latvian in 2017 ondrej2017findings, Kazakh in 2018 rej2018findings, Gujarati and Lithuanian in 2019 barrault2019findings, and Inuktitut, Khmer, Pashto, and Tamil in 2020 barrault2020findings. Moreover, these tasks have considered translation to and from English only, while the field has been recently focusing on large-scale multilingual models (gmt17; aharoni-etal-2019-massively; freitag-firat-2020-complete; fan2020beyond).
To date, the largest resource of parallel data which can also be used for evaluation purposes is OPUS (tiedemann2012parallel), which is itself a collection of publicly available parallel datasets. While OPUS has by far the largest coverage of languages, particularly to and from English, it consists of a mixture of manually translated and mined data, which results in a large variety of datasets and domains with varying levels of quality. For instance, OPUS contains human-translated parallel data from operating-system handbooks like Ubuntu's, or parallel data from religious documents (liu2021usefulness) like Jehovah's Witness magazines (agic-vulic-2019-jw300) and the Bible; these have recently been expanded to include more languages (nicolai2021expanding) as well. OPUS also contains a variety of other automatically aligned datasets, such as various versions of TED talks, which are usually of lower quality (Ye2018WordEmbeddings; zhang-etal-2020-improving; fan2020beyond). Similarly, OPUS contains large parallel datasets generated via automatic filtering and alignment methods, such as WikiMatrix (schwenk-etal-2021-wikimatrix), ccMatrix (schwenk2019ccmatrix), ccAligned (elkishky2020ccaligned), and ParaCrawl (paracrawl:2019:mt_summit), which contain noisy translations. While these may be utilized for training, they are clearly unsuitable for evaluation purposes due to automatic alignment.
There are other datasets for evaluation purposes, such as Flores v1.0 (guzman-etal-2019-flores), LORELEI (strassel-tracey-2016-lorelei), ALT (thu-etal-2016-introducing; riza2016introduction; ding2016similar) and TICO-19 (tico19), as well as datasets for specific languages such as Igbo ezeani2020igbo and Fon (dossou2020ffr). These are similar to Flores-101 because they focus on low-resource languages. However, the language coverage of these datasets is much smaller. Among these, only TICO-19 is suitable for multilingual machine translation, but its content is centered around COVID-19, unlike the much broader coverage of topics offered by Flores-101.
Lastly, the current literature in low-resource translation provides very scarce guidance in terms of best practices and methodology to construct parallel datasets and perform quality assurance. The much lower number of available translators is problematic because it makes the annotation process much more susceptible to variance in the proficiency level of the annotators. In Flores v1.0 (guzman-etal-2019-flores), a mixture of human and automatic checks was used to filter and rework problematic translations. In TICO-19 (tico19), a two-step translation and quality assurance process was followed. Despite its technical complexity, the annotation process for benchmark sets is in fact seldom documented in technical reports. This is still largely uncharted territory. However, there are many practical questions related to setting up and ensuring the quality of large-scale translation campaigns targeting low-resource languages, which may have very few annotators. For example: What guidelines should be considered for translators and evaluators? What workflow is most efficient and effective? What automatic checks should be put in place to minimize human intervention? When can a dataset be declared to have reached a sufficient level of quality to be released? In this study, we document our choices and processes in the hope of building and consolidating best practices of dataset construction for the machine translation community.
3 Dataset Construction
The construction of Flores-101 is intended to accomplish several goals: (i) to enable the evaluation of many-to-many multilingual models, meaning the evaluation of translations from any language to any other language including very long-tail languages; (ii) to enable other kinds of evaluation beyond machine translation, such as document-level translation, multi-modal translation, multilingual classification, and so on; (iii) most importantly, to build a high-quality evaluation benchmark.
To achieve the above goals, the overall construction process consisted of three phases, as outlined in Figure 1. First, we extracted sentences to translate from English Wikipedia. Second, we designed and ran pilot experiments to determine the translation process. Finally, we launched the actual translation workflow for over 100 languages. In this section, we describe the process in detail; readers more interested in general statistics can safely skip this section and go to Section 4.
3.1 Sourcing Sentences
We describe how the domains and sentences in Flores-101 were selected. A high-level summary of the dataset can be found in Table 1.
All source sentences were extracted from multiple Wikimedia sources, as this is a repository of text that is public and freely available under permissive licensing, and covers a broad range of topics. Although Wikipedia is currently supported in more than 260 languages (https://en.wikipedia.org/wiki/Wikipedia:Multilingual_statistics), several low-resource languages have relatively few articles containing well-structured sentences. Moreover, translating a few hundred sentences for several thousand different language pairs would be infeasible, at the very least because of the lack of qualified professional translators able to read both the source and target side.
Instead, we opted to source all sentences from English Wikipedia, while considering a broad set of topics that could be of general interest regardless of the native language of the reader. In particular, we collected a third of the sentences from Wikinews (https://en.wikinews.org/wiki/Main_Page), a collection of international news articles; a third from Wikijunior (https://en.wikibooks.org/wiki/Wikijunior), a collection of age-appropriate nonfiction books for children from birth to age 12; and a third from WikiVoyage (https://en.wikivoyage.org/wiki/Main_Page), a travel guide with articles about travel tips, food, and destinations around the globe. By translating the same set of English sentences into more than a hundred languages, we enable evaluation of multilingual MT, with the only caveat that source sentences not in English are produced by human translators. While translationese (overly literal or awkward translation) has known idiosyncrasies (zhang-toral-2019-effect), we conjecture that these effects are rather marginal when evaluating models in low-resource languages, where current MT systems still produce many severe mistakes. We believe the benefits of many-to-many evaluation, which supports the measurement of traditionally neglected, regionally relevant pairs such as Xhosa-Zulu, Vietnamese-Thai, and Spanish-Aymara, largely outweigh the risk of evaluating translationese.
The sentence selection process consisted of selecting an article at random from each source, and then manually selecting a few (typically between 3 and 5) contiguous sentences from each article, avoiding segments with very short or malformed sentences. To avoid bias coming from the document structure, we carefully selected one paragraph per document, drawn from either the beginning, middle, or end of the article, and balanced the location selection to be equally distributed across the whole corpus: roughly one third of paragraphs were sampled from the beginning of the article, one third from the middle, and one third from the end. For each sentence, we also extracted the Wikipedia URL and topic, and noted boolean flags indicating whether the sentence contained entities linked to other Wikipedia pages and images. The selection process was performed by 10 different annotators in our lab (6 male, 4 female), with different roles in research (scientists, engineers, and program/project managers) and originally from different regions of the world (East Asia, South Asia, Southern Europe, Latin America, and North America). We manually labeled each sentence with a more detailed sub-topic, one of 10 possibilities: crime, disasters, entertainment, geography, health, nature, politics, science, sports, and travel. Table 1 reports basic statistics of the originating English sentences.
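The balanced paragraph selection described above can be sketched as follows. This is an illustrative reconstruction, not the authors' actual tooling; the function name and the equal three-way split over beginning/middle/end are assumptions based on the description in the text.

```python
import random

def pick_paragraph(paragraphs, rng=random.Random(0)):
    """Pick one paragraph from an article, balancing beginning/middle/end.

    Choosing the location uniformly at random means that, over the whole
    corpus, roughly one third of paragraphs come from each location.
    """
    section = rng.choice(["beginning", "middle", "end"])
    n = len(paragraphs)
    index = {"beginning": 0, "middle": n // 2, "end": n - 1}[section]
    return section, paragraphs[index]
```

In practice the paper's annotators performed this selection manually; the sketch only makes the balancing scheme concrete.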
Since several contiguous sentences are extracted from the same article and since we also provide the corresponding URL, we support evaluation of machine translation at the document level. With the additional meta-data, we also enable evaluation of multimodal machine translation.
Table 1: Flores-101 source statistics.

  Number of Sentences                        3001
  Average Words per Sentence                 21
  Number of Articles                         842
  Average Number of Sentences per Article    3.5
  % of Articles with Hyperlinked Entities    40
  % of Articles with Images                  66
3.2 Pilot Experiments
Obtaining high translation quality in low-resource languages is difficult because the translation job relies on the skill of a small set of translators. If one translator is not perfectly fluent or uses a different local convention for that language, this could render the quality of the dataset insufficient or inconsistent for that language. Here, we describe the process we followed to define an efficient and high quality translation workflow. To this end, we report two pilot experiments we used to determine how we should proceed with the creation of this large-scale evaluation dataset.
3.2.1 Providers and Workflows
To ensure the best possible level of quality for our translations, we designed two pilots aimed to determine the best workflow to follow for translating hundreds of languages. The first pilot experiment was meant to select the best translation providers for each language and the second, to determine the best translation-quality assurance workflow.
Language Service Providers.
As a starting point, let us assume that each language can be translated by different Language Service Providers (LSPs) and that they all charge the same price for translating a sentence. We randomly selected 100 sentences and 8 language pairs, and assigned each language to at least two LSPs. We then used another LSP to evaluate all translations.
Based on human assessment of translation quality, we selected the two LSPs that produced the highest-quality translations. We chose two translation LSPs so that our translation process would not rely entirely on a single party, while reducing the communication overhead created by working with too many external parties.
Translation and Quality Assurance Workflow.
Despite having reliable translation LSPs, we need to ensure that each translation conforms to the highest level of quality required by a benchmark. Therefore, we split our workflow into two parts: translation (which includes editing), performed by an initial LSP, and quality assurance (QA) performed by an independent LSP. After the QA process, a translation might need re-translation or minimal editing to improve its quality. Here, we explored the best workflow for when re-translation is needed. Assuming there are two translation LSPs, A and B, and a separate QA LSP C, we can have two possible workflows: (i) A–C–B we can have B re-translate translations flagged by C that were produced by A; (ii) A–C–A an alternative and simpler workflow is to have the same LSP take care of both translation and re-translation for a given language, and to have each translation LSP handle half of the languages.
The advantage of workflow (i) is that the re-translation process is less biased, particularly for low-resource languages, where the re-translator and the original translator could otherwise be the same person. On the other hand, in workflow (i) the re-translator has less context, and the workflow has higher complexity because data moves in and out of LSPs at different times, making the whole process more error-prone.
We tested both workflows and observed negligible differences between them; we therefore chose workflow (ii), A–C–A, with the same LSP taking care of both translation and re-translation, as it is operationally simpler.
3.2.2 Automatic Translation Quality
The second pilot experiment aimed to investigate how to assess translation quality automatically.
We implemented several checks to ensure the first round of translations were of acceptable quality: (i) language identification; (ii) checking whether the translation is a copy of the source sentence; (iii) checking whether the translation has a significantly different length; (iv) checking translation fluency according to a language model; and (v) checking whether the translation is a copy of the translation produced by publicly available translation engines. Among all checks, we found that (v) was the most significant issue, despite formulating clear guidelines forbidding the use of translation engines. This is important, as we want our translations to be as unbiased as possible: relying on verbatim or post-edited translations from online engines would be misleading, and would give those engines an unfair advantage when using a reference-based automatic metric for comparison.
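A minimal sketch of checks (ii) and (iii) is shown below; the length-ratio thresholds are illustrative assumptions, not values from the paper, and checks (i), (iv), and (v) would additionally plug in a language identifier, a language model, and the engine-copy heuristic described next.

```python
def basic_translation_checks(source, translation, min_ratio=0.5, max_ratio=2.0):
    """Flag obviously problematic translations (illustrative thresholds)."""
    issues = []
    # Check (ii): the translation is a verbatim copy of the source sentence.
    if translation.strip() == source.strip():
        issues.append("copy_of_source")
    # Check (iii): the translation length differs too much from the source.
    ratio = len(translation.split()) / max(1, len(source.split()))
    if not (min_ratio <= ratio <= max_ratio):
        issues.append("suspicious_length")
    return issues
```

Word-count ratios are a crude proxy (scripts without white-space segmentation would need character-based lengths), but they catch gross truncations and runaway outputs cheaply.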
To address this issue, we proposed a heuristic to detect and reject professional translations that are likely copies from translation engines, based on sentence-level similarity. When the translations of two different engines are available, we check whether a sentence is very similar to the output of one translation engine while being different from the output produced by the other.
The heuristic is as follows: let t be the translation produced by a translation LSP, and let e_A and e_B be the translations produced by translation engines A and B; let spBLEU(s, r) be the sentence-level SentencePiece BLEU (cf. Sec. 5) between a sentence s and a reference r. Then, we declare a translation t to be a copy if spBLEU(t, e_A) > τ1 and spBLEU(t, e_A) − spBLEU(t, e_B) > τ2 (or the symmetric condition with A and B swapped), when the language in question is supported by both engines A and B; and if spBLEU(t, e_A) > τ1 when the language is supported by translation engine A only. The values of τ1 and τ2 were based on the analysis of the distribution of scores for translations of tens of languages, where we used clustering techniques to determine the right cutoff values.
Moreover, we established that any set of translations with more than 10% of sentences violating the above criteria would need to be re-translated prior to any subsequent human evaluation. We show in Figure 3 an example of a language that passed this test and one that did not. Thanks to these automatic checks, we reduced the amount of copying from popular translation engines, streamlined the translation workflow before human evaluators assessed quality, and fully automated the process of translation, evaluation, and re-translation. This is described in the next section.
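The engine-copy heuristic and the 10% batch rule can be sketched as below. The threshold values t1 and t2 are placeholders (the paper derives the actual cutoffs by clustering score distributions), and the exact form of the two-engine condition is reconstructed from the description above.

```python
def is_engine_copy(spbleu_vs_a, spbleu_vs_b=None, t1=80.0, t2=20.0):
    """Flag a professional translation as a likely engine copy.

    spbleu_vs_a / spbleu_vs_b: sentence-level spBLEU of the translation
    against the outputs of engines A and B. Thresholds are illustrative.
    """
    if spbleu_vs_b is None:
        # Language supported by engine A only: similarity alone decides.
        return spbleu_vs_a > t1
    # Very similar to one engine while clearly different from the other.
    return (spbleu_vs_a > t1 and spbleu_vs_a - spbleu_vs_b > t2) or \
           (spbleu_vs_b > t1 and spbleu_vs_b - spbleu_vs_a > t2)

def batch_needs_retranslation(copy_flags, max_fraction=0.10):
    """A language's batch is re-translated if >10% of sentences are flagged."""
    return sum(copy_flags) / len(copy_flags) > max_fraction
```

Requiring a margin over the other engine (rather than similarity alone) avoids flagging sentences that both engines happen to translate well, which legitimately resemble any good human translation.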
Table 2: Translation process statistics.

  # of Languages Requiring Re-translation       45
  Avg # of Re-translations                      1
  Max # of Re-translations                      3
  Avg # of Days to Translate 1 Language         26
  Avg # of Days to Re-translate                 35
  Avg # of Days for 1 Language                  61
  Shortest Turnaround (days) for 1 Language     31
  Longest Turnaround (days) for 1 Language      89
3.3 Translation and Evaluation
We describe the final workflow for collecting data for all languages in Flores-101. We start with how we decide when a language is ready to be included in Flores-101: the final translation quality score. Then, we detail the full translation process, including automatic and human quality checks.
Translation Quality Score.
How do we know if the translations are good enough to include in Flores-101, and how do we know when a language has completed translation? Before we summarize the workflow used to produce translations, we briefly discuss how we measure translation quality. We assess translation quality through a Translation Quality Score, calculated per language on a 0 to 100 scale. The score is determined from the number of errors identified by the evaluation LSP. The following error types are examined: grammar, punctuation, spelling, capitalization, addition or omission of information, mistranslation, unnatural translation, untranslated text, and register. Each error is also assigned a severity level: minor, major, or critical. The overall score is determined by tallying these different error types. We encouraged evaluators to pay particularly close attention to unnatural-translation errors. Based on our pilot experiments, we set the acceptable translation quality score to 90%.
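The paper does not give the exact scoring formula, so the sketch below is one plausible severity-weighted tally; the weights, the per-sentence normalization, and the function names are purely illustrative assumptions.

```python
SEVERITY_WEIGHTS = {"minor": 1, "major": 3, "critical": 10}  # illustrative weights

def translation_quality_score(errors, n_sentences):
    """Score a language's translations on a 0-100 scale from tallied errors.

    errors: list of (error_type, severity) pairs,
            e.g. ("mistranslation", "major").
    """
    penalty = sum(SEVERITY_WEIGHTS[severity] for _, severity in errors)
    return max(0.0, 100.0 * (1.0 - penalty / n_sentences))

def passes_quality_bar(errors, n_sentences, threshold=90.0):
    """A language is accepted once its score meets the 90 threshold."""
    return translation_quality_score(errors, n_sentences) >= threshold
```

Whatever the exact weighting, the key property is the one the text describes: severities are tallied into a single per-language score that is compared against a fixed acceptance threshold.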
The overall translation workflow is depicted in Figure 2. For each language, all source sentences are sent to one of the translation LSPs. Once the sentences are translated, the data is sent to different translators within the LSP for editing, and then moves on to the automated quality-control steps, including the comparison of the translated data against the output of commercial engines described above. If any of the checks fail, the LSP must re-translate until all verifications pass. Afterwards, translations are sent to an evaluation LSP that performs quality assessment, providing a translation quality score and constructive linguistic feedback at both the sentence and language level. If the score is below the acceptance threshold, the translations, together with the assessment report, are sent back to the translation LSP for re-translation. If the initial score is below a second, lower threshold (associated with good translation quality), the re-translated sentences are evaluated by humans one more time. Table 2 summarizes the overall statistics of the translation process. We include the guidelines used for quality evaluation in the Appendix.
4 Flores-101 At a Glance
In this section, we analyze Flores-101. We provide a high level comparison of Flores-101 with existing benchmarks, then discuss the sentences, languages, and translation quality in detail.
4.1 Comparison with Existing Benchmarks
We compare Flores-101 with several existing benchmarks, summarized in Table 3. Flores-101 combines large language coverage with topic diversity, support for many-to-many evaluation, and high-quality human translations (i.e., produced without automatic alignment). Further, Flores-101 adds document-level evaluation and supports multimodal translation evaluation.
  Benchmark                               # Languages   Diverse Domains   Many-to-Many   Human Translation   Document-Level   Multimodal
  FLORES v1 (guzman-etal-2019-flores)     2             ✓                 ✗              ✓                   ✗                ✗
4.2 Sentences in Flores-101
Table 1 provides an overview of Flores-101. The total dataset translates 3001 sentences into 101 languages. On average, sentences contain around 20 words. These sentences originate from 1,175 different articles in three domains: WikiNews, WikiJunior, and WikiVoyage. On average, 3 sentences are selected from each document, and documents are then divided into dev, devtest, and test sets. The articles are rich in metadata: 40% of articles contain hyperlinks to other pages, and 66% of articles contain images. We manually classify the content of the sentences into one of 10 broader topics and display the distribution. Overall, most sentences are about world travel (sourced from WikiVoyage), though there are also a large number of sentences about science, politics, and crime.
4.3 Languages in Flores-101
We summarize all 101 languages in Flores-101 and their scripts and language families in Table 4. We note that language classification is a complex task with different classification hierarchies. We chose language families at a reasonable level of detail, i.e. fine enough such that languages can be grouped with a few other languages but not so fine that each language is in its own group. Overall, our selected languages cover a large percentage of people all over the world, with a large diversity of scripts and families. Most of these languages are spoken by millions of people, despite being considered low-resource in the research community.
In Table 4, we depict all Flores-101 languages by their resource level. The amount of data available for a language is difficult to denote accurately, in part because quality is very important and thus the amount of data does not necessarily reflect its usefulness. Further, some data may be proprietary, and new datasets for new languages are actively being created by the research community. Thus, we report the amount of data to/from English and the amount of monolingual data available in OPUS, a public repository for multilingual data. OPUS is a heavily used resource, and is itself a collection of a large number of research datasets produced by the community over decades. The majority of languages have both bilingual data through English and monolingual data, though a number of languages have less than 100K sentences through English. Many of those also have no monolingual data available, making these truly low-resource. Examples include Shona and Nyanja.
  ISO 639-3   Language                   Family          Subgrouping   Script(s)                   Bitext w/ English   Monolingual
  jpn         Japanese                   Japonic         Other         Han, Hiragana, Katakana     23.2M               458M
  spa         Spanish (Latin America)    Indo-European   Romance       Latin                       315M                379M
4.4 Translation Quality
The translation quality score across all languages is depicted in Figure 4. All 101 languages in Flores-101 meet our initial threshold of 90% quality based on human evaluation. Note that several languages beyond our set of 101 were considered, but were unable to meet the bar after rounds of re-translation. Overall, about 50% of languages have fairly high quality (above 95%), with only a few near the 90% threshold boundary. Even low-resource languages like Lao and Zulu can score well on the quality metric.
We break down the main translation errors observed based on the quality assessments and re-translations. The largest error category across all languages was mistranslation, a broad category indicating that the source text was not translated faithfully and the translation renders an incorrect meaning in the target language. Examples of mistranslation include (but are not limited to) incorrect interpretation of the source text, overly literal translations, mistranslations of phrasal verbs, and lack of disambiguation of ambiguous terms. For example, writing "…recommends hand washing with hand sanitizer rubs" instead of "…recommends hand washing over hand sanitizer rubs" would represent a mistranslation. Error categories with few errors include register, grammar, and punctuation.
We also examined if certain domains are more difficult to translate than others. Within a language, we did identify variation in the percentage of errors contributed by domain (often one domain could contribute up to 10% more errors than the others), but across languages, there was no clear trend. Overall, it appears that all domains are challenging to translate for human translators.
5 Metric: SentencePiece BLEU
How do we evaluate the performance of translation models at the scale of 101 languages? In this section, we propose the SentencePiece BLEU metric and analyze its performance across languages compared to various alternatives.
5.1 Background

Automatic evaluation of translation quality is an active field of research. Each year, the WMT Metrics shared task seeks to determine the automatic metric that best correlates with human evaluations (mathur-EtAl:2020:WMT). While many metrics have been proposed over the years, their analysis has included only a handful of low-resource languages. Further, despite the progress in automatic metrics, the common practice is to use BLEU (bleu) when reporting results. Unfortunately, using BLEU directly is suboptimal, as it relies on n-gram overlap, which is heavily dependent on the particular tokenization used: tokenizing more aggressively can artificially raise the score and makes it difficult to compare across reported results.

Making BLEU scores comparable through equivalent tokenization schemes has been a long-standing challenge for the translation community, and has been partially addressed by sacrebleu (post2018call). Previous standards usually leverage the mosestokenizer (https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/tokenizer.perl), which is the default tool in sacrebleu. However, for many languages, these existing tools and tokenizers are not sufficient.
For example, mosestokenizer supports a limited number of languages (often activated with the --tok flag). While its default tokenization rules might operate reasonably for European languages, they do not extend to global support. For example, white-space tokenization is insufficient for some languages like Burmese or Khmer, which do not segment words with white space. Other languages like Arabic are morphologically rich, which has incentivized the creation of BLEU variants bouamor-etal-2014-human. To further complicate matters, some languages like Hindi and Japanese already have custom tokenizers that are used when computing BLEU, although these appear scattered in various publication footnotes, while for others, no such special tokenizers have been developed yet. Further, developing tokenizers for each language of interest is a challenging effort dossou2021crowdsourced; li2021finding that is difficult to scale.
Ideally, we would like an automatic evaluation process that is robust, simple and that can be applied to any language without the need to specify any particular tokenizer, as this will make it easier for researchers to compare against each other. We would like our automatic evaluation to also support future languages — as translation quality continues to improve, the community will naturally produce models for more and more languages.
5.2 SentencePiece BLEU
Towards this goal, we have trained a SentencePiece (SPM) tokenizer (kudo2018sentencepiece) with 256,000 tokens using monolingual data (conneau2020unsupervised; wenzek2019ccnet) from all the Flores-101 languages. SPM learns subword units from training data and does not require prior tokenization; its logic is language-independent, as it treats all sentences as sequences of Unicode characters. Given the large amount of multilingual data and the large number of languages, this essentially provides a universal tokenizer that can operate on any language.
One challenge is that the amount of monolingual data available for different languages is not the same — an effect that is extreme when considering low-resource languages. Languages with small quantities of data may not have the same level of coverage in subword units, or an insufficient quantity of sentences to represent a diverse enough set of content. To address this, we train our SPM model with temperature upsampling similar to conneau2020unsupervised, so that low-resource languages are represented. In the future if a new language is added to Flores-101 and this tokenizer does not support its script, we can easily add new tokens to encode it as desired.
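The temperature-based upsampling can be sketched in a few lines. The exponent value below follows the XLM-R recipe and is an assumption, as is the function name; it is an illustration of the rebalancing idea, not the exact procedure used.

```python
def temperature_sampling_probs(counts, alpha=0.3):
    """Rebalance per-language sampling probabilities.

    counts: dict mapping language code -> number of monolingual sentences.
    alpha < 1 upsamples low-resource languages; alpha=0.3 follows the
    XLM-R recipe and is an assumption here.
    """
    total = sum(counts.values())
    # raw data shares, proportional to corpus size
    raw = {lang: n / total for lang, n in counts.items()}
    # raising shares to a power < 1 flattens the distribution
    powered = {lang: p ** alpha for lang, p in raw.items()}
    z = sum(powered.values())
    return {lang: p / z for lang, p in powered.items()}
```

The effect is that a language with one thousandth of the data receives far more than one thousandth of the sampling probability, so its subword units are still learned.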
Given this SPM tokenizer, we compute BLEU by tokenizing the system output and the reference, and then calculating BLEU in the space of sentence pieces. We dub this metric sentence-piece BLEU, denoted spBLEU. It is integrated into sacrebleu for ease of use666https://github.com/ngoyal2707/sacrebleu/tree/adding_spm_tokenized_bleu as the spm tokenizer.
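To make the computation concrete, the following self-contained sketch scores corpus BLEU in the space of pieces produced by an arbitrary subword tokenizer. In practice one would use the released SPM model through sacrebleu; the `tokenize` callable and the function name here are illustrative assumptions.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def spbleu(hyps, refs, tokenize, max_n=4):
    """Corpus-level BLEU computed over subword pieces.

    `tokenize` maps a sentence to a list of pieces; in the real metric
    this is the shared 256k-token SentencePiece model, here it is any
    callable (an assumption for illustration).
    """
    match, total = [0] * max_n, [0] * max_n
    hyp_len = ref_len = 0
    for h, r in zip(hyps, refs):
        ht, rt = tokenize(h), tokenize(r)
        hyp_len += len(ht)
        ref_len += len(rt)
        for n in range(1, max_n + 1):
            hc, rc = ngrams(ht, n), ngrams(rt, n)
            # clipped n-gram matches against the reference
            match[n - 1] += sum(min(c, rc[g]) for g, c in hc.items())
            total[n - 1] += max(len(ht) - n + 1, 0)
    precisions = [m / t if t else 0.0 for m, t in zip(match, total)]
    if min(precisions) == 0:
        return 0.0
    # brevity penalty discourages overly short hypotheses
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return 100 * bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

Because the tokenizer is the only language-specific component, swapping in the shared SPM model gives the same scoring logic for every language.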
[Table 5 column headers: spBLEU v. BLEU; char-BLEU v. BLEU]
5.3 Experiments and Analysis
We want to validate the spBLEU metric in three ways: (1) confirm that it trends with the standard BLEU metric on languages where mosestokenizer is often used as the default; (2) for languages where custom tokenizers are currently used, show that spBLEU correlates with custom-tokenizer BLEU more strongly than alternatives such as character-level BLEU; and (3) verify that spBLEU can be used for model selection purposes.
[Table 6 column headers: Language; spBLEU v. Human Eval, spBLEU v. BLEU, and Human Eval v. BLEU, each with Kendall and Same Best Model sub-columns]
spBLEU correlates with BLEU.
First, we examine if spBLEU has strong positive correlation with BLEU across various languages where the mosestokenizer is widely used by default. We examine Spanish, Italian, and French. As shown in Table 5 (top), we find that spBLEU correlates very well with BLEU on these languages.
spBLEU is better than char-BLEU when custom tokenization is needed.
Next, we examine the performance of spBLEU on languages where custom tokenizers are often used, or special tokenization rules are written. We look at three languages: Chinese, Hindi, and Tamil. Chinese is supported by mosestokenizer with special rules, while Hindi and Tamil have a widely used tokenizer from IndicNLP777https://anoopkunchukuttan.github.io/indic_nlp_library/. While these language-specific tokenizers are excellent, the challenges of scale and comparability remain: there are often several competing tokenizers, and a tokenizer must be developed for each language we want to evaluate. A possible alternative is to eliminate tokenization entirely, evaluating characters directly with character-level BLEU (char-BLEU).
We examine whether spBLEU is a better alternative to char-BLEU for languages that currently use special tokenizers. As shown in Table 5 (bottom), spBLEU correlates more strongly with custom-tokenizer BLEU than char-BLEU does. While custom tokenizers for specific languages produce much more accurate tokenization, spBLEU is a good alternative for comparability and scalability across a large number of languages.
[Table 7 resource bins: < 100K; (100K, 1M); (1M, 100M); > 100M]
spBLEU has similar performance as BLEU for model selection.
Next, we turn to verifying that spBLEU can be used to compare the quality of models for model selection purposes. This is important, as automatic metrics are often used in an outer loop of the training process to select various hyper-parameters, such as model size, dropout rate, and learning rate. Thus, we replicate the selection of various models using spBLEU instead of BLEU, experimenting on three language directions: English to Pashto, English to Russian, and English to Chinese. We choose these directions because they were part of the WMT2020 human evaluations, and thus we know the ground-truth ranking. We evaluate five models for Pashto, eight for Russian, and seven for Chinese. We focus on directions out of English, as mosestokenizer works well on English.
For each of the three language directions, we compute spBLEU and BLEU with language-specific tokenizers between the different systems and the reference translation. Overall results are shown in Table 6. We first check whether the ranking of systems produced by spBLEU matches the rankings by human evaluation and by BLEU, measuring ranking agreement with the Kendall coefficient, which ranges between -1 and 1. The ranking of systems by spBLEU matches human evaluation and BLEU perfectly for Pashto, and correlates strongly with both for Chinese and Russian.
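The ranking-agreement statistic can be sketched as a minimal pure-Python version of the Kendall coefficient over system scores (the function name is ours; library implementations such as scipy's would normally be used):

```python
from itertools import combinations

def kendall_tau(scores_a, scores_b):
    """Kendall rank correlation between two score lists for the same
    systems (e.g. spBLEU scores vs. human evaluation scores).
    Returns a value in [-1, 1]; 1 means identical rankings."""
    assert len(scores_a) == len(scores_b)
    concordant = discordant = 0
    for i, j in combinations(range(len(scores_a)), 2):
        # a pair of systems is concordant if both metrics order it the same way
        sign = (scores_a[i] - scores_a[j]) * (scores_b[i] - scores_b[j])
        if sign > 0:
            concordant += 1
        elif sign < 0:
            discordant += 1
    n_pairs = len(scores_a) * (len(scores_a) - 1) / 2
    return (concordant - discordant) / n_pairs
```

A value of 1 corresponds to the "perfect match" case reported for Pashto, where spBLEU orders every pair of systems exactly as human evaluation does.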
However, the exact ranking may not be the most important part of a metric. Oftentimes, we want to use automatic metrics to understand which model improvement is the most effective — for example, which model to submit to WMT? Thus, it is important to check whether the best scoring model according to spBLEU matches the best scoring model according to BLEU. We find that spBLEU and BLEU indeed select the same best model on all three languages. Note that BLEU has no guarantee of selecting the same model as human evaluators888There are a number of ways that BLEU score can be improved that may not have an effect on human evaluation. Punctuation normalization is an example..
Overall, we conclude that spBLEU functions fairly similarly to BLEU, especially on languages that usually default to the mosestokenizer. On languages that use custom tokenization, spBLEU correlates more strongly with BLEU than other alternatives, such as char-BLEU. Further, spBLEU often produces a very similar ranking of models and selects the same best model as BLEU and human evaluation.
For the vast majority of languages without custom tokenizers, spBLEU provides the ability to quantify performance in the same way, with one model. We believe that having a single model to perform tokenization will help the research community make progress on low-resource research, while opening the door for improved versions of spBLEU that treat low-resource languages more fairly. In the remainder of this work, we use spBLEU to evaluate model performance.
6 Evaluating Baselines on Flores-101
In this section, we present evaluation of various models on Flores-101. We describe the dev, devtest, and test splits and how we intend them to be used. We then analyze the performance of a many-to-many model based on fan2020beyond and break down performance by resource level, sentence length, and language family. Finally, we compare various model variants.
6.1 Data Splits
Flores-101 is divided into three splits: dev, devtest, and test. Unless otherwise stated, we report results on the devtest portion of Flores-101. The dev set is meant for hyper-parameter tuning, and the devtest for testing during the development phase. The test set will not be released, but is available through a public evaluation server, while the dev and devtest sets are publicly downloadable999https://dl.fbaipublicfiles.com/flores101/dataset/flores101_dataset.tar.gz. Through the evaluation server, the test set can be used by various evaluation campaigns, such as the WMT 2021 Large-Scale Multilingual Task101010http://statmt.org/wmt21/large-scale-multilingual-translation-task.html. The primary motivation for keeping the test set available only through an evaluation server is to guarantee equivalent assessment of models and reduce overfitting to the test set. Further, as the dataset is many-to-many, releasing the source sentences would also release the target sentences of other directions.
[Table 8 length buckets: <= 15 words; (15, 25) words; > 25 words]
We evaluate on different models to provide baselines for researchers who may be interested in the performance of certain directions, and to understand which languages and directions need substantial research improvements.
M2M-124: fan2020beyond trained the M2M-100 multilingual model by extending large-scale data mining to create training data for language pairs not going through English. The original M2M-100 model does not have full coverage of the languages in Flores-101. We extended their mined data with data from OPUS for the Flores-101 languages not present in mined data, extending to 124 total languages. Note that for the additional languages added, OPUS does not contain a large quantity of data, and the OPUS data is rather noisy; see Table 4 for further details. On this parallel data, we trained two different sizes of models, namely a model with 615M and one with 175M parameters. Unless otherwise stated, we will report results using the 615M parameter model; this is our default throughout the rest of this paper.
OPUS-100: zhang-etal-2020-improving trained multilingual machine translation models on an English-centric OPUS training dataset with language-aware layers and random online backtranslation (RoBT). We evaluate the 24-layer model with backtranslation (dubbed Ours + 24 layer + RoBT in their work111111https://github.com/bzhangGo/zero/tree/master/docs/multilingual_laln_lalt) with 254M parameters.
Models open-sourced by Masakhane: The Masakhane Participatory Research effort, focusing on Natural Language Processing for African languages, has developed and open-sourced various machine translation models nekoto2020participatory; abbott2019benchmarking; abbott-martinus-2019-benchmarking. We evaluate models from English to six languages121212https://github.com/masakhane-io/masakhane-mt: Yoruba, Zulu, Swahili, Shona, Nyanja, and Luo.
We generate from all models with beam size 5, setting the max generation length to 200. Given the large number of directions covered in Flores-101, we do not tune the beam size, length penalty, or minimum/maximum generation length.
In this section, we report the results of the evaluation of the baseline approaches described above on the Flores-101 devtest using spBLEU.
6.4.1 Findings From Evaluation on All Directions
We evaluated our M2M-124 model with 615M parameters on all language pairs and report spBLEU scores in Figure 8, where the languages are organized alphabetically by language code. In Figure 9, the rows and columns have been reordered via spectral clustering, using the spBLEU scores as affinity scores between each pair of languages. This produces clusters ordered by ease of translation.
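A sketch of this clustering step, assuming scikit-learn and treating the symmetrized spBLEU matrix as a precomputed affinity (the symmetrization, diagonal handling, and function name are our assumptions, not necessarily the exact procedure used):

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_languages(spbleu_matrix, langs, n_clusters=8):
    """Group languages by mutual translation quality.

    spbleu_matrix[i][j] is the spBLEU score translating langs[i] -> langs[j].
    The matrix is symmetrized so it can serve as a precomputed affinity;
    n_clusters=8 matches Table 12, other details are assumptions.
    """
    A = np.asarray(spbleu_matrix, dtype=float)
    A = (A + A.T) / 2              # spBLEU is directional; average both directions
    np.fill_diagonal(A, A.max())   # a language is maximally similar to itself
    labels = SpectralClustering(
        n_clusters=n_clusters, affinity="precomputed", random_state=0
    ).fit_predict(A)
    clusters = {}
    for lang, lab in zip(langs, labels):
        clusters.setdefault(int(lab), []).append(lang)
    return clusters
```

Languages that translate well into each other end up in the same cluster, which is what produces the family-like groupings of Table 12.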
Across the board, performance of translation into English is strong, with only a few languages scoring below 10 spBLEU. Performance out of English is much worse. We display this graphically in Figure 5, where performance into English (circle markers) has higher spBLEU than performance out of English (triangle markers). Further, performance is heavily correlated with the amount of training data, which we discuss in greater detail later.
Across non-English-centric directions, performance requires improvement — translation in and out of most African languages, for example, struggles to reach 5 spBLEU. In contrast, translation into many European languages, even low-resource ones such as Occitan, has much better performance (over 10 spBLEU for many directions). This result highlights the importance of both the amount of data and transfer learning from related languages. For instance, translation to and from Occitan can naturally borrow from related high-resource languages like French, Italian and Spanish. However, the same cannot be said about most African languages, for which related languages are also low resource and difficult to translate.
Performance by Resource Level.
A challenge of analyzing performance of various language families is that performance is often closely tied to the amount (and quality) of available training data. Certain language families have much less data. For example, almost every single African language is considered a low-resource translation direction. Thus, we next evaluate performance based on resource level. We classify languages into four bins based on resource level of bitext data with English: high-resource languages, with more than 100M sentences of training data, mid-resource with between 1M and 100M sentences, low-resource with between 100K and 1M sentences, and finally very low-resource with less than 100K sentences.
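The binning described above can be expressed directly; the function name and the handling of counts that fall exactly on a boundary are assumptions.

```python
def resource_bin(n_bitext_sentences):
    """Assign a language to one of the four resource bins of Table 7,
    based on the number of parallel sentences with English.
    Boundary handling at exactly 100K/1M/100M is an assumption."""
    if n_bitext_sentences > 100_000_000:
        return "high-resource"
    if n_bitext_sentences > 1_000_000:
        return "mid-resource"
    if n_bitext_sentences > 100_000:
        return "low-resource"
    return "very low-resource"
```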
Our results are summarized in Table 7. As hypothesized, performance increases with greater quantity of training data, in a clear pattern. spBLEU increases moving from left to right, as well as from top to bottom. Translation between mid and high resource languages produces spBLEU scores around 20, whereas translating between very low and low-resource languages yields a mere spBLEU score of less than 5. Even translation between high resource and low-resource languages is still quite low, indicating that lack of training data strongly limits performance of current MT systems.
Performance by Sentence Length.
In the previous paragraphs we found that translation quality is affected by the amount of training data and the properties of the language. Next, we examine whether translation quality is also affected by a property of the sentences themselves: in particular, whether sentence length affects model performance, based on the hypothesis that longer sentences may be more complex and difficult to translate (sutskever2014sequence). Table 8 shows spBLEU on different subsets of the devtest, grouped by sentence length, measured as the number of tokens in the original English sentence. The short bucket collects all sentences with up to 15 tokens (in English), the medium bucket sentences with 16 to 25 tokens, and the long bucket sentences with more than 25 tokens. The table shows that spBLEU is rather constant with respect to sentence length, and in fact slightly increases with length, contrary to our initial conjecture.
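The bucketing can be sketched as follows; white-space tokenization of the English source and the bucket labels are assumptions.

```python
def length_bucket(english_sentence):
    """Bucket a Flores-101 sentence by the token count of its English
    source, matching the three groups of Table 8. White-space
    tokenization is an assumption."""
    n = len(english_sentence.split())
    if n <= 15:
        return "short (<= 15)"
    if n <= 25:
        return "medium (16-25)"
    return "long (> 25)"
```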
Performance by Domain.
We analyze if model performance is affected by domain, to check whether certain domains are more difficult to translate than others. Flores-101 contains three domains: WikiNews, WikiJunior, and WikiVoyage. We report results of translating in and out of five languages, namely English, Chinese, Spanish, Hindi, and Arabic, as well as the average across all of the possible directions.
The results in Table 9 demonstrate that the factor that most affects translation quality is the language we translate in and out of (with Arabic being the most challenging and English having the highest scores) rather than the domain. WikiNews is the easiest domain, with slightly higher spBLEU, and WikiVoyage the hardest, with an average spBLEU about one point lower than WikiNews. We hypothesize that news-related content is often written in a fairly consistent journalistic style, which could ease the challenge of translation, while WikiVoyage may be slightly harder because it contains more named entities from local regions of the world, which are harder to translate correctly. Overall, however, there are no large differences in performance between domains.
Performance by Language Family.
We also group languages into eleven families based on general language families131313Note: We define language subgroups for analysis purposes only. These are based on general language families, but are not completely aligned with an agreed-upon linguistic taxonomy. and report in Table 4 the average spBLEU for translating from and into each family. Our results indicate that Bantu, Dravidian, Indo-Aryan, and Nilotic are the language families where M2M-124 struggles the most, attaining an average spBLEU below 5 points. In fact, even translation within these families (see the diagonal values) works very poorly; for these languages, translating to and from Germanic and Romance languages works better. In general, Germanic, Romance, and Balto-Slavic are the families that yield the highest spBLEU scores (above 10 points on average), and for them translation within the family works best, with M2M-124 obtaining spBLEU above 20. Overall, translation between all languages in a many-to-many fashion requires improvement, as evidenced by the overall quite low average scores.
6.4.2 Comparison of Various Systems.
We end by comparing various baseline systems, to understand the performance of some existing models on Flores-101.
Comparison to OPUS-100.
We evaluate OPUS-100 zhang-etal-2020-improving with 254M parameters and the two versions of M2M-124 fan2020beyond with 175 and 615M parameters. We calculate spBLEU in and out of five languages: English, Chinese, Spanish, Hindi, and Arabic.
Results are shown in Figure 6. Note that OPUS-100 only covers 80 languages in Flores-101, so this figure is on the subset of 80 languages covered by all models, for comparability. Overall, we see a consistent trend across models and directions: the larger M2M-124 has the best performance, followed by the smaller M2M-124 and OPUS-100. For all systems we evaluated, translation to and from English works the best, while translation to and from Chinese and Arabic struggles the most. In general, spBLEU scores are relatively low, suggesting ample room for improvement and need for further research in this area.
We next display results of M2M-124 175M parameters and 615M parameters on the full set of Flores-101 languages. This is shown in Figure 7. Comparing results with Figure 6, it is evident that the average performance in these language groupings has decreased, indicating that the additional languages in Flores-101 are likely very difficult. We see the same consistent trend that the larger M2M-124 model has stronger performance.
Comparison with Selected Masakhane Models.
The comparison with OPUS-100 compares M2M-124 with another multilingual model. However, various researchers in the low-resource translation community have developed models for specific languages. Many of these models are created by people who speak these languages. Further, focusing on specific directions of interest rather than multilingual models could produce specialized models with potentially higher quality.
Masakhane is a participatory research effort that focuses on African NLP. We end by comparing our M2M-124 model with several publicly available models from the Masakhane-MT repository. We evaluate models from English to the following languages: Yoruba141414https://github.com/masakhane-io/masakhane-mt/tree/master/benchmarks/en-yo/jw300-baseline-improve, Zulu151515https://github.com/masakhane-io/masakhane-mt/tree/master/benchmarks/en-zu/jw300-baseline, Swahili161616https://github.com/masakhane-io/masakhane-mt/tree/master/benchmarks/en-sw/fine-tuned-jw300-baseline, Shona171717https://github.com/masakhane-io/masakhane-mt/tree/master/benchmarks/en-sn/jw300-shona-baseline, Nyanja181818https://github.com/masakhane-io/masakhane-mt/tree/master/benchmarks/en-nya/jw-300-baseline and Luo191919https://github.com/masakhane-io/masakhane-mt/tree/master/benchmarks/en-luo/fine-tuned-jw300-baseline. The Masakhane models are trained on the JW300 dataset.
Results are shown in Table 11. We observe that for two languages, Zulu and Luo, Masakhane’s open-sourced models outperform M2M-124 on Flores-101. The remaining languages we assess perform similarly to or worse than M2M-124. Overall, all languages besides Swahili require significant improvement. Note that in many African countries, a large number of local and regional languages are spoken. We hope that Flores-101 can be used to develop non-English-centric models that directly translate between African languages.
The potential to develop translation systems for languages all over the world is hindered by lack of reliable, high quality evaluation. Without the fundamental ability to measure translation quality, it is impossible to develop, iterate, and test various models and techniques. Particularly for low-resource languages where there is little available training data, new methods and algorithms must be developed to improve translation of these languages. In this work, we create and open-source Flores-101, an evaluation benchmark covering 101 languages.
Flores-101 supports many-to-many evaluation, meaning any of 10,100 language directions can be evaluated. With rich metadata, it also supports multimodal translation via images, and document-level translation. Unlike many other multilingual datasets, Flores-101 is fully translated by humans using a detailed process with numerous quality control checks, including human evaluation during dataset creation.
Beyond translation, Flores-101 can be used to evaluate tasks such as sentence and document classification, language identification, and multilingual domain adaptation. We hope that the release of this dataset and our baseline M2M models will be useful for the community.
We hope to continue to expand the number of languages covered in Flores-101 and make the test set available to various community efforts to improve translation systems in shared tasks such as those from the Workshop on Machine Translation.
We’d like to thank Michael Auli and Sergey Edunov for enlightening discussions and advice. We’d like to thank Mona Diab and Denise Diaz for consulting on specific languages and providing invaluable guidance on translation quality. We’d like to thank Xian Li and Yuqing Tang for helping select the original sentences for translation as part of FLORES. Finally, we’d like to thank Brian Bui for helping with the organization of the data collection effort. We thank all of the translators and human evaluators, as well as the translation and quality assurance agencies we worked with, for helping create Flores-101.
Appendix A Translation Quality Guidelines
Critical Errors are issues that render the content unfit for use. An error is only critical if it seriously distorts the meaning of the source, in such a way that it becomes completely incomprehensible or that the essence of the message is lost.
Major Errors may confuse or mislead the user or hinder proper use of the product/service due to significant change in meaning or appear in a visible or important part of the content.
Minor Errors don’t lead to loss of meaning and wouldn’t confuse or mislead the user but would be noticed, would decrease stylistic quality, fluency or clarity, or would make the content less appealing.
Grammar — Noncompliance with target language’s grammar rules. Grammar errors may be at the word or sentence level. Types of grammar errors may include:
Incorrect person, number or case: the person, number or case in the translation does not match the person, number or case of the source text.
Incorrect tense: the tense used in the translation does not correspond to the tense used in the source.
Incorrect subject/verb agreement: The subject and verb of a sentence must agree with one another in number whether they are singular or plural. If the subject of the sentence is singular, its verb must also be singular; and if the subject is plural, the verb must also be plural.
Incorrect use of singular or plural: if a noun in the source text is plural, the corresponding noun and its qualifiers must be plural in the translation.
Incorrect word order: word order of the translation is non-standard in the target language, or the translator has made a preferential change to the word order of the source.
Punctuation — Punctuation is missing, non-standard in the target language or inconsistent with the source punctuation.
Spelling — Incorrect spelling in the target language. Types of spelling errors may include:
Use of the wrong homophone for the context e.g. ‘bare with me’
Incorrect use of accents
Capitalization — Noncompliance with target language rules e.g. not capitalising the start of a sentence or a proper noun.
Addition/Omission — An essential element from the source text is missing in the translation or unnecessary/superfluous elements are present in the translation but were not originally present in the source text.
Mistranslation — Source text has not been translated faithfully. Types of mistranslation errors may include:
Incorrect interpretation of the source text
Literal translations and mistranslations of phrasal verbs, rendering incorrect meaning in the target
Lack of disambiguation of ambiguous terms
Using a subpar word
Unnatural Translation — Translation does not sound natural to a native speaker of the target language. Source text is translated word for word, rendering the translation unnatural, or is grammatically correct but unnatural to a native speaker.
Untranslated Text — Words are left untranslated from the source text. This is when there are words in the source language present in the translation which should have been translated into the target language.
Register — The style or register of the translation is inconsistent with the source and context.
Appendix B Additional Results
Details of Spectral Clusters
The list of clusters formed by spectral clustering on spBLEU scores is shown in Table 12.
Cluster 1: Assamese, Bengali, Hindi, Japanese, Korean, Malayalam, Marathi, Telugu, Urdu, Chinese
Cluster 2: Asturian, Catalan, Spanish, Fula, French, Irish, Galician, Italian, Kabuverdianu, Lingala, Norwegian, Occitan, Portuguese, Turkish, Wolof
Cluster 3: Cebuano, Indonesian, Icelandic, Javanese, Khmer, Lao, Malay, Nyanja, Pashto, Shona, Swahili, Thai, Tagalog, Vietnamese
Cluster 4: Bulgarian, Bosnian, Czech, German, Luxembourgish, Macedonian, Romanian, Russian, Slovak, Slovenian, Swedish, Ukrainian
Cluster 5: Hausa, Igbo, Kamba, Luo, Maori, Northern Sotho, Oromo, Somali, Umbundu, Xhosa, Yoruba, Zulu
Cluster 6: Azerbaijani, Belarusian, Greek, Estonian, Finnish, Hungarian, Kazakh, Kyrgyz, Lithuanian, Latvian, Mongolian, Tajik, Uzbek
Cluster 7: Gujarati, Kannada, Nepali, Oriya, Punjabi, Sindhi, Tamil
Cluster 8: Afrikaans, Amharic, Arabic, Welsh, Danish, English, Farsi, Armenian, Hebrew, Georgian, Kurdish, Maltese
Comparison of Many-to-Many with English-Centric Pivoting.
We assess the need for evaluation in a truly many-to-many sense. Instead of creating multilingual models that translate directly between any pair of languages, pivoting through English is also possible: pivoting first translates from language X into English, then from English into language Y, instead of translating from X to Y directly. Flores-101 supports the evaluation and comparison of these strategies. Unlike previous work such as fan2020beyond, which was unable to evaluate all directions of their many-to-many model, Flores-101 enables evaluation of all 10,100 translation directions.
In Figure 10, we compare direct translation with English-centric pivoting for 10 Indic languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Nepali, Oriya, Punjabi, Sinhala, and Urdu. The heatmap displays the spBLEU difference between direct translation and English pivoting. Overall, translating directly in a many-to-many fashion yields gains in 80% of the directions. Some directions gain more than 3 spBLEU, while in most directions where pivoting wins, the quality difference is less than 1 spBLEU.
tSNE of Model Embeddings.
We examine the similarity of various languages by visualizing a tSNE projection of the language embeddings of the trained M2M-124 615M model. Unlike spectral clustering, this examination reflects the model embeddings rather than spBLEU scores. Figure 11 shows that languages belonging to the same language family are often grouped together, clustered next to each other.
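A minimal sketch of this visualization step, assuming scikit-learn and treating each row as one language embedding extracted from the model (the perplexity, seed, and initialization settings are assumptions):

```python
import numpy as np
from sklearn.manifold import TSNE

def embed_languages_2d(language_embeddings, perplexity=5, seed=0):
    """Project the model's learned language embeddings to 2-D for
    plotting; nearby points suggest languages the model treats
    similarly. Perplexity/seed values are assumptions."""
    X = np.asarray(language_embeddings, dtype=float)
    return TSNE(n_components=2, perplexity=perplexity,
                random_state=seed, init="random").fit_transform(X)
```

The resulting two-column array can be scattered directly, with points colored by language family to reproduce a figure in the style of Figure 11.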