CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB

11/10/2019 ∙ by Holger Schwenk, et al. ∙ Facebook

We show that margin-based bitext mining in a multilingual sentence space can be applied to monolingual corpora of billions of sentences. We use ten snapshots of a curated common crawl corpus (Wenzek et al., 2019) totaling 32.7 billion unique sentences. Using one unified approach for 38 languages, we were able to mine 3.5 billion parallel sentences, out of which 661 million are aligned with English. 17 language pairs have more than 30 million parallel sentences, 82 have more than 10 million, and most have more than one million, including direct alignments between many European or Asian languages. To evaluate the quality of the mined bitexts, we train NMT systems for most of the language pairs and evaluate them on TED, WMT and WAT test sets. Using our mined bitexts only and no human-translated parallel data, we achieve a new state-of-the-art for a single system on the WMT'19 test set for translation between English and German, Russian and Chinese, as well as German/French. In particular, our English/German system outperforms the best single one by close to 4 BLEU points and is almost on par with the best WMT'19 evaluation system, which uses system combination and back-translation. We also achieve excellent results for distant language pairs like Russian/Japanese, outperforming the best submission at the 2019 workshop on Asian Translation (WAT).


1 Introduction

Most current approaches in Natural Language Processing (NLP) are data-driven. The size of the resources used for training is often the primary concern, but their quality and a large variety of topics may be equally important. Monolingual texts are usually available in huge amounts for many topics and languages. However, multilingual resources, typically sentences in two languages which are mutual translations, are more limited, in particular when the two languages do not involve English. An important source of parallel texts are international organizations like the European Parliament Koehn (2005) or the United Nations Ziemski et al. (2016). These are professional human translations, but they use a more formal language and tend to be limited to political topics. There are several projects relying on volunteers to provide translations for public texts, e.g. news commentary Tiedemann (2012), OpenSubtitles Lison and Tiedemann (2016) or the TED corpus Qi et al. (2018).

A first system to systematically mine parallel sentences for many language pairs in Wikipedia, including bitexts without English as one of the languages, was presented in Schwenk et al. (2019). In that work, parallel sentence mining was based on a distance measure in a joint multilingual sentence embedding space Schwenk (2018); Artetxe and Schwenk (2018a), using the freely available LASER toolkit (https://github.com/facebookresearch/LASER), which provides a language-agnostic sentence encoder trained on 93 languages Artetxe and Schwenk (2018b).

In this paper, we use the same underlying mining approach based on LASER and scale it to a much larger corpus: ten crawls of a curated common crawl data set Wenzek et al. (2019) instead of Wikipedia (32.7 billion against 550 million unique sentences). On the one hand, we had to redesign the processing pipeline in order to tackle the substantial computational challenge: billions of sentence embeddings have to be compared. On the other hand, it is an interesting research question whether global mining scales to billions of sentences, i.e. systematically comparing each sentence in a source language with all sentences in the target language. To the best of our knowledge, all existing large-scale bitext mining techniques apply a hierarchical approach. First, a subset of all the texts is selected, e.g. documents which are expected to contain parallel sentences. Then, sentences limited to previously aligned documents are compared and the parallel ones are identified. This type of local mining has the advantage of being very fast since only a few thousand sentences need to be compared for each document. However, sentences which appear in documents which were not preselected cannot be aligned.

In this work, we make no assumption about the structure of the monolingual text corpora: we simply compare all sentences against each other. Our experimental results indicate that such an approach works surprisingly well: we are able to mine billions of parallel sentences which appear to be of high quality, as NMT systems trained only on our mined data outperform the currently best single NMT systems in WMT’19 and WAT’19.

The paper is organized as follows. In the next section, we first discuss related work. We then present the corpus used in this work and summarize the underlying mining approach. Section 4.3 describes in detail how we applied this approach to extract parallel sentences. To assess the quality of the extracted bitexts, we train NMT systems for a subset of language pairs and evaluate them on the TED corpus Qi et al. (2018), and on test sets of WMT Barrault et al. (2019) and of the workshop on Asian translation (WAT) Nakazawa et al. (2019). These results are presented in Section 6. The paper concludes with a discussion of future research directions.

2 Related work

There is a large body of research on mining parallel sentences in collections of monolingual texts, usually named “comparable corpora”. Initial approaches to bitext mining relied on heavily engineered systems, often based on metadata information, e.g. (Resnik, 1999; Resnik and Smith, 2003). More recent methods explore the textual content of the comparable documents. For instance, it was proposed to rely on cross-lingual document retrieval, e.g. (Utiyama and Isahara, 2003; Munteanu and Marcu, 2005), or machine translation, e.g. (Abdul-Rauf and Schwenk, 2009; Bouamor and Sajjad, 2018), typically to obtain an initial alignment that is then further filtered. In the shared task for bilingual document alignment Buck and Koehn (2016), many participants used techniques based on n-gram or neural language models, neural translation models and bag-of-words lexical translation probabilities for scoring candidate document pairs. The STACC method uses seed lexical translations induced from IBM alignments, which are combined with set expansion operations to score translation candidates through the Jaccard similarity coefficient (Etchegoyhen and Azpeitia, 2016; Azpeitia et al., 2017, 2018). Using multilingual noisy web-crawls such as ParaCrawl (http://www.paracrawl.eu/) for filtering good quality sentence pairs has been explored in the shared tasks for high resource Koehn et al. (2018) and low resource Koehn et al. (2019) languages.

In this work, we rely on massively multilingual sentence embeddings and margin-based mining in the joint embedding space, as described in Schwenk (2018); Artetxe and Schwenk (2018a, b). This approach has also proven to perform best in a low-resource scenario Chaudhary et al. (2019); Koehn et al. (2019). Closest to this approach is the research described in España-Bonet et al. (2017); Hassan et al. (2018); Guo et al. (2018); Yang et al. (2019). However, in all these works, only bilingual sentence representations have been trained, an approach which does not scale to many languages. Finally, related ideas have also been proposed in Bouamor and Sajjad (2018) or Grégoire and Langlais (2017); however, in those works, mining is not solely based on multilingual sentence embeddings, which are only one component of a larger system.

Wikipedia is arguably the largest comparable corpus with high-quality, human-verified texts. One of the first attempts to exploit this resource was performed by Adafre and de Rijke (2006): an MT system was used to translate Dutch sentences into English and to compare them with the English texts. This method yielded several hundred Dutch/English parallel sentences. Later, a similar technique was applied to the Persian/English pair Mohammadi and GhasemAghaee (2010). Structural information in Wikipedia, such as the topic categories of documents, was used in the alignment of multilingual corpora Otero and López (2010). In another work, the mining approach of Munteanu and Marcu (2005) was applied to extract large corpora from Wikipedia in sixteen languages Smith et al. (2010). Otero et al. (2011) measured the comparability of Wikipedia corpora by the translation equivalents in three languages: Portuguese, Spanish, and English. Patry and Langlais (2011) came up with a set of features, such as Wikipedia entities, to recognize parallel documents, but their approach was limited to a bilingual setting. Tufis et al. (2013) proposed an approach to mine parallel sentences from Wikipedia textual content, but they only considered high-resource languages, namely German, Spanish and Romanian paired with English. Tsai and Roth (2016) grounded multilingual mentions to English Wikipedia by training cross-lingual embeddings on twelve languages. Gottschalk and Demidova (2017) searched for parallel text passages in Wikipedia by comparing their named entities and time expressions. Finally, Aghaebrahimian (2018) proposes an approach based on bilingual BiLSTM sentence encoders to mine German, French and Persian parallel texts with English. Parallel data consisting of aligned Wikipedia titles have been extracted for twenty-three languages (https://linguatools.org/tools/corpora/wikipedia-parallel-titles-corpora/). Since Wikipedia titles are rarely entire sentences with a subject, verb and object, it seems that only modest improvements were observed when adding this resource to the training material of NMT systems.

We are aware of two large-scale mining approaches applied to several language pairs and large collections of texts. The European project ParaCrawl focuses on mining parallel data for all European languages, mainly aligned with English. The underlying alignment engine, called Bitextor (https://github.com/bitextor/bitextor), uses a two-stage approach: first, parallel documents are identified, and then pairs of documents are processed to identify parallel segments. Sentence alignment either uses a seed MT system or bilingual lexicons Esplà-Gomis and Forcada (2010). In another work, parallel sentences are mined in Wikipedia for many language pairs using a margin criterion in a multilingual sentence embedding space Schwenk et al. (2019).

3 The curated Common Crawl corpus

Figure 1: Number of unique sentences in ten crawls of the CCNet corpus (one crawl only for English).

In this work, we propose to mine parallel sentences from the Web, using the data released by the Common Crawl project. Each month, a snapshot of the Web containing terabytes of web pages in various languages is obtained by randomly exploring URLs. We start by applying several preprocessing steps to the raw text data, following the pipeline introduced by Wenzek et al. (2019) which leads to the CCNet dataset. The first step is to deduplicate the data at the paragraph level, as the original crawls contain up to 70% of duplicated data. This preprocessing removes low-quality content such as boilerplate, navigation menus or cookie warnings. The second step of the pipeline is to identify the language of each document, using fastText (https://fasttext.cc/docs/en/language-identification.html) Grave et al. (2018). This language identifier uses a linear classifier with character n-gram features and can recognize up to 176 languages. Finally, the last step of the preprocessing is to filter low-quality content by training a language model on Wikipedia and keeping only documents with a low perplexity score. We refer the reader to Wenzek et al. (2019) for more details about this preprocessing pipeline. In Figure 1, we report the number of unique sentences obtained after preprocessing ten snapshots from Common Crawl. We currently process 38 languages. The English Web content is abundant and we used only one snapshot.
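To make the document-level filtering concrete, the sketch below shows how such a pipeline could be assembled from off-the-shelf tools (fastText for language identification, KenLM for the Wikipedia language model). The model file names and the perplexity cutoff are illustrative assumptions; CCNet ships its own per-language models and thresholds.

```python
import fasttext   # pip install fasttext
import kenlm      # KenLM Python bindings

# Illustrative paths and cutoff; CCNet uses its own trained models and per-language thresholds.
LID_MODEL = "lid.176.bin"          # fastText language identification model
LM_MODEL = "wikipedia.de.arpa"     # KenLM model trained on German Wikipedia (assumption)
MAX_PERPLEXITY = 1000.0            # assumed cutoff, tuned per language in practice

lid = fasttext.load_model(LID_MODEL)
lm = kenlm.Model(LM_MODEL)

def keep_document(text: str, expected_lang: str = "de") -> bool:
    """Return True if the document passes LID and LM-perplexity filtering."""
    labels, probs = lid.predict(text.replace("\n", " "), k=1)
    lang = labels[0].replace("__label__", "")
    if lang != expected_lang or probs[0] < 0.5:
        return False
    # Keep only documents that look like Wikipedia-style text (low perplexity).
    return lm.perplexity(text) < MAX_PERPLEXITY
```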

4 Distance-based mining approach

Table 1: Architecture of the system used to train massively multilingual sentence embeddings. See Artetxe and Schwenk (2018b) for details.

The underlying idea of the mining approach used in this work is to first learn a multilingual sentence embedding, i.e. an embedding space in which semantically similar sentences are close, independently of the language they are written in. This means that the distance in that space can be used as an indicator of whether two sentences are mutual translations or not. Using a simple absolute threshold on the cosine distance was shown to achieve competitive results Schwenk (2018). However, it has been observed that an absolute threshold on the cosine distance is not globally consistent, e.g. Guo et al. (2018).

4.1 Margin criterion

Artetxe and Schwenk (2018a) showed that the alignment quality can be substantially improved by using a margin criterion instead of an absolute threshold. The margin between two candidate sentences x and y is defined as the ratio between the cosine similarity of the two sentence embeddings and the average cosine similarity of their nearest neighbors in both directions:

margin(x, y) = cos(x, y) / ( Σ_{z ∈ NN_k(x)} cos(x, z)/(2k) + Σ_{z ∈ NN_k(y)} cos(y, z)/(2k) )    (1)

where NN_k(x) denotes the k unique nearest neighbors of x in the other language, and analogously for NN_k(y).

Artetxe and Schwenk (2018a) describe the “max-strategy” as one of the best performing ones: the margin is first calculated in both directions for all sentences in the source and target language. Then, the union of these forward and backward candidates is built, candidates are sorted, and pairs with source or target sentences which were already used are omitted. Finally, a threshold is applied on the margin score to decide whether two sentences are mutual translations or not. The reader is referred to Artetxe and Schwenk (2018a) for a detailed discussion of related work. The “max-strategy” was used in Schwenk et al. (2019) to mine parallel sentences in Wikipedia.

This strategy was initially motivated by an evaluation on the BUCC corpus Zweigenbaum et al. (2018), for which the reference alignments were known to be strictly 1:1. With increasing corpus size, namely billions of sentences in CCNet, the probability of finding several perfect translations increases. This questions the restriction that each source sentence is aligned to exactly one target sentence, and vice-versa. The value of k in Equation 1 should also be carefully selected: if all k nearest sentences of a candidate are valid translations, they have similar distances and therefore a small margin, and many valid translations would be excluded. Therefore, we increased the neighborhood size k in Equation 1 from 4, which was used in Schwenk et al. (2019), to 16.
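To make Equation 1 concrete, the following sketch computes margin scores from pre-computed nearest-neighbor cosine similarities; the variable names are ours, and the neighborhood size defaults to the k=16 used in this work.

```python
import numpy as np

def margin_scores(sim_xy, knn_sim_x, knn_sim_y, k=16):
    """
    sim_xy:    cosine similarity of each candidate pair (x, y)         -- shape (n,)
    knn_sim_x: cosine similarities of x to its k nearest neighbors     -- shape (n, k)
    knn_sim_y: cosine similarities of y to its k nearest neighbors     -- shape (n, k)
    Returns the ratio margin of Equation 1.
    """
    denom = knn_sim_x.sum(axis=1) / (2 * k) + knn_sim_y.sum(axis=1) / (2 * k)
    return sim_xy / denom
```

Candidate pairs are then kept if the returned score exceeds the chosen threshold (1.06 in most of our experiments).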

4.2 Multilingual sentence embeddings

Distance-based bitext mining requires a joint sentence embedding for all the considered languages. One may be tempted to train a bilingual embedding for each language pair, e.g. España-Bonet et al. (2017); Hassan et al. (2018); Guo et al. (2018); Yang et al. (2019), but this is difficult to scale to the many language pairs present in CCNet. We follow Schwenk et al. (2019) and use one single massively multilingual sentence embedding for all languages, namely the one provided by the open-source LASER toolkit Artetxe and Schwenk (2018b).

The underlying idea of LASER is to train a sequence-to-sequence system on many language pairs at once, using a shared BPE vocabulary and a shared encoder for all languages. The sentence representation is obtained by max-pooling over all encoder output states. Table 1 summarizes this architecture. The reader is referred to Artetxe and Schwenk (2018b) for a detailed description.
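For illustration, multilingual sentence embeddings of this kind can be obtained from the LASER toolkit; the snippet below uses the community `laserembeddings` wrapper for brevity and is only a sketch of the interface, not the exact pipeline used in this work.

```python
# pip install laserembeddings && python -m laserembeddings download-models
from laserembeddings import Laser   # unofficial wrapper around the LASER encoder

laser = Laser()
sentences_de = ["Das ist ein Test.", "Wie spät ist es?"]
sentences_en = ["This is a test.", "What time is it?"]

# 1024-dimensional embeddings, shared across all languages.
emb_de = laser.embed_sentences(sentences_de, lang="de")
emb_en = laser.embed_sentences(sentences_en, lang="en")
print(emb_de.shape)  # (2, 1024)
```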

Figure 2: Parallelized processing flow to create an FAISS index for each language.

4.3 Scaling to billions of sentences

We use the same underlying mining procedure as Schwenk et al. (2019), who extracted 135 million parallel sentences from Wikipedia in 1620 different language pairs. However, our CCNet corpus is more than fifty times larger than Wikipedia: 32.7 billion against 595 million unique sentences. Our largest corpora are English and Russian, with 8.7 and 3 billion unique sentences, respectively. For ten languages, CCNet has more than one billion unique sentences (see Figure 1). This required significant modifications of the mining pipeline in order to tackle the substantially increased computational complexity. The overall processing pipeline can be structured into three tasks:

  1. text extraction and processing including sentence splitting, language identification (LID) and deduplication;

  2. creation of a compressed index for each language;

  3. mining parallel data for each language pair using the sentence embeddings and indexes.

For each step, we aimed to parallelize the processing as much as possible by splitting the data into several blocks. We used blocks of about fifty million sentences; this size was chosen so that the different operations can be performed in a couple of hours. As an example, all the English texts are split into 160 blocks.

Text extraction

The first task, text extraction and processing, consists of the following steps:

  • Extract the texts from the JSON data of CCNet (see Wenzek et al. (2019) for details).

  • Split the “paragraphs” into sentences.

  • Perform LID and exclude sentences which are not in the expected language.

  • Mark all sentences which are duplicates within each block.

Each of these four steps is performed in parallel for all blocks and languages. As a final step, we merge all the block-wise deduplicated sentences and create one set of globally unique sentences for each language. We used a freely available Python tool (https://pypi.org/project/sentence-splitter/) to detect sentence boundaries. If specific rules for a language are not available, we fall back to a linguistically similar language, e.g. we use Spanish rules for Galician, and default to English otherwise. Most of the Asian languages are handled by regular expressions. We exclude sentences with more than 500 characters. LID is performed at the sentence level with fastText Joulin et al. (2016); Grave et al. (2018). Once the text preparation task is finished, we have a corpus of unique sentences for each language. These texts are the basis for the index creation and mining tasks. The amount of data for each language is given in the third column of Table 3.
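A minimal sketch of this block-wise text preparation, assuming the sentence-splitter and fastText packages mentioned above, could look as follows (model paths and the example language are illustrative):

```python
import hashlib
import fasttext                                     # pip install fasttext
from sentence_splitter import SentenceSplitter     # pip install sentence-splitter

lid = fasttext.load_model("lid.176.bin")            # path is an assumption
splitter = SentenceSplitter(language="es")          # e.g. Spanish rules, also used for Galician

def process_block(paragraphs, expected_lang="es", max_chars=500):
    """Split paragraphs into sentences, keep only in-language ones, deduplicate within the block."""
    seen, sentences = set(), []
    for paragraph in paragraphs:
        for sent in splitter.split(paragraph):
            if len(sent) > max_chars:
                continue
            labels, _ = lid.predict(sent, k=1)
            if labels[0] != f"__label__{expected_lang}":
                continue
            h = hashlib.sha1(sent.encode("utf-8")).hexdigest()
            if h not in seen:                        # mark duplicates within the block
                seen.add(h)
                sentences.append(sent)
    return sentences
```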

Index creation

Figure 3: Parallelized processing flow to mine parallel sentences. Left: forward distances; Right: backward distances. Middle: both distances are combined according to Equation 1 and the bitexts extracted.

We follow Schwenk et al. (2019) and use the highly optimized FAISS toolkit Johnson et al. (2017) (https://github.com/facebookresearch/faiss/wiki/Faiss-indexes) to create compact indexes of the sentence embeddings. LASER’s sentence representations are 1024-dimensional, so storing the embeddings of all sentences uncompressed would require on the order of a hundred terabytes (32.7 billion sentences × 1024 dimensions × 4 bytes ≈ 134 TB). We therefore use an aggressive vector compression based on a 64-bit product quantizer Jégou et al. (2011). In order to account for the huge number of sentences, we increase the number of cells used to partition the search space from 32k to 64k. This corresponds to the index type OPQ64,IVF65536,PQ64 in FAISS terms.

Exhaustive search in huge indexes is only tractable when performed on GPU. FAISS supports sharding a single index over multiple GPUs; this is most efficient if the GPUs are in the same machine and communicate very quickly. For our index type, and eight GPUs with 32GB of memory each, this allows us to create an index of about three billion sentences. This covers all languages with the exception of English, which has 8.7 billion sentences. Therefore, we created three English indexes of about 2.7 billion sentences each.

The processing pipeline to train and create the indexes is summarized in Figure 2. First, we train an index on 40 million sentences sampled from the whole corpus, when that many are available. Once the index is trained, the data of each block is added independently to the trained index; this can also be processed in parallel. These individual indexes are then merged into one index for each language. The Russian and Japanese indexes with three billion sentences each have a file size of about 200GB; all 28 indexes together total about 2TB.
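The train/add/merge logic maps directly onto the FAISS Python API; the sketch below builds a single-language index of the type named above on CPU, whereas in practice training and adding are done block-wise and on sharded GPUs. File names are illustrative.

```python
import faiss
import numpy as np

d = 1024                                     # LASER embedding dimension
index = faiss.index_factory(d, "OPQ64,IVF65536,PQ64")

# Train the quantizers on a sample of the corpus (up to 40M sentences in our setup).
train_sample = np.load("sample.de.npy").astype("float32")
index.train(train_sample)

# Add each ~50M-sentence block; in practice each block is added in a separate job
# and the resulting indexes are merged into one index per language.
for block_file in ["embeddings.de.0.npy", "embeddings.de.1.npy"]:   # illustrative block files
    index.add(np.load(block_file).astype("float32"))

faiss.write_index(index, "ccnet.de.index")
```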

Mining

Once the indexes for all languages are created, we can start the mining process for each language pair. Schwenk et al. (2019) pre-calculated the sentence embeddings for all languages and then started the pairwise mining process. The authors report that less than 3.5h on 8 GPUs are needed for the whole “max-mining” process between English and German, i.e. 134M and 51M sentences respectively. This corresponds to about 7×10^15 distance calculations.

Let us consider mining Japanese/Russian bitexts in CCNet, with 3.0 and 2.9 billion sentences respectively, i.e. about 8.7×10^18 distance calculations. This means that we have to perform about 1300 times more distance calculations, which would translate to more than 6 months of processing on a single machine with 8 GPUs. We tackle this computational challenge by decoupling the forward and backward distance calculations from the margin computation (see Equation 1), and by processing all these steps in parallel. This processing pipeline is illustrated in Figure 3.
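The decoupled processing can be sketched as follows: each direction is a k-nearest-neighbor search in the other language's index, and the margin of Equation 1 is computed once both result sets are available. File names, the nprobe setting and the assumption of unit-norm embeddings are ours.

```python
import faiss
import numpy as np

K = 16  # neighborhood size k of Equation 1

def knn(index_path, queries, k=K):
    """k-nearest-neighbor search of query embeddings in a FAISS index."""
    index = faiss.read_index(index_path)
    faiss.extract_index_ivf(index).nprobe = 128   # number of IVF cells to visit (assumption)
    return index.search(queries, k)               # squared L2 distances and neighbor ids

def to_cosine(squared_l2):
    # For unit-norm embeddings: ||x - y||^2 = 2 - 2 cos(x, y)
    return 1.0 - squared_l2 / 2.0

# In practice the queries are processed block by block and in parallel (Figure 3).
emb_ja = np.load("embeddings.ja.npy").astype("float32")   # illustrative file names
emb_ru = np.load("embeddings.ru.npy").astype("float32")
d_fwd, i_fwd = knn("ccnet.ru.index", emb_ja)               # forward: ja -> ru
d_bwd, i_bwd = knn("ccnet.ja.index", emb_ru)               # backward: ru -> ja
sim_fwd, sim_bwd = to_cosine(d_fwd), to_cosine(d_bwd)

# Margin (Equation 1) of the best forward candidate of each Japanese sentence.
best_ru = i_fwd[:, 0]
denom = sim_fwd.mean(axis=1) / 2.0 + sim_bwd.mean(axis=1)[best_ru] / 2.0
margin = sim_fwd[:, 0] / denom
keep = margin >= 1.06                                      # threshold used in this work
```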

In addition, we had to use a special procedure to mine parallel sentences with English, due to the large amount of English data. For the sake of explanation, let us assume that we want to extract German/English bitexts. It is computationally too expensive to perform a k-nn search in the German FAISS index for all 8.7 billion English sentences (backward distances). Therefore, we are constrained to use only the forward distances. Remember that we had to partition the English sentences into three indexes of about 2.7 billion sentences each. Consequently, for each German sentence, we search in the three English indexes and calculate the margin with respect to the k nearest neighbors. We then combine the alignments and keep those with a margin above a threshold of 1.06. It can happen that the algorithm finds a valid translation in each of the three indexes. We decided to keep these alternative translations.

For all other language pairs, we used the max-margin strategy as described in Section 4.1 and Equation 1, i.e. calculating both the forward and backward distances.

5 Quantitative result analysis

Mining for parallel sentences in more than 32 billion sentences is computationally very expensive. In the current version of the CCMatrix corpus, we have limited the alignment process to 38 languages, chosen to cover several language families and scripts. In the following, we first discuss the amount of extracted sentences. We then turn to a qualitative assessment by training NMT systems for many language pairs (Section 6).

5.1 Choosing the margin threshold

Figure 4: BLEU scores on the Hungarian-Danish TED test set for various margin threshold values.

The margin threshold used to mine parallel sentences impacts the quality of the produced bitexts. A higher threshold leads to better aligned sentences, and thus higher quality bitexts, but also to smaller datasets. There is thus a trade-off between the size of the extracted bitexts and their quality. Exploratory experiments showed that a threshold around 1.06 gives good results. To confirm this, we trained and evaluated machine translation systems on the Hungarian-Danish pair for different values of the threshold. We report results in Figure 4, showing that a threshold of 1.06 leads to the best performance. Note that this value is different from the margin threshold of 1.04 reported in Schwenk et al. (2019) since we use a neighborhood of k=16 instead of 4.
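Since every mined pair carries its margin score, exploring this trade-off only requires re-filtering the mining output at different thresholds before training; the sketch below assumes a simple tab-separated output format, which is an assumption on our side.

```python
def filter_bitext(mined_tsv, threshold=1.06):
    """Yield (source, target) pairs whose margin score is at least `threshold`.
    Assumes one pair per line: margin_score \t source_sentence \t target_sentence."""
    with open(mined_tsv, encoding="utf-8") as f:
        for line in f:
            score, src, tgt = line.rstrip("\n").split("\t")
            if float(score) >= threshold:
                yield src, tgt

# Higher thresholds keep fewer but better-aligned pairs, e.g. one NMT system
# can be trained per threshold in (1.04, 1.05, 1.06, 1.07, 1.08) and compared by BLEU.
```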

5.2 Analysis

We were able to mine in total 3.5 billion parallel sentences when using a threshold of 1.06 on the margin, out of which 661 million are aligned with English (see Table 2).

Most current MT systems focus on translation from or into English. Other language pairs are usually handled by pivoting through English since direct parallel texts are much smaller. This can be suboptimal when translating between two morphologically rich languages, e.g. French/German, or very different languages, e.g. Russian/Japanese. We also provide parallel data for many language pairs not involving English. Due to the high computational complexity, we only considered 28 languages for these direct alignments (see Table 3). This yielded close to three billion parallel sentence pairs. To the best of our knowledge, this makes CCMatrix the largest collection of high-quality mined parallel texts.

ISO Name Family #Sents Mono [M] #Sents Bitext [M] BLEU xx/en BLEU en/xx
ar Arabic Arabic 196 6.5 27.7 15.7
bg Bulgarian Slavic 68 3.7 32.3 33.9
cs Czech Slavic 303 9.8 25.0 23.1
da Danish Germanic 109 4.5 42.3 41.2
de German Germanic 1728 67.3 31.6 30.5
el Greek Hellenic 144 5.6 31.6 32.8
en English Germanic 8677 - - -
eo Esperanto constructed 10 0.9 24.3 22.4
es Spanish Romance 1534 86.3 38.6 39.7
et Estonian Uralic 21 0.9 18.9 18.7
fa Farsi Iranian 192 2.5 25.1 15.2
fi Finnish Uralic 132 4.1 15.7 16.0
fr French Romance 1869 94.1 39.0 41.2
gl Galician Romance 26 1.1 26.5 25.1
he Hebrew Semitic 70 1.5 32.5 23.1
hi Hindi Indo-Aryan 48 0.7 24.2 24.9
hr Croatian Slavic 21 0.7 25.3 23.0
hu Hungarian Uralic 148 3.6 16.5 17.8
id Indonesian Malayo-Polynesian 366 13.4 32.5 32.5
it Italian Romance 686 31.3 34.0 33.4
ja Japanese Japonic 2944 33.7 11.5 11.3
ko Korean Koreanic 778 7.2 13.7 4.1
lt Lithuanian Baltic 38 1.3 18.4 17.0
no Norwegian Germanic 109 3.8 42.9 41.2
nl Dutch Germanic 510 23.8 33.0 31.9
pl Polish Slavic 505 16.0 17.8 17.0
pt Portuguese Romance 729 33.1 40.6 38.8
ro Romanian Romance 141 6.9 29.8 24.5
ru Russian Slavic 3047 72.4 20.1 20.1
sk Slovak Slavic 275 9.9 26.2 24.5
sl Slovenian Slavic 92 3.4 22.8 22.1
sr Serbian Slavic 83 2.7 27.0 16.1
sv Swedish Germanic 1200 43.8 37.3 35.2
tr Turkish Turkic 1382 26.8 18.9 15.8
uk Ukrainian Slavic 110 1.6 18.6 17.9
ur Urdu Indo-Aryan 19 0.3 9.3 9.8
vi Vietnamese Vietic 1172 18.5 27.5 28.9
zh Chinese Chinese 2512 17.6 19.6 13.9
Table 2: CCMatrix: alignments with English. We give the number of the monolingual texts and the extracted parallel sentences (all numbers in millions) for a margin threshold of 1.06, as well as the BLEU scores on the TED test.
ISO Name Family Size bg cs da de el en es fa fi fr he hi hu id it ja ko ms nl no pl pt ru tr uk vi zh Total
ar Arabic Arabic 196 3.0 3.9 2.7 7.5 3.3 6.5 10.0 3.1 2.7 - 2.2 1.4 2.7 4.1 5.8 5.0 2.5 1.5 5.1 2.5 4.5 6.7 9.2 5.5 1.5 4.2 5.4 112.3
bg Bulgarian Slavic 68 - 6.1 3.7 9.9 4.3 3.7 10.7 2.3 3.6 11.4 2.1 1.5 3.8 3.8 7.4 5.7 2.8 1.3 6.9 3.0 7.2 7.5 17.4 5.8 2.3 4.4 5.0 146.5
cs Czech Slavic 303 - - 5.9 18.3 5.4 9.8 15.5 2.9 6.1 17.3 3.1 2.0 6.1 5.3 11.2 8.0 4.0 2.0 11.6 4.9 13.2 10.7 18.1 8.6 2.6 6.0 7.0 215.8
da Danish Germanic 109 - - - 12.6 3.8 4.5 - 2.0 4.8 12.0 2.3 1.5 3.7 3.9 7.3 5.6 2.9 1.4 9.5 9.6 6.5 7.4 9.2 5.7 1.5 4.2 4.9 139.2
de German Germanic 1728 - - - - 9.8 67.3 - 4.8 11.3 50.0 5.6 3.2 11.0 9.6 29.5 11.6 6.2 3.5 33.2 10.4 20.5 23.4 29.3 - 3.8 9.7 11.8 413.9
el Greek Hellenic 144 - - - - - 5.6 12.2 2.2 3.6 12.9 2.3 1.4 3.7 3.7 8.5 5.2 2.6 1.4 6.9 3.0 6.2 8.4 9.9 5.6 1.7 4.2 4.7 142.7
en English Germanic 8677 - - - - - - 86.3 2.5 4.1 94.1 1.5 0.7 3.6 13.4 31.3 33.7 7.2 0.8 23.8 3.8 16.0 33.1 72.4 26.8 1.6 18.5 17.6 590.4
es Spanish Romance 1534 - - - - - - - 5.5 9.7 - 5.9 3.2 9.5 12.4 44.3 - 6.2 - 23.3 8.8 19.6 59.4 32.4 15.2 4.0 11.9 13.2 419.3
fa Farsi Iranian 192 - - - - - - - - 2.0 5.5 1.7 1.2 1.9 3.1 3.6 3.5 2.0 1.3 3.6 1.9 3.2 4.1 5.6 4.9 1.1 3.3 3.4 82.3
fi Finnish Uralic 132 - - - - - - - - - 11.1 2.2 1.4 4.2 3.8 7.1 6.2 3.0 1.4 8.1 4.1 6.8 7.1 9.9 6.2 1.7 4.4 5.2 142.0
fr French Romance 1869 - - - - - - - - - - 6.8 3.5 10.3 11.9 - 12.6 6.9 4.2 32.1 9.9 21.1 37.9 31.9 17.4 4.2 12.5 14.0 451.2
he Hebrew Semitic 70 - - - - - - - - - - - 1.2 1.9 2.8 4.0 5.3 2.5 1.1 4.2 2.0 3.6 4.3 6.4 4.4 1.2 3.6 3.6 87.8
hi Hindi Indo-Aryan 48 - - - - - - - - - - - - 1.3 1.9 2.3 2.7 1.6 0.9 2.4 1.4 2.1 2.6 3.4 3.2 0.8 1.9 2.4 53.0
hu Hungarian Uralic 148 - - - - - - - - - - - - - 3.2 7.0 5.2 2.6 1.3 7.1 3.0 7.1 6.8 9.6 5.6 1.7 3.7 4.6 132.2
id Indonesian Malayo-Polynesian 366 - - - - - - - - - - - - - - 7.4 5.9 3.5 4.4 7.6 3.7 6.0 9.1 9.9 8.1 1.7 7.9 6.3 164.4
it Italian Romance 686 - - - - - - - - - - - - - - - 8.9 4.7 2.5 16.6 6.1 14.7 25.4 20.5 10.5 2.8 8.0 8.6 306.1
ja Japanese Japonic 2944 - - - - - - - - - - - - - - - - - 3.3 8.9 5.1 - 9.1 11.6 - 2.8 6.5 13.5 186.0
ko Korean Koreanic 778 - - - - - - - - - - - - - - - - - 1.9 4.8 2.6 4.0 4.9 6.0 8.4 1.4 5.2 6.3 106.6
ms Malay Malayo-Polynesian 25 - - - - - - - - - - - - - - - - - - 2.6 1.3 2.3 2.8 3.7 3.4 0.8 3.2 2.8 57.1
nl Dutch Germanic 510 - - - - - - - - - - - - - - - - - - - 7.8 12.9 15.5 17.7 11.0 2.7 7.2 8.4 301.3
no Norwegian Germanic 109 - - - - - - - - - - - - - - - - - - - - 5.5 6.4 8.1 5.2 1.4 3.9 4.3 130.0
pl Polish Slavic 505 - - - - - - - - - - - - - - - - - - - - - 13.5 22.9 - 3.4 6.5 7.1 236.5
pt Portuguese Romance 729 - - - - - - - - - - - - - - - - - - - - - - 20.9 11.0 3.0 8.8 9.5 359.4
ru Russian Slavic 3047 - - - - - - - - - - - - - - - - - - - - - - - - 31.2 10.4 13.0 440.7
tr Turkish Turkic 1382 - - - - - - - - - - - - - - - - - - - - - - - - 2.5 10.4 10.0 195.3
uk Ukrainian Slavic 110 - - - - - - - - - - - - - - - - - - - - - - - - - 0.2 - 83.6
vi Vietnamese Vietic 1172 - - - - - - - - - - - - - - - - - - - - - - - - - - 9.1 179.6
zh Chinese Chinese 2512 - - - - - - - - - - - - - - - - - - - - - - - - - - - 201.7
Table 3: CCMatrix: number of extracted parallel sentences for each language pair (all numbers in millions) for a margin threshold of 1.06, e.g. we have 33.2 million German/Dutch sentences. The column “Size” gives the number of unique sentences in the monolingual texts after deduplication and LID.

The general tendency is of course that mining in larger monolingual corpora leads to larger extracted bitexts. This is however not systematically true. Let us consider for example Polish and Dutch, which both have about 500 million unique sentences. When aligned with Czech, a Slavic language, there are slightly more bitexts with Polish than with Dutch (13.2M in comparison to 11.6M). When aligned with German, a Germanic language like Dutch, there are substantially more bitexts for Dutch than for Polish, 33.2M and 20.5M respectively. Finally, both Polish and Dutch have much smaller bitexts with Indonesian, although there are more than 360M sentences for that language.

On the one hand, a possible explanation could be that LASER alignments are more reliable for languages which are very similar, i.e. in the same language family. On the other hand, it may also be that people who live in nearby countries have similar interests, which increases the chance of finding translations on the Web.

6 Qualitative result evaluation

In order to assess the quality of the extracted parallel sentences, we trained NMT systems on them and evaluated these systems on several public test sets. A test set for many languages, based on TED talks, is provided in Qi et al. (2018). Our results on this test set are given in the next section. The workshop on machine translation (WMT) has a long history of organizing evaluations of machine translation, and many comparative results are published for these tasks Barrault et al. (2019). We provide very competitive BLEU scores for several WMT’19 evaluation tasks in Section 6.2. Finally, we consider the task of translating between Russian and Japanese as proposed by the 2019 edition of the workshop on Asian translation (see Section 6.3).

6.1 TED corpus

ar bg cs da de el en es fa fi fr he hi id it ja ko ms nl no pl pt ru tr uk vi zh
ar - - 10.8 13.8 14.6 15.2 27.7 - 8.0 6.2 - 9.8 10.9 16.0 17.6 7.4 1.9 8.7 14.7 14.4 9.1 19.5 13.5 7.2 6.2 17.7 8.8
bg - - 15.9 21.3 19.1 20.2 32.3 - 8.4 9.5 25.8 11.4 12.6 18.7 19.8 - 2.2 9.7 19.0 19.0 12.4 22.4 16.5 8.7 10.8 19.4 -
cs 5.6 18.1 - 18.7 17.9 16.5 25.0 - 7.1 10.6 22.2 8.9 11.4 15.7 16.9 - 2.5 6.7 18.3 19.8 13.1 18.5 15.3 7.9 9.5 16.8 -
da 5.9 22.4 16.4 - - - 42.3 - 8.0 13.5 28.1 11.7 13.9 20.3 22.7 - 2.7 11.7 25.8 27.5 14.7 25.2 17.5 9.2 8.2 18.8 -
de 7.6 21.3 17.4 - - 18.9 31.6 - 8.7 11.8 26.8 12.2 16.1 19.9 21.7 8.9 2.9 10.6 24.4 18.6 13.7 23.4 16.6 10.0 10.8 19.8 10.0
el 8.1 21.1 13.4 - 18.3 - 31.6 - - 10.0 26.9 11.4 6.5 19.1 21.4 - 2.1 - 19.8 21.1 - 22.4 15.2 8.9 8.8 - -
en 15.7 33.9 23.1 41.2 30.5 32.8 - 39.7 15.2 16.0 41.2 23.1 24.9 32.5 33.4 11.3 4.1 23.4 31.9 41.2 17.0 38.8 20.1 15.8 17.9 28.9 13.9
es - - - - - - 38.6 - 10.0 11.7 - 13.8 15.9 22.7 28.6 - 3.2 - 24.2 22.4 14.1 31.5 17.0 11.2 12.3 23.2 -
fa 6.5 13.6 9.3 13.2 12.9 - 25.1 16.3 - 5.2 18.6 7.2 8.8 15.0 14.8 - 1.9 8.2 13.4 10.4 7.8 16.8 11.4 8.1 5.4 16.8 7.9
fi 3.2 10.2 9.6 12.7 10.9 9.4 15.7 12.5 3.0 - - 5.6 8.7 10.0 10.0 - 1.8 2.2 11.6 9.2 7.1 10.9 8.6 5.6 5.0 12.1 -
fr - 24.2 18.8 27.0 23.7 24.6 39.0 - 10.0 - - 13.8 18.3 23.9 - 10.0 3.5 12.5 25.2 24.1 15.2 29.4 18.5 11.8 12.4 23.6 9.3
he 8.5 17.0 12.8 18.2 17.4 17.4 32.5 22.7 6.9 8.1 24.5 - 11.7 17.6 19.1 7.2 2.1 8.3 17.5 16.5 10.5 21.2 14.4 7.7 6.6 17.2 -
hi 3.5 9.8 7.7 11.2 14.3 10.3 24.2 15.8 3.4 5.0 19.0 6.5 - 12.7 13.3 6.2 1.6 5.4 12.0 8.7 7.1 15.1 12.0 6.2 3.5 15.1 6.6
id 7.7 19.9 14.6 20.8 18.9 18.4 32.5 23.8 9.4 9.7 25.3 11.2 16.1 - - 9.9 3.3 18.9 20.1 21.4 12.6 23.0 15.4 10.6 9.2 23.3 10.8
it 9.3 22.4 16.5 24.8 21.9 22.6 34.0 30.4 9.4 11.2 - 12.7 15.8 20.8 - - 3.1 13.8 22.8 23.7 13.7 28.7 16.4 10.7 11.0 21.7 -
ja 3.7 7.2 5.8 8.4 7.7 7.8 11.5 - 4.4 4.4 12.3 4.0 8.8 9.4 9.4 - - 5.2 8.3 7.8 5.5 - 7.3 - - - 6.7
ko 3.3 7.1 5.6 8.1 8.3 - 13.7 10.9 4.0 4.4 12.3 3.8 8.2 10.0 8.3 - - 3.9 8.3 7.8 5.3 9.5 7.1 5.0 2.7 12.0 6.3
ms 7.4 11.6 8.2 16.5 12.6 - 27.1 - 8.7 5.6 19.5 6.0 11.5 19.8 17.2 - 1.6 - 13.5 10.2 7.8 18.3 12.2 9.2 4.5 23.0 -
nl 7.8 19.9 16.7 26.8 23.7 - 33.0 25.4 8.7 12.1 28.0 11.5 15.8 20.9 21.9 - 2.9 10.7 - - 14.3 24.3 - 9.6 8.8 20.3 -
no 7.9 20.5 18.8 30.4 19.7 21.6 42.9 24.0 5.2 10.4 26.6 11.4 11.3 20.2 24.0 9.4 2.8 10.8 - - 11.4 23.6 16.9 8.3 9.4 14.0 -
pl 5.0 13.8 13.0 16.0 13.2 - 17.8 15.6 5.3 8.4 18.4 7.0 10.6 13.5 14.0 - 2.0 7.1 14.3 11.0 - 14.6 12.2 6.3 8.5 14.4 -
pt 10.0 24.7 17.7 27.1 23.1 24.9 40.6 33.6 10.1 11.4 32.3 13.9 17.4 24.1 29.1 - 3.4 12.6 24.6 22.5 14.6 - - 11.1 11.8 23.6 -
ru 6.0 16.9 12.8 15.9 15.3 14.6 20.1 17.6 7.0 7.8 20.6 9.3 12.7 14.5 15.5 7.9 2.0 9.3 - 14.4 11.2 16.8 - - 17.0 16.1 8.0
tr 5.2 11.8 8.7 12.2 12.1 11.2 18.9 14.6 6.1 7.2 17.1 6.5 12.1 13.0 12.6 - 2.2 7.1 12.5 9.9 7.4 13.7 - - 4.7 14.2 -
uk 4.0 14.2 10.0 12.2 12.2 10.7 18.6 15.0 4.4 6.4 16.8 4.8 6.5 10.6 12.7 - 1.2 5.2 11.3 10.4 9.3 13.7 19.2 4.5 - 11.7 -
vi 7.6 16.9 12.9 17.3 17.0 - 27.5 21.8 8.6 9.4 23.3 9.9 15.8 21.4 18.9 - 3.2 16.2 18.1 16.6 11.1 20.7 14.2 10.0 8.7 - -
zh 6.3 - 9.3 11.6 12.2 - 19.6 - - 7.1 16.7 7.0 12.0 14.9 13.7 9.7 2.9 - - - - - 11.7 - - - -
Table 4: BLEU scores on the TED test set as proposed in Qi et al. (2018). NMT systems were trained on bitexts mined in CCMatrix only, using a threshold of 1.06. No other resources were used.

In this set of experiments, we are interested in the performance of NMT systems trained on our bitexts only. Following Gottschalk and Demidova (2017) and Schwenk et al. (2019), we evaluate on the test sets of the TED dataset Qi et al. (2018). This dataset contains parallel TED talk transcripts in 50 languages (https://github.com/neulab/word-embeddings-for-nmt). The TED datasets are tokenized and we first detokenize them using Moses, with the exception of pairs involving Korean, for which detokenization creates artifacts. As we do not include the training set provided with the TED dataset, there is no guarantee that our bitexts cover the same domains.

In the current version of CCMatrix, we consider 27 different languages for this evaluation (see Table 4) and train one NMT system per translation direction. Although the size of the bitexts varies across language pairs, we used the same pipeline for each pair. In particular, we limit the bitext size to 15M sentences to avoid very long training times. We tokenize the data with Moses, with the exception of Chinese where we use Jieba and Japanese where we use Mecab. We compute a BPE vocabulary Sennrich et al. (2016) on the resulting tokenized training bitext. Then, for all pairs, we train the same architecture, namely a Transformer network with the same number of encoder and decoder layers, and identical word-embedding and FFN dimensions, number of training epochs and initial learning rate. We keep the model with the best BLEU score on the validation set of TED.

In Table 4, we report tokenized BLEU scores on the test set (using Moses, Jieba and Mecab tokenization). In comparison with WikiMatrix Schwenk et al. (2019), a larger number of language pairs reaches high BLEU scores: their best pair (Brazilian Portuguese into English) is surpassed by several of our pairs, our best one reaching 42.9 BLEU (Norwegian to English). These results should not be considered as the state-of-the-art on the TED corpus since we did not attempt to optimize the Transformer architecture for each language pair. We believe that they give a good indication of the quality of the mined parallel sentences, and suggest that our bitext mining approach is robust to the noise and domain differences present in a large corpus like Common Crawl.
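For reference, the language-specific tokenization applied before computing BLEU can be approximated with standard Python packages; this is a sketch of the evaluation preprocessing, not the exact scripts used here.

```python
from sacremoses import MosesTokenizer   # pip install sacremoses
import jieba                             # Chinese word segmentation
import MeCab                             # Japanese segmentation, via mecab-python3

moses_tok = {lang: MosesTokenizer(lang=lang) for lang in ("en", "de", "ru")}
mecab = MeCab.Tagger("-Owakati")

def tokenize(sentence: str, lang: str) -> str:
    """Return a whitespace-tokenized sentence in the language-specific convention."""
    if lang == "zh":
        return " ".join(jieba.cut(sentence))
    if lang == "ja":
        return mecab.parse(sentence).strip()
    return moses_tok[lang].tokenize(sentence, return_str=True)
```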

6.2 WMT’19 evaluation

System de-en en-de en-ru ru-en zh-en en-zh de-fr fr-de
Single systems:
NT’18 WMT bitext 46.2 45.9 33.5 33.4 25.8 39.2 - -
NT’18 CCMatrix 47.4 49.7 35.4 35.3 25.8 41.3 - -
NT’19 WMT bitext 41.0 40.4 31.4 38.1 - - - -
NT’19 CCMatrix 40.7 44.7 34.8 39.5 29.2 34.8 37.0 33.0
Ensembles + BT + Reranking:
NT’19 best 42.8 44.9 36.3 40.1 39.3 44.6 37.3 35.0
Table 5: BLEU scores on the Newstest’18 (NT’18) and Newstest’19 (NT’19) test sets. “NT’18 WMT bitext” and “NT’19 WMT bitext” are published results for single models trained on parallel WMT’19 data; for En-De and En-Ru the results are from (Ng et al., 2019), for En-Zh from (Sun et al., 2019). “NT’19 best” are the best BLEU scores achieved by ensembles of models trained on both parallel and back-translated WMT’19 data at the time of writing, according to http://matrix.statmt.org/.

We also evaluate our bitexts on the WMT’19 news translation task. We only consider high-resource directions for this comparison as they constitute the biggest challenge: the existing baseline systems perform very strongly, and achieving superior performance with mined data only is difficult. We follow the setup described in (Ng et al., 2019) to train systems for En-De, En-Ru, En-Zh and De-Fr. We use the Transformer Big architecture with an increased FFN size (8192) and train these models for 500k updates on 8 GPUs with a batch size of 3500 tokens. Given the large amount of mined bitexts for the considered language pairs (see Table 3), we limit the sentence pairs to those with a margin score of at least 1.07, except for En-Zh where we apply a margin threshold of 1.06. This gives us 40.6M En-De, 39.5M En-Ru, 32.6M De-Fr and 17.6M En-Zh sentence pairs. For each direction we learn a joint source-target BPE encoding Sennrich et al. (2016) and use shared input/output embeddings. For the En-De and En-Ru models, we increase the model size even further to 9 encoder and decoder layers, use layer dropout Fan et al. (2019) and increase the embedding dimension to 2048. We tune training parameters on Newstest 2014-2016 when available, and on the WMT’19 dev set for De-Fr. We compare the performance of a single model for each direction with the performance of published single models trained on bitext data only. We found that systems trained on CCMatrix outperform systems trained on the WMT bitext data (see Table 5). This can be seen as a clear indicator of the quality of the mined data.

To answer the question of how this data combines with real human-translated data, we train a system using a combination of CCMatrix and the bitexts provided by WMT’19, taking En-De as an example. We found that this system outperforms the system trained on CCMatrix data only by 0.8 BLEU points on average, achieving a BLEU score of 50.9 on newstest2018 and of 45.1 on newstest2019.

6.3 WAT’19 evaluation

System Ja / Ru Ru / Ja
CCMatrix dev 16.15 19.06
CCMatrix test 14.48 18.20
WAT’19 test best 14.26 (http://lotus.kuee.kyoto-u.ac.jp/WAT/evaluation/list.php?t=67&o=1) 16.41 (http://lotus.kuee.kyoto-u.ac.jp/WAT/evaluation/list.php?t=66&o=4)
Table 6: BLEU scores on the WAT’19 evaluation.

Finally, we have evaluated translation between Russian and Japanese as proposed in the 2019 Workshop on Asian Translation (WAT) Nakazawa et al. (2019) (http://lotus.kuee.kyoto-u.ac.jp/WAT/WAT2019/index.html). According to the organizers of the WAT workshop, this language pair represents “an extremely low resource situation for distant language pairs”. The organizers provide only a tiny amount of parallel data from the Global Voices domain for training (12,356 sentences), as well as a development set (486 sentences) and a test set (600 sentences) from the News Commentary domain (https://github.com/aizhanti/JaRuNC). The participants in the WAT’19 Russian/Japanese evaluation were encouraged to use the provided Russian/English and Japanese/English bitexts and to train multilingual NMT systems.

We trained an NMT system on CCMatrix Russian/Japanese bitexts only, without using other resources or texts aligned with English, applying a threshold of 1.06 on the margin. We use the same NMT architecture as in Section 6.2, without layer dropout. We report tokenized BLEU scores using multi-bleu.perl, with Moses tokenization for Russian and Mecab for Japanese (see Table 6). We were able to outperform the best performing system at the WAT’19 evaluation, in particular when translating into Japanese. The participants in the WAT translation task were constrained to use only the provided resources, which included alignments with English. Therefore, our results are not directly comparable, but we argue that they are still a good indicator of the alignment quality of our mined bitexts.

7 Conclusion

We have shown that margin-based mining in a joint multilingual sentence embedding space can be scaled to monolingual texts of more than 36 billion unique sentences in 38 languages. Our approach is generic and simply compares all sentences against each other, without requiring any document alignment. We tackled the computational complexity by parallelizing all processing steps. This procedure yielded 661 million sentences aligned with English, and 3.5 billion for pairwise alignments of 28 languages. To the best of our knowledge, this is by far the largest collection of high-quality mined parallel sentences.

We have performed an extensive evaluation of the quality of the mined bitexts by training NMT systems for many language pairs. The mined bitexts appear to be of high quality: training only on our mined data, we are able to outperform the best reported single NMT system at the WMT’19 evaluations for translation between English and German, Russian and Chinese, as well as between German and French. We also achieve state-of-the-art BLEU scores for the translation between Russian and Japanese on the WAT’19 test set. We provide a script to reproduce our results in the LASER github repository (https://github.com/facebookresearch/LASER).

In the next version of the CCMatrix corpus, we will increase the number of common crawl snapshots and focus on low-resource languages. The mined data can be used to train improved multilingual LASER sentence embeddings. The large amount of parallel data also raises interesting questions about how to use it best, for instance how to efficiently train NMT systems on more than fifty million high-quality parallel sentences.

8 Acknowledgments

We would like to thank Matthijs Douze for support with the use of FAISS and Vishrav Chaudhary for helpful comments on this work.

References

  • S. Abdul-Rauf and H. Schwenk (2009) On the Use of Comparable Corpora to Improve SMT performance. In EACL, pp. 16–23. External Links: Link Cited by: §2.
  • S. F. Adafre and M. de Rijke (2006) Finding similar sentences across multiple languages in Wikipedia. In Proceedings of the Workshop on NEW TEXT Wikis and blogs and other dynamic text sources, Cited by: §2.
  • A. Aghaebrahimian (2018) Deep neural networks at the service of multilingual parallel sentence extraction. In Coling, Cited by: §2.
  • M. Artetxe and H. Schwenk (2018a) Margin-based Parallel Corpus Mining with Multilingual Sentence Embeddings. https://arxiv.org/abs/1811.01136. Cited by: §1, §2, §4.1, §4.1.
  • M. Artetxe and H. Schwenk (2018b) Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. In https://arxiv.org/abs/1812.10464, Cited by: §1, §2, §4.2, §4.2, Table 1.
  • A. Azpeitia, T. Etchegoyhen, and E. Martínez Garcia (2017) Weighted Set-Theoretic Alignment of Comparable Sentences. In BUCC, pp. 41–45. External Links: Link Cited by: §2.
  • A. Azpeitia, T. Etchegoyhen, and E. Martínez Garcia (2018) Extracting Parallel Sentences from Comparable Corpora with STACC Variants. In BUCC, Cited by: §2.
  • L. Barrault, O. Bojar, M. R. Costa-jussà, C. Federmann, M. Fishel, Y. Graham, B. Haddow, M. Huck, P. Koehn, S. Malmasi, C. Monz, M. Müller, S. Pal, M. Post, and M. Zampieri (2019) Findings of the 2019 conference on machine translation (wmt19). In WMT, pp. 1–61. External Links: Link Cited by: §1, §6.
  • H. Bouamor and H. Sajjad (2018) H2@BUCC18: Parallel Sentence Extraction from Comparable Corpora Using Multilingual Sentence Embeddings. In BUCC, Cited by: §2, §2.
  • C. Buck and P. Koehn (2016) Findings of the wmt 2016 bilingual document alignment shared task. In Proceedings of the First Conference on Machine Translation, Berlin, Germany, pp. 554–563. External Links: Link Cited by: §2.
  • V. Chaudhary, Y. Tang, F. Guzmán, H. Schwenk, and P. Koehn (2019) Low-resource corpus filtering using multilingual sentence embeddings. In Proceedings of the Fourth Conference on Machine Translation (WMT), Cited by: §2.
  • C. España-Bonet, Á. C. Varga, A. Barrón-Cedeño, and J. van Genabith (2017) An Empirical Analysis of NMT-Derived Interlingual Embeddings and their Use in Parallel Sentence Identification. IEEE Journal of Selected Topics in Signal Processing, pp. 1340–1348. Cited by: §2, §4.2.
  • M. Esplà-Gomis and M. L. Forcada (2010) Combining content-based and url-based heuristics to harvest aligned bitexts from multilingual sites with bitextor. The Prague Bulletin of Mathematical Linguistics 9, pp. 77–86. Cited by: §2.
  • T. Etchegoyhen and A. Azpeitia (2016) Set-Theoretic Alignment for Comparable Corpora. In ACL, pp. 2009–2018. External Links: Document, Link Cited by: §2.
  • A. Fan, E. Grave, and A. Joulin (2019) Reducing transformer depth on demand with structured dropout. External Links: 1909.11556 Cited by: §6.2.
  • S. Gottschalk and E. Demidova (2017) MultiWiki: Interlingual text passage alignment in Wikipedia. ACM Transactions on the Web (TWEB) 11 (1), pp. 6. Cited by: §2, §6.1.
  • E. Grave, P. Bojanowski, P. Gupta, A. Joulin, and T. Mikolov (2018) Learning word vectors for 157 languages. https://arxiv.org/abs/1802.06893. Cited by: §3, §4.3.
  • F. Grégoire and P. Langlais (2017) BUCC 2017 Shared Task: a First Attempt Toward a Deep Learning Framework for Identifying Parallel Sentences in Comparable Corpora. In BUCC, pp. 46–50. External Links: Link Cited by: §2.
  • M. Guo, Q. Shen, Y. Yang, H. Ge, D. Cer, G. H. Abrego, K. Stevens, N. Constant, Y. Sung, B. Strope, and R. Kurzweil (2018) Effective Parallel Corpus Mining using Bilingual Sentence Embeddings. arXiv:1807.11906. Cited by: §2, §4.2, §4.
  • H. Hassan, A. Aue, C. Chen, V. Chowdhary, J. Clark, C. Federmann, X. Huang, M. Junczys-Dowmunt, W. Lewis, M. Li, S. Liu, T. Liu, R. Luo, A. Menezes, T. Qin, F. Seide, X. Tan, F. Tian, L. Wu, S. Wu, Y. Xia, D. Zhang, Z. Zhang, and M. Zhou (2018) Achieving Human Parity on Automatic Chinese to English News Translation. arXiv:1803.05567. Cited by: §2, §4.2.
  • H. Jégou, M. Douze, and C. Schmid (2011) Product quantization for nearest neighbor search. IEEE Trans. PAMI 33 (1), pp. 117–128. Cited by: §4.3.
  • J. Johnson, M. Douze, and H. Jégou (2017) Billion-scale similarity search with GPUs. arXiv preprint arXiv:1702.08734. Cited by: §4.3.
  • A. Joulin, E. Grave, P. Bojanowski, and T. Mikolov (2016) Bag of tricks for efficient text classification. https://arxiv.org/abs/1607.01759. Cited by: §4.3.
  • P. Koehn, F. Guzmán, V. Chaudhary, and J. M. Pino (2019) Findings of the wmt 2019 shared task on parallel corpus filtering for low-resource conditions. In Proceedings of the Fourth Conference on Machine Translation, Volume 2: Shared Task Papers, Florence, Italy. Cited by: §2, §2.
  • P. Koehn, H. Khayrallah, K. Heafield, and M. L. Forcada (2018) Findings of the wmt 2018 shared task on parallel corpus filtering. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, Belgium, Brussels, pp. 726–739. External Links: Link Cited by: §2.
  • P. Koehn (2005) Europarl: a parallel corpus for statistical machine translation. In MT summit, Cited by: §1.
  • P. Lison and J. Tiedemann (2016) OpenSubtitles2016: extracting large parallel corpora from movie and tv subtitles. In LREC, Cited by: §1.
  • M. Z. Mohammadi and N. GhasemAghaee (2010) Building bilingual parallel corpora based on Wikipedia. In 2010 Second International Conference on Computer Engineering and Applications, pp. 264–268. Cited by: §2.
  • D. S. Munteanu and D. Marcu (2005) Improving Machine Translation Performance by Exploiting Non-Parallel Corpora. Computational Linguistics 31 (4), pp. 477–504. External Links: Link Cited by: §2, §2.
  • T. Nakazawa, N. Doi, S. Higashiyama, C. Ding, R. Dabre, H. Mino, I. Goto, W. P. Pa, A. Kunchukuttan, S. Parida, O. Bojar, and S. Kurohashi (2019) Overview of the 6th workshop on Asian translation. In Proceedings of the 6th Workshop on Asian Translation, pp. 1–35. External Links: Link, Document Cited by: §1, §6.3.
  • N. Ng, K. Yee, A. Baevski, M. Ott, M. Auli, and S. Edunov (2019) Facebook fair’s wmt19 news translation task submission. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), Florence, Italy, pp. 314–319. External Links: Link Cited by: §6.2, Table 5.
  • P. Otero, I. López, S. Cilenis, and S. de Compostela (2011) Measuring comparability of multilingual corpora extracted from Wikipedia. Iberian Cross-Language Natural Language Processings Tasks (ICL), pp. 8. Cited by: §2.
  • P. G. Otero and I. G. López (2010) Wikipedia as multilingual source of comparable corpora. In Proceedings of the 3rd Workshop on Building and Using Comparable Corpora, LREC, pp. 21–25. Cited by: §2.
  • A. Patry and P. Langlais (2011) Identifying parallel documents from a large bilingual collection of texts: application to parallel article extraction in Wikipedia. In Proceedings of the 4th Workshop on Building and Using Comparable Corpora: Comparable Corpora and the Web, pp. 87–95. Cited by: §2.
  • Y. Qi, D. Sachan, M. Felix, S. Padmanabhan, and G. Neubig (2018) When and why are pre-trained word embeddings useful for neural machine translation?. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), New Orleans, Louisiana, pp. 529–535. External Links: Link Cited by: §1, §1, §6.1, Table 4, §6.
  • P. Resnik and N. A. Smith (2003) The Web as a Parallel Corpus. Computational Linguistics 29 (3), pp. 349–380. External Links: Link Cited by: §2.
  • P. Resnik (1999) Mining the Web for Bilingual Text. In ACL, External Links: Link Cited by: §2.
  • H. Schwenk, V. Chaudhary, S. Sun, H. Gong, and F. Guzmán (2019) WikiMatrix: mining 135m parallel sentences in 1620 language pairs from wikipedia. In http://arxiv.org/abs/1907.05791, Cited by: §1, §2, §4.1, §4.1, §4.2, §4.3, §4.3, §4.3, §5.1, §6.1, §6.1.
  • H. Schwenk (2018) Filtering and mining parallel data in a joint multilingual space. In ACL, pp. 228–234. Cited by: §1, §2, §4.
  • R. Sennrich, B. Haddow, and A. Birch (2016) Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pp. 1715–1725. Cited by: §6.1, §6.2.
  • J. R. Smith, C. Quirk, and K. Toutanova (2010) Extracting parallel sentences from comparable corpora using document level alignment. In NAACL, pp. 403–411. Cited by: §2.
  • M. Sun, B. Jiang, H. Xiong, Z. He, H. Wu, and H. Wang (2019) Baidu neural machine translation systems for wmt19. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), Florence, Italy, pp. 374–381. External Links: Link Cited by: Table 5.
  • J. Tiedemann (2012) Parallel data, tools and interfaces in OPUS. In LREC, Cited by: §1.
  • C. Tsai and D. Roth (2016) Cross-lingual wikification using multilingual embeddings. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 589–598. Cited by: §2.
  • D. Tufis, R. Ion, Ș. D. Dumitrescu, and D. Ștefănescu (2013) Wikipedia as an smt training corpus. In RANLP, pp. 702–709. Cited by: §2.
  • M. Utiyama and H. Isahara (2003) Reliable Measures for Aligning Japanese-English News Articles and Sentences. In ACL, External Links: Link Cited by: §2.
  • G. Wenzek, M. Lachaux, A. Conneau, V. Chaudhary, F. Guzmán, A. Joulin, and E. Grave (2019) CCNet: extracting high quality monolingual datasets from web crawl data. https://arxiv.org/abs/1911.00359. Cited by: CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB, §1, §3, 1st item.
  • Y. Yang, G. H. Ábrego, S. Yuan, M. Guo, Q. Shen, D. Cer, Y. Sung, B. Strope, and R. Kurzweil (2019) Improving multilingual sentence embedding using bi-directional dual encoder with additive margin softmax. In https://arxiv.org/abs/1902.08564, Cited by: §2, §4.2.
  • M. Ziemski, M. Junczys-Dowmunt, and B. Pouliquen (2016) The United Nations Parallel Corpus v1.0. In LREC, Cited by: §1.
  • P. Zweigenbaum, S. Sharoff, and R. Rapp (2018) Overview of the Third BUCC Shared Task: Spotting Parallel Sentences in Comparable Corpora. In Proceedings of the 11th Workshop on Building and Using Comparable Corpora, External Links: Link Cited by: §4.1.