
CoVoST: A Diverse Multilingual Speech-To-Text Translation Corpus

by Changhan Wang, et al.

Spoken language translation has recently witnessed a resurgence in popularity, thanks to the development of end-to-end models and the creation of new corpora such as Augmented LibriSpeech and MuST-C. Existing datasets involve language pairs with English as the source language, cover very specific domains, or are low-resource. We introduce CoVoST, a multilingual speech-to-text translation corpus from 11 languages into English, diversified with over 11,000 speakers and over 60 accents. We describe the dataset creation methodology and provide empirical evidence of the quality of the data. We also provide initial benchmarks, including, to our knowledge, the first end-to-end many-to-one multilingual models for spoken language translation. CoVoST is released under a CC0 license and is free to use. We also provide additional evaluation data derived from Tatoeba under CC licenses.



1 Introduction

End-to-end speech-to-text translation (ST) has attracted much attention recently [6, 11, 24, 3, 5] given its simplicity compared to cascading automatic speech recognition (ASR) and machine translation (MT) systems. The lack of labeled data, however, has become a major blocker for bridging the performance gap between end-to-end models and cascading systems. Several corpora have been developed in recent years. Post et al. (2013) introduced a 38-hour Spanish-English ST corpus by augmenting the transcripts of the Fisher and Callhome corpora with English translations. Di Gangi et al. (2019) [10] created the largest ST corpus to date from TED talks, but all of its language pairs translate out of English. Beilharz et al. (2019) [4] created a 110-hour German-English ST corpus from LibriVox audiobooks. Godard et al. (2018) created a Moboshi-French ST corpus as part of a rare-language documentation effort. Woldeyohannis et al. provided an Amharic-English ST corpus in the tourism domain. Boito et al. (2019) created a multilingual ST corpus involving 8 languages from a multilingual speech corpus based on Bible readings [8].

Previous work thus either involves language pairs translating out of English, very specific domains, very low-resource languages, or a limited set of language pairs. This limits the scope of study, including the latest explorations of end-to-end multilingual ST [14, 12]. The work most similar and concurrent to ours is Europarl-ST by Iranzo-Sánchez et al. (2019) [15], a multilingual ST corpus built from European Parliament proceedings. The corpus we introduce has longer speech durations and more translation tokens, and is diversified with multiple speakers per transcript/translation. Finally, we provide additional out-of-domain test sets.

In this paper, we introduce CoVoST, a multilingual ST corpus based on Common Voice [1] for translation from 11 languages into English, diversified with over 11,000 speakers and over 60 accents. It includes a total of 708 hours of French (Fr), German (De), Dutch (Nl), Russian (Ru), Spanish (Es), Italian (It), Turkish (Tr), Persian (Fa), Swedish (Sv), Mongolian (Mn) and Chinese (Zh) speech, with the French and German portions being the largest among existing public corpora. We also collect an additional evaluation corpus from Tatoeba for French, German, Dutch, Russian and Spanish, totaling 9.3 hours of speech. Both corpora are created at the sentence level and require no additional alignment or segmentation. Using the official Common Voice train-development-test split, we also provide baseline models, including, to our knowledge, the first end-to-end many-to-one multilingual ST models. CoVoST is released under a CC0 license and is free to use; the Tatoeba evaluation samples are likewise available under permissive CC licenses. All the data can be acquired at

Hours | Sentences (All / Unique) | Speakers (Count / Accents) | Tokens (Source / Target) | Avg. Length (Source / Target) | Word Vocab (Source / Target)
Train 87.1 78.9K 27.5K 436 9 787.7K 800.8K 10.0 10.1 29.7K 25.3K
Fr Dev 38.3 34.1K 10.4K 1,001 17 336.0K 339.0K 9.8 9.9 14.6K 12.8K
Test 46.3 39.2K 10.4K 2,884 24 391.6K 392.0K 10.0 10.0 14.9K 13.2K
TT 1.6 4.5K 4.5K 3 N/A 25.6K 24.4K 5.7 5.4 3.4K 2.2K
Train 71.0 60.3K 8.5K 1,109 7 549.5K 605.5K 9.1 10.0 16.3K 11.8K
De Dev 88.1 77.3K 5.6K 2,337 11 690.8K 759.2K 8.9 9.8 12.0K 9.3K
Test 168.3 145.8K 5.6K 4,781 13 1.31M 1.43M 9.0 9.8 12.3K 9.5K
TT 4.0 9.1K 9.1K 5 N/A 45.8K 47.0K 5.0 5.1 4.9K 3.2K
Train 4.4 4.3K 1.9K 35 2 39.9K 41.5K 9.4 9.8 9.2K 7.7K
Nl Dev 5.3 5.0K 1.7K 126 2 48.0K 50.0K 9.4 9.8 4.3K 4.0K
Test 8.2 7.7K 1.7K 461 3 73.6K 76.5K 9.5 9.9 4.3K 3.9K
TT 0.3 0.6K 0.6K 1 N/A 2.9K 3.2K 5.1 5.5 0.7K 0.7K
Train 10.2 7.1K 2.1K 6 N/A 75.2K 91.2K 10.6 12.8 7.4K 4.8K
Ru Dev 9.0 6.4K 1.7K 9 N/A 66.3K 80.5K 10.4 12.7 6.5K 4.3K
Test 8.2 5.8K 1.7K 61 N/A 59.6K 72.3K 10.3 12.5 6.2K 4.1K
TT 1.5 2.7K 2.7K 5 N/A 15.2K 18.4K 5.7 6.9 4.2K 2.7K
Train 20.9 18.3K 6.9K 319 11 162.8K 177.3K 8.9 9.7 5.6K 4.5K
Es Dev 3.2 2.7K 2.6K 89 10 24.5K 26.6K 9.0 9.8 5.2K 4.2K
Test 3.5 2.7K 2.6K 457 10 24.2K 26.4K 8.8 9.6 5.2K 4.1K
TT 1.9 2.8K 2.8K 2 N/A 22.2K 23.6K 7.8 8.3 4.2K 3.3K
Train 13.4 10.0K 6.4K 28 1 116.7K 127.8K 11.8 12.9 12.8K 9.9K
It Dev 10.6 8.3K 4.6K 93 1 92.8K 103.1K 11.2 12.4 10.6K 8.1K
Test 12.8 8.9K 4.6K 577 1 100.8K 110.3K 11.4 12.5 10.4K 8.1K
Train 2.6 2.5K 1.8K 14 1 18.5K 24.6K 7.3 9.7 4.7K 3.4K
Tr Dev 3.0 2.9K 1.6K 58 1 21.0K 28.1K 7.2 9.6 4.3K 3.1K
Test 3.8 3.4K 1.6K 323 1 24.7K 33.2K 7.2 9.7 4.2K 3.1K
Train 19.9 16.2K 2.4K 352 N/A 133.8K 164.9K 8.3 10.2 5.5K 3.9K
Fa Dev 22.8 18.4K 2.1K 677 N/A 150.8K 185.0K 8.2 10.1 5.1K 3.7K
Test 23.9 19.1K 2.1K 1,210 N/A 157.9K 193.5K 8.3 10.2 5.1K 3.7K
Train 1.2 1.6K 1.6K 2 N/A 10.9K 12.2K 6.8 7.6 2.3K 2.0K
Sv Dev 1.1 1.2K 1.2K 4 N/A 8.0K 8.9K 6.4 7.2 1.7K 1.6K
Test 1.0 1.1K 1.1K 41 N/A 7.8K 8.6K 6.8 7.5 1.7K 1.6K
Train 3.0 2.1K 2.1K 4 N/A 23.0K 27.2K 11.0 13.0 8.2K 4.4K
Mn Dev 2.5 1.6K 1.4K 22 N/A 17.9K 21.6K 11.1 13.3 6.2K 3.5K
Test 2.9 1.8K 1.6K 204 N/A 20.2K 24.1K 11.0 13.1 6.8K 3.8K
Train 4.0 2.3K 2.3K 9 6 50.8K 37.9K 22.1 16.5 2.6K 8.2K
Zh Dev 3.5 2.0K 2.0K 24 13 44.0K 33.6K 22.5 17.2 2.6K 7.6K
Test 3.7 2.0K 2.0K 244 22 43.6K 33.0K 22.1 16.7 2.6K 7.5K
Table 1: Basic statistics of CoVoST and the TT evaluation set. Token statistics are based on Moses-tokenized sentences. Speaker demographics are only partially available.

2 Data Collection and Processing

2.1 Common Voice (CoVo)

Common Voice [1, CoVo] is a crowdsourced speech recognition corpus released under an open CC0 license. Contributors record voice clips by reading from a bank of donated sentences. Each voice clip is validated by at least two other users. Most sentences are covered by multiple speakers, with potentially different genders, age groups or accents.

Raw CoVo data contains both samples that passed validation and samples that did not. To build CoVoST, we use only the former and reuse the official train-development-test partition of the validated data. As of January 2020, the latest CoVo release (2019-06-12) includes 29 languages. CoVoST is currently built on that release and covers the following 11 languages: French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian and Chinese.

Validated transcripts were sent to professional translators. The translators had access to the transcripts but not to the corresponding voice clips, since the clips would not carry additional information. Because transcripts are duplicated across multiple speakers, we deduplicated them before sending them for translation. As a result, different voice clips of the same content (transcript) share an identical translation in the CoVoST train, development and test splits.
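The deduplicate-then-reattach step can be sketched as follows (a minimal illustration in plain Python; the function names and toy data are our own, not from the CoVoST tooling):

```python
def deduplicate(samples):
    """Unique transcripts across (clip_id, transcript) samples, in order."""
    unique = []
    for _, transcript in samples:
        if transcript not in unique:
            unique.append(transcript)
    return unique

def reattach(samples, translations):
    """Attach each clip to the translation of its (deduplicated) transcript."""
    return [(clip_id, transcript, translations[transcript])
            for clip_id, transcript in samples]

samples = [("clip1", "bonjour"), ("clip2", "bonjour"), ("clip3", "merci")]
unique = deduplicate(samples)  # only two transcripts go to the translators
translations = {"bonjour": "hello", "merci": "thank you"}  # returned by translators
triplets = reattach(samples, translations)
```

Note that both clips of "bonjour" end up with the same English translation, which is exactly the property described above.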

In order to control the quality of the professional translations, we applied several sanity checks [13]:

1. For German-English, French-English and Russian-English, we computed sentence-level BLEU [9] with the NLTK [7] implementation between the human translations and automatic translations produced by a state-of-the-art system [17] (the French-English system was a Transformer big [22] trained separately on WMT14). We applied this check to these three language pairs only, as we are confident in the quality of the corresponding systems. Translations with too low a score were manually inspected and sent back to the translators when needed.
2. We manually inspected examples where the source transcript was identical to the translation.
3. We measured the perplexity of the translations with a language model trained on a large amount of clean monolingual data [17], manually inspected examples with high perplexity, and sent them back to the translators accordingly.
4. We computed the ratio of English characters in the translations, manually inspected examples with a low ratio, and sent them back to the translators accordingly.
5. Finally, we used VizSeq [23] to calculate similarity scores between transcripts and translations based on LASER cross-lingual sentence embeddings [2]. Samples with low scores were manually inspected and sent back for re-translation when needed.
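As an illustration, the English-character-ratio check (4) needs only a few lines of standard Python. This is a sketch: the threshold below is our own assumption, as the paper does not state the cut-off used.

```python
def english_char_ratio(text):
    """Fraction of alphabetic characters that are ASCII (i.e. English) letters."""
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return 0.0
    return sum(c.isascii() for c in letters) / len(letters)

def flag_for_review(translations, threshold=0.9):  # threshold is an assumption
    """Return translations whose English-character ratio falls below threshold."""
    return [t for t in translations if english_char_ratio(t) < threshold]

flagged = flag_for_review(["hello world", "привет мир"])  # untranslated text is flagged
```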

We also sanity-check the overlap between the train, development and test sets in terms of transcripts and voice clips (via MD5 file hashing), and confirm that they are completely disjoint.
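The clip-level disjointness check can be sketched with the standard library's hashlib (a minimal illustration; the helper names are ours):

```python
import hashlib

def md5_of_file(path):
    """MD5 digest of a file's raw bytes, read in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def splits_disjoint(split_hashes):
    """True if no hash (i.e. no identical clip) appears in more than one split."""
    seen = {}
    for split, hashes in split_hashes.items():
        for h in hashes:
            if h in seen and seen[h] != split:
                return False
            seen[h] = split
    return True
```

In practice one would populate `split_hashes` by applying `md5_of_file` to every clip in each split.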

2.2 Tatoeba (TT)

Tatoeba (TT) is a community-built language-learning corpus with sentences aligned across multiple languages and speech partially available for them. Given its language-learning purpose, its sentences are on average shorter than those in CoVoST (see Table 1). Sentences in TT are licensed under CC BY 2.0 FR, and part of the audio is available under various CC licenses.

We construct an evaluation set from TT (for French, German, Dutch, Russian and Spanish) as a complement to the CoVoST development and test sets. We collect (speech, transcript, English translation) triplets for the 5 languages, excluding those whose speech has a broken URL or is not CC-licensed. We further filter the samples by sentence length (minimum 4 words, including punctuation) to reduce the proportion of very short sentences. This makes the resulting evaluation set closer to real-world scenarios and more challenging.
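The length filter can be sketched as follows. The regex tokenizer below is a rough stand-in for the actual tokenization, so counts may differ slightly from the released set; the minimum of 4 tokens (punctuation included) is from the paper.

```python
import re

def tokens(sentence):
    """Split into word and punctuation tokens (a rough stand-in for Moses tokenization)."""
    return re.findall(r"\w+|[^\w\s]", sentence)

def keep(sample, min_len=4):
    """Keep a (speech_url, transcript, translation) triplet only if the
    transcript has at least min_len tokens, punctuation included."""
    _, transcript, _ = sample
    return len(tokens(transcript)) >= min_len

samples = [("url1", "Bonjour !", "Hello!"),
           ("url2", "Je voudrais un café, s'il vous plaît.", "I would like a coffee, please.")]
kept = [s for s in samples if keep(s)]  # the two-token sentence is dropped
```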

We run the same quality checks on TT as on CoVoST and find no poor-quality translations according to our criteria. Finally, we report the overlap between CoVo transcripts and TT sentences in Table 2. The overlap is minimal, which makes the TT evaluation set a suitable additional test set when training on CoVoST.

CoVo split Fr De Nl Ru Es
Train 1.7% 0.2% 0.2% 0.1% 0.1%
Dev 1.0% 0.1% 0.3% 0.0% 0.1%
Test 0.9% 0.3% 0.3% 0.0% 0.4%
Table 2: TT-CoVo transcript overlapping rate.
Figure 1: CoVoST transcript distribution by number of speakers.
Figure 2: CoVoST transcript distribution by number of speaker accents.
Figure 3: CoVoST transcript distribution by speaker age groups.

3 Data Analysis

Basic Statistics

Basic statistics for CoVoST and TT are listed in Table 1, including (unique) sentence counts, speech durations, speaker demographics (partially available), and token and vocabulary statistics (based on sentences tokenized with sacreMoses, for both transcripts and translations). CoVoST contains over 327 hours of German speech and over 171 hours of French speech, which, to our knowledge, makes them the largest among existing public ST corpora (the second largest being 110 hours [4] for German and 38 hours [15] for French). Moreover, CoVoST has a total of 18 hours of Dutch speech, contributing, to our knowledge, the first public Dutch ST resource. CoVoST also has around 27 hours of Russian, 37 hours of Italian and 67 hours of Persian speech, which is 1.8, 2.5 and 13.3 times the size of the previous largest public corpora [8], respectively. Most sentences (transcripts) in CoVoST are covered by multiple speakers with potentially different accents, resulting in rich speech diversity. For example, there are over 1,000 speakers and over 10 accents in the French and German development / test sets. This enables good coverage of speech variation in both model training and evaluation.

Speaker Diversity

As seen in Table 1, CoVoST is diversified with a rich set of speakers and accents. We further inspect the speaker demographics in terms of sample distributions with respect to speaker counts, accent counts and age groups, shown in Figures 1, 2 and 3. We observe that for 8 of the 11 languages, at least 60% of the sentences (transcripts) are covered by multiple speakers. Over 80% of the French sentences have at least 3 speakers, and over 90% of the German sentences have at least 5 speakers. Similarly, a large portion of the French, German, Dutch and Spanish sentences are spoken in multiple accents. Speakers of each language also spread widely across age groups (below 20, 20s, 30s, 40s, 50s, 60s and 70s).

4 Baseline Results

We provide baselines using the official train-development-test split on the following tasks: automatic speech recognition (ASR), machine translation (MT) and speech translation (ST).

4.1 Experimental Settings

Data Preprocessing

We convert the raw MP3 audio files from CoVo and TT into mono-channel waveforms and downsample them to 16,000 Hz. For transcripts and translations, we normalize the punctuation, tokenize the text with sacreMoses and lowercase it. For transcripts, we further remove all punctuation markers except apostrophes. We use character vocabularies for all tasks, with 100% coverage of all characters; preliminary experiments showed that character vocabularies provided more stable training than BPE. For MT, the vocabulary is created jointly on transcripts and translations. We extract 80-channel log-mel filterbank features, computed with a 25ms window size and 10ms window shift using torchaudio. The features are normalized to zero mean and unit standard deviation. For GPU memory efficiency, we remove samples with more than 3,000 frames or more than 256 characters (fewer than 25 samples are removed per language).
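The transcript normalization and character-vocabulary steps can be sketched as follows (a simplification: the actual pipeline also applies sacreMoses punctuation normalization and tokenization, which we omit here):

```python
import re

def normalize_transcript(text):
    """Lowercase and strip all punctuation except apostrophes, as done for
    ASR/ST transcripts in the preprocessing described above."""
    text = text.lower()
    text = re.sub(r"[^\w\s']", "", text)   # keep letters, digits, spaces, apostrophes
    return re.sub(r"\s+", " ", text).strip()

def char_vocab(sentences):
    """Character vocabulary with 100% coverage of the given text."""
    return sorted({c for s in sentences for c in s})
```

For example, `normalize_transcript("C'est bon.")` keeps the apostrophe but drops the period.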

Model Training

Our ASR and ST models follow the architecture of Bérard et al. (2018) [5], but with 3 decoder layers as in Pino et al. (2019). For MT, we use a Transformer base architecture [22], but with 3 encoder layers, 3 decoder layers and 0.3 dropout. We use a batch size of 10,000 frames for ASR and ST and 4,000 tokens for MT. We train all models with Fairseq [18] for up to 200,000 updates, and use SpecAugment [20] for ASR and ST to alleviate overfitting.
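To make the SpecAugment step concrete, here is a minimal pure-Python sketch of frequency and time masking. The mask sizes and counts below are illustrative assumptions (the paper does not report its SpecAugment hyperparameters), and time warping is omitted for brevity.

```python
import random

def spec_augment(features, max_f=27, max_t=100, num_masks=2):
    """SpecAugment-style frequency and time masking on a log-mel spectrogram,
    represented as a list of frames (each a list of filterbank values).
    Returns a masked copy; the input is left untouched."""
    n_frames, n_bins = len(features), len(features[0])
    out = [frame[:] for frame in features]
    for _ in range(num_masks):
        # frequency mask: zero out a band of filterbank channels
        f = random.randint(0, min(max_f, n_bins))
        f0 = random.randint(0, n_bins - f)
        for frame in out:
            for i in range(f0, f0 + f):
                frame[i] = 0.0
        # time mask: zero out a span of consecutive frames
        t = random.randint(0, min(max_t, n_frames))
        t0 = random.randint(0, n_frames - t)
        for j in range(t0, t0 + t):
            out[j] = [0.0] * n_bins
    return out

random.seed(0)  # reproducibility of the sketch
feats = [[1.0] * 80 for _ in range(50)]  # 50 frames of 80-channel features
aug = spec_augment(feats)
```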

Inference and Evaluation

We use a beam size of 5 for all models. For MT, we use the best checkpoint by validation loss; for ASR and ST, we average the last 5 checkpoints. For MT and ST, we report case-insensitive tokenized BLEU [19] using sacreBLEU [21]. For ASR, we report word error rate (WER) and character error rate (CER) using VizSeq [23].
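Checkpoint averaging simply means taking the element-wise mean of the saved model parameters. A minimal sketch, with parameters represented as plain lists of floats (in practice they would be tensors loaded from the last 5 checkpoints):

```python
def average_checkpoints(state_dicts):
    """Element-wise average of model parameters across checkpoints."""
    n = len(state_dicts)
    averaged = {}
    for name in state_dicts[0]:
        params = [sd[name] for sd in state_dicts]
        averaged[name] = [sum(vals) / n for vals in zip(*params)]
    return averaged

ckpts = [{"w": [1.0, 2.0]}, {"w": [3.0, 4.0]}]
avg = average_checkpoints(ckpts)  # {"w": [2.0, 3.0]}
```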

4.2 Automatic Speech Recognition (ASR)

WER / CER (CoVoST Test) | WER / CER (TT)
Fr 24.3 10.8 45.4 25.2
De 41.5 17.8 32.2 15.1
Nl 98.8 75.2 114.7 91.9
Ru 101.2 79.3 111.5 93.5
Es 99.8 74.6 107.1 79.9
It 98.3 72.1
Tr 106.1 81.3
Fa 100.2 75.3
Sv 111.2 86.3
Mn 105.2 82.7
Zh 99.1 59.7
Table 3: WER and CER scores for ASR models.

For simplicity, we use the same model architecture for ASR and ST, even though we do not later use the ASR models to pretrain the ST encoders. Table 3 shows the word error rate (WER) and character error rate (CER) of the ASR models. French and German perform best, as they are the two highest-resource languages in CoVoST. The other languages are relatively low-resource (especially Turkish and Swedish), and the ASR models struggle to learn from their data.

4.3 Machine Translation (MT)

BLEU (CoVoST Test) | BLEU (TT)
Fr 29.8 25.4
De 8.0 8.1
Nl 3.2 5.3
Ru 3.0 0.7
Es 11.0 2.3
It 8.7
Tr 0.9
Fa 0.5
Sv 5.0
Mn 0.2
Zh 5.5
Table 4: BLEU scores for MT models.

MT models take transcripts (without punctuation) as input and output translations (with punctuation). For simplicity, we do not change the text preprocessing for MT to correct this mismatch; moreover, the same mismatch exists in cascading ST systems, where MT inputs are ASR outputs. Table 4 shows the BLEU scores of the MT models. The results are consistent with those of the ASR models. For example, thanks to abundant training data, French reaches a decent BLEU score of 29.8/25.4, while German performs worse, which we attribute to the lower content richness of its transcripts. The other languages are low-resource in CoVoST, and it is difficult to train decent models without additional data or pre-training techniques.

4.4 Speech Translation (ST)

Fr De Nl Ru Es
Fr 19.1/9.0
De 4.0/1.6
Nl 0.9/0.6
Ru 2.0/0.2
Es 0.8/0.3
De+Fr 19.2/8.7 6.4/4.6
Nl+Fr 19.1/8.6 1.4/1.7
Ru+Fr 18.9/8.7 6.1/0.4
Es+Fr 19.8/9.4 3.6/1.9
First 5 19.0/8.1 6.1/4.2 1.9/3.5 5.9/0.8 3.2/1.6
All 11 18.1/7.6 5.9/4.6 1.5/2.1 5.1/0.8 2.6/1.7
Table 5: BLEU scores for end-to-end ST models. Rows indicate the languages used for training; columns report CoVoST test / TT BLEU scores for the corresponding language. The best results for each language are in bold. French (Fr) is the highest-resource of all 11 languages; "First 5" refers to Fr, De, Nl, Ru and Es.
Fr It Tr Fa Sv Mn Zh
Fr 19.1/9.0
It 0.4
Tr 0.8
Fa 0.5
Sv 0.3
Mn 0.2
Zh 3.7
It+Fr 19.2/8.4 4.7
Tr+Fr 19.8/9.9 2.0
Fa+Fr 18.5/7.6 1.6
Sv+Fr 18.8/8.8 0.4
Mn+Fr 19.5/9.2 0.2
Zh+Fr 19.1/8.3 6.1
All 11 18.1/7.6 3.7 1.2 1.5 0.4 0.3 4.9
Table 6: BLEU scores for end-to-end ST models (continuation of Table 5).

CoVoST is a many-to-one multilingual ST corpus. While end-to-end one-to-many and many-to-many multilingual ST models have been explored very recently [14, 12], many-to-one multilingual models, to our knowledge, have not. We hence use CoVoST to examine this setting. Tables 5 and 6 show the BLEU scores of bilingual and multilingual end-to-end ST models trained on CoVoST. We observe that combining speech from multiple languages consistently brings gains to the low-resource languages (all but French and German), including combinations of distant languages such as Ru+Fr, Tr+Fr and Zh+Fr. Moreover, some combinations also bring gains to the high-resource language, French: Es+Fr, Tr+Fr and Mn+Fr. We provide only the most basic many-to-one multilingual baselines here and leave a full exploration of the best configurations to future work. Finally, we note that for some language pairs, absolute BLEU numbers are relatively low because we restrict model training to the supervised data. We encourage the community to improve upon these baselines, for example by leveraging semi-supervised training.

Avg. Sent. BLEU (unnormalized) | Avg. Per-Group Mean | Avg. CoV
Fr 11.56 11.20 0.87
De 1.61 1.94 2.20
Nl 0.13 0.14 1.59
Ru 1.18 1.12 0.89
Es 0.14 0.18 1.41
It 0.02 0.02 1.13
Tr 0.12 0.13 1.15
Fa 0.11 0.13 2.20
Sv 0.04 0.04 N/A
Mn 0.0 0.0 N/A
Zh 1.80 1.81 N/A
De+Fr 4.85 7.94 1.11
Nl+Fr 9.70 9.64 0.87
Ru+Fr 10.35 9.94 0.88
Es+Fr 9.97 9.45 0.87
It+Fr 9.65 8.14 0.85
Tr+Fr 11.16 10.24 0.85
Fa+Fr 7.45 8.81 0.85
Sv+Fr 10.90 9.80 0.88
Mn+Fr 11.24 9.90 0.87
Zh+Fr 11.05 9.85 0.86
First 5 4.44 6.01 1.12
All 11 3.63 3.84 1.10
Table 7: Average per-group mean and average per-group coefficient of variation of ST sentence-level BLEU scores on the CoVoST test set (a group corresponds to one transcript with multiple speakers). The coefficient of variation is unavailable for Swedish, Mongolian and Chinese because the models are unable to achieve non-zero scores on multi-speaker samples.

4.5 Multi-Speaker Evaluation

In CoVoST, a large portion of the transcripts are covered by multiple speakers with different genders, accents and age groups. Besides standard corpus-level BLEU, we also want to evaluate the variance of model outputs on the same content (transcript) spoken by different speakers. We therefore group samples (and their sentence-level BLEU scores) by transcript, and compute the average per-group mean and the average per-group coefficient of variation:

AvgMean = (1/|G|) · Σ_{g∈G} mean(S_g)
AvgCoV = (1/|G'|) · Σ_{g∈G'} std(S_g) / mean(S_g)

where S_g is the set of sentence-level BLEU scores for the samples sharing transcript g, G is the set of transcript groups, and G' excludes groups with zero mean, for which the ratio is undefined.

AvgMean provides a normalized quality score, as opposed to corpus-level BLEU or an unnormalized average of sentence BLEU, and AvgCoV is a standardized measure of model stability across speakers (the lower, the better). Table 7 shows the AvgMean and AvgCoV of our ST models on the CoVoST test set. German and Persian have the worst (least stable) AvgCoV, given their rich speaker diversity in the test set and relatively small training sets (see also Figure 1 and Table 1). Dutch also has a poor AvgCoV because of its lack of training data. Multilingual models are consistently more stable on the low-resource languages; Ru+Fr, Tr+Fr, Fa+Fr and Zh+Fr even achieve a better AvgCoV than any individual language.
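The two metrics above can be computed with the standard library. This is a minimal sketch: the BLEU values are toy placeholders, and we assume population standard deviation and skip zero-mean groups, details the paper does not state.

```python
from collections import defaultdict
from statistics import mean, pstdev

def multi_speaker_metrics(samples):
    """Average per-group mean and average per-group coefficient of variation
    of sentence-level BLEU scores, grouping samples by transcript. Groups
    with a zero mean are skipped for the CoV, where the ratio is undefined."""
    groups = defaultdict(list)
    for transcript, bleu in samples:
        groups[transcript].append(bleu)
    means = [mean(scores) for scores in groups.values()]
    covs = [pstdev(scores) / mean(scores)
            for scores in groups.values() if mean(scores) > 0]
    return mean(means), (mean(covs) if covs else None)

# toy scores: two speakers read "bonjour", one reads "merci"
samples = [("bonjour", 10.0), ("bonjour", 20.0), ("merci", 30.0)]
avg_mean, avg_cov = multi_speaker_metrics(samples)
```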

5 Conclusion

We introduce a multilingual speech-to-text translation corpus, CoVoST, for 11 languages into English, diversified with over 11,000 speakers and over 60 accents. We also provide baseline results, including, to our knowledge, the first end-to-end many-to-one multilingual model for spoken language translation. CoVoST is free to use with a CC0 license, and the additional Tatoeba evaluation samples are also CC-licensed.

6 Bibliographical References


  • [1] R. Ardila, M. Branson, K. Davis, M. Henretty, M. Kohler, J. Meyer, R. Morais, L. Saunders, F. M. Tyers, and G. Weber (2019) Common voice: a massively-multilingual speech corpus. External Links: 1912.06670 Cited by: §1, §2.1.
  • [2] M. Artetxe and H. Schwenk (2019) Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics 7, pp. 597–610. Cited by: §2.1.
  • [3] S. Bansal, H. Kamper, A. Lopez, and S. Goldwater (2017) Towards speech-to-text translation without speech recognition. Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers. External Links: Link, Document Cited by: §1.
  • [4] B. Beilharz, X. Sun, S. Karimova, and S. Riezler (2019) LibriVoxDeEn: a corpus for german-to-english speech translation and speech recognition. External Links: 1910.07924 Cited by: §3.
  • [5] A. Bérard, L. Besacier, A. C. Kocabiyikoglu, and O. Pietquin (2018) End-to-end automatic speech translation of audiobooks. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6224–6228. Cited by: §1.
  • [6] A. Berard, O. Pietquin, C. Servan, and L. Besacier (2016) Listen and translate: a proof of concept for end-to-end speech-to-text translation. External Links: 1612.01744 Cited by: §1.
  • [7] S. Bird, E. Klein, and E. Loper (2009) Natural language processing with Python: analyzing text with the natural language toolkit. O'Reilly Media, Inc. Cited by: §2.1.
  • [8] A. W. Black (2019-05) CMU wilderness multilingual speech dataset. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vol. , pp. 5971–5975. External Links: Document, ISSN 1520-6149 Cited by: §1, §3.
  • [9] B. Chen and C. Cherry (2014-06) A systematic comparison of smoothing techniques for sentence-level BLEU. In Proceedings of the Ninth Workshop on Statistical Machine Translation, Baltimore, Maryland, USA, pp. 362–367. External Links: Link, Document Cited by: §2.1.
  • [10] M. A. Di Gangi, R. Cattoni, L. Bentivogli, M. Negri, and M. Turchi (2019-06) MuST-C: a Multilingual Speech Translation Corpus. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 2012–2017. External Links: Link, Document Cited by: CoVoST: A Diverse Multilingual Speech-To-Text Translation Corpus.
  • [11] L. Duong, A. Anastasopoulos, D. Chiang, S. Bird, and T. Cohn (2016-06) An attentional model for speech translation without transcription. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, California, pp. 949–959. External Links: Link, Document Cited by: §1.
  • [12] M. A. D. Gangi, M. Negri, and M. Turchi (2019) One-to-many multilingual end-to-end speech translation. External Links: 1910.03320 Cited by: §1, §4.4.
  • [13] F. Guzmán, P. Chen, M. Ott, J. Pino, G. Lample, P. Koehn, V. Chaudhary, and M. Ranzato (2019-11) The FLORES evaluation datasets for low-resource machine translation: Nepali–English and Sinhala–English. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 6097–6110. External Links: Link, Document Cited by: §2.1.
  • [14] H. Inaguma, K. Duh, T. Kawahara, and S. Watanabe (2019) Multilingual end-to-end speech translation. External Links: 1910.00254 Cited by: §1, §4.4.
  • [15] J. Iranzo-Sanchez, J. A. Silvestre-Cerda, J. Jorge, N. Rosello, A. Gimenez, A. Sanchis, J. Civera, and A. Juan (2019) Europarl-st: a multilingual corpus for speech translation of parliamentary debates. External Links: 1911.03167 Cited by: §3.
  • [16] A. C. Kocabiyikoglu, L. Besacier, and O. Kraif (2018) Augmenting librispeech with french translations: a multimodal corpus for direct speech translation evaluation. External Links: 1802.03142 Cited by: CoVoST: A Diverse Multilingual Speech-To-Text Translation Corpus.
  • [17] N. Ng, K. Yee, A. Baevski, M. Ott, M. Auli, and S. Edunov (2019-08) Facebook FAIR’s WMT19 news translation task submission. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), Florence, Italy, pp. 314–319. External Links: Link, Document Cited by: §2.1.
  • [18] M. Ott, S. Edunov, A. Baevski, A. Fan, S. Gross, N. Ng, D. Grangier, and M. Auli (2019) Fairseq: a fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations, Cited by: §4.1.
  • [19] K. Papineni, S. Roukos, T. Ward, and W. Zhu (2002) BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pp. 311–318. Cited by: §4.1.
  • [20] D. S. Park, W. Chan, Y. Zhang, C. Chiu, B. Zoph, E. D. Cubuk, and Q. V. Le (2019) Specaugment: a simple data augmentation method for automatic speech recognition. arXiv preprint arXiv:1904.08779. Cited by: §4.1.
  • [21] M. Post (2018-10) A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, Brussels, Belgium, pp. 186–191. External Links: Link, Document Cited by: §4.1.
  • [22] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in neural information processing systems, pp. 5998–6008. Cited by: §2.1, §4.1.
  • [23] C. Wang, A. Jain, D. Chen, and J. Gu (2019) VizSeq: a visual analysis toolkit for text generation tasks. EMNLP-IJCNLP 2019, pp. 253. Cited by: §2.1.
  • [24] R. J. Weiss, J. Chorowski, N. Jaitly, Y. Wu, and Z. Chen (2017-08) Sequence-to-sequence models can directly translate foreign speech. Interspeech 2017. External Links: Link, Document Cited by: §1.