Very detailed information about social venues such as restaurants is available from user-generated reviews in applications like Google Maps, TripAdvisor or Foursquare (https://foursquare.com/). Most of these reviews are written in the local language and are not directly exploitable by foreign visitors: an analysis of the Foursquare database shows that, in Paris, only 49% of the restaurants have at least one review in English. The situation can be much worse for other cities and languages (e.g., only 1% of Seoul restaurants for a French-only speaker).
Machine Translation of such user-generated content can improve the situation and make the data available for direct display or for downstream NLP tasks (e.g., cross-lingual information retrieval, sentiment analysis, spam or fake review detection), provided its quality is sufficient.
We asked professionals to translate 11.5k French Foursquare reviews (18k sentences) to English. We believe that this resource (released at https://europe.naverlabs.com/research/natural-language-processing/machine-translation-of-restaurant-reviews/) will be valuable to the community for training and evaluating MT systems addressing challenges posed by user-generated content, which we discuss in detail in this paper.
We conduct extensive experiments and combine techniques that seek to solve these challenges (e.g., factored case, noise generation, domain adaptation with tags) on top of a strong Transformer baseline. In addition to BLEU evaluation and human evaluation, we use targeted metrics that measure how well polysemous words are translated, or how well sentiments expressed in the original review can still be recovered from its translation.
2 Related work
Translating restaurant reviews written by casual customers presents several difficulties for NMT, in particular robustness to non-standard language and adaptation to a specific style or domain (see Section 3.2 for details).
Concerning robustness to noisy user-generated content, Michel and Neubig (2018) stress differences with traditional domain adaptation problems, and propose a typology of errors, many of which we also detected in the Foursquare data. They also released a dataset (MTNT) whose sources were selected from a social media platform (Reddit) on the basis of being especially noisy (see Appendix for a comparison with Foursquare). These sources were then translated by humans to produce a parallel corpus that can be used to engineer more robust NMT systems and to evaluate them. This corpus was the basis of the WMT 2019 Robustness Task Li et al. (2019), in which Berard et al. (2019) ranked first. We use the same set of robustness and domain adaptation techniques, which we study in more depth and apply to our review translation task.
Sperber et al. (2017), Belinkov and Bisk (2018) and Karpukhin et al. (2019) propose to improve robustness by training models on data-augmented corpora, containing noisy sources obtained by random word or character deletions, insertions, substitutions or swaps. Recently, Vaibhav et al. (2019) proposed to use a similar technique along with noise generation through replacement of a clean source by one obtained by back-translation.
We employ several well-known domain adaptation techniques: back-translation of large monolingual corpora close to the domain Sennrich et al. (2016b); Edunov et al. (2018), fine-tuning with in-domain parallel data Luong and Manning (2015); Freitag and Al-Onaizan (2016); Servan et al. (2016), domain tags for knowledge transfer between domains Kobus et al. (2017); Berard et al. (2019).
Addressing the technical issues of robustness and adaptation of an NMT system is decisive for real-world deployment, but evaluation is also critical. This aspect is stressed by Levin et al. (2017) (NMT of curated hotel descriptions), who point out that automatic metrics like BLEU tend to neglect semantic differences that have a small textual footprint, but may be seriously misleading in practice, for instance by interpreting available parking as if it meant free parking. To mitigate this, we conduct additional evaluations of our models: human evaluation, translation accuracy of polysemous words, and indirect evaluation with sentiment analysis.
3 Task description
We present a new task of restaurant review translation, which combines domain adaptation and robustness challenges.
3.1 Corpus description
We sampled 11.5k French reviews from Foursquare, mostly in the food category (https://developer.foursquare.com/docs/resources/categories), split them into 18k sentences, and grouped them into train, valid and test sets (see Table 1). The French reviews contain on average 1.5 sentences and 17.9 words. We then hired eight professional translators to translate them to English. Two of them created the training set by post-editing (PE) the outputs of baseline NMT systems (ConvS2S or Transformer Big trained on the “UGC” corpus described in Section 6, without domain adaptation or robustness tricks). The other six translated the valid and test sets from scratch. They were asked to translate (or post-edit) the reviews sentence by sentence (to avoid any alignment problem), but they could see the full context. We manually filtered the test set to remove unsatisfactory translations. The full reviews and additional metadata (e.g., location and type of the restaurant) are also released as part of this resource, to encourage research on contextual machine translation.
Foursquare-HT was translated from scratch by the same translators who post-edited Foursquare-PE. While we did not use it in this work, it can be used as extra training or development data. We also release a human translation of the French-language test set (668 sentences) of the Aspect-Based Sentiment Analysis task at SemEval 2016 Pontiki et al. (2016).
|Split||Sentences||Reviews||Words|
|PE (train)||12 080||8 004||141 958|
|HT||2 784||1 625||29 075|
|valid||1 243||765||13 976|
|test||1 838||1 157||21 525|
|(1)||é qd g vu sa …||(source)|
|||and when I saw that …||(reference)|
|||é qd g seen his …||(online MT)|
|(2)||c’est trooop bon !||(source)|
|||it’s toooo good!||(reference)|
|||it’s good trooop!||(online MT)|
|(3)||le cadre est nul||(source)|
|||the setting is lousy||(reference)|
|||the frame is null||(online MT)|
|(4)||le garçon a pété un cable||(source)|
|||the waiter went crazy||(reference)|
|||the boy farted a cable||(online MT)|
|(5)||pizza nickel, tres bonnes pattes||(source)|
|||great pizza, very good pasta||(reference)|
|||nickel pizza, very good legs||(online MT)|
Translating restaurant reviews presents two main difficulties compared to common MT tasks. First, the reviews are written in a casual style, close to spoken language, taking liberties with spelling, grammar and punctuation; slang is also very frequent. MT should be robust to these variations. Second, they are generally reactions by clients of a restaurant to its food quality, service or atmosphere, with specific words relating to these aspects or sentiments; these require some degree of domain adaptation. The table above illustrates both issues with outputs from an online MT system. Examples of full reviews from Foursquare-PE, along with metadata, are shown in the Appendix.
Examples 1 and 2 fall into the robustness category: 1 is an extreme form of SMS-like, quasi-phonetic language (et quand j’ai vu ça); 2 is a literal transcription of a long-vowel phonetic stress (trop → trooop). Example 3 falls into the domain category: in a restaurant context, cadre typically refers to the setting. Examples 4 and 5 involve both robustness and domain adaptation: pété un cable is a non-compositional slang expression and garçon is not a boy in this domain; nickel is slang for great, très is missing an accent, and pâtes is misspelled as pattes, which is another French word.
Regarding robustness, we found many of the same errors listed by Michel and Neubig (2018) as noise in social media text: SMS language (é qd g vu sa), typos and phonetic spelling (pattes), repeated letters (trooop, merciiii), slang (nickel, bof, mdr), missing or wrong accents (tres), emoticons (‘:-)’) and emojis, missing punctuation, wrong or non-standard capitalization (lowercase proper names, capitalized words for emphasis). Regarding domain aspects, there are polysemous words with typical domain-specific meanings (carte → map, menu; cadre → frame, executive, setting), idiomatic expressions (à tomber par terre → to die for), and venue-related named entities (La Boîte à Sardines).
4 Robustness to noise
We propose solutions for dealing with non-standard case, emoticons, emojis and other issues.
4.1 Rare character placeholder
We segment our training data into subwords with BPE (Sennrich et al., 2016c), implemented in SentencePiece (Kudo and Richardson, 2018). BPE can deal with rare or unseen words by splitting them into more frequent subwords, but cannot deal with unseen characters (unless BPE is applied at the byte level, as suggested by Radford et al., 2019). While this is not a problem in most tasks, Foursquare contains many emojis, and sometimes symbols in other scripts (e.g., Arabic). Unicode now defines around 3k emojis, most of which are likely to be out-of-vocabulary.
We replace rare characters on both sides of the training corpus by a placeholder (<x>). A model trained on this data is typically able to copy the placeholder at the correct position. Then, at inference time, we replace the output tokens <x> by the rare source-side characters, in the same order. This approach is similar to that of Jean et al. (2015), who used the attention mechanism to replace UNK symbols with the aligned word in the source. Berard et al. (2019) used the same technique to deal with emojis in the WMT robustness task.
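The placeholder round-trip can be sketched as follows (a minimal illustration: the frequency threshold and helper names are our own, and a real pipeline would operate on tokens rather than raw strings):

```python
from collections import Counter

PLACEHOLDER = "<x>"

def build_charset(corpus, min_count=100):
    """Characters frequent enough in the training data to keep as-is."""
    counts = Counter(c for line in corpus for c in line)
    return {c for c, n in counts.items() if n >= min_count}

def mask_rare(text, charset):
    """Replace rare characters by a placeholder, remembering them in order."""
    rare = [c for c in text if c not in charset]
    masked = "".join(c if c in charset else PLACEHOLDER for c in text)
    return masked, rare

def unmask(output, rare):
    """Restore the source-side rare characters in the MT output, in order."""
    for c in rare:
        output = output.replace(PLACEHOLDER, c, 1)
    return output
```

Since the model only has to learn to copy `<x>` to the right position, this sidesteps the open-ended emoji vocabulary entirely.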
4.2 Capital letters
As shown in Table 2, capital letters are another source of confusion: HONTE and honte are considered two different words. The former is out-of-vocabulary and is split very aggressively by BPE, which causes the MT model to hallucinate.
|Input||UNE HONTE !||une honte !|
|Pre-proc||UN E _H ON TE _!||une _honte _!|
|MT output||A _H ON E Y !||A _dis gra ce !|
|Post-proc||A HONEY!||A disgrace!|
A solution is to lowercase the input, both at training and at test time. However, when doing so, some information may be lost (e.g., named entities, acronyms, emphasis) which may result in lower translation quality.
Levin et al. (2017) perform factored machine translation (Sennrich and Haddow, 2016; Garcia-Martinez et al., 2016), where a word and its case are split into two different features. For instance, HONTE becomes honte + upper.
We implement this with two embedding matrices, one for words and one for case, and represent a token as the sum of the embeddings of its factors. For the target side, we follow Garcia-Martinez et al. (2016) and have two softmax operations. We first predict the word in its lowercase form and then predict its case; like the “dependency model” of Garcia-Martinez et al. (2016), the case prediction uses the current decoder state and the embedding of the output word. The embeddings of the case and word are then summed and used as input for the next decoder step.
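The pre-processing side of this factorization can be sketched as below (an illustrative round-trip between a token and its word/case factors; the neural part with summed embeddings and two softmaxes is not shown):

```python
def case_factors(token):
    """Split a token into its lowercase form and a case feature,
    as in factored NMT: HONTE -> ("honte", "upper")."""
    if token.isupper() and len(token) > 1:
        return token.lower(), "upper"
    if token.istitle():
        return token.lower(), "title"
    return token.lower(), "lower"

def apply_case(word, case):
    """Inverse operation, applied after the word and its case are
    predicted separately by the decoder."""
    return {"upper": word.upper(), "title": word.title(), "lower": word}[case]
```

Because the word embedding matrix only ever sees lowercase forms, HONTE and honte now share the same word factor and differ only in the (much smaller) case factor.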
Berard et al. (2019) propose another approach, inline casing, which does not require any change in the model. We insert the case as a regular token into the sequence right after the word. Special tokens <U>, <L> and <T> (upper, lower and title) are used for this purpose and appended to the vocabulary. Contrary to the previous solution, there is only one embedding matrix and one softmax.
In practice, words are assumed to be lowercase by default and the <L> tokens are dropped to keep the factored sequences as short as possible. “Best fries EVER” becomes “best <T> _f ries _ever <U>”. Like Berard et al. (2019), we force SentencePiece to split mixed-case words like MacDonalds into single-case subwords (Mac and Donalds).
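A word-level sketch of inline casing (illustrative; in the actual pipeline the case tokens are inserted before BPE segmentation, so a tag may follow several subwords of the same word):

```python
CASE_UPPER, CASE_TITLE = "<U>", "<T>"

def inline_case_encode(words):
    """Lowercase words, inserting a case token after non-lowercase ones.
    Lowercase is the default, so no <L> token is emitted."""
    out = []
    for w in words:
        if w.isupper() and len(w) > 1:
            out += [w.lower(), CASE_UPPER]
        elif w.istitle():
            out += [w.lower(), CASE_TITLE]
        else:
            out.append(w.lower())
    return out

def inline_case_decode(tokens):
    """Re-apply case by consuming the inline case tokens.
    Assumes a well-formed sequence (no leading case token)."""
    out = []
    for t in tokens:
        if t == CASE_UPPER:
            out[-1] = out[-1].upper()
        elif t == CASE_TITLE:
            out[-1] = out[-1].title()
        else:
            out.append(t)
    return out
```

Contrary to the factored model, this needs no architecture change: the case tags are ordinary vocabulary items handled by the single softmax.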
Synthetic case noise
Another solution that we experiment with (see Section 6) is to inject noise on the source side of the training data by changing random source words to upper (5% chance), title (10%) or lower case (20%).
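This noise injection is easy to reproduce (a sketch using the probabilities quoted above; the `rng` parameter is our addition, for reproducibility):

```python
import random

def add_case_noise(words, rng=random):
    """Randomly perturb the case of source words: 5% upper,
    10% title, 20% lower (probabilities from Section 4.2)."""
    noised = []
    for w in words:
        r = rng.random()
        if r < 0.05:
            noised.append(w.upper())
        elif r < 0.15:
            noised.append(w.title())
        elif r < 0.35:
            noised.append(w.lower())
        else:
            noised.append(w)
    return noised
```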
4.3 Natural noise
One way to make an NMT system more robust is to train it with some of the most common errors that can be found in the in-domain data. Like Berard et al. (2019), we detect the errors that occur naturally in the in-domain data and then apply them to our training corpus, while respecting their natural distribution. We call this “natural noise generation”, as opposed to what is done in Sperber et al. (2017); Belinkov and Bisk (2018); Vaibhav et al. (2019) or in Section 4.2, where the noise is more synthetic.
We compile a general-purpose French lexicon as a transducer (implemented in Tamgu: https://github.com/naver/tamgu), designed to be traversed with extended edit-distance flags, similar to Mihov and Schulz (2004). Whenever a word is not found in the lexicon (which means that it is a potential spelling mistake), we look for a French word in the lexicon within a maximum edit distance of 2, with the following set of edit operations:
|(1)||deletion (e.g., apelle instead of appelle)|
|(2)||insertion (e.g., appercevoir instead of apercevoir)|
|(3)||constrained substitution on diacritics (e.g., mangè instead of mangé)|
|(4)||swap, counted as one operation (e.g., mnager instead of manger)|
|(5)||substitution (e.g., menger instead of manger)|
|(6)||repetitions (e.g., Merciiiii with a threshold of max 10 repetitions)|
We apply the transducer to the French monolingual Foursquare data (close to 1M sentences) to detect and count noisy variants of known French words. This step produces a dictionary mapping the correct spelling to the list of observed errors and their respective frequencies.
In addition to automatically extracted spelling errors, we extract a set of common abbreviations from Seddah et al. (2012) and we manually identify a list of common errors in French:
|(7)||Wrong verb endings (e.g., il a manger instead of il a mangé)|
|(8)||Wrong spacing around punctuation symbols (e.g., Les.plats … instead of Les plats…)|
|(9)||Upper case/mixed case words (e.g., manQue de place instead of manque de place)|
|(10)||SMS language (e.g., bcp instead of beaucoup)|
|(11)||Phonetic spelling (e.g., sa instead of ça)|
With this dictionary, which describes the real error distribution in Foursquare text, we take our large out-of-domain training corpus and randomly replace source-side words with one of their variants (rules 1 to 6), respecting the frequency of each variant in the real data. We also manually define regular expressions to randomly apply rules 7 to 11 (e.g., replacing word-final “é” with “er”).
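The replacement step can be sketched as follows (the error dictionary below is a tiny illustrative stand-in for the one mined from Foursquare, and the per-word probability `p` is a placeholder, not the paper's setting):

```python
import random

# Illustrative stand-in for the mined dictionary:
# correct form -> observed misspellings with their corpus counts.
ERRORS = {
    "appelle": {"apelle": 40},
    "beaucoup": {"bcp": 120, "bocou": 3},
    "ça": {"sa": 200},
}

def inject_natural_noise(words, errors=ERRORS, p=0.5, rng=random):
    """Replace a word by one of its observed misspellings, sampled
    proportionally to the misspelling's frequency in in-domain data."""
    out = []
    for w in words:
        variants = errors.get(w)
        if variants and rng.random() < p:
            forms = list(variants)
            weights = [variants[f] for f in forms]
            out.append(rng.choices(forms, weights=weights)[0])
        else:
            out.append(w)
    return out
```

Sampling by observed frequency is what makes the noise "natural": common errors like sa for ça appear far more often in the augmented corpus than rare ones.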
We obtain a noisy parallel corpus (which we use instead of the “clean” training data), where about 30% of all source sentences have been modified, as shown below:
|Error type||Examples of sentences with injected noise|
|(1) (6) (9)||L’Union eUropéene espere que la réunion de suiviii entre le Président […]|
|(2) (3) (10)||Le Comité notte avec bcp d’interet k les projets d’articles […]|
|(4) (7) (8)||Réunoin sur.la comptabiliter nationale […]|
5 Domain Adaptation
To adapt our models to the restaurant review domain we apply the following types of techniques: back-translation of in-domain English data, fine-tuning with small amounts of in-domain parallel data, and domain tags.
5.1 Back-translation

Back-translation (BT) is a popular technique for domain adaptation when large amounts of in-domain monolingual data are available (Sennrich et al., 2016b; Edunov et al., 2018). While our in-domain parallel corpus is small (12k pairs), Foursquare contains millions of English-language reviews. Thus, we train an NMT model in the reverse direction (EN→FR), similar to the “UGC” model with rare character handling and inline case described in Section 6.3, and translate all the Foursquare English reviews (15M sentences) to French. This corpus is not available publicly, but the Yelp dataset (https://www.yelp.com/dataset) could be used instead. This gives us a large synthetic parallel corpus.
This in-domain data is concatenated to the out-of-domain parallel data and used for training.
Edunov et al. (2018) show that doing back-translation with sampling instead of beam search brings large improvements due to increased diversity. Following this work, we test several settings:
|BT-B||Back-translation with beam search.|
|BT-S||Back-translation with sampling.|
|BT-S 3||Three different FR samplings for each EN sentence. This brings the size of the back-translated Foursquare closer to the out-of-domain corpus.|
We do not oversample the back-translated corpus, but sample a new version of it for each training epoch.
We sample with a softmax temperature below 1 to avoid the extremely noisy output obtained with unbiased sampling, striking a balance between quality and diversity.
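Sampled back-translation draws each target token from the model's softmax rather than taking the beam-search argmax; a minimal sketch of temperature sampling over a logit vector (the temperature value is illustrative, as the paper's exact setting is not reproduced here):

```python
import math
import random

def sample_with_temperature(logits, temperature=0.8, rng=random):
    """Sample a token index from softmax(logits / temperature).
    temperature -> 0 approaches greedy decoding; 1 is unbiased sampling."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                     # subtract max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    r = rng.random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r < acc:
            return i
    return len(exps) - 1                # guard against rounding
```

Lowering the temperature keeps most of the diversity benefit reported by Edunov et al. (2018) while cutting off the long tail of implausible tokens that pure sampling produces.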
5.2 Fine-tuning

When small amounts of in-domain parallel data are available, fine-tuning (FT) is often the preferred solution for domain adaptation (Luong and Manning, 2015; Freitag and Al-Onaizan, 2016). It consists in training a model on out-of-domain data, then continuing training for a few epochs on the in-domain data only.
5.3 Corpus tags
Kobus et al. (2017) propose a technique for multi-domain NMT, which consists in inserting a token in each source sequence specifying its domain. The system can learn the particularities of multiple domains (e.g., polysemous words that have a different meaning depending on the domain), which we can control at test time by manually setting the tag. Sennrich et al. (2016a) also use tags to control politeness in the model’s output.
As our corpus (see Section 6.1) is not clearly divided into domains, we apply the same technique as Kobus et al. (2017) but use corpus tags (each sub-corpus has its own tag: TED, Paracrawl, etc.) which we add to each source sequence. Like in Berard et al. (2019), the Foursquare post-edited and back-translated data also get their own tags (PE and BT). Figure 1 gives an example where using the PE corpus tag at test time helps the model pick a more adequate translation.
|Corpus tag||SRC: La carte est trop petite.|
|TED||The map is too small.|
|Multi-UN||The card is too small.|
|PE||The menu is too small.|
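Building a tagged multi-corpus training set can be sketched as below (function names and the tag format are our own; only the idea of prepending one tag token per sub-corpus comes from the work cited above):

```python
def tag_sources(pairs, tag):
    """Prepend a corpus tag token (e.g. <PE>, <BT>, <TED>) to each source."""
    return [(f"<{tag}> {src}", tgt) for src, tgt in pairs]

def build_tagged_training_set(corpora):
    """corpora: dict mapping a tag name to its list of (source, target)
    pairs. Each sub-corpus keeps its own tag, added to the vocabulary."""
    train = []
    for tag, pairs in corpora.items():
        train += tag_sources(pairs, tag)
    return train
```

At test time, prepending the chosen tag (here <PE>) to the input steers the model toward the corresponding domain, as in the carte → menu example above.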
6 Experiments

6.1 Training data
After some initial work with the WMT 2014 data, we built a new training corpus named UGC (User Generated Content), closer to our domain, by combining Multi UN, OpenSubtitles, Wikipedia, Books, Tatoeba, TED talks and ParaCrawl (all available at http://opus.nlpl.eu/), plus Gourmet, 3k translations of dishes and other food terminology (http://www.gourmetpedia.eu/); see Table 3. UGC does not include Common Crawl (which contains many misaligned sentences and caused hallucinations), but it does include OpenSubtitles (Lison and Tiedemann, 2016), whose spoken language is possibly closer to Foursquare. We observed an improvement of more than 1 BLEU on newstest2014 when switching to UGC, and almost 6 BLEU on Foursquare-valid.
|Corpus||Lines||Words (FR)||Words (EN)|
|UGC||51.39M||1 125M||1 041M|
6.2 Pre-processing

We use langid.py (Lui and Baldwin, 2012) to filter sentence pairs from UGC. We also remove duplicate sentence pairs, overlong lines, and pairs with an extreme length ratio (see Table 3). Then we apply SentencePiece and our rare character handling strategy (Section 4.1). We use a joined BPE model of size 32k, trained on the concatenation of both sides of the corpus, with SentencePiece's vocabulary frequency threshold enabled. Finally, unless stated otherwise, we always use the inline casing approach (see Section 4.2).
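The corpus filtering can be sketched as follows (the langid.py step is elided, and `max_len`/`max_ratio` are placeholder thresholds, since the exact values are not reproduced here):

```python
def filter_pairs(pairs, max_len=200, max_ratio=2.0):
    """Drop duplicate pairs, overlong lines, and pairs whose source/target
    length ratio is extreme. Thresholds are illustrative placeholders."""
    seen = set()
    kept = []
    for src, tgt in pairs:
        if (src, tgt) in seen:          # exact duplicate pair
            continue
        seen.add((src, tgt))
        ns, nt = len(src.split()), len(tgt.split())
        if ns == 0 or nt == 0 or max(ns, nt) > max_len:
            continue
        if max(ns, nt) / min(ns, nt) > max_ratio:
            continue
        kept.append((src, tgt))
    return kept
```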
6.3 Model and settings
For all experiments, we use the Transformer Big (Vaswani et al., 2017) as implemented in Fairseq, with the hyperparameters of Ott et al. (2018). Training is done on 8 GPUs, with gradients accumulated over 10 batches (Ott et al., 2018), and a fixed maximum number of tokens per batch per GPU. We train for a fixed number of epochs, saving checkpoints at regular intervals, and average the 5 best checkpoints according to their perplexity on a validation set (a held-out subset of UGC).
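Checkpoint averaging itself is straightforward; a framework-free sketch over parameter dictionaries (real checkpoints hold tensors, averaged the same way):

```python
def average_checkpoints(checkpoints):
    """Element-wise average of model parameters across checkpoints.
    Each checkpoint is a dict mapping a parameter name to a flat list
    of floats; all checkpoints share the same names and shapes."""
    n = len(checkpoints)
    avg = {}
    for name in checkpoints[0]:
        params = [ckpt[name] for ckpt in checkpoints]
        avg[name] = [sum(vals) / n for vals in zip(*params)]
    return avg
```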
For fine-tuning, we use a fixed learning rate and a total batch size of 3500 tokens (training on a single GPU without delayed updates). To avoid overfitting on Foursquare-PE, we do early stopping according to perplexity on Foursquare-valid (the best perplexity was achieved after 1 to 3 epochs). For each fine-tuned model we test all 16 combinations of four dropout rates and four learning rates, and keep the model with the best perplexity on Foursquare-valid; the same dropout rate was always best, and the best learning rate was one of two values.
6.4 Evaluation methodology
During our work, we used BLEU (Papineni et al., 2002) on newstest2012 and newstest2013 to ensure that our models remained strong on a more general domain, and on Foursquare-valid to measure performance on the Foursquare domain.
For the sake of brevity, we only give the final BLEU scores on newstest2014 and Foursquare-test. Scores on Foursquare-valid and MTNT-test (for comparison with Michel and Neubig, 2018; Berard et al., 2019) are given in the Appendix. We evaluate “detokenized” MT outputs (provided with the Foursquare corpus) against raw references using SacreBLEU (Post, 2018), with signature BLEU+case.mixed+numrefs.1.
In addition to BLEU, we do an indirect evaluation on an Aspect-Based Sentiment Analysis (ABSA) task, a human evaluation, and a task-related evaluation based on polysemous words.
6.5 BLEU evaluation
|Model||Foursquare-test||UPPER||lower||Title|
|LC to cased||30.70||33.03||33.03||33.03|
(the last three columns give case-insensitive BLEU on the synthetic all-upper, all-lower and all-title test sets)
Table 4 compares the case handling techniques presented in Section 4.2. To better evaluate the robustness of our models to changes of case, we built 3 synthetic test sets from Foursquare-test, with the same target, but all source words in upper, lower or title case.
Inline and factored case perform equally well, significantly better than the default (cased) model, especially on all-uppercase inputs. Lowercasing the source is a good option, but gives a slightly lower score on regular Foursquare-test; moreover, the “LC to cased” and “Noised case” models cannot preserve capital letters used for emphasis (as in Table 2), and the “Cased” model often breaks on such examples. Finally, synthetic case noise added to the source gives surprisingly good results. It could also be combined with factored or inline case.
Table 5 compares the baseline “inline case” model with the same model augmented with natural noise (Section 4.3). Performance is the same on Foursquare-test, but significantly better on newstest2014 artificially augmented with Foursquare-like noise.
|Model||newstest2014||newstest2014 + noise||Foursquare-test|
|UGC (Inline case)||40.68||35.59||31.46|
|+ natural noise||40.43||40.35||31.66|
Table 6 shows the results of the back-translation (BT) techniques. Surprisingly, BT with beam search (BT-B) deteriorates BLEU scores on Foursquare-test, while BT with sampling gives a consistent improvement. BLEU scores on newstest2014 are not significantly impacted, suggesting that BT can be used for domain adaptation without hurting quality on other domains.
|Model||newstest2014||Foursquare-test|
|UGC (Inline case)||40.68||31.46|
|UGC BT-S 3||40.63||32.80|
|Model||Test tag||newstest2014||Foursquare-test|
|UGC (Inline case)||–||40.68||31.46|
|UGC + FT||–||39.78||34.97|
|UGC PE + tags||–||40.71||32.15|
|UGC BT + tags||–||40.67||33.44|
Concatenating the small Foursquare-PE corpus to the 50M general domain corpus does not help much, unless using corpus tags.
Foursquare-PE + tags is not as good as fine-tuning with Foursquare-PE. However, fine-tuned models get slightly worse results on news.
Using no tag at test time works fine, even though all training sentences had tags (we tried keeping a small percentage of UGC with no tag, or with an ANY tag, but this made no difference).
|Model||newstest2014||Foursquare-test|
|UGC (Inline case)||40.68||31.46|
|Google Translate (Feb 2019)||36.31||29.63|
|DeepL (Feb 2019)||?||32.82|
|UGC BT + FT||39.55||35.93|
|UGC BT PE + tags||40.99||35.60|
|Nat noise BT + FT||39.91||36.25|
|Nat noise BT PE + tags||40.72||35.54|
As shown in Table 8, these techniques can be combined to achieve the best results. The natural noise does not have a significant effect on BLEU scores. Back-translation combined with fine-tuning gives the best performance on Foursquare (+4.5 BLEU vs UGC). However, using tags instead of fine-tuning strikes a better balance between general domain and in-domain performance.
6.6 Targeted evaluation
In this section we propose two metrics that target specific aspects of translation adequacy: translation accuracy of domain-specific polysemous words and Aspect-Based Sentiment Analysis performance on MT outputs.
Translation of polysemous words
We propose to count polysemous words specific to our domain, similarly to Lala and Specia (2018), to measure the degree of domain adaptation. Computing TER between the translation hypotheses and the post-edited references in Foursquare-PE reveals the most common substitutions (e.g., “card” is often replaced with “menu”, suggesting that “card” is a common mistranslation of the polysemous word “carte”). We filter this list manually to keep only words that are polysemous and frequent in the test set. Table 9 gives the 3 most frequent ones; rarer ones include adresse (place, address), café (coffee, café), entrée (starter, entrance), formule (menu, formula), long (slow, long), moyen (average, medium), correct (decent, right), brasserie (brasserie, brewery) and coin (local, corner).
Table 10 shows the accuracy of our models when translating these words. We see that the domain-adapted model is better at translating domain-specific polysemous words.
|Cadre||setting, frame, executive|
|Carte||menu, card, map|
|UGC (Inline case)||22||27||18||80%|
|UGC PE + tags||23||31||29||99%|
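The accuracy metric can be sketched as below (the accepted-translation sets are illustrative; in practice the domain-correct sense depends on context, and the word lists were curated manually):

```python
# Illustrative accepted translations per French polysemous word:
# only the domain-adequate sense counts as correct.
ACCEPTED = {
    "carte": {"menu"},      # "card" / "map" count as mistranslations here
    "cadre": {"setting"},   # not "frame" or "executive"
}

def polysemy_accuracy(pairs, accepted=ACCEPTED):
    """pairs: (source_tokens, hypothesis_tokens). For each occurrence of a
    tracked source word, check whether the hypothesis contains one of its
    accepted translations, and return the fraction that do."""
    total = correct = 0
    for src, hyp in pairs:
        hyp_set = set(hyp)
        for word, ok in accepted.items():
            if word in src:
                total += 1
                if hyp_set & ok:
                    correct += 1
    return correct / total if total else 0.0
```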
Indirect evaluation with sentiment analysis
We also measure adequacy by how well the translation preserves the polarity of the sentence regarding various aspects. To evaluate this, we perform an indirect evaluation on the SemEval 2016 Aspect-Based Sentiment Analysis (ABSA) task Pontiki et al. (2016). We use our internal ABSA systems trained on English or French SemEval 2016 data. The evaluation is done on the SemEval 2016 French test set: either the original version (ABSA French), or its translation (ABSA English). As shown in Table 11, translations obtained with domain-adapted models lead to significantly better scores on ABSA than the generic models.
|ABSA English on MT outputs|
|UGC (Inline case)||58.1||70.7|
|UGC BT PE + tags||60.2||72.0|
|Nat noise BT PE + tags||60.8||73.3|
6.7 Human Evaluation
We conduct a human evaluation to confirm the observations with BLEU and to overcome some of the limitations of this metric.
We select 4 MT models for evaluation (see Table 12) and show their 4 outputs at once, sentence-by-sentence, to human judges, who are asked to rank them given the French source sentence in context (with the full review). For each pair of models, we count the number of wins, ties and losses, and apply the Wilcoxon signed-rank test.
We took the first 300 test sentences and created 6 tasks of 50 sentences each. We then asked bilingual colleagues to rank the outputs of the 4 models by translation quality; each judge completed one or more tasks, without knowing the list of models or which model produced a given translation. We collected 12 answers. The inter-judge Kappa coefficient ranged from 0.29 to 0.63, with an average of 0.47, which is good given the difficulty of the task. Table 12 gives the results of the evaluation, which confirm our observations with BLEU.
We also conducted a larger-scale monolingual evaluation using Amazon Mechanical Turk (see Appendix), which led to similar conclusions.
|Comparison||Wins||Ties||Losses|
|Tags vs. Tags + noise||82||453||63|
|Tags + noise vs. Baseline||178||232||97|
|Tags + noise vs. GT (Google Translate)||218||315||65|
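The win/tie/loss counting behind Table 12 can be sketched as follows (a hypothetical helper; the Wilcoxon signed-rank test applied on top of these counts is not shown):

```python
def pairwise_outcomes(rankings, a, b):
    """rankings: one dict per (judge, sentence) mapping model name -> rank
    (1 = best; ties allowed). Returns (wins, ties, losses) of model a
    versus model b over all judgments."""
    wins = ties = losses = 0
    for r in rankings:
        if r[a] < r[b]:
            wins += 1
        elif r[a] == r[b]:
            ties += 1
        else:
            losses += 1
    return wins, ties, losses
```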
7 Conclusion

We presented a new parallel corpus of user reviews of restaurants, which we think will be valuable to the community. We proposed combinations of multiple techniques for robustness and domain adaptation, which address particular challenges of this new task. We also performed an extensive evaluation to measure the improvements brought by these techniques.
According to BLEU, the best single technique for domain adaptation is fine-tuning. Corpus tags also achieve good results, without degrading performance on a general domain. Back-translation helps, but only with sampling or tags. The robustness techniques (natural noise, factored case, rare character placeholder) do not improve BLEU.
While our models are promising, they still show serious errors when applied to user-generated content: missing negations, hallucinations, unrecognized named entities, insensitivity to context (see additional examples in the Appendix). This suggests that this task is far from solved.
We hope that this corpus, our natural noise dictionary, model outputs and human rankings will help better understand and address these problems. We also plan to investigate these problems on lower resource languages, where we expect the task to be even harder.
References

- Belinkov and Bisk (2018) Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and Natural Noise Both Break Neural Machine Translation. In ICLR.
- Berard et al. (2019) Alexandre Berard, Ioan Calapodescu, and Claude Roux. 2019. NAVER LABS Europe’s Systems for the WMT19 Machine Translation Robustness Task. In WMT.
- Caswell et al. (2019) Isaac Caswell, Ciprian Chelba, and David Grangier. 2019. Tagged Back-Translation. In WMT.
- Edunov et al. (2018) Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding Back-Translation at Scale. In EMNLP.
- Freitag and Al-Onaizan (2016) Markus Freitag and Yaser Al-Onaizan. 2016. Fast Domain Adaptation for Neural Machine Translation. arXiv.
- Garcia-Martinez et al. (2016) Mercedes Garcia-Martinez, Loic Barrault, and Fethi Bougares. 2016. Factored Neural Machine Translation. arXiv.
- Jean et al. (2015) Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On Using Very Large Target Vocabulary for Neural Machine Translation. NAACL-HLT.
- Karpukhin et al. (2019) Vladimir Karpukhin, Omer Levy, Jacob Eisenstein, and Marjan Ghazvininejad. 2019. Training on Synthetic Noise Improves Robustness to Natural Noise in Machine Translation. arXiv.
- Kobus et al. (2017) Catherine Kobus, Josep Crego, and Jean Senellart. 2017. Domain Control for Neural Machine Translation. In RANLP.
- Kudo and Richardson (2018) Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing. In EMNLP.
- Lala and Specia (2018) Chiraag Lala and Lucia Specia. 2018. Multimodal lexical translation. In LREC.
- Levin et al. (2017) Pavel Levin, Nishikant Dhanuka, Talaat Khalil, Fedor Kovalev, and Maxim Khalilov. 2017. Toward a full-scale neural machine translation in production: the Booking.com use case. In MT Summit XVI.
- Li et al. (2019) Xian Li, Paul Michel, Antonios Anastasopoulos, Yonatan Belinkov, Nadir K. Durrani, Orhan Firat, Philipp Koehn, Graham Neubig, Juan M. Pino, and Hassan Sajjad. 2019. Findings of the First Shared Task on Machine Translation Robustness. In WMT.
- Lison and Tiedemann (2016) Pierre Lison and Jörg Tiedemann. 2016. OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In LREC.
- Lui and Baldwin (2012) Marco Lui and Timothy Baldwin. 2012. langid.py: An Off-the-shelf Language Identification Tool. In Proceedings of the ACL 2012 System Demonstrations, ACL.
- Luong and Manning (2015) Minh-Thang Luong and Christopher D. Manning. 2015. Stanford Neural Machine Translation Systems for Spoken Language Domain. In IWSLT.
- Michel and Neubig (2018) Paul Michel and Graham Neubig. 2018. MTNT: A Testbed for Machine Translation of Noisy Text. In EMNLP.
- Mihov and Schulz (2004) Stoyan Mihov and Klaus U. Schulz. 2004. Fast Approximate Search in Large Dictionaries. Computational Linguistics.
- Ott et al. (2018) Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling Neural Machine Translation. In WMT.
- Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In ACL.
- Pontiki et al. (2016) Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammed AL-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphée De Clercq, Veronique Hoste, Marianna Apidianaki, Xavier Tannier, Natalia Loukachevitch, Evgeniy Kotelnikov, Núria Bel, Salud Maria Jiménez-Zafra, and Gülşen Eryiğit. 2016. SemEval-2016 Task 5: Aspect Based Sentiment Analysis. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval).
- Post (2018) Matt Post. 2018. A Call for Clarity in Reporting BLEU Scores. In WMT.
- Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. OpenAI Technical Report.
- Seddah et al. (2012) Djamé Seddah, Benoît Sagot, Marie Candito, Virginie Mouilleron, and Vanessa Combet. 2012. Building a treebank of noisy user-generated content: The French Social Media Bank. In The 11th International Workshop on Treebanks and Linguistic Theories (TLT).
- Sennrich and Haddow (2016) Rico Sennrich and Barry Haddow. 2016. Linguistic Input Features Improve Neural Machine Translation. In WMT.
- Sennrich et al. (2016a) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Controlling Politeness in Neural Machine Translation via Side Constraints. In NAACL-HLT.
- Sennrich et al. (2016b) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Improving Neural Machine Translation Models with Monolingual Data. In ACL.
- Sennrich et al. (2016c) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016c. Neural Machine Translation of Rare Words with Subword Units. In ACL.
- Servan et al. (2016) Christophe Servan, Josep Crego, and Jean Senellart. 2016. Domain specialization: a post-training domain adaptation for neural machine translation. arXiv.
- Sperber et al. (2017) Matthias Sperber, Jan Niehues, and Alex Waibel. 2017. Toward Robust Neural Machine Translation for Noisy Input Sequences. In IWSLT.
- Vaibhav et al. (2019) Vaibhav, Sumeet Singh, Craig Stewart, and Graham Neubig. 2019. Improving Robustness of Machine Translation with Synthetic Noise. In NAACL.
- Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In NIPS.
| Model | BLEU | BLEU |
|---|---:|---:|
| *Berard et al. (2019)* | | |
| WMT (Inline case) | – | 39.1 |
| + MTNT domain adaptation | – | 44.3 |
| *Our models (single)* | | |
| UGC (Inline case) | 29.3 | 41.6 |
| UGC BT + FT | 33.7 | 44.5 |
| UGC BT PE + tags | 33.7 | 44.9 |
| Nat noise BT + FT | 33.8 | 44.6 |
| Nat noise BT PE + tags | 33.4 | 44.9 |
Large-scale monolingual evaluation
We conducted a larger-scale monolingual evaluation using Amazon Mechanical Turk (AMT), as reported in Table 15, covering the translations of 1800 test sentences. To filter out poor-quality work, which in our experience is frequent, we also created gold questions: we selected 40 additional sentences and built 3 fake translations for each, whose ranking was intentionally unambiguous and easy. We grouped sentences into HITs (Human Intelligence Tasks) of 10 sentences each, of which 3 were gold questions. Workers were also required to have at least a 98% task approval rate on AMT and at least 1000 approved tasks. We aimed for 6 submissions per HIT, from 6 different workers. Compared to the in-house evaluation, the inter-judge agreement was low (Kappa of 0.15).
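The paper does not state which variant of the kappa statistic was used; as an illustrative sketch (not the authors' code), Cohen's kappa for one pair of judges can be computed as follows, with per-sentence labels such as "A better", "tie", "B better" (Fleiss' kappa would generalize this to all six workers):

```python
from collections import Counter

def cohens_kappa(judge_a, judge_b):
    """Chance-corrected agreement between two judges' label sequences."""
    assert len(judge_a) == len(judge_b) and judge_a
    n = len(judge_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(judge_a, judge_b)) / n
    # Expected agreement if each judge labelled independently,
    # following their own marginal label distribution.
    ca, cb = Counter(judge_a), Counter(judge_b)
    p_e = sum(ca[k] * cb[k] for k in ca.keys() & cb.keys()) / n ** 2
    return (p_o - p_e) / (1 - p_e)
```

On this scale, the reported value of 0.15 corresponds to only slight agreement beyond chance, compared to near-perfect agreement at 1.0.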
| Comparison | A wins | Ties | B wins |
|---|---:|---:|---:|
| Tags + noise (A) vs. Tags (B) | 1939 | 7414 | 1667 |
| Tags + noise (A) vs. Base (B) | 2718 | 6108 | 2178 |
| Tags + noise (A) vs. GT (B) | 3008 | 5801 | 2173 |
The two human evaluations agree with each other and are consistent with the BLEU results, except on the impact of natural noise, for which the AMT evaluation found a significant improvement.
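The significance of pairwise preference counts like those reported above can be checked with a sign test. This is an illustrative addition, not part of the paper's analysis; it uses the usual convention of discarding ties and a normal approximation to the binomial null distribution:

```python
from math import erfc, sqrt

def sign_test_p(wins_a, wins_b):
    """Two-sided sign test p-value (normal approximation), treating
    each non-tied judgement as a Bernoulli(0.5) trial under H0."""
    n = wins_a + wins_b
    if n == 0:
        return 1.0
    z = abs(wins_a - wins_b) / sqrt(n)  # standardized excess of wins
    return erfc(z / sqrt(2))            # P(|Z| > z) for standard normal Z

# Closest comparison from the AMT evaluation: Tags + noise vs. Tags.
p = sign_test_p(1939, 1667)
```

Even this closest comparison comes out highly significant under the test, since the tie counts are excluded and the win margins are large relative to the number of non-tied judgements.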
| | |
|---|---|
| SRC | On s’y sent comme a la maison ! <s> Équipe de serveurs très |
| REF | It feels like home!! <s> Team of waiters very nice! <s> Taste the burger LE Retour d’Hervé, it’s to die for :-) |

| | |
|---|---|
| SRC | Je conseille le crumble fraise/rhubarbe CHAUD. <s> C’est délicieux !! |
| REF | I recommend the strawberry/rhubard crumble HOT. <s> It’s delicious!! |
| Type | Bakery, Breakfast Spot |

| | |
|---|---|
| SRC | Très bons burgers, cheesecake à tomber par terre.... <s> Sans oublier <NAME>, <NAME> et <NAME> en un mot CHAR-MANTS! |
| REF | Very good burgers, cheesecake to die for... <s> Not to mention <NAME>, <NAME> and <NAME>: in a word CHAR-MING! |

| | |
|---|---|
| SRC | Friterie sympathique collée au Grand Boulevards. <s> On retrouve les incontournables frites belges. <s> Elle sont DELICIEUSESEMENT grosses comme on aiment :) a tester. <s> Ouverture tardive le we. |
| REF | Friendly chip shop stuck to Grand Boulevards. <s> We find the essential Belgian fries. <s> They are DELICIOUSLY big as we like them :) to test. <s> Late opening on the weekend. |
| Type | Belgian Restaurant, Fast Food Restaurant |

| | |
|---|---|
| SRC | Que de bon souvenir , fillet de boeuf au patte. <s> Merci pour l accueille Mr <NAME> |
| REF | Great memories, beef fillet with pasta. <s> Thank you for being so welcoming Mr <NAME> |
| Type | Café, Pizza Place |

| | |
|---|---|
| SRC | La carte est souvent enrichie. <s> La gérance est top. |
| REF | The menu is often supplemented. <s> The management is top notch. |
| Location | Sid’Bou Said, TN |
| | |
|---|---|
| SRC | Le meilleur resto de Belleville, DE LOIN! |
| REF | The best restaurant in Belleville, BY FAR! |
| Cased | Best restaurant in Belleville, DE LOIN! |
| Inline case | The best restaurant in Belleville, BY FAR! |

| | |
|---|---|
| SRC | ESCALOPE DE VEAU MONTAGNARDE à tomber, et à ne plus pouvoir se lever de sa chaise |
| REF | ESCALOPE DE VEAU MONTAGNARDE is an absolute knock out and you’ll have difficulty recovering |
| Cased | Falling down and not being able to get up from his chair |
| Inline case | ESCALOPE OF MOUNTAIN CALF to fall, and not be able to rise from his chair |
| | |
|---|---|
| SRC | Bcp de choix, peut-être Trop :-) |
| REF | Plenty of choice, maybe too much of it :-) |
| Inline case | Bcp of choice, maybe Too much :-) |
| Natural noise | A lot of choices, maybe Too much :-) |

| | |
|---|---|
| Inline case | Service loooooonnnng. |
| Natural noise | Long service. |
| | |
|---|---|
| SRC | Carte attractive et pas excessive. |
| REF | Nice menu and not over the top. |
| Inline case | Attractive and not excessive card. |
| BT + FT | Attractive menu and not excessive. |

| | |
|---|---|
| SRC | Cuisine pas originale, service passable, mais l’endroit est joli ! |
| REF | Not very original food, acceptable service, but the place itself is beautiful! |
| Inline case | Not an original kitchen, fair service, but the place is nice! |
| BT + FT | Food not original, service passable, but the place is nice! |
| | | |
|---|---|---|
| SRC | Les frittes boff mais leurs burger, une tuerie! | Typo and slang (“bof”) |
| REF | The fries are meh, but the burgers, to die for! | |
| MT | The fries are great but their burgers are to die for! | |
| SRC | Le merveilleux du Merveilleux c’est merveilleux... | “merveilleux” is a pastry, “Merveilleux” is a pastry shop (named entity) |
| REF | The merveilleux at Merveilleux is marvelous... | |
| MT | The wonderful of the Wonderful it’s wonderful... | |
| SRC | La souris d’agneau est délicieuse ! | Dish name (translated literally) |
| REF | The lamb shank is delicious! | |
| MT | The lamb mouse is delicious! | |
| SRC | La quantité 5 raviolis qui se battent pour 12.70 euros. | Idiomatic expression (“qui se battent en duel”) |
| REF | Poor quantity, 5 raviolis or so for 12.70 Euros. | |
| MT | The quantity 5 dumplings that fight for 12.70 euros. | |
| SRC | Après le palais du facteur nous voici à la halte qui est un restaurant correct. | Named entities (“Palais Idéal du Facteur Cheval” and “La Halte du Facteur”) |
| REF | After the Palais du Facteur we stopped at La Halte, which is a reasonable restaurant. | |
| MT | After the mailman’s palace here we are at the rest stop which is a decent restaurant. | |