Neural machine translation (NMT) (Sutskever et al., 2014; Kalchbrenner and Blunsom, 2013; Bahdanau et al., 2014) has achieved great success, arguably reaching human parity (Hassan et al., 2018) on Chinese-to-English news translation, which has driven its popularity and adoption in academia and industry. These models are predominantly trained and evaluated on sentence-level parallel corpora. Document-level machine translation, which requires capturing context to accurately translate sentences, has recently been gaining popularity and was selected as one of the main tasks at the premier machine translation conference, WMT19 (Barrault et al., 2019).
The straightforward approach of translating a document by translating its sentences in isolation produces syntactically valid but inconsistent text. The inconsistency results from the model being unable to resolve ambiguity with consistent choices across the document. For example, the recent NMT system that achieved human parity (Hassan et al., 2018) inconsistently used three different names, "Twitter Move Car", "WeChat mobile", and "WeChat move", when referring to the same entity (Sennrich, 2018).
To tackle this issue, the majority of previous approaches (Jean et al., 2017; Wang et al., 2017; Kuang et al., 2017; Tiedemann and Scherrer, 2017; Maruf and Haffari, 2018; Agrawal et al., 2018; Zhang et al., 2018; Xiong et al., 2018; Miculicich et al., 2018; Voita et al., 2019a, b; Jean et al., 2019; Junczys-Dowmunt, 2019) proposed context-conditional NMT models trained on document-level data. However, none of these approaches can exploit NMT models already trained on sentence-level parallel corpora; they all require training specialized context-conditional NMT models for document-level machine translation.
We propose a way of incorporating context into a trained sentence-level neural machine translation model at decoding time. We process each document monotonically from left to right, one sentence at a time, and self-train the sentence-level NMT model on its own generated translations. This procedure reinforces the choices made by the model and hence increases the chance that the same choices are made in the remaining sentences of the document. Our approach does not require training a separate context-conditional model on parallel document-level data and allows us to capture document context using an already trained sentence-level model.
Our key contribution is the first document-level neural machine translation approach that does not require training a context-conditional model on document-level data. We show how to adapt a trained sentence-level neural machine translation model to capture document context during decoding. We evaluate our approach on several document-level machine translation tasks, including the NIST Chinese-English, WMT19 Chinese-English, and OpenSubtitles English-Russian datasets, and demonstrate improvements as measured by BLEU score and by the preferences of human annotators. We qualitatively analyze the decoded sentences produced by our approach and show that they indeed capture context.
2 Proposed Approach
We translate a document consisting of source sentences $x^1, \dots, x^N$ into the target language, given a well-trained sentence-level neural machine translation model $p_\theta$. The sentence-level model parametrizes a conditional distribution $p_\theta(y_t \mid y_{<t}, x)$ of each target word $y_t$ given the preceding words $y_{<t}$ and the source sentence $x$. Decoding is done by approximately finding $\hat{y} = \operatorname{argmax}_{y} \log p_\theta(y \mid x)$ using greedy decoding or beam search.
The model $p_\theta$ is typically a recurrent neural network with attention (Bahdanau et al., 2014) or a Transformer (Vaswani et al., 2017), with parameters $\theta$.
2.1 Self-training during decoding

We start by translating the first source sentence $x^1$ in the document into the target sentence $\hat{y}^1$. We then self-train the model on the sentence pair $(x^1, \hat{y}^1)$, maximizing the log probability of each word in the generated sentence $\hat{y}^1$ given the source sentence $x^1$. The self-training procedure runs gradient descent for a fixed number of steps with weight decay, which keeps the updated weights close to their original values. We repeat the same update process for the remaining sentences in the document. The detailed implementation of the self-training procedure during decoding is shown in Algorithm 1.
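The decoding-time self-training loop described above can be sketched as follows. `ToyNMT` and its `translate`/`self_train` methods are illustrative stand-ins, not the paper's actual Transformer or optimizer: here the "model" is just a per-word softmax over candidate translations, which is enough to show how reinforcing the model's own choice biases later sentences toward the same choice.

```python
import math

class ToyNMT:
    """Toy stand-in for a sentence-level NMT model: an independent softmax
    over candidate translations per source word. Purely illustrative."""
    def __init__(self, lexicon):
        # lexicon: {source word: {candidate translation: weight}}
        self.weights = {s: dict(c) for s, c in lexicon.items()}
        self.anchor = {s: dict(c) for s, c in lexicon.items()}  # original weights

    def translate(self, sentence):
        return [max(self.weights[w], key=self.weights[w].get) for w in sentence]

    def self_train(self, src, tgt, lr=0.5, decay=0.1):
        # One gradient step on log p(tgt | src) for each word's softmax, with
        # weight decay pulling the weights back toward their original values.
        for s, t in zip(src, tgt):
            opts = self.weights[s]
            z = sum(math.exp(v) for v in opts.values())
            for o in opts:
                p = math.exp(opts[o]) / z
                grad = (1.0 if o == t else 0.0) - p
                opts[o] += lr * grad - decay * (opts[o] - self.anchor[s][o])

def self_train_decode(model, document, n_steps=2):
    """Sketch of the decoding loop: translate sentences left to right,
    self-training on each (source, generated translation) pair before
    moving to the next sentence."""
    translations = []
    for src in document:
        tgt = model.translate(src)      # decode with the current weights
        translations.append(tgt)
        for _ in range(n_steps):        # a few update steps per sentence
            model.self_train(src, tgt)
    return translations
```

After decoding a sentence, the chosen candidate's weight has grown, so the same lexical choice is more likely to be repeated later in the document.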
2.2 Multi-pass self-training
Since the document is processed in left-to-right, monotonic order, our self-training procedure cannot incorporate the choices the model has yet to make on unprocessed sentences. In order to leverage global information from the full document and further reinforce the choices made by the model across all generated sentences, we propose multi-pass document decoding with self-training. Specifically, we process the document multiple times monotonically from left to right while continuing to self-train the model.
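As a minimal self-contained sketch (the `CountingModel` stub and its `translate`/`self_train` interface are illustrative assumptions, not the paper's Transformer), multi-pass decoding simply repeats the left-to-right self-training sweep and keeps the output of the final pass:

```python
class CountingModel:
    """Minimal stand-in that counts self-training updates (hypothetical API)."""
    def __init__(self):
        self.updates = 0
    def translate(self, src):
        return src[::-1]            # toy "translation": reverse the words
    def self_train(self, src, tgt):
        self.updates += 1

def multi_pass_decode(model, document, n_passes=2):
    # Sweep the document left to right n_passes times, continuing
    # self-training throughout; return the final pass's translations.
    translations = []
    for _ in range(n_passes):
        translations = []
        for src in document:
            tgt = model.translate(src)
            translations.append(tgt)
            model.self_train(src, tgt)
    return translations
```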
2.3 Oracle self-training to upper bound performance
Since generated sentences are likely to contain errors, our self-training procedure can reinforce those errors and thus potentially hurt the performance of the model on unprocessed sentences in the document. In order to isolate the effect of imperfect translations and estimate an upper bound on performance, we evaluate our self-training procedure with ground-truth translations as targets, which we call oracle self-training. Oracle self-training is similar to the dynamic evaluation approach introduced in language modeling (Mikolov, 2012; Graves, 2013; Krause et al., 2018), where the ground-truth text serves both as input to the language model and as the target used to train it during evaluation. We do not use the oracle in multi-pass self-training, since this would be equivalent to memorizing the correct translation for each sentence in the document and regenerating it.
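A sketch of the oracle variant, with a hypothetical `RecordingModel` interface for illustration: the only change from ordinary self-training is that the update uses the reference translation rather than the model's own output.

```python
class RecordingModel:
    """Minimal stand-in (hypothetical interface): records what it is trained on."""
    def __init__(self):
        self.trained_on = []
    def translate(self, src):
        return list(src)                 # identity "translation" for illustration
    def self_train(self, src, tgt):
        self.trained_on.append((tuple(src), tuple(tgt)))

def oracle_self_train_decode(model, document, references):
    # Oracle variant of the decoding loop: still decode each sentence
    # normally, but update on the ground-truth target instead of the
    # model's own hypothesis (upper-bounds ordinary self-training).
    out = []
    for src, ref in zip(document, references):
        out.append(model.translate(src))  # decode as usual
        model.self_train(src, ref)        # train on the reference
    return out
```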
3 Related Work
Although there have been some attempts at tackling document-level neural machine translation (see, for example, the proceedings of the workshop on discourse in machine translation (Popescu-Belis et al., 2019)), it has received less attention than sentence-level neural machine translation. Prior document-level NMT approaches (Jean et al., 2017; Wang et al., 2017; Kuang et al., 2017; Tiedemann and Scherrer, 2017; Maruf and Haffari, 2018; Agrawal et al., 2018; Zhang et al., 2018; Miculicich et al., 2018) proposed different ways of conditioning NMT models on several source sentences in the document. Perhaps the closest of these approaches to our work is that of Kuang et al. (2017), who train an NMT model with a separate non-parametric cache (Kuhn and Mori, 1990) that incorporates topic information about the document. Recent approaches (Jean et al., 2019; Junczys-Dowmunt, 2019; Voita et al., 2019a) use only partially available parallel document data or monolingual document data, filling in missing context in the documents with random or generated sentences. Another line of document-level NMT work (Xiong et al., 2018; Voita et al., 2019b) proposed two-pass document decoding models inspired by the deliberation network (Xia et al., 2017) in order to incorporate target-side document context. Recently, Yu et al. (2019) proposed a novel beam search method that incorporates document context inside a noisy channel model (Shannon, 1948; Yu et al., 2017; Yee et al., 2019). Similar to our work, their approach does not require training context-conditional models on parallel document corpora, but it relies on a separate target-to-source NMT model and an unconditional language model to re-rank hypotheses of the source-to-target NMT model.
Closest to our work is the dynamic evaluation approach proposed by Mikolov (2012) and further extended by Graves (2013) and Krause et al. (2018), where a neural language model is trained at evaluation time. However, unlike language modeling, where the inputs are ground-truth targets both during training and evaluation, in machine translation ground-truth translations are not available at decoding time in practical settings. The general idea behind our approach and dynamic evaluation, storing memories in the weights of the neural network rather than as copies of neural network activations, goes back to 1970s and 1980s work on associative memory models (Willshaw et al., 1969; Kohonen, 1972; Anderson and Hinton, 1981; Hopfield, 1982) and to more recent work on fast weights (Ba et al., 2016).
Self-training (Scudder, 1965; Lee, 2013) was originally proposed to annotate unlabeled data for training supervised classifiers. It has been successfully applied to NLP tasks such as word-sense disambiguation (Yarowsky, 1995) and parsing (McClosky et al., 2006; Reichart and Rappoport, 2007; Huang and Harper, 2009). Self-training has also been used to label monolingual data to improve the performance of sentence-level statistical and neural machine translation models (Ueffing, 2006; Zhang and Zong, 2016). Recently, He et al. (2019) proposed a noisy version of self-training and showed improvements over classical self-training on machine translation and text summarization tasks. Backtranslation (Sennrich et al., 2016a) is another popular pseudo-labelling technique that utilizes target-side monolingual data to improve the performance of NMT models.
Table 1: BLEU scores on the NIST Zh-En validation (MT06) and test sets.

| Model | Architecture | MT06 | MT03 | MT04 | MT05 | MT08 |
|---|---|---|---|---|---|---|
| Wang et al. (2017) | RNNSearch | 37.76 | - | - | 36.89 | 27.57 |
| Kuang et al. (2017) | RNNSearch | - | - | 38.40 | 32.90 | 31.86 |
| Kuang et al. (2017) | Transformer | 48.14 | 48.05 | 47.91 | 48.53 | 38.38 |
| Zhang et al. (2018) | Doc Transformer | 49.69 | 50.21 | 49.73 | 49.46 | 39.69 |
| Transformer + self-train | Transformer | 49.17 | 49.46 | 50.12 | 48.67 | 41.18 |
| Transformer + self-train (+ backtranslation) | Transformer | 52.30 | 53.36 | 52.83 | 53.67 | 43.68 |
4 Experiments

We use the NIST Chinese-English (Zh-En), the WMT19 Chinese-English (Zh-En) and the OpenSubtitles English-Russian (En-Ru) datasets in our experiments.
The NIST training set consists of 1.5M sentence pairs from LDC-distributed news. We use the MT06 set as the validation set and the MT03, MT04, MT05 and MT08 sets as held-out test sets. We follow previous work (Zhang et al., 2018) when preprocessing the NIST dataset, applying punctuation normalization, tokenization, and lowercasing. Sentences are encoded using byte-pair encoding (Sennrich et al., 2016b) with source and target vocabularies of roughly 32K tokens. We use the case-insensitive multi-bleu.perl script with reference files to evaluate the model.
The WMT19 dataset includes the UN corpus, CWMT, and news commentary. We filter the training data by removing duplicate sentences and sentences longer than 250 words. The training dataset consists of 18M sentence pairs. We use newsdev2017 as the validation set and newstest2017, newstest2018 and newstest2019 as held-out test sets. We follow previous work (Xia et al., 2019) when preprocessing the dataset. Chinese sentences are preprocessed by segmenting and normalizing punctuation. English sentences are preprocessed by tokenizing and truecasing. We learn a byte-pair encoding (Sennrich et al., 2016b) with source and target vocabularies of roughly 32K tokens. We use sacreBLEU (Post, 2018) for evaluation.
The OpenSubtitles English-Russian dataset, consisting of movie and TV subtitles, was prepared by Voita et al. (2019b) (https://github.com/lena-voita/good-translation-wrong-in-context). The training dataset consists of 6M parallel sentence pairs. We use the context-aware sets provided by the authors for both validation and test. The dataset is preprocessed by tokenizing and lowercasing. We use byte-pair encoding (Sennrich et al., 2016b) to prepare source and target vocabularies of roughly 32K tokens. We use the multi-bleu.perl script for evaluation.
| Xia et al. (2019) | Transformer Big | - | - | 24.2 | 24.5 | - | - |
We train a Transformer (Vaswani et al., 2017) on all datasets. Following previous work (Zhang et al., 2018; Voita et al., 2019b; Xia et al., 2019), we use the Transformer base configuration (transformer_base) on the NIST Zh-En and the OpenSubtitles En-Ru datasets and the Transformer big configuration (transformer_big) on the WMT19 Zh-En dataset. We use dropout (Srivastava et al., 2014) and label smoothing to regularize our models. We train our models with the Adam optimizer (Kingma and Ba, 2014) using the same warm-up learning rate schedule as in Vaswani et al. (2017). During decoding, we use beam search with length penalty. We additionally train backtranslated models (Sennrich et al., 2016a) on the NIST Zh-En and the OpenSubtitles En-Ru datasets. We use the publicly available English Gigaword dataset (Graff et al., 2003) to create synthetic parallel data for the NIST Zh-En dataset and use the synthetic parallel data provided by Voita et al. (2019a) for the OpenSubtitles En-Ru dataset. When training backtranslated models, we oversample the original parallel data to balance the ratio of synthetic to original data (Edunov et al., 2018). We tune the number of update steps, the learning rate, the decay rate, and the number of passes over the document of our self-training approach with a random search on the validation set, searching over ranges for the learning rate and decay rate as well as over the number of update steps and the number of passes over the document. We found that the best performing models required a small number of update steps with a relatively large learning rate and a small decay rate. We use the Tensor2Tensor library (Vaswani et al., 2018) to train baseline models and to implement our method.
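The random search over self-training hyperparameters can be sketched as below; the search spaces shown are illustrative placeholders rather than the paper's actual ranges, and `sample_self_training_config` is a hypothetical helper name.

```python
import random

def sample_self_training_config(rng=None):
    """Draw one candidate configuration for the random search over the
    self-training hyperparameters tuned on the validation set.
    All ranges here are illustrative assumptions."""
    rng = rng or random.Random()
    return {
        "learning_rate": 10 ** rng.uniform(-5, -3),   # log-uniform sampling
        "decay_rate": 10 ** rng.uniform(-5, -3),      # weight-decay strength
        "update_steps": rng.choice([1, 2, 4, 8]),     # gradient steps per sentence
        "document_passes": rng.choice([1, 2, 3]),     # decoding passes per document
    }
```

Each sampled configuration would then be scored by decoding the validation documents with self-training and keeping the best-scoring setting.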
5 Results

We present translation quality results measured by BLEU on the NIST dataset in Table 1. The self-training procedure improves the results of our sentence-level baseline across all evaluation sets, for both the non-backtranslated and the backtranslated model. Our baseline sentence-level Transformer model trained without backtranslation outperforms the previous document-level models of Wang et al. (2017) and Kuang et al. (2017) and is comparable to the document-level model proposed by Zhang et al. (2018). Backtranslation further improves the results of our sentence-level model, leading to a higher BLEU score than the Document Transformer (Zhang et al., 2018).
In Table 2, we show a detailed study of the effects of multi-pass self-training and oracle self-training on BLEU scores on the NIST evaluation sets. First, multiple decoding passes over the document give an additional average improvement compared to a single decoding pass. Oracle self-training improves over both our non-backtranslated and backtranslated sentence-level baseline models, and gives a further improvement over self-training on the model's own generated translations for both models.
The results on the WMT19 evaluation sets are presented in Table 3. Compared to the NIST dataset, our self-training procedure shows a smaller improvement over the sentence-level baseline model, while oracle self-training outperforms the sentence-level baselines by a significant margin. We hypothesize that such a large gap between the performance of oracle and non-oracle self-training is due to the more challenging nature of the WMT dataset, which is reflected in the worse performance of the sentence-level baseline on WMT compared to NIST. We investigate this claim by measuring the relationship between the BLEU achieved by self-training and the relative quality of the sentence-level model on the NIST dataset. Figure 1 shows that the BLEU difference between the self-training and sentence-level models monotonically increases as the quality of the sentence-level model improves on the NIST dataset. This implies that we can expect a larger improvement from applying self-training as we improve the sentence-level model on the WMT dataset. Preliminary experiments on training backtranslated models did not improve results on the WMT dataset. We leave further investigation of ways to improve the sentence-level model on the WMT dataset for future work.
The results on the OpenSubtitles evaluation sets are shown in Table 4. Our self-training and oracle self-training approaches both improve performance. We hypothesize that the small improvement of self-training is due to the relatively small number of sentences in the documents of the OpenSubtitles dataset. We validate this claim by varying the number of sentences per document used for self-training on the NIST dataset. Figure 2 shows that the self-training approach achieves a higher BLEU improvement as we increase the number of sentences in the documents used for self-training.
6 Human Evaluation
Table 6: Example documents: reference (Ref), sentence-level baseline (Baseline), and our self-training approach (Ours).

| Ref | мы с эйприл развелись . как я и сказал … игра в ожидание . будь сильным . и всё получится . |
| Baseline | мы с эйприл развелись . ну , как я уже сказал … игра ожидания . будь сильной . ты справишься . |
| Ours | мы с эйприл развелись . ну , как я уже сказал … игра ожидания . будь сильным . ты справишься . |

| Ref | сёрен устраивает вечеринку по поводу своего дня рождения в субботу , а я не знаю , пойду ли я . почему бы тебе не пойти ? просто всё пошло не так . - и я поссорился с кнудом . |
| Baseline | в субботу день рождения сёрена и я не знаю , приглашена ли я . почему тебя не пригласили ? все просто пошло не так . - и я поругался с кнудом . |
| Ours | в субботу день рождения сёрена и я не знаю , приглашена ли я . почему тебя не пригласили ? все просто пошло не так . - и я поссорилась с кнудом . |

| Ref | we are actively seeking a local partner to set up a joint fund company , " duchateau said . duchateau said that the chinese market still has ample potentials . |
| Baseline | we are actively looking for a local partner to establish a joint venture fund company , " doyle said . du said that there is still a lot of room for the chinese market . |
| Ours | we are actively looking for a local partner to establish a joint venture fund company , " doyle said . doyle said that there is still great room for the chinese market . |

| Ref | in may this year , 13 pilots with china eastern airlines wuhan company in succession handed in their resignations , which were rejected by the company . soon afterwards , the pilots applied one after another at the beginning of june to the labor dispute arbitration commission of hubei province for labor arbitration , requesting for a ruling that their labor relationship with china eastern airlines wuhan company be terminated . |
| Baseline | in may this year , 13 pilots of china eastern ’s wuhan company submitted their resignations one after another , but the company refused . the pilot then applied for labor arbitration with the hubei province labor dispute arbitration committee in early june , requesting the ruling to terminate the labor relationship with the wuhan company of china eastern airlines . |
| Ours | in may this year , 13 pilots of china eastern ’s wuhan company submitted their resignations one after another , but the company refused . subsequently , in early june , the pilots successively applied for labor arbitration with the hubei province labor dispute arbitration committee , requesting that the labor relationship with china eastern airlines be terminated . |
We conduct a human evaluation study on the NIST Zh-En and the OpenSubtitles En-Ru datasets. For both datasets, we sample 50 documents from the test set for which the translated documents generated by the self-training approach are not exact copies of those generated by the sentence-level baseline model. For the NIST Zh-En dataset, we present reference documents, translated documents generated by the sentence-level baseline, and translated documents generated by the self-training approach to 4 native English speakers. For the OpenSubtitles En-Ru dataset, we follow a similar setup with 4 native Russian speakers. All translated documents are presented in random order with no indication of which approach was used to generate them, and we highlight the differences between translated documents when presenting them to human evaluators. The evaluators are asked to pick one of the two translations as their preferred option for each document, considering fluency, idiomaticity, and correctness of the translation relative to the reference.
We collect a total of 200 annotations for the 50 documents from all 4 human evaluators and show the results in Table 5. For both datasets, human evaluators prefer translated documents generated by the self-training approach to those generated by the sentence-level model. For NIST Zh-En, 122 out of 200 annotations indicate a preference towards translations generated by the self-training approach. For OpenSubtitles En-Ru, 118 out of 200 annotations similarly show a preference towards translations generated by our self-training approach. This is a statistically significant preference according to a two-sided binomial test. When aggregated for each document by majority vote, for NIST Zh-En, translations generated by the self-training approach are considered better in 25 documents, worse in 12 documents, and the same in 13 documents. For OpenSubtitles En-Ru, translations generated by the self-training approach are considered better in 23 documents, worse in 15 documents, and the same in 12 documents. The agreement between annotators for NIST Zh-En and OpenSubtitles En-Ru according to Fleiss' kappa (Fleiss, 1971) is considered fair for both datasets.
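The significance of the reported preference counts (122/200 and 118/200 under a null preference rate of 0.5) can be checked with an exact two-sided binomial test, computable with the standard library:

```python
from math import comb

def binom_test_two_sided(k, n, p=0.5):
    """Exact two-sided binomial test: sum the probabilities of all outcomes
    that are at most as likely as the observed count k under Binomial(n, p)."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(q for q in pmf if q <= pmf[k] * (1 + 1e-9))

# Annotator preferences from the human evaluation:
nist_p = binom_test_two_sided(122, 200)  # NIST Zh-En
subs_p = binom_test_two_sided(118, 200)  # OpenSubtitles En-Ru
```

Both p-values fall below the conventional 0.05 threshold, consistent with the significance claim above.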
7 Qualitative Results
In Table 6, we show four reference documents together with the translated documents generated by the baseline sentence-level model and by our self-training approach. The words under discussion are underlined in all documents.
In the first two examples we emphasize the gender of the person marked on verbs and adjectives in the translated Russian sentences. In the first example, the baseline sentence-level model inconsistently produces different gender markings on the underlined verb сказал (masculine told) and underlined adjective сильной (feminine strong). The self-training approach correctly generates a translation with consistent male gender markings on both the underlined verb сказал and the underlined adjective сильным. Similarly, in the second example, the baseline model inconsistently produces different gender markings on the underlined verbs приглашена (feminine invited) and поругался (masculine fought). Self-training consistently generates female gender markings on both the underlined verbs приглашена (feminine invited) and поссорилась (feminine fought).
In the third example, we emphasize the underlined named entity in the reference and generated translations. The baseline sentence-level model inconsistently generates the names "doyle" and "du" when referring to the same entity across two sentences in the same document. The self-training approach consistently uses the name "doyle" across both sentences when referring to the same entity. In the fourth example, we emphasize the plurality of the underlined words. The baseline model inconsistently generates both singular and plural forms when referring to the same noun in consecutive sentences. Self-training generates the noun "pilots" in the correct plural form in both sentences.
In this paper, we propose a way of incorporating the document context inside a trained sentence-level neural machine translation model using self-training. We process documents from left to right multiple times and self-train the sentence-level NMT model on the pair of source sentence and generated target sentence. This reinforces the choices made by the NMT model thus making it more likely that the choices will be repeated in the rest of the document.
We demonstrate the feasibility of our approach on three machine translation datasets: NIST Zh-En, WMT19 Zh-En and OpenSubtitles En-Ru. We show that self-training improves sentence-level baselines on all three. We also conduct a human evaluation study and show that annotators strongly prefer the translated documents generated by our self-training approach. Our analysis demonstrates that self-training achieves larger improvements on longer documents and with better sentence-level models.
In this work, we only use self-training on source-to-target NMT models in order to capture the target-side document context. One extension could investigate applying self-training to both target-to-source and source-to-target sentence-level models to incorporate both source and target document context into generated translations. Overall, we hope that our work will motivate novel approaches to making trained sentence-level models better suited for document translation at decoding time.
We would like to thank Phil Blunsom, Kris Cao, Kyunghyun Cho, Chris Dyer, Wojciech Stokowiec and members of the Language team for helpful suggestions.
- Agrawal et al. (2018) Ruchit Agrawal, Marco Turchi, and Matteo Negri. 2018. Contextual handling in neural machine translation: Look behind, ahead and on both sides.
- Anderson and Hinton (1981) James A Anderson and Geoffrey E Hinton. 1981. Models of information processing in the brain. Parallel models of associative memory.
- Ba et al. (2016) Jimmy Ba, Geoffrey E. Hinton, Volodymyr Mnih, Joel Z. Leibo, and Catalin Ionescu. 2016. Using fast weights to attend to the recent past. In NIPS.
- Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
- Barrault et al. (2019) Loïc Barrault, Ondřej Bojar, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias Müller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In ACL.
- Edunov et al. (2018) Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In EMNLP.
- Fleiss (1971) J.L. Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin.
- Graff et al. (2003) David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2003. English gigaword. Linguistic Data Consortium, Philadelphia, 4(1):34.
- Graves (2013) Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850.
- Hassan et al. (2018) Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, Shujie Liu, Tie-Yan Liu, Renqian Luo, Arul Menezes, Tao Qin, Frank Seide, Xu Tan, Fei Tian, Lijun Wu, Shuangzhi Wu, Yingce Xia, Dongdong Zhang, Zhirui Zhang, and Ming Zhou. 2018. Achieving human parity on automatic chinese to english news translation. arXiv preprint arXiv:1803.05567.
- He et al. (2019) Junxian He, Jiatao Gu, Jiajun Shen, and Marc’Aurelio Ranzato. 2019. Revisiting self-training for neural sequence generation. arXiv preprint arXiv:1909.13788.
- Hopfield (1982) J J Hopfield. 1982. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences.
- Huang and Harper (2009) Zhongqiang Huang and Mary Harper. 2009. Self-training pcfg grammars with latent annotations across languages. In EMNLP.
- Jean et al. (2019) Sebastien Jean, Ankur Bapna, and Orhan Firat. 2019. Fill in the blanks: Imputing missing sentences for larger-context neural machine translation. arXiv preprint arXiv:1910.14075.
- Jean et al. (2017) Sebastien Jean, Stanislas Lauly, Orhan Firat, and Kyunghyun Cho. 2017. Does neural machine translation benefit from larger context? arXiv preprint arXiv:1704.05135.
- Junczys-Dowmunt (2019) Marcin Junczys-Dowmunt. 2019. Microsoft translator at wmt 2019: Towards large-scale document-level neural machine translation. In WMT.
- Kalchbrenner and Blunsom (2013) Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In EMNLP.
- Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- Kohonen (1972) Teuvo Kohonen. 1972. Correlation matrix memories. IEEE Transactions on Computers.
- Krause et al. (2018) Ben Krause, Emmanuel Kahembwe, Iain Murray, and Steve Renals. 2018. Dynamic evaluation of neural sequence models. In ICML.
- Kuang et al. (2017) Shaohui Kuang, Deyi Xiong, Weihua Luo, and Guodong Zhou. 2017. Modeling coherence for neural machine translation with dynamic and topic caches. arXiv preprint arXiv:1711.11221.
- Kuhn and Mori (1990) Roland Kuhn and Renato De Mori. 1990. A cache-based natural language model for speech recognition. In PAMI.
- Lee (2013) Dong-Hyun Lee. 2013. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In ICML 2013 Workshop: Challenges in Representation Learning (WREPL).
- Maruf and Haffari (2018) Sameen Maruf and Gholamreza Haffari. 2018. Document context neural machine translation with memory networks. In ACL.
- McClosky et al. (2006) David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In ACL.
- Miculicich et al. (2018) Lesly Miculicich, Dhananjay Ram, Nikolaos Pappas, and James Henderson. 2018. Document-level neural machine translation with hierarchical attention networks. In EMNLP.
- Mikolov (2012) Tomas Mikolov. 2012. Statistical language models based on neural networks. Ph.D. thesis, Brno University of Technology.
- Popescu-Belis et al. (2019) Andrei Popescu-Belis, Sharid Loáiciga, Christian Hardmeier, and Deyi Xiong. 2019. Proceedings of the Fourth Workshop on Discourse in Machine Translation (DiscoMT 2019). https://www.aclweb.org/anthology/D19-65.pdf.
- Post (2018) Matt Post. 2018. A call for clarity in reporting BLEU scores. In WMT.
- Reichart and Rappoport (2007) Roi Reichart and Ari Rappoport. 2007. Self-training for enhancement and domain adaptation of statistical parsers trained on small datasets. In ACL.
- Scudder (1965) H. Scudder. 1965. Probability of error of some adaptive pattern-recognition machines. IEEE Trans. Inf. Theor.
- Sennrich (2018) Rico Sennrich. 2018. Why the Time Is Ripe for Discourse in Machine Translation. http://homepages.inf.ed.ac.uk/rsennric/wnmt2018.pdf.
- Sennrich et al. (2016a) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In ACL.
- Sennrich et al. (2016b) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In ACL.
- Shannon (1948) Claude E. Shannon. 1948. A mathematical theory of communication. Bell Syst. Tech. J.
- Srivastava et al. (2014) Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. JMLR.
- Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In NIPS.
- Tiedemann and Scherrer (2017) Jörg Tiedemann and Yves Scherrer. 2017. Neural machine translation with extended context. In Proceedings of the Third Workshop on Discourse in Machine Translation.
- Ueffing (2006) Nicola Ueffing. 2006. Using monolingual source-language data to improve mt performance. In IWSLT.
- Vaswani et al. (2018) Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan N. Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, and Jakob Uszkoreit. 2018. Tensor2tensor for neural machine translation. CoRR, abs/1803.07416.
- Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.
- Voita et al. (2019a) Elena Voita, Rico Sennrich, and Ivan Titov. 2019a. Context-aware monolingual repair for neural machine translation. In EMNLP.
- Voita et al. (2019b) Elena Voita, Rico Sennrich, and Ivan Titov. 2019b. When a good translation is wrong in context: Context-aware machine translation improves on deixis, ellipsis, and lexical cohesion. In ACL.
- Wang et al. (2017) Longyue Wang, Zhaopeng Tu, Andy Way, and Qun Liu. 2017. Exploiting cross-sentence context for neural machine translation. In EMNLP.
- Willshaw et al. (1969) David J Willshaw, O Peter Buneman, and Hugh Christopher Longuet-Higgins. 1969. Non-holographic associative memory. Nature.
- Xia et al. (2019) Yingce Xia, Xu Tan, Fei Tian, Fei Gao, Di He, Weicong Chen, Yang Fan, Linyuan Gong, Yichong Leng, Renqian Luo, Yiren Wang, Lijun Wu, Jinhua Zhu, Tao Qin, and Tie-Yan Liu. 2019. Microsoft research asia’s systems for WMT19. In WMT.
- Xia et al. (2017) Yingce Xia, Fei Tian, Lijun Wu, Jianxin Lin, Tao Qin, Nenghai Yu, and Tie-Yan Liu. 2017. Deliberation networks: Sequence generation beyond one-pass decoding. In NIPS.
- Xiong et al. (2018) Hao Xiong, Zhongjun He, Hua Wu, and Haifeng Wang. 2018. Modeling coherence for discourse neural machine translation. arXiv preprint arXiv:1811.05683.
- Yarowsky (1995) David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In ACL.
- Yee et al. (2019) Kyra Yee, Nathan Ng, Yann N. Dauphin, and Michael Auli. 2019. Simple and effective noisy channel modeling for neural machine translation. In EMNLP.
- Yu et al. (2017) Lei Yu, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Tomas Kocisky. 2017. The neural noisy channel. In ICLR.
- Yu et al. (2019) Lei Yu, Laurent Sartran, Wojciech Stokowiec, Wang Ling, Lingpeng Kong, Phil Blunsom, and Chris Dyer. 2019. Putting machine translation in context with the noisy channel model. arXiv preprint arXiv:1910.00553.
- Zhang et al. (2018) Jiacheng Zhang, Huanbo Luan, Maosong Sun, Feifei Zhai, Jingfang Xu, Min Zhang, and Yang Liu. 2018. Improving the transformer translation model with document-level context. In EMNLP.
- Zhang and Zong (2016) Jiajun Zhang and Chengqing Zong. 2016. Exploiting source-side monolingual data in neural machine translation. In EMNLP.