Aligning Very Small Parallel Corpora Using Cross-Lingual Word Embeddings and a Monogamy Objective

October 31, 2018 · Nina Poerner et al., Universität München

Count-based word alignment methods, such as the IBM models or fast-align, struggle on very small parallel corpora. We therefore present an alternative approach based on cross-lingual word embeddings (CLWEs), which are trained on purely monolingual data. Our main contribution is an unsupervised objective to adapt CLWEs to parallel corpora. In experiments on parallel corpora of between 25 and 500 sentences, our method outperforms fast-align. We also show that our fine-tuning objective consistently improves a CLWE-only baseline.


1 Introduction

Some parallel corpora, such as the Universal Declaration of Human Rights, are too small for count-based word alignment algorithms to be applied effectively.

Sabet et al. (2016) show that integrating monolingual word embeddings into IBM Model 1 Brown et al. (1990) decreases the word alignment error rate on a parallel corpus of 1,000 sentences. Pourdamghani et al. (2018) exploit monolingual embedding similarity scores to create synthetic training data for Statistical Machine Translation (SMT), and report an increase in alignment F1.

Recent advances have made it possible to create cross-lingual word embeddings (CLWEs) from purely monolingual data (Zhang et al., 2017a,b; Conneau et al., 2017; Artetxe et al., 2018a). We propose to leverage such CLWEs for a similarity-based word alignment method, which works on corpora as small as 25 sentences. Like Sabet et al. (2016), our method relies only on monolingual data (to train the embeddings) and on the small parallel corpus itself.

Our CLWE-only baseline aligns source and target words in a parallel corpus if their CLWEs have maximum cosine similarity. This approach is independent of the size of the parallel corpus, but it has the following problems:

  • Semantics may differ between the embedding training domain and the parallel corpus.

  • CLWEs sometimes fail to discriminate between words with similar contexts, e.g., antonyms.

We therefore propose to fine-tune the CLWEs on the small parallel corpus using an unsupervised embedding monogamy objective. To evaluate the proposed method, we simulate sparse data settings using Europarl sentences and Bible verses. Our method outperforms the count-based fast-align model Dyer et al. (2013) for corpus sizes of up to 500 (resp., 250) sentences. The proposed fine-tuning method improves over the CLWE-only baseline in terms of both precision and recall.

Figure 1: Schematic representation of the monogamy objective. a) One-to-one (“monogamous”) alignment: minimal loss. b) Many-to-many alignment: maximal loss. c) One-to-many alignment: intermediate loss. d) Minimizing the loss means making the red nodes more similar to each other, and less similar to the white nodes.

2 Method

2.1 CLWE-only baseline

Our CLWE-only baseline uses a cross-lingual embedding space derived from purely monolingual data Artetxe et al. (2018a). Let $C$ be our small corpus, and let $s = s_1 \ldots s_n$ (source) and $t = t_1 \ldots t_m$ (target) be parallel sentences from $C$. Let $x_i$ and $y_j$ be the embedding vectors of tokens $s_i$ and $t_j$. We align $s_i$ to the target token $t_j$ with maximal $\cos(x_i, y_j)$. Any ties are broken by proximity to the diagonal of the alignment matrix.
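To make this concrete, below is a minimal NumPy sketch of the maximum-cosine alignment with diagonal tie-breaking. The array names and the exact tie-breaking distance are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cosine_matrix(src_emb: np.ndarray, tgt_emb: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarities between source and target tokens, shape (n, m)."""
    s = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    t = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    return s @ t.T

def clwe_only_alignment(src_emb, tgt_emb):
    """Align each source token to its most similar target token.
    Ties are broken by proximity to the diagonal of the alignment matrix."""
    sim = cosine_matrix(src_emb, tgt_emb)
    n, m = sim.shape
    links = []
    for i in range(n):
        candidates = np.flatnonzero(sim[i] == sim[i].max())
        # Tie-break: prefer the candidate whose relative position is closest to i's.
        j = min(candidates, key=lambda j: abs(i / n - j / m))
        links.append((i, int(j)))
    return links
```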

2.2 Fine-tuning method

Intuition.

Assume that we have the following sentence pair: “aaa bbb xxx” (source) and “111 000 222” (target). Assume further that we know from CLWEs that aaa ↔ 111 and bbb ↔ 222, but that we lack informative embeddings for 000 and xxx. We may hypothesize that xxx ↔ 000, as they are the only tokens that lack translations. We may also hypothesize that xxx does not correspond to 111 or 222, as 111 and 222 already have translations of their own.

In the following, we will refer to this principle as embedding monogamy. We assume that in the absence of evidence to the contrary, a source embedding should have

  • high similarity to one target embedding

  • low similarity to other target embeddings. (Of course, this assumption is over-simplistic, as one-to-n alignments exist; e.g., English not should be similar to both French ne and pas.)

This principle is related to IBM Model 1 Brown et al. (1990), where Expectation Maximization increases the translation probability of $t_j$ given $s_i$ if $s_i$ and $t_j$ co-occur in sentences where $t_j$ is not explained by other source words.

Embedding monogamy objective.

We define the probability of target token $t_j$ given source token $s_i$ as

$p(t_j \mid s_i) = \dfrac{\exp(\cos(x_i, y_j) / T)}{\sum_{j'=1}^{m} \exp(\cos(x_i, y_{j'}) / T)}$   (1)

where $T$ is a temperature hyperparameter. This definition is similar to the definition of translation probability in Artetxe et al. (2018b) and Lample et al. (2018). But while they normalize over the vocabulary, we normalize over the target sentence. As a consequence, the probability of $t_j$ given $s_i$ depends not only on $\cos(x_i, y_j)$, but also on competitor tokens in $t$.
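The following NumPy sketch computes this sentence-normalized translation probability as reconstructed in Eq. (1). The temperature value is a placeholder assumption, not the paper's setting.

```python
import numpy as np

def translation_probs(src_emb, tgt_emb, temperature=0.1):
    """p(t_j | s_i): softmax over the *target sentence* of cosine similarities
    scaled by a temperature T (Eq. 1). Rows index source tokens and sum to 1."""
    s = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    t = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    logits = (s @ t.T) / temperature             # shape (n, m)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)
```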

With these translation probabilities, we model a two-step random walker that starts at a source token $s_i$, steps to a random target word, and then steps back to a source word. Writing $P^{s \to t}_{ij} = p(t_j \mid s_i)$ and $P^{t \to s}_{ji} = p(s_i \mid t_j)$, the round-trip probabilities are given by $R = P^{s \to t} P^{t \to s}$. To maximize monogamy, we maximize the entries on the diagonal of $R$, i.e., the probability of the walker returning to its origin. To avoid penalizing long sentences, we minimize the negative logarithm to the base of the source sentence length: $L_{s \to t} = -\frac{1}{n} \sum_{i=1}^{n} \log_n R_{ii}$. This loss has the following properties:

  • In a fully “monogamous” situation (see Figure 1 a), $L_{s \to t} = 0$.

  • In a situation where all source words are equidistant from all target words (see Figure 1 b), $L_{s \to t} = 1$.

Reversing the roles of source and target results in the bidirectional loss $L_{\mathrm{mono}} = L_{s \to t} + L_{t \to s}$. Both terms are necessary, since a given alignment may appear highly monogamous from the perspective of one sentence but not the other (especially when there are left-over words due to a difference in length).
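Below is a minimal NumPy sketch of this bidirectional monogamy loss, under the reconstruction above (per-token averaging, logarithm to the base of the sentence length). It illustrates the objective's idealized behavior (a per-direction loss of 0 for a hard one-to-one alignment, 1 for a uniform one); it is not the authors' implementation.

```python
import numpy as np

def sentence_softmax(a_emb, b_emb, temperature=0.1):
    """Row-wise softmax over cosine similarities, normalized over the second sentence."""
    a = a_emb / np.linalg.norm(a_emb, axis=1, keepdims=True)
    b = b_emb / np.linalg.norm(b_emb, axis=1, keepdims=True)
    logits = (a @ b.T) / temperature
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def monogamy_loss(src_emb, tgt_emb, temperature=0.1):
    """L_mono = L_{s->t} + L_{t->s}; assumes sentence lengths > 1."""
    p_st = sentence_softmax(src_emb, tgt_emb, temperature)  # p(t_j | s_i), shape (n, m)
    p_ts = sentence_softmax(tgt_emb, src_emb, temperature)  # p(s_i | t_j), shape (m, n)

    def one_direction(p_ab, p_ba, length):
        round_trip = p_ab @ p_ba                 # (length, length) round-trip matrix R
        # mean negative log (base `length`) of the return probabilities R_ii
        return -np.mean(np.log(np.diag(round_trip)) / np.log(length))

    n, m = p_st.shape
    return one_direction(p_st, p_ts, n) + one_direction(p_ts, p_st, m)
```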

Adding position information.

At this point, our objective ignores word positions, which we know to be useful from count-based methods (e.g., Dyer et al., 2013). Therefore, we add position embeddings inside the translation probability equation:

$p(t_j \mid s_i) = \dfrac{\exp(\cos(x_i + \pi_i,\, y_j + \pi_j) / T)}{\sum_{j'=1}^{m} \exp(\cos(x_i + \pi_i,\, y_{j'} + \pi_{j'}) / T)}$

where $\pi_i$ is a sinusoid embedding vector for position $i$ Vaswani et al. (2017). As a result, word pairs near the diagonal have higher round-trip probabilities initially. Since the monogamy objective aims to strengthen strong links, similar position embeddings act as attractors for the non-positional embeddings. Note that we use only the non-positional embeddings for alignment, as the position prior is too strong at test time.
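For reference, here is a standard sinusoid position embedding in NumPy, following Vaswani et al. (2017). How the position vectors enter the similarity (here, added to the word embeddings before the cosine, as in the reconstruction above) is an assumption; the alignment itself uses only the non-positional embeddings.

```python
import numpy as np

def sinusoid_position_embeddings(length, dim):
    """Sinusoid position embeddings: even dimensions use sin, odd dimensions use cos,
    with geometrically spaced wavelengths. Assumes an even embedding dimension."""
    assert dim % 2 == 0
    pos = np.arange(length)[:, None]               # (length, 1)
    i = np.arange(dim // 2)[None, :]               # (1, dim / 2)
    angles = pos / np.power(10000.0, 2.0 * i / dim)
    emb = np.zeros((length, dim))
    emb[:, 0::2] = np.sin(angles)
    emb[:, 1::2] = np.cos(angles)
    return emb

# Hypothetical usage: add position embeddings to the word embeddings during fine-tuning,
# e.g. src_in = src_emb + sinusoid_position_embeddings(len(src_emb), src_emb.shape[1]),
# but align with the non-positional embeddings at test time.
```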

Alignment retention objective.

In initial experiments, we found that the monogamy objective increases recall but risks losing precision, relative to the CLWE-only baseline. Therefore, we add an additional objective that aims to increase the round-trip probability of alignments made by the baseline, but does not influence unaligned words:

$L_{\mathrm{ret}} = -\dfrac{1}{|A|} \sum_{(i,j) \in A} \log\big(p(t_j \mid s_i)\, p(s_i \mid t_j)\big)$

where $A$ is the intersection of the $s$-to-$t$ and $t$-to-$s$ alignments made with the initial CLWEs (see Section 2.1). Our final loss function is $L = L_{\mathrm{mono}} + L_{\mathrm{ret}}$.
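A minimal sketch of this retention term, assuming the mean negative-log form written above; `p_st`, `p_ts`, and `anchor_links` are hypothetical names for the sentence-level translation probability matrices and the intersection alignment $A$.

```python
import numpy as np

def retention_loss(p_st, p_ts, anchor_links):
    """Push up the round-trip probability p(t_j | s_i) * p(s_i | t_j) for every
    link (i, j) produced by the initial CLWE-only alignment; unaligned words
    are left untouched."""
    if not anchor_links:
        return 0.0
    logs = [np.log(p_st[i, j] * p_ts[j, i]) for i, j in anchor_links]
    return -float(np.mean(logs))

# Final loss as described in the text (no extra weighting assumed):
# total_loss = monogamy_loss(src_emb, tgt_emb) + retention_loss(p_st, p_ts, A)
```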

Figure 2: Alignment precision, recall and F1 as a function of corpus size.

3 Evaluation

We evaluate our model on subsets of different sizes from the English-German Europarl gold alignments (www-i6.informatik.rwth-aachen.de/goldAlignment/) and the French-English Bible gold alignments Melamed (1998) (nlp.cs.nyu.edu/blinker/). We consider links with inter-annotator agreement as sure, all others as possible. We initialize CLWEs with the unsupervised algorithm of Artetxe et al. (2018a), applied to monolingual FastText embeddings Bojanowski et al. (2017) (fasttext.cc; top 200,000 words per language). Fine-tuning is done in Keras with the Adam optimizer Kingma and Ba (2014), using a fixed temperature $T$ and dropout applied to the embeddings.

We use fast-align Dyer et al. (2013) as a count-based baseline, since it outperformed the IBM models in initial experiments. We symmetrize alignments by either intersection or the grow-diag-final-and (GDFA) heuristic Koehn et al. (2007). We train fast-align and our fine-tuning method for 500 iterations.
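For readers unfamiliar with symmetrization, a small sketch of the intersection heuristic (GDFA additionally grows the intersection with neighbouring links from the union, which is omitted here). The link-list format is an illustrative assumption.

```python
def symmetrize_intersection(fwd_links, rev_links):
    """Keep a link (i, j) only if the source-to-target aligner proposes i -> j
    AND the target-to-source aligner proposes j -> i."""
    return sorted(set(fwd_links) & {(i, j) for j, i in rev_links})

# Hypothetical usage with links as (source_index, target_index) pairs:
# fwd = [(0, 0), (1, 2), (2, 1)]     # source -> target aligner
# rev = [(0, 0), (2, 1)]             # target -> source aligner, stored as (target, source)
# symmetrize_intersection(fwd, rev)  # -> [(0, 0), (1, 2)]
```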

4 Discussion

4.1 Corpus size

The performance of fast-align is highly dependent on corpus size, which is not surprising, since it has to infer word semantics from the small corpus alone. The CLWE-only baseline, on the other hand, is independent of corpus size, resulting in decent performance even on 25 parallel sentences. Importantly, the positive effect of our fine-tuning method appears to be robust to corpus size, as we see improvements in F1 for all sizes.

4.2 Benefits of fine-tuning

We find that the proposed fine-tuning method has a positive effect on alignment precision and recall, relative to the CLWE-only baseline. We assess some sentence pairs qualitatively to find reasons for this improvement:

Figure 3: Similarity matrices before (left) and after (right) fine-tuning. Red dots: our alignment (intersection). White squares: sure gold alignments. Empty white squares: possible gold alignments.

Resolution of ambiguities.

Word embeddings sometimes fail to differentiate between words with similar contexts, such as antonyms. In Figure 3 (top), our fine-tuning method resolves such an ambiguity: Here, the initial CLWE of answer is slightly more similar to German frage (= question) than to the true translation antwort. Since frage already has a round trip partner, the monogamy objective pushes answer away from frage, resulting in the addition of a correct alignment between answer and antwort.

In-domain word translations.

Since word embeddings are trained on general-purpose corpora, CLWEs can fail to reflect domain-specific word translations. One such example is the translation of lord as French éternel (≈ “eternal one”) in Figure 3 (bottom). While this translation is common in this particular Bible version, the CLWEs do not reflect it well. Through fine-tuning, and due to their frequent co-occurrence in the small corpus, the similarity between éternel and lord increases enough for a successful alignment.

5 Use case: Aligning the UDHR

In practice, our method would not be applied to English-German or English-French, as there is no lack of parallel data for these language pairs. For a more realistic use case, we align the 50 articles of the Universal Declaration of Human Rights (unicode.org/udhr/) in Macedonian and Afrikaans. While we do not have gold alignments for an evaluation, a preliminary qualitative analysis suggests that our method finds a reasonable semantic word alignment, while fast-align mainly predicts the diagonal (see Figure 4 for examples).

Figure 4: Articles 14(1) and 26(3) from the UDHR. Similarity matrices before (left) and after (right) fine-tuning. Red dots: our alignment (intersection). Red boxes: fast-align (intersection). White squares: sure gold alignments. Empty white squares: possible gold alignments (by the authors).

6 Related Work

Embeddings for word alignment.

Sabet et al. (2016) reformulate IBM Model 1 to predict the probability of monolingual target embedding vectors. They report improvements in AER for English-French on parallel corpora of between 1K and 40K sentences, as well as improvements in precision on words with frequency ≤ 20.

Pourdamghani et al. (2018) exploit similarity scores from monolingual embeddings to create synthetic training data for an SMT system. They report improvements in alignment F1 for English-Chinese, English-Arabic and English-Farsi. Their smallest parallel corpus has 500K sentences, while we align a few dozen to a few hundred sentences.

Two-step round trip objective.

Our use of two-step round trips is inspired by Haeusser et al. (2017). They optimize domain adaptation using a random walker that steps from image representations with known labels to image representations with unknown labels and back. While their target is a uniform distribution over images with the same label as the image of origin, ours puts maximum probability mass on the word of origin.

Low-resource CLWEs.

Our approach relies on the availability of high-quality CLWEs. Wada and Iwata (2018) report that in settings with little monolingual data (< 1M sentences), mapping approaches like that of Artetxe et al. (2018a) are not robust. Instead, they propose to learn CLWEs from a language model trained on the union of two small monolingual corpora. Their work is orthogonal to our fine-tuning method, since we make no assumptions about how the CLWEs are created.

7 Conclusion

We have presented a similarity-based method to produce word alignments for very small parallel corpora, using monolingual data and the corpus itself. Our CLWE-only baseline uses an unsupervised mapping of monolingual embeddings Artetxe et al. (2018a). Our main contribution is an unsupervised embedding monogamy objective, which adapts CLWEs to the small parallel corpus. Our model outperforms count-based fast-align Dyer et al. (2013) on parallel corpora up to 500 (resp., 250) sentences.

We expect that our method will be useful in low-resource settings, e.g., when aligning the Universal Declaration of Human Rights.

Acknowledgments.

We gratefully acknowledge funding for this work by the European Research Council (ERC #740516).

References

  • Artetxe et al. (2018a) Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In ACL, pages 789–798, Melbourne, Australia.
  • Artetxe et al. (2018b) Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. Unsupervised statistical machine translation. In EMNLP, pages 3632–3642, Brussels, Belgium.
  • Bojanowski et al. (2017) Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5(1):135–146.
  • Brown et al. (1990) Peter F Brown, John Cocke, Stephen A Della Pietra, Vincent J Della Pietra, Fredrick Jelinek, John D Lafferty, Robert L Mercer, and Paul S Roossin. 1990. A statistical approach to machine translation. Computational Linguistics, 16(2):79–85.
  • Conneau et al. (2017) Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087.
  • Dyer et al. (2013) Chris Dyer, Victor Chahuneau, and Noah A Smith. 2013. A simple, fast, and effective reparameterization of IBM Model 2. In NAACL-HLT, pages 644–648, Atlanta, USA.
  • Haeusser et al. (2017) Philip Haeusser, Thomas Frerix, Alexander Mordvintsev, and Daniel Cremers. 2017. Associative domain adaptation. In ICCV, pages 2765–2773, Venice, Italy.
  • Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • Koehn et al. (2007) Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In ACL, pages 177–180, Prague, Czech Republic.
  • Lample et al. (2018) Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Phrase-based & neural unsupervised machine translation. arXiv preprint arXiv:1804.07755.
  • Melamed (1998) I Dan Melamed. 1998. Manual annotation of translational equivalence: The Blinker project. Technical report, University of Pennsylvania Institute for Research in Cognitive Science.
  • Pourdamghani et al. (2018) Nima Pourdamghani, Marjan Ghazvininejad, and Kevin Knight. 2018. Using word vectors to improve word alignments for low resource machine translation. In NAACL-HLT, pages 524–528, New Orleans, USA.
  • Sabet et al. (2016) Masoud Jalili Sabet, Heshaam Faili, and Gholamreza Haffari. 2016. Improving word alignment of rare words with word embeddings. In COLING 2016: Technical Papers, pages 3209–3215, Osaka, Japan.
  • Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 5998–6008, Long Beach, USA.
  • Wada and Iwata (2018) Takashi Wada and Tomoharu Iwata. 2018. Unsupervised cross-lingual word embedding by multilingual neural language models. arXiv preprint arXiv:1809.02306.
  • Zhang et al. (2017a) Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017a. Adversarial training for unsupervised bilingual lexicon induction. In ACL, pages 1959–1970, Vancouver, Canada.
  • Zhang et al. (2017b) Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017b. Earth mover’s distance minimization for unsupervised bilingual lexicon induction. In EMNLP, pages 1934–1945, Copenhagen, Denmark.