Cross-lingual modeling is especially interesting when generalizing from a “source” language with labeled data to a “target” language without fully supervised annotations. While POS taggers and dependency parsers are available for many languages Nivre et al. (2016), data for tasks like sentence compression Filippova et al. (2015) or classification Chen et al. (2017); Joty et al. (2017) is much harder to come by. Past success in cross-lingual transfer has mainly been achieved through label projection Täckström et al. (2013); Wisniewski et al. (2014); Agić et al. (2016), which requires manual effort, machine translation or parallel data.
In this work, we address the challenge of building models that learn cross-lingual regularities. Our goal is to avoid overfitting to the source language and to generalize to new languages through the use of adversarial training. Ideally, our models internally build language-agnostic representations which can then be applied to any new target language. Adversarial training has recently gained a lot of attention for domain adaptation by building domain-independent feature representations Ganin et al. (2016); Chen and Cardie (2018). As we show in this work, cross-lingual transfer can be treated as a language-specific variant of domain adaptation. This poses additional challenges for adversarial training: In contrast to most domain adaptation settings, the change from source to target involves not only a shift in word distributions (as when adapting models from news to web data) but a change of the entire vocabulary. To address this, we use bilingual word embeddings and universal POS tags as a common intermediate representation. The second difficulty is that, to the best of our knowledge, adversarial loss has only been applied to cross-lingual NLP classification tasks Chen et al. (2017); Joty et al. (2017); Chen and Cardie (2018) in which a single output label is predicted. Filling this gap, we are the first to show that adversarial loss functions are also effective for cross-lingual sequence tagging, in which multiple outputs are predicted for a given input sequence.
We show for both a syntactic task (dependency parsing) and a semantic task (extractive sentence compression) that adversarial training improves cross-lingual transfer when little or no data is available. For completeness, we also provide a negative result: for training POS taggers, bilingual word embeddings and adversarial training are not sufficient to produce useful cross-lingual models.
2 Related Work
Adversarial training Goodfellow et al. (2014) is receiving increased interest in the NLP community Gulrajani et al. (2017); Hjelm et al. (2017); Li et al. (2017); Press et al. (2017); Rajeswar et al. (2017); Yu et al. (2017); Zhao et al. (2017).
Ganin et al. (2016) propose adversarial training with a gradient reversal layer for domain adaptation (for image classification and sentiment analysis, respectively). Similarly, Chen et al. (2017) and Joty et al. (2017) apply adversarial training to cross-lingual sentiment classification and community question answering, respectively. While most previous work in NLP has investigated adversarial domain adaptation for sentence-level classification tasks, we are the first to explore it for cross-lingual sequence tagging. Moreover, we provide a direct comparison of different adversarial loss functions in a cross-lingual training setting. Li et al. (2016) also work on adversarial sequence tagging but treat sequence tag prediction and sequence labeling as adversarial model parts, which is very different in motivation from using adversarial training in cross-lingual settings. More closely related to our work is the paper by Yasunaga et al. (2018), which applies adversarial training to part-of-speech tagging. However, their model differs from ours in two crucial respects: They train single-language models whereas we train a cross-lingual model, and they use adversarial training in the form of input perturbations whereas we use adversarial loss functions in order to derive language-independent representations that transfer knowledge between source and target languages.
3.1 Model Architecture
Our model consists of three main components: a feature generator, a domain discriminator and a sequence tagger (see Figure 1).
The feature generator $F$ is a one-layer bi-directional LSTM (bi-LSTM) Hochreiter and Schmidhuber (1997) which uses the embeddings of words, their POS tags and Brown clusters as input (see Section 4 for more details). Its output is consumed by the domain discriminator $D$ and the target tagger $T$, which are both implemented as feed-forward networks and predict the language id (lid) of the input sequence and the target token label at each step, respectively. (For dependency parsing, the tagger is replaced by the arc-standard model from Kong et al. (2017) which predicts two labels per token, but the feature generator architecture is the same.) We found that predicting language id at the token level was more effective than predicting it at the sentence level. The tagger objective maximizes the log-likelihood of the target tag sequence:

$$\mathcal{L}_T = \sum_i \log P(y_i \mid F(x_i); \theta_T) \quad (1)$$

updating w.r.t. the parameters $\theta_T$ of the tagger $T$. The objectives for the discriminator and feature generator depend on the adversarial techniques, which we describe next.
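To make the tagger objective concrete, here is a minimal pure-Python sketch of the sequence log-likelihood the tagger maximizes; the per-token distributions below are invented toy values, not outputs of the paper’s bi-LSTM model:

```python
import math

def sequence_log_likelihood(token_distributions, gold_tags):
    """Sum of log P(y_t | x) over the sequence -- the quantity the
    tagger maximizes during training."""
    return sum(math.log(dist[tag])
               for dist, tag in zip(token_distributions, gold_tags))

# Toy 3-token sentence with distributions over {KEPT, DROPPED}.
dists = [{"KEPT": 0.9, "DROPPED": 0.1},
         {"KEPT": 0.2, "DROPPED": 0.8},
         {"KEPT": 0.7, "DROPPED": 0.3}]
gold = ["KEPT", "DROPPED", "KEPT"]
ll = sequence_log_likelihood(dists, gold)  # log 0.9 + log 0.8 + log 0.7
```

In a neural tagger the distributions come from a softmax over the feature generator’s output at each token; the sum of their logs is what gradient ascent pushes up.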
3.2 Training with Gradient Reversal
The first adversarial architecture we investigate is gradient reversal training as proposed by Ganin et al. (2016). In this setting, the discriminator is a classifier which identifies the input domain given a single feature vector. Thus, its objective is $\max_{\theta_D} \mathcal{L}_D$ with $\mathcal{L}_D = \sum_i \log P(l_i \mid F(x_i); \theta_D)$.
The goal of the generator is to fool the discriminator, which is achieved by updating the generator weights in the opposite direction w.r.t. the discriminator gradient:

$$\theta_F \leftarrow \theta_F + \eta \left( \frac{\partial \mathcal{L}_T}{\partial \theta_F} - \lambda \frac{\partial \mathcal{L}_D}{\partial \theta_F} \right) \quad (2)$$

where $\lambda$ is used to scale the gradient from the discriminator and $\eta$ is the learning rate.
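A one-step sketch of the reversed update, with both objectives written as losses to minimize and invented scalar gradients, learning rate and λ (the actual model updates bi-LSTM weights via backpropagation):

```python
def gradient_reversal_update(theta_f, tagger_grads, disc_grads, lr=0.1, lam=0.5):
    """One gradient-reversal step on the feature-generator parameters:
    descend on the tagger loss but ascend on the discriminator loss
    (its gradient is reversed and scaled by lam)."""
    return [p - lr * (g_tag - lam * g_disc)
            for p, g_tag, g_disc in zip(theta_f, tagger_grads, disc_grads)]

theta = [0.0, 1.0]
updated = gradient_reversal_update(theta,
                                   tagger_grads=[1.0, -2.0],
                                   disc_grads=[2.0, 2.0])
# updated[0] = 0.0 - 0.1 * (1.0 - 0.5 * 2.0) = 0.0
# updated[1] = 1.0 - 0.1 * (-2.0 - 0.5 * 2.0), approximately 1.3
```

In autodiff frameworks the same effect is obtained by a gradient reversal layer that acts as the identity in the forward pass and multiplies the gradient by -λ in the backward pass.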
3.3 Training with GAN and WGAN
The other two adversarial training objectives we investigate are the GAN Goodfellow et al. (2014) and its variant, the Wasserstein GAN (WGAN) objective Arjovsky et al. (2017). In contrast to gradient reversal, the discriminator inputs are sampled from the source and target distributions $P_S$ and $P_T$, respectively. The adversarial objective for GAN is:

$$\min_{\theta_F} \max_{\theta_D} \; \mathbb{E}_{x \sim P_S}[\log D(F(x))] + \mathbb{E}_{x \sim P_T}[\log(1 - D(F(x)))] \quad (3)$$
The objective of the feature generator is to act as an adversary w.r.t. the discriminator while being collaborative w.r.t. the target tagger:

$$\max_{\theta_F} \; \mathcal{L}_T - \lambda \left( \mathbb{E}_{x \sim P_S}[\log D(F(x))] + \mathbb{E}_{x \sim P_T}[\log(1 - D(F(x)))] \right) \quad (4)$$
To stabilize the adversarial training, Arjovsky et al. (2017) have proposed the WGAN, which trains the discriminator as:

$$\max_{\theta_D \in [-c, c]} \; \mathbb{E}_{x \sim P_S}[D(F(x))] - \mathbb{E}_{x \sim P_T}[D(F(x))] \quad (5)$$

with $c$ restricting the range of the discriminator weights for Lipschitz continuity. The objective for the feature generator has the same form as in Eq. 4.
Thus, the feature generator is incentivized to extract language-agnostic representations in order to fool the discriminator while helping the tagger. As a result, the tagger is forced to rely more on language-agnostic features. As we discuss later, this works well for higher-level syntactic or semantic tasks but does not work for low-level, highly lexicalized tasks like POS tagging.
Note that the updates to the target tagger are affected by the discriminator only indirectly through the shared feature generator.
We address two cross-lingual sequential prediction tasks: dependency parsing and extractive sentence compression. For both tasks, we evaluate four different settings: (i) No ADA: no adversarial training; the feature generator and tagger are trained on the source language and then tested on the target language, (ii) GR: gradient-reversal training, (iii) GAN: using the GAN loss, and (iv) WGAN: using the WGAN loss. In (ii)-(iv), a discriminator is trained in order to achieve language-agnostic representations in the feature generator. In all setups, we use bilingual word embeddings (BWE), Brown clusters and universal POS tags as an input representation that is common across languages. The BWEs are trained on unsupervised multi-lingual corpora as described in Soricut and Ding (2016). The POS tags are predicted by simple bi-LSTM taggers trained on the full datasets.
4.1 Data and Evaluation Measure
For parsing, we use the French (FR) and Spanish (ES) parts of the Universal Dependencies v1.3 Nivre et al. (2016). For sentence compression, in the absence of non-English datasets, we collect our own datasets for FR and ES from online sources (e.g., Wikipedia) and ask professionally trained linguists to label each token with KEPT or DROPPED (see Section 4.3) such that the compressed sentences are grammatical and informative. See Table 1 for statistics. In all our experiments, we use ES as the source and FR as the target language.
We follow standard approaches for evaluation: token-level labeled attachment score (LAS) for dependency parsing and sentence-level accuracy for sentence compression Filippova et al. (2015). Note that for the latter, all token-level KEPT/DROPPED decisions need to be correct in order to get a positive score for a sentence.
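The sentence-level accuracy used for compression can be sketched as follows (hypothetical helper; a sentence only counts as correct if every one of its token-level decisions matches the gold annotation):

```python
def sentence_accuracy(gold_sequences, pred_sequences):
    """Fraction of sentences whose predicted tag sequence matches the
    gold sequence exactly (every KEPT/DROPPED decision correct)."""
    correct = sum(g == p for g, p in zip(gold_sequences, pred_sequences))
    return correct / len(gold_sequences)

gold = [["KEPT", "DROPPED", "KEPT"], ["KEPT", "KEPT"]]
pred = [["KEPT", "DROPPED", "KEPT"], ["KEPT", "DROPPED"]]
acc = sentence_accuracy(gold, pred)  # 0.5: one wrong tag fails a sentence
```

This strict measure explains why the absolute numbers for compression are much lower than token-level scores such as LAS.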
Note that, for all experiments without target training data, we do not use target data for tuning the models. Thus, the sequence-tagger is fully unsupervised w.r.t. the target language.
|#sent. (train / dev / test)|2353 / 115 / 115|2760 / 115 / 115|
|avg. sent. length|41 tokens|37 tokens|

Table 1: Statistics of the sentence compression datasets.
|Training data|No ADA|GR|GAN|WGAN|
|---|---|---|---|---|
|ES + 0k FR|63.35|64.25|61.20|62.51|
|ES + 1k FR|67.33|67.43|66.69|66.97|
|ES + 2k FR|68.51|69.05|68.54|68.14|
|ES + all FR|80.17|80.46|80.68|80.22|

Table 2: Dependency parsing results (LAS) for target language FR.
4.2 Dependency Parsing
We train the tagger on ES data with and without adversarial loss. Table 2 shows the results. Adversarial training gives consistent improvements over the conventionally trained baseline in all settings. It also outperforms a monolingual model trained on the full dataset of the target language FR, which achieves a score of 80.21. When comparing the different adversarial loss functions, GR outperforms GAN and WGAN in most cases. One possible reason is a difference in the discriminator’s strength: During training, we observed that the discriminators of GAN and WGAN could easily predict the language id correctly (even after careful tuning of the update rate between generator and discriminator). This is a well-known problem with training GANs: when the discriminator becomes too strong, it provides no useful signal for the feature generator. In contrast, GR training updates the feature generator by taking the inverse of the gradient of the discriminator’s cross-entropy loss. This simpler setup possibly results in a better training signal for the generator.
|Training data|Accuracy|
|---|---|
|1k FR + 1k MT-FR|17.09|

Table 3: Results of sentence compression models trained on monolingual (FR) and machine-translated (MT-FR) data (per-sentence accuracy).
|Training data|No ADA|GR|GAN|WGAN|
|---|---|---|---|---|
|ES + 0k FR|0.00|9.17|0.00|1.71|
|ES + 1k FR|20.51|22.27|11.97|15.38|
|ES + 2k FR|29.06|24.89|15.38|19.66|
|ES + all FR|29.91|30.77|22.22|29.06|

Table 4: Cross-lingual sentence compression results (per-sentence accuracy) for target language FR.
4.3 Sentence Compression
Extractive sentence compression aims at generating shorter versions of a given sentence by deleting tokens Knight and Marcu (2000); Clarke and Lapata (2008); Berg-Kirkpatrick et al. (2011); Filippova et al. (2015); Klerke et al. (2016). This is useful for text summarization as well as for simplifying sentences or providing shorter answers to questions. We follow related work and treat it as a sequence-tagging problem: each token of the input sentence is tagged with either KEPT or DROPPED, indicating which words should occur in the compressed sentence. To solve the task, the model needs to consider the meaning of words and sentences. Thus, although we treat it as a sequence-tagging problem, we consider it a semantic task.
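As a small illustration of this formulation (the sentence and tags below are invented), applying a predicted KEPT/DROPPED sequence directly yields the compression:

```python
def apply_compression(tokens, tags):
    """Build the compressed sentence by keeping only tokens tagged KEPT."""
    return [tok for tok, tag in zip(tokens, tags) if tag == "KEPT"]

tokens = ["The", "old", "castle", ",", "built", "in", "1200", ",",
          "burned", "down"]
tags = ["KEPT", "DROPPED", "KEPT", "DROPPED", "DROPPED", "DROPPED",
        "DROPPED", "DROPPED", "KEPT", "KEPT"]
compressed = apply_compression(tokens, tags)  # The castle burned down
```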
Monolingual and MT models. Most work on sentence compression considers English corpora only. Studies on other languages either train separate monolingual models Steinberger and Tesar (2007) or use translation or alignments to transfer compressions from English into another language Aziz et al. (2012); Takeno and Yamamoto (2015); Ive and Yvon (2016). In order to get baseline results, we follow these approaches and train monolingual models (on FR) and models on translated data (MT-FR; we use the Google MT API to translate from ES into FR). In these setups, the feature generator and tagger are monolingual models and there is no language discriminator. Table 3 shows the results. We find that MT can help to train initial models in a new language. However, training data in the target language is better (see the performance gap from 1k FR + 1k MT-FR to 2k FR).
Cross-lingual models. Next, we train cross-lingual models (see Table 4). Even without adversarial training, these models perform better than the monolingual models. This shows that information can already be shared from ES to FR by using bilingual word embeddings. When adding adversarial training, we again observe better performance of GR compared to GAN or WGAN: GR training boosts the results, especially with no or little FR training data. The GAN and WGAN losses do not perform as well as they do for dependency parsing.
We compared tasks of different natures: a syntactic task (dependency parsing) and a semantic task (sentence compression). For completeness, we also report a negative result: language-agnostic POS taggers did not lead to promising results. Even though adversarial training improved the No-ADA baseline by several points, cross-lingual transfer still yielded only 45% POS accuracy in the target language, which is not accurate enough to be useful in downstream models. We assume that POS tagging depends on seeing language-specific vocabulary more than other tasks do. However, we showed that the higher-level tasks of dependency parsing and sentence compression can benefit from language-agnostic representations. To the best of our knowledge, this is the first work showing the effectiveness of adversarial loss for cross-lingual sequence tagging. As a first study, we opted for a language pair from the same language family; however, our algorithm is applicable to language pairs from different language groups as well.
6 Conclusion and Future Work
In this paper, we studied the utility of adversarial training for cross-lingual sequence tagging. Our results show that the more high-level structure a task requires, the larger the gains we achieve with cross-lingual models. Gradient reversal training outperformed the GAN and WGAN losses in our experiments. In future work, we plan to extend our study to other language pairs, including languages from different families.
- Agić et al. (2016) Željko Agić, Anders Johannsen, Barbara Plank, Héctor Alonso Martínez, Natalie Schluter, and Anders Søgaard. 2016. Multilingual projection for parsing truly low-resource languages. Transactions of the Association for Computational Linguistics, 4:301.
- Arjovsky et al. (2017) Martin Arjovsky, Soumith Chintala, and Léon Bottou. 2017. Wasserstein GAN. arXiv, abs/1701.07875.
- Aziz et al. (2012) Wilker Aziz, Sheila Castilho Monteiro de Sousa, and Lucia Specia. 2012. Cross-lingual sentence compression for subtitles. In The 16th Annual Conference of the European Association for Machine Translation (EAMT), pages 103–110, Trento, Italy.
- Berg-Kirkpatrick et al. (2011) Taylor Berg-Kirkpatrick, Dan Gillick, and Dan Klein. 2011. Jointly learning to extract and compress. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT ’11, pages 481–490, Portland, Oregon. Association for Computational Linguistics.
- Chen and Cardie (2018) Xilun Chen and Claire Cardie. 2018. Multinomial adversarial networks for multi-domain text classification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, New Orleans, Louisiana. Association for Computational Linguistics.
- Chen et al. (2017) Xilun Chen, Yu Sun, Ben Athiwaratkun, Kilian Weinberger, and Claire Cardie. 2017. Adversarial deep averaging networks for cross-lingual sentiment classification. arXiv, abs/1606.01614.
- Clarke and Lapata (2008) James Clarke and Mirella Lapata. 2008. Global inference for sentence compression: An integer linear programming approach. Journal of Artificial Intelligence Research, 31(1):399–429.
- Filippova et al. (2015) Katja Filippova, Enrique Alfonseca, Carlos A. Colmenares, Lukasz Kaiser, and Oriol Vinyals. 2015. Sentence compression by deletion with LSTMs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 360–368, Lisbon, Portugal. Association for Computational Linguistics.
- Ganin et al. (2016) Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. Journal of Machine Learning Research (JMLR), 17(1):1–35.
- Goodfellow et al. (2014) Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. 2014. Generative adversarial networks. In Advances in Neural Information Processing Systems (NIPS).
- Gulrajani et al. (2017) Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. 2017. Improved training of Wasserstein GANs. arXiv, abs/1704.00028.
- Hjelm et al. (2017) R Devon Hjelm, Athul Paul Jacob, Tong Che, Kyunghyun Cho, and Yoshua Bengio. 2017. Boundary-seeking generative adversarial networks. arXiv, abs/1702.08431.
- Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735–1780.
- Ive and Yvon (2016) Julia Ive and François Yvon. 2016. Parallel sentence compression. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1503–1513, Osaka, Japan. The COLING 2016 Organizing Committee.
- Joty et al. (2017) Shafiq Joty, Preslav Nakov, Lluís Màrquez, and Israa Jaradat. 2017. Cross-language learning with adversarial neural networks: Application to community question answering. arXiv, abs/1706.06749.
- Klerke et al. (2016) Sigrid Klerke, Yoav Goldberg, and Anders Søgaard. 2016. Improving sentence compression by learning to predict gaze. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1528–1533, San Diego, California. Association for Computational Linguistics.
- Knight and Marcu (2000) Kevin Knight and Daniel Marcu. 2000. Statistics-based summarization - step one: Sentence compression. In Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence, pages 703–710. AAAI Press.
- Kong et al. (2017) Lingpeng Kong, Chris Alberti, Daniel Andor, Ivan Bogatyy, and David Weiss. 2017. DRAGNN: A transition-based framework for dynamically connected neural networks. arXiv 1703.04474.
- Li et al. (2016) Jia Li, Kaiser Asif, Hong Wang, Brian D. Ziebart, and Tanya Berger-Wolf. 2016. Adversarial sequence tagging. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pages 1690–1696, New York, New York, USA. AAAI Press.
- Li et al. (2017) Jiwei Li, Will Monroe, Tianlin Shi, Sébastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2157–2169, Copenhagen, Denmark. Association for Computational Linguistics.
- Nivre et al. (2016) Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajič, Christopher D. Manning, Ryan T. McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16).
- Press et al. (2017) Ofir Press, Amir Bar, Ben Bogin, Jonathan Berant, and Lior Wolf. 2017. Language generation with recurrent generative adversarial networks without pre-training. arXiv, abs/1706.01399.
- Rajeswar et al. (2017) Sai Rajeswar, Sandeep Subramanian, Francis Dutil, Christopher Pal, and Aaron Courville. 2017. Adversarial generation of natural language. arXiv, abs/1705.10929.
- Soricut and Ding (2016) Radu Soricut and Nan Ding. 2016. Multilingual word embeddings using multigraphs. arXiv, abs/1612.04732.
- Steinberger and Tesar (2007) Josef Steinberger and Roman Tesar. 2007. Knowledge-poor multilingual sentence compression. In 7th Conference on Language Engineering (SOLE), pages 369–379, Cairo, Egypt.
- Täckström et al. (2013) Oscar Täckström, Dipanjan Das, Slav Petrov, Ryan McDonald, and Joakim Nivre. 2013. Token and type constraints for cross-lingual part-of-speech tagging. Transactions of the Association for Computational Linguistics, pages 1–12.
- Takeno and Yamamoto (2015) Shunsuke Takeno and Kazuhide Yamamoto. 2015. Japanese sentence compression using Simple English Wikipedia. In International Conference on Asian Language Processing (IALP), pages 65–68, Suzhou, China.
- Wisniewski et al. (2014) Guillaume Wisniewski, Nicolas Pécheux, Souhir Gahbiche-Braham, and François Yvon. 2014. Cross-lingual part-of-speech tagging through ambiguous learning. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1779–1785, Doha, Qatar. Association for Computational Linguistics.
- Yasunaga et al. (2018) Michihiro Yasunaga, Jungo Kasai, and Dragomir Radev. 2018. Robust multilingual part-of-speech tagging via adversarial training. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, New Orleans, Louisiana. Association for Computational Linguistics.
- Yu et al. (2017) Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. SeqGAN: Sequence generative adversarial nets with policy gradient. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 2852–2858.
- Zhao et al. (2017) Junbo (Jake) Zhao, Yoon Kim, Kelly Zhang, Alexander M. Rush, and Yann LeCun. 2017. Adversarially regularized autoencoders for generating discrete structures. arXiv, abs/1706.04223.