Cross-Lingual Adaptation Using Universal Dependencies

by Nasrin Taghizadeh, et al.

We describe a cross-lingual adaptation method based on syntactic parse trees obtained from the Universal Dependencies (UD), which are consistent across languages, to develop classifiers in low-resource languages. The idea of UD parsing is to capture similarities as well as idiosyncrasies among typologically different languages. In this paper, we show that models trained using UD parse trees for complex NLP tasks can characterize very different languages. We study two tasks, paraphrase identification and semantic relation extraction, as case studies. Based on UD parse trees, we develop several models using tree kernels and show that these models, trained on the English dataset, can correctly classify data from other languages, e.g., French, Farsi, and Arabic. The proposed approach opens up avenues for exploiting UD parsing to solve similar cross-lingual tasks, which is very useful for languages for which no labeled data is available.
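The key intuition is that UD dependency relation labels (e.g., nsubj, obj) are shared across languages, so a kernel defined over labeled parse trees can compare trees regardless of the surface language. The abstract does not specify the exact kernel used; below is a minimal sketch of one simple variant, a full-subtree matching kernel over toy trees represented as (relation, children) tuples. The tree representation and label set here are illustrative assumptions, not the paper's actual feature design.

```python
from collections import Counter

def _key(tree):
    # Canonical hashable form of a (relation, children) tree.
    label, children = tree
    return (label, tuple(_key(c) for c in children))

def _all_subtrees(tree):
    # Enumerate the full subtree rooted at every node.
    label, children = tree
    yield _key(tree)
    for c in children:
        yield from _all_subtrees(c)

def tree_kernel(t1, t2):
    """Count matching full subtrees between two trees (a simple tree kernel).
    Because UD relation labels are language-independent, an English tree and
    a French tree can score high even though the words differ."""
    c1 = Counter(_all_subtrees(t1))
    c2 = Counter(_all_subtrees(t2))
    return sum(c1[k] * c2[k] for k in c1.keys() & c2.keys())

# Toy UD-style trees: dependency relations only, words omitted. The English
# and French sentences differ lexically but share this relational structure.
en = ("root", [("nsubj", []), ("obj", [("det", [])])])
fr = ("root", [("nsubj", []), ("obj", [("det", [])])])
print(tree_kernel(en, fr))  # → 4 (root tree, nsubj, obj subtree, det)
```

In a classifier, such a kernel would be plugged into a kernel machine (e.g., an SVM) trained on English trees and applied directly to trees parsed in another language.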







Cross-Lingual Relation Extraction with Transformers

Relation extraction (RE) is one of the most important tasks in informati...

GATE: Graph Attention Transformer Encoder for Cross-lingual Relation and Event Extraction

Prevalent approaches in cross-lingual relation and event extraction use ...

Cross-Lingual Syntactic Transfer through Unsupervised Adaptation of Invertible Projections

Cross-lingual transfer is an effective way to build syntactic analysis t...

Bridging the domain gap in cross-lingual document classification

The scarcity of labeled training data often prohibits the internationali...

On Learning Universal Representations Across Languages

Recent studies have demonstrated the overwhelming advantage of cross-lin...

Cross-Lingual Transfer of Semantic Roles: From Raw Text to Semantic Roles

We describe a transfer method based on annotation projection to develop ...

Nearest Neighbour Few-Shot Learning for Cross-lingual Classification

Even though large pre-trained multilingual models (e.g. mBERT, XLM-R) ha...