Cross-Lingual Adaptation Using Universal Dependencies

03/24/2020
by Nasrin Taghizadeh, et al.

We describe a cross-lingual adaptation method based on syntactic parse trees obtained from Universal Dependencies (UD), which are consistent across languages, to develop classifiers for low-resource languages. UD parsing aims to capture both the similarities and the idiosyncrasies of typologically different languages. In this paper, we show that models trained on UD parse trees for complex NLP tasks can characterize very different languages. We study two tasks, paraphrase identification and semantic relation extraction, as case studies. Based on UD parse trees, we develop several models using tree kernels and show that these models, trained on an English dataset, can correctly classify data in other languages, e.g., French, Farsi, and Arabic. The proposed approach opens up avenues for exploiting UD parsing to solve similar cross-lingual tasks, which is especially useful for languages for which no labeled data is available.
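The abstract's core idea is that UD relation labels are shared across languages, so a tree-kernel classifier trained on English parses can score parses of French, Farsi, or Arabic sentences directly. The sketch below illustrates the general mechanism with a Collins–Duffy-style subtree-matching kernel over dependency trees labeled with UD relations; the tuple-based tree format and this particular kernel are illustrative assumptions, not the paper's exact models.

```python
# Minimal subtree-matching kernel over dependency trees (Collins & Duffy
# style), sketched to show why UD's language-consistent relation labels
# enable cross-lingual comparison. A tree is (label, [children]); the
# labels here are UD dependency relations, so an English parse and a
# French parse of equivalent sentences can share structure.
# NOTE: illustrative assumption, not the paper's exact formulation.

def nodes(tree):
    """Yield every subtree (node) of the tree, root first."""
    label, children = tree
    yield tree
    for child in children:
        yield from nodes(child)

def delta(a, b):
    """Count matching subtree fragments rooted at nodes a and b."""
    (la, ca), (lb, cb) = a, b
    if la != lb or len(ca) != len(cb):
        return 0
    prod = 1
    for x, y in zip(ca, cb):
        prod *= 1 + delta(x, y)   # each child pair either matches or not
    return prod

def tree_kernel(t1, t2):
    """Sum delta over all node pairs: total count of common fragments."""
    return sum(delta(a, b) for a in nodes(t1) for b in nodes(t2))

# Hypothetical UD parses of two translation-equivalent sentences:
# the relation labels (root, nsubj, obj, amod) are the same even
# though the surface languages differ.
en_parse = ("root", [("nsubj", []), ("obj", [("amod", [])])])
fr_parse = ("root", [("nsubj", []), ("obj", [("amod", [])])])

print(tree_kernel(en_parse, fr_parse))  # → 10
```

In a kernel machine such as an SVM, this similarity score replaces the dot product, so a classifier fit on English UD trees can be applied to trees parsed from another language without retraining, which is the cross-lingual transfer the paper exploits.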

