Phylogeny-Inspired Adaptation of Multilingual Models to New Languages

05/19/2022
by Fahim Faisal, et al.

Large pretrained multilingual models, trained on dozens of languages, have delivered promising results due to their cross-lingual learning capabilities on a variety of language tasks. Further adapting these models to specific languages, especially ones unseen during pre-training, is an important goal toward expanding the coverage of language technologies. In this study, we show how we can use language phylogenetic information to improve cross-lingual transfer, leveraging closely related languages in a structured, linguistically-informed manner. We perform adapter-based training on languages from diverse language families (Germanic, Uralic, Tupian, Uto-Aztecan) and evaluate on both syntactic and semantic tasks, obtaining more than 20% relative performance improvements over strong commonly used baselines, especially on languages unseen during pre-training.
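The phylogeny-inspired adaptation described in the abstract can be approximated by stacking language adapters along a target language's ancestry in the family tree, in the spirit of MAD-X-style adapter composition. Below is a minimal sketch assuming the AdapterHub `adapters` library; the base model (`xlm-roberta-base`), the adapter names, and the Germanic → North Germanic → Faroese hierarchy are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: hierarchical (phylogeny-inspired) adapter stacking, assuming the
# AdapterHub `adapters` library. Model choice and the family/genus/language
# names are illustrative, not the paper's exact setup.
from adapters import AutoAdapterModel
from adapters.composition import Stack

model = AutoAdapterModel.from_pretrained("xlm-roberta-base")

# One adapter per level of the phylogenetic tree: family -> genus -> language.
model.add_adapter("germanic")        # family-level adapter
model.add_adapter("north_germanic")  # genus-level adapter
model.add_adapter("faroese")         # language-level adapter (target)

# Freeze the backbone and all adapters except the target-language adapter;
# higher-level adapters are assumed pre-trained on related languages.
model.train_adapter("faroese")

# Activate the full ancestry stack so the forward pass composes
# family -> genus -> language adapters, but only "faroese" receives updates.
model.active_adapters = Stack("germanic", "north_germanic", "faroese")

# For a downstream syntactic task (e.g., POS tagging), a task head is added
# on top; the same stack can then be reused for transfer to unseen relatives.
model.add_tagging_head("pos", num_labels=17)
```

One design note on this sketch: calling `train_adapter` before setting `active_adapters` matters, because `train_adapter` unfreezes only the named adapter, so the ancestor adapters in the stack stay frozen and act purely as linguistically-informed priors during training.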
