75 Languages, 1 Model: Parsing Universal Dependencies Universally

04/03/2019
by Daniel Kondratyuk, et al.

We present UDify, a multilingual multi-task model capable of accurately predicting universal part-of-speech, morphological features, lemmas, and dependency trees simultaneously for all 124 Universal Dependencies treebanks across 75 languages. By leveraging a multilingual BERT self-attention model pretrained on 104 languages, we found that fine-tuning it on all datasets concatenated together with simple softmax classifiers for each UD task can result in state-of-the-art UPOS, UFeats, Lemmas, UAS, and LAS scores, without requiring any recurrent or language-specific components. We evaluate UDify for multilingual learning, showing that low-resource languages benefit the most from cross-linguistic annotations. We also evaluate for zero-shot learning, with results suggesting that multilingual training provides strong UD predictions even for languages that neither UDify nor BERT have ever been trained on. Code for UDify is available at https://github.com/hyperparticle/udify.
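
The setup the abstract describes is simple enough to sketch. The following is a minimal illustration in PyTorch with Hugging Face transformers, not the authors' code (see the linked UDify repository for the actual implementation): a shared multilingual BERT encoder with one softmax classifier per UD task. All label-set sizes and head names are hypothetical placeholders; lemmatization is assumed to be classification over edit rules, and the dependency head is reduced to relation labeling for brevity.

import torch
import torch.nn as nn
from transformers import BertModel

class UDifyStyleModel(nn.Module):
    """Shared mBERT encoder with one linear softmax head per UD task.

    A sketch only: label-set sizes are placeholders, and UDify's actual
    decoders are richer (e.g. its dependency parser also scores head
    attachments, not just relation labels).
    """
    def __init__(self, n_upos=17, n_ufeats=512, n_lemma_rules=1024, n_deprels=37):
        super().__init__()
        # Encoder pretrained on 104 languages; fine-tuned on all treebanks.
        self.encoder = BertModel.from_pretrained("bert-base-multilingual-cased")
        hidden = self.encoder.config.hidden_size
        # One simple linear (softmax) classifier per UD task.
        self.heads = nn.ModuleDict({
            "upos": nn.Linear(hidden, n_upos),
            "ufeats": nn.Linear(hidden, n_ufeats),
            "lemmas": nn.Linear(hidden, n_lemma_rules),
            "deprel": nn.Linear(hidden, n_deprels),
        })

    def forward(self, input_ids, attention_mask):
        # One pass through the shared encoder serves every task.
        states = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        # Per-token logits for each task; fine-tuning sums the task losses.
        return {name: head(states) for name, head in self.heads.items()}

Fine-tuning would sum the per-task cross-entropy losses over all treebanks concatenated together, which is what lets a single set of encoder weights serve every language at once, with no recurrent or language-specific components.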


