Morphosyntactic probing of multilingual BERT models

06/09/2023
by Judit Acs, et al.

We introduce an extensive dataset for multilingual probing of morphological information in language models (247 tasks across 42 languages from 10 families), each consisting of a sentence with a target word and a morphological tag as the desired label, derived from the Universal Dependencies treebanks. We find that pre-trained Transformer models (mBERT and XLM-RoBERTa) learn features that attain strong performance across these tasks. We then apply two methods to locate, for each probing task, where the disambiguating information resides in the input. The first is a new perturbation method that masks various parts of context; the second is the classical method of Shapley values. The most intriguing finding that emerges is a strong tendency for the preceding context to hold more information relevant to the prediction than the following context.
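The sketch below illustrates, in simplified form, the kind of probing setup the abstract describes: a frozen multilingual encoder provides the target word's contextual vector, a lightweight classifier is trained to predict the morphological tag, and preceding or following context can be masked out to see which side the probe relies on. It is a minimal illustration, not the authors' exact pipeline; the checkpoint name, the `target_word_embedding` helper, the logistic-regression probe, and the toy examples are all assumptions made for demonstration.

```python
# Minimal sketch of a single morphosyntactic probing task, assuming each
# example is a (tokenized sentence, target word index, morphological tag)
# triple derived from a Universal Dependencies treebank. All names and data
# here are illustrative, not the paper's actual setup.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")
model.eval()

def target_word_embedding(words, target_idx, mask_before=False, mask_after=False):
    """Mean-pool the subword vectors of the target word.

    mask_before / mask_after replace the preceding / following context with
    [MASK] tokens -- a simple perturbation for checking which side of the
    context carries the disambiguating information.
    """
    words = list(words)
    if mask_before:
        words[:target_idx] = [tokenizer.mask_token] * target_idx
    if mask_after:
        words[target_idx + 1:] = [tokenizer.mask_token] * (len(words) - target_idx - 1)
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_size)
    # Subword positions belonging to the target word.
    sub_idx = [i for i, w in enumerate(enc.word_ids()) if w == target_idx]
    return hidden[sub_idx].mean(dim=0).numpy()

# Toy training data (hypothetical): probe predicts the Number feature.
train = [
    (["She", "walks", "to", "school"], 1, "Sing"),
    (["They", "walk", "to", "school"], 1, "Plur"),
]
X = [target_word_embedding(w, i) for w, i, _ in train]
y = [tag for _, _, tag in train]
probe = LogisticRegression(max_iter=1000).fit(X, y)

# Perturbation check: mask the preceding context and see whether the probe
# still recovers the tag for a held-out sentence.
x_masked = target_word_embedding(["He", "walks", "home"], 1, mask_before=True)
print(probe.predict([x_masked]))
```

Comparing probe accuracy with the preceding versus the following context masked is one simple way to operationalize the paper's finding that the left context tends to carry more of the relevant information.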

10/01/2020

Evaluating Multilingual BERT for Estonian

Recently, large pre-trained language models, such as BERT, have reached ...

04/06/2020

A Systematic Analysis of Morphological Content in BERT Models for Multiple Languages

This work describes experiments which probe the hidden representations o...

04/17/2021

A multilabel approach to morphosyntactic probing

We introduce a multilabel probing task to assess the morphosyntactic rep...

02/22/2021

Evaluating Contextualized Language Models for Hungarian

We present an extended comparison of contextualized language models for ...

08/19/2019

UDPipe at SIGMORPHON 2019: Contextualized Embeddings, Regularization with Morphological Categories, Corpora Merging

We present our contribution to the SIGMORPHON 2019 Shared Task: Crosslin...

12/19/2016

Neural Multi-Source Morphological Reinflection

We explore the task of multi-source morphological reinflection, which ge...