On the Branching Bias of Syntax Extracted from Pre-trained Language Models

10/06/2020
by   Huayang Li, et al.
Many efforts have been devoted to extracting constituency trees from pre-trained language models, often proceeding in two stages: feature definition and parsing. However, these methods may suffer from a branching bias, which inflates performance on languages whose dominant branching direction matches the bias. In this work, we propose to measure the branching bias quantitatively by comparing the performance gap between a language and its reversed counterpart; the measure is agnostic to both the language model and the extraction method. We further analyze how three factors affect the branching bias: the parsing algorithm, the feature definition, and the language model. Experiments show that several existing works exhibit branching biases, and that certain choices for each of these three factors can introduce such a bias.
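To make the reversed-language comparison concrete, the sketch below evaluates the same extraction pipeline on a treebank and on its mirrored counterpart and reports the gap in unlabeled bracketing F1. It is only an illustration of the evaluation side of the idea, not the paper's released code: the helper extract_tree (the extraction method under test), the span-based gold trees, and the corpus format are all assumptions made for this example.

# Minimal sketch, assuming `extract_tree` maps a token list to a set of
# predicted constituent spans (i, j) (0-indexed, right-exclusive) and the
# gold trees are given as span sets. These names are hypothetical.

def reverse_sentence(tokens):
    """Reverse the word order of a sentence."""
    return tokens[::-1]

def mirror_spans(spans, n):
    """Mirror constituent spans of an n-token sentence, so the gold tree of
    the reversed sentence is the mirror image of the original gold tree."""
    return {(n - j, n - i) for (i, j) in spans}

def unlabeled_f1(pred_spans, gold_spans):
    """Unlabeled bracketing F1 between predicted and gold span sets."""
    tp = len(pred_spans & gold_spans)
    if tp == 0:
        return 0.0
    precision = tp / len(pred_spans)
    recall = tp / len(gold_spans)
    return 2 * precision * recall / (precision + recall)

def branching_bias_gap(corpus, extract_tree):
    """Performance gap between a treebank and its reversed counterpart.

    `corpus` is a list of (tokens, gold_spans) pairs. A large positive or
    negative gap suggests the pipeline is biased toward one branching
    direction rather than recovering syntax per se.
    """
    f1_orig, f1_rev = [], []
    for tokens, gold in corpus:
        n = len(tokens)
        f1_orig.append(unlabeled_f1(extract_tree(tokens), gold))
        f1_rev.append(unlabeled_f1(extract_tree(reverse_sentence(tokens)),
                                   mirror_spans(gold, n)))
    return sum(f1_orig) / len(f1_orig) - sum(f1_rev) / len(f1_rev)

Note that in the paper's full protocol the language model itself is also applied to the reversed language, so in practice the comparison controls for the model as well as the extraction step; the sketch only shows how the two scores would be compared once both sets of trees are available.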
