Are Pre-trained Language Models Aware of Phrases? Simple but Strong Baselines for Grammar Induction

01/30/2020
by Taeuk Kim, et al.

With the recent success and popularity of pre-trained language models (LMs) in natural language processing, there has been a rise in efforts to understand their inner workings. In line with this interest, we propose a novel method for investigating the extent to which pre-trained LMs capture the syntactic notion of constituency. Our method provides an effective way of extracting constituency trees from pre-trained LMs without any additional training. In addition, we report intriguing findings about the induced trees, including that pre-trained LMs outperform other approaches in correctly demarcating adverb phrases in sentences.
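
For intuition, below is a minimal sketch of the kind of distance-based, top-down procedure commonly used to induce unlabeled binary constituency trees: given "syntactic distances" between adjacent words, the sentence is recursively split at the gap with the largest distance. This is an illustration under stated assumptions rather than the paper's exact extraction method; the function name is hypothetical, and the toy distances stand in for scores that would, in practice, be derived from a pre-trained LM's contextual representations or attention distributions.

```python
# Illustrative sketch (not the paper's released code): induce an unlabeled,
# binarized constituency tree from per-gap "syntactic distances", where
# distances[i] scores how strong a constituent boundary lies between
# words[i] and words[i + 1]. In practice these scores would be derived
# from a pre-trained LM; here they are hand-crafted for the toy example.
from typing import List, Tuple, Union

Tree = Union[str, Tuple["Tree", "Tree"]]


def build_tree(words: List[str], distances: List[float]) -> Tree:
    """Recursively split the word sequence at the largest distance."""
    if len(words) == 1:
        return words[0]
    # The strongest boundary determines the top-level split of this span.
    split = max(range(len(distances)), key=lambda i: distances[i]) + 1
    left = build_tree(words[:split], distances[:split - 1])
    right = build_tree(words[split:], distances[split:])
    return (left, right)


if __name__ == "__main__":
    words = ["the", "cat", "sat", "on", "the", "mat"]
    distances = [0.2, 0.9, 0.6, 0.3, 0.1]  # strongest boundary after "cat"
    print(build_tree(words, distances))
    # (('the', 'cat'), ('sat', ('on', ('the', 'mat'))))
```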

Related research

10/26/2022 · Benchmarking Language Models for Code Syntax Understanding
Pre-trained language models have demonstrated impressive performance in ...

05/25/2022 · Detecting Label Errors using Pre-Trained Language Models
We show that large pre-trained language models are extremely capable of ...

03/10/2023 · Robotic Applications of Pre-Trained Vision-Language Models to Various Recognition Behaviors
In recent years, a number of models that learn the relations between vis...

03/12/2021 · Improving Authorship Verification using Linguistic Divergence
We propose an unsupervised solution to the Authorship Verification task ...

02/16/2021 · Have Attention Heads in BERT Learned Constituency Grammar?
With the success of pre-trained language models in recent years, more an...

05/12/2021 · How Reliable are Model Diagnostics?
In the pursuit of a deeper understanding of a model's behaviour, there i...

04/15/2022 · On the Role of Pre-trained Language Models in Word Ordering: A Case Study with BART
Word ordering is a constrained language generation task taking unordered...
