Have Attention Heads in BERT Learned Constituency Grammar?

02/16/2021
by Ziyang Luo, et al.

With the success of pre-trained language models in recent years, more and more researchers have focused on opening the "black box" of these models. Following this interest, we carry out a qualitative and quantitative analysis of constituency grammar in the attention heads of BERT and RoBERTa. We employ the syntactic distance method to extract implicit constituency grammar from the attention weights of each head. Our results show that some heads induce certain grammar types much better than baselines, suggesting that these heads act as a proxy for constituency grammar. We also analyze how the constituency grammar inducing (CGI) ability of attention heads changes after fine-tuning on two kinds of tasks: sentence meaning similarity (SMS) tasks and natural language inference (NLI) tasks. Our results suggest that SMS tasks decrease the average CGI ability of the upper layers, while NLI tasks increase it. Lastly, we investigate the connection between CGI ability and natural language understanding ability on the QQP and MNLI tasks.
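The syntactic distance method mentioned above can be sketched as follows: each gap between adjacent tokens is assigned a distance derived from a head's attention weights, and a binary constituency tree is then built by recursively splitting the sentence at the largest remaining distance. The code below is a minimal illustrative sketch, not the paper's exact procedure; in particular, the distance definition (negative average of the mutual attention between adjacent tokens) and the greedy top-down splitting rule are assumptions made for illustration.

```python
# Illustrative sketch of inducing a binary constituency tree from one attention
# head via per-gap "syntactic distances" (assumed formulation, not the paper's).

import numpy as np


def syntactic_distances(attn: np.ndarray) -> np.ndarray:
    """attn: (seq_len, seq_len) attention weights of a single head.
    Returns one distance per gap between adjacent tokens (seq_len - 1 values).
    Assumption: tokens that attend strongly to each other are syntactically
    "close", so their gap receives a small distance."""
    n = attn.shape[0]
    return np.array([-(attn[i, i + 1] + attn[i + 1, i]) / 2.0 for i in range(n - 1)])


def build_tree(tokens, dists):
    """Greedy top-down parsing: split at the gap with the largest distance."""
    if len(tokens) <= 1:
        return tokens[0] if tokens else None
    split = int(np.argmax(dists))  # gap with the weakest cohesion
    left = build_tree(tokens[: split + 1], dists[:split])
    right = build_tree(tokens[split + 1 :], dists[split + 1 :])
    return (left, right)


if __name__ == "__main__":
    tokens = ["the", "cat", "sat", "down"]
    rng = np.random.default_rng(0)
    attn = rng.random((4, 4))
    attn = attn / attn.sum(axis=-1, keepdims=True)  # row-normalise like softmax output
    print(build_tree(tokens, syntactic_distances(attn)))
```

The induced tree can then be compared against gold constituency parses to score each head's CGI ability, with random or right-branching trees serving as natural baselines.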

