Probing via Prompting

07/04/2022
by Jiaoda Li, et al.

Probing is a popular method for discerning what linguistic information is contained in the representations of pre-trained language models. However, the choice of probe model has recently been subject to intense debate, as it is unclear whether probes merely extract information or model the linguistic property themselves. To address this challenge, this paper introduces a novel model-free approach to probing by formulating probing as a prompting task. We conduct experiments on five probing tasks and show that our approach is comparable to or better than diagnostic probes at extracting information, while learning much less on its own. We further combine the probing-via-prompting approach with attention head pruning to analyze where in its architecture the model stores the linguistic information. Finally, we examine the usefulness of a specific linguistic property for pre-training by removing the heads that are essential to that property and evaluating the resulting model's performance on language modeling.
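The head-pruning step described above can be sketched in miniature: score each attention head for its contribution to a probing task, then mask out the least important heads. The `prune_heads` function, the `(layer, head)` indexing, and all scores below are illustrative assumptions, not the paper's actual implementation.

```python
# Toy sketch of attention-head pruning: keep only the heads whose
# (hypothetical) importance scores for a probing task rank highest.

def prune_heads(importance, keep_ratio):
    """Return a binary mask keeping the top `keep_ratio` fraction of heads.

    importance: dict mapping (layer, head) -> importance score.
    """
    n_keep = max(1, int(len(importance) * keep_ratio))
    ranked = sorted(importance, key=importance.get, reverse=True)
    kept = set(ranked[:n_keep])
    return {head: (1 if head in kept else 0) for head in importance}

# Hypothetical scores for a model with 2 layers x 3 heads.
scores = {
    (0, 0): 0.9, (0, 1): 0.1, (0, 2): 0.4,
    (1, 0): 0.2, (1, 1): 0.8, (1, 2): 0.05,
}
mask = prune_heads(scores, keep_ratio=0.5)
```

In the paper's second experiment, the mask would instead be inverted: the heads essential to a property are removed, and the pruned model is re-evaluated on language modeling.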
