
Low-Complexity Probing via Finding Subnetworks

by Steven Cao et al.

The dominant approach in probing neural networks for linguistic properties is to train a new shallow multi-layer perceptron (MLP) on top of the model's internal representations. This approach can detect properties encoded in the model, but at the cost of adding new parameters that may learn the task directly. We instead propose a subtractive pruning-based probe, where we find an existing subnetwork that performs the linguistic task of interest. Compared to an MLP, the subnetwork probe achieves both higher accuracy on pre-trained models and lower accuracy on random models, so it is both better at finding properties of interest and worse at learning on its own. Next, by varying the complexity of each probe, we show that subnetwork probing Pareto-dominates MLP probing in that it achieves higher accuracy given any budget of probe complexity. Finally, we analyze the resulting subnetworks across various tasks to locate where each task is encoded, and we find that lower-level tasks are captured in lower layers, reproducing similar findings in past work.
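The core idea of the subnetwork probe is to keep the model's weights frozen and train only a mask over them, selecting an existing subnetwork that already performs the task. A minimal sketch of this mask-learning setup, on a hypothetical 4-weight linear "model" rather than a real pre-trained network (the weights, task, and learning rate below are illustrative assumptions, not the paper's setup):

```python
import math
import random

random.seed(0)

# Frozen "model" weights: in the paper these would be a pre-trained
# network's parameters; here a hypothetical 4-weight linear model.
w = [2.0, -3.0, 0.5, 7.0]

# The probing task only depends on the first two weights, so the
# subnetwork we hope to recover is the mask [1, 1, 0, 0].
def task_label(x):
    return 2.0 * x[0] - 3.0 * x[1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Mask logits are the only trainable parameters; w stays frozen.
m = [0.0] * 4
lr, steps, batch = 1.0, 2000, 16

for _ in range(steps):
    s = [sigmoid(mi) for mi in m]          # soft mask in (0, 1)
    grad = [0.0] * 4
    for _b in range(batch):
        x = [random.uniform(-1, 1) for _ in range(4)]
        pred = sum(w[i] * s[i] * x[i] for i in range(4))
        err = pred - task_label(x)
        for i in range(4):
            # d(squared error)/d(m_i), through s_i = sigmoid(m_i)
            grad[i] += 2.0 * err * w[i] * x[i] * s[i] * (1.0 - s[i]) / batch
    for i in range(4):
        m[i] -= lr * grad[i]

mask = [round(sigmoid(mi), 2) for mi in m]
print("learned mask:", mask)  # mass concentrates on the first two weights
```

Because no new weights are added, whatever the masked model can do must already be encoded in the original parameters; this is why a subnetwork probe scores low on randomly initialized models, unlike an MLP probe that can learn the task itself.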




Related Papers

Designing and Interpreting Probes with Control Tasks

Probes, supervised models trained to predict properties (like parts-of-s...

Post-hoc analysis of Arabic transformer models

Arabic is a Semitic language which is widely spoken with many dialects. ...

Pareto Probing: Trading Off Accuracy for Complexity

The question of how to probe contextual word representations in a way th...

On the Pitfalls of Analyzing Individual Neurons in Language Models

While many studies have shown that linguistic information is encoded in ...

Probing via Prompting

Probing is a popular method to discern what linguistic information is co...

Visualizing the Relationship Between Encoded Linguistic Information and Task Performance

Probing is popular to analyze whether linguistic information can be capt...

A Non-Linear Structural Probe

Probes are models devised to investigate the encoding of knowledge – e.g...