A Non-Linear Structural Probe

05/21/2021
by Jennifer C. White et al.

Probes are models devised to investigate the encoding of knowledge – e.g. syntactic structure – in contextual representations. Probes are often designed for simplicity, which has led to restrictions on probe design that may not allow for the full exploitation of the structure of encoded information; one such restriction is linearity. We examine the case of a structural probe (Hewitt and Manning, 2019), which aims to investigate the encoding of syntactic structure in contextual representations through learning only linear transformations. By observing that the structural probe learns a metric, we are able to kernelize it and develop a novel non-linear variant with an identical number of parameters. We test on 6 languages and find that the radial-basis function (RBF) kernel, in conjunction with regularization, achieves a statistically significant improvement over the baseline in all languages – implying that at least part of the syntactic knowledge is encoded non-linearly. We conclude by discussing how the RBF kernel resembles BERT's self-attention layers and speculate that this resemblance leads to the RBF-based probe's stronger performance.
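
To make the kernelization concrete: the Hewitt and Manning probe scores a word pair by the squared distance ||B h_i - B h_j||^2, which expands into the inner products <B h_i, B h_j>; replacing each inner product with a kernel evaluation yields a non-linear probe whose only learned parameters are still the entries of B. The PyTorch sketch below illustrates this for the RBF kernel. It is a minimal reading of the abstract, not the authors' released code; the class name, the bandwidth hyperparameter sigma, and the choice to apply the kernel to the projected vectors B h are assumptions made here for illustration.

```python
import torch
import torch.nn as nn


class KernelizedStructuralProbe(nn.Module):
    """Sketch of a kernelized structural probe (names are illustrative).

    The linear probe scores d(i, j) = ||B h_i - B h_j||^2, which expands
    into inner products <B h_i, B h_j>.  Substituting a kernel k for the
    inner product gives a non-linear variant; the matrix B remains the
    only learned parameter, so the parameter count is unchanged.
    """

    def __init__(self, dim: int, rank: int, sigma: float = 1.0):
        super().__init__()
        # B projects representations of size `dim` down to `rank`.
        self.B = nn.Parameter(0.01 * torch.randn(rank, dim))
        # RBF bandwidth: a fixed hyperparameter, not a learned weight.
        self.sigma = sigma

    def _rbf_gram(self, z: torch.Tensor) -> torch.Tensor:
        # k(u, v) = exp(-||u - v||^2 / (2 sigma^2)) for all pairs of rows.
        sq_dists = ((z.unsqueeze(1) - z.unsqueeze(0)) ** 2).sum(-1)
        return torch.exp(-sq_dists / (2 * self.sigma ** 2))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (seq_len, dim) contextual representations for one sentence.
        z = h @ self.B.T          # (seq_len, rank) projected vectors
        gram = self._rbf_gram(z)  # (seq_len, seq_len) kernel values
        diag = gram.diagonal()
        # Kernel substitution: k(z_i, z_i) - 2 k(z_i, z_j) + k(z_j, z_j).
        return diag.unsqueeze(1) - 2 * gram + diag.unsqueeze(0)


# Usage: pairwise "syntactic distances" for a 12-token sentence from a
# hypothetical 768-dimensional encoder such as BERT.
probe = KernelizedStructuralProbe(dim=768, rank=64)
h = torch.randn(12, 768)
distances = probe(h)  # (12, 12), comparable to gold tree distances
```

A linear kernel k(u, v) = <u, v> recovers the original probe exactly, and since sigma is a fixed hyperparameter rather than a weight, the RBF variant keeps the identical parameter count the abstract mentions. In training, the predicted distances would be fit to gold dependency-tree distances (e.g. with an L1 loss), as in the original structural probe.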

Related research

05/04/2020 · A Tale of a Probe and a Parser
Measuring what linguistic information is encoded in neural models of lan...

11/11/2022 · The Architectural Bottleneck Principle
In this paper, we seek to measure how much information a component in a ...

10/05/2020 · Pareto Probing: Trading Off Accuracy for Complexity
The question of how to probe contextual word representations in a way th...

04/13/2022 · Probing for Constituency Structure in Neural Language Models
In this paper, we investigate to which extent contextual neural language...

05/15/2019 · What do you learn from context? Probing for sentence structure in contextualized word representations
Contextualized representation models such as ELMo (Peters et al., 2018a)...

05/22/2023 · GATology for Linguistics: What Syntactic Dependencies It Knows
Graph Attention Network (GAT) is a graph neural network which is one of ...

03/02/2022 · Discontinuous Constituency and BERT: A Case Study of Dutch
In this paper, we set out to quantify the syntactic capacity of BERT in ...
