Visualizing and Measuring the Geometry of BERT

06/06/2019
by Andy Coenen, et al. (Google)

Transformer architectures show significant promise for natural language processing. Given that a single pretrained model can be fine-tuned to perform well on many different tasks, these networks appear to extract generally useful linguistic features. A natural question is how such networks represent this information internally. This paper describes qualitative and quantitative investigations of one particularly effective model, BERT. At a high level, linguistic features seem to be represented in separate semantic and syntactic subspaces. We find evidence of a fine-grained geometric representation of word senses. We also present empirical descriptions of syntactic representations in both attention matrices and individual word embeddings, as well as a mathematical argument to explain the geometry of these representations.

1 Introduction

Neural networks for language processing have advanced rapidly in recent years. A key breakthrough was the introduction of transformer architectures Vaswani (2017). One recent system based on this idea, BERT Devlin (2018), has proven to be extremely flexible: a single pretrained model can be fine-tuned to achieve state-of-the-art performance on a wide variety of NLP applications. This suggests the model is extracting a set of generally useful features from raw text. It is natural to ask, which features are extracted? And how is this information represented internally?

Similar questions have arisen with other types of neural nets. Investigations of convolutional neural networks Lecun (1995); Krizhevsky (2012) have shown how representations change from layer to layer Zeiler (2014); how individual units in a network may have meaning Carter (2019); and that "meaningful" directions exist in the space of internal activations Kim (2017). These explorations have led to a broader understanding of network behavior.

Analyses on language-processing models (e.g., Blevins (2018); Hewitt (2019); Linzen (2016); Peters (2018); Tenney (2018)) point to the existence of similarly rich internal representations of linguistic structure. Syntactic features seem to be extracted by RNNs (e.g., Blevins (2018); Linzen (2016)) as well as in BERT Tenney (2018, 2019); Liu (2019); Peters (2018). Inspirational work from Hewitt and Manning Hewitt (2019) found evidence of a geometric representation of entire parse trees in BERT’s activation space.

Our work extends these explorations of the geometry of internal representations. Investigating how BERT represents syntax, we describe evidence that attention matrices contain grammatical representations. We also provide mathematical arguments that may explain the particular form of the parse tree embeddings described in Hewitt (2019). Turning to semantics, using visualizations of the activations created by different pieces of text, we show suggestive evidence that BERT distinguishes word senses at a very fine level. Moreover, much of this semantic information appears to be encoded in a relatively low-dimensional subspace.

2 Context and related work

Our object of study is the BERT model introduced in Devlin (2018). To set context and terminology, we briefly describe the model's architecture. The input to BERT is based on a sequence of tokens (words or pieces of words). The output is a sequence of vectors, one for each input token. We will often refer to these vectors as context embeddings because they include information about a token's context.

BERT's internals consist of two parts. First, an initial embedding for each token is created by combining a pre-trained wordpiece embedding with position and segment information. Next, this initial sequence of embeddings is run through multiple transformer layers, producing a new sequence of context embeddings at each step. (BERT comes in two versions, a 12-layer BERT-base model and a 24-layer BERT-large model.) Implicit in each transformer layer is a set of attention matrices, one for each attention head, each of which contains a scalar attention value for each ordered pair of tokens.

2.1 Language representation by neural networks

Sentences are sequences of discrete symbols, yet neural networks operate on continuous data: vectors in high-dimensional space. Clearly, a successful network translates discrete input into some kind of geometric representation, but in what form? And which linguistic features are represented?

The influential Word2Vec system Mikolov (2013), for example, has been shown to place related words near each other in space, with certain directions corresponding to semantic distinctions. Grammatical information such as number and tense is also represented via directions in space. Analyses of the internal states of RNN-based models have shown that they represent information about soft hierarchical syntax in a form that can be extracted by a one-hidden-layer network Linzen (2016). One investigation of full-sentence embeddings found that a wide variety of syntactic properties could be extracted not just by an MLP, but by logistic regression Conneau (2018).

Several investigations have focused on transformer architectures. Experiments suggest that context embeddings in BERT and related models contain enough information to perform many tasks in the traditional "NLP pipeline" Tenney (2019), such as part-of-speech tagging, co-reference resolution, and dependency labeling, with simple classifiers (linear or small MLP models) Tenney (2018); Peters (2018). Qualitative, visualization-based work Vig (2019) suggests that attention matrices may encode important relations between words.

A recent and fascinating discovery by Hewitt and Manning Hewitt (2019), which motivates much of our work, is that BERT seems to create a direct representation of an entire dependency parse tree. The authors find that (after a single global linear transformation, which they term a "structural probe") the square of the distance between context embeddings is roughly proportional to tree distance in the dependency parse. They ask why squaring distance is necessary; we address this question in the next section.
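
To make the probed quantity concrete, here is a minimal sketch (our own, not Hewitt and Manning's released code) of the squared Euclidean distance after a linear map B. The shapes and the random B below are purely illustrative; in the actual probe, B is learned so that this value tracks parse tree distance.

```python
import torch

def probe_sq_distance(h_i: torch.Tensor, h_j: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    """Squared distance ||B(h_i - h_j)||^2 between two context embeddings."""
    diff = B @ (h_i - h_j)
    return diff.pow(2).sum()

# Illustrative shapes only: 1024-dim BERT-large embeddings, a rank-64 probe matrix B.
h_i, h_j = torch.randn(1024), torch.randn(1024)
B = torch.randn(64, 1024)  # in the structural probe, B is learned from data
print(probe_sq_distance(h_i, h_j, B).item())
```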

The work cited above suggests that language-processing networks create a rich set of intermediate representations of both semantic and syntactic information. These results lead to two motivating questions for our research. Can we find other examples of intermediate representations? And, from a geometric perspective, how do all these different types of information coexist in a single vector?

3 Geometry of syntax

We begin by exploring BERT’s internal representation of syntactic information. This line of inquiry builds on the work by Hewitt and Manning in two ways. First, we look beyond context embeddings to investigate whether attention matrices encode syntactic features. Second, we provide a simple mathematical analysis of the tree embeddings that they found.

3.1 Attention probes and dependency representations

As in Hewitt (2019), we are interested in finding representations of dependency grammar relations De Marneffe (2006). While Hewitt (2019) analyzed context embeddings, another natural place to look for encodings is in the attention matrices. After all, attention matrices are explicitly built on the relations between pairs of words.

Figure 1: A model-wide attention vector for an ordered pair of tokens contains the scalar attention values for that pair in all attention heads and layers. Shown: BERT-base.

To formalize what it means for attention matrices to encode linguistic features, we use an attention probe, an analog of edge probing Tenney (2018). An attention probe is a task for a pair of tokens, where the input is a model-wide attention vector formed by concatenating the entries in every attention matrix from every attention head in every layer. The goal is to classify a given relation between the two tokens. If a linear model achieves reliable accuracy, it seems reasonable to say that the model-wide attention vector encodes that relation. We apply attention probes to the task of identifying the existence and type of dependency relation between two words.
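
As an illustration of how such a model-wide attention vector can be assembled, the sketch below uses the HuggingFace transformers API (an assumption on our part; the paper does not specify its extraction code) to concatenate the attention entry for one ordered token pair across every layer and head of BERT-base.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True).eval()

inputs = tokenizer("The cat sat on the mat .", return_tensors="pt")
with torch.no_grad():
    attentions = model(**inputs).attentions  # tuple of 12 tensors, each (1, 12 heads, seq, seq)

i, j = 2, 3  # an ordered pair of token positions
model_wide_vector = torch.cat([layer[0, :, i, j] for layer in attentions])
print(model_wide_vector.shape)  # 12 layers x 12 heads = 144 scalars for BERT-base
```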

3.1.1 Method

The data for our first experiment is a corpus of parsed sentences from the Penn Treebank Marcus (1993). This dataset provides constituency parses, which we converted to dependency parses using the PyStanfordDependencies library McClosky (2015). The entirety of the Penn Treebank contains 3.1 million dependency relations; we filtered it to the 30 dependency relations with more than 5,000 examples each. We then ran each sentence through BERT-base and obtained the model-wide attention vector (see Figure 1) between every pair of tokens in the sentence, excluding the [SEP] and [CLS] tokens. This and subsequent experiments were conducted using PyTorch on MacBook machines.

With these labeled model-wide attention vectors, we trained two L2-regularized linear classifiers via stochastic gradient descent, using scikit-learn Pedregosa (2011). The first probe was a simple linear binary classifier predicting whether or not an attention vector corresponds to the existence of a dependency relation between two tokens; it was trained with a balanced class split and a 30% train/test split. The second probe was a multiclass classifier predicting which type of dependency relation holds between two tokens, given that a relation exists; it was trained with the distributions outlined in Table 2.
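
A minimal sketch of the two probes, assuming the model-wide attention vectors have already been extracted; the arrays below are random placeholders and the hyperparameters are illustrative rather than our exact settings.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

# Placeholder data: one 144-dim model-wide attention vector per token pair.
X = np.random.rand(10_000, 144)
y_exists = np.random.randint(0, 2, 10_000)    # binary: does a dependency relation exist?
y_type = np.random.randint(0, 30, 10_000)     # multiclass: which of the 30 relation types?

X_tr, X_te, y_tr, y_te = train_test_split(X, y_exists, test_size=0.3)
binary_probe = SGDClassifier(penalty="l2")    # L2-regularized linear classifier trained by SGD
binary_probe.fit(X_tr, y_tr)
print("binary accuracy:", binary_probe.score(X_te, y_te))

# The multiclass probe is trained only on pairs that do have a relation.
multiclass_probe = SGDClassifier(penalty="l2").fit(X, y_type)
```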

3.1.2 Results

The binary probe achieved an accuracy of 85.8%, and the multiclass probe achieved an accuracy of 71.9%. Our real aim, again, is not to create a state-of-the-art parser, but to gauge whether model-wide attention vectors contain a relatively simple representation of syntactic features. The success of this simple linear probe suggests that syntactic information is in fact encoded in the attention vectors.

3.2 Geometry of parse tree embeddings

Hewitt and Manning’s result that context embeddings represent dependency parse trees geometrically raises several questions. Is there a reason for the particular mathematical representation they found? Can we learn anything by visualizing these representations?

3.2.1 Mathematics of embedding trees in Euclidean space

Hewitt and Manning ask why parse tree distance seems to correspond specifically to the square of Euclidean distance, and whether some other metric might do better Hewitt (2019). We describe mathematical reasons why squared Euclidean distance may be natural.

First, one cannot generally embed a tree, with its intrinsic tree metric d, isometrically into Euclidean space (see Appendix 6.1). Since an isometric embedding is impossible, and motivated by the results of Hewitt (2019), we might ask about other possible representations.

Definition 1 (power-p embedding).

Let M be a metric space with metric d. We say f : M → R^n is a power-p embedding if for all x, y in M, we have

||f(x) - f(y)||^p = d(x, y).

In these terms, we can say Hewitt (2019) found evidence of a power-2 embedding for parse trees. It turns out that power-2 embeddings are an especially elegant mapping. For one thing, it is easy to write down an explicit model, a mathematical idealization, of a power-2 embedding for any tree. (We have learned that a similar argument to the proof of Theorem 1 appears in Maehara (2013).)

Theorem 1.

Any tree with n nodes has a power-2 embedding into R^(n-1).

Proof.

Let the nodes of the tree be t_0, ..., t_(n-1), with t_0 the root node. Let e_1, ..., e_(n-1) be orthogonal unit basis vectors for R^(n-1). Inductively, define an embedding f by

f(t_0) = 0, and f(t_i) = e_i + f(parent(t_i)).

Given two distinct tree nodes x and y, where m is the tree distance d(x, y), it follows that we can move from f(x) to f(y) using m mutually perpendicular unit steps. Thus ||f(x) - f(y)||^2 = m = d(x, y). ∎
Remark 1.

This embedding has a simple informal description: at each embedded vertex of the graph, all line segments to neighboring embedded vertices are unit-distance segments, orthogonal to each other and to every other edge segment. (It’s even easy to write down a set of coordinates for each node.) By definition any two power-2 embeddings of the same tree are isometric; with that in mind, we refer to this as the canonical power-2 embedding.

In the proof of Theorem 1, instead of choosing basis vectors in advance, one can choose random unit vectors. Because two random vectors will be nearly orthogonal in high-dimensional space, the power-2 embedding condition will approximately hold. This means that in a space that is sufficiently high-dimensional (compared to the size of the tree), it is possible to construct an approximate power-2 embedding using essentially "local" information, where a tree node is connected to its children via random unit-length branches. We refer to this type of embedding as a random branch embedding. (See Appendix 6.2 for a visualization of these various embeddings.)
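
For concreteness, the following sketch (our own; the parent-array tree encoding is just a convenience) constructs both the canonical power-2 embedding from the proof of Theorem 1 and a random branch embedding, and checks the squared-distance property on a toy tree.

```python
import numpy as np

def canonical_power2(parent):
    """parent[i] is the parent index of node i; parent[0] = -1 marks the root.
    Nodes are assumed to be ordered so that each parent precedes its children."""
    n = len(parent)
    emb = np.zeros((n, n - 1))
    for i in range(1, n):
        emb[i] = emb[parent[i]]
        emb[i, i - 1] += 1.0                      # add the orthogonal unit basis vector e_i
    return emb

def random_branch(parent, dim=1024, seed=0):
    """Approximate power-2 embedding: random unit-length branches in high dimension."""
    rng = np.random.default_rng(seed)
    n = len(parent)
    emb = np.zeros((n, dim))
    for i in range(1, n):
        step = rng.normal(size=dim)
        emb[i] = emb[parent[i]] + step / np.linalg.norm(step)
    return emb

# Toy tree: node 0 is the root; 1 and 2 are children of 0; 3 is a child of 1.
parent = [-1, 0, 0, 1]
emb = canonical_power2(parent)
print(np.sum((emb[2] - emb[3]) ** 2))   # 3.0 = tree distance between nodes 2 and 3
```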

In addition to these appealing aspects of power-2 embeddings, it is worth noting that power-p embeddings will not necessarily even exist when p < 2. (See Appendix 6.1 for the proof.)

Theorem 2.

For any p < 2, there is a tree which has no power-p embedding.

Remark 2.

On the other hand, the existence result for power-2 embeddings, coupled with results of Schoenberg (1937), implies that power-p tree embeddings do exist for any p ≥ 2.

The simplicity of power-2 tree embeddings, as well as the fact that they may be approximated by a simple random model, suggests they may be a generally useful alternative to approaches to tree embeddings that require hyperbolic geometry Nickel (2017).

3.2.2 Visualization of parse tree embeddings

Figure 2: Visualizing embeddings of two sentences after applying the Hewitt-Manning probe. We compare the parse tree (left images) with a PCA projection of context embeddings (right images).

How do parse tree embeddings in BERT compare to exact power-2 embeddings? To explore this question, we created a simple visualization tool. The input to each visualization is a sentence from the Penn Treebank with its associated dependency parse tree (see Section 3.1.1). We then extracted the token embeddings produced by BERT-large in layer 16 (following Hewitt (2019)) and transformed them by Hewitt and Manning's "structural probe" matrix, yielding a set of points in 1024-dimensional space. We used PCA to project to two dimensions. (Other dimensionality-reduction methods, such as t-SNE and UMAP McInnes (2018), were harder to interpret.)

To visualize the tree structure, we connected pairs of points representing words with a dependency relation. The color of each edge indicates the deviation from true tree distance. We also connected, with dotted lines, pairs of words without a dependency relation whose positions (before PCA) were far closer than expected. The resulting image lets us see both the overall shape of the tree embedding and fine-grained information on deviations from a true power-2 embedding.
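
A rough sketch of this visualization pipeline, under the assumption that the probe-transformed embeddings and the dependency edges are already available; function and argument names are placeholders, not the released tool.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def plot_tree_embedding(embeddings, probe_B, edges):
    """embeddings: (n_tokens, 1024); probe_B: (k, 1024); edges: list of (i, j) dependency pairs."""
    transformed = embeddings @ probe_B.T                  # apply the structural probe
    xy = PCA(n_components=2).fit_transform(transformed)
    for i, j in edges:
        # For a dependency edge the true tree distance is 1; color by the deviation from it.
        deviation = np.sum((transformed[i] - transformed[j]) ** 2) - 1.0
        plt.plot(xy[[i, j], 0], xy[[i, j], 1],
                 color=plt.cm.coolwarm(0.5 + np.clip(deviation, -1, 1) / 2))
    plt.scatter(xy[:, 0], xy[:, 1], s=12)
    plt.show()
```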

Two example visualizations are shown in Figure 2, next to traditional diagrams of their underlying parse trees. These are typical cases, illustrating some common patterns; for instance, prepositions are embedded unexpectedly close to the words they relate to. (Figure 7 shows additional examples.)

Figure 3: The average squared edge length between two words with a given dependency.

A natural question is whether the difference between these projected trees and the canonical ones is merely noise, or a more interesting pattern. By looking at the average embedding distances of each dependency relation (see Figure 3), we can see that they vary widely, from around 1.2 for some relations to around 2.5 for others. Such systematic differences suggest that BERT's syntactic representation has an additional quantitative aspect beyond traditional dependency grammar.

4 Geometry of word senses

BERT seems to have several ways of representing syntactic information. What about semantic features? Since embeddings produced by transformer models depend on context, it is natural to speculate that they capture the particular shade of meaning of a word as used in a particular sentence. (E.g., is “bark” an animal noise or part of a tree?) We explored geometric representations of word sense both qualitatively and quantitatively.

4.1 Visualization of word senses

Our first experiment is an exploratory visualization of how word sense affects context embeddings. For data on different word senses, we collected all sentences used in the introductions to English-language Wikipedia articles. (Text outside of introductions was frequently fragmentary.) We created an interactive application, which we plan to make public. A user enters a word, and the system retrieves 1,000 sentences containing that word. It sends these sentences to BERT-base as input, and for each one it retrieves the context embedding for the word from a layer of the user’s choosing.

The system visualizes these 1,000 context embeddings using UMAP McInnes (2018), generally showing clear clusters relating to word senses. Different senses of a word are typically spatially separated, and within the clusters there is often further structure related to fine shades of meaning. In Figure 4, for example, we not only see crisp, well-separated clusters for three meanings of the word “die,” but within one of these clusters there is a kind of quantitative scale, related to the number of people dying.
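
The sketch below shows one way such a visualization can be reproduced, assuming the HuggingFace transformers and umap-learn packages; the two toy sentences (repeated so UMAP has enough points) stand in for the roughly 1,000 Wikipedia sentences per query word used by our tool.

```python
import numpy as np
import torch
import umap  # pip install umap-learn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True).eval()

def context_embedding(sentence, word, layer=12):
    """Context embedding of `word` in `sentence` from a chosen BERT-base layer (0-12)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states[layer][0]      # (seq_len, 768)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index(word)].numpy()                 # assumes `word` is a single wordpiece

# Toy stand-in for the ~1,000 Wikipedia sentences per query word.
sentences = ["He will die of boredom .", "Each die shows a number ."] * 50
points = np.stack([context_embedding(s, "die") for s in sentences])
coords = umap.UMAP().fit_transform(points)                    # 2D layout; clusters reflect senses
```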

See Appendix 6.4 for further examples. The apparent detail in the clusters we visualized raises two immediate questions. First, is it possible to find quantitative corroboration that word senses are well-represented? Second, how can we resolve a seeming contradiction: in the previous section, we saw how position represented syntax; yet here we see position representing semantics.

Figure 4: Embeddings for the word "die" in different contexts, visualized with UMAP. Sample points are annotated with corresponding sentences. Overall annotations (blue text) are added as a guide.

4.2 Measurement of word sense disambiguation capability

The crisp clusters seen in visualizations such as Figure 4 suggest that BERT may create simple, effective internal representations of word senses, putting different meanings in different locations. To test this hypothesis quantitatively, we test whether a simple classifier on these internal representations can perform well at word-sense disambiguation (WSD).

We follow the procedure described in Peters (2018), which performed a similar experiment with the ELMo model. For a given word with n senses, we build a nearest-neighbor classifier where each neighbor is the centroid of a given word sense's BERT-base embeddings in the training data. To classify a new word, we find the closest of these centroids, defaulting to the most commonly used sense if the word was not present in the training data. We used the data and evaluation from Raganato (2017): the training data was SemCor Miller (1993) (33,362 senses), and the testing data was the suite described in Raganato (2017) (3,669 senses).
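
A minimal sketch of this centroid-based classifier; the data structures are placeholders rather than the SemCor pipeline itself.

```python
import numpy as np

def fit_sense_centroids(train_embeddings):
    """train_embeddings: dict mapping sense label -> array of shape (n_occurrences, dim)."""
    return {sense: vecs.mean(axis=0) for sense, vecs in train_embeddings.items()}

def predict_sense(embedding, centroids, fallback_sense):
    """Nearest centroid wins; fall back to the most frequent sense for unseen words."""
    if not centroids:
        return fallback_sense
    return min(centroids, key=lambda s: np.linalg.norm(embedding - centroids[s]))
```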

The simple nearest-neighbor classifier achieves an F1 score of 71.1, higher than the current state of the art (Table 1), with the accuracy monotonically increasing through the layers. This is a strong signal that context embeddings are representing word-sense information. Additionally, an even higher score of 71.5 was obtained using the technique described in the following section.

Method F1 score
Baseline (most frequent sense) 64.8
ELMo Peters (2018) 70.1
BERT 71.1
BERT (w/ probe) 71.5

Probe dimension Trained probe Random probe
768 (full) 71.26 70.74
512 71.52 70.51
256 71.29 69.92
128 71.21 69.56
64 70.19 68.00
32 68.01 64.62
16 65.34 61.01

Table 1: [Left] F1 scores for the WSD task. [Right] Semantic probe accuracy (%) on final-layer BERT-base embeddings, by probe dimension.

4.2.1 An embedding subspace for word senses?

We hypothesized that there might also exist a linear transformation under which distances between embeddings would better reflect their semantic relationships–that is, words of the same sense would be closer together and words of different senses would be further apart.

To explore this hypothesis, we trained a probe following Hewitt and Manning's methodology. We initialized a random matrix mapping the 768-dimensional embeddings into a k-dimensional subspace, testing different values for k. The loss is, roughly, the difference between the average cosine similarity between embeddings of words with different senses and that between embeddings of the same sense. However, we clamped the cosine similarity terms to within a fixed margin of the pre-training averages for same and different senses. (Without clamping, the trained matrix simply took well-separated clusters and separated them further; we tested several values for the clamping range and kept the one with the best performance.)
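
One possible reading of this objective, sketched in PyTorch; the clamp margin, the shapes, and the choice to clamp the averaged similarities (rather than individual terms) are our assumptions here, not a definitive account of the training code.

```python
import torch
import torch.nn.functional as F

def probe_loss(B, same_pairs, diff_pairs, same_avg, diff_avg, margin=0.1):
    """same_pairs / diff_pairs: (n_pairs, 2, 768) embedding pairs; *_avg: pre-training averages."""
    def mean_cos(pairs):
        a = pairs[:, 0] @ B.T          # project both embeddings into the k-dim probe subspace
        b = pairs[:, 1] @ B.T
        return F.cosine_similarity(a, b, dim=-1).mean()
    same = torch.clamp(mean_cos(same_pairs), same_avg - margin, same_avg + margin)
    diff = torch.clamp(mean_cos(diff_pairs), diff_avg - margin, diff_avg + margin)
    return diff - same                 # pull same-sense pairs together, push different senses apart

B = torch.randn(128, 768, requires_grad=True)   # illustrative k = 128 probe, trained with e.g. Adam
loss = probe_loss(B, torch.randn(32, 2, 768), torch.randn(32, 2, 768), same_avg=0.8, diff_avg=0.6)
loss.backward()
```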

Our training corpus was the same SemCor dataset described in Section 4.2, filtered to include only words with at least two senses, each with at least two occurrences (covering 8,542 of the original 33,362 senses). Embeddings came from BERT-base (12 layers, 768-dimensional embeddings).

We evaluate our trained probes on the same dataset and WSD task used in Section 4.2 (Table 1). As a control, we compare each trained probe against a random probe of the same shape. As mentioned in Section 4.2, untransformed BERT embeddings achieve a state-of-the-art accuracy of 71.1%. We find that our trained probes achieve slightly improved accuracy at dimensions down to k = 128.

Though our probe achieves only a modest improvement in accuracy for final-layer embeddings, we were able to improve the performance of earlier-layer embeddings much more substantially (see Figure 10 in the Appendix). This suggests there is more semantic information in the geometry of earlier-layer embeddings than a first glance might reveal.

Our results also support the idea that word sense information may be contained in a lower-dimensional space. This suggests a resolution to the seeming contradiction mentioned above: a vector encodes both syntax and semantics, but in separate complementary subspaces.

4.3 Embedding distance and context: a concatenation experiment

If word sense is affected by context, and encoded by location in space, then we should be able to influence context embedding positions by systematically varying their context. To test this hypothesis, we performed an experiment based on a simple and controllable context change: concatenating sentences where the same word is used in different senses.

4.3.1 Method

We picked 25,096 sentence pairs from SemCor, each using the same keyword in two different senses. For example:

A: "He thereupon went to London and spent the winter talking to men of wealth." went: to move from one place to another.
B: "He went prone on his stomach, the better to pursue his examination." went: to enter into a specified state.

We define a matching and an opposing sense centroid for each keyword. For sentence A, the matching sense centroid is the average embedding for all occurrences of “went” used with sense A. A’s opposing sense centroid is the average embedding for all occurrences of “went” used with sense B.

We gave each individual sentence in the pair to BERT-base and recorded the cosine similarity between the keyword embeddings and their matching sense centroids. We also recorded the similarity between the keyword embeddings and their opposing sense centroids. We call the ratio between the two similarities the individual similarity ratio. Generally this ratio is greater than one, meaning that the context embedding for the keyword is closer to the matching centroid than the opposing one.

We joined each sentence pair with the word "and" to create a single new sentence.

We gave these concatenations to BERT and recorded the similarities between the keyword embeddings and their matching/opposing sense centroids. Their ratio is the concatenated similarity ratio.
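
For clarity, the ratio computation itself is just the following small sketch (variable names are ours); it is applied once to the keyword in the individual sentence and once to the same keyword inside the concatenated sentence.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def similarity_ratio(keyword_emb, matching_centroid, opposing_centroid):
    """> 1 means the keyword embedding is closer to its matching sense centroid."""
    return cosine(keyword_emb, matching_centroid) / cosine(keyword_emb, opposing_centroid)
```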

4.3.2 Results

Figure 5: Average ratio of similarity to sense A vs. similarity to sense B.

Our hypothesis was that the keyword embeddings in the concatenated sentence would move towards their opposing sense centroids. Indeed, we found that the average individual similarity ratio was higher than the average concatenated similarity ratio at every layer (see Figure 5). Concatenating a random sentence did not change the individual similarity ratios. If the ratio is less than one for any sentence, that means BERT has misclassified its keyword sense. We found that the misclassification rate was significantly higher for final-layer embeddings in the concatenated sentences compared to the individual sentences: 8.23% versus 2.43% respectively.

We also measured the effect of projecting the final-layer keyword embeddings into the semantic subspace discussed in Section 4.2.1. After multiplying each embedding by our trained semantic probe, we obtained an average concatenated similarity ratio of 1.578 and an average individual similarity ratio of 1.875, compared to 1.284 and 1.430 for the original embeddings, suggesting that the transformed embeddings are closer to their matching sense centroids. We also measured lower average misclassification rates for the transformed embeddings: 7.31% for concatenated sentences and 2.27% for individual sentences.

5 Conclusion and future work

We have presented a series of experiments that shed light on BERT’s internal representations of linguistic information. We have found evidence of syntactic representation in attention matrices, with certain directions in space representing particular dependency relations. We have also provided a mathematical justification for the squared-distance tree embedding found by Hewitt and Manning.

Meanwhile, we have shown that just as there are specific syntactic subspaces, there is evidence for subspaces that represent semantic information. We also have shown how mistakes in word sense disambiguation may correspond to changes in internal geometric representation of word meaning. Our experiments also suggest an answer to the question of how all these different representations fit together. We conjecture that the internal geometry of BERT may be broken into multiple linear subspaces, with separate spaces for different syntactic and semantic information.

Investigating this kind of decomposition is a natural direction for future research. What other meaningful subspaces exist? After all, there are many types of linguistic information that we have not looked for.

A second important avenue of exploration is what the internal geometry can tell us about the specifics of the transformer architecture. Can an understanding of the geometry of internal representations help us find areas for improvement, or refine BERT’s architecture?

Acknowledgments: We would like to thank David Belanger, Tolga Bolukbasi, Jasper Snoek, and Ian Tenney for helpful feedback and discussions.

References

6 Appendix

6.1 Embedding trees in Euclidean space

Here we provide additional detail on the existence of various forms of tree embeddings.

Isometric embeddings of a tree (with its intrinsic tree metric) into Euclidean space are rare. Indeed, such an embedding is impossible even for a four-point tree consisting of a root node r with three children c_1, c_2, c_3. If f is a tree isometry, then ||f(r) - f(c_i)|| = 1 for each child and ||f(c_i) - f(c_j)|| = 2 for each pair of children. It follows that f(c_1), f(r), f(c_2) are collinear, and the same can be said of f(c_1), f(r), f(c_3); this forces f(c_2) = f(c_3), a contradiction.

Since even this four-point tree cannot be embedded, the only trees that can be embedded isometrically are simple chains.

Not only are isometric embeddings generally impossible, but power-p embeddings may also be unavailable when p < 2, as the following argument shows.

Proof of Theorem 2

Proof.

We covered the case p = 1 above. When p < 1, even a tree of three points is impossible to embed without violating the triangle inequality. To handle the case 1 < p < 2, consider a "star-shaped" tree consisting of one root node with k children; without loss of generality, assume the root node is embedded at the origin. Then in any power-p embedding the k child vertices are sent to unit vectors e_1, ..., e_k, and for each pair of children we need ||e_i - e_j||^p = 2, that is, ||e_i - e_j|| = 2^(1/p).

On the other hand, a well-known folk theorem (e.g., see [1]) says that given k unit vectors, at least one pair of distinct vectors e_i, e_j satisfies <e_i, e_j> >= -1/(k - 1). By the law of cosines, it follows that ||e_i - e_j||^2 <= 2 + 2/(k - 1). For any p < 2, there is a sufficiently large k such that 2^(2/p) > 2 + 2/(k - 1). Thus for any p < 2, a large enough star-shaped tree cannot have a power-p embedding. ∎
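
A quick numeric illustration of the final step, under the reconstruction above: for a fixed p < 2, the required pairwise distance 2^(2/p) eventually exceeds the achievable bound 2 + 2/(k - 1) as k grows.

```python
p = 1.5
required = 2 ** (2 / p)                  # squared distance needed between any two children
for k in (3, 10, 30, 100):
    achievable = 2 + 2 / (k - 1)         # folk-theorem bound via the law of cosines
    print(k, required > achievable)      # False for k = 3, True once k is large enough
```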

6.2 Ideal vs. actual parse tree embeddings

Figure 6: PCA projection of the context embeddings for the sentence “The field has reserves of 21 million barrels.” transformed by Hewitt and Manning’s “structural probe” matrix, compared to the canonical power-2 embedding, a random branch embedding, and a completely random embedding.

Figure 6 shows (left) a visualization of a BERT parse tree embedding (as defined by the context embeddings for the individual words in a sentence). We compare it with PCA projections of the canonical power-2 embedding of the same tree structure, as well as a random branch embedding. Finally, we display a completely randomly embedded tree as a control. The visualizations show a clear visual similarity between the BERT embedding and the two mathematical idealizations.

6.3 Additional BERT parse tree visualizations

Figure 7 shows four additional examples of PCA projections of BERT parse tree embeddings.

Figure 7: Additional examples of BERT parse trees. In each pair, at left is a drawing of the abstract tree; at right is a PCA view of the embeddings. Colors are the same as in Figure 6.

6.4 Additional word sense visualizations

We provide two additional examples of word sense visualizations, hand-annotated to show key clusters. See Figure 8 and Figure 9.

Figure 8: Context embeddings for “lie” as used in different sentences.
Figure 9: Context embeddings for “lie” as used in different sentences.

6.5 Dependency relation performance

Dependency precision recall n
advcl 0.34 0.08 1381
advmod 0.32 0.32 6653
amod 0.68 0.48 10830
aux 0.64 0.08 6914
auxpass 0.68 0.50 1501
cc 0.84 0.77 5041
ccomp 0.67 0.78 2792
conj 0.64 0.85 5146
cop 0.49 0.16 2053
det 0.81 0.95 15322
dobj 0.74 0.66 7957
mark 0.58 0.67 2160
neg 0.83 0.17 1265
nn 0.67 0.82 11650
npadvmod 0.53 0.23 580
nsubj 0.72 0.83 14084
nsubjpass 0.30 0.14 1255
num 0.82 0.55 3464
number 0.77 0.74 1182
pcomp 0.14 0.01 957
pobj 0.78 0.97 17146
poss 0.74 0.54 3567
possessive 0.83 0.86 1449
prep 0.79 0.92 17797
prt 0.67 0.33 593
rcmod 0.55 0.30 1516
tmod 0.55 0.15 672
vmod 0.84 0.07 1705
xcomp 0.72 0.40 2203
all 0.72 0.72 150000
Table 2: Per-dependency results of multiclass linear classifier trained on attention vectors, with 300,000 training examples and 150,000 test examples.

6.6 Semantic probe performance across layers

Figure 10: Change in classification accuracy by layer for different probe dimensionalities.