An information theoretic view on selecting linguistic probes

09/15/2020 · Zining Zhu et al., University of Toronto

There is increasing interest in assessing the linguistic knowledge encoded in neural representations. A popular approach is to attach a diagnostic classifier, or "probe", to perform supervised classification from internal representations. However, how to select a good probe is under debate. Hewitt and Liang (2019) showed that high performance on diagnostic classification is itself insufficient, because it can be attributed to either "the representation being rich in knowledge" or "the probe learning the task", a dichotomy that Pimentel et al. (2020) challenged. We show this dichotomy is valid information-theoretically. In addition, we find that the methods to construct and select good probes proposed by the two papers, the *control task* (Hewitt and Liang, 2019) and the *control function* (Pimentel et al., 2020), are equivalent: the errors of their approaches are identical (modulo irrelevant terms). Empirically, these two selection criteria lead to results that highly agree with each other.


1 Introduction

Recently, neural networks have shown substantial progress in NLP tasks (Devlin et al., 2019; Radford et al., 2019). To understand and explain their behavior, a natural question emerges: how much linguistic knowledge is encoded in these neural network systems?

An efficient approach to revealing the information encoded in internal representations uses diagnostic classifiers (Alain and Bengio, 2017). Referred to as "probes", diagnostic classifiers are trained on pre-computed intermediate representations of neural NLP systems. Their performance on the tasks they are trained to predict is used to evaluate how richly the representations encode the probed information. Such tasks include probing syntax (Hewitt and Manning, 2019; Lin et al., 2019; Tenney et al., 2019), semantics (Yaghoobzadeh et al., 2019), discourse features (Chen et al., 2019; Liu et al., 2019; Tenney et al., 2019), and commonsense knowledge (Petroni et al., 2019; Poerner et al., 2019).

However, the appropriate criteria for selecting a good probe are under debate. The traditional view that high-accuracy probes are better was challenged by Hewitt and Liang (2019), who proposed that high accuracy could be attributed either (1) to the representation containing rich linguistic knowledge, or (2) to the probe learning the task. To circumvent this ambiguity, they proposed to use the improvement of probing task performance over a control task (predicting random labels from the same representations), i.e., the "selectivity" criterion. Recently, Pimentel et al. (2020) challenged this dichotomy from an information theoretic viewpoint. They proposed an "information gain" criterion, which empirically is the reduction in cross entropy relative to a "control function" task that probes from randomized representations.

In this paper, we show the “non-exclusive-or” dichotomy raised by Hewitt and Liang (2019) is valid information-theoretically. There is a difference between the original NLP model learning the task and the probe learning the task.

In addition, we show that the "selectivity" criterion and the "control function" criterion are comparably accurate. Pimentel et al. (2020) formulated the error of their approach as the difference between a pair of KL divergences. We show that the error of the "selectivity" criterion (Hewitt and Liang, 2019), if measured with cross entropy loss, can likewise be formulated as the difference between a pair of KL divergences. When the randomizations are perfect, these two criteria differ by only constant terms.

Empirically, on POS tag probing tasks on English, French, and Spanish, we show that the "selectivity" and "control function" criteria highly agree with each other. We rank experiments with over 10,000 different hyperparameter settings using these criteria. The Spearman correlations between the Hewitt and Liang (2019) and Pimentel et al. (2020) criteria are on par with the correlations of "accuracy vs. cross entropy loss", two very strong baselines.

Overall, we recommend using control mechanisms to select probes, instead of relying merely on probing task performance. When the randomization is done well, controlling the targets and controlling the representations are equivalent.

2 Related work

Diagnostic probes were originally intended to explain information encoded in intermediate representations (Adi et al., 2017; Alain and Bengio, 2017; Belinkov et al., 2017). Recently, various probing tasks have queried the representations of, e.g., contextualized word embeddings (Tenney et al., 2019, 2019) and sentence embeddings (Linzen et al., 2016; Chen et al., 2019; Alt et al., 2020; Kassner and Schütze, 2020; Maudslay et al., 2020; Chi et al., 2020).

The task of probing is usually formulated as a classification problem, with the representations as input and the linguistic features of interest as output. A straightforward way to train such a classifier is by minimizing cross entropy, which is the approach we follow. Note that Voita and Titov (2020) derived training objectives from minimum description length, resulting in the cross entropy loss and some variants.

3 Information theoretic probes

3.1 Formulation

We adopt the information theoretic formulation of linguistic probes of Pimentel et al. (2020), which we briefly summarize as follows.

We want to probe true labels $Y$ from representations $R$. An ideal probe should accurately report the code-target mutual information $I(R; Y)$, which is unfortunately intractable. We will write $I(R; Y)$ in an alternative form.

Let $p(y \mid r)$ be the unknown true conditional distribution, and let a diagnostic probe, following the setting in the literature (Alain and Bengio, 2017; Hewitt and Manning, 2019; Maudslay et al., 2020), be an approximation $q_\theta(y \mid r)$ parameterized by $\theta$. Then:

$$I(R; Y) = H(Y) - H(Y \mid R) = H(Y) - H_{q_\theta}(Y \mid R) + \mathbb{E}_{r \sim p(r)}\big[\mathrm{KL}\big(p(\cdot \mid r) \,\|\, q_\theta(\cdot \mid r)\big)\big],$$

where $H(Y)$ and $H(Y \mid R)$ stand for the entropy of $Y$ and the conditional entropy of $Y$ given $R$, respectively. We also use $H_{q_\theta}(Y \mid R) = \mathbb{E}_{(r, y) \sim p}\left[-\log q_\theta(y \mid r)\right]$ to represent the cross entropy, for simplicity.
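As a quick sanity check, the following minimal sketch verifies this decomposition numerically on a made-up discrete joint distribution (all names and distributions here are ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n_r, n_y = 4, 3
p_r = rng.dirichlet(np.ones(n_r))                    # p(r)
p_y_given_r = rng.dirichlet(np.ones(n_y), size=n_r)  # p(y|r), one row per r
q = rng.dirichlet(np.ones(n_y), size=n_r)            # probe q_theta(y|r)

p_y = p_r @ p_y_given_r                              # marginal p(y)
H_y = -np.sum(p_y * np.log(p_y))                     # H(Y)
joint = p_r[:, None] * p_y_given_r                   # p(r, y)

# cross entropy H_q(Y|R) and the expected KL term E_r[KL(p(.|r) || q(.|r))]
H_q = -np.sum(joint * np.log(q))
E_kl = np.sum(joint * np.log(p_y_given_r / q))
# I(R;Y) computed directly from the joint distribution
I_ry = np.sum(joint * np.log(joint / (p_r[:, None] * p_y[None, :])))

assert np.isclose(I_ry, H_y - H_q + E_kl)            # the identity holds
```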

3.2 The source of probing error

A valid dichotomy

Traditionally, the cross entropy loss of the diagnostic probe, $H_{q_\theta}(Y \mid R)$, is used to approximate $H(Y \mid R)$. We can expose a source of error by rearranging the formulation above:

$$H_{q_\theta}(Y \mid R) = H(Y) - I(R; Y) + \mathbb{E}_{r \sim p(r)}\big[\mathrm{KL}\big(p(\cdot \mid r) \,\|\, q_\theta(\cdot \mid r)\big)\big].$$

The first term on the RHS, $H(Y)$, is independent of both $R$ and $\theta$. Therefore, a low cross entropy loss can be caused by either of two scenarios:

  • High code-target mutual information $I(R; Y)$, indicating the representation contains rich information about the target $Y$.

  • Low KL divergence between $p(\cdot \mid r)$ and $q_\theta(\cdot \mid r)$, indicating the probe learns the task.

The two scenarios exactly correspond to the dichotomy of Hewitt and Liang (2019).

A good probe

To get a good probe, we want to approximate $I(R; Y)$ as closely as possible. This means a good probe should minimize $\mathbb{E}_{r \sim p(r)}\big[\mathrm{KL}\big(p(\cdot \mid r) \,\|\, q_\theta(\cdot \mid r)\big)\big]$, as proposed by Pimentel et al. (2020).

However, empirically Pimentel et al. (2020) (as well as many previous articles) used the cross entropy loss to select good probes, which is insufficient, as described above. Alternatively, Hewitt and Liang (2019) and Pimentel et al. (2020) proposed control tasks and control functions, respectively.

3.3 The control task

The control task (Hewitt and Liang, 2019) sets random targets for the probing task. Let us use $c(\cdot)$ to indicate a "control function" applied to a token that originally has label $Y$, so the control target is $c(Y)$. The control function could nullify the information of the input if necessary, i.e., $c(Y)$ can be made independent of $R$.

If we measure the difference between the cross entropy on the control task and on the probing task, we can derive the error margin of this measurement. (Note that Hewitt and Liang (2019) used accuracy instead of cross entropy. We discuss cross entropy so as to compare the errors against those of the control function (Pimentel et al., 2020).) Let us use short-hand notations for clarity: we write $\mathrm{KL}(p_{Y \mid R} \,\|\, q_\theta)$ for $\mathbb{E}_{r \sim p(r)}[\mathrm{KL}(p(\cdot \mid r) \,\|\, q_\theta(\cdot \mid r))]$, and $\theta_t$ for the parameters of the probe trained on the control task. Now, what does the difference between the cross entropy on the control task and on the probing task actually contain?

We already know that $H_{q_\theta}(Y \mid R) = H(Y) - I(R; Y) + \mathrm{KL}(p_{Y \mid R} \,\|\, q_\theta)$. According to the definition of the control function, the output $c(Y)$ is independent of $R$, so $I(R; c(Y)) = 0$. Then:

$$H_{q_{\theta_t}}(c(Y) \mid R) = H(c(Y)) + \mathrm{KL}(p_{c(Y) \mid R} \,\|\, q_{\theta_t}).$$

Therefore:

$$H_{q_{\theta_t}}(c(Y) \mid R) - H_{q_\theta}(Y \mid R) = I(R; Y) + H(c(Y)) - H(Y) + \epsilon_t,$$

where $\epsilon_t$ is a short-hand notation for the measurement error of the control task criterion:

$$\epsilon_t = \mathrm{KL}(p_{c(Y) \mid R} \,\|\, q_{\theta_t}) - \mathrm{KL}(p_{Y \mid R} \,\|\, q_\theta). \qquad (1)$$

When the probe fits the true distribution to a similar extent on both the control task and the probing task, the error $\epsilon_t$ will be small. Unfortunately, both KL terms are intractable. A sketch of the control task construction follows.
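For illustration, here is a minimal sketch of constructing a control task in the spirit of Hewitt and Liang (2019): each word type is deterministically mapped to a label drawn at random once, so the mapping can only be learned by memorization. The helper and variable names (`make_control_labels`, `vocab`, `n_labels`) are our own, hypothetical ones:

```python
import random

def make_control_labels(vocab, n_labels, seed=0):
    rng = random.Random(seed)
    # one fixed random label per word type, drawn once before all experiments
    return {w: rng.randrange(n_labels) for w in vocab}

# usage: control_targets = [control_map[tok] for tok in tokens]
# selectivity = probing-task accuracy - control-task accuracy
```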

3.4 The control function

The control function (Pimentel et al., 2020) introduces a random processor $e(\cdot)$ applied to the representation $R$. To measure the information gain, they used the "information gain" criterion:

$$\mathcal{G}(e) = I(R; Y) - I(e(R); Y).$$

Noticing that the mutual information terms are intractable, they approximated this objective with the difference between the cross entropy on the control function task (which we refer to as the "control function" henceforth) and on the probing task:

$$\hat{\mathcal{G}}(e) = H_{q_{\theta_f}}(Y \mid e(R)) - H_{q_\theta}(Y \mid R).$$

To compute the error of this approximation, they reformulated the terms as follows:

$$H_{q_\theta}(Y \mid R) = H(Y) - I(R; Y) + \mathrm{KL}(p_{Y \mid R} \,\|\, q_\theta),$$
$$H_{q_{\theta_f}}(Y \mid e(R)) = H(Y) - I(e(R); Y) + \mathrm{KL}(p_{Y \mid e(R)} \,\|\, q_{\theta_f}).$$

The two $H(Y)$ terms cancel out, then:

$$\hat{\mathcal{G}}(e) = \mathcal{G}(e) + \epsilon_f,$$

where we abbreviate the KL terms as we did for the control task. Specifically, we write $\theta_f$ for the parameters of the control function probe, to tell them apart from $\theta_t$ in the control task. Pimentel et al. (2020) showed that the error of their approximation, $\epsilon_f$, can be expressed as:

$$\epsilon_f = \mathrm{KL}(p_{Y \mid e(R)} \,\|\, q_{\theta_f}) - \mathrm{KL}(p_{Y \mid R} \,\|\, q_\theta). \qquad (2)$$

Again, when the probe fits the true distribution to a similar extent on both the control function task and the probing task, $\epsilon_f$ will be small. Unfortunately, both KL terms are intractable too. A sketch of such a control function follows.
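For illustration, a minimal sketch of a control function: each token representation is replaced by one that carries no information about $Y$ (here, one fixed uniform-random vector per word type, drawn once before all experiments, matching the setup in Section 4.1; the helper names are ours, and the exact randomization in the original paper may differ):

```python
import numpy as np

def control_function(reps, word_ids, seed=0):
    """Replace each token representation with a fixed random vector
    for its word type, so e(R) carries no information about Y."""
    rng = np.random.default_rng(seed)
    lookup, out = {}, np.empty_like(reps)
    for i, w in enumerate(word_ids):
        if w not in lookup:  # one random vector per word type, drawn once
            lookup[w] = rng.uniform(-1.0, 1.0, size=reps.shape[1])
        out[i] = lookup[w]
    return out

# the gain is then estimated as H_q(Y | e(R)) - H_q(Y | R), the difference
# in cross entropy between the control function task and the probing task
```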

| Language | # POS | # Tokens (train / dev / test) | (t_acc, f_ent) | (t_acc, t_ent) | (f_acc, f_ent) |
| --- | --- | --- | --- | --- | --- |
| English | 17 | 177k / 22k / 22k | 0.1615 | 0.1334 | 0.1763 |
| French | 15 | 303k / 31k / 8k | 0.0906 | 0.0606 | 0.1295 |
| Spanish | 16 | 341k / 33k / 11k | 0.1360 | 0.0560 | 0.1254 |

Table 1: Spearman correlations between t_acc (the "selectivity" criterion; Hewitt and Liang, 2019) and f_ent (the "gain" criterion; Pimentel et al., 2020) are on par with the two "accuracy vs. cross entropy" correlations.

3.5 Control task vs control function

From Equations (1) and (2), we see that the selectivity criterion of Hewitt and Liang (2019) and the information gain criterion of Pimentel et al. (2020), when both are measured in cross entropy loss, have very similar errors in approximating the information gain.

These errors, $\epsilon_t$ and $\epsilon_f$ respectively, appear in very similar forms. (When the two randomizations are ideal, $\epsilon_t$ and $\epsilon_f$ differ by only irrelevant terms; we include the derivation in Appendix A.) Probes selected by these two criteria should therefore be highly correlated with each other, as our experiments confirm.

4 Experiments

4.1 Setup

We use the same family of probes as Hewitt and Liang (2019) and Pimentel et al. (2020), namely multilayer perceptrons with ReLU activations, to show the correlations of their "good probe" criteria (the control task and the control function, respectively).

Overall, we sweep the probe model hyperparameters with a unified training scheme on three tasks (probing, control task, control function). The control task (function) setting uses labels (embeddings) drawn from a uniform random sample once before all experiments. In each training run, we follow the setting of Hewitt and Liang (2019). We save the model with the best dev loss, report the test set loss and accuracy, and average across 4 different random seeds. A sketch of this probe family follows.
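Below is a minimal sketch of the probe family described above (the function name and defaults are ours; hidden sizes and depths are among the swept hyperparameters):

```python
# An MLP probe: zero or more ReLU hidden layers followed by a linear
# map to label logits, trained with cross entropy loss.
import torch.nn as nn

def make_probe(in_dim, n_labels, hidden=()):
    layers, d = [], in_dim
    for h in hidden:                        # zero or more hidden layers
        layers += [nn.Linear(d, h), nn.ReLU()]
        d = h
    layers.append(nn.Linear(d, n_labels))   # logits for the POS labels
    return nn.Sequential(*layers)

# e.g., make_probe(768, 17)                # a linear probe (0 hidden layers)
#       make_probe(768, 17, hidden=(40,))  # one hidden layer, 40 neurons
```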

Data

We use the Universal Dependencies (Zeman et al., 2019) dataset, loaded with the Flair toolkit (Akbik et al., 2018). We examine three languages: English, French, and Spanish. For the probing task, we use POS tagging, with labels provided by SpaCy (https://spacy.io). We use the embeddings of multilingual BERT (mBERT), as implemented by HuggingFace (Wolf et al., 2019). If a word is split into multiple word pieces, we average its representations. A sketch of this extraction follows.
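A sketch of the word-piece averaging described above, using huggingface transformers (exact API details vary across library versions; special tokens are omitted for brevity):

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertModel.from_pretrained("bert-base-multilingual-cased")

def word_representations(sentence_words):
    # encode the whole sentence, then average the pieces of each word
    pieces, spans = [], []
    for w in sentence_words:
        wp = tokenizer.tokenize(w)
        spans.append((len(pieces), len(pieces) + len(wp)))
        pieces.extend(wp)
    ids = torch.tensor([tokenizer.convert_tokens_to_ids(pieces)])
    with torch.no_grad():
        hidden = model(ids).last_hidden_state[0]    # (n_pieces, 768)
    return torch.stack([hidden[a:b].mean(dim=0) for a, b in spans])
```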

4.2 The “good probes” are good for both

When measuring the quality of probes using either the "selectivity" (Hewitt and Liang, 2019) or the "information gain" (Pimentel et al., 2020) criterion, we find that the rules of thumb for training good probes largely agree:


  • Early stopping before 24,000 gradient steps (approximately 4 epochs) could inhibit probe quality, but longer training does not improve probe quality considerably.

  • Smaller probes are better in general, but exceptions exist. For example, when weight decay is set to 0, probes with one hidden layer and 40 hidden neurons are better by both criteria.

  • A small weight decay is beneficial.

We include more descriptions, including comprehensive experiment configurations and plots in the Supplementary Material.

4.3 The high correlation between criteria

In addition to the qualitative correlations shown above, we compute the correlations between the two criteria over a grid-search style hyper-parameter sweep of over 10,000 configurations. For each “probe, control task, control function” experiment set, we record the following four criteria:


  • t_acc: Difference between probing task and control task accuracy. This is the “selectivity” criterion of Hewitt and Liang (2019).

  • f_ent: Difference between control function and probing task cross entropy. This is the “gain” criterion of Pimentel et al. (2020).

  • t_ent: Difference between the control task and the probing task cross entropy.

  • f_acc: Difference between the probing task and control function accuracy.

We collect all experiments for each language according to these criteria, and use Spearman correlation to compare three pairs of criteria. As reported in Table 1, the (t_acc, f_ent) correlations are comparable to two strong baselines, (t_acc, t_ent) and (f_acc, f_ent), the correlations between measurements in accuracy and in cross entropy loss. A sketch of this computation follows.
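A sketch of the ranking comparison (the `results` list is hypothetical, with one record per hyperparameter configuration, filled here with placeholder numbers):

```python
from scipy.stats import spearmanr

results = [
    {"t_acc": 0.30, "f_ent": 0.52, "t_ent": 0.48, "f_acc": 0.28},
    {"t_acc": 0.25, "f_ent": 0.47, "t_ent": 0.45, "f_acc": 0.24},
    {"t_acc": 0.33, "f_ent": 0.55, "t_ent": 0.50, "f_acc": 0.31},
]
col = lambda k: [r[k] for r in results]

rho_tf, _ = spearmanr(col("t_acc"), col("f_ent"))  # selectivity vs. gain
rho_tt, _ = spearmanr(col("t_acc"), col("t_ent"))  # baseline: acc vs. loss
rho_ff, _ = spearmanr(col("f_acc"), col("f_ent"))  # baseline: acc vs. loss
```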

5 Conclusion

When selecting probes that better approximate $I(R; Y)$, we recommend measuring with a control mechanism instead of relying on the traditional cross entropy loss on the probing task. We show, both information-theoretically and empirically, that controlling the targets and controlling the representations are equivalent, as long as the control mechanism is properly randomized.

6 Acknowledgement

We would like to thank Mohamed Abdalla for his insights and discussion. Rudzicz is supported by a CIFAR Chair in artificial intelligence.

References

  • Y. Adi, E. Kermany, Y. Belinkov, O. Lavi, and Y. Goldberg (2017) Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. In ICLR, Toulon, France.
  • A. Akbik, D. Blythe, and R. Vollgraf (2018) Contextual String Embeddings for Sequence Labeling. In COLING, pp. 1638–1649.
  • G. Alain and Y. Bengio (2017) Understanding intermediate layers using linear classifier probes. In ICLR, Toulon, France.
  • C. Alt, A. Gabryszak, and L. Hennig (2020) Probing Linguistic Features of Sentence-Level Representations in Neural Relation Extraction. In ACL.
  • Y. Belinkov, N. Durrani, F. Dalvi, H. Sajjad, and J. Glass (2017) What do Neural Machine Translation Models Learn about Morphology? In ACL, pp. 861–872.
  • M. Chen, Z. Chu, and K. Gimpel (2019) Evaluation Benchmarks and Learning Criteria for Discourse-Aware Sentence Representations. In EMNLP, pp. 649–662.
  • E. A. Chi, J. Hewitt, and C. D. Manning (2020) Finding Universal Grammatical Relations in Multilingual BERT. In ACL.
  • J. Devlin, M. Chang, K. Lee, and K. Toutanova (2019) BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL, pp. 4171–4186.
  • J. Hewitt and P. Liang (2019) Designing and Interpreting Probes with Control Tasks. In EMNLP, pp. 2733–2743.
  • J. Hewitt and C. D. Manning (2019) A Structural Probe for Finding Syntax in Word Representations. In NAACL, pp. 4129–4138.
  • N. Kassner and H. Schütze (2020) Negated and Misprimed Probes for Pretrained Language Models: Birds Can Talk, But Cannot Fly. In ACL.
  • Y. Lin, Y. C. Tan, and R. Frank (2019) Open Sesame: Getting Inside BERT's Linguistic Knowledge. In ACL BlackBoxNLP Workshop.
  • T. Linzen, E. Dupoux, and Y. Goldberg (2016) Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies. TACL 4, pp. 521–535.
  • N. F. Liu, M. Gardner, Y. Belinkov, M. E. Peters, and N. A. Smith (2019) Linguistic Knowledge and Transferability of Contextual Representations. In NAACL, pp. 1073–1094.
  • R. H. Maudslay, J. Valvoda, T. Pimentel, A. Williams, and R. Cotterell (2020) A Tale of a Probe and a Parser. In ACL.
  • F. Petroni, T. Rocktäschel, S. Riedel, P. Lewis, A. Bakhtin, Y. Wu, and A. Miller (2019) Language Models as Knowledge Bases? In EMNLP, pp. 2463–2473.
  • T. Pimentel, J. Valvoda, R. H. Maudslay, R. Zmigrod, A. Williams, and R. Cotterell (2020) Information-Theoretic Probing for Linguistic Structure. In ACL.
  • N. Poerner, U. Waltinger, and H. Schütze (2019) BERT is Not a Knowledge Base (Yet): Factual Knowledge vs. Name-Based Reasoning in Unsupervised QA. arXiv preprint arXiv:1911.03681.
  • A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever (2019) Language models are unsupervised multitask learners. OpenAI Blog 1(8), pp. 1–24.
  • I. Tenney, D. Das, and E. Pavlick (2019) BERT Rediscovers the Classical NLP Pipeline. In ACL, pp. 4593–4601.
  • I. Tenney, P. Xia, B. Chen, A. Wang, A. Poliak, R. T. McCoy, N. Kim, B. Van Durme, S. R. Bowman, D. Das, and E. Pavlick (2019) What do you learn from context? Probing for sentence structure in contextualized word representations. In ICLR.
  • E. Voita and I. Titov (2020) Information-Theoretic Probing with Minimum Description Length. arXiv preprint arXiv:2003.12298.
  • T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, and J. Brew (2019) HuggingFace's Transformers: State-of-the-art Natural Language Processing. ArXiv abs/1910.0, pp. 1–11.
  • Y. Yaghoobzadeh, K. Kann, T. J. Hazen, E. Agirre, and H. Schütze (2019) Probing for Semantic Classes: Diagnosing the Meaning Content of Word Embeddings. In ACL, pp. 5740–5753.
  • D. Zeman, J. Nivre, M. Abrams, et al. (2019) Universal Dependencies 2.5. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University.

Appendix A Difference between the two criteria

In Section 3 we showed that the errors of the two criteria can both be written as the difference between a pair of KL divergences (modulo constant terms). Here we further simplify these terms under the assumption that the control task and the control function are perfectly random (i.e., independent of the task labels and the representations, respectively).

When we use the same hyperparameter settings, the probing-task probes in Equations (1) and (2) approximate $p_{Y \mid R}$ to the same extent, so the $\mathrm{KL}(p_{Y \mid R} \,\|\, q_\theta)$ terms cancel out. Additionally, following the definitions of the control task and the control function, $c(Y)$ is independent of $R$ and $Y$ is independent of $e(R)$, so we can simplify the remaining terms as follows:

$$\mathrm{KL}(p_{c(Y) \mid R} \,\|\, q_{\theta_t}) = \mathbb{E}_{r \sim p(r)}\big[\mathrm{KL}\big(p(c(Y)) \,\|\, q_{\theta_t}(\cdot \mid r)\big)\big],$$
$$\mathrm{KL}(p_{Y \mid e(R)} \,\|\, q_{\theta_f}) = \mathbb{E}_{e(r)}\big[\mathrm{KL}\big(p(Y) \,\|\, q_{\theta_f}(\cdot \mid e(r))\big)\big].$$

Therefore, the difference between the errors of the criteria of Hewitt and Liang (2019) and Pimentel et al. (2020) is:

$$\epsilon_t - \epsilon_f = \mathbb{E}_{r}\big[\mathrm{KL}\big(p(c(Y)) \,\|\, q_{\theta_t}(\cdot \mid r)\big)\big] - \mathbb{E}_{e(r)}\big[\mathrm{KL}\big(p(Y) \,\|\, q_{\theta_f}(\cdot \mid e(r))\big)\big]. \qquad (3)$$

In short, these two criteria differ by terms dependent only on the randomization functions and the inherent distributions of task labels, i.e., irrelevant terms.
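As a numerical sanity check, a sketch on a made-up discrete setup (all distributions and probe tables below are ours) verifies that $\epsilon_t - \epsilon_f$, computed from the identities behind Equations (1) and (2), matches the right-hand side of Equation (3) when the controls are perfectly random:

```python
import numpy as np
from scipy.stats import entropy  # entropy(p) = H(p); entropy(p, q) = KL(p||q)

rng = np.random.default_rng(1)
n_r, n_s, n_y = 4, 3, 3
p_r = rng.dirichlet(np.ones(n_r))              # p(r)
p_s = rng.dirichlet(np.ones(n_s))              # p(e(r)), independent of Y
p_y_r = rng.dirichlet(np.ones(n_y), size=n_r)  # p(y|r)
p_y = p_r @ p_y_r                              # p(y) = p(y|e(r)) for all e(r)
p_c = rng.dirichlet(np.ones(n_y))              # p(c(y)), independent of R

q = rng.dirichlet(np.ones(n_y), size=n_r)      # probing-task probe
q_t = rng.dirichlet(np.ones(n_y), size=n_r)    # control-task probe
q_f = rng.dirichlet(np.ones(n_y), size=n_s)    # control-function probe

joint = p_r[:, None] * p_y_r
I_ry = np.sum(joint * np.log(joint / (p_r[:, None] * p_y[None, :])))

xent = lambda p, qq: -np.sum(p * np.log(qq))   # cross entropy of two pmfs
H_q = sum(p_r[i] * xent(p_y_r[i], q[i]) for i in range(n_r))   # H_q(Y|R)
H_qt = sum(p_r[i] * xent(p_c, q_t[i]) for i in range(n_r))     # H_qt(c(Y)|R)
H_qf = sum(p_s[j] * xent(p_y, q_f[j]) for j in range(n_s))     # H_qf(Y|e(R))

# errors recovered from the cross entropy differences (Eqs. (1) and (2));
# here I(e(R); Y) = 0 by construction
eps_t = (H_qt - H_q) - (I_ry + entropy(p_c) - entropy(p_y))
eps_f = (H_qf - H_q) - I_ry
rhs = (sum(p_r[i] * entropy(p_c, q_t[i]) for i in range(n_r))
       - sum(p_s[j] * entropy(p_y, q_f[j]) for j in range(n_s)))
assert np.isclose(eps_t - eps_f, rhs)          # Equation (3) holds
```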

Appendix B Experiments details

Hewitt and Liang (2019) proposed some rules of thumb to select good probes, including a "simple probe" suggestion. We sweep the hyperparameters to test whether these rules also apply when probe quality is measured using the control function (Pimentel et al., 2020).

Hyper-parameter ranges

We sweep hyperparameters over the following ranges:

  • Learning rate:

  • Maximum gradient steps: . Their effects are shown in Figure 1.

  • Weight decay: . Their effects are shown in Figures 2, 3 and 4. When sweeping weight decay, the maximum gradient step is set to 24,000.

For each configuration mentioned above, we run four experiments with random seeds 73, 421, 9973, and 361091, and average the reported results (i.e., accuracy and loss).

Early stopping could inhibit probe quality

Early stopping before 24,000 gradient steps (approximately 4 epochs) may inhibit the quality of probes. In addition, Figure 1 shows a high correlation between the "selectivity" (Hewitt and Liang, 2019) and "information gain" (Pimentel et al., 2020) criteria.

Small weight decay is beneficial

We find that small weight decays (e.g., 0.01) are more beneficial for probes than larger weight decays. While the two criteria rank the capacity of probes similarly, the simplest probes tend to stand out more distinctly under the "selectivity" criterion (Hewitt and Liang, 2019), as shown in Figures 2, 3, and 4.

Smaller probes are not necessarily better

We find that while smaller probes have higher "selectivity" and "information gain" for mBERT representations, probes with one hidden layer and 40-80 hidden neurons are better than simpler probes, as shown in Figures 5, 6 and 7. The plots show consistency between the two criteria. For example, larger models with more layers do not necessarily lead to better results, and neither do the smallest probes with 0 hidden layers.

Note that we also swept hyperparameters for FastText, where probes with fewer parameters do not always outperform more complex probes in accuracy, loss, selectivity, or information gain. Figures 8 and 9 illustrate these observations.

Appendix C Reproducibility

On a T4 GPU card, training one epoch takes around 20 seconds. Without setting maximum gradient steps, experiments generally finish within 400 epochs. We will open-source our code.

Figure 1: Max gradient step vs. accuracy, t_acc, f_acc (blue) and loss, t_ent, f_ent (green) on English. The "t" refers to the control task (Hewitt and Liang, 2019), and "f" refers to the control function (Pimentel et al., 2020). In this set of experiments, we use the best learning rate and zero weight decay in each configuration.
Figure 2: The “difference of accuracy” (Hewitt and Liang, 2019) and the “difference of loss” (Pimentel et al., 2020) criteria against weight decay on model configurations, on UD English. For each configuration, the learning rate leading to the highest accuracy is selected.
Figure 3: The “difference of accuracy” (Hewitt and Liang, 2019) and the “difference of loss” (Pimentel et al., 2020) criteria against weight decay on model configurations, on UD French. For each configuration, the learning rate leading to the highest accuracy is selected.
Figure 4: The “difference of accuracy” (Hewitt and Liang, 2019) and the “difference of loss” (Pimentel et al., 2020) criteria against weight decay on model configurations, on UD Spanish. For each configuration, the learning rate leading to the highest accuracy is selected.
Figure 5: The “difference of accuracy” (Hewitt and Liang, 2019) and the “difference of loss” (Pimentel et al., 2020) criteria with different learning rates on model configurations, on UD English. The weight decay is set to 0.
Figure 6: The “difference of accuracy” (Hewitt and Liang, 2019) and the “difference of loss” (Pimentel et al., 2020) criteria with different learning rates on model configurations, on UD French. The weight decay is set to 0.
Figure 7: The “difference of accuracy” (Hewitt and Liang, 2019) and the “difference of loss” (Pimentel et al., 2020) criteria with different learning rates on model configurations, on UD Spanish. The weight decay is set to 0.
Figure 8: The accuracy and cross entropy loss of probes on FastText. These performances are much worse than those on mBERT, indicating the richness of the contextual information encoded in mBERT.
Figure 9: The selectivity (Hewitt and Liang, 2019) and information gain (Pimentel et al., 2020) of probes on FastText. Probes with different capacities are ranked similarly using these two criteria.