The success of self-supervised pretrained models in NLP (Devlin et al., 2019; Peters et al., 2018a; Radford et al., 2019; Lan et al., 2020) has stimulated interest in how these models work and, motivated by their strong performance on many tasks (Wang et al., 2018, 2019), what they learn about language. Recent work on model analysis (Belinkov and Glass, 2019) indicates that they may learn a lot about linguistic structure, including part of speech (Belinkov et al., 2017a), syntax (Blevins et al., 2018; Marvin and Linzen, 2018), word sense (Peters et al., 2018a; Reif et al., 2019), and more (Tenney et al., 2019; Liu et al., 2019).
Many of these results are based on predictive methods, such as probing, which measure how well a linguistic variable can be predicted from intermediate representations. However, the ability of supervised probes to fit weak features makes it difficult to find unbiased answers about how those representations are structured (Saphra and Lopez, 2019; Voita et al., 2019). Descriptive methods like clustering and visualization explore this structure directly, but provide limited control and often regress to dominant categories such as lexical features (Singh et al., 2019) or word sense (Reif et al., 2019). This leaves open many questions: how are linguistic features like entity types, syntactic dependencies, or semantic roles represented by an encoder like ELMo (Peters et al., 2018a) or BERT (Devlin et al., 2019)? To what extent do familiar categories like PropBank roles or Universal Dependencies appear naturally? Do these unsupervised encoders learn their own categorization of language?
To tackle these questions, we propose a systematic method for extracting latent ontologies, or discrete categorizations of a representation space, which we call latent subclass learning; see Figure 1 for an overview. In LSL, we use a binary classification task (such as detecting entity mentions or syntactic dependency arcs) as weak supervision to induce a set of latent clusters relevant to that task (i.e., entity or dependency types). As with predictive methods, the choice of targets allows us to explore different phenomena, and the induced clusters can be quantified and measured against gold annotations. But also, as with descriptive methods, our clusters can be inspected and qualified directly, and observations have high specificity: agreement with external (e.g., gold) categories can be taken as strong evidence that those categories are salient in the representation space.
We describe the LSL classifier in Section 3, and apply it to the edge probing paradigm (Tenney et al., 2019) in Section 4. In Section 5 we evaluate LSL on multiple encoders, including ELMo and BERT. We find that LSL induces stable and consistent ontologies, which include both striking rediscoveries of gold categories (for example, ELMo discovers personhood of named entities, and BERT similarly has a notion of dates) and novel ontological distinctions (such as fine-grained semantic roles for core arguments) which are not easily observed by fully supervised probes. Overall, we find new evidence of emergent latent structure in our encoders, while also revealing new properties of their representations which are inaccessible to earlier methods.
A common form of model analysis is predictive: assessing how well a linguistic variable can be predicted from a model, whether in intrinsic behavioral tests (Goldberg, 2019; Marvin and Linzen, 2018) or extrinsic probing tasks.
Probing involves training lightweight classifiers over features produced by a pretrained model, and assessing the model’s knowledge by the probe’s performance. Probing has been used for low-level properties such as word order and sentence length (Adi et al., 2017; Conneau et al., 2018), as well as phenomena at the level of syntax (Hewitt and Manning, 2019), semantics (Tenney et al., 2019; Liu et al., 2019; Clark et al., 2019), and discourse structure (Chen et al., 2019). Error analysis on probes has been used to argue that BERT may simulate sequential decision making across layers (Tenney et al., 2019), or that it encodes its own, soft notion of syntactic distance (Reif et al., 2019).
Predictive methods such as probing are flexible: Any task with data can be assessed. However, they only track predictability of pre-defined categories, limiting their descriptive power. In addition, a powerful enough probe, given enough data, may be insensitive to differences between encoders, making it difficult to interpret results based on accuracy (Saphra and Lopez, 2019; Zhang and Bowman, 2018). So, many probing experiments appeal to the ease of extraction of a linguistic variable (Pimentel et al., 2020). Existing work has measured this by controlling for the capacity of the probe, either by making relative claims between layers and encoders (Belinkov et al., 2017b; Blevins et al., 2018; Tenney et al., 2019; Liu et al., 2019)
or by using explicit measures to estimate and trade off probe capacity against accuracy (Hewitt and Liang, 2019; Voita and Titov, 2020). An alternative is to control the amount of supervision, whether by restricting training set size (Zhang and Bowman, 2018), comparing learning curves (Talmor et al., 2019), or using description length with online coding (Voita and Titov, 2020).
We extend this further by removing the distinction between gold categories in the training data and reducing the supervision to binary classification, as explained in Section 3. This extreme measure gives our test high specificity, in the sense that positive results, i.e., when comprehensible categories are recovered by our probe, are much stronger evidence, since a category must essentially be invented without direct supervision.
In contrast to predictive methods, which assess an encoder against particular data, descriptive methods analyze models on their own terms, and include clustering, visualization (Reif et al., 2019), and correlation analysis techniques (Voita et al., 2019; Saphra and Lopez, 2019; Abnar et al., 2019; Chrupała and Alishahi, 2019). Descriptive methods produce high-specificity tests of what structure is present in the model, and facilitate discovery of new patterns that weren’t hypothesized prior to testing. However, they lack the flexibility of predictive methods. Clustering results tend to be dominated by principal components of the embedding space, which correspond to only some salient aspects of linguistic knowledge, such as lexical features (Singh et al., 2019) and word sense (Reif et al., 2019). Alternatively, more targeted latent variable analysis techniques generally have a restricted inventory of inputs, such as layer mixing weights (Peters et al., 2018b) or transformer attention distributions (Clark et al., 2019). As a result of these issues, it is more difficult to discover the underlying structure corresponding to rich, layered ontologies. Our approach retains the advantages of descriptive methods, while admitting more control as the choice of binary classification targets can guide the LSL model to discover structure relevant to a particular linguistic task.
Questions of what encoders learn about language require well-defined linguistic ontologies, or meaningful categorizations of inputs, to evaluate against. Most analysis work uses formalisms from the classical NLP pipeline, such as part-of-speech and syntax from the Penn Treebank (Marcus et al., 1993) or Universal Dependencies (Nivre et al., 2015), semantic roles from PropBank (Palmer et al., 2005) or Dowty (1991)’s Proto-Roles (Reisinger et al., 2015), and named entities, which have a variety of available ontologies (Pradhan et al., 2007; Ling and Weld, 2012; Choi et al., 2018). Work on ontology-free, or open, representations suggests that the linguistic structure captured by traditional ontologies may be encoded in a variety of possible ways (Banko et al., 2007; He et al., 2015; Michael et al., 2018) while being annotatable at large scale (Fitzgerald et al., 2018). This raises the question: when looking for linguistic knowledge in pretrained encoders, what exactly should we expect to find? Predictive methods are useful for fitting an encoder to an existing ontology; but do our encoders latently hold their own ontologies as well? If so, what do they look like? That is the question we investigate in this work.
We propose a way to extract latent linguistic ontologies from pretrained encoders and systematically compare them to existing gold ontologies. We use a classifier based on latent subclass learning (Section 3.1), which is applicable in any binary classification setting. (A similar classifier was concurrently developed and presented for use in model distillation by Müller et al., 2020.) We propose several quantitative metrics to evaluate the induced ontologies (Section 3.2), providing a starting point for qualitative analysis (Section 5) and future research.
3.1 Latent Subclass Learning
Consider a logistic regression classifier over inputs $x \in \mathbb{R}^d$. It outputs probabilities according to the following formula:

$$p(y = 1 \mid x) = \frac{\exp(w^\top x)}{1 + \exp(w^\top x)},$$

where $w \in \mathbb{R}^d$ is a learned parameter. Instead, we propose the latent subclass learning classifier:

$$p(y = 1 \mid x) = \frac{\sum_{i=1}^{k} \exp(w_i^\top x)}{1 + \sum_{i=1}^{k} \exp(w_i^\top x)},$$

where $W \in \mathbb{R}^{k \times d}$ is a parameter matrix with rows $w_i$, and $k$ is a hyperparameter corresponding to the number of latent classes.

This corresponds to $(k{+}1)$-way multiclass logistic regression with a fixed 0 logit for a null class, but trained on binary classification by marginalizing over the $k$ non-null classes (Figure 1). The vector $Wx$ may then be treated as a set of latent logits for a random variable $c_x$ defined by the softmax distribution $p(c_x = i \mid x) \propto \exp(w_i^\top x)$. Taking the hard maximum $\hat{c}_x = \arg\max_i (Wx)_i$ assigns a latent class to each input, which may be viewed as a weakly supervised clustering: learned on the basis of external supervision, but not explicitly optimized to match prior gold categories.
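As a concrete illustration, the LSL forward pass can be sketched in a few lines of NumPy (the function name is ours; in the probing setup, this replaces the final binary classification layer):

```python
import numpy as np

def lsl_forward(x, W):
    """Latent subclass learning classifier, sketched.

    x: (d,) input features; W: (k, d) latent-class weight matrix.
    Returns (p_pos, posterior): p_pos marginalizes the k non-null
    classes against a null class with fixed logit 0, and posterior
    is the softmax distribution over the k latent classes.
    """
    z = W @ x                        # latent logits, one per subclass
    m = max(z.max(), 0.0)            # shift for numerical stability
    exp_z = np.exp(z - m)
    null = np.exp(-m)                # null class contributes exp(0 - m)
    p_pos = exp_z.sum() / (null + exp_z.sum())
    posterior = exp_z / exp_z.sum()  # softmax over latent subclasses
    return p_pos, posterior
```

With $k = 1$ this reduces exactly to binary logistic regression; the hard cluster assignment is simply the argmax of the latent logits.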
For the loss $\ell_{\text{xent}}$, we use the cross-entropy loss on $p(y \mid x)$. However, this does not necessarily encourage a diverse, coherent set of clusters; an LSL classifier may simply choose to collapse all examples into a single category, producing an uninteresting ontology. To mitigate this, we propose two clustering regularizers.
Adjusted batch-level negative entropy
We wish for the model to induce a diverse ontology. One way to express this is that the expectation of $c_x$ has high entropy, i.e., we wish to maximize

$$H\big(\mathbb{E}_x[\,p(c_x \mid x)\,]\big).$$

In practice, we use the expectation over a batch. The maximum value this can take is the entropy of the uniform distribution over $k$ items, or $\log k$. Therefore, we wish to minimize the adjusted batch-level negative entropy loss:

$$\ell_{be} = 1 - \frac{H\big(\mathbb{E}_x[\,p(c_x \mid x)\,]\big)}{\log k},$$

which takes values in $[0, 1]$.
In addition to using all latent classes in the expected case, we also wish for the model to assign a single coherent class label to each input example. This can be done by minimizing the instance-level entropy loss:

$$\ell_{ie} = \frac{\mathbb{E}_x\big[\,H(p(c_x \mid x))\,\big]}{\log k}.$$

This also takes values in $[0, 1]$, and we compute the expectation over a batch.
We optimize the regularized LSL loss

$$\ell = \ell_{\text{xent}} + \lambda_{be}\,\ell_{be} + \lambda_{ie}\,\ell_{ie},$$

where $\lambda_{be}$ and $\lambda_{ie}$ are hyperparameters, via gradient descent. Together, the regularizers encourage a balanced solution where the model uses many clusters yet gives each input a distinct assignment.
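Under these definitions, the two regularizers can be sketched as follows (a NumPy illustration with hypothetical names, not the exact training code, which would operate on differentiable tensors):

```python
import numpy as np

def lsl_regularizers(posteriors):
    """Clustering regularizers for LSL, sketched.

    posteriors: (B, k) array of softmax distributions over latent
    classes for a batch. Returns (l_be, l_ie), both in [0, 1].
    """
    B, k = posteriors.shape
    eps = 1e-12
    # Adjusted batch-level negative entropy: 1 - H(mean posterior) / log k.
    # Low when the batch uses the latent classes evenly.
    mean_p = posteriors.mean(axis=0)
    h_batch = -(mean_p * np.log(mean_p + eps)).sum()
    l_be = 1.0 - h_batch / np.log(k)
    # Instance-level entropy: E[H(posterior)] / log k.
    # Low when each input is assigned one class with confidence.
    h_inst = -(posteriors * np.log(posteriors + eps)).sum(axis=1).mean()
    l_ie = h_inst / np.log(k)
    return l_be, l_ie
```

A batch of confident, balanced one-hot posteriors drives both terms toward 0; collapsing every input onto one class drives $\ell_{be}$ toward 1, while fully uniform posteriors drive $\ell_{ie}$ toward 1.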
Table 1: Clustering regularizer ablations on Named Entities (NE) and Universal Dependencies (UD).

| Model | P / R / F1 (NE) | Acc. (NE) | Div. (NE) | Unc. (NE) | P / R / F1 (UD) | Acc. (UD) | Div. (UD) | Unc. (UD) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Gold | 1.0 / 1.0 / 1.0 | 1.0 | 9.71 | 1.00 | 1.0 / 1.0 / 1.0 | 1.0 | 22.91 | 1.00 |
| Multi | .86 / .88 / .87 | .94 | 8.58 | 1.88 | .86 / .83 / .84 | .93 | 21.94 | 1.77 |
| LSL | .28 / .80 / .41 | .96 | 2.85 | 1.45 | .10 / .60 / .18 | .94 | 3.50 | 2.07 |
| +be | .20 / .43 / .27 | .96 | 4.78 | 31.23 | .18 / .13 / .15 | .94 | 29.83 | 12.33 |
| +ie | .13 / 1.0 / .23 | .93 | 1.00 | 1.00 | .09 / .79 / .15 | .94 | 2.00 | 1.01 |
| +be +ie | .43 / .54 / .48 | .88 | 7.00 | 1.10 | .18 / .27 / .22 | .86 | 14.96 | 1.35 |
| Single | .13 / 1.0 / .23 | - | 1.00 | 1.00 | .06 / 1.0 / .11 | - | 1.00 | 1.00 |
For the following metrics, we consider only points in the gold positive class.
We compare induced ontologies to gold using the standard B-cubed (B³) clustering metrics (Bagga and Baldwin, 1998). For each input point, this calculates the precision and recall of its predicted cluster against its gold cluster. These values are averaged over all points for aggregate scoring. B³ is argued to have favorable properties (Amigó et al., 2009) and allows for label-wise scoring by restricting to points with specific gold labels.
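A minimal implementation of the point-wise B³ scores might look like this (helper name is ours):

```python
from collections import Counter

def b_cubed(pred, gold):
    """B-cubed precision/recall/F1 over cluster assignments, sketched.

    pred, gold: parallel sequences of cluster labels, one per point.
    For each point, precision is the fraction of its predicted cluster
    sharing its gold label; recall is the fraction of its gold cluster
    sharing its predicted label. Scores are averaged over points.
    """
    n = len(pred)
    pred_sizes = Counter(pred)
    gold_sizes = Counter(gold)
    joint = Counter(zip(pred, gold))
    p = sum(joint[(pc, gc)] / pred_sizes[pc] for pc, gc in zip(pred, gold)) / n
    r = sum(joint[(pc, gc)] / gold_sizes[gc] for pc, gc in zip(pred, gold)) / n
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```

Label-wise scoring restricts the averages to points carrying a specific gold label.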
Pointwise mutual information (PMI) is commonly used as an association measure reflecting how likely two items (such as tokens in a corpus) are to occur together relative to chance (Church and Hanks, 1990). Normalized PMI (nPMI; Bouma, 2009) is a way of factoring out the effect of item frequency on PMI. Formally, the nPMI of two items $a$ and $b$ is

$$\mathrm{nPMI}(a, b) = \frac{\log \frac{p(a, b)}{p(a)\,p(b)}}{-\log p(a, b)},$$

taking the limit value of $-1$ when they never occur together, $1$ when they only occur together, and $0$ when they occur independently. We use nPMI to analyze the co-occurrence of gold labels in predicted clusters: high-nPMI pairs are preferentially grouped together by the induced ontology, whereas low-nPMI pairs are preferentially distinguished.
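The definition translates directly into code (function name is ours):

```python
import math

def npmi(p_a, p_b, p_ab):
    """Normalized PMI (Bouma, 2009): pmi(a, b) / -log p(a, b).

    Takes marginal probabilities p_a, p_b and joint p_ab; returns a
    value in [-1, 1]: -1 when the items never co-occur, 0 under
    independence, and 1 when they only occur together.
    """
    if p_ab == 0.0:
        return -1.0  # limit value: the pair never co-occurs
    pmi = math.log(p_ab / (p_a * p_b))
    return pmi / (-math.log(p_ab))
```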
We desire fine-grained ontologies with many meaningful classes. The number of attested classes may not be a good measure of this, since it could include classes with very few members and no broad meaning. So we propose diversity:

$$\mathrm{Div} = \exp H\big(\mathbb{E}_x[\,\hat{c}_x\,]\big),$$

where the hard assignment $\hat{c}_x$ is treated as a one-hot distribution. This increases as the clustering becomes more fine-grained and evenly distributed, with a maximum of $k$ when $\mathbb{E}_x[\hat{c}_x]$ is uniform. More generally, exponentiated entropy is sometimes referred to as the perplexity of a distribution, and corresponds (softly) to the number of classes required for a uniform distribution of the same entropy. In that sense, it may be regarded as the effective number of classes in an ontology. We use the predicted class $\hat{c}_x$ rather than its distribution $p(c_x \mid x)$ because we care about the diversity of the model's clustering, and not just uncertainty in the model.
In order for our learned classes to be meaningful, we desire distinct and coherent clusters. To measure this, we propose uncertainty:

$$\mathrm{Unc} = \mathbb{E}_x\big[\exp H(p(c_x \mid x))\big].$$

This is also related to perplexity, but unlike diversity, it takes the expectation over the input after calculating the perplexity of the distribution. This reflects how many classes, on average, the model is confused between when provided with an input. Low values correspond to coherent clusters, with a minimum of 1 when every latent class is assigned with full confidence. As with diversity, we take the expectation over the evaluation set.
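The two metrics differ only in where the exponentiation happens, which the following sketch makes explicit (function names are ours):

```python
import numpy as np

def diversity(posteriors):
    """Exponentiated entropy of the hard-assignment distribution:
    a soft count of the effective number of classes used."""
    hard = posteriors.argmax(axis=1)
    counts = np.bincount(hard, minlength=posteriors.shape[1])
    freq = counts / counts.sum()
    nz = freq[freq > 0]  # 0 * log 0 taken as 0
    return float(np.exp(-(nz * np.log(nz)).sum()))

def uncertainty(posteriors):
    """Mean per-instance perplexity: how many classes the model is
    confused between, on average, for a single input."""
    eps = 1e-12
    h = -(posteriors * np.log(posteriors + eps)).sum(axis=1)
    return float(np.exp(h).mean())
```

Note that a batch of uniform posteriors has maximal uncertainty ($k$) but minimal diversity (1, since every argmax lands on the same class), while confident one-hot assignments spread over all classes give diversity $k$ and uncertainty 1.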
Table 2: Aggregate B³ scores against gold and diversity per task, for the lexical baseline (BERT-lex), ELMo, and BERT, with gold diversity in the final column.

| Task | BERT-lex P / R / F1 | Div. | ELMo P / R / F1 | Div. | BERT P / R / F1 | Div. | Gold Div. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Dependencies | .06 / .86 / .11 | 1.33 | .23 / .42 / .29 | 11.11 | .14 / .33 / .19 | 11.22 | 22.91 |
| Named Entities | .19 / .39 / .26 | 4.33 | .40 / .66 / .50 | 5.07 | .47 / .53 / .50 | 7.50 | 9.71 |
| Nonterminals | .22 / .80 / .34 | 1.47 | .36 / .25 / .30 | 10.16 | .35 / .34 / .35 | 7.80 | 7.15 |
| Semantic Roles | .19 / .39 / .26 | 2.81 | .40 / .17 / .24 | 22.35 | .37 / .17 / .24 | 18.70 | 8.73 |
4 Experimental Setup
We adopt a similar setup to Tenney et al. (2019) and Liu et al. (2019), training probing models over several contextualizing encoders on a variety of linguistic tasks. While our interest is in linguistic structure, our model can be used in any binary classification setting, and our analysis methods apply in any case that finer-grained labels are present to compare against.
We cast several structure labeling tasks from Tenney et al. (2019) as binary classification by adding negative examples, bringing the positive to negative ratio to 1:1 where possible.
Named entity labeling requires labeling noun phrases with entity types, such as person, location, date, or time. We randomly sample non-entity noun phrases as negatives.
Nonterminal labeling requires labeling phrase structure constituents with syntactic types, such as noun phrases and verb phrases. We randomly sample non-constituent spans as negatives.
Syntactic dependency labeling requires labeling token pairs with their syntactic relationship, such as a subject, direct object, or modifier. We randomly sample non-attached token pairs as negatives.
Semantic role labeling requires labeling predicates (usually verbs) and their arguments (usually syntactic constituents) with labels that abstract over syntactic relationships in favor of more semantic notions such as agent, patient, modifier roles involving e.g. time and place, or predicate-specific roles. We draw the closest non-attached predicate-argument pairs as negatives.
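The binary casting described above can be sketched roughly as follows, assuming spans (or span pairs) are hashable tuples. This is our own simplification with uniform negative sampling; as noted above, semantic roles instead draw the closest non-attached pairs:

```python
import random

def make_binary_examples(labeled_spans, candidate_spans, seed=0):
    """Cast a structure-labeling task as binary classification.

    labeled_spans: list of (span, label) gold annotations; the labels
    are discarded, leaving only positive/negative supervision.
    candidate_spans: pool of spans from which negatives are sampled,
    at a 1:1 ratio with positives where possible.
    """
    rng = random.Random(seed)
    gold = {span for span, _label in labeled_spans}
    positives = [(span, 1) for span in gold]
    pool = [s for s in candidate_spans if s not in gold]
    k = min(len(positives), len(pool))
    negatives = [(span, 0) for span in rng.sample(pool, k)]
    return positives + negatives
```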
We run experiments on the following encoders:
ELMo encodes input tokens with 2-layer LSTMs (Hochreiter and Schmidhuber, 1997) run forward and backward over the text, trained with a language modeling objective (Peters et al., 2018a). We use the publicly available instance (tfhub.dev/google/elmo/2) trained on the One Billion Word Benchmark (Chelba et al., 2014).
BERT uses a deep Transformer stack (Vaswani et al., 2017) trained on masked language modeling and next sentence prediction tasks (Devlin et al., 2019). We use the 24-layer BERT-large instance (github.com/google-research/bert; uncased_L-24_H-1024_A-16) trained on about 2.3B tokens from English Wikipedia and BooksCorpus (Zhu et al., 2015).
BERT-lex is a lexical baseline, encoding inputs with BERT-large’s context-independent wordpiece embedding layer.
4.3 Probing Model
We use the model architecture of Tenney et al. (2019), which classifies arbitrary spans or pairs of spans by leveraging pretrained encoders in the following way: 1) construct token representations by pooling across encoder layers with a learned scalar mix (Peters et al., 2018a), 2) construct span representations from these token representations using self-attentive pooling (Lee et al., 2017), and 3) concatenate those span representations and feed the result into a multi-layer perceptron to produce input features for the classification layer. This architecture allows for a unified model for all probing tasks and simplifies our experiments. For the classification layer, we use the LSL classifier (Section 3).
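Steps 1) and 2) can be sketched as follows (a NumPy illustration of the pooling operations, with our own function names; the learned parameters are trained jointly with the probe):

```python
import numpy as np

def scalar_mix(layers, weights, gamma=1.0):
    """Pool across encoder layers with a learned scalar mix
    (Peters et al., 2018a): softmax-normalized per-layer weights
    and a global scale gamma."""
    w = np.exp(weights - weights.max())
    w /= w.sum()
    return gamma * sum(wi * layer for wi, layer in zip(w, layers))

def self_attentive_span(tokens, attn_vec):
    """Self-attentive span pooling (Lee et al., 2017), sketched:
    attention logits come from a learned vector, softmaxed over
    the tokens in the span, giving a weighted average."""
    logits = tokens @ attn_vec
    a = np.exp(logits - logits.max())
    a /= a.sum()
    return a @ tokens
```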
4.4 Model selection
We run initial studies to determine hidden layer sizes and regularization coefficients. For all LSL probes, we use a fixed number of latent classes $k$. (Preliminary experiments found similar results for larger $k$, with similar diversity in the full setting.)
Hewitt and Liang (2019) suggest that results with expressive probes may reflect the probe’s learning capacity rather than structure encoded in the inputs. To mitigate this, we follow their advice and use a single hidden layer with the smallest dimensionality that does not sacrifice performance. For each task, we train binary logistic regression probes with a range of hidden sizes and select the smallest yielding at least 97% of the best model’s performance. Details are in Appendix A.
To mitigate variance across random restarts, we use a consistency-based model selection criterion: train 5 separate models, compute their pairwise B3 F1-scores, and choose the model with the highest F1 score on average.
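This selection criterion can be sketched as follows (a self-contained illustration with our own names, using B³ F1 as the pairwise agreement measure):

```python
from collections import Counter
from itertools import combinations

def b3_f1(a, b):
    """B-cubed F1 between two clusterings of the same points."""
    n = len(a)
    ca, cb, cab = Counter(a), Counter(b), Counter(zip(a, b))
    p = sum(cab[(x, y)] / ca[x] for x, y in zip(a, b)) / n
    r = sum(cab[(x, y)] / cb[y] for x, y in zip(a, b)) / n
    return 2 * p * r / (p + r) if p + r else 0.0

def most_consistent(runs):
    """Consistency-based model selection: among cluster assignments
    from random restarts, pick the run with the highest mean
    pairwise B-cubed F1 against the other runs."""
    scores = [0.0] * len(runs)
    for i, j in combinations(range(len(runs)), 2):
        f = b3_f1(runs[i], runs[j])
        scores[i] += f
        scores[j] += f
    return max(range(len(runs)), key=lambda i: scores[i])
```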
We run preliminary experiments using BERT-large on Universal Dependencies and Named Entity Labeling with ablations on our clustering regularizers. For each ablation, we choose the hyperparameter setting which yields the best F1 against gold.
Results, shown in Table 1, validate our intuitions about the clustering regularizers. The batch-level entropy loss drives up both diversity and uncertainty, while the instance-level entropy loss drives them down. In combination, however, they produce the right balance, with uncertainty close to 1 while retaining diversity.
Notably, the Named Entity labeling model has lower diversity without the instance-level loss than with it. Intuitively, this may happen because the batch-level entropy can be increased by driving up instance-level entropy, without changing the entropy of the expected distribution of hard predictions $\mathbb{E}_x[\hat{c}_x]$. So by keeping the uncertainty down on each input, the instance-level entropy loss helps the batch-level entropy loss promote diversity in the induced ontology.
Based on these results, we fix the values of $\lambda_{be}$ and $\lambda_{ie}$ for the main experiments.
5 Results and Analysis
We train and evaluate our final probing model on all combinations of task and encoder described in Section 4. Aggregate results are shown in Table 2. (Results for more tasks and encoders are in Appendix B.) Taking all metrics into account, contextualized encodings produce richer ontologies that agree more with gold than the lexical baseline does. In fact, BERT-lex has normalized PMI scores very close to zero across the board, encoding virtually no information about gold categories. For this reason, we omit it from the rest of the analysis.
| Gold Label | BERT F1 | ELMo F1 |
| --- | --- | --- |
It may be surprising that our induced ontologies have any relationship at all to gold classes, since the only extra supervision is in binary classification that collapses them together. Indeed, many tasks addressed here have multiple human-written ontologies, as discussed in Section 2. In our case, we let the model choose its own ontology. The resulting matches and mismatches with human-labeled ontologies provide a new lens with which to analyze both pretrained encoders and linguistic ontologies.
As shown in Table 3, neither BERT nor ELMo is sensitive to categories that are related to specialized world knowledge, such as languages, laws, and events. However, they are in tune with other types: ELMo discovers a clear PERSON category, whereas BERT distinguishes DATEs. Visualization of the clusters (Figure 2) corroborates this, and furthermore shows that the models have a sense of scalar values and measurement; indeed, instead of the gold distinction between ORDINAL and CARDINAL numbers, both models distinguish between small and large (roughly, seven or greater) numbers. See Appendix C for detailed nPMI scores.
| Gold Label | P / R / F1 |
| --- | --- |
| ARGM-MOD | .62 / .41 / .49 |
| ARG0 | .52 / .17 / .26 |
| ARG1 | .50 / .09 / .15 |
| ARGM-NEG | .36 / .60 / .45 |
| ARG2 | .28 / .13 / .18 |
Patterns in nPMI (a) suggest basic syntactic notions: complete clauses (S, TOP, SINV) form a group, as do phrase types which take subjects (SBAR, VP, PP), and wh-phrases (WHADVP, WHPP, WHNP).
Patterns in nPMI (b) indicate several salient groups: verb arguments (nsubj, obj, obl, xcomp), left-heads (det, nmod:poss, compound, amod, case), right-heads (acl, acl:relcl, and nmod, which is often the object in a prepositional phrase modifying a noun), and punct.
Patterns in nPMI (c) roughly match intuition: primary core arguments (ARG0, ARG1) are distinguished, as well as modals (ARGM-MOD) and negation (ARGM-NEG), while trailing arguments (ARG2–5) and modifiers (ARGM-TMP, LOC, etc.) form a large group. On one hand, this reflects surface patterns: primary core arguments tend to be close to the verb, with ARG0 on the left and ARG1 on the right; trailing arguments and modifiers tend to be prepositional phrases or subordinate clauses; and modals and negation are identified by lexical and positional cues. On the other hand, this also reflects error patterns in state-of-the-art systems, where label errors can sometimes be traced to ontological choices in PropBank, which distinguish between arguments and adjuncts that have very similar meaning (He et al., 2017; Kingsbury et al., 2002).
While the number of induced classes roughly matches gold for most tasks, induced ontologies for semantic roles are considerably more diverse (Table 2). Among high-precision labels (Table 4), core arguments ARG0–2 are split apart most by the model. This follows intuition for PropBank core argument labels, which have predicate-specific meanings. Other approaches based on Frame Semantics (Baker et al., 1998; Fillmore, 2006), Proto-Roles (Dowty, 1991; Reisinger et al., 2015), or Levin classes (Levin, 1993; Schuler, 2005)
have more explicit fine-grained roles. Comparison with these frameworks and investigation of learned clusters could be informative for future work on ontology design or unsupervised learning.
Our exploration of latent ontologies has yielded some surprising results: ELMo knows people, BERT knows dates, and both sense scalar and measurable values, while distinguishing between small and large numbers. Both models preferentially split core semantic roles into many fine-grained categories, and seem to encode broad notions of syntactic and semantic structure. These findings contrast with those from fully-supervised probes, which produce strong agreement with existing annotations (Tenney et al., 2019) but can also report false positives by fitting to weak patterns in large feature spaces (Zhang and Bowman, 2018; Voita and Titov, 2020). Instead, agreement of latent categories with known concepts can be taken as strong evidence that these concepts are present as important, salient features in the representation space.
This issue is particularly important when looking for deep, inherent understanding of linguistic structure, which by nature must generalize. For supervised systems, generalization is often measured by out-of-distribution objectives like out-of-domain performance (Ganin et al., 2016), transferability (Wang et al., 2018), or robustness to adversarial inputs (Jia and Liang, 2017). Recent work also advocates for counterfactual learning and evaluation (Qin et al., 2019; Kaushik et al., 2020) to mitigate confounds, or contrastive evaluation sets Gardner et al. (2020) to rigorously test local decision boundaries. Overall, these techniques target discrepancies between salient features in a model and causal relationships in a task. In this work, we extract such features directly and investigate them by comparing induced and gold ontologies. This identifies some very strong cases of transferability from the binary detection task to detection tasks over gold subcategories, such as ELMo’s people and BERT’s dates (Table 3). Future work may investigate cross-task ontology matching to identify further cases of transferable features, or perhaps the emergence of categories signifying pipelined reasoning (Tenney et al., 2019), surface patterns, or new, perhaps unexpected distinctions which can appear when going beyond existing schemas (Michael et al., 2018).
Our results point to a general paradigm of probing with latent variables, for which LSL is just one potential technique. We have only scratched the surface of what may emerge with such methods: while our probing test is high specificity, it is low power; plenty of extant latent structure may still be missed. LSL probing may produce different ontologies due to many factors, such as tokenization (Singh et al., 2019), encoder architecture (Peters et al., 2018b), probe architecture (Hewitt and Manning, 2019), data distribution (Gururangan et al., 2018), pretraining task (Liu et al., 2019; Wang et al., 2019), or pretraining checkpoint. Any of these factors may be at work in the differences we observe between ELMo and BERT: for example, BERT’s tokenization method may not as readily induce personhood features due to splitting of rare words (like names) in byte-pair encoding. Furthermore, concurrent work (Chi et al., 2020) has already found qualitative evidence of syntactic dependency types emergent in the special case of multilingual structural probes (Hewitt and Manning, 2019). With LSL, we provide a method that can be adapted to a variety of probing settings to both quantify and qualify this kind of structure.
We introduced a new classifier and model analysis method based on latent subclass learning: by factoring a binary classifier through a forced choice of latent subclasses, latent ontologies can be coaxed out of input features. Using this approach, we found that encoders such as BERT and ELMo hold stable, consistent latent ontologies on a variety of linguistic tasks. In these ontologies, we found clear connections to existing categories, such as personhood of named entities. We also found evidence of ontological distinctions beyond traditional gold categories, such as distinguishing large and small numbers, or preferring fine-grained semantic roles for core arguments. With latent subclass learning, we have shown a general technique to uncover some of these features discretely, providing a starting point for descriptive analysis of our models' latent ontologies. Potential future work may include investigating how LSL results vary with probe architecture, developing intrinsic quality measures on latent ontologies, or applying the technique to discover new patterns in settings where gold annotations are not present.
We would like to thank Tim Dozat, Kenton Lee, Emily Pitler, Kellie Webster, other members of the Google AI Language team, and Sewon Min, who all provided valuable feedback on this paper. We also thank Rafael Müller, Simon Kornblith, and Geoffrey Hinton for discussion on the LSL classifier.
Blackbox meets blackbox: representational similarity and stability analysis of neural language models and brains.
Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 191–203. External Links: Cited by: §2.
- Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. In International Conference on Learning Representations, External Links: Cited by: §2.
A comparison of extrinsic clustering evaluation metrics based on formal constraints. Information retrieval 12 (4), pp. 461–486. Cited by: §3.2.
- Entity-based cross-document core f erencing using the vector space model. In 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 1, Montreal, Quebec, Canada, pp. 79–85. External Links: Cited by: §3.2.
- The Berkeley FrameNet project. In Proceedings of the 17th international conference on Computational linguistics-Volume 1, pp. 86–90. Cited by: §5.
- Open information extraction from the web. In Proceedings of the 20th International Joint Conference on Artifical Intelligence, IJCAI’07, San Francisco, CA, USA, pp. 2670–2676. External Links: Cited by: §2.
What do neural machine translation models learn about morphology?. In Proceedings of EMNLP, Cited by: §1.
- Analysis methods in neural language processing: a survey. Transactions of the Association for Computational Linguistics (TACL) 7, pp. 49–72. External Links: Cited by: §1.
- Evaluating layers of representation in neural machine translation on part-of-speech and semantic tagging tasks. In Proceedings of IJCNLP, Cited by: §2.
- Deep RNNs encode soft hierarchical syntax. In Proceedings of ACL, Cited by: §1, §2.
- Normalized (pointwise) mutual information in collocation extraction. In GSCL, Cited by: §3.2.
- One billion word benchmark for measuring progress in statistical language modeling. In Proceedings of Interspeech, Cited by: §4.2.
Evaluation benchmarks and learning criteria for discourse-aware sentence representations.
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 649–662. External Links: Cited by: §2.
- Finding universal grammatical relations in multilingual bert. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Cited by: §6.
- Ultra-fine entity typing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia, pp. 87–96. External Links: Cited by: §2.
- Correlating neural and symbolic representations of language. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, pp. 2952–2962. External Links: Cited by: §2.
- Word association norms, mutual information, and lexicography. Computational Linguistics 16 (1), pp. 22–29. External Links: Cited by: §3.2.
- What does bert look at? an analysis of bert’s attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, Cited by: §2, §2.
- What you can cram into a single $&#* vector: probing sentence embeddings for linguistic properties. In Proceedings of ACL, Cited by: §2.
- BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 4171–4186. External Links: Cited by: §1, §1, §4.2.
- Thematic proto-roles and argument selection. Language 67, pp. 547–619. Cited by: §2, §5.
- Frame semantics. Cognitive linguistics: Basic readings 34, pp. 373–400. Cited by: §5.
- Large-scale QA-SRL parsing. In ACL, Cited by: §2.
- Domain-adversarial training of neural networks. The Journal of Machine Learning Research 17 (1), pp. 2096–2030. Cited by: §6.
- Evaluating NLP models via contrast sets. Cited by: §6.
- Assessing BERT's syntactic abilities. arXiv preprint arXiv:1901.05287. Cited by: §2.
- Annotation artifacts in natural language inference data. In Proceedings of NAACL, Vol. 2, pp. 107–112. Cited by: §6.
- Deep semantic role labeling: what works and what's next. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vancouver, Canada, pp. 473–483. Cited by: §5.
- Question-answer driven semantic role labeling: using natural language to annotate natural language. In EMNLP, Cited by: §2.
- Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 2733–2743. Cited by: §2, §4.4.
- A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 4129–4138. Cited by: §2, §6.
- Long short-term memory. Neural computation 9 (8), pp. 1735–1780. Cited by: §4.2.
- Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, pp. 2021–2031. Cited by: §6.
- Learning the difference that makes a difference with counterfactually-augmented data. In International Conference on Learning Representations, Cited by: §6.
- Adding semantic annotation to the Penn Treebank. In Proceedings of the Human Language Technology Conference, Cited by: §5.
- ALBERT: a lite BERT for self-supervised learning of language representations. In International Conference on Learning Representations, Cited by: Appendix B, §1.
- End-to-end neural coreference resolution. In Proceedings of EMNLP, Cited by: §4.3.
- English verb classes and alternations: a preliminary investigation. University of Chicago Press. Cited by: §5.
- Fine-grained entity recognition. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, AAAI'12, pp. 94–100. Cited by: §2.
- Linguistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 1073–1094. Cited by: §1, §2, §4, §6.
- Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, pp. 4487–4496. Cited by: §2.
- Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics 19 (2), pp. 313–330. Cited by: §2.
- Targeted syntactic evaluation of language models. In Proceedings of EMNLP, Cited by: §1, §2.
- UMAP: uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426. Cited by: Figure 2.
- Crowdsourcing question-answer meaning representations. In NAACL-HLT, Cited by: §2, §6.
- Subclass distillation. Cited by: footnote 1.
- Universal Dependencies 1.2. Note: LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University. Cited by: Appendix B, §2.
- The proposition bank: an annotated corpus of semantic roles. Computational linguistics 31 (1), pp. 71–106. Cited by: §2.
- Deep contextualized word representations. In Proceedings of NAACL, Cited by: §1, §1, §4.2, §4.3.
- Dissecting contextual word embeddings: architecture and representation. In Proceedings of EMNLP, Cited by: §2, §6.
- Information-theoretic probing for linguistic structure. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Cited by: §2.
- OntoNotes: a unified relational semantic representation. International Journal of Semantic Computing 1 (4), pp. 405–419. Cited by: Appendix B, §2.
- Counterfactual story reasoning and generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 5043–5053. Cited by: §6.
- Language models are unsupervised multitask learners. https://blog.openai.com/better-language-models. Cited by: §1.
- Visualizing and measuring the geometry of BERT. In Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Eds.), pp. 8592–8600. Cited by: §1, §2.
- Semantic proto-roles. Transactions of the Association for Computational Linguistics. Cited by: §2, §5.
- Understanding learning dynamics of language models with SVCCA. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 3257–3267. Cited by: §1, §2.
- VerbNet: a broad-coverage, comprehensive verb lexicon. Ph.D. Thesis, University of Pennsylvania. Cited by: §5.
- A gold standard dependency corpus for English. In Proceedings of the Ninth International Conference on Language Resources and Evaluation, Cited by: §4.1.
- BERT is not an interlingua and the bias of tokenization. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), Hong Kong, China, pp. 47–55. Cited by: §1, §2, §6.
- Embedding projector: interactive visualization and interpretation of embeddings. In NIPS 2016 Workshop on Interpretable Machine Learning in Complex Systems, Cited by: Figure 2.
- OLMpics – on what language model pre-training captures. Cited by: §2.
- BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, pp. 4593–4601. Cited by: §2, §6.
- What do you learn from context? probing for sentence structure in contextualized word representations. In International Conference on Learning Representations, Cited by: §1, §1, §2, §2, §4.1, §4.3, §4, §6.
- Attention is all you need. In Proceedings of NIPS, Cited by: §4.2.
- The bottom-up evolution of representations in the transformer: a study with machine translation and language modeling objectives. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 4387–4397. Cited by: §1, §2.
- Information-theoretic probing with minimum description length. Cited by: §2, §6.
- Can you tell me how to get past Sesame Street? Sentence-level pretraining beyond language modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, pp. 4465–4476. Cited by: §6.
- SuperGLUE: a multi-task benchmark and analysis platform for natural language understanding. In Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Eds.), pp. 3261–3275. Cited by: §1.
- GLUE: a multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 353–355. Cited by: §1, §6.
- OntoNotes release 5.0 LDC2013T19. Linguistic Data Consortium, Philadelphia, PA. Cited by: §4.1.
- Language modeling teaches you more than translation does: lessons learned through auxiliary syntactic task analysis. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 359–361. Cited by: §2, §6.
- Position-aware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017), pp. 35–45. Cited by: Appendix B.
- Aligning books and movies: towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE International Conference on Computer Vision, pp. 19–27. Cited by: §4.2.
Appendix A Probe capacity tuning
Results from hidden size tuning experiments are shown in Figure 4.
Appendix B Full Experimental Results
We ran experiments on several additional encoders and tasks. Extra tasks include undirected Universal Dependencies (Nivre et al., 2015), TAC relation classification (Zhang et al., 2017), and coreference on OntoNotes (Pradhan et al., 2007). Extra encoders include BERT-base, mBERT (https://github.com/google-research/bert/blob/master/multilingual.md), and ALBERT (Lan et al., 2020). Full results are shown in Tables 1–7.
Appendix C More Analysis Results