Neural networks trained on text alone, without explicit syntactic supervision, have been surprisingly successful at tasks that require sensitivity to sentence structure. The difficulty of interpreting the learned neural representations that underlie this success has motivated a range of analysis techniques, including diagnostic classifiers (Giulianelli et al., 2018; Conneau et al., 2018; Shi et al., 2016), visualization of individual neuron activations (Kádár et al., 2017; Qian et al., 2016), ablation of individual neurons or sets of neurons (Lakretz et al., 2019), and behavioral tests of generalization to infrequent or held-out syntactic structures (Linzen et al., 2016; Weber et al., 2018; McCoy et al., 2018); for reviews, see Belinkov and Glass (2019) and Alishahi et al. (2019).
This paper expands the toolkit of neural network analysis techniques by drawing on the syntactic priming paradigm, a central tool in psycholinguistics for analyzing human syntactic representations Bock (1986). This paradigm is based on the empirical finding that people tend to reuse syntactic structures that they have recently produced or encountered. For example, English provides two roughly equivalent ways to express a transfer event:
(1) a. The boy threw the ball to the dog.
    b. The boy threw the dog the ball.
When readers encounter one of these variants in a text more frequently than the other, they expect future transfer events to be expressed using the frequent construction rather than the infrequent one. For example, after reading sentences like (1a) (the prime), readers expect sentences like (2a), which shares its syntactic structure with the prime, to occur with greater likelihood than the alternative variant (2b), which does not (Wells et al., 2009).[1]

[1] Wells et al. (2009) measured priming effects for relative clauses, not dative constructions. For work on priming in production with dative constructions, see Kaschak et al. (2011).
(2) a. The lawyer sent the letter to the client.
    b. The lawyer sent the client the letter.
We use the priming paradigm to analyze neural network language models (LMs), systems that define a probability distribution over the n-th word of a sentence given the n-1 preceding words. Building on paradigms that determine whether an LM's expectations are consistent with the syntactic structure of the sentence (Linzen et al., 2016), we measure the extent to which an LM's expectation for a specific syntactic structure is affected by recent experience with related structures. We prime a fully trained model with a structure by adapting it to a small number of sentences containing that structure (van Schijndel and Linzen, 2018). We then measure the change in surprisal (negative log probability) after adaptation when the LM is tested either on sentences with the same structure or on sentences with different but related structures. The degree to which one structure primes another provides a graded similarity metric between the model's representations of those structures (cf. Branigan and Pickering, 2017), which allows us to investigate how the representations of sentences with these structures are organized.
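The two core quantities here, surprisal and the change in surprisal after adaptation (the adaptation effect), can be sketched in a few lines. The probability values below are illustrative, not from any real LM:

```python
import math

def surprisal(prob):
    """Surprisal in bits: the negative log probability of an event."""
    return -math.log2(prob)

def adaptation_effect(probs_before, probs_after):
    """Mean drop in per-sentence surprisal after adaptation.

    probs_before / probs_after: probabilities an LM assigns to the same
    test sentences before and after adaptation. A positive value means
    adaptation made the test sentences less surprising, i.e. the
    adapted-to structure primed them."""
    drops = [surprisal(b) - surprisal(a)
             for b, a in zip(probs_before, probs_after)]
    return sum(drops) / len(drops)

# Adaptation quadruples the probability of each test sentence,
# so surprisal drops by log2(4) = 2 bits on average:
print(adaptation_effect([1e-6, 2e-6], [4e-6, 8e-6]))  # approx. 2.0
```

A negative adaptation effect would instead indicate that adaptation made the test sentences harder to predict.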
As a case study, we applied this technique to investigate how recurrent neural network (RNN) LMs represent sentences with relative clauses (RCs). We found that the representations of these sentences are organized in a linguistically interpretable manner: sentences with a particular type of RC were most similar to other sentences with the same type of RC in the LMs’ representation space. Furthermore, sentences with different types of RCs were more similar to each other than to sentences without RCs. We demonstrate that the similarity between sentences was not driven merely by specific words that appeared in the sentence, suggesting that the LMs tracked abstract properties of the sentence. This ability to track abstract properties decreased as the training corpus size increased. Finally, we tested the hypothesis that LMs’ accuracy on agreement prediction (Marvin and Linzen, 2018) would increase with the LMs’ ability to track more abstract properties of the sentence, but did not find evidence for this hypothesis.
Table 1: The seven syntactic structures, illustrated with lexically matched example sentences.

Unreduced Object RC: The conspiracy that the employee welcomed divided the beautiful country.
Reduced Object RC: The conspiracy the employee welcomed divided the beautiful country.
Unreduced Passive RC: The conspiracy that was welcomed by the employee divided the beautiful country.
Reduced Passive RC: The conspiracy welcomed by the employee divided the beautiful country.
Active Subject RC: The employee that welcomed the conspiracy quickly searched the buildings.
PS/ORC-matched Coordination: The conspiracy welcomed the employee and divided the beautiful country.
ASRC-matched Coordination: The employee welcomed the conspiracy and quickly searched the buildings.
2 Background

2.1 Syntactic predictions in neural LMs
We build on paradigms that use LM probability estimates for words in a given context as a measure of the model’s sensitivity to the syntactic structure of the sentence (Linzen et al., 2016; Gulordava et al., 2018; Marvin and Linzen, 2018). If a language model assigns a higher probability to a verb form that agrees in number with the subject (the boy… writes) than to a verb form that does not (the boy… write), we can infer that the model encodes information about the agreement features of nouns and verbs (that is, the difference between singular and plural) and has correctly identified the subject that corresponds to this verb. This reasoning has been extended beyond subject-verb agreement to study whether the predictions of neural LMs are sensitive to a range of other syntactic dependencies, including negative polarity items (Jumelet and Hupkes, 2018), filler-gap dependencies (Wilcox et al., 2018) and reflexive pronoun binding (Futrell et al., 2019).
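The evaluation logic of these paradigms reduces to a forced choice between the two verb forms of a minimal pair. A minimal sketch (the probability pairs are invented for illustration):

```python
def agreement_accuracy(items):
    """items: (p_grammatical, p_ungrammatical) probability pairs that an
    LM assigns to the two verb forms of a minimal pair. A pair counts as
    correct when the grammatical form gets the higher probability."""
    return sum(p_gram > p_ungram for p_gram, p_ungram in items) / len(items)

# Hypothetical probabilities for four minimal pairs
# (e.g. 'the boy ... writes' vs. 'the boy ... write'):
items = [(0.030, 0.010), (0.002, 0.004), (0.600, 0.100), (0.050, 0.020)]
print(agreement_accuracy(items))  # 0.75
```

Chance performance on this metric is 0.5, since each item offers exactly two candidate forms.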
2.2 Syntactic priming in humans
Syntactic priming has been used to study whether the representations of two sentences have shared structure. For example, (1a) (repeated below as (3)) shares the structure [VP V NP PP] with (4a) but not with (4b).
(3) The boy threw the ball to the dog.
(4) a. The renowned chef made some wonderful pasta for the guest.
    b. The renowned chef made the guest some wonderful pasta.
If (3) primes (4a) more than it primes (4b), we can infer that the representation of (3) is more similar to that of (4a) than to that of (4b). Since (4a) and (4b) differ only in their structure, this difference in similarity must be driven by structural information in the representations of the sentences (for reviews, see Mahowald et al. 2016 and Tooley and Traxler 2010).
Although priming studies have traditionally measured the priming effect on the sentence immediately following the prime, more recent studies have demonstrated that the effects of syntactic priming can be cumulative and long-lasting: sentences with a shared structure become progressively easier to process when preceded by sentences with the same structure than when preceded by sentences with a different structure (Kaschak et al., 2011; Wells et al., 2009).[2] In conjunction with the finding that words consistent with a probable syntactic parse are easier to process than words consistent with less probable parses (Hale, 2001; Levy, 2008), the increased ease of processing in cumulative priming studies can be interpreted as evidence that, with increased exposure to a structure, participants begin to expect that structure with a greater probability (Chang et al., 2006).

[2] In studies looking at non-cumulative priming, …
Cumulative priming allows us to study how sentences are related to each other in the human (or LM) representation space in the same way that non-cumulative priming does: when participants (or LMs) are exposed to sentences with structure X, if there is a greater decrease in surprisal when they are tested on other sentences with X than when they are tested on sentences with a different structure Y, we can infer that the representations of sentences with X are more similar to each other than to the representations of sentences with Y.
2.3 LM adaptation as cumulative priming
van Schijndel and Linzen (2018) modeled cumulative priming in recurrent neural networks (RNNs) by adapting fully trained RNN LMs to new stimuli — i.e. taking a fully trained RNN LM and continuing to train it on a small set of sentences (cf. Grave et al. 2017; Krause et al. 2017; Chowdhury and Zamparelli 2019). They demonstrated that when an RNN LM was adapted to a small number of sentences with a shared syntactic structure, the surprisal for novel sentences with that structure decreased, enabling them to infer that the LM’s representations of sentences contained information about that structure.
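To make adaptation-as-continued-training concrete, here is a deliberately crude count-based stand-in for an RNN LM (all sentences, counts and the smoothing scheme are invented for illustration). "Adapting" the model is literally just more training on a handful of sentences, after which the surprisal of a related held-out sentence drops:

```python
import math
from collections import Counter

class BigramLM:
    """Toy add-one-smoothed bigram LM standing in for an RNN LM."""
    def __init__(self):
        self.bigrams = Counter()
        self.context = Counter()
        self.vocab = set()

    def train(self, sentences):
        """'Pretraining' and 'adaptation' are the same operation:
        continue accumulating counts from new sentences."""
        for sent in sentences:
            toks = ["<s>"] + sent.split()
            self.vocab.update(toks)
            for a, b in zip(toks, toks[1:]):
                self.bigrams[(a, b)] += 1
                self.context[a] += 1

    def mean_surprisal(self, sentence):
        """Mean per-word surprisal (bits) under add-one smoothing."""
        toks = ["<s>"] + sentence.split()
        v = len(self.vocab)
        total = 0.0
        for a, b in zip(toks, toks[1:]):
            p = (self.bigrams[(a, b)] + 1) / (self.context[a] + v)
            total -= math.log2(p)
        return total / (len(toks) - 1)

lm = BigramLM()
lm.train(["the boy threw the ball to the dog"] * 50)   # pretraining
test = "the girl gave the bone to the cat"
before = lm.mean_surprisal(test)
lm.train(["the boy gave the stick to the cat"] * 5)    # adaptation
after = lm.mean_surprisal(test)
print(after < before)  # True: adaptation lowered surprisal for a related sentence
```

Unlike this count model, whose adaptation effect is driven by shared words, an RNN LM can also show such effects for shared structure without lexical overlap; the toy only illustrates the adapt-then-measure procedure.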
3 Similarity between syntactic structures in RNN LM representational space
Following the assumptions in Section 2.2, we define a similarity metric between two structures X and Y in an LM’s representation space by adapting the LM to sentences with X and measuring the change in surprisal for sentences with Y, i.e., measuring to what extent sentences with X prime sentences with Y. We use the notation AE(A_X → T_Y) to refer to this change in surprisal,[3] where A_X and T_Y are non-lexically-overlapping sets of sentences whose members share the structures X and Y respectively. If X and Y are similar to each other in the LM’s representation space, then AE(A_X → T_Y) > 0, i.e., encountering sentences with X causes the LM to assign a higher probability to sentences with Y. On the other hand, if X and Y are unrelated to each other, then AE(A_X → T_Y) ≈ 0, i.e., encountering sentences with X does not cause the LM to change its probability for sentences with Y.

[3] AE(A_X → T_Y) is shorthand for the change in surprisal on the test set T_Y after adapting to the adaptation set A_X.
4 Experimental setup
4.1 Syntactic structures
We analyzed five types of RCs. In an active subject RC, the gap is in the subject position of the embedded clause:[4]

[4] We illustrate the location of the gap with underscores here, but the underscores were not included in the LM’s input.
(5) My cousin that __ liked the book …
In passive RCs, whether unreduced or reduced, the gap is in the subject position of a passivized embedded clause:

(6) a. The book that was liked __ by my cousin …
    b. The book liked __ by my cousin …
In object RCs, again either unreduced or reduced, the gap is in the object position of the embedded clause:

(7) a. The book that my cousin liked __ …
    b. The book my cousin liked __ …
Finally, we also included two additional conditions with verb coordination: one with nearly identical word order and lexical content to active subject RCs ((8); ASRC-matched Coordination), and another with nearly identical word order and lexical content to passive RCs and object RCs ((9); PS/ORC-matched Coordination).[5]

[5] In order to maintain the same word order as in object and passive RCs, the subject of the coordinated verb phrases is an NP that tends to fill the object position in other sentences (e.g., “the equation”). Therefore, many of the sentences in this condition are implausible (e.g., “The equation reviewed the physicists and challenged the method.”).
(8) My cousin liked the book and …
(9) The book liked my cousin and …
These conditions enable us to measure whether sentences with different types of RCs are more similar to each other in an LM’s representation space than they are to lexically matched sentences without RCs.
4.2 Adaptation and test sets
We generated sentences from seven templates, one for each of the syntactic structures of interest. The slots were filled with 223 verbs, 164 nouns, 24 adverbs and 78 adjectives, such that the combination of nouns, verbs, adverbs and adjectives was always semantically plausible. The seven variants of every sentence had nearly identical lexical items (see Table 1).[6] We used these templates to generate five experimental lists; each list comprised a pair of adaptation and test sets with minimal lexical overlap between them (only function words and some modifiers were shared). Each adaptation set contained 20 sentences and each test set contained 50.

[6] Since the main verb of the sentence was constrained to be semantically plausible with the subject of the sentence, it often varied between the active subject RC and ASRC-matched coordination conditions on the one hand and all other conditions on the other.
In order to infer that any decrease in surprisal is caused by adaptation to an abstract syntactic structure, we need to ensure that the models are not adapting to properties of the sentence that are unrelated to the abstract structure of interest. Consider an LM adapted to (10) and tested on (11):
(10) The conspiracy that the employee welcomed divided the country.
(11) The proposal that the receptionist managed shocked the CEO.
When the LM is adapted to sentences such as (10), it could adjust its expectations about several properties of the sentence, some more linguistically interesting than others. For instance, it could learn that there are three determiners in the sentence, that the third word of the sentence is that, that sentences have nine words, that every verb is preceded by a noun, and so on. If there is a decrease in surprisal when a model is adapted to (10) and tested on (11), it is unclear whether this is because the model learned to expect object relative clauses or because it learned to expect any of these other properties.
To minimize the likelihood that the adaptation effects are driven by irrelevant properties of the sentence, we introduced several sources of variability into our templates: nouns could be either singular or plural, noun phrases could optionally be modified by an adjective, adjectives could optionally be modified by an intensifier, and verb phrases could optionally be modified by adverbs, which could occur either pre-verbally or post-verbally (details in the Supplementary Materials).[7]

[7] The templates and code for all the analyses, along with the data, can be found on GitHub: https://github.com/grushaprasad/RNN-Priming
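This kind of templated generation with optional modifiers can be sketched as follows. The mini-lexicon and probabilities are invented; the real templates used a much larger lexicon, enforced semantic plausibility, and varied number marking, all of which this sketch omits:

```python
import random

# Toy lexicon standing in for the study's 164 nouns, 223 verbs,
# 78 adjectives and 24 adverbs (all choices here are illustrative).
LEXICON = {
    "noun": ["conspiracy", "employee", "proposal", "receptionist"],
    "verb": ["welcomed", "divided", "managed", "shocked"],
    "adjective": ["beautiful", "strange"],
    "intensifier": ["very", "quite"],
    "adverb": ["quickly", "carefully"],
}

def noun_phrase(rng, noun):
    """NP with an optional adjective, itself optionally intensified."""
    words = ["the"]
    if rng.random() < 0.5:                    # optional adjective
        if rng.random() < 0.5:                # optional intensifier
            words.append(rng.choice(LEXICON["intensifier"]))
        words.append(rng.choice(LEXICON["adjective"]))
    words.append(noun)
    return words

def verb_phrase(rng, verb):
    """VP with an optional pre- or post-verbal adverb."""
    if rng.random() < 0.5:
        adverb = rng.choice(LEXICON["adverb"])
        return [adverb, verb] if rng.random() < 0.5 else [verb, adverb]
    return [verb]

def unreduced_object_rc(rng):
    """Template: 'The N1 that the N2 V1 V2 the N3'."""
    n1, n2, n3 = rng.sample(LEXICON["noun"], 3)
    v1, v2 = rng.sample(LEXICON["verb"], 2)
    words = (noun_phrase(rng, n1) + ["that"] + noun_phrase(rng, n2)
             + verb_phrase(rng, v1) + verb_phrase(rng, v2)
             + noun_phrase(rng, n3))
    return " ".join(words)

print(unreduced_object_rc(random.Random(0)))
```

Because the optional modifiers vary independently across generated sentences, surface properties such as sentence length or the position of "that" no longer reliably co-occur with the structure of interest.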
4.3 Language models

We used 75 of the LSTM language models trained by van Schijndel et al. (2019); these LMs varied in the number of hidden units per layer (100, 200, 400, 800 or 1600) and the number of tokens they were trained on (2 million, 10 million or 20 million). For each training corpus size, van Schijndel et al. (2019) trained models on five disjoint subsets of the WikiText-103 corpus, to ensure that the results generalized across different training sets.
4.4 Calculating the adaptation effect (AE)
For every structure, we computed the similarity between that structure and every other structure (including itself) as described in Section 3. This process is schematized in Figure 1. The surprisal values were averaged across the entire sentence.[8]

[8] Unknown words were excluded from this average.
We found that AE(A_X → T_Y) was proportional to the surprisal of T_Y prior to adaptation (see Supplementary Materials). As a consequence, for three structures X, Y and Z, AE(A_X → T_Y) could be greater than AE(A_X → T_Z) merely because Y was a more surprising structure to begin with than Z. In order to remove this confound, we first fit a linear regression model predicting AE(A_X → T_Y) from the mean-centered surprisal of T_Y prior to adaptation, S_pre(T_Y):

    AE(A_X → T_Y) = β₀ + β₁ · S_pre(T_Y) + ε

We then regressed out the linear relationship between AE and S_pre as follows:

    AE′(A_X → T_Y) = β₀ + ε̂

Since S_pre was centered around its mean, β₀ reflects the mean of AE when S_pre is equal to the mean surprisal of all sentences prior to adaptation. The residual ε̂ reflects any variance in AE that is not predicted by S_pre. By summing these two terms, AE′ reflects the change in surprisal for T_Y after adapting to A_X that is independent of S_pre.
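This residualization step can be sketched in numpy (variable names and the toy data are ours; the paper fit the regression with its own tooling):

```python
import numpy as np

def residualize_ae(ae, pre_surprisal):
    """Regress the adaptation effect (AE) on mean-centered pre-adaptation
    surprisal, then return the intercept plus the residuals, i.e. AE with
    the linear dependence on pre-adaptation surprisal removed."""
    ae = np.asarray(ae, dtype=float)
    s = np.asarray(pre_surprisal, dtype=float)
    s_centered = s - s.mean()                        # center the predictor
    design = np.column_stack([np.ones_like(s_centered), s_centered])
    beta, *_ = np.linalg.lstsq(design, ae, rcond=None)
    residuals = ae - design @ beta
    return beta[0] + residuals                       # intercept + residuals

# Toy data: AE grows linearly with pre-adaptation surprisal,
# plus a small structure-specific component we want to keep.
pre = np.array([5.0, 6.0, 7.0, 8.0])
ae = 0.5 * pre + np.array([0.1, -0.1, 0.1, -0.1])
corrected = residualize_ae(ae, pre)
print(corrected.mean())                    # equals ae.mean(): only the slope is removed
print(np.corrcoef(corrected, pre)[0, 1])   # near zero: corrected AE no longer tracks pre
```

The corrected values preserve the overall mean adaptation effect while eliminating the correlation with pre-adaptation surprisal, which is exactly the confound described above.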
4.5 Statistical analyses
We used linear mixed effects models (Pinheiro et al., 2000) to test for statistical significance; all of the results reported below were highly significant. Details about the statistical analyses can be found in the Supplementary Materials.
5 Results

5.1 Validating AE as a similarity metric
As discussed in Section 2.3, under the adaptation-as-priming paradigm, we would expect sentences that share the same specific structure to be more similar to each other than to lexically matched sentences that do not share that structure.[9] In other words, if A_X and T_X are non-lexically-overlapping sets of sentences with shared structure X, and T_Y is a set of sentences with a different structure Y that is lexically matched with T_X, then we would expect AE(A_X → T_X) > AE(A_X → T_Y). We found this prediction to be borne out for all seven of our structures (Figure 1(a)), thus validating our similarity metric.

[9] By lexically matched we mean that all content words were shared between sentences.
5.2 Similarity between sentences with different types of VP coordination
Our two coordination conditions were structurally identical to each other but varied in their semantic plausibility: the sentences in the PS/ORC-matched coordination condition were often semantically implausible, whereas sentences in the ASRC-matched condition were always semantically plausible (see footnote 5). If structurally similar sentences cluster together irrespective of semantic plausibility, then we would expect sentences with coordination to be more similar to each other than to lexically matched sentences with RCs. Consistent with this prediction, the adaptation effect for models adapted to one type of coordination was greater when the models were tested on sentences with the other type of coordination than when they were tested on sentences with RCs (top panel of Figure 1(b)).
5.3 Similarity between sentences with different types of RCs
Unlike sentences with coordination, sentences with different types of RCs differ from each other at a surface level (see Table 1). However, at a more abstract level they all share a common property: a gap. If the RNN LMs were keeping track of whether or not a sentence contained a gap, we would expect sentences with different types of RCs to be more similar to each other in the RNN LMs’ representation space than to lexically matched sentences without a gap. In other words, if X and Y are two different types of RCs and Z is the verb coordination structure lexically matched with Y, then we would expect AE(A_X → T_Y) > AE(A_X → T_Z).
Consistent with this prediction, the adaptation effect for models adapted to RCs was greater when they were tested on sentences with other types of RCs than when they were tested on sentences with coordination (bottom panel of Figure 1(b)). This suggests that the LMs do keep track of whether or not a sentence contains a gap, even though this property is not overtly indicated by a lexical item that is shared across all types of RCs.
5.4 Similarity between sentences belonging to different sub-classes of RCs
The different types of RCs we tested can be divided into sub-classes based on at least two linguistically interpretable features: reduction and passivity. Reduction distinguishes reduced passive and object RCs on the one hand from unreduced passive and object RCs on the other. Passivity distinguishes reduced and unreduced passive RCs on the one hand from reduced and unreduced object RCs on the other. The LMs could be tracking either of these features, both, or neither.
We probed whether the LMs track these features by comparing the similarity between sentences that share one feature but not the other, with the similarity between sentences that share neither feature. If the adaptation effect is greater when there is a match in one feature than when there is a match in neither of the features, we can infer that the LMs track whether sentences have that feature. We found that the LMs track both of these features (Figure 3).
Additionally, we probed which of the features contributes more to the similarity between sentences by comparing the similarity between sentences that match only in passivity with the similarity between sentences that match only in reduction. When the adaptation and test sets matched only in passivity, the adaptation effect was slightly (but significantly) greater than when they matched only in reduction (Figure 3). In other words, in the LMs’ representation space, (12) is more similar to (13) than it is to (14), suggesting that passivity contributes more to the similarity between sentences than reduction does.
(12) The conspiracy the employee welcomed divided the country.
(13) The conspiracy that the employee welcomed divided the country.
(14) The conspiracy welcomed by the employee divided the country.
5.5 What properties of sentences drive the similarity between them?
Our analyses so far have demonstrated that sentences that belong to linguistically interpretable classes (e.g., sentences that match in reduction) are more similar to each other in the LMs’ representation space than they are to sentences that do not belong to those classes (e.g., sentences that do not match in reduction). However, it is unclear what properties of the sentences are driving this similarity between members of the class. For almost all of the linguistically interpretable classes we considered, all sentences belonging to a class shared at least some, if not all, function words. The only exception was the class of all RCs, where the property shared by all sentences in this class (the presence of a gap) was not overtly observable. Therefore, it is possible that the similarity between members of most of the classes we tested was being driven entirely by the presence of these function words.
In order to test whether the similarity between members of classes was indeed being driven by the presence of shared function words, we compared the representation space of the models we tested in the previous sections (henceforth trained models) with the representation space of models trained on no data (henceforth baseline models). Since the baseline models were only ever exposed to the 20 sentences in the adaptation set and there was no lexical overlap in content words between adaptation and test sets, any similarity between sentences in the representation space of these models would be driven by the presence of function words. If the similarity between sentences in the representation space of the trained models was being driven by factors other than the presence of function words, we would expect this similarity to be greater than the similarity between these sentences in the representation space of the baseline models.
We cannot directly use the adaptation effect to compare the similarity between sentences in the representation spaces of trained models and baseline models, however: models trained on more data are likely to have stronger priors, and are therefore less likely to drastically change their representations after 20 sentences than models trained on less data. In order to mitigate this issue, we defined a distance measure between sentences that belong to a class and sentences that do not, as the ratio of the within-class adaptation effect to the between-class adaptation effect (see Figure 4 for a schematic):

    D(class) = [mean AE(A_X → T_Y) for X, Y ∈ class] / [mean AE(A_X → T_Z) for X ∈ class, Z ∉ class]
This value would be greater than one if sentences that belonged to a class were more similar to each other than they were to sentences that did not belong to the class. Since the strength of prior belief would affect sentences that belong to the class the same way it would affect sentences that do not belong to the class, the effect would cancel out.
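A sketch of this ratio-style measure, given a matrix of adaptation effects between structures (the 4x4 matrix below is hypothetical, and the exact formula in the paper may differ in detail from this within/between ratio):

```python
import numpy as np

def within_between_ratio(ae_matrix, members):
    """ae_matrix[i][j]: adaptation effect when adapting to structure i and
    testing on structure j; members: boolean mask marking class members.
    Returns the mean within-class AE divided by the mean AE from class
    members to non-members. Values above 1 mean class members prime each
    other more strongly than they prime outsiders."""
    ae = np.asarray(ae_matrix, dtype=float)
    m = np.asarray(members, dtype=bool)
    within = ae[np.ix_(m, m)]      # member -> member cells
    between = ae[np.ix_(m, ~m)]    # member -> non-member cells
    return within.mean() / between.mean()

# Hypothetical AE matrix: structures 0-1 form a class and prime each
# other more strongly than they prime structures 2-3.
ae = [[4.0, 3.0, 1.0, 1.0],
      [3.0, 4.0, 1.0, 1.0],
      [1.0, 1.0, 4.0, 3.0],
      [1.0, 1.0, 3.0, 4.0]]
print(within_between_ratio(ae, [True, True, False, False]))  # 3.5
```

Because both the numerator and denominator scale with how strongly a model adapts overall, a model's overall willingness to adapt (its prior strength) cancels out of the ratio, which is what makes trained and baseline models comparable.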
We measured the distance between members and non-members for three linguistically interpretable classes: sentences that contained the same type of RC, sentences that matched in their reduction, and sentences that contained any type of RC. In the baseline models, for all three classes, sentences that belonged to the class were more similar to each other than to sentences that did not (Figure 4(a)). This was surprising for the class of sentences that contained any type of RC, because there was no function word shared by all sentences in this class. We hypothesize that this is because sentences without RCs always contained the word and, whereas sentences with RCs never did.
In cases where members of the class shared at least some function words, the distance between class members and non-members was greater for the trained models than for the baseline models. This suggests that the similarity between sentences in the representation space of trained models was driven by factors other than the mere presence of function words. However, somewhat surprisingly, as the number of training tokens increased, the distance between members and non-members decreased.
In the case where the members of the class did not share any function words, the distance between class members and non-members did not differ between the trained models and the baseline models. This suggests that any similarity between these sentences in the representation space of trained models was driven purely by the presence (or, in this case, absence) of lexical items.
5.6 Does similarity between RCs predict agreement prediction accuracy?
Marvin and Linzen (2018) created a dataset for evaluating the grammaticality of the predictions of language models. Using this dataset, they showed that LSTM LMs could not accurately predict the number of the main verb when the main clause subject was modified by an object RC (either reduced or unreduced). However, the models performed better when the main clause subject was modified by an active subject RC. For example, the models were at near-chance level in predicting that (15a) should have higher probability than (15b), but were slightly better at predicting that (16a) should have higher probability than (16b):
(15) a. The farmer that the parents love swims.
     b. *The farmer that the parents love swim.
(16) a. The farmer that loves the parents swims.
     b. *The farmer that loves the parents swim.
One possible explanation for this poor performance is that object RCs, whether reduced or unreduced, are quite infrequent (Roland et al., 2007). If the LM treats object RCs as unrelated to other RCs, there are likely very few training examples from which the model can learn about subject-verb agreement when the subject is modified by an object RC. If the LM instead treated object RCs as belonging to the same class as other RCs, it could generalize from training examples of subject-verb agreement in which the subject is modified by other kinds of RCs. This suggests the hypothesis that agreement prediction accuracy on object RCs will be higher in LMs in which the representation of object RCs is more similar to the representation of other RCs.
The similarity between object RCs and other RCs was defined as in the previous section (the proportion of blue squares to pink squares in the top two rows of Figure 4). Accuracy increased as the number of hidden units increased (see Figure 4(b)). However, the similarity between object RCs and other types of RCs did not significantly correlate with agreement prediction accuracy; we therefore did not find any evidence for the hypothesis above.[10]

[10] Similar patterns were observed for the other constructions in the dataset. See Supplementary Materials.
6 Discussion

Drawing on the syntactic priming paradigm from psycholinguistics, we proposed a new technique for analyzing how the representations of sentences in neural language models (LMs) are organized. Applying this paradigm to sentences with relative clauses (RCs), we found that the representations of these sentences were organized in a linguistically interpretable, hierarchical manner (summarized in Figure 6).
We investigated whether this hierarchical organization was driven by function words that are shared among sentences, or whether there was evidence that the LMs were tracking more abstract properties of the sentence. We found that for at least some linguistically interpretable classes, sentences belonging to these classes were more similar to each other in the representation space of the LMs we tested than in the representation space of baseline LMs that were not trained on any data. This suggests that the trained LMs were capable of tracking abstract properties of the sentence.
However, for linguistically interpretable classes in which sentences shared a non-lexically observable property (e.g. presence of a gap), sentences were as similar to each other in the representation space of the LMs we tested as in the representation space of baseline LMs. Taken together, these results suggest that LMs might be able to track abstract properties of classes of sentences only if these classes also share a lexically observable property.
Additionally, we found that sentences belonging to linguistically interpretable classes were more similar to each other in the representation spaces of models trained on 2 million tokens than in those of models trained on 20 million tokens. We infer from this that LMs’ ability to track abstract properties of sentences decreases as the training corpus grows. This suggests that if we want these LMs to track more abstract linguistic properties, training them on more data from the same distribution is unlikely to help (cf. van Schijndel et al. 2019). Future work can explore how to bias these models to track linguistically useful properties through architectural biases (Dyer et al., 2016), training on auxiliary tasks (Enguehard et al., 2017) or data augmentation (Perez and Wang, 2017).
We hypothesized that models’ accuracy on subject verb agreement when preceded by object RCs would increase as the similarity between object RCs and the other types of RCs increased. However, we did not find evidence for this. This could either be because the similarity between object RCs and the other types of RCs was too weak to be useful (see Figure 4(a)) or because the LMs do not use this property when predicting verb agreement. Future work can disambiguate these reasons by testing models that are biased to treat sentences with object RCs and other RCs as being similar.
Finally, our method allows us to generate a similarity matrix in an LM’s representation space for any given set of structures. In the future, generating an analogous matrix for human representations using priming experiments, and comparing the two matrices using analysis methods from cognitive neuroscience (Kriegeskorte et al., 2008), may give insight into how human-like the LM representations are, and conversely, how LM-like the human representations are.
7 Conclusion

We proposed a novel technique for analyzing how the representations of various syntactic structures are organized in neural language models. As a case study, we applied this technique to the representations of sentences with relative clauses in RNN language models, and found that the representations of these sentences were organized in a linguistically interpretable manner.
Acknowledgments

We would like to thank Sadhwi Srinivas and the members of the CAP lab at JHU for helpful discussions and valuable feedback.
References
- Alishahi, A., Chrupała, G., and Linzen, T. (2019). Analyzing and interpreting neural networks for NLP: a report on the first BlackboxNLP workshop. Natural Language Engineering 25(4), pp. 543–557. Cited by: §1.
- Belinkov, Y. and Glass, J. (2019). Analysis methods in neural language processing: a survey. Transactions of the Association for Computational Linguistics 7, pp. 49–72. Cited by: §1.
- Bock, J. K. (1986). Syntactic persistence in language production. Cognitive Psychology 18(3), pp. 355–387. Cited by: §1.
- Branigan, H. P. and Pickering, M. J. (2017). An experimental approach to linguistic representation. Behavioral and Brain Sciences 40. Cited by: §1.
- Chang, F., Dell, G. S., and Bock, K. (2006). Becoming syntactic. Psychological Review 113(2), pp. 234–272. Cited by: §2.2.
- Chowdhury, S. A. and Zamparelli, R. (2019). An LSTM adaptation study of (un)grammaticality. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, Florence, Italy, pp. 204–212. Cited by: §2.3.
- Conneau, A., Kruszewski, G., Lample, G., Barrault, L., and Baroni, M. (2018). What you can cram into a single $&!#* vector: probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia, pp. 2126–2136. Cited by: §1.
- Dyer, C., Kuncoro, A., Ballesteros, M., and Smith, N. A. (2016). Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, California, pp. 199–209. Cited by: §6.
- Enguehard, É., Goldberg, Y., and Linzen, T. (2017). Exploring the syntactic abilities of RNNs with multi-task learning. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), Vancouver, Canada, pp. 3–14. Cited by: §6.
- Futrell, R., Wilcox, E., Morita, T., Qian, P., Ballesteros, M., and Levy, R. (2019). Neural language models as psycholinguistic subjects: representations of syntactic state. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 32–42. Cited by: §2.1.
- Giulianelli, M., Harding, J., Mohnert, F., Hupkes, D., and Zuidema, W. (2018). Under the hood: using diagnostic classifiers to investigate and improve how language models track agreement information. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, Brussels, Belgium, pp. 240–248. Cited by: §1.
- Grave, E., Joulin, A., and Usunier, N. (2017). Improving neural language models with a continuous cache. In Proceedings of the Fifth International Conference on Learning Representations, Y. Bengio and Y. LeCun (Eds.). Cited by: §2.3.
- Gulordava, K., Bojanowski, P., Grave, E., Linzen, T., and Baroni, M. (2018). Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), New Orleans, Louisiana, pp. 1195–1205. Cited by: §2.1.
- Hale, J. (2001). A probabilistic Earley parser as a psycholinguistic model. In Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics on Language Technologies, Pittsburgh, PA, pp. 1–8. Cited by: §2.2.
- Jumelet, J. and Hupkes, D. (2018). Do language models understand anything? On the ability of LSTMs to understand negative polarity items. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, Brussels, Belgium, pp. 222–231. Cited by: §2.1.
- Kádár, Á., Chrupała, G., and Alishahi, A. (2017). Representation of linguistic form and function in recurrent neural networks. Computational Linguistics 43(4), pp. 761–780. Cited by: §1.
- Kaschak, M. P., Kutta, T. J., and Jones, J. L. (2011). Structural priming as implicit learning: cumulative priming effects and individual differences. Psychonomic Bulletin & Review 18(6), pp. 1133–1139. Cited by: §2.2, footnote 1.
- Krause, B., Kahembwe, E., Murray, I., and Renals, S. (2017). Dynamic evaluation of neural sequence models. Technical report, University of Edinburgh. Cited by: §2.3.
- Kriegeskorte, N., Mur, M., and Bandettini, P. (2008). Representational similarity analysis: connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience 2, p. 4. Cited by: §6.
- Lakretz, Y., Kruszewski, G., Desbordes, T., Hupkes, D., Dehaene, S., and Baroni, M. (2019). The emergence of number and syntax units in LSTM language models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 11–20. Cited by: §1.
- Levy, R. (2008). Expectation-based syntactic comprehension. Cognition 106, pp. 1126–1177. Cited by: §2.2.
- Linzen, T., Dupoux, E., and Goldberg, Y. (2016). Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics 4, pp. 521–535. Cited by: §1, §2.1.
- Mahowald, K., James, A., Futrell, R., and Gibson, E. (2016). A meta-analysis of syntactic priming in language production. Journal of Memory and Language 91, pp. 5–27. Cited by: §2.2.
- Marvin, R. and Linzen, T. (2018). Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, pp. 1192–1202. Cited by: §1, §2.1, §5.6.
- McCoy, R. T., Frank, R., and Linzen, T. (2018). Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks. In Proceedings of the 40th Annual Conference of the Cognitive Science Society, T. Rogers, M. Rau, J. Zhu, and C. Kalish (Eds.), Austin, TX, pp. 2093–2098. Cited by: §1.
- Perez, L. and Wang, J. (2017). The effectiveness of data augmentation in image classification using deep learning. arXiv preprint arXiv:1712.04621. Cited by: §6.
- Pinheiro, J. C. and Bates, D. M. (2000). Mixed-Effects Models in S and S-PLUS. Springer Science & Business Media. Cited by: §4.5.
- Qian, P., Qiu, X., and Huang, X. (2016). Analyzing linguistic knowledge in sequential model of sentence. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, pp. 826–835. Cited by: §1.
- Roland, D., Dick, F., and Elman, J. L. (2007). Frequency of basic English grammatical structures: a corpus analysis. Journal of Memory and Language 57(3), pp. 348–379. Cited by: §5.6.
- Shi, X., Padhi, I., and Knight, K. (2016). Does string-based neural MT learn source syntax? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, pp. 1526–1534. Cited by: §1.
- Tooley, K. M. and Traxler, M. J. (2010). Syntactic priming effects in comprehension: a critical review. Language and Linguistics Compass 4(10), pp. 925–937. Cited by: §2.2.
- van Schijndel, M. and Linzen, T. (2018). A neural model of adaptation in reading. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, pp. 4704–4710. Cited by: §1, §2.3, §4.3.
- van Schijndel, M., Mueller, A., and Linzen, T. (2019). Quantity doesn't buy quality syntax with neural language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, Hong Kong, China. Cited by: §4.3, §6.
- Weber, N., Shekhar, L., and Balasubramanian, N. (2018). The fine line between linguistic generalization and failure in Seq2Seq-attention models. In Proceedings of the Workshop on Generalization in the Age of Deep Learning, New Orleans, Louisiana, pp. 24–27. Cited by: §1.
- Wells, J. B., Christiansen, M. H., Race, D. S., Acheson, D. J., and MacDonald, M. C. (2009). Experience and sentence processing: statistical learning and relative clause comprehension. Cognitive Psychology 58, pp. 250–271. Cited by: §1, §2.2, footnote 1.
- Wilcox, E., Levy, R., Morita, T., and Futrell, R. (2018). What do RNN language models learn about filler–gap dependencies? In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, Brussels, Belgium, pp. 211–221. Cited by: §2.1.
Appendix A Templates
We created seven templates (one for each of the structures we tested) to generate the adaptation and test sets. Each template had seven slots: subject, object of the relative clause, object of the main clause, verb in the relative clause, verb in the main clause, adverb for the main clause, and adverb for the relative clause. The adverb slots were left blank half the time. The seven templates differed in the order in which they combined these arguments into a sentence; for a given set of arguments, we could therefore generate seven lexically matched sentences with different structures.
We included several sources of noise in our sentence generation process.
Each noun slot was filled by a plural noun 40% of the time.
Every noun phrase was modified with an adjective with 50% probability and every adjective was further modified with an intensifier with 40% probability.
In cases when a verb (in the main clause or relative clause) was modified by an adverb, the adverb occurred pre-verbally or post-verbally with equal probability.
The slots in the templates were filled by 223 verbs, 164 nouns, 24 adverbs, and 78 adjectives. To ensure semantic plausibility, we created sub-classes of nouns, adverbs, and adjectives and manually specified which sub-classes could combine. For example, the noun sub-class “human” consisted of the nouns friend, cousin, partner, sibling, and colleague. This class could serve as the subject for 38 verbs and could be modified by four sub-classes of adjectives. Similarly, the verb congratulated could take the noun sub-class “human” as its subject and the noun sub-classes “scienceperson” and “power” as its object (e.g., scientist, researcher, etc.; principal, manager, etc.). Additionally, it could be modified by the adverb sub-classes “sad” and “time” (e.g., sadly, gloomily, etc.; yesterday, last month, etc.).
We ensured that there was no lexical overlap between the adaptation and test sets, apart from function words (like the, and, by, and that) and intensifiers (like very, rather, and quite). We also ensured that verbs, nouns, adverbs, and adjectives were not repeated within the same sentence.
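The generation procedure above can be sketched in Python. This is a minimal illustrative stand-in, not the paper's actual generator: the tiny lexicon and the single template below are invented placeholders for the real 223-verb/164-noun lexicon and the seven templates, but the noise probabilities (40% plurals, 50% adjectives, 40% intensifiers, pre-/post-verbal adverbs with equal probability) follow the description:

```python
import random

random.seed(0)

# Hypothetical mini-lexicon standing in for the real one
# (223 verbs, 164 nouns, 24 adverbs, 78 adjectives).
NOUNS = ["friend", "cousin", "partner", "sibling", "colleague"]
RC_VERBS = ["congratulated", "praised"]
MAIN_VERBS = ["thanked", "ignored"]
ADJECTIVES = ["kind", "nervous"]
INTENSIFIERS = ["very", "quite"]
ADVERBS = ["yesterday", "sadly"]

def noun_phrase():
    """Build a noun phrase with the noise sources described above."""
    noun = random.choice(NOUNS)
    if random.random() < 0.40:          # plural noun 40% of the time
        noun += "s"
    words = []
    if random.random() < 0.50:          # adjective with 50% probability
        if random.random() < 0.40:      # intensifier with 40% probability
            words.append(random.choice(INTENSIFIERS))
        words.append(random.choice(ADJECTIVES))
    words.append(noun)
    return "the " + " ".join(words)

def unreduced_object_rc():
    """One illustrative template: subject modified by an unreduced object RC."""
    subj, rc_subj, obj = noun_phrase(), noun_phrase(), noun_phrase()
    rc_verb = random.choice(RC_VERBS)
    main_verb = random.choice(MAIN_VERBS)
    # Adverb slot is blank half the time; when filled, it occurs
    # pre-verbally or post-verbally with equal probability.
    rc_adv = random.choice(ADVERBS) if random.random() < 0.5 else ""
    if rc_adv and random.random() < 0.5:
        rc_vp = f"{rc_adv} {rc_verb}"
    else:
        rc_vp = f"{rc_verb} {rc_adv}".strip()
    sentence = f"{subj} that {rc_subj} {rc_vp} {main_verb} {obj} ."
    return " ".join(sentence.split())   # collapse double spaces

print(unreduced_object_rc())
```

Swapping the order in which the same argument set is combined yields the other lexically matched structures.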
Appendix B Relationship between AE and perplexity prior to adaptation
Appendix C Statistical Analyses
This section contains details of the statistical analyses for all the results described in the main paper. In describing the formulas for our mixed-effects models, we use standard LMER (lme4) notation.
C.1 Validating AE as a similarity metric
For this analysis, we fit a separate LMEM for each of the structures that models could be adapted to.
AE ~ structure + (1 | adaptlist) + (1 | clist)
structure is a categorical variable coding whether the test structure is the same as the adaptation structure or different.
adaptlist: which of the 10 adaptation–test sets we generated the model was adapted to and tested on.
clist: which subset of Wikipedia the model was trained on.
Structure adapted to     SE      p-value
Unreduced Object RC      0.256   0.001
Reduced Object RC        0.171   0.001
Unreduced Passive RC     0.229   0.001
Reduced Passive RC       0.100   0.001
Active Subject RC        0.194   0.001
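As a sketch, a model of this form can be fit in Python with statsmodels (the paper's analyses use lme4 in R). The data below are synthetic and all names are illustrative; the crossed random intercept for clist is expressed as a variance component, a common statsmodels workaround since MixedLM has only one primary grouping factor:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400

# Synthetic stand-in data: AE is the adaptation effect; structure codes
# whether the test structure matches the adaptation structure (1) or not (0).
adapt_id = rng.integers(0, 10, n)        # 10 adaptation-test lists
clist_id = rng.integers(0, 3, n)         # 3 training-corpus subsets
structure = rng.integers(0, 2, n)
adapt_re = rng.normal(0, 0.05, 10)       # per-list random intercepts
ae = 0.2 * structure + adapt_re[adapt_id] + rng.normal(0, 0.1, n)

df = pd.DataFrame({"AE": ae, "structure": structure,
                   "adaptlist": adapt_id.astype(str),
                   "clist": clist_id.astype(str)})

# Random intercept for adaptlist via `groups`; crossed random intercept
# for clist via `vc_formula`.
model = smf.mixedlm("AE ~ structure", data=df,
                    groups=df["adaptlist"],
                    vc_formula={"clist": "0 + C(clist)"})
result = model.fit()
print(result.params["structure"])
```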
C.2 Similarity between sentences with different types of VP coordination
We fit the following mixed-effects model on LMs that were adapted to sentences with coordination.
AE ~ testtype + (1 | adaptlist) + (1 | clist)
testtype was a categorical variable coding whether the model was tested on sentences with RCs or on sentences with the other type of coordination (e.g., for a model adapted to ASRC-matched coordination, whether it was tested on PS/ORC-matched coordination).
C.3 Similarity between sentences with different types of RCs
We fit the following mixed-effects model on LMs that were adapted to sentences with RCs.
AE ~ testtype + (1 | adaptlist) + (1 | clist)
testtype was a categorical variable coding whether the model was tested on sentences with other types of RCs (e.g., for a model adapted to unreduced object RCs, when tested on reduced object RCs, reduced/unreduced passive RCs, or active subject RCs) or on sentences with coordination.
C.4 Similarity between sentences belonging to different sub-classes of RCs
We fit the following mixed-effects model on LMs that were adapted to sentences with object or passive RCs.
AE ~ testtype + (1 | adaptlist) + (1 | clist)
testtype was a categorical variable with four levels: passive match, reduced match, no match, and both match. Since there were four levels, there were three contrasts. Passive match served as the baseline level; in each contrast, the mean adaptation effect of passive match was compared to the mean adaptation effect of one of the other levels.
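This baseline-versus-other-levels scheme is treatment (dummy) coding, which can be sketched with patsy in Python (the level names come from the paper; the toy data frame is illustrative):

```python
import pandas as pd
from patsy import dmatrix

levels = ["passive match", "reduced match", "no match", "both match"]
df = pd.DataFrame({"testtype": levels * 2})   # toy data: each level twice

# Treatment coding with "passive match" as the baseline: each of the
# three non-baseline levels gets one contrast column, so the fitted
# coefficient for that column compares it to passive match.
X = dmatrix("C(testtype, Treatment(reference='passive match'))", df,
            return_type="dataframe")
print(X.shape)   # intercept column + three contrast columns
```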
C.5 What properties of sentences drive the similarity between them?
We fit a separate mixed-effects model for each of the three linguistically interpretable classes discussed in Section 5.5 of the paper. We did not include the baseline models in these analyses.
AE ~ scale(nhid) * scale(csize) + (1 | adaptlist) + (1 | clist)
nhid refers to the number of hidden units (100, 200, 400, 800, 1600) and csize refers to the training corpus size in millions of tokens (2, 10, 20).
C.6 Does AE predict agreement prediction accuracy?
We fit a separate linear regression model for LMs adapted to either reduced or unreduced Object RCs.
accuracy ~ AE + scale(nhid) + scale(csize)
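A sketch of this regression in Python with statsmodels, on synthetic stand-in data (all names and values are illustrative; patsy's standardize() plays the role of R's scale()):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 150

df = pd.DataFrame({
    "AE": rng.normal(0.2, 0.05, n),                     # adaptation effect
    "nhid": rng.choice([100, 200, 400, 800, 1600], n),  # hidden units
    "csize": rng.choice([2, 10, 20], n),                # corpus size (M tokens)
})
# Synthetic accuracy that depends on AE only, so the fitted AE
# coefficient should recover the true slope of 1.0.
df["accuracy"] = 0.5 + 1.0 * df["AE"] + rng.normal(0, 0.02, n)

fit = smf.ols("accuracy ~ AE + standardize(nhid) + standardize(csize)",
              data=df).fit()
print(fit.params["AE"])
```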
Appendix D Relationship between AE and agreement prediction accuracy for other structures
For each structure, we fit the same linear regression model as in C.6:
accuracy ~ AE + scale(nhid) + scale(csize)