Syntactic Structure Distillation Pretraining For Bidirectional Encoders

05/27/2020 ∙ by Adhiguna Kuncoro, et al.

Textual representation learners trained on large amounts of data have achieved notable success on downstream tasks; intriguingly, they have also performed well on challenging tests of syntactic competence. Given this success, it remains an open question whether scalable learners like BERT can become fully proficient in the syntax of natural language by virtue of data scale alone, or whether they still benefit from more explicit syntactic biases. To answer this question, we introduce a knowledge distillation strategy for injecting syntactic biases into BERT pretraining, by distilling the syntactically informative predictions of a hierarchical—albeit harder to scale—syntactic language model. Since BERT models masked words in bidirectional context, we propose to distill the approximate marginal distribution over words in context from the syntactic LM. Our approach reduces relative error by 2-21%, although we obtain mixed results on the GLUE benchmark. Our findings demonstrate the benefits of syntactic biases, even in representation learners that exploit large amounts of data, and contribute to a better understanding of where syntactic biases are most helpful in benchmarks of natural language understanding.




1 Introduction

Large-scale textual representation learners trained with variants of the language modelling (LM) objective have achieved remarkable success on downstream tasks (Peters et al., 2018; Devlin et al., 2019; Yang et al., 2019). Furthermore, these models have also been shown to perform remarkably well at syntactic grammaticality judgment tasks (Goldberg, 2019), and encode substantial amounts of syntax in their learned representations (Liu et al., 2019a; Tenney et al., 2019a, b; Hewitt and Manning, 2019; Jawahar et al., 2019). Intriguingly, the success on these syntactic tasks has been achieved by Transformer architectures (Vaswani et al., 2017) that lack explicit notions of hierarchical syntactic structures.

Based on such evidence, it would be tempting to conclude that data scale alone is all we need to learn the syntax of natural language. Nevertheless, recent findings that systematically compare the syntactic competence of models trained at varying data scales suggest that model inductive biases are in fact more important than data scale for acquiring syntactic competence (Hu et al., 2020). Two natural questions, therefore, are the following: can representation learners that work well at scale still benefit from explicit syntactic biases? And where exactly would such syntactic biases be helpful in different language understanding tasks? Here we work towards answering these questions by devising a new pretraining strategy that injects syntactic biases into a BERT (Devlin et al., 2019) learner that works well at scale. We hypothesise that this approach can improve the competence of BERT on various tasks, which provides evidence for the benefits of syntactic biases in large-scale learners.

Our approach is based on the prior work of Kuncoro et al. (2019), who devised an effective knowledge distillation (KD; Bucilǎ et al., 2006; Hinton et al., 2015) procedure for improving the syntactic competence of scalable LMs that lack explicit syntactic biases. More concretely, their KD procedure utilised the predictions of an explicitly hierarchical (albeit hard to scale) syntactic LM, recurrent neural network grammars (RNNGs; Dyer et al., 2016), as a syntactically informed learning signal for a sequential LM that works well at scale.

Our setup nevertheless presents a new challenge: here the BERT student is a denoising autoencoder that models a collection of conditionals for words in bidirectional context, while the RNNG teacher is an autoregressive LM that predicts words in a left-to-right fashion, i.e. q(x_i | x_{<i}). This mismatch crucially means that the RNNG's estimate of q(x_i | x_{<i}) may fail to take into account the right context that is accessible to the BERT student (§3). Hence, we propose an approach where the BERT student distills the RNNG's marginal distribution over words in bidirectional context, q(x_i | x_{<i}, x_{>i}). We develop an efficient yet effective approximation for this quantity, since exact inference is expensive owing to the RNNG's left-to-right parameterisation.

Our structure-distilled BERT model differs from the standard BERT only in its pretraining objective, and hence retains the scalability afforded by Transformer architectures and specialised hardware like TPUs. Our approach also maintains complete compatibility with standard BERT pipelines; the structure-distilled BERT models can simply be loaded as pretrained BERT weights, which can then be fine-tuned in the exact same fashion.

We hypothesise that the stronger syntactic biases from our new pretraining procedure are useful for a variety of natural language understanding (NLU) tasks that involve structured output spaces—including tasks like semantic role labelling (SRL) and coreference resolution that are not explicitly syntactic in nature. We thus evaluate our models on 6 diverse structured prediction tasks, including phrase-structure parsing (in-domain and out-of-domain), dependency parsing, SRL, coreference resolution, and a CCG supertagging probe, in addition to the GLUE benchmark (Wang et al., 2019). On the structured prediction tasks, our structure-distilled BERT reduces relative error by 2% to 21%. These gains are more pronounced in the low-resource scenario, suggesting that stronger syntactic biases help improve sample efficiency (§4).

Despite the gains on the structured prediction tasks, we achieve mixed results on GLUE: our approach yields improvements on the corpus of linguistic acceptability (Warstadt et al., 2018, CoLA), and yet performs slightly worse on the rest of GLUE. These findings allude to a partial dissociation between model performance on GLUE, and on other more syntax-sensitive benchmarks of NLU.

Altogether, our findings: (i) showcase the benefits of syntactic biases, even for representation learners that leverage large amounts of data, (ii) help better understand where syntactic biases are most helpful, and (iii) make a case for designing approaches that not only work well at scale, but also integrate stronger notions of syntactic biases.

2 Recurrent Neural Network Grammars

Here we briefly describe the RNNG (Dyer et al., 2016) that we use as the teacher model. An RNNG is a syntactic LM that defines the joint probability of surface strings x and phrase-structure nonterminals y, henceforth denoted as p(x, y), through a series of structure-building actions that traverse the tree in a top-down, left-to-right fashion. Let N and Σ denote the set of phrase-structure non-terminals and word terminals, respectively. At each time step, the decision over the next action a_t ∈ {nt(n), gen(w), reduce}, where n ∈ N and w ∈ Σ, is parameterised by a stack LSTM (Dyer et al., 2015) that encodes partial constituents. The choice of a_t yields these transitions:

  • nt(n) and gen(w) would push the corresponding embeddings of n or w onto the stack;

  • reduce would pop the top elements up to the last incomplete non-terminal, compose these elements with a separate bidirectional LSTM, and lastly push the composite phrase embedding back onto the stack. The hierarchical inductive bias of RNNGs can be attributed to this composition function (not all syntactic LMs have hierarchical biases: Choe and Charniak (2016) modelled strings and phrase structures sequentially with LSTMs, and their model can be understood as a special case of RNNGs without the composition function), which recursively combines smaller units into larger ones.
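The stack mechanics of these transitions can be sketched in a few lines of Python. This is a toy illustration only: the real model scores actions with a stack LSTM and composes phrases with a bidirectional LSTM, whereas here "composition" is mocked as building a tuple, purely to show how nt, gen, and reduce interact on the stack.

```python
# Toy simulator of RNNG structure-building actions: NT(n), GEN(w), REDUCE.
# Composition is mocked as a tuple; the learned stack LSTM and bidirectional
# LSTM composition function are deliberately omitted.

def run_rnng_actions(actions):
    """Execute a top-down, left-to-right action sequence and return the tree."""
    stack = []
    for act in actions:
        kind = act[0]
        if kind == "NT":          # push an open non-terminal, e.g. (NP
            stack.append(("OPEN", act[1]))
        elif kind == "GEN":       # push a terminal word
            stack.append(("WORD", act[1]))
        elif kind == "REDUCE":    # pop up to the last open NT, compose, push back
            children = []
            while stack and stack[-1][0] != "OPEN":
                children.append(stack.pop())
            label = stack.pop()[1]
            # mock "composition": bundle children into a single phrase element
            phrase = ("PHRASE", label, list(reversed(children)))
            stack.append(phrase)
    assert len(stack) == 1, "a complete action sequence yields a single tree"
    return stack[0]

# Actions for "(S (NP the dog) (VP barks))"
tree = run_rnng_actions([
    ("NT", "S"),
    ("NT", "NP"), ("GEN", "the"), ("GEN", "dog"), ("REDUCE",),
    ("NT", "VP"), ("GEN", "barks"), ("REDUCE",),
    ("REDUCE",),
])
```

The key point the sketch makes concrete is that reduce is the only action that builds hierarchy: it collapses a completed constituent into a single stack element, which is exactly where the RNNG's hierarchical bias enters.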

RNNGs attempt to maximise the probability of correct action sequences relative to each gold tree. (Unsupervised RNNGs (Kim et al., 2019) exist, although they perform worse on measures of syntactic competence.)

Extension to subwords.

Here we extend the RNNG to operate over subword units (Sennrich et al., 2016) to enable compatibility with the BERT student. As each word can be split into an arbitrary-length sequence of subwords, we preprocess the phrase-structure trees to include an additional nonterminal symbol WORD that represents a word as a sequence of subwords, as illustrated by the example “(S (NP (WORD the) (WORD d ##og)) (VP (WORD ba ##rk ##s)))”, where tokens prefixed by “##” are subword units. (An alternative here is to represent each phrase as a flat sequence of subwords, although our preliminary experiments indicate that this approach yields worse perplexity.)
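The preprocessing step can be illustrated with a short sketch. The greedy longest-match tokeniser and the tiny vocabulary below are assumptions standing in for the real WordPiece machinery; only the tree rewriting mirrors the description above.

```python
# Sketch of the subword preprocessing: each word leaf is re-tokenised into
# subwords and wrapped under an extra WORD non-terminal. The toy tokeniser
# and vocabulary are illustrative stand-ins for WordPiece.

def tokenise(word, vocab):
    """Greedy longest-match-first, WordPiece-style subword split (toy)."""
    pieces, rest = [], word
    while rest:
        for end in range(len(rest), 0, -1):
            cand = rest[:end] if not pieces else "##" + rest[:end]
            if cand in vocab:
                pieces.append(cand)
                rest = rest[end:]
                break
        else:
            return ["[UNK]"]  # no piece matches: fall back to UNK
    return pieces

def to_subword_tree(tree, vocab):
    """Rewrite a nested-list tree so every word becomes (WORD piece ...)."""
    out = [tree[0]]
    for child in tree[1:]:
        if isinstance(child, str):                    # word leaf
            out.append(["WORD"] + tokenise(child, vocab))
        else:                                         # nested phrase
            out.append(to_subword_tree(child, vocab))
    return out

def bracket(tree):
    """Render the nested-list tree in bracketed notation."""
    if isinstance(tree, str):
        return tree
    return "(" + " ".join(bracket(t) for t in tree) + ")"

vocab = {"the", "d", "##og", "ba", "##rk", "##s"}
tree = ["S", ["NP", "the", "dog"], ["VP", "barks"]]
subword = bracket(to_subword_tree(tree, vocab))
# reproduces the example: (S (NP (WORD the) (WORD d ##og)) (VP (WORD ba ##rk ##s)))
```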

3 Approach

We begin with a brief review of the BERT objective, before outlining our structure distillation approach.

3.1 BERT Pretraining Objective

The aim of BERT pretraining is to find model parameters θ that maximise the probability of reconstructing parts of a sentence x conditional on a corrupted version x̂ = mask(x), where mask denotes the stochastic corruption protocol of Devlin et al. (2019) that is applied to each word. Formally, this corresponds to minimising the masked LM loss:

L_MLM(θ) = −Σ_{i∈I} log p_θ(x_i | x̂),   (1)

where I denotes the indices of masked tokens that serve as reconstruction targets. (In practice, the corruption protocol and the reconstruction targets are intertwined; I denotes the indices of tokens in x that were altered by mask.) This masked LM objective is then combined with a next-sentence prediction loss that predicts whether the two segments in x are contiguous sequences.
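A minimal sketch of this corruption-and-reconstruction setup follows; the 15% masking rate and the 80/10/10 mask/random/keep split follow Devlin et al. (2019), while `model_prob` is a hypothetical stand-in for the student's conditional p_θ(x_i | x̂).

```python
import math, random

# Sketch of the masked-LM objective: corrupt a sentence, record the
# reconstruction targets I, then score the model's conditionals at those
# indices. `model_prob` is a placeholder, not a real model.

def corrupt(tokens, vocab, mask_rate=0.15, rng=random):
    """Apply the 80/10/10 corruption protocol; return (x_hat, targets I)."""
    corrupted, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            targets[i] = tok              # i joins the target set I
            r = rng.random()
            if r < 0.8:
                corrupted[i] = "[MASK]"   # 80%: replace with [MASK]
            elif r < 0.9:
                corrupted[i] = rng.choice(vocab)  # 10%: random token
            # else 10%: keep the original token, but still predict it
    return corrupted, targets

def masked_lm_loss(model_prob, corrupted, targets):
    """Negative log-likelihood summed over the masked indices I."""
    return -sum(math.log(model_prob(i, corrupted, gold))
                for i, gold in targets.items())

rng = random.Random(0)
sentence = "the dogs by the window chase the cat".split()
corrupted, targets = corrupt(sentence, ["cat", "dog"], rng=rng)
loss = masked_lm_loss(lambda i, ctx, gold: 0.5, corrupted, targets)
```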

3.2 Motivation

Figure 1: An example of the masked LM task, where [MASK] = chase and window is an attractor (red). We suppress phrase-structure annotations and corruptions on the context tokens for clarity.

Since the RNNG teacher is an expert on syntactic generalisations (Kuncoro et al., 2018; Futrell et al., 2019; Wilcox et al., 2019), we adopt a structure distillation procedure (Kuncoro et al., 2019) that enables the BERT student to learn from the RNNG’s syntactically informative predictions. Our setup nevertheless means that the two models here crucially differ in nature: the BERT student is not a left-to-right LM like the RNNG, but rather a denoising autoencoder that models a collection of conditionals for words in bidirectional context (Eq. 1).

We now present two strategies for dealing with this challenge. The first, naïve approach is to ignore this difference, and let the BERT student distill the RNNG’s marginal next-word distribution for each masked position based on the left context alone, i.e. q(x_i | x_{<i}). While this approach is surprisingly effective (§4.3), we illustrate an issue in Fig. 1 for “The dogs by the window [MASK=chase] the cat”.

The RNNG’s strong syntactic biases mean that we can expect q(x_i | x_{<i}) to assign high probabilities to plural verbs like bark, chase, fight, and run that are consistent with the agreement controller dogs—despite the presence of a singular attractor (Linzen et al., 2016), window, that can distract the model into predicting singular verbs like chases. Nevertheless, some plural verbs that are favoured based on the left context alone, such as bark and run, are in fact poor alternatives when considering the right context (e.g. “The dogs by the window bark/run the cat” are syntactically illicit). Distilling q(x_i | x_{<i}) thus fails to take into account the right context that is accessible to the BERT student, and runs the risk of encouraging the student to assign high probabilities for words that fit poorly with the bidirectional context.

Hence, our second approach is to learn from teacher distributions that not only: (i) reflect the strong syntactic biases of the RNNG teacher, but also (ii) consider both the left and right context when predicting x_i. Formally, we propose to distill the RNNG’s marginal distribution over words in bidirectional context, q(x_i | x_{<i}, x_{>i}), henceforth referred to as the posterior probability for generating x_i under all available information. We now demonstrate that this quantity can, in fact, be computed from left-to-right LMs like RNNGs.

3.3 Posterior Inference

Given a pretrained autoregressive, left-to-right LM that factorises q(x) = Π_t q(x_t | x_{<t}), we discuss how to infer an estimate of q(x_i | x_{<i}, x_{>i}). (In this setup, we assume that x is a fixed-length sequence, and we aim to infer the LM’s estimate for generating a single token x_i conditional on the full bidirectional context.) By definition of conditional probabilities:

q(x_i = w | x_{<i}, x_{>i}) = q(x_{<i}) q(w | x_{<i}) q(x_{>i} | x_{<i}, w) / Σ_{w′∈Σ} q(x_{<i}) q(w′ | x_{<i}) q(x_{>i} | x_{<i}, w′),   (2)

where [x_{<i}; w′] in the denominator is an alternate left context where x_i is replaced by w′.

After cancelling common factors q(x_{<i}), the posterior computation in Eq. 2 is decomposed into two terms: (i) q(w | x_{<i}), the likelihood of producing x_i = w given its prefix, and (ii) q(x_{>i} | x_{<i}, w), the likelihood of producing the observed continuations x_{>i}, conditional on the fact that we have generated w and its prefix x_{<i}. In our running example (Fig. 1), the posterior would assign low probabilities to plural verbs like bark that are nevertheless probable under the left context alone (i.e. q(bark | The dogs by the window) would be high), because they are unlikely to generate the continuations (i.e. we expect q(the cat | The dogs by the window bark) to be low since it is syntactically illicit). In contrast, the posterior would assign high probabilities to plural verbs like fight and chase that are consistent with the bidirectional context, since we expect both q(fight | The dogs by the window) and q(the cat | The dogs by the window fight) to be probable.
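This two-term decomposition can be made concrete with a toy bigram LM standing in for the RNNG; the vocabulary and the probability table below are illustrative assumptions, chosen so that the left context alone favours bark while the full posterior favours chase.

```python
# Toy sketch of exact posterior inference (Eq. 2): given any left-to-right
# LM exposed as next_prob(word, prefix) = q(word | prefix), score each
# candidate w by q(w | left) * q(right | left + w), then renormalise.

def seq_prob(next_prob, tokens, prefix):
    """Probability of generating `tokens` left-to-right after `prefix`."""
    p, ctx = 1.0, list(prefix)
    for tok in tokens:
        p *= next_prob(tok, tuple(ctx))
        ctx.append(tok)
    return p

def posterior(next_prob, vocab, left, right):
    """Posterior q(x_i = w | left, right) by enumerating the vocabulary."""
    scores = {w: next_prob(w, tuple(left)) *
                 seq_prob(next_prob, right, tuple(left) + (w,))
              for w in vocab}
    z = sum(scores.values())
    return {w: s / z for w, s in scores.items()}

BIGRAM = {  # toy conditional table: q(next | previous token)
    ("dogs", "bark"): 0.6, ("dogs", "chase"): 0.4,
    ("bark", "the"): 0.05, ("chase", "the"): 0.5,
    ("the", "cat"): 1.0,
}

def next_prob(word, prefix):
    return BIGRAM.get((prefix[-1], word), 1e-6)

# "bark" wins on the left context alone, but "chase" wins once the right
# context "the cat" is taken into account.
post = posterior(next_prob, ["bark", "chase"], ("dogs",), ["the", "cat"])
```

Note that the enumeration over the vocabulary inside `posterior` is exactly the cost that §3.4 sets out to remove.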

Computational cost.

Let T denote the maximum length of x. Our KD approach requires computing the posterior distribution (Eq. 2) for every masked token in the dataset D, which (excluding the marginalisation cost over trees y) necessitates O(|D| · T · |Σ|) operations, where each operation returns the RNNG’s estimate of q(x_t | x_{<t}). In the standard BERT setup, where |Σ| ≈ 29,000 (the vocabulary size of BERT-cased), this procedure leads to a prohibitive number of operations.

3.4 Posterior Approximation

Since exact inference of the posterior is computationally expensive, here we propose an efficient approximation procedure. Approximating q(x_{>i} | x_{<i}, x_i) ≈ q(x_{>i} | x_i) in Eq. 2 yields:

q(x_i = w | x_{<i}, x_{>i}) ≈ q(w | x_{<i}) q(x_{>i} | w) / Σ_{w′∈Σ} q(w′ | x_{<i}) q(x_{>i} | w′).   (3)

(This approximation preserves the intuition explained in §3.3. Concretely, verbs like bark would still be assigned low probabilities under this approximation, since q(the cat | bark) would be low: “bark the cat” is syntactically illicit, whereas the alternative “bark at the cat” would be licit.)

While Eq. 3 is still expensive to compute, it enables us to apply the Bayes rule to compute q(x_{>i} | w):

q(x_{>i} | w) = q(w | x_{>i}) q(x_{>i}) / u(w),   (4)

where u(w) denotes the unigram distribution. For efficiency, we replace q(w | x_{>i}) with a separately trained “reverse” RNNG that operates in a right-to-left fashion, denoted as q←(w | x_{>i}); a complete example of the right-to-left RNNG action sequences is provided in Appendix C. We now apply Eq. 4 and the right-to-left parameterisation into Eq. 3, and cancel common factors q(x_{>i}):

q(x_i = w | x_{<i}, x_{>i}) ≈ [q→(w | x_{<i}) q←(w | x_{>i}) / u(w)] / Σ_{w′∈Σ} [q→(w′ | x_{<i}) q←(w′ | x_{>i}) / u(w′)].   (5)

Our approximation in Eq. 5 crucially reduces the required number of LM evaluations per masked token from O(|Σ|) to just two, since a single forward pass of each directional RNNG yields a full next-word distribution over Σ; the actual speedup is much more substantial in practice, since Eq. 5 involves easily batched operations that considerably benefit from specialised hardware like GPUs.

Notably, our proposed approach here is a general one; it can approximate the posterior over from any left-to-right LM, which can be used as a learning signal for BERT through KD, irrespective of the LM’s parameterisation. It does, however, necessitate a separately trained right-to-left LM.

Connection to product of experts.

Eq. 5 has a similar form to a product of experts (PoE; Hinton, 2002) between the left-to-right and right-to-left RNNGs’ next-word distributions, albeit with extra unigram terms u(w). If we replace the unigram distribution with a uniform one, i.e. u(w) = 1/|Σ|, Eq. 5 reduces to a standard PoE.
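The renormalised product in Eq. 5 is a one-liner once the two directional distributions and the reference distribution are in hand; the sketch below uses toy forward, backward, and uniform distributions (all assumptions), with the uniform reference giving exactly the standard PoE.

```python
# Sketch of the Eq. 5 combination: q_fwd(w | left) * q_bwd(w | right) / u(w),
# renormalised over the vocabulary. With uniform u this is a standard
# product of experts; the three toy distributions are assumptions.

def approx_posterior(fwd, bwd, ref, vocab):
    scores = {w: fwd[w] * bwd[w] / ref[w] for w in vocab}
    z = sum(scores.values())
    return {w: s / z for w, s in scores.items()}

vocab = ["bark", "chase", "chases"]
fwd = {"bark": 0.5, "chase": 0.4, "chases": 0.1}    # left context favours plurals
bwd = {"bark": 0.05, "chase": 0.6, "chases": 0.35}  # right context rules out "bark"
uniform = {w: 1.0 / len(vocab) for w in vocab}

poe = approx_posterior(fwd, bwd, uniform, vocab)    # the "UF" variant
```

Swapping `uniform` for an empirical unigram distribution gives the "UG" variant compared in §4.2.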

Approximating the marginal.

The approximation in Eq. 5 requires estimates of q→(x_i | x_{<i}) and q←(x_i | x_{>i}) from the left-to-right and right-to-left RNNGs, respectively, which necessitate expensive marginalisations over all possible tree prefixes y_{<i} and y_{>i}. Following Kuncoro et al. (2019), we approximate this marginalisation using a one-best predicted tree ŷ = argmax_{y∈Y(x)} p(y | x), where p(y | x) is parameterised by the transition-based parser of Fried et al. (2019), and Y(x) denotes the set of all possible trees for x. Formally:

q→(x_i | x_{<i}) ≈ q→(x_i | x_{<i}, ŷ_{<i}),   (6)

where ŷ_{<i} denotes the non-terminal symbols in ŷ that occur before x_i. (Our approximation relies on a tree prefix from a separate discriminative parser, which has access to yet unseen words x_{≥i}. This non-incremental procedure is justified, however, since we aim to design the most informative teacher distributions for the non-incremental BERT student, which also has access to bidirectional context.) The marginal next-word distribution from the right-to-left RNNG is approximated similarly.

Preliminary Experiments.

Before proceeding with the KD experiments, we assess the quality and feasibility of our approximation through preliminary language modelling experiments on the Penn Treebank (Marcus et al., 1993, PTB); full details are provided in Appendix A. We find that our approximation is faster than exact inference by a factor of more than 50,000, at the expense of a slightly worse average posterior negative log-likelihood (2.68, rather than 2.5 for exact inference).

3.5 Objective Function

In our structure distillation pretraining, we aim to find BERT parameters θ that emulate our approximation of q(x_i | x_{<i}, x_{>i}) through a word-level cross-entropy loss (Hinton et al., 2015; Kim and Rush, 2016; Furlanello et al., 2018, inter alia):

L_KD(θ) = −Σ_{i∈I} Σ_{w∈Σ} q̂(w | x_{<i}, x_{>i}) log p_θ(w | x̂),

where q̂(w | x_{<i}, x_{>i}) is our approximation of the RNNG’s posterior, as defined in Eqs. 5 and 6.

The RNNG teacher is an expert on syntax, although in practice it is only feasible to train it on a much smaller dataset. Hence, we not only want the BERT student to learn from the RNNG’s syntactic expertise, but also from the rich common-sense and semantic knowledge contained in large text corpora, by virtue of predicting the true identity of the masked token x_i (note that the KD loss is defined independently of the true x_i), as done in the standard BERT setup. We thus interpolate the KD loss and the original BERT masked LM objective:

L(θ) = α L_KD(θ) + (1 − α) L_MLM(θ),   (7)

omitting the next-sentence prediction loss for brevity. We henceforth set α = 0.5 unless stated otherwise.
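The interpolated objective can be sketched at the level of a single masked position; the toy student and teacher distributions below are assumptions, and the real loss sums this quantity over all masked indices.

```python
import math

# Sketch of the interpolated objective (Eq. 7) at one masked position:
# alpha * H(q_hat, p_theta) + (1 - alpha) * (-log p_theta(x_i)),
# with alpha = 0.5. The toy distributions are illustrative assumptions.

def distill_loss(student, teacher):
    """Word-level cross-entropy H(q_hat, p_theta) over the vocabulary."""
    return -sum(q * math.log(student[w]) for w, q in teacher.items() if q > 0)

def interpolated_loss(student, teacher, gold, alpha=0.5):
    kd = distill_loss(student, teacher)   # teacher-matching term (L_KD)
    mlm = -math.log(student[gold])        # standard masked-LM term (L_MLM)
    return alpha * kd + (1 - alpha) * mlm

student = {"bark": 0.2, "chase": 0.7, "chases": 0.1}
teacher = {"bark": 0.1, "chase": 0.8, "chases": 0.1}
loss = interpolated_loss(student, teacher, gold="chase")
```

With alpha = 0 this recovers the standard masked-LM loss; with alpha = 1 it is pure distillation. The alpha = 0.5 setting is what guarantees that at least half the target probability mass sits on the true masked word, a point revisited in §4.3.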

4 Experiments

Here we outline the evaluation setup, present our results, and discuss the implications of our findings.

4.1 Evaluation Tasks and Setup

We conjecture that the improved syntactic competence from our approach would benefit a broad range of tasks that involve structured output spaces, including those that are not explicitly syntactic. We thus evaluate our structure-distilled BERTs on six diverse structured prediction tasks that encompass syntactic, semantic, and coreference resolution tasks, in addition to the GLUE benchmark that is largely comprised of classification tasks.

Phrase-structure parsing - PTB.

We first evaluate our model on phrase-structure parsing on the WSJ section of the PTB. Following prior work, we use sections 02-21 for training, section 22 for validation, and section 23 for testing. We apply our approach on top of the BERT-augmented in-order (Liu and Zhang, 2017) transition-based parser of Fried et al. (2019), which approaches the current state of the art. Since the RNNG teacher that we distill into BERT also employs phrase-structure trees, this setup is related to self-training (Yarowsky, 1995; Charniak, 1997; Zhou and Li, 2005; McClosky et al., 2006; Andor et al., 2016, inter alia).

Phrase-structure parsing - OOD.

Still in the context of phrase-structure parsing, we evaluate how well our approach generalises to three out-of-domain (OOD) treebanks: Brown (Francis and Kučera, 1979), Genia (Tateisi et al., 2005), and the English Web Treebank (Petrov and McDonald, 2012). Following Fried et al. (2019), we test the PTB-trained parser on the test splits of these OOD treebanks (the Brown test split of Gildea (2001), the Genia test split of McClosky et al. (2008), and the EWT test split from SANCL 2012 (Petrov and McDonald, 2012)) without any retraining, to simulate the case where no in-domain labelled data are available. We use the same codebase as above.

Dependency parsing - PTB.

Our third task is PTB dependency parsing with Stanford Dependencies (De Marneffe and Manning, 2008) v3.3.0. We use the BERT-augmented joint phrase-structure and dependency parser of Zhou and Zhao (2019), which is inspired by head-driven phrase-structure grammar (Pollard and Sag, 1994, HPSG).

Semantic role labelling.

Our fourth evaluation task is span-based semantic role labelling (SRL) on the CoNLL 2012 dataset (Pradhan et al., 2013). We apply our approach on top of the BERT-augmented model of Shi and Lin (2019), as implemented in AllenNLP (Gardner et al., 2017).

Coreference resolution.

Our fifth evaluation task is coreference resolution on the OntoNotes benchmark (Pradhan et al., 2012). For this task, we use the BERT-augmented model of Joshi et al. (2019), which extends the higher-order coarse-to-fine model of Lee et al. (2018).

CCG Supertagging Probe.

All proposed tasks thus far necessitate either fine-tuning the entire BERT model, or training a task-specific model on top of the BERT embeddings. Hence, it remains unclear how much of the gains are due to better structural representations from our new pretraining strategy, rather than the available supervision at the fine-tuning stage. To better understand the gains from our approach, we evaluate on combinatory categorial grammar (Steedman, 2000, CCG) supertagging (Bangalore and Joshi, 1999; Clark and Curran, 2007) through a classifier probe (Shi et al., 2016; Adi et al., 2017; Belinkov et al., 2017, inter alia), where no BERT fine-tuning takes place. (A similar CCG probe was explored by Liu et al. (2019a); we obtain comparable numbers for the no-distillation baseline.)

CCG supertagging is a compelling probing task since it necessitates an understanding of bidirectional context; the per-word classification setup also lends itself well to classifier probes. Nevertheless, it remains unclear how much of the accuracy can be attributed to the information encoded in the representation, as opposed to the classifier probe itself. We thus adopt the control task protocol of Hewitt and Liang (2019) that assigns each word type to a random control category, whose cardinality matches the number of supertags, which assesses the memorisation capacity of the classifier. In addition to the probing accuracy, we report the probe selectivity, defined as the difference between the probing task accuracy and the control task accuracy, where higher selectivity denotes probes that faithfully rely on the linguistic knowledge encoded in the representation. We use linear classifiers to maintain high selectivities.
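The control-task bookkeeping amounts to a per-type random relabelling plus a subtraction; a brief sketch follows, where the accuracy numbers are placeholders rather than results from this paper.

```python
import random

# Sketch of the control-task protocol of Hewitt and Liang (2019): each word
# *type* is mapped to a fixed random control category (same cardinality as
# the supertag set), and selectivity is probing accuracy minus control-task
# accuracy. The accuracies below are placeholders.

def control_labels(words, n_categories, seed=0):
    """Assign every word type a consistent random category."""
    rng = random.Random(seed)
    mapping = {}
    return [mapping.setdefault(w, rng.randrange(n_categories)) for w in words]

def selectivity(probe_acc, control_acc):
    return probe_acc - control_acc

labels = control_labels(["the", "dogs", "by", "the", "window"], n_categories=400)
gap = selectivity(0.95, 0.70)  # placeholder accuracies, not paper results
```

Because the control labels depend only on the word type, a probe can only score well on them by memorising the type-to-label mapping, which is exactly the capacity selectivity is designed to discount.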


All our structured prediction experiments are conducted on top of publicly available repositories of BERT-augmented models, with the exception of CCG supertagging that we evaluate as a probe. This setup means that obtaining our results is as simple as changing the pretrained BERT weights to our structure-distilled BERT, and applying the exact same steps as in the baseline.


Beyond the 6 structured prediction tasks above, we evaluate our approach on the classification tasks of the GLUE benchmark (this setup excludes the semantic textual similarity benchmark, STS-B, which is formulated as a regression task), except the Winograd NLI (Levesque et al., 2012), for consistency with the original BERT (Devlin et al., 2019). For each GLUE task fine-tuning, we run a grid search over five potential learning rates, two batch sizes, and five random seeds (Appendix D), leading to 50 fine-tuning configurations that we run and evaluate on the validation set of each GLUE task.

4.2 Experimental Setup and Baselines

Here we describe the key aspects of our empirical setup, and outline the baselines for assessing the efficacy of our approach.

RNNG Teacher.

We implement the subword-augmented RNNG teachers (§2) on DyNet (Neubig et al., 2017a), and obtain “silver-grade” phrase-structure annotations for the entire BERT training set using the transition-based parser of Fried et al. (2019). These trees are used to train the RNNG (§2), and to approximate its marginal next-word distribution at inference (Eq. 6). We use the same WordPiece tokenisation and vocabulary as BERT-Cased; Appendix B summarises the complete list of RNNG hyper-parameters. Since our approximation (Eq. 5) makes use of a right-to-left RNNG, we train this variant (Appendix C) with the same hyper-parameters and data as the left-to-right model. We train each directional RNNG teacher on a shared subset of 3.6M sentences (3%) from the BERT training set with automatic batching (Neubig et al., 2017b), which takes three weeks on a V100 GPU.

BERT Student.

We apply our structure distillation pretraining protocol to BERT-base-Cased (we use BERT-base rather than BERT-large to reduce the turnaround of our experiments, although our approach can easily be extended to BERT-large), using the exact same training dataset, model configuration, WordPiece tokenisation, vocabulary, and hyper-parameters (Appendix D) as in the standard pretrained BERT model. The sole exception is that we use a larger initial learning rate than the default, based on preliminary experiments in which it performed better on most of our evaluation tasks; Liu et al. (2019b) similarly found that tuning BERT’s initial learning rate leads to better results. We apply this learning rate to all models (including the no-distillation/standard BERT baseline) for fair comparison.

Baselines and comparisons.

We compare the following set of models in our experiments:

  • A standard BERT-Cased without any structure distillation loss, which benefits from scalability but lacks syntactic biases (“No-KD”);

  • Four variants of structure-distilled BERTs that: (i) only distill the left-to-right RNNG (“L2R-KD”), (ii) only distill the right-to-left RNNG (“R2L-KD”), (iii) distill the RNNG’s approximated marginal for generating x_i under the bidirectional context, where u(w) in Eq. 5 is the uniform distribution (“UF-KD”), and lastly (iv) a similar variant as (iii), but where u(w) is the unigram distribution (“UG-KD”). All these BERT models crucially benefit from the syntactic biases of RNNGs, although only variants (iii) and (iv) learn from teacher distributions that consider bidirectional context for predicting x_i; and

  • A BERT model that distills the approximated marginal for generating x_i under the bidirectional context, but from sequential LSTM teachers (“Seq-KD”) in place of RNNGs. (For fair comparison, we train the LSTMs on the exact same subset as the RNNGs, with a comparable number of model parameters. An alternative here is to use Transformers, although we elect to use LSTMs to facilitate fair comparison with RNNGs, which are also based on LSTM architectures.) This baseline crucially isolates the importance of learning from hierarchical teachers, since it employs the exact same approximation technique and KD loss as the structure-distilled BERTs.

Learning curves.

Given enough labelled data, BERT can acquire the relevant structural information from the fine-tuning (as opposed to pretraining) procedure, although better pretrained representations can nevertheless facilitate sample-efficient generalisations (Yogatama et al., 2019). We thus additionally examine the models’ fine-tuning learning curves, as a function of varying amounts of training data, on phrase-structure parsing and SRL.

Random seeds.

Since fine-tuning the same pretrained BERT with different random seeds can lead to varying results, we report the mean performance from three random seeds on the structured prediction tasks, and from five random seeds on GLUE.

Test results.

To preserve the integrity of the test sets, we first report all performance on the validation set, and only report test set results for: (i) the No-KD baseline, and (ii) the best structure-distilled model on the validation set (“Best-KD”).

4.3 Findings and Discussion

We report the validation and test results of the structured prediction tasks in Table 1. The validation set learning curves for phrase-structure parsing and SRL that compare the No-KD baseline and the UG-KD variant are provided in Fig. 2.

Task | No-KD | Seq-KD | L2R-KD | R2L-KD | UF-KD | UG-KD | No-KD (test) | Best-KD (test) | Err. Red.
Const. PTB - F1 | 95.38 | 95.33 | 95.55 | 95.55 | 95.58 | 95.59 | 95.35 | 95.70 | 7.6%
Const. PTB - EM | 55.33 | 55.41 | 55.92 | 56.18 | 56.39 | 56.59 | 55.25 | 57.77 | 5.63%
Const. OOD - F1 | 87.71 | 87.23 | 88.36 | 88.56 | 88.24 | 88.21 | 89.04 | 89.76 | 6.55%
Dep. PTB - UAS | 96.48 | 96.40 | 96.70 | 96.64 | 96.60 | 96.66 | 96.79 | 96.86 | 2.18%
Dep. PTB - LAS | 94.65 | 94.56 | 94.90 | 94.80 | 94.79 | 94.83 | 95.13 | 95.23 | 1.99%
SRL - CoNLL 2012 | 86.17 | 86.09 | 86.34 | 86.29 | 86.30 | 86.46 | 86.08 | 86.39 | 2.23%
Coref. | 72.53 | 69.27 | 73.74 | 73.49 | 73.79 | 73.33 | 72.71 | 73.69 | 3.58%
CCG supertag. probe | 93.69 | 91.59 | 93.97 | 95.21 | 95.13 | 95.21 | 93.88 | 95.2 | 21.57%
Probe selectivity | 24.79 | 23.77 | 23.3 | 23.57 | 27.28 | 28.3 | 23.15 | 26.07 | N/A
Table 1: Validation and test results for the structured prediction tasks; each entry reflects the mean of three random seeds. The first six columns report validation results for the baselines (No-KD, Seq-KD) and the four structure-distilled BERTs; the last three columns report test results. To preserve test set integrity, we only obtain test set results for the no-distillation baseline and the best structure-distilled BERT on the validation set; “Err. Red.” reports the test error reductions relative to the No-KD baseline. We report F1 and exact match (EM) for PTB phrase-structure parsing; for dependency parsing, we report unlabelled (UAS) and labelled (LAS) attachment scores. The “Const. OOD” row indicates the mean F1 from three out-of-domain corpora: Brown, Genia, and the English Web Treebank (EWT), although the validation results exclude the Brown Treebank, which has no validation set.
Figure 2: The fine-tuning learning curves that examine how the number of fine-tuning instances (from 5% to 100% of the full training sets) affect validation set F1 in the case of phrase-structure parsing and SRL. We compare the “No-KD”/standard BERT-Cased and the “UG-KD” structure-distilled BERT.

General discussion.

We summarise several key observations from Table 1 and Fig. 2.

  • All four structure-distilled BERT models consistently outperform the No-KD baseline, including the L2R-KD and R2L-KD variants that only distill the syntactic knowledge of unidirectional RNNGs. Remarkably, this pattern holds true for all six structured prediction tasks. In contrast, we observe no such gains for the Seq-KD baseline, which largely performs worse than the No-KD model. We conclude that the gains afforded by our structure-distilled BERTs can be attributed to the hierarchical bias of the RNNG teacher.

  • We conjecture that the surprisingly strong performance of the L2R-KD and R2L-KD models, which distill the knowledge of unidirectional RNNGs, can be attributed to the interpolated objective in Eq. 7 (α = 0.5). This interpolation means that the target distribution assigns a probability mass of at least 0.5 to the true masked word x_i, which is guaranteed to be consistent with the bidirectional context. However, the syntactic knowledge contained in the unidirectional RNNGs’ predictions can still provide a structurally informative learning signal, via the rest of the probability mass, for the BERT student.

  • While all structure-distilled variants outperform the baseline, models that distill our approximation of the RNNG’s distribution for words in bidirectional context (UF-KD and UG-KD) yield the best results on four out of six tasks (PTB phrase-structure parsing, SRL, coreference resolution, and the CCG supertagging probe). This finding confirms the efficacy of our approach.

  • We observe the largest gains for the syntactic tasks, particularly for phrase-structure parsing and CCG supertagging. However, the improvements are not at all confined to purely syntactic tasks: we reduce relative error from strong BERT baselines by 2.2% and 3.6% on SRL and coreference resolution, respectively. While the RNNG’s syntactic biases are derived from phrase-structure grammar, the strong improvement on CCG supertagging, in addition to the smaller improvement on dependency parsing, suggests that the RNNG’s syntactic biases generalise well across different syntactic formalisms.

  • We observe larger improvements in a low-resource scenario, where the model is exposed to fewer fine-tuning instances (Fig. 2), suggesting that syntactic biases are helpful for enabling more sample-efficient generalisations. This pattern holds for both tasks that we investigated: phrase-structure parsing (syntactic) and SRL (not explicitly syntactic). With only 5% of the fine-tuning data, the UG-KD model improves F1 from 79.9 to 80.6 for SRL (a 3.5% error reduction relative to the No-KD baseline, as opposed to 2.2% on the full data). For phrase-structure parsing, the UG-KD model achieves an F1 of 93.68 (a 16% relative error reduction, as opposed to 7.6% on the full data) with only 5% of the PTB; this performance is notably better than the state of the art c. 2017 for parsers trained on the full PTB (Kuncoro et al., 2017).
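The interpolated distillation objective discussed above (Eq. 7) can be sketched as follows. This is our own illustrative code, not the paper's implementation: the function names and the explicit `alpha = 0.5` interpolation weight are assumptions chosen to match the stated guarantee that the true masked word receives a mass of at least 0.5.

```python
import math

def interpolated_target(teacher_probs, gold_index, alpha=0.5):
    # Convex combination of the one-hot gold distribution and the
    # teacher RNNG's predicted distribution; with alpha = 0.5, the true
    # masked word is guaranteed a probability mass of at least 0.5.
    return [alpha * (1.0 if i == gold_index else 0.0) + (1.0 - alpha) * p
            for i, p in enumerate(teacher_probs)]

def distillation_loss(target, student_probs):
    # Cross-entropy of the student's prediction against the mixed target.
    return -sum(t * math.log(s)
                for t, s in zip(target, student_probs) if t > 0.0)
```

Even when the teacher is a unidirectional RNNG, the remaining (1 - alpha) share of the mass still carries its syntactically informed preferences over alternative words to the BERT student.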

GLUE results and discussion.

We report the GLUE validation and test results in Table 2. Since we observe a different pattern of results on the Corpus of Linguistic Acceptability (Warstadt et al., 2018, CoLA) than on the rest of GLUE, we henceforth report: (i) the CoLA results, (ii) the 7-task average that excludes CoLA, and (iii) the average across all 8 tasks. We select the UG-KD model since it achieved the best 8-task average on the GLUE validation sets; the full GLUE breakdown for these two models is provided in Appendix E.

                           No-KD         UG-KD
Validation set (per-task average / 1-best)
CoLA                       50.7 / 60.2   54.3 / 60.6
7-task avg. (excl. CoLA)   85.4 / 87.8   84.8 / 86.9
Overall 8-task avg.        81.1 / 84.4   81.0 / 83.6
Test set (per-task 1-best on validation set)
CoLA                       53.1          55.3
7-task avg. (excl. CoLA)   84.2          83.5
Overall 8-task avg.        80.3          80.0
Table 2:

Summary of the validation and test set results on GLUE. The validation results are derived from the average of five random seeds for each task, which accounts for variance, and the 1-best random seed, which does not. The test results are derived from the 1-best random seed on the validation set.

The results on GLUE provide an interesting contrast to the consistent improvement we observed on the structured prediction tasks. More concretely, our UG-KD model outperforms the baseline on CoLA, but performs slightly worse on the other GLUE tasks in aggregate, leading to a slightly lower overall test set accuracy (80.0 for the UG-KD as opposed to 80.3 for the No-KD baseline).

The improvement on the syntax-sensitive CoLA provides additional evidence—beyond the improvement on the syntactic tasks (Table 1)—that our approach indeed yields improved syntactic competence. We conjecture that these improvements do not transfer to the other GLUE tasks because they rely more on lexical and semantic properties, and less on syntactic competence (McCoy et al., 2019).

We defer a more thorough investigation of how much syntactic competence is necessary for solving most of the GLUE tasks to future work, but make two remarks. First, the findings on GLUE are consistent with the hypothesis that our approach yields improved structural competence, albeit at the expense of a slightly less rich meaning representation, which we attribute to the smaller dataset used to train the RNNG teacher. Second, human-level natural language understanding includes the ability to predict structured outputs, e.g. to decipher “who did what to whom” (SRL). Succeeding in these tasks necessitates inference about structured output spaces, which (unlike most of GLUE) cannot be reduced to a single classification decision. Our findings indicate a partial dissociation between model performance on these two types of tasks; hence, supplementing GLUE evaluation with some of these structured prediction tasks can offer a more holistic assessment of progress in NLU.

Sentence: “Apple II owners , for example , had to use their TV sets as screens and stored data on audiocassettes”
No-KD & L2R-KD prediction: (S[b]\NP)/NP
R2L-KD & UG-KD prediction: ((S[b]\NP)/PP)/NP
Table 3: An example of the CCG supertag predictions for the verb “use” from four different BERT variants. The correct answer is “((S[b]\NP)/PP)/NP”, which both the R2L-KD and UG-KD models predict correctly. However, the No-KD baseline and the L2R-KD model produce (the same) incorrect prediction; both models fail to subcategorise the prepositional phrase “as screens” as a dependent of the verb “use”. Beyond this, all four models predict the correct supertags for all other words (not shown).

CCG probe example.

The CCG supertagging probe is a particularly interesting test bed, since it clearly assesses the model’s ability to use contextual information in making its predictions, without introducing additional confounds from the BERT fine-tuning procedure. We thus provide a representative example of four different BERT variants’ predictions on the CCG supertagging probe in Table 3, based on which we discuss two observations. First, the different models make different predictions: the No-KD and L2R-KD models produce (coincidentally the same) incorrect predictions, while the R2L-KD and UG-KD models are able to predict the correct supertag. This finding suggests that different teacher distributions are able to impose different biases on the BERT students. (All four BERTs have access to the full bidirectional context at test time, although some are trained to mimic the predictions of unidirectional RNNGs, i.e. L2R-KD and R2L-KD.)

Second, the mistakes of the No-KD and L2R-KD BERTs belong to the broader category of challenging argument-adjunct distinctions (Palmer et al., 2005). Here both models fail to subcategorise for the prepositional phrase (PP) “as screens”, which serves as an argument of the verb “use”, as opposed to the noun phrase “TV sets”. Distinguishing between these two potential dependencies naturally requires syntactic information from the right context; hence the R2L-KD BERT, which is trained to emulate the predictions of an RNNG teacher that observes the right context, is able to make the correct prediction. This advantage is crucially retained by the UG-KD model that distills the RNNG’s approximate distribution over words in bidirectional context (Eq. 5), and further confirms the efficacy of our proposed approach.

4.4 Limitations

We outline two limitations of our approach. First, we assume the existence of decent-quality “silver-grade” phrase-structure trees to train the RNNG teacher. While this assumption holds true for English thanks to the existence of accurate phrase-structure parsers, this is not necessarily the case for other languages. Second, pretraining the BERT student in our naïve implementation is about half as fast on TPUs as the baseline, due to an I/O bottleneck. This overhead only applies at pretraining, and can be reduced through parallelisation.

5 Related Work

Earlier work has proposed several ways of introducing notions of hierarchical structure into BERT, for instance by designing structurally motivated auxiliary losses (Wang et al., 2020), or by including syntactic information in the embedding layers that serve as inputs to the Transformer (Sundararaman et al., 2019). In contrast, we employ a different technique for injecting syntactic biases, based on the structure distillation technique of Kuncoro et al. (2019), although our work features two key differences. First, Kuncoro et al. (2019) focused solely on cases where both the teacher and student models are autoregressive, left-to-right LMs; here we extend this objective to the case where the student model is a representation learner that has access to bidirectional context. Second, Kuncoro et al. (2019) only evaluated their structure-distilled LMs in terms of perplexity and grammatical judgment (Marvin and Linzen, 2018). In contrast, we evaluate our structure-distilled BERT models on six diverse structured prediction tasks and the GLUE benchmark. It remains an open question whether, and how much, syntactic biases are helpful for a broader range of NLU tasks beyond grammatical judgment; our work represents a step towards answering this question.

More recently, substantial progress has been made in improving the performance of BERT and the broader class of masked LMs (Lan et al., 2019; Liu et al., 2019b; Raffel et al., 2019; Sun et al., 2020, inter alia). Our structure distillation technique is orthogonal, and can be applied on top of these approaches. Lastly, our findings on the benefits of syntactic knowledge for structured prediction tasks that are not explicitly syntactic in nature, such as SRL and coreference resolution, are consistent with those of prior work (Swayamdipta et al., 2018; He et al., 2018; Strubell et al., 2018, inter alia).

6 Conclusion

Given the remarkable success of textual representation learners trained on large amounts of data, it remains an open question whether syntactic biases are still relevant for these models that work well at scale. Here we present evidence in the affirmative: our structure-distilled BERT models outperform the baseline on a diverse set of six structured prediction tasks. We achieve this through a new pretraining strategy that enables the BERT student to learn from the predictions of an explicitly hierarchical, but much less scalable, RNNG teacher model. Since the BERT student is a bidirectional model that estimates the conditional probabilities of masked words in context, we propose to distill an efficient yet surprisingly effective approximation of the RNNG’s estimate for generating each word conditioned on its bidirectional context.

Our findings suggest that syntactic inductive biases are beneficial for a diverse range of structured prediction tasks, including those that are not explicitly syntactic in nature. In addition, these biases are particularly helpful for improving fine-tuning sample efficiency on downstream tasks. Lastly, our findings motivate the broader question of how we can design models that integrate stronger notions of structural biases, and yet remain easily scalable at the same time, as a promising (if relatively underexplored) direction of future research.


Acknowledgements

We would like to thank Mandar Joshi, Zhaofeng Wu, and Rui Zhang for answering questions regarding the evaluation of the model. We also thank Sebastian Ruder, John Hale, Kris Cao, and Stephen Clark for their helpful suggestions.


  • Adi et al. (2017) Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. In Proc. of ICLR.
  • Andor et al. (2016) Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proc. of ACL.
  • Bangalore and Joshi (1999) Srinivas Bangalore and Aravind K. Joshi. 1999. Supertagging: An approach to almost parsing. Computational Linguistics, 25(2):237–265.
  • Belinkov et al. (2017) Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do neural machine translation models learn about morphology? In Proc. of ACL.
  • Bucilǎ et al. (2006) Cristian Bucilǎ, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In Proc. of KDD.
  • Charniak (1997) Eugene Charniak. 1997. Statistical parsing with a context-free grammar and word statistics. In Proc. of AAAI.
  • Choe and Charniak (2016) Do Kook Choe and Eugene Charniak. 2016. Parsing as language modeling. In Proc. of EMNLP.
  • Clark and Curran (2007) Stephen Clark and James R. Curran. 2007. Wide-coverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4):493–552.
  • De Marneffe and Manning (2008) Marie-Catherine De Marneffe and Christopher D. Manning. 2008. Stanford typed dependencies manual.
  • Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proc. of NAACL.
  • Dyer et al. (2015) Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition-based dependency parsing with stack long short-term memory. In Proc. of ACL.
  • Dyer et al. (2016) Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proc. of NAACL.
  • Francis and Kučera (1979) Winthrop Nelson Francis and Henry Kučera. 1979. Manual of information to accompany a standard corpus of present-day edited American English, for use with digital computers. Brown University, Department of Linguistics.
  • Fried et al. (2019) Daniel Fried, Nikita Kitaev, and Dan Klein. 2019. Cross-domain generalization of neural constituency parsers. In Proc. of ACL.
  • Furlanello et al. (2018) Tommaso Furlanello, Zachary Chase Lipton, Michael Tschannen, Laurent Itti, and Anima Anandkumar. 2018. Born-again neural networks. In Proc. of ICML.
  • Futrell et al. (2019) Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, and Roger Levy. 2019. Neural language models as psycholinguistic subjects: Representations of syntactic state. In Proc. of NAACL.
  • Gardner et al. (2017) Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. AllenNLP: A deep semantic natural language processing platform.
  • Gildea (2001) Daniel Gildea. 2001. Corpus variation and parser performance. In Proc. of EMNLP.
  • Goldberg (2019) Yoav Goldberg. 2019. Assessing BERT’s syntactic abilities. CoRR, abs/1901.05287.
  • He et al. (2018) Shexia He, Zuchao Li, Hai Zhao, and Hongxiao Bai. 2018. Syntax for semantic role labeling, to be, or not to be. In Proc. of ACL.
  • Hewitt and Liang (2019) John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proc. of EMNLP.
  • Hewitt and Manning (2019) John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proc. of NAACL.
  • Hinton (2002) Geoffrey E. Hinton. 2002. Training products of experts by minimizing contrastive divergence. Neural Computation.
  • Hinton et al. (2015) Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. CoRR, abs/1503.02531.
  • Hu et al. (2020) Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, and Roger P. Levy. 2020. A systematic assessment of syntactic generalization in neural language models. In Proc. of ACL.
  • Jawahar et al. (2019) Ganesh Jawahar, Benoît Sagot, and Djamé Seddah. 2019. What does BERT learn about the structure of language? In Proc. of ACL.
  • Joshi et al. (2019) Mandar Joshi, Omer Levy, Luke Zettlemoyer, and Daniel Weld. 2019. BERT for coreference resolution: Baselines and analysis. In Proc. of EMNLP.
  • Kim and Rush (2016) Yoon Kim and Alexander M. Rush. 2016. Sequence-level knowledge distillation. In Proc. of EMNLP.
  • Kim et al. (2019) Yoon Kim, Alexander M. Rush, Lei Yu, Adhiguna Kuncoro, Chris Dyer, and Gabor Melis. 2019. Unsupervised recurrent neural network grammars. In Proc. of NAACL.
  • Kudo and Richardson (2018) Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proc. of EMNLP System Demonstrations.
  • Kuncoro et al. (2017) Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, Graham Neubig, and Noah A. Smith. 2017. What do recurrent neural network grammars learn about syntax? In Proc. of EACL.
  • Kuncoro et al. (2018) Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom. 2018. LSTMs can learn syntax-sensitive dependencies well, but modeling structure makes them better. In Proc. of ACL.
  • Kuncoro et al. (2019) Adhiguna Kuncoro, Chris Dyer, Laura Rimell, Stephen Clark, and Phil Blunsom. 2019. Scalable syntax-aware language modelling with knowledge distillation. In Proc. of ACL.
  • Lan et al. (2019) Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A Lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942.
  • Lee et al. (2018) Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to-fine inference. In Proc. of NAACL.
  • Levesque et al. (2012) Hector J. Levesque, Ernest Davis, and Leora Morgenstern. 2012. The Winograd schema challenge. In Proc. of KR.
  • Linzen et al. (2016) Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics.
  • Liu and Zhang (2017) Jiangming Liu and Yue Zhang. 2017. In-order transition-based constituent parsing. Transactions of the Association for Computational Linguistics.
  • Liu et al. (2019a) Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019a. Linguistic knowledge and transferability of contextual representations. In Proc. of NAACL.
  • Liu et al. (2019b) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. RoBERTa: A robustly optimized BERT pretraining approach. CoRR.
  • Marcus et al. (1993) Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics.
  • Marvin and Linzen (2018) Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proc. of EMNLP.
  • McClosky et al. (2006) David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proc. of NAACL.
  • McClosky et al. (2008) David McClosky, Eugene Charniak, and Mark Johnson. 2008. When is self-training effective for parsing? In Proc. of COLING.
  • McCoy et al. (2019) Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proc. of ACL.
  • Mikolov et al. (2010) Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan Černocký, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proc. of Interspeech.
  • Neubig et al. (2017a) Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017a. DyNet: The Dynamic Neural Network Toolkit. arXiv preprint arXiv:1701.03980.
  • Neubig et al. (2017b) Graham Neubig, Yoav Goldberg, and Chris Dyer. 2017b. On-the-fly operation batching in dynamic computation graphs. In Proc. of NeurIPS.
  • Palmer et al. (2005) Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71–106.
  • Peters et al. (2018) Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL.
  • Petrov and McDonald (2012) Slav Petrov and Ryan McDonald. 2012. Overview of the 2012 shared task on parsing the web. In Notes of the First Workshop on Syntactic Analysis of Non-Canonical Language (SANCL).
  • Pollard and Sag (1994) Carl Pollard and Ivan A. Sag. 1994. Head-Driven Phrase Structure Grammar. University of Chicago Press.
  • Pradhan et al. (2013) Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Björkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using OntoNotes. In Proc. of CoNLL.
  • Pradhan et al. (2012) Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In Proc. of CoNLL.
  • Raffel et al. (2019) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv e-prints.
  • Sennrich et al. (2016) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proc. of ACL.
  • Shi and Lin (2019) Peng Shi and Jimmy Lin. 2019. Simple BERT models for relation extraction and semantic role labeling. CoRR, abs/1904.05255.
  • Shi et al. (2016) Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural MT learn source syntax? In Proc. of EMNLP.
  • Srivastava et al. (2014) Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research.
  • Steedman (2000) Mark Steedman. 2000. The Syntactic Process. MIT Press.
  • Strubell et al. (2018) Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-informed self-attention for semantic role labeling. In Proc. of EMNLP.
  • Sun et al. (2020) Yu Sun, Shuohuan Wang, Yu-Kun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. 2020. ERNIE 2.0: A continual pre-training framework for language understanding. In Proc. of AAAI.
  • Sundararaman et al. (2019) Dhanasekar Sundararaman, Vivek Subramanian, Guoyin Wang, Shijing Si, Dinghan Shen, Dong Wang, and Lawrence Carin. 2019. Syntax-infused transformer and bert models for machine translation and natural language understanding. arXiv preprint arXiv:1911.06156.
  • Swayamdipta et al. (2018) Swabha Swayamdipta, Sam Thomson, Kenton Lee, Luke Zettlemoyer, Chris Dyer, and Noah A. Smith. 2018. Syntactic scaffolds for semantic structures. In Proc. of EMNLP.
  • Tateisi et al. (2005) Yuka Tateisi, Akane Yakushiji, Tomoko Ohta, and Jun’ichi Tsujii. 2005. Syntax annotation for the GENIA corpus. In Proc. of IJCNLP.
  • Tenney et al. (2019a) Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. In Proc. of ACL.
  • Tenney et al. (2019b) Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? probing for sentence structure in contextualized word representations. In Proc. of ICLR.
  • Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. of NeurIPS.
  • Wang et al. (2019) Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proc. of ICLR.
  • Wang et al. (2020) Wei Wang, Bin Bi, Ming Yan, Chen Wu, Zuyi Bao, Liwei Peng, and Luo Si. 2020. StructBERT: Incorporating language structures into pre-training for deep language understanding. In Proc. of ICLR.
  • Warstadt et al. (2018) Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2018. Neural network acceptability judgments. arXiv preprint arXiv:1805.12471.
  • Wilcox et al. (2019) Ethan Wilcox, Peng Qian, Richard Futrell, Miguel Ballesteros, and Roger Levy. 2019. Structural supervision improves learning of non-local grammatical dependencies. In Proc. of NAACL.
  • Yang et al. (2019) Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Proc. of NeurIPS.
  • Yarowsky (1995) David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proc. of ACL.
  • Yogatama et al. (2019) Dani Yogatama, Cyprien de Masson d’Autume, Jerome Connor, Tomás Kociský, Mike Chrzanowski, Lingpeng Kong, Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, and Phil Blunsom. 2019. Learning and evaluating general linguistic intelligence. CoRR, abs/1901.11373.
  • Zhou and Zhao (2019) Junru Zhou and Hai Zhao. 2019. Head-driven phrase structure grammar parsing on Penn treebank. In Proc. of ACL.
  • Zhou and Li (2005) Zhi-Hua Zhou and Ming Li. 2005. Tri-training: exploiting unlabeled data using three classifiers. IEEE Transactions on Knowledge and Data Engineering.

Appendix A Preliminary Experiments

Here we discuss the preliminary experiments to assess the quality and computational efficiency of our posterior approximation procedure (§3.4). Recall that this approximation procedure only applies at inference; the LM is still trained in a typical autoregressive, left-to-right fashion.


Since exactly computing the RNNG’s next-word distributions involves an intractable marginalisation over all possible tree prefixes, we run our experiments in the context of sequential LSTM language models, where the next-word distribution can be computed exactly. This setup crucially enables us to isolate the impact of approximating the posterior distribution over words under the bidirectional context (Eq. 2) with our proposed approximation (Eq. 5), without introducing further confounds stemming from the RNNG’s marginal approximation procedure (Eq. 6).

Dataset and preprocessing.

We train the LSTM LM on an open-vocabulary version of the PTB, in order to simulate the main experimental setup, where both the RNNG teacher and the BERT student are also open-vocabulary by virtue of byte-pair encoding (BPE) preprocessing. (This open-vocabulary setup means that our results are not directly comparable to prior work on PTB language modelling (Mikolov et al., 2010, inter alia), which mostly employs a special “UNK” token for infrequent or unknown words.) To this end, we preprocess the dataset with SentencePiece (Kudo and Richardson, 2018) BPE tokenisation, and we preserve all case information. We follow the empirical setup of the parsing experiments (§4.1), with Sections 02-21 for training, Section 22 for validation, and Section 23 for testing.

Model hyper-parameters.

We train the LM with 2 LSTM layers, 250 hidden units per layer, and a dropout (Srivastava et al., 2014) rate of 0.2. Model parameters are optimised with stochastic gradient descent (SGD), with an initial learning rate of 0.25 that is decayed exponentially by a factor of 0.92 for every epoch after the tenth. Since our approximation relies on a separately trained right-to-left LM (Eq. 5), we train this variant with the exact same hyper-parameters and dataset split as the left-to-right model.

Evaluation and baselines.

We evaluate the models in terms of the average posterior negative log likelihood (NLL) and perplexity; in practice, this perplexity is derived from simply exponentiating the average posterior NLL. Since exact inference of the posterior is expensive, we evaluate the models only on the first 400 sentences of the test set. We compare the following variants:

  • a mixture of experts baseline that simply mixes the probabilities from the left-to-right and right-to-left LMs in an additive fashion, as opposed to multiplicatively as in the case of our PoE-like approximation in Eq. 5 (“MoE”);

  • our approximation of the posterior (Eq. 5), where the uniform distribution is used as the prior over words (“Uniform Approx.”);

  • our approximation of the posterior (Eq. 5), but where the unigram distribution is used as the prior instead (“Unigram Approx.”); and

  • exact inference of the posterior as computed from the left-to-right model, as defined in Eq. 2 (“Exact Inference”).
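To make the contrast between these variants concrete, here is a small sketch of the multiplicative, PoE-style approximation against the additive MoE baseline. This is our own illustrative code: the dictionary-based interface, function names, and the final renormalisation step are assumptions, not the paper's implementation.

```python
def approx_posterior(p_l2r, p_r2l, prior):
    # PoE-style approximation (cf. Eq. 5): multiply the left-to-right and
    # right-to-left next-word probabilities, divide out a prior over words
    # (uniform or unigram), then renormalise over the vocabulary.
    scores = {w: p_l2r[w] * p_r2l[w] / prior[w] for w in p_l2r}
    z = sum(scores.values())
    return {w: s / z for w, s in scores.items()}

def moe_baseline(p_l2r, p_r2l, lam=0.5):
    # Additive mixture-of-experts baseline.
    return {w: lam * p_l2r[w] + (1.0 - lam) * p_r2l[w] for w in p_l2r}
```

Because the PoE-style combination multiplies the two experts' probabilities, words favoured by both directions are sharpened relative to the additive mixture, which is one intuition for why the multiplicative variant approximates the true posterior more closely.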


We summarise the findings in Table 4, based on which we make two observations. First, the posterior NLL of our approximation procedure that makes use of the unigram distribution (Unigram Approx.; third row) is not much worse than that of exact inference, in exchange for a more than 50,000-fold speedup in computation time (all three approximations in Table 4 have similar runtimes). Nevertheless, using the uniform distribution (second row) in place of the unigram one (Eq. 5) results in a much worse posterior NLL. Second, combining the left-to-right and right-to-left LMs using a mixture of experts, a baseline that is not well motivated by our theoretical analysis, yields the worst result.

Model Posterior NLL Posterior Ppl.
MoE 3.28 26.58
Uniform Approx. 3.18 24.17
Unigram Approx. 2.68 14.68
Exact Inference 2.50 12.25
Table 4: The findings from the preliminary experiments that assess the quality of our posterior approximation procedure. We compare three variants against exact inference (bottom row; Eq. 2) from the left-to-right model.
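As noted above, the posterior perplexity in Table 4 is simply the exponentiated average posterior NLL; a quick sanity check against the (rounded) table values:

```python
import math

def posterior_ppl(avg_nll):
    # Perplexity as the exponentiated average negative log likelihood.
    return math.exp(avg_nll)

# e.g. the Exact Inference row: NLL 2.50 corresponds to a perplexity of
# roughly exp(2.50) ~ 12.2, consistent with the reported 12.25 up to
# rounding of the NLL.
```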

Appendix B RNNG Hyper-parameters

To train the subword-augmented RNNG teacher (§2), we use the following hyper-parameters that achieve the best validation perplexity from a grid search: 2-layer stack LSTMs (Dyer et al., 2015) with 512 hidden units per layer, optimised by standard SGD with an initial learning rate of 0.5 that is decayed exponentially by a factor of 0.9 after the tenth epoch. We apply a dropout rate of 0.3.
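Under our reading of this schedule (the exact epoch indexing is an assumption; the function name is ours), the teacher's learning rate evolves as:

```python
def rnng_learning_rate(epoch, base_lr=0.5, decay=0.9, decay_start=10):
    # Constant at base_lr through the tenth epoch, then multiplied by
    # `decay` for every subsequent epoch.
    return base_lr * decay ** max(0, epoch - decay_start)
```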

Per-task GLUE scores (first column: CoLA; m/mm: matched/mismatched; final column: the overall 8-task average)

Validation set
No-KD 60.2 92.2 90.0 89.4 90.3/90.9 90.7 71.1 84.4
UG-KD 60.6 92.0 88.9 89.3 89.6/90.0 89.9 68.6 83.6

Test set
No-KD 53.1 92.5 88.0 88.8 82.8/81.8 89.9 65.4 80.3
UG-KD 55.3 91.2 87.6 88.7 81.9/80.8 89.5 65.0 80.0
Table 5: Summary of the full results on GLUE, comparing the No-KD baseline with the UG-KD structure-distilled BERT (§4.2). We select the 1-best fine-tuning hyper-parameter (including random seed) on the validation set, which we then evaluate on the test set.

Appendix C Right-to-left RNNG

Here we illustrate the oracle action sequences that we use to train the right-to-left RNNG teacher, as part of our approximation of the posterior distribution over words in bidirectional context (Eq. 5). Recall that the standard RNNG incrementally builds the phrase-structure tree through a top-down, left-to-right traversal in a depth-first fashion. Hence, the right-to-left RNNG employs a similar top-down, depth-first traversal strategy, although the children of each node are recursively expanded in a right-to-left fashion.

We provide example action sequences (Table 6) for both the subword-augmented left-to-right and right-to-left RNNGs, for the example tree “(S (NP (WORD The) (WORD d ##og)) (VP (WORD ba ##rk ##s)))”, where tokens prefixed by “##” are subword units.

Step | Stack Content | Action

Left-to-right RNNG
0 | (empty) | NT(S)
1 | (S | NT(NP)
2 | (S (NP | NT(WORD)
3 | (S (NP (WORD | GEN(The)
4 | (S (NP (WORD The | REDUCE
5 | (S (NP (WORD The) | NT(WORD)
6 | (S (NP (WORD The) (WORD | GEN(d)
7 | (S (NP (WORD The) (WORD d | GEN(##og)
8 | (S (NP (WORD The) (WORD d ##og | REDUCE
9 | (S (NP (WORD The) (WORD d ##og) | REDUCE
10 | (S (NP (WORD The) (WORD d ##og)) | NT(VP)
11 | (S (NP (WORD The) (WORD d ##og)) (VP | NT(WORD)
12 | (S (NP (WORD The) (WORD d ##og)) (VP (WORD | GEN(ba)
13 | (S (NP (WORD The) (WORD d ##og)) (VP (WORD ba | GEN(##rk)
14 | (S (NP (WORD The) (WORD d ##og)) (VP (WORD ba ##rk | GEN(##s)
15 | (S (NP (WORD The) (WORD d ##og)) (VP (WORD ba ##rk ##s | REDUCE
16 | (S (NP (WORD The) (WORD d ##og)) (VP (WORD ba ##rk ##s) | REDUCE
17 | (S (NP (WORD The) (WORD d ##og)) (VP (WORD ba ##rk ##s)) | REDUCE
18 | (S (NP (WORD The) (WORD d ##og)) (VP (WORD ba ##rk ##s)))

Right-to-left RNNG
0 | (empty) | NT(S)
1 | (S | NT(VP)
2 | (S (VP | NT(WORD)
3 | (S (VP (WORD | GEN(##s)
4 | (S (VP (WORD ##s | GEN(##rk)
5 | (S (VP (WORD ##s ##rk | GEN(ba)
6 | (S (VP (WORD ##s ##rk ba | REDUCE
7 | (S (VP (WORD ##s ##rk ba) | REDUCE
8 | (S (VP (WORD ##s ##rk ba)) | NT(NP)
9 | (S (VP (WORD ##s ##rk ba)) (NP | NT(WORD)
10 | (S (VP (WORD ##s ##rk ba)) (NP (WORD | GEN(##og)
11 | (S (VP (WORD ##s ##rk ba)) (NP (WORD ##og | GEN(d)
12 | (S (VP (WORD ##s ##rk ba)) (NP (WORD ##og d | REDUCE
13 | (S (VP (WORD ##s ##rk ba)) (NP (WORD ##og d) | NT(WORD)
14 | (S (VP (WORD ##s ##rk ba)) (NP (WORD ##og d) (WORD | GEN(The)
15 | (S (VP (WORD ##s ##rk ba)) (NP (WORD ##og d) (WORD The | REDUCE
16 | (S (VP (WORD ##s ##rk ba)) (NP (WORD ##og d) (WORD The) | REDUCE
17 | (S (VP (WORD ##s ##rk ba)) (NP (WORD ##og d) (WORD The)) | REDUCE
18 | (S (VP (WORD ##s ##rk ba)) (NP (WORD ##og d) (WORD The)))
Table 6: Sample stack contents and the corresponding gold action sequences for the example tree “(S (NP (WORD The) (WORD d ##og)) (VP (WORD ba ##rk ##s)))”, under both the left-to-right and right-to-left subword-augmented RNNGs (§2). Each space-separated symbol in the stack column denotes a separate entry on the stack. At the end of the generation process, the stack contains a single composite embedding that represents the entire tree.
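The two traversals above can be sketched as a single recursive oracle: emitting NT/GEN/REDUCE actions from a tree, and reversing the children at every level (including subwords inside a WORD node) for the right-to-left variant. The nested-tuple tree encoding and function name below are our own illustration, not the paper's codebase.

```python
# Minimal oracle sketch: a tree node is (label, children), where each child
# is either another (label, children) node or a terminal (subword) string.
def oracle_actions(node, right_to_left=False):
    label, children = node
    actions = [f"NT({label})"]
    # The right-to-left RNNG expands children (and subwords) in reverse order.
    ordered = reversed(children) if right_to_left else children
    for child in ordered:
        if isinstance(child, str):
            actions.append(f"GEN({child})")
        else:
            actions.extend(oracle_actions(child, right_to_left))
    actions.append("REDUCE")
    return actions

# The example tree from Table 6.
tree = ("S", [("NP", [("WORD", ["The"]), ("WORD", ["d", "##og"])]),
              ("VP", [("WORD", ["ba", "##rk", "##s"])])])
ltr = oracle_actions(tree)                       # left-to-right sequence
rtl = oracle_actions(tree, right_to_left=True)   # right-to-left sequence
```

Both calls produce 18 actions, matching steps 0-17 of Table 6.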

Appendix D BERT Hyper-parameters

Here we outline the hyper-parameters of the BERT student in terms of pretraining data creation, masked LM pretraining, and GLUE fine-tuning.

Pretraining data creation.

We use the same codebase and pretraining data as Devlin et al. (2019), which are derived from a mixture of Wikipedia and Books text corpora. To train our structure-distilled BERTs, we create masked pretraining instances from these corpora following the same hyper-parameters used to train the original BERT-Cased model: a maximum sequence length of 512, a per-word masking probability of 0.15 (up to a maximum of 76 masked tokens in a 512-length sequence), and a dupe factor of 10, with a random seed of 12345. We preprocess the raw dataset using NLTK tokenisers, and then apply the same (BPE-augmented) vocabulary and WordPiece tokenisation as in the original BERT model. All other hyper-parameters are set to the same values as in the publicly released original BERT model.
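The masking hyper-parameters above can be illustrated with a short sketch that mirrors, in simplified form, how BERT's data-creation step chooses positions to mask: roughly 15% of the non-special tokens per sequence, capped at 76 predictions. The function below is our own simplification, not the actual pretraining code.

```python
import random

def sample_masked_positions(tokens, mask_prob=0.15, max_predictions=76, seed=12345):
    """Simplified sketch: choose which positions to mask in one sequence."""
    rng = random.Random(seed)
    # Special tokens are never masked.
    candidates = [i for i, t in enumerate(tokens) if t not in ("[CLS]", "[SEP]")]
    num_to_mask = min(max_predictions, max(1, round(len(candidates) * mask_prob)))
    return sorted(rng.sample(candidates, num_to_mask))

# A full 512-token sequence: [CLS], 510 word pieces, [SEP].
seq = ["[CLS]"] + [f"tok{i}" for i in range(510)] + ["[SEP]"]
positions = sample_masked_positions(seq)  # 76 positions, the stated maximum
```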

Masked LM pretraining.

We train all model variants (including the no distillation/standard BERT baseline for fair comparison) with the following hyper-parameters: a batch size of 256 sequences and an initial Adam learning rate of (as opposed to in the original BERT model). Following Devlin et al. (2019), we pretrain our models for 1M steps. All other hyper-parameters are similarly set to their default values.

GLUE fine-tuning.

For each GLUE task, we fine-tune the BERT model by running a grid search over five potential learning rates , two potential batch sizes , and five random seeds, in order to better account for fine-tuning variance. This grid yields 5 × 2 × 5 = 50 fine-tuning configurations per GLUE task. Following Devlin et al. (2019), we train each fine-tuning configuration for 4 epochs.
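The grid can be enumerated explicitly. The learning-rate and batch-size values below are hypothetical placeholders (the exact candidate sets are elided above), but the combinatorics match the text:

```python
import itertools

# Hypothetical placeholder grids; the exact values are not reproduced here.
learning_rates = [1, 2, 3, 4, 5]  # five candidate learning rates
batch_sizes = [16, 32]            # two candidate batch sizes
seeds = [0, 1, 2, 3, 4]           # five random seeds

# Every (learning rate, batch size, seed) triple is one fine-tuning run.
configs = list(itertools.product(learning_rates, batch_sizes, seeds))
```

Each GLUE task is then fine-tuned once, for 4 epochs, per configuration.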

Structured prediction fine-tuning.

For each structured prediction model, we use that model’s default BERT fine-tuning settings for the learning rate, batch size, and learning rate warmup schedule. These settings are:

  • In-order phrase-structure parser: a BERT learning rate of , a batch size of 32, and a warmup period of 160 updates.

  • HPSG dependency parser: a BERT learning rate of , a batch size of 150, and a warmup period of 160 updates.

  • Coreference resolution model: a BERT learning rate of , a batch size of 1 document, and a warmup period of 2 epochs.

  • Semantic role labelling model: a BERT learning rate of and a batch size of 32.
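The warmup periods listed above can be read as a standard linear warmup, in which the learning rate ramps from zero to its base value over the warmup updates before the model's usual schedule takes over. A minimal sketch, assuming linear warmup (a common choice in these models, though the exact schedules are each model's own):

```python
def warmup_lr(step, base_lr, warmup_steps):
    """Linearly ramp the learning rate over the first `warmup_steps` updates."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr  # afterwards, the model's usual decay schedule applies

# E.g. with a warmup period of 160 updates (as for the parsers above),
# the rate reaches base_lr at update index 159.
```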


Our no-distillation baseline differs from the publicly released BERT-Cased model in its larger pretraining learning rate ( as opposed to ), which we empirically found to work better on most of the tasks. Overall, our no-distillation baseline slightly outperforms the publicly released model on all the structured prediction tasks except coreference resolution, where it performs slightly worse. Furthermore, our no-distillation baseline also performs slightly better than the official pretrained BERT on most of the GLUE tasks, although the difference in aggregate GLUE performance is fairly minimal ().

Appendix E Full GLUE Results

We summarise the full GLUE results for the No-KD baseline and the UG-KD structure-distilled BERT in Table 5.