Establishing Strong Baselines for the New Decade: Sequence Tagging, Syntactic and Semantic Parsing with BERT

by   Han He, et al.
Emory University

This paper presents new state-of-the-art models for three tasks, part-of-speech tagging, syntactic parsing, and semantic parsing, using the cutting-edge contextualized embedding framework known as BERT. For each task, we first replicate and simplify the current state-of-the-art approach to enhance its model efficiency. We then evaluate our simplified approaches on those three tasks using token embeddings generated by BERT. 12 datasets in both English and Chinese are used for our experiments. The BERT models outperform the previously best-performing models by 2.5% on average (7.5% for the most significant case). Moreover, an in-depth analysis of the impact of BERT embeddings is provided using self-attention, which helps understand this rich yet opaque representation. All models and source codes are publicly available so that researchers can improve upon and utilize them to establish strong baselines for the next decade.


1 Introduction

It is no exaggeration to say that word embeddings trained by vector-based language models (Mikolov et al., 2013; Pennington et al., 2014; Bojanowski et al., 2017) have changed the game of NLP once and for all. These word embeddings, pre-trained on large corpora, improve downstream tasks by encoding rich word semantics into vector space. However, these earlier approaches ignore word senses: a unique vector is assigned to each word, neglecting polysemy arising from context.

Recently, contextualized embedding approaches have emerged with advanced techniques to dynamically generate word embeddings from different contexts. To address polysemous words, Peters et al. (2018) introduce ELMo, a word-level Bi-LSTM language model. Akbik et al. (2018) apply a similar approach at the character level, called Flair, concatenating the hidden states corresponding to the first and the last characters of each word to build that word's embedding. Apart from these unidirectional recurrent language models, Devlin et al. (2018) replace the transformer decoder from Radford et al. (2018) with a bidirectional transformer encoder, then train the BERT system on a 3.3B-word corpus. After scaling the model size to hundreds of millions of parameters, BERT brings marked improvements to a wide range of tasks without substantial task-specific modifications.

In this paper, we verify the effectiveness and conciseness of BERT by first generating token-level embeddings from it, then integrating them into task-oriented yet efficient model structures (Section 3). With careful investigation and engineering, our simplified models significantly outperform many of the previous state-of-the-art models, achieving the highest scores on 11 out of 12 datasets (Section 4).

To reveal the essence of BERT in these tasks, we analyze our tagging models with self-attention and find that BERT embeddings capture contextual information better than pre-trained embeddings, but not necessarily better than embeddings generated by a character-level language model (Section 5.1). Furthermore, an extensive comparison between our baseline and BERT models shows that BERT models handle long sentences robustly (Section 5.2). One key finding is that BERT embeddings relate much more to semantics than to syntax (Section 5.3). Our findings are consistent with the training procedure of BERT, which provides guiding references for future research.

To the best of our knowledge, this is the first work that tightly integrates BERT embeddings into these three downstream tasks and presents such high performance. All our resources, including the models and source codes, are publicly available.

2 Related Work

2.1 Representation Learning

Rich initial word encodings substantially improve the performance of downstream NLP tasks and have been studied for decades. Apart from matrix factorization methods (Pennington et al., 2014), most works train language models to predict words given their contexts. Among them, CBOW and Skip-Gram (Mikolov et al., 2013) are pioneers of neural language models extracting features within a fixed-length window. Joulin et al. (2017) later augment these models with subword information to handle out-of-vocabulary words.

To learn contextualized representations, Peters et al. (2018) apply a bidirectional language model (bi-LM) to a tokenized unlabeled corpus. Similarly, the contextual string embeddings of Akbik et al. (2018) model language at the character level, which efficiently extracts morphological features. However, a bi-LM consists of two unidirectional LMs, each lacking either left or right context, leading to potential bias toward one side. To address this limitation, BERT (Devlin et al., 2018) employs a masked LM to jointly condition on both left and right contexts, showing impressive improvements on various tasks.

2.2 Sequence Tagging

Sequence tagging is one of the most well-studied NLP tasks and can be directly applied to part-of-speech (POS) tagging and named entity recognition (NER). As a general trend, fine-grained features often result in better performance.

Akbik et al. (2018) feed the contextual string embeddings into a Bi-LSTM-CRF tagger (Huang et al., 2015), improving tagging accuracy with rich morphological and contextual information. In a more meticulously designed system, Bohnet et al. (2018) generate representations from both string-based and token-based character Bi-LSTM language models, then employ a meta-BiLSTM to integrate them.

Besides, joint learning and semi-supervised learning can lead to better generalization. As a highly end-to-end approach, the character-level transition system proposed by Kurita et al. (2017) benefits from joint learning on Chinese word segmentation, POS tagging, and dependency parsing. Recently, Clark et al. (2018) exploit large-scale unlabeled data with Cross-View Training (CVT), which improves the RNN feature detector shared between the full model and auxiliary modules.

2.3 Syntactic Parsing

Dependency trees and constituency structures are two closely related syntactic forms. Choe and Charniak (2016) cast constituency parsing as language modeling, achieving high UAS after conversion to dependency trees. Kuncoro et al. (2017) investigate recurrent neural network grammars through ablations and a gated attention mechanism, finding that lexical heads are crucial in phrasal representation.

Recently, graph-based parsers have resurged due to their ability to exploit modern GPU parallelization. Dozat and Manning (2017) successfully implement a graph-based dependency parser with a biaffine attention mechanism, showing impressive performance and decent simplicity. Clark et al. (2018) improve the feature detector of the biaffine parser through CVT and joint learning. Ma et al. (2018) introduce stack-pointer networks to model the parsing history of a transition-based parser, with the biaffine attention mechanism built in.

2.4 Semantic Parsing

Currently, the parsing community is shifting from syntactic dependency tree parsing to semantic dependency graph parsing (SDP). Since graph nodes can have multiple heads or zero heads, SDP allows for more flexible representations of sentence meaning. Wang et al. (2018) modify the preconditions of the List-Based Arc-Eager transition system of Choi and McCallum (2013), implementing it with Bi-LSTM Subtraction and Tree-LSTM for feature extraction.

Among graph-based approaches, Peng et al. (2017) investigate higher-order structures across different graph formalisms with a tensor scoring strategy, benefiting from multitask learning.

Dozat and Manning (2018) replace the softmax cross-entropy in the biaffine parser with sigmoid cross-entropy, successfully turning the syntactic tree parser into a simple yet accurate semantic graph parser.

3 Approach

3.1 Token-level Embeddings with BERT

BERT splits each token into subwords using WordPiece (Wu et al., 2016), which do not necessarily reflect linguistic morphology. For example, ‘Rainwater’ is split into ‘Rain’ and ‘##water’, while words such as ‘running’ or ‘rapidly’ remain unchanged, although typical morphology would split them into run+ing and rapid+ly. To obtain token-level embeddings for tagging and parsing tasks, we experiment with the following two methods:

Last Embedding

Since the subwords from each token are trained to predict one another during language modeling, their embeddings must be correlated. Thus, one way is to pick the embedding of the last subword as a representation of the token.

Average Embedding

For a compound word like ‘doghouse’ that gets split into ‘dog’ and ‘##house’, the last subword does not necessarily convey the key meaning of the token. Hence, another way is to take the average embedding of the subwords.
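As a sketch (not the authors' released code), the two pooling methods can be written as follows, assuming a precomputed mapping from each token to the indices of its WordPiece subwords:

```python
import numpy as np

def token_embeddings(subword_embs, token_to_subwords, method="average"):
    """Pool BERT subword embeddings into token-level embeddings.

    subword_embs: (num_subwords, dim) array of subword vectors.
    token_to_subwords: list of index lists, one per token,
        e.g. [[0], [1, 2]] for ['the', 'dog', '##house'].
    method: 'last' picks the final subword; 'average' takes the mean.
    """
    pooled = []
    for idxs in token_to_subwords:
        vecs = subword_embs[idxs]        # (num_subwords_of_token, dim)
        if method == "last":
            pooled.append(vecs[-1])      # embedding of the last subword
        else:
            pooled.append(vecs.mean(axis=0))  # average over subwords
    return np.stack(pooled)              # (num_tokens, dim)
```

For ‘doghouse’ split into ‘dog’ and ‘##house’, the ‘average’ branch blends both subword vectors instead of relying on ‘##house’ alone.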

Model In-domain Out-of-domain
BERT-Base: Last 86.7 79.5
BERT-Base: Average 86.7 79.8
BERT-Large: Last 86.8 79.4
BERT-Large: Average 86.4 79.5
Table 1: Results from the PSD semantic parsing task (Section 4.3) using the last and average embedding methods.

Table 1 shows results from a semantic parsing task, PSD (Section 4.3), using the last and average embedding methods with the BERT-Base and BERT-Large models. BERT-Base uses 12 layers, 768 hidden cells, 12 attention heads, and 110M parameters, while BERT-Large uses 24 layers, 1024 hidden cells, 16 attention heads, and 340M parameters. Both models are uncased, since uncased models are reported to achieve higher scores on all tasks except NER (Devlin et al., 2018). The average method is chosen for all our experiments since it gives a marginal advantage on the out-of-domain dataset.

3.2 Input Embeddings with BERT

While Devlin et al. (2018) report that adding just an additional output layer to the BERT encoder can build powerful models for a wide range of tasks, its computational cost is too high. Thus, we separate the BERT architecture from the downstream models and feed the pre-generated BERT embedding e_i of each token as input to task-specific encoders:

x_i = e_i

Alternatively, the BERT embedding e_i can be concatenated with the output h_i of a certain hidden layer of the task-specific encoder:

x_i' = h_i ⊕ e_i

Table 2 shows results from the PSD semantic parsing task (Section 4.3) using the average method from Section 3.1. Feeding the BERT embeddings directly as input shows a slight advantage for both BERT-Base and BERT-Large over concatenating them with a hidden layer; thus, the input method is chosen for all our experiments.

Model In-domain Out-of-domain
BERT-Base: Input 86.7 79.8
BERT-Base: Hidden 86.5 79.5
BERT-Large: Input 86.4 79.5
BERT-Large: Hidden 85.9 79.1
Table 2: Results from the PSD semantic parsing task (Section 4.3) using the input and hidden-layer concatenation methods.
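The two composition strategies can be sketched with illustrative shapes (all dimension sizes here are arbitrary, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_bert, d_hidden = 5, 8, 4            # arbitrary illustrative sizes

bert = rng.normal(size=(n, d_bert))      # pre-generated BERT token embeddings
hidden = rng.normal(size=(n, d_hidden))  # output of some encoder hidden layer

# Option 1: feed the BERT embeddings directly as the encoder's input.
x_input = bert

# Option 2: concatenate the BERT embeddings with a hidden layer's output.
x_concat = np.concatenate([hidden, bert], axis=-1)

assert x_input.shape == (n, d_bert)
assert x_concat.shape == (n, d_hidden + d_bert)
```

Because the BERT embeddings are generated once and cached, either option avoids fine-tuning the full BERT encoder.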

3.3 Bi-LSTM-CRF for Tagging

For sequence tagging, the Bi-LSTM-CRF (Huang et al., 2015) with the Flair contextual embeddings (Akbik et al., 2018) is used to establish a baseline for English. Given a token w_i in a sequence, where c_b and c_e are the starting and ending characters of w_i (b and e are the character offsets; b <= e), the Flair embedding of w_i is generated by concatenating two hidden states: the state after c_e from the forward LSTM and the state after c_b from the backward LSTM (Figure 1).

The Flair embedding of each token is then concatenated with a pre-trained token embedding and fed into the Bi-LSTM-CRF. In our approach, we present two models: one substituting the Flair and pre-trained embeddings with BERT, and the other concatenating BERT embeddings with the other embeddings. Note that variational dropout is not used in our approach, to reduce complexity.

Figure 1: Generating the Flair embedding for ‘apple’.

As Chinese is characterized as a morphologically poor language, the Flair embeddings are not used for its tagging tasks; only pre-trained and BERT embeddings are used in our Chinese experiments.
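The Flair-style word embedding construction can be sketched as below; the indexing convention is an assumption of ours (the released Flair code may index its states differently):

```python
import numpy as np

def flair_word_embedding(fwd_states, bwd_states, start, end):
    """Flair-style word embedding from character-LM hidden states.

    Assumed indexing (one possible convention):
      fwd_states[i]: forward LM state after consuming characters 0..i-1,
      bwd_states[i]: backward LM state after consuming characters n-1..i,
    each of shape (num_chars + 1, dim). `start`/`end` are the inclusive
    character offsets of the word. The forward state just past the last
    character is concatenated with the backward state at the first one.
    """
    h_fwd = fwd_states[end + 1]   # forward context covering the whole word
    h_bwd = bwd_states[start]     # backward context covering the whole word
    return np.concatenate([h_fwd, h_bwd])
```

Both states summarize the entire sentence up to the word's boundary, which is what gives Flair its contextual and morphological sensitivity.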

Figure 2: Biaffine attention parser

3.4 Biaffine Attention for Syntactic Parsing

A simplified variant of the biaffine parser (Dozat and Manning, 2017) is used for syntactic parsing (Figure 2). Compared to the original version, the trainable word embeddings are removed and lemmas are used instead of forms to retrieve pre-trained embeddings, leading to less complexity yet better generalization. Given the i'th token w_i, the feature vector x_i is created by concatenating its pre-trained lemma embedding e_i^(lemma), its POS embedding e_i^(pos) learned during training, and its representation e_i^(bert) from the last layer of BERT. This feature vector is fed into a Bi-LSTM, whose forward and backward recurrent states are concatenated into r_i:

x_i = e_i^(lemma) ⊕ e_i^(pos) ⊕ e_i^(bert)
r_i = BiLSTM(x_i)

Two multi-layer perceptrons (MLP) are then used to extract features for w_i being a head, h_i^(arc-head), or a dependent, h_i^(arc-dep), and two additional MLPs are used to extract h_i^(rel-head) and h_i^(rel-dep) for labeled parsing:

h_i^(arc-head) = MLP^(arc-head)(r_i)    h_i^(arc-dep) = MLP^(arc-dep)(r_i)
h_i^(rel-head) = MLP^(rel-head)(r_i)    h_i^(rel-dep) = MLP^(rel-dep)(r_i)

The vectors h_i^(arc-head) are stacked into a matrix H^(arc-head), augmented with a bias column for the prior probability of each token being a head, and the vectors h_i^(arc-dep) are stacked into another matrix H^(arc-dep) as follows (n: # of tokens, d: MLP output dimension):

S^(arc) = H^(arc-dep) W^(arc) (H^(arc-head))^T,   W^(arc) ∈ R^(d×(d+1))

S^(arc) is called a bilinear classifier that predicts head words. Additionally, arc labels are predicted by another biaffine classifier S^(rel), which combines bilinear classifiers, one per label, for multi-classification (c: # of labels):

S^(rel) = H^(rel-dep) W^(rel) (H^(rel-head))^T,   W^(rel) ∈ R^(c×d×(d+1))

During training, softmax cross-entropy is used to optimize S^(arc) and S^(rel). Note that for the optimization of S^(rel), gold heads are used instead of predicted ones. During decoding, a maximum spanning tree algorithm is adopted to search for the optimal tree based on the scores in S^(arc).
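A minimal numpy sketch of biaffine arc scoring (function names and shapes are ours, not those of the released implementation):

```python
import numpy as np

def biaffine_arc_scores(H_dep, H_head, W):
    """Biaffine arc scoring in the style of Dozat and Manning (2017).

    H_dep:  (n, d) dependent features from the arc-dep MLP.
    H_head: (n, d) head features from the arc-head MLP.
    W:      (d, d+1) weights; the extra column multiplies a constant 1
            appended to each head vector, acting as a bias that models
            the prior probability of each token being a head.
    Returns S of shape (n, n), where S[i, j] scores token j as the
    head of token i.
    """
    n = H_head.shape[0]
    H_head_b = np.concatenate([H_head, np.ones((n, 1))], axis=1)  # (n, d+1)
    return H_dep @ W @ H_head_b.T                                  # (n, n)

# Greedy decoding would take S.argmax(axis=1); the paper instead runs a
# maximum-spanning-tree algorithm over S to guarantee a well-formed tree.
```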

3.5 Biaffine Attention for Semantic Parsing

Dozat and Manning (2018) adapted their original biaffine parser to generate dependency graphs for semantic parsing, where each token can have zero to many heads. Since a tree structure is no longer guaranteed, sigmoid cross-entropy is used instead so that an independent binary prediction can be made for every token pair. Once arc predictions are made, labels are predicted by outputting the highest-scoring label for each predicted arc, as illustrated in Figure 2.

This updated implementation is further simplified in our approach by removing the trainable word embeddings, the character-level feature detector, and their corresponding linear transformations. Moreover, instead of interpolating between the head and label losses, equal weights are applied to both losses, reducing the number of hyperparameters to tune.
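A sketch of how sigmoid-based arc prediction yields a graph rather than a tree (the 0.5 threshold and function names are illustrative assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_graph(arc_scores, label_scores, threshold=0.5):
    """Turn biaffine scores into a dependency graph.

    arc_scores:   (n, n) raw scores; an arc is kept when its sigmoid
                  exceeds the threshold, so a token may receive zero
                  or many heads (unlike softmax, which forces exactly one).
    label_scores: (n, n, c) per-arc label scores; the argmax label is
                  assigned to every kept arc.
    Returns a list of (dependent, head, label) triples.
    """
    arcs = sigmoid(arc_scores) > threshold
    labels = label_scores.argmax(axis=-1)
    # Training (not shown) would simply sum the binary arc loss and the
    # label cross-entropy with equal weights.
    return [(i, j, int(labels[i, j]))
            for i in range(arcs.shape[0])
            for j in range(arcs.shape[1]) if arcs[i, j]]
```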

4 Experiments

Three sets of experiments are conducted to evaluate the impact of our approaches using BERT (Section 3). For sequence tagging (Section 4.1), part-of-speech tagging is chosen, where each token is assigned a fine-grained POS tag. For syntactic parsing (Section 4.2), dependency parsing is chosen, where each token finds exactly one head, generating a tree per sentence. For semantic parsing (Section 4.3), semantic dependency parsing is chosen, where each token finds zero to many heads, generating a graph per sentence. Every task is tested on both English and Chinese to ensure robustness across languages.

Standard datasets are adopted for all experiments to enable fair comparisons with many previous approaches. All our models are run three times, and average scores with standard deviations are reported. Section A describes our environmental settings and data splits in detail for the replication of this work.

4.1 Sequence Tagging

For part-of-speech tagging, the Wall Street Journal corpus from the Penn Treebank 3 Marcus et al. (1993) is used for English, and the Penn Chinese Treebank 5.1 Xue et al. (2005) is used for Chinese. Table 3 shows tagging results on the test sets.

Model ALL OOV
Ma and Hovy (2016) 97.55 93.45
Ling et al. (2015) 97.78 n/a
Clark et al. (2018) 97.79 n/a
Akbik et al. (2018) 97.85 (0.01) n/a
Bohnet et al. (2018) 97.96 n/a
Baseline 97.70 (0.05) 92.44 (0.03)
Baseline \ BERT-Base 96.96 (0.06) 91.23 (0.22)
Baseline \ BERT-Large 96.96 (0.05) 91.26 (0.25)
Baseline + BERT-Base 97.68 (0.06) 92.69 (0.32)
Baseline + BERT-Large 97.67 (0.02) 93.01 (0.27)
(a) Results from the English test set. BERT-Base and BERT-Large are BERT's uncased base and cased large models, respectively.
Model ALL OOV
Zhang et al. (2015) 94.47 n/a
Zhang et al. (2014) 94.62 n/a
Kurita et al. (2017) 94.84 n/a
Hatori et al. (2011) 94.64 n/a
Wang and Xue (2014) 96.0 n/a
Baseline 95.65 (0.26) 83.57 (0.55)
Baseline \ BERT 96.38 (0.15) 88.13 (0.72)
Baseline + BERT 97.25 (0.18) 90.53 (0.91)
(b) Results from the Chinese test set. * are evaluated on the character level due to automatic segmentation, so their results are not directly comparable to ours but are reported for reference.
Table 3:

Test results for part-of-speech tagging, where token-level accuracy is used as the evaluation metric. ALL: all tokens, OOV: out-of-vocabulary tokens.

For English, the baseline is our replication of the Flair model using both GloVe and Flair embeddings (Section 3.3). It shows a slightly lower accuracy, -0.15%, than the original model (Akbik et al., 2018) due to the lack of variational dropout. \BERT substitutes GloVe and Flair with BERT embeddings, and +BERT uses all three types of embeddings. The baseline outperforms all BERT models on the ALL test, implying that Flair's Bi-LSTM character language model is more effective than BERT's word-piece approach for this task. No significant difference is found between BERT-Base and BERT-Large. However, an interesting trend emerges in the OOV test, where the +BERT models show good improvement over the baseline. This implies that BERT embeddings can still contribute to the Flair model for OOV tokens, although the CNN character model from Ma and Hovy (2016) remains marginally more effective than +BERT on out-of-vocabulary tokens.

For Chinese, the Bi-LSTM-CRF model with FastText embeddings is used as the baseline (Section 3.3). \BERT, which substitutes FastText embeddings with BERT, and +BERT, which adds BERT embeddings to the baseline, show progressive improvement over the prior model on both the ALL and OOV tests. +BERT gives an accuracy 1.25% higher than the previous state of the art using joint learning between tagging and parsing (Wang and Xue, 2014).

4.2 Syntactic Parsing

The same datasets used for POS tagging, the Penn Treebank and the Penn Chinese Treebank (Section 4.1), are used for dependency parsing as well. Table 4 shows parsing results on the test sets.

Model UAS LAS
Dozat and Manning (2017) 95.74 94.08
Kuncoro et al. (2017) 95.8 94.6
Ma et al. (2018) 95.87 94.19
Choe and Charniak (2016) 95.9 94.1
Clark et al. (2018) 96.6 95.0
Baseline 95.78 (0.04) 94.04 (0.04)
Baseline \ BERT 96.76 (0.09) 95.27 (0.13)
Baseline + BERT 96.79 (0.08) 95.29 (0.12)
(c) Results from the English test set.
Model UAS LAS
Dozat and Manning (2017) 89.30 88.23
Ma et al. (2018) 90.59 89.29
Baseline 91.02 (0.10) 89.89 (0.09)
Baseline \ BERT 93.21 (0.06) 92.21 (0.05)
Baseline + BERT 93.34 (0.21) 92.29 (0.22)
(d) Results from the Chinese test set.
Table 4: Test results for dependency parsing, where unlabeled and labeled attachment scores (UAS and LAS) are used as the evaluation metrics.

Our simplified version of the biaffine parser (Section 3.4) is used as the baseline, where GloVe and FastText embeddings are used for English and Chinese, respectively. The baseline gives a result comparable to the original model (Dozat and Manning, 2017) for English, yet a notably better result for Chinese, which may be due to the higher-quality FastText embeddings. \BERT substitutes the pre-trained embeddings with BERT, and +BERT adds BERT embeddings to the baseline. Moreover, BERT's uncased base model is used for English.

Between \BERT and +BERT, no significant difference is found, implying that the pre-trained embeddings are not particularly useful when coupled with BERT. All BERT models show significant improvement over the baselines for both languages, and outperform the previous state-of-the-art approaches using cross-view training (Clark et al., 2018) and stack-pointer networks (Ma et al., 2018) by 0.29% and 3% in LAS for English and Chinese, respectively. Considering the simplicity of our +BERT models, these results are remarkable.

4.3 Semantic Parsing

The English dataset from the SemEval 2015 Task 18: Broad-Coverage Semantic Dependency Parsing Oepen et al. (2015) and the Chinese dataset from the SemEval 2016 Task 9: Chinese Semantic Dependency Parsing Che et al. (2016) are used for semantic dependency parsing.

Model DM PAS PSD AVG
Du et al. (2015) 89.1 91.3 75.7 85.3
Almeida and Martins (2015) 89.4 91.7 77.6 86.2
Wang et al. (2018) 90.3 91.7 78.6 86.9
Peng et al. (2017) 90.4 92.7 78.5 87.2
Dozat and Manning (2018) 93.7 93.9 81.0 89.5
Baseline 92.48 94.56 85.00 90.68
Baseline \ BERT 94.37 96.03 86.59 92.33
Baseline + BERT 94.58 96.13 86.80 92.50
(e) Results from the in-domain (ID) test sets.
Model DM PAS PSD AVG
Du et al. (2015) 81.8 87.2 73.3 80.8
Almeida and Martins (2015) 83.8 87.6 76.2 82.5
Wang et al. (2018) 84.9 87.6 75.9 82.8
Peng et al. (2017) 85.3 89.0 76.4 83.6
Dozat and Manning (2018) 88.9 90.6 79.4 86.3
Baseline 87.38 91.35 77.28 85.34
Baseline \ BERT 90.59 94.31 79.31 88.07
Baseline + BERT 90.76 94.38 79.48 88.21
(f) Results from the out-of-domain (OOD) test sets.
Table 5: Test results for semantic dependency parsing in English; labeled dependency F1 scores are used as the evaluation metrics. The standard deviations are reported in Section A.3. DM: DELPH-IN dependencies, PAS: Enju dependencies, PSD: Prague dependencies, AVG: macro-average of (DM, PAS, PSD).

Table 5 shows the English results on the test sets. The baseline, \BERT, and +BERT models are similar to the ones in Section 4.2, except that they use the sigmoid instead of the softmax function in the output layer to accept multiple heads (Section 3.5). Our baseline is a simplified version of Dozat and Manning (2018); its average scores are 1.2% higher and 1.0% lower than the original model for ID and OOD, respectively, due to different hyperparameter settings. +BERT shows good improvement over \BERT on both test sets, implying that BERT embeddings are complementary to the pre-trained embeddings, and surpasses the previous state-of-the-art scores by 3% and 2% for ID and OOD, respectively.

Model NEWS-UF NEWS-LF TEXT-UF TEXT-LF
Artsymenia et al. (2016) 77.64 59.06 82.41 68.59
Wang et al. (2018) 81.14 63.30 85.71 72.92
Baseline 80.51 64.90 88.06 77.28
Baseline \ BERT 82.91 67.17 90.83 80.46
Baseline + BERT 82.92 67.27 91.10 80.41
Table 6: Test results for semantic dependency parsing in Chinese, where unlabeled and labeled dependency F1 scores (UF and LF) are used as the evaluation metrics. The standard deviations are also reported in Section A.3. NEWS: newswire, TEXT: textbook.

Table 6 shows the Chinese results on the test sets. No significant difference is found between \BERT and +BERT. +BERT significantly outperforms the previous state of the art by 4% and 7.5% in LF for NEWS and TEXT, respectively, confirming that BERT embeddings are very effective for semantic dependency parsing in both English and Chinese.

(g) English: Flair

(h) English: BERT

(i) Chinese: FastText

(j) Chinese: BERT
Figure 3: Averaged attention matrices over sentences with 30 tokens. Each cell depicts the attention weight between the i'th and j'th tokens. All models are based on the Bi-LSTM-CRF (Huang et al., 2015) using the Flair (Akbik et al., 2018), FastText (Bojanowski et al., 2017), and BERT (Devlin et al., 2018) embeddings.

5 Analysis

This section gives an in-depth analysis of the great results achieved by our approaches (Section 4) to better understand the role of BERT in these tasks.

5.1 Attention Analysis for Tagging

The performance of the \BERT models is surprisingly low for English POS tagging, compared even to a linear model achieving 97.64% accuracy on the same dataset (Choi, 2016). This aligns with findings reported for BERT (Devlin et al., 2018) and ELMo (Peters et al., 2018), another popular contextualized embedding approach, whose POS and named-entity tagging results do not surpass the state of the art. To study how tagging models are trained with BERT embeddings, we augment the baseline and \BERT models in Table 3(a) with dot-product self-attention (Luong et al., 2015) and extract their attention weights. We then average the attention matrices decoded from sentences of equal length, 30 tokens, to find general trends.
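The averaging step can be sketched as follows, assuming the per-sentence attention matrices have already been extracted:

```python
import numpy as np

def average_attention(matrices):
    """Average self-attention matrices from same-length sentences.

    matrices: list of (L, L) attention-weight arrays (here L = 30),
    e.g. the softmax(q . k) rows taken from a dot-product self-attention
    layer. Averaging over many sentences of one length surfaces general
    attention trends rather than sentence-specific ones.
    """
    stacked = np.stack(matrices)                        # (num_sents, L, L)
    assert stacked.ndim == 3 and stacked.shape[1] == stacked.shape[2]
    return stacked.mean(axis=0)                         # (L, L)
```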

Comparing attention matrices across languages, the Chinese matrices are clearly much more checkered, implying that more context is required to make correct predictions in Chinese than in English. This makes sense because Chinese words tend to be more polysemous than English ones (Huang et al., 2007), so they rely more on context to disambiguate their categories. For the Flair and BERT models in English, the Flair matrix is more checkered and its diagonal is darker, implying that it uses more context while individual token embeddings convey more information for POS tagging, so their weights are higher than those in the BERT matrix. For the FastText and BERT models in Chinese, on the other hand, the BERT matrix is slightly more checkered and its diagonal is darker, indicating that BERT is better suited for this task than FastText.

(a) Chinese: FastText
(b) Chinese: BERT
Figure 4: Attention matrices for the Chinese sentence: 向(to) 伊朗(Iran) 出口(export) 年(yearly) 产(produce) 二十万(200K) 吨(tons) 纯(pure) 碱(alkali) 成套(whole) 设备(equipment) 。, which can be translated as "(China) export(s) (the) whole (set of) equipment(s) to Iran (that) yearly produce 200K tons (of) pure alkali". X-axis: predicted POS tags.

Figure 4 shows the attention matrices for a sample Chinese sentence. The FastText model mispredicts 出口(export) and 成套(whole) as nouns, whereas the BERT model correctly predicts them as a verb and an adjective, respectively. Notice that the BERT model gives the highest attention to 产(produce) for tagging 出口(export), both of which happen to be verbs, whereas the FastText model gives the highest attention to 设备(equipment), which is a noun.

5.2 Far-distance Analysis for Parsing

The outputs of the baseline and \BERT models on semantic dependency parsing (Table 5) are further analyzed for their robustness on long sentences. The average F1 scores for each sentence-length group, ranging from 1 to 50 tokens, are displayed in Figure 5. For DM and PAS, the baseline scores drop faster than those of \BERT as sentences get longer. For PSD, the drop rates are similar between the two, due to the challenging nature of this dataset (Oepen et al., 2015). This indicates that BERT embeddings handle far-distance dependencies in longer sentences better.

One possible explanation for BERT's ability to handle long sentences robustly is the training objective and structure of masked language modeling (MLM). MLM is trained to predict randomly masked tokens through features extracted by a bidirectional Transformer, which takes up to 512 tokens as input. This is about twice as long as what recurrent neural networks typically manage in practice before gradients vanish (Khandelwal et al., 2018), and an order of magnitude larger than the context windows used by FastText or GloVe. As a result, BERT embeddings can carry information from much farther-distant tokens, leading to higher performance on tasks requiring contextual understanding such as parsing.
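The data-preparation side of the MLM objective can be illustrated with a toy sketch (BERT's additional 80/10/10 keep-or-corrupt refinement is deliberately omitted):

```python
import random

def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]", seed=0):
    """Randomly mask tokens for a masked-LM training example.

    Each token is replaced by mask_token with probability mask_prob;
    the model is then trained to recover the originals from both left
    and right context, which is what lets it condition on the whole
    (up to 512-token) input rather than a single direction.
    """
    rng = random.Random(seed)
    masked, targets = [], []
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            masked.append(mask_token)
            targets.append((i, tok))   # position and gold token to predict
        else:
            masked.append(tok)
    return masked, targets
```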

(a) ID: DM
(b) ID: PAS
(c) ID: PSD
(d) OOD: DM
(e) OOD: PAS
(f) OOD: PSD
Figure 5: Average labeled F1 scores for semantic parsing w.r.t. sentence lengths. LAS and UAS are represented by solid and dashed lines, and scores from the baseline and \BERT models are shown in blue and red, respectively.
Figure 6: "Share earnings are reported on a fully diluted basis, by company tradition."

5.3 Labeling Analysis on Semantic Parsing

Prague Semantic Dependencies (PSD) is used for our labeling analysis because it is manually annotated and well documented (Cinková et al., 2006). The average labeled F1 score of each label is ranked by the difference between the baseline and \BERT models in Table 5. Figure 7 shows the top-5 labels on which \BERT outperforms the baseline, and vice versa.

(a) ID label ranking results. {ID: ID, CR: CRIT, DI: DIR2, TT: TTILL, CO: COND}, {EX: EXT-arg, MA: MANN-arg, GR: GRAD.member, AI: AIM-arg, TW: TWHEN-arg}.
(b) OOD label ranking results. {TF: TFRWH, TS: TSIN, RE: RESTR, SM: SM, PA: PAR}, {VO: VOCAT, LO: LOC-arg, AC: ACMP-arg, MA: MANN-arg, EX: EXT-arg}.
Figure 7: Top-5 labels that \BERT outperforms the baseline (in red) and vice versa (in blue) on PSD.

The baseline performs better on certain arguments involving syntactic relations, such as LOC-arg (locative), where the relation usually finds a preposition as the head of a noun phrase. \BERT shows robust generalization for arguments involving semantic reasoning, e.g., CRIT (criterion) or COND (condition). For ‘tradition’ in Figure 6, \BERT correctly classifies the CRIT label, while the baseline misclassifies it as ACT-arg (argument of action). The far-dependent relation between ‘tradition’ and ‘reported’ requires deeper inference over the context, which may be beyond the capacity of the baseline.

6 Conclusion

In this paper, we describe our methods for exploiting BERT as token-level embeddings in tagging and parsing tasks. Our experiments empirically show that tagging and parsing can be tackled with much simpler models without losing accuracy. Out of 12 datasets, our approaches with BERT establish a new state of the art on 11. As the first work employing BERT for syntactic and semantic parsing, our approach is much simpler yet more accurate than the previous state of the art.

Through a dedicated error analysis and extensive dissections based on an attention mechanism, we uncover interesting properties of BERT from syntactic, semantic, and multilingual perspectives. Beyond syntactically intensive or morphologically complex tasks, BERT embeddings are well suited for semantic reasoning in long sentences.


Appendix A Supplemental Materials

Throughout this paper, we use the following notation for data splits. TRN: training, DEV: development, TST: test.

A.1 Part-of-Speech Tagging

For English, the Wall Street Journal corpus from the Penn Treebank 3 (Marcus et al., 1993) is used with the standard split for part-of-speech tagging. The baseline is our replication of the Flair model (Akbik et al., 2018) using embeddings trained by GloVe. Specifically, we use 100-dimensional GloVe embeddings (Pennington et al., 2014) trained on Wikipedia 2014 and Gigaword 5, involving 6B tokens in total.

Set Sections Sentences Tokens
TRN 0-18 38,219 912,344
DEV 19-21 5,527 131,768
TST 22-24 5,462 129,654
Table 7: English part-of-speech tagging on Penn Treebank 3 Marcus et al. (1993).

For Chinese, the Penn Chinese Treebank 5.1 (Xue et al., 2005) is used with the standard split for POS tagging. The baseline is our replication of the Bi-LSTM-CRF model (Huang et al., 2015). We use 300-dimensional FastText embeddings with subword information.

Set Sections Sentences Tokens
TRN 1-270; 400-1151 18,078 493,691
DEV 301-325 350 6,821
TST 271-300 348 8,008
Table 8: Chinese part-of-speech tagging on Penn Chinese Treebank 5.1 Xue et al. (2005).

A.2 Syntactic Parsing

For English, the Wall Street Journal corpus from the Penn Treebank 3 is used with the standard split, converted by the Stanford Parser 3.3.0, for syntactic dependency parsing. For Chinese, the Penn Chinese Treebank 5.1 is used with the standard split, converted by the head-finding rules of Zhang and Clark (2008) and the labeling rules of Penn2Malt. The POS tags are auto-generated by the POS tagger in NLP4J (Choi, 2016) using 10-way jackknifing on the training set for English, and the gold word segmentation and POS tags are used for Chinese.
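The 10-way jackknifing scheme can be sketched as follows; `jackknife_folds` and `held_out` are hypothetical helper names for illustration:

```python
def jackknife_folds(n_sentences, k=10):
    """Assign each training sentence to one of k folds.

    For 10-way jackknifing, a tagger is trained k times, each time on
    the sentences outside one fold, and then tags the held-out fold;
    every training sentence thus receives automatic (non-gold) POS tags
    that mimic test-time tagger noise.
    """
    return [i % k for i in range(n_sentences)]

def held_out(fold, folds):
    """Indices of the sentences tagged by the model trained without `fold`."""
    return [i for i, f in enumerate(folds) if f == fold]
```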

Set Sections Sentences Tokens
TRN 2-21 39,832 950,027
DEV 22 1,700 40,117
TST 23 2,416 56,684
Table 9: English dependency parsing on Penn Treebank 3 Marcus et al. (1993).
Set Sections Sentences Tokens
TRN 1-815; 1001-1136 16,091 437,991
DEV 886-931; 1148-1151 803 20,454
TST 816-885; 1137-1147 1,910 50,319
Table 10: Chinese dependency parsing on Penn Chinese Treebank 5.1 Xue et al. (2005).

A.3 Semantic Dependency Parsing

For English, the dataset from the SemEval 2015 Task 18 (Oepen et al., 2015) is used for semantic dependency parsing. For Chinese, the SemEval 2016 Task 9 (Che et al., 2016) dataset is used; the SemEval 2015 Chinese dataset is not used because it is less popular. The POS tags provided in those datasets are used as-is for both English and Chinese, and the provided word segmentation is used for Chinese.

Set Sections Sentences Tokens
TRN 0-19 33,964 765,025
DEV 20 1,692 37,692
TST-IN 21 1,410 31,948
TST-OOD Brown 1,849 31,583
(a) General statistics.
Set DM PAS PSD
TRN 585,646 711,064 532,271
DEV 28,854 34,775 26,438
TST-IN 24,307 29,776 22,063
TST-OOD 23,497 28,569 20,095
(b) Number of dependencies.
Table 11: English semantic dependency parsing on SemEval 2015 Task 18: Broad-Coverage Semantic Dependency Parsing Oepen et al. (2015). Section 20 is used as development set, following the recommended split from the organizers.

The standard deviations from the English and Chinese semantic parsing test sets are recorded in Table 12 and Table 13, respectively.

Model DM PAS PSD AVG
Baseline ± 0.03 ± 0.03 ± 0.10 ± 0.04
Baseline \ BERT ± 0.05 ± 0.04 ± 0.11 ± 0.05
Baseline + BERT ± 0.05 ± 0.01 ± 0.17 ± 0.06
(c) Standard deviations from the in-domain (ID) test sets.
Model DM PAS PSD AVG
Baseline ± 0.10 ± 0.09 ± 0.12 ± 0.04
Baseline \ BERT ± 0.16 ± 0.07 ± 0.18 ± 0.08
Baseline + BERT ± 0.15 ± 0.06 ± 0.01 ± 0.07
(d) Standard deviations from the out-of-domain test sets.
Table 12: Standard deviations for semantic dependency parsing in English; labeled dependency F1 scores are reported in Section 4.3. DM: DELPH-IN dependencies, PAS: Enju dependencies, PSD: Prague dependencies, AVG: macro-average of (DM, PAS, PSD).
Model NEWS-UF NEWS-LF TEXT-UF TEXT-LF
Baseline ± 0.24 ± 0.13 ± 0.25 ± 0.81
Baseline \ BERT ± 0.07 ± 0.04 ± 0.19 ± 0.84
Baseline + BERT ± 0.13 ± 0.09 ± 0.15 ± 0.10
Table 13: Standard deviations for semantic dependency parsing in Chinese; average unlabeled and labeled dependency F1 scores (UF and LF) are reported in Section 4.3. NEWS: newswire, TEXT: textbook.

A.4 Implementation

Our models are implemented in MXNet and run on NVIDIA Tesla V100 GPUs. Note that in our implementation, the BERT large cased model requires 15GB of GPU memory, which exceeds the memory limit of a TITAN X (12GB). The training time for the baseline+BERT models on each dataset is listed in Table 14.

Dataset Time (hours)
CTB 10
SemEval 2015 6
SemEval 2016 10
Table 14: Training time on specific datasets.

A.5 Hyperparameters

A.5.1 Tagging

The hyperparameter configurations for the English and Chinese tagging models are given in Table 15.

Hidden Sizes
GloVe / FastText 100 / 300
Flair BiLSTM 1 @ 2048
BiLSTM 1 @ 256
BERT EN / CN 1024 / 768
Dropout Rates
Embedding(s) 50%
Loss & Optimizer
Optimizer SGD
Learning rate
Anneal factor
Anneal patience
Batch size
Max epochs

Table 15: Hyperparameter configuration for tagging.
Table 15: Hyperparameter configuration for tagging.

A.5.2 Parsing

We have similar configurations for both syntactic and semantic dependency parsing in English and Chinese, shown in Table 16.

Hidden Sizes
GloVe / FastText 100 / 300
BiLSTM 3 @ 400
BERT EN / CN 768 / 768
Dropout Rates
Embeddings 33%
Word Dropout 33%
Variational Dropout 33%
MLP 33%
Loss & Optimizer
Optimizer SGD
Learning rate
Adam β1 0.9
Adam β2 0.9
Anneal factor
Anneal every 5000
Batch size 5000
Train steps 50000
Table 16: Hyperparameter configuration for parsing.