Pre-Training with Whole Word Masking for Chinese BERT

06/19/2019 · Yiming Cui, et al. · Harbin Institute of Technology · Anhui USTC iFLYTEK Co.

Bidirectional Encoder Representations from Transformers (BERT) has shown marvelous improvements across various NLP tasks. Recently, an upgraded version of BERT has been released with Whole Word Masking (WWM), which mitigates the drawback of masking only part of a word's WordPiece tokens during pre-training. In this technical report, we adapt whole word masking to Chinese text: instead of masking individual Chinese characters, we mask all characters belonging to the same word, which makes the Masked Language Model (MLM) pre-training task more challenging. The model was trained on the latest Chinese Wikipedia dump. We aim to provide easy extensibility and better performance for Chinese BERT without changing the neural architecture or even the hyper-parameters. The model is verified on various NLP tasks, from sentence-level to document-level, including sentiment classification (ChnSentiCorp, Sina Weibo), named entity recognition (People Daily, MSRA-NER), natural language inference (XNLI), sentence pair matching (LCQMC, BQ Corpus), and machine reading comprehension (CMRC 2018, DRCD, CJRC). Experimental results on these datasets show that whole word masking brings another significant gain. Moreover, we also examine the effectiveness of Chinese pre-trained models: BERT, ERNIE, and BERT-wwm. We release the pre-trained models (both TensorFlow and PyTorch) on GitHub: https://github.com/ymcui/Chinese-BERT-wwm


1 Introduction

Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) has become enormously popular in recent NLP studies. It utilizes large-scale unlabeled training data to generate enriched contextual representations and has shown powerful performance on various natural language processing tasks. Looking at several popular machine reading comprehension benchmarks, such as SQuAD (Rajpurkar et al., 2018), CoQA (Reddy et al., 2019), QuAC (Choi et al., 2018), NaturalQuestions (Kwiatkowski et al., 2019), and RACE (Lai et al., 2017), we can see that most of the top-performing models are based on BERT (Cui et al., 2017; Dai et al., 2019; Zhang et al., 2019b; Ran et al., 2019).

Recently, the authors of BERT released an upgraded version trained with Whole Word Masking (WWM), which mitigates a drawback of the original BERT: under whole word masking, if one WordPiece token (Wu et al., 2016) of a word is selected for masking, then all of the WordPiece tokens that form that word are masked together. This explicitly forces the model to recover the whole word in the Masked Language Model (MLM) pre-training task, rather than just individual WordPiece tokens, which is much more challenging. Along with the strategy, they also released pre-trained English models (BERT-large-wwm) for the community (https://github.com/google-research/bert), which helps researchers design more powerful models on top of them.
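
To make the difference concrete, the following is a minimal sketch of whole word masking over English WordPiece tokens (our own illustration, not the official create_pretraining_data.py logic, which additionally applies the usual 80/10/10 mask/replace/keep scheme): sub-tokens prefixed with "##" are grouped with the preceding token, and the masking decision is applied to the whole group at once.

    import random

    def whole_word_mask(wordpiece_tokens, mask_prob=0.15, mask_token="[MASK]"):
        """Sketch: mask whole words rather than individual WordPiece tokens."""
        # Group sub-token indices into whole words ("##" continues the previous word).
        word_groups = []
        for i, tok in enumerate(wordpiece_tokens):
            if tok.startswith("##") and word_groups:
                word_groups[-1].append(i)
            else:
                word_groups.append([i])

        masked = list(wordpiece_tokens)
        for group in word_groups:
            if random.random() < mask_prob:
                for i in group:
                    masked[i] = mask_token  # every piece of the word is masked together
        return masked

    # e.g. whole_word_mask(["the", "phil", "##harmonic", "played"]) can yield
    # ["the", "[MASK]", "[MASK]", "played"], but never a partially masked word.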

Before Google released whole word masking, Baidu had proposed Enhanced Representation through kNowledge IntEgration (ERNIE) (Sun et al., 2019) in a similar spirit, trained not only on Wikipedia data but also on community QA, Baike (similar to Wikipedia), etc. (Tsinghua University has also released a model called ERNIE, but it was not trained on Chinese (Zhang et al., 2019a); in the following sections, ERNIE refers to the model by Baidu (Sun et al., 2019).) ERNIE was tested on various NLP tasks and showed consistent improvements over BERT.

In this technical report, we adapt the whole word masking strategy to Chinese BERT to verify its effectiveness. The model was pre-trained on the latest Wikipedia dump in Chinese (both the Simplified and Traditional Chinese portions are kept). Note that we did not exploit any additional data; we aim to provide a more general base for developing NLP systems in Simplified and Traditional Chinese. Extensive experiments are conducted on various Chinese NLP datasets, ranging from sentence-level to document-level, covering machine reading comprehension, named entity recognition, sentiment classification, sentence pair matching, natural language inference, document classification, etc. The results show that the proposed model brings another gain over BERT and ERNIE on most of the tasks, and we provide several useful tips for using these pre-trained models, which may be helpful for future research.

The contributions of this technical report are listed as follows.

  • We adapt whole word masking to Chinese BERT and release the pre-trained model for the community.

  • Extensive experiments are carried out to demonstrate the effectiveness of BERT, BERT-wwm, and ERNIE.

  • Several useful tips are provided for using these pre-trained models on Chinese text.

2 Chinese BERT with Whole Word Masking

2.1 Data Processing

We downloaded the latest Wikipedia dump (https://dumps.wikimedia.org/zhwiki/latest/) and pre-processed it with WikiExtractor.py, as suggested by Devlin et al. (2019), resulting in 1,307 extracted files. Note that we use both the Simplified and Traditional Chinese text in this dump. After cleaning the raw text (e.g., removing HTML tags) and splitting it into documents, we obtain 13.6M lines of input text. In order to identify the boundaries of Chinese words, we use LTP (http://ltp.ai) (Che et al., 2010) for Chinese Word Segmentation (CWS). We use the official create_pretraining_data.py, provided in the BERT GitHub repository, to convert the raw input text into pre-training examples. We generate two sets of pre-training examples, with maximum lengths of 128 and 512, as suggested by Devlin et al. (2019), for computational efficiency and for learning long-range dependencies, respectively. We strictly follow the original whole word masking code and did not change other components, such as the percentage of masked words. An example of whole word masking is depicted in Figure 1.

[Original Sentence]
使用语言模型来预测下一个词的probability。
[Original Sentence with CWS]
使用 语言 模型 来 预测 下 一个 词 的 probability 。
[Original BERT Input]
使 用 语 言 [MASK] 型 来 [MASK] 测 下 一 个 词 的 pro [MASK] ##lity 。
[Whole Word Masking Input]
使 用 语 言 [MASK] [MASK] 来 [MASK] [MASK] 下 一 个 词 的 [MASK] [MASK] [MASK] 。
Figure 1: An example of whole word masking in BERT.
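
For Chinese, word boundaries come from CWS rather than from "##" markers. The following is a minimal sketch (our own illustration, not the released preprocessing code) of how the last line of Figure 1 can be produced from the segmented sentence: the input is tokenized into characters, but all characters of a selected word are masked together.

    def chinese_whole_word_mask(segmented_words, words_to_mask, mask_token="[MASK]"):
        """Sketch: given CWS output (e.g. from LTP), mask every character of the
        selected words. Non-Chinese words are treated as single units here; in the
        real pipeline they are further split by WordPiece, so "probability" would
        yield three masked pieces as in Figure 1."""
        output = []
        for word in segmented_words:
            is_chinese = all("\u4e00" <= ch <= "\u9fff" for ch in word)
            pieces = list(word) if is_chinese else [word]
            if word in words_to_mask:
                output.extend([mask_token] * len(pieces))  # mask the whole word
            else:
                output.extend(pieces)
        return output

    # chinese_whole_word_mask(
    #     ["使用", "语言", "模型", "来", "预测", "下", "一个", "词", "的", "probability", "。"],
    #     words_to_mask={"模型", "预测", "probability"})
    # -> 使 用 语 言 [MASK] [MASK] 来 [MASK] [MASK] 下 一 个 词 的 [MASK] 。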

2.2 Pre-Training

We assume that whole word masking is a remedy that lets BERT know word boundaries, and should therefore be treated as a 'patch' rather than a brand new model. Under this assumption, we did NOT train our model from scratch but initialized it from the official BERT-base (Chinese). We first train 100K steps on the examples with a maximum length of 128, a batch size of 2,560, and an initial learning rate of 1e-4 (with a warm-up ratio of 10%). We then train another 100K steps with a maximum length of 512 and a batch size of 384 to learn long-range dependencies and position embeddings. Note that the input examples of the two phases must be generated according to the corresponding maximum length. Instead of the original AdamWeightDecayOptimizer in BERT, we use the LAMB optimizer (You et al., 2019) for better scalability with large batches (for further tests and TensorFlow code for the LAMB optimizer, see https://github.com/ymcui/LAMB_Optimizer_TF). The pre-training was done on a Google Cloud TPU v3 with 128G HBM (https://cloud.google.com/tpu/).
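
For reference, the two-phase schedule described above can be summarized as the following configuration sketch; the dictionary keys are our own naming and do not correspond exactly to the flags of the official pre-training script.

    # Illustrative summary of the two pre-training phases (values from the text above).
    PRETRAIN_PHASES = [
        {
            "init_checkpoint": "chinese_L-12_H-768_A-12/bert_model.ckpt",  # official BERT-base (Chinese)
            "max_seq_length": 128,
            "train_batch_size": 2560,
            "num_train_steps": 100_000,
            "learning_rate": 1e-4,
            "warmup_ratio": 0.10,
            "optimizer": "LAMB",  # instead of AdamWeightDecayOptimizer
        },
        {
            "init_checkpoint": "phase1/model.ckpt-100000",  # continue from phase 1
            "max_seq_length": 512,  # longer inputs for long-range dependencies
            "train_batch_size": 384,
            "num_train_steps": 100_000,
            "optimizer": "LAMB",
        },
    ]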

2.3 Fine-Tuning on Downstream Tasks

Using this model is straightforward, as only one step is needed: replace the original Chinese BERT (https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip) with our model, without changing the config or vocabulary file.
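
As a usage sketch (assuming the released weights have been downloaded and, for PyTorch users, converted to the Hugging Face format; the local path below is hypothetical), the model loads exactly like the original Chinese BERT:

    import torch
    from transformers import BertTokenizer, BertModel

    MODEL_DIR = "./chinese_bert_wwm"  # hypothetical local dir with config.json, vocab.txt, pytorch_model.bin

    tokenizer = BertTokenizer.from_pretrained(MODEL_DIR)
    model = BertModel.from_pretrained(MODEL_DIR)

    inputs = tokenizer("使用语言模型来预测下一个词。", return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)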

3 Experiments

We carried out extensive experiments on various natural language processing tasks, covering a wide spectrum of text lengths (from sentence-level to document-level). Specifically, we choose the following popular Chinese datasets, including those also used by BERT and ERNIE, and adopt additional datasets to test the models over a wider range of tasks.

In order to make a fair comparison, for each dataset we keep the same hyper-parameters (such as the maximum length, warm-up steps, etc.) and only tune the initial learning rate, within the range of 1e-5 to 1e-4. We run each experiment ten times to ensure the reliability of the results. The best initial learning rate is determined by the best average development-set performance. We report both the maximum and the average scores, to evaluate the peak and the average performance of these models. For detailed hyper-parameter settings, please see Table 1.
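
The tuning protocol above can be written as a short sketch; train_and_evaluate is a hypothetical callable standing in for one fine-tuning run that returns a development-set score.

    import statistics

    CANDIDATE_LRS = (2e-5, 3e-5, 4e-5, 5e-5, 8e-5, 1e-4)  # illustrative grid within [1e-5, 1e-4]

    def select_learning_rate(train_and_evaluate, candidate_lrs=CANDIDATE_LRS, n_runs=10):
        """For each candidate initial learning rate, fine-tune n_runs times and keep
        the rate with the best *average* dev score; report both max and average."""
        results = {}
        for lr in candidate_lrs:
            scores = [train_and_evaluate(lr, seed=seed) for seed in range(n_runs)]
            results[lr] = {"max": max(scores), "avg": statistics.mean(scores)}
        best_lr = max(results, key=lambda lr: results[lr]["avg"])
        return best_lr, results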

Dataset Task MaxLen Batch Epoch Train # Dev # Test # Domain
CMRC 2018 MRC 512 64 2 10K 3.2K 4.9K Wikipedia
DRCD MRC 512 64 2 27K 3.5K 3.5K Wikipedia
CJRC MRC 512 64 2 10K 3.2K 3.2K law
People Daily NER 256 64 3 51K 4.6K - news
MSRA-NER NER 256 64 5 45K - 3.4K news
XNLI NLI 128 64 2 392K 2.5K 2.5K various
ChnSentiCorp SC 256 64 3 9.6K 1.2K 1.2K various
Sina Weibo SC 128 64 3 100K 10K 10K microblogs
LCQMC SPM 128 64 3 240K 8.8K 12.5K Zhidao
BQ Corpus SPM 128 64 3 100K 10K 10K QA
THUCNews DC 512 64 3 50K 5K 10K news
Table 1: Hyper-parameter settings and data statistics for each task. Some of these datasets were also evaluated in BERT (Devlin et al., 2019) or ERNIE (Sun et al., 2019); the remaining datasets are new benchmarks for these models.

In this technical report, we mainly focus on three pre-trained models: BERT, BERT-wwm, and ERNIE. A comparison of the models is given in Table 2.

Model BERT BERT-wwm ERNIE
Pre-Train Data Wikipedia Wikipedia Wikipedia + Baike + Tieba, etc.
Sentence # 24M 24M 173M
Vocabulary # 21,128 21,128 18,000 (17,964)
Hidden Activation GeLU GeLU ReLU
Hidden Size / Layers 768 / 12 768 / 12 768 / 12
Attention Head # 12 12 12
Table 2: Comparisons of the Chinese pre-trained models.

We carried out all experiments under the TensorFlow framework (Abadi et al., 2016). Note that ERNIE only provides a PaddlePaddle version (https://github.com/PaddlePaddle/LARK/tree/develop/ERNIE), so we had to convert the weights into the TensorFlow format (verified on the XNLI task, where we reproduce the results reported by the ERNIE authors).

3.1 Machine Reading Comprehension: CMRC 2018, DRCD, CJRC

Machine Reading Comprehension (MRC) is a representative document-level modeling task that requires answering questions based on given passages. We mainly test on three datasets: CMRC 2018 (Cui et al., 2018), DRCD (Shao et al., 2018), and CJRC.

  • CMRC 2018: A span-extraction machine reading comprehension dataset, similar to SQuAD (Rajpurkar et al., 2016), in which the answer to a given question is a span extracted from the passage (see the span-selection sketch after this list).

  • DRCD: Also a span-extraction MRC dataset, but in Traditional Chinese.

  • CJRC: Similar to CoQA (Reddy et al., 2019), with yes/no questions, no-answer questions, and span-extraction questions. The data is collected from Chinese law judgment documents. Note that we only use small-train-data.json for training. The development and test sets were collected in-house (they are not publicly available due to license issues and differ from the official competition sets).
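
As referenced in the list above, the following is a minimal sketch of span extraction as used in SQuAD-style models (not the exact decoding code of any particular model): given per-token start and end logits over the passage, pick the highest-scoring valid (start, end) pair.

    def best_span(start_logits, end_logits, max_answer_len=30):
        """Choose the (start, end) pair with the highest combined score,
        subject to start <= end and a maximum answer length."""
        best, best_score = (0, 0), float("-inf")
        for s, s_logit in enumerate(start_logits):
            for e in range(s, min(s + max_answer_len, len(end_logits))):
                score = s_logit + end_logits[e]
                if score > best_score:
                    best_score, best = score, (s, e)
        return best  # token indices of the predicted answer span

    # best_span([0.1, 2.0, 0.3], [0.2, 0.1, 1.5]) -> (1, 2)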

CMRC 2018 Dev Test Challenge
EM F1 EM F1 EM F1
BERT 65.5 (64.4) 84.5 (84.0) 70.0 (68.7) 87.0 (86.3) 18.6 (17.0) 43.3 (41.3)
ERNIE 65.4 (64.3) 84.7 (84.2) 69.4 (68.2) 86.6 (86.1) 19.6 (17.0) 44.3 (42.8)
BERT-wwm 66.3 (65.0) 85.6 (84.7) 70.5 (69.1) 87.4 (86.7) 21.0 (19.3) 47.0 (43.9)
Table 3: Results on CMRC 2018 (Simplified Chinese). The average score of 10 independent runs is depicted in brackets. Best LR: BERT (3e-5), BERT-wwm (3e-5), ERNIE (8e-5).
DRCD Dev Test
EM F1 EM F1
BERT 83.1 (82.7) 89.9 (89.6) 82.2 (81.6) 89.2 (88.8)
ERNIE 73.2 (73.0) 83.9 (83.8) 71.9 (71.4) 82.5 (82.3)
BERT-wwm 84.3 (83.4) 90.5 (90.2) 82.8 (81.8) 89.7 (89.0)
Table 4: Results on DRCD (Traditional Chinese). Best LR: BERT (3e-5), BERT-wwm (3e-5), ERNIE (8e-5).
CJRC Dev Test
EM F1 EM F1
BERT 54.6 (54.0) 75.4 (74.5) 55.1 (54.1) 75.2 (74.3)
ERNIE 54.3 (53.9) 75.3 (74.6) 55.0 (53.9) 75.0 (73.9)
BERT-wwm 54.7 (54.0) 75.2 (74.8) 55.1 (54.1) 75.4 (74.4)
Table 5: Results on CJRC. Best LR: BERT (4e-5), BERT-wwm (4e-5), ERNIE (8e-5).

The results are shown in Tables 3, 4, and 5. As we can see, BERT-wwm yields significant improvements on CMRC 2018 and DRCD, which demonstrates its effectiveness in modeling long sequences. We also find that ERNIE does not show competitive performance on DRCD, indicating that it is not well suited to processing Traditional Chinese text. After examining ERNIE's vocabulary, we found that Traditional Chinese characters appear to have been removed (we did not check thoroughly, but we could not find some common Traditional Chinese characters), which results in inferior performance. On CJRC, where the text is written in the professional style of Chinese legal documents, BERT-wwm shows a moderate improvement over BERT and ERNIE, but not a salient one, indicating that further domain adaptation is needed for non-general domains. Moreover, in professional domains the performance of the Chinese word segmenter may also decrease, which in turn affects the performance of ERNIE and BERT-wwm, both of which rely on Chinese word segmentation.

3.2 Named Entity Recognition: People Daily, MSRA-NER

In order to examine sequence labeling ability, we adopt two Named Entity Recognition datasets: People Daily and MSRA-NER. The label set includes O, B-PER, I-PER, B-ORG, I-ORG, B-LOC, and I-LOC. We extract the predicted entities and use seqeval (https://github.com/chakki-works/seqeval) to evaluate NER performance in terms of Precision, Recall, and F-score; a minimal evaluation sketch is given below. Note that ERNIE used a different data split with 21k/2.3k/4.6k examples in train/dev/test (not the full original train/test sets); we use the official MSRA-NER train and test sets in the following experiments. Also, the test set of People Daily is rather small (fewer than 100 examples), so we do not report results on it. During training, ERNIE failed in over half of ten independent runs, yielding results significantly lower than the average score (e.g., below 90); we exclude these runs to ensure a fair comparison.
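
As a small example of the entity-level evaluation described above (toy predictions, not our actual evaluation script):

    # pip install seqeval
    from seqeval.metrics import precision_score, recall_score, f1_score

    y_true = [["B-PER", "I-PER", "O", "B-LOC", "O"],
              ["B-ORG", "I-ORG", "O"]]
    y_pred = [["B-PER", "I-PER", "O", "B-LOC", "O"],
              ["B-ORG", "O", "O"]]  # second entity only partially predicted

    # seqeval scores at the entity level: a partially matched entity counts as an error.
    print(precision_score(y_true, y_pred))  # 2/3: one predicted entity has wrong boundaries
    print(recall_score(y_true, y_pred))     # 2/3: one gold entity was missed
    print(f1_score(y_true, y_pred))         # 2/3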

NER People Daily MSRA-NER
P R F P R F
BERT 95.3 (95.0) 95.1 (94.8) 95.2 (94.9) 95.4 (94.8) 95.3 (95.0) 95.3 (94.9)
ERNIE 95.8 (94.7) 95.6 (94.3) 95.7 (94.5) 95.3 (94.9) 95.7 (95.4) 95.4 (95.1)
BERT-wwm 95.4 (95.1) 95.3 (95.0) 95.3 (95.1) 95.4 (95.1) 95.6 (95.3) 95.4 (95.1)
Table 6: Results on People Daily and MSRA-NER. Best LR for PD: BERT (3e-5), BERT-wwm (3e-5), ERNIE (5e-5). Best LR for MSRA-NER: BERT (3e-5), BERT-wwm (4e-5), ERNIE (5e-5).

From Table 6, we can see that ERNIE performs well on the two NER datasets, especially in terms of peak performance, while BERT-wwm has better average performance. These results suggest that incorporating word information into the pre-training of BERT/ERNIE can further improve named entity recognition performance. As ERNIE was trained on additional data, it may be better at recognizing new named entities, while BERT/BERT-wwm are oriented toward general domains. We do not investigate this in detail and leave it for readers to experiment with.

3.3 Natural Language Inference: XNLI

Following BERT and ERNIE, we use the Chinese portion of XNLI to test these models. The results show that ERNIE achieves the best overall performance, while BERT-wwm is competitive with it on the test set.

XNLI Dev Test
BERT 77.8 (77.4) 77.8 (77.5)
ERNIE 79.7 (79.4) 78.6 (78.2)
BERT-wwm 79.0 (78.4) 78.2 (78.0)
Table 7: Results on XNLI. Best LR: BERT (3e-5), BERT-wwm (3e-5), ERNIE (5e-5).

3.4 Sentiment Classification: ChnSentiCorp, Sina Weibo

For the sentiment classification task, we adopt two datasets. Both are binary classification tasks (positive/negative).

  • ChnSentiCorp: A Chinese sentiment analysis dataset.

  • Sina Weibo: We adopt a version that contains 120K microblogs with positive/negative labels, split evenly (with balanced positive/negative labels) into train/dev/test sets of 100K/10K/10K; a split sketch is given after this list.
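
A minimal sketch of the balanced split mentioned above, using scikit-learn's stratified splitting (the variable names and function are our own framing of the description, not the exact script used):

    from sklearn.model_selection import train_test_split

    def balanced_split(texts, labels, dev_size=10_000, test_size=10_000, seed=42):
        # Carve out dev+test while keeping the positive/negative ratio (stratify),
        # then split the held-out portion evenly into dev and test.
        train_x, rest_x, train_y, rest_y = train_test_split(
            texts, labels, test_size=dev_size + test_size,
            stratify=labels, random_state=seed)
        dev_x, test_x, dev_y, test_y = train_test_split(
            rest_x, rest_y, test_size=test_size,
            stratify=rest_y, random_state=seed)
        return (train_x, train_y), (dev_x, dev_y), (test_x, test_y)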

We can see that ERNIE achieves the best performance on ChnSentiCorp, followed by BERT-wwm and BERT. On Sina Weibo, BERT-wwm shows better performance in terms of maximum and average scores on the test set. As ERNIE was trained on additional web text, it is well suited to modeling informal text and capturing the sentiment of social media posts such as those on Weibo.

Sentiment ChnSentiCorp Sina Weibo (100k)
Classification Dev Test Dev Test
BERT 94.7 (94.3) 95.0 (94.7) 97.49 (97.38) 97.37 (97.32)
ERNIE 95.4 (94.8) 95.4 (95.3) 97.54 (97.41) 97.37 (97.29)
BERT-wwm 95.1 (94.5) 95.4 (95.0) 97.49 (97.40) 97.37 (97.35)
Table 8: Results on ChnSentiCorp and Sina Weibo. Best LR for ChnSentiCorp: BERT (2e-5), BERT-wwm (2e-5), ERNIE (5e-5). Best LR for Sina Weibo: BERT (2e-5), BERT-wwm (3e-5), ERNIE (3e-5).

3.5 Sentence Pair Matching: LCQMC, BQ Corpus

We adopt the Large-scale Chinese Question Matching Corpus (LCQMC) and the BQ Corpus to test sentence pair matching. As we can see, ERNIE outperforms BERT and BERT-wwm on LCQMC. Although the peak performance of BERT-wwm is similar to BERT's, its average score is higher, indicating its potential for achieving higher scores (subject to randomness). On the BQ Corpus, however, BERT-wwm generally outperforms ERNIE and BERT, especially in terms of average scores.

Sentence Pair LCQMC BQ Corpus
Matching Dev Test Dev Test
BERT 89.4 (88.4) 86.9 (86.4) 86.0 (85.5) 84.8 (84.6)
ERNIE 89.8 (89.6) 87.2 (87.0) 86.3 (85.5) 85.0 (84.6)
BERT-wwm 89.4 (89.2) 87.0 (86.8) 86.1 (85.6) 85.2 (84.9)
Table 9: Results on LCQMC and BQ Corpus. Best LR for LCQMC: BERT (2e-5), BERT-wwm (2e-5), ERNIE (3e-5). Best LR for BQ Corpus: BERT (3e-5), BERT-wwm (3e-5), ERNIE (5e-5).

3.6 Document Classification: THUCNews

THUCNews is a dataset of Sina news in different genres and is part of THUCTC (http://thuctc.thunlp.org). Specifically, we use a version that contains 50K news articles evenly distributed over 10 domains, including sports, finance, technology, etc. (https://github.com/gaussic/text-classification-cnn-rnn). As we can see, BERT-wwm and BERT outperform ERNIE again on this long-sequence modeling task, demonstrating their effectiveness.

THUCNews Dev Test
BERT 97.7 (97.4) 97.8 (97.6)
ERNIE 97.6 (97.3) 97.5 (97.3)
BERT-wwm 98.0 (97.6) 97.8 (97.6)
Table 10: Results on THUCNews. Best learning rate: BERT (2e-5), BERT-wwm (2e-5), ERNIE (5e-5).

4 Useful Tips

As we can see, these pre-trained models behave differently on different natural language processing tasks. Due to limited computing resources, we could not carry out exhaustive experiments on these datasets. Nevertheless, we offer some (possibly) useful tips for readers, based solely on the results above and our experience in using these models.

  • The initial learning rate is the most important hyper-parameter (for BERT as well as for other neural networks) and should ALWAYS be tuned for better performance.

  • As shown in the experimental results, BERT and BERT-wwm share almost the same best initial learning rates, so it is reasonable to reuse an initial learning rate tuned for BERT when fine-tuning BERT-wwm. However, ERNIE does not share this characteristic, so it is STRONGLY recommended to tune the learning rate for ERNIE separately.

  • As BERT and BERT-wwm were trained on Wikipedia data, they show relatively better performance on formal text, whereas ERNIE was trained on larger data, including web text, which makes it useful for casual text such as Weibo (microblogs).

  • In long-sequence tasks, such as machine reading comprehension and document classification, we suggest using BERT or BERT-wwm.

  • As these pre-trained models were trained on general-domain data, if the task data differs substantially from the pre-training data (Wikipedia for BERT/BERT-wwm), we suggest performing additional pre-training steps on the task data, as also suggested by Devlin et al. (2019).

  • As there are many choices in the pre-training stage (such as the initial learning rate, number of training steps, warm-up steps, etc.), our implementation may not be optimal even with the same pre-training data. Readers are advised to train their own models if seeking a further boost in performance. If pre-training is not feasible, choose the pre-trained model that was trained on a domain most similar to the downstream task.

  • When dealing with Traditional Chinese text, use BERT or BERT-wwm.

5 Disclaimer

The experiments only represent empirical results under certain conditions and should not be regarded as reflecting the intrinsic nature of the respective models. The results may vary with different random seeds, computing devices, etc. Note that, as we did not test ERNIE on PaddlePaddle, the results in this technical report may not reflect its true performance (although we reproduced several of the results on the datasets its authors had tested).

6 Conclusion

In this technical report, we apply the whole word masking strategy to Chinese BERT and release the pre-trained model to the research community. The experimental results indicate that the proposed pre-trained model yields substantial improvements on various NLP tasks compared with BERT and ERNIE. We hope that the release of these pre-trained models will further accelerate natural language processing research in the Chinese community.

References