StyleBERT: Chinese pretraining by font style information

02/21/2022
by Chao Lv, et al.

Given the success of English pre-trained language models on downstream tasks, pre-trained Chinese language models are likewise necessary for strong performance on Chinese NLP tasks. Unlike English, Chinese characters carry additional information, such as glyph structure. In this article, we therefore propose StyleBERT, a Chinese pre-trained language model that incorporates several embedding channels to enrich the model's understanding of the language: word, pinyin, five-stroke (Wubi), and chaizi (character decomposition). Experiments show that the model achieves strong performance on a wide range of Chinese NLP tasks.
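To make the fusion of the four embedding channels concrete, here is a minimal sketch, not the authors' implementation: a module that looks up character, pinyin, Wubi, and chaizi ids in separate embedding tables and sums them. The vocabulary sizes, the sum-based fusion, the layer normalization, and the hidden size of 768 are all assumptions for illustration.

import torch
import torch.nn as nn

class StyleFusionEmbedding(nn.Module):
    """Hypothetical fusion of the four channels the abstract names."""

    def __init__(self, char_vocab, pinyin_vocab, wubi_vocab, chaizi_vocab,
                 hidden_size=768):
        super().__init__()
        # One lookup table per information channel (sizes are assumptions).
        self.char_emb = nn.Embedding(char_vocab, hidden_size)
        self.pinyin_emb = nn.Embedding(pinyin_vocab, hidden_size)
        self.wubi_emb = nn.Embedding(wubi_vocab, hidden_size)
        self.chaizi_emb = nn.Embedding(chaizi_vocab, hidden_size)
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, char_ids, pinyin_ids, wubi_ids, chaizi_ids):
        # Sum the per-channel embeddings; concatenation followed by a
        # linear projection would be an equally plausible fusion choice.
        fused = (self.char_emb(char_ids) + self.pinyin_emb(pinyin_ids)
                 + self.wubi_emb(wubi_ids) + self.chaizi_emb(chaizi_ids))
        return self.norm(fused)

# Example: a batch of 2 sequences of length 5, each token described by
# four parallel id streams (all ids below are dummy values).
emb = StyleFusionEmbedding(21128, 1500, 2000, 3000)
ids = torch.randint(0, 1500, (2, 5))
out = emb(ids, ids, ids, ids)
print(out.shape)  # torch.Size([2, 5, 768])

The fused output would then feed a standard BERT encoder in place of its usual token embeddings; the paper itself should be consulted for the actual fusion design.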


Related research

12/01/2020 · CPM: A Large-scale Generative Chinese Pre-trained Language Model
11/17/2020 · MVP-BERT: Redesigning Vocabularies for Chinese BERT and Multi-Vocab Pretraining
06/30/2021 · ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information
12/08/2022 · Investigating Glyph Phonetic Information for Chinese Spell Checking: What Works and What's Next
10/05/2022 · GLM-130B: An Open Bilingual Pre-trained Model
04/29/2020 · GePpeTto Carves Italian into a Language Model
08/01/2023 · JIANG: Chinese Open Foundation Language Model
