WeLM: A Well-Read Pre-trained Language Model for Chinese

09/21/2022
by Hui Su, et al.

Large language models pre-trained with self-supervised learning have demonstrated impressive zero-shot generalization capabilities on a wide spectrum of tasks. In this work, we present WeLM: a well-read pre-trained language model for Chinese that is able to seamlessly perform different types of tasks with zero or few-shot demonstrations. WeLM is trained with 10B parameters by "reading" a curated high-quality corpus covering a wide range of topics. We show that WeLM is equipped with broad knowledge of various domains and languages. On 18 monolingual (Chinese) tasks, WeLM significantly outperforms existing pre-trained models of similar size and matches the performance of models up to 25 times larger. WeLM also exhibits strong capabilities in multilingual and code-switching understanding, outperforming existing multilingual language models pre-trained on 30 languages. Furthermore, we collected human-written prompts for a large set of supervised datasets in Chinese and fine-tuned WeLM with multi-prompted training. The resulting model attains strong generalization on unseen types of tasks and outperforms the unsupervised WeLM in zero-shot learning. Finally, we demonstrate that WeLM has basic skills in explaining and calibrating its own decisions, which can be promising directions for future research. Our models can be accessed through https://welm.weixin.qq.com/docs/api/.
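As a rough illustration of how few-shot prompting against a hosted model such as WeLM might look, the sketch below assembles a handful of in-context demonstrations and sends them to an HTTP completion endpoint. The endpoint path, header format, and JSON field names are assumptions made for illustration only; the official documentation at the URL above defines the actual interface and authentication scheme.

```python
# Hypothetical sketch: few-shot prompting via an HTTP completion endpoint.
# Endpoint path, header format, and JSON fields are assumptions; see
# https://welm.weixin.qq.com/docs/api/ for the real request schema.
import requests

API_URL = "https://welm.weixin.qq.com/v1/completions"  # assumed endpoint
API_TOKEN = "YOUR_TOKEN_HERE"                           # assumed auth token

# Few-shot prompt: two in-context demonstrations followed by the query.
prompt = (
    "Translate the sentence into English.\n"
    "Chinese: 今天天气很好。\nEnglish: The weather is nice today.\n"
    "Chinese: 我喜欢读书。\nEnglish: I like reading.\n"
    "Chinese: 这个模型表现很出色。\nEnglish:"
)

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_TOKEN}"},            # assumed header
    json={"prompt": prompt, "max_tokens": 32, "temperature": 0.0},  # assumed fields
    timeout=30,
)
response.raise_for_status()
print(response.json())  # expected to contain the model's continuation
```

The same pattern covers zero-shot use: drop the demonstrations and keep only the task instruction and query, letting the model rely on what it learned during pre-training.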


