Measuring Massive Multitask Chinese Understanding

04/25/2023
by Hui Zeng et al.

The development of large-scale Chinese language models is flourishing, yet corresponding capability assessments are lacking. We therefore propose a test to measure the multitask accuracy of large Chinese language models. The test covers four major domains: medicine, law, psychology, and education, with 15 subtasks in medicine and 8 subtasks in education. We found that the best-performing models in the zero-shot setting outperformed the worst-performing models by nearly 22 percentage points on average. Across the four major domains, no model exceeded an average zero-shot accuracy of 0.5. Among the subdomains, only GPT-3.5-turbo reached a zero-shot accuracy of 0.703, on clinical medicine, which was the highest accuracy of any model on any subtask. All models performed poorly in the legal domain, where the highest zero-shot accuracy was only 0.259. By comprehensively evaluating the breadth and depth of knowledge across multiple disciplines, this test can more accurately identify the shortcomings of the models.
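The abstract reports per-subtask zero-shot accuracy on multiple-choice questions. As a rough illustration only, the sketch below shows one common way such accuracy is computed; the record fields (subject, question, choices, answer), the prompt format, and the query_model callable are all assumptions for illustration, not the paper's actual evaluation harness.

    # Minimal sketch of per-subtask zero-shot multiple-choice accuracy.
    # Field names and query_model() are hypothetical; the paper does not
    # publish its evaluation code in this abstract.
    from collections import defaultdict

    def zero_shot_accuracy(items, query_model):
        """items: iterable of dicts such as
        {"subject": "clinical_medicine", "question": "...",
         "choices": ["A. ...", "B. ...", "C. ...", "D. ..."], "answer": "A"}
        query_model: callable mapping a prompt string to a letter "A"-"D".
        Returns a dict of per-subject accuracy in [0, 1]."""
        correct = defaultdict(int)
        total = defaultdict(int)
        for item in items:
            # Zero-shot: no in-context examples, only the question and options.
            prompt = item["question"] + "\n" + "\n".join(item["choices"]) + "\nAnswer:"
            prediction = query_model(prompt).strip().upper()[:1]
            total[item["subject"]] += 1
            if prediction == item["answer"]:
                correct[item["subject"]] += 1
        return {s: correct[s] / total[s] for s in total}

Averaging the returned per-subject accuracies over a domain's subtasks would give domain-level scores of the kind summarized above (e.g., the 0.703 figure for clinical medicine).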


research
10/01/2022

Zemi: Learning Zero-Shot Semi-Parametric Language Models from Multiple Tasks

Although large language models have achieved impressive zero-shot abilit...
research
09/07/2020

Measuring Massive Multitask Language Understanding

We propose a new test to measure a text model's multitask accuracy. The ...
research
03/23/2021

Detecting Hate Speech with GPT-3

Sophisticated language models such as OpenAI's GPT-3 can generate hatefu...
research
05/23/2023

ZeroSCROLLS: A Zero-Shot Benchmark for Long Text Understanding

We introduce ZeroSCROLLS, a zero-shot benchmark for natural language und...
research
12/29/2022

GPT Takes the Bar Exam

Nearly all jurisdictions in the United States require a professional lic...
research
03/21/2023

Large Language Models Can Be Used to Scale the Ideologies of Politicians in a Zero-Shot Learning Setting

The aggregation of knowledge embedded in large language models (LLMs) ho...
research
05/26/2023

Can large language models generate salient negative statements?

We examine the ability of large language models (LLMs) to generate salie...
