ExplainCPE: A Free-text Explanation Benchmark of Chinese Pharmacist Examination

05/22/2023
by Dongfang Li, et al.

As ChatGPT and GPT-4 spearhead the development of Large Language Models (LLMs), more researchers are investigating their performance across various tasks. However, more research is needed on the interpretability capabilities of LLMs, that is, the ability to generate reasons after an answer has been given. Existing explanation datasets consist mostly of English-language general knowledge questions, which leads to insufficient thematic and linguistic diversity. To address the language bias and the lack of medical resources among rationale-generation QA datasets, we present ExplainCPE (over 7k instances), a challenging medical benchmark in Simplified Chinese. We analyzed the errors of ChatGPT and GPT-4, pointing out the limitations of current LLMs in text understanding and computational reasoning. During the experiments, we also found that different LLMs have different preferences for in-context learning. ExplainCPE presents a significant challenge, but its potential for further investigation is promising, and it can be used to evaluate a model's ability to generate explanations. AI safety and trustworthiness need more attention, and this work takes a first step toward exploring the medical interpretability of LLMs. The dataset is available at https://github.com/HITsz-TMG/ExplainCPE.

