Probing Pre-Trained Language Models for Disease Knowledge

06/14/2021
by Israa Alghanmi et al.

Pre-trained language models such as ClinicalBERT have achieved impressive results on tasks such as medical Natural Language Inference. At first glance, this may suggest that these models are able to perform medical reasoning tasks, such as mapping symptoms to diseases. However, we find that standard benchmarks such as MedNLI contain relatively few examples that require such forms of reasoning. To better understand the medical reasoning capabilities of existing language models, in this paper we introduce DisKnE, a new benchmark for Disease Knowledge Evaluation. To construct this benchmark, we annotated each positive MedNLI example with the types of medical reasoning that are needed. We then created negative examples by corrupting these positive examples in an adversarial way. Furthermore, we define training-test splits per disease, ensuring that no knowledge about test diseases can be learned from the training data, and we canonicalize the formulation of the hypotheses to avoid the presence of artefacts. This leads to a number of binary classification problems, one for each type of reasoning and each disease. When analysing pre-trained models for the clinical/biomedical domain on the proposed benchmark, we find that their performance drops considerably.
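The per-disease train-test splits described above can be sketched in a few lines. This is a minimal illustration, not the authors' actual code: the function name, the list-of-dicts data layout, and the `disease` field are all hypothetical, but the logic follows the protocol the abstract describes, where every example mentioning the held-out test disease is excluded from training so no knowledge about it can leak.

```python
from collections import defaultdict

def per_disease_splits(examples):
    """Build one train/test split per disease (hypothetical sketch).

    For each disease, the test set holds all examples tagged with that
    disease, and the training set holds everything else, so the model
    cannot learn about the test disease from the training data.
    """
    by_disease = defaultdict(list)
    for ex in examples:
        by_disease[ex["disease"]].append(ex)

    splits = {}
    for disease in by_disease:
        test = by_disease[disease]
        train = [ex for ex in examples if ex["disease"] != disease]
        splits[disease] = {"train": train, "test": test}
    return splits

# Toy data: each item stands in for a (premise, hypothesis) pair with a
# binary entailment label, tagged with its target disease.
data = [
    {"disease": "asthma", "label": 1, "text": "..."},
    {"disease": "asthma", "label": 0, "text": "..."},
    {"disease": "anaemia", "label": 1, "text": "..."},
]
splits = per_disease_splits(data)
```

In the actual benchmark this splitting is combined with the per-reasoning-type annotations, yielding one binary classification problem per disease and reasoning type.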


Related research

- Counteracts: Testing Stereotypical Representation in Pre-trained Language Models (01/11/2023)
  Language models have demonstrated strong performance on various natural ...

- Common Sense Knowledge Learning for Open Vocabulary Neural Reasoning: A First View into Chronic Disease Literature (11/27/2021)
  In this paper, we address reasoning tasks from open vocabulary Knowledge...

- SETI: Systematicity Evaluation of Textual Inference (05/24/2023)
  We propose SETI (Systematicity Evaluation of Textual Inference), a novel...

- FERMAT: An Alternative to Accuracy for Numerical Reasoning (05/27/2023)
  While pre-trained language models achieve impressive performance on vari...

- Text Mining to Identify and Extract Novel Disease Treatments From Unstructured Datasets (10/22/2020)
  Objective: We aim to learn potential novel cures for diseases from unstr...

- NLI Data Sanity Check: Assessing the Effect of Data Corruption on Model Performance (04/10/2021)
  Pre-trained neural language models give high performance on natural lang...

- An Automatic Evaluation Framework for Multi-turn Medical Consultations Capabilities of Large Language Models (09/05/2023)
  Large language models (LLMs) have achieved significant success in intera...
