Large Language Models Encode Clinical Knowledge

12/26/2022
by Karan Singhal, et al.

Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion-parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical License Exam questions), surpassing prior state-of-the-art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this, we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLMs for clinical applications.
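The "combination of prompting strategies" referenced above includes techniques such as few-shot prompting (prepending worked exemplars to the question) and self-consistency (sampling several answers and taking a majority vote). A minimal sketch of those two ideas, with a stand-in sampler in place of a real LLM call; all function names here are illustrative, not from the paper's codebase:

```python
from collections import Counter

def build_few_shot_prompt(exemplars, question):
    """Assemble a few-shot prompt: worked exemplars, then the new question."""
    parts = [f"Question: {q}\nAnswer: {a}" for q, a in exemplars]
    parts.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(parts)

def self_consistency(sample_answer, prompt, n_samples=5):
    """Sample several answers from the model and return the majority vote."""
    votes = Counter(sample_answer(prompt) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

# Illustrative stand-in for stochastic LLM sampling: yields canned answers.
def fake_sampler(answers):
    it = iter(answers)
    return lambda prompt: next(it)

exemplars = [("2 + 2 = ?", "4"), ("3 + 5 = ?", "8")]
prompt = build_few_shot_prompt(exemplars, "7 + 6 = ?")
sampler = fake_sampler(["13", "14", "13", "13", "12"])
print(self_consistency(sampler, prompt, n_samples=5))  # majority answer: 13
```

In practice `sample_answer` would call the model with a nonzero sampling temperature, so repeated calls can disagree and the vote filters out unstable reasoning paths.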

Related research:

- Aligning Large Language Models for Clinical Tasks (09/06/2023)
- Capabilities of GPT-4 on Medical Challenge Problems (03/20/2023)
- Beyond Classification: Financial Reasoning in State-of-the-Art Language Models (04/30/2023)
- LLMMaps – A Visual Metaphor for Stratified Evaluation of Large Language Models (04/02/2023)
- Do Large Language Models Know What They Don't Know? (05/29/2023)
- Understanding Social Reasoning in Language Models with Language Models (06/21/2023)
- Bot or Human? Detecting ChatGPT Imposters with A Single Question (05/10/2023)
