Large Language Models Perform Diagnostic Reasoning

07/18/2023
by Cheng-Kuang Wu, et al.

We explore the extension of chain-of-thought (CoT) prompting to medical reasoning for the task of automatic diagnosis. Motivated by doctors' underlying reasoning process, we present Diagnostic-Reasoning CoT (DR-CoT). Empirical results demonstrate that by simply prompting large language models trained only on a general text corpus with two DR-CoT exemplars, diagnostic accuracy improves by 15% over standard prompting, and the gap reaches a more pronounced 18% in out-domain settings. Our findings suggest that expert-knowledge reasoning in large language models can be elicited through proper prompting.
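As a rough illustration of the prompting recipe the abstract describes (prepending doctor-style reasoning exemplars before a new patient description and letting the model continue the reasoning chain), here is a minimal sketch assuming the OpenAI Python client (openai>=1.0). The exemplar text, model name, and helper function are illustrative assumptions, not the authors' actual DR-CoT prompts, and the paper uses two exemplars rather than the single one shown here.

```python
# Minimal sketch of DR-CoT-style few-shot prompting, assuming the OpenAI
# Python client (openai>=1.0). The exemplar below is illustrative only; it is
# not one of the paper's DR-CoT exemplars, and the model name is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One hand-written exemplar that walks through a doctor-like reasoning chain:
# collect findings, form a differential, then commit to a diagnosis.
DR_COT_EXEMPLAR = """\
Patient: I have had a fever, a dry cough, and shortness of breath for three days.
Doctor's reasoning: Fever, dry cough, and dyspnea together suggest a lower
respiratory infection. Possible diagnoses include viral pneumonia, bacterial
pneumonia, and bronchitis. The non-productive cough and acute onset make a
viral cause more likely than a typical bacterial pneumonia.
Diagnosis: viral pneumonia (suspected).
"""

def dr_cot_diagnose(patient_description: str, model: str = "gpt-4o-mini") -> str:
    """Prompt a general-purpose LLM with a DR-CoT-style exemplar, then ask it
    to reason step by step about a new patient before naming a diagnosis."""
    messages = [
        {"role": "system",
         "content": "You are assisting with automatic diagnosis. "
                    "Reason step by step like a doctor before giving a diagnosis."},
        {"role": "user", "content": DR_COT_EXEMPLAR},
        {"role": "user",
         "content": f"Patient: {patient_description}\nDoctor's reasoning:"},
    ]
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content

if __name__ == "__main__":
    print(dr_cot_diagnose(
        "I have a severe headache on one side, nausea, and sensitivity to light."
    ))
```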

Related research

06/07/2023
Multi-Task Training with In-Domain Language Models for Diagnostic Reasoning
Generative artificial intelligence (AI) is a promising direction for aug...

09/29/2022
DR.BENCH: Diagnostic Reasoning Benchmark for Clinical Natural Language Processing
The meaningful use of electronic health records (EHR) continues to progr...

08/13/2023
Diagnostic Reasoning Prompts Reveal the Potential for Large Language Model Interpretability in Medicine
One of the major barriers to using large language models (LLMs) in medic...

08/22/2023
Diversity Measures: Domain-Independent Proxies for Failure in Language Model Queries
Error prediction in large language models often relies on domain-specifi...

08/28/2023
Leveraging A Medical Knowledge Graph into Large Language Models for Diagnosis Prediction
Electronic Health Records (EHRs) and routine documentation practices pla...

04/10/2020
On the Existence of Tacit Assumptions in Contextualized Language Models
Humans carry stereotypic tacit assumptions (STAs) (Prince, 1978), or pro...

06/08/2023
Interpretable Medical Diagnostics with Structured Data Extraction by Large Language Models
Tabular data is often hidden in text, particularly in medical diagnostic...
