Evaluation of GPT-3.5 and GPT-4 for supporting real-world information needs in healthcare delivery

04/26/2023
by Debadutta Dash, et al.

Despite growing interest in using large language models (LLMs) in healthcare, current explorations do not assess the real-world utility and safety of LLMs in clinical settings. Our objective was to determine whether two LLMs can serve information needs submitted by physicians as questions to an informatics consultation service in a safe and concordant manner. Sixty-six questions from an informatics consult service were submitted to GPT-3.5 and GPT-4 via simple prompts. Twelve physicians assessed the LLM responses for possibility of patient harm and concordance with existing reports from the informatics consultation service. Physician assessments were summarized by majority vote. For no question did a majority of physicians deem either LLM's response harmful. For GPT-3.5, responses to 8 questions were concordant with the informatics consult report, 20 were discordant, and 9 could not be assessed; for 29 questions there was no majority on "Agree", "Disagree", or "Unable to assess". For GPT-4, responses to 13 questions were concordant, 15 were discordant, and 3 could not be assessed; for 35 questions there was no majority. Responses from both LLMs were largely devoid of overt harm, but fewer than 20% agreed with an answer from the informatics consultation service, responses contained hallucinated references, and physicians were divided on what constitutes harm. These results suggest that while general-purpose LLMs are able to provide safe and credible responses, they often do not meet the specific information need of a given question. A definitive evaluation of the usefulness of LLMs in healthcare settings will likely require additional research on prompt engineering, calibration, and custom-tailoring of general-purpose models.
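The abstract describes two mechanical steps: submitting each consult question to the models via simple prompts, and summarizing physician ratings by majority vote. As an illustration only, the sketch below shows how such a pipeline might look using the OpenAI Python SDK; the model identifiers, prompt wording, and rating labels here are assumptions for the example, not details taken from the study.

```python
from collections import Counter
from openai import OpenAI  # assumes the openai Python SDK (v1+) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_llm(question: str, model: str = "gpt-4") -> str:
    """Submit a consult question to a chat model with a simple, unengineered prompt."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content


def majority_vote(ratings: list[str]) -> str:
    """Summarize physician ratings; a label wins only if it has a strict majority."""
    label, count = Counter(ratings).most_common(1)[0]
    return label if count > len(ratings) / 2 else "No majority"


# Hypothetical ratings for one LLM response:
print(majority_vote(["Agree", "Agree", "Disagree", "Agree"]))             # -> "Agree"
print(majority_vote(["Agree", "Disagree", "Unable to assess", "Agree"]))  # -> "No majority"
```

A strict-majority rule like this matches the paper's description of summarizing assessments by majority vote: ties and mere pluralities fall into the "no majority" category reported for 29 (GPT-3.5) and 35 (GPT-4) of the questions.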
