Evaluation of AI Chatbots for Patient-Specific EHR Questions

06/05/2023
by Alaleh Hamidi, et al.

This paper investigates the use of artificial intelligence chatbots for patient-specific question answering (QA) from clinical notes, comparing several large language model (LLM)-based systems: ChatGPT (versions 3.5 and 4), Google Bard, and Claude. We evaluate the accuracy, relevance, comprehensiveness, and coherence of the answers each model generates, rated on a 5-point Likert scale over a set of patient-specific questions.


research
11/13/2019

Adapting and evaluating a deep learning language model for clinical why-question answering

Objectives: To adapt and evaluate a deep learning language model for ans...
research
05/17/2018

Annotating Electronic Medical Records for Question Answering

Our research is in the relatively unexplored area of question answering ...
research
05/21/2023

Evaluating Open Question Answering Evaluation

This study focuses on the evaluation of Open Question Answering (Open-QA...
research
05/31/2023

Building Extractive Question Answering System to Support Human-AI Health Coaching Model for Sleep Domain

Non-communicable diseases (NCDs) are a leading cause of global deaths, n...
research
05/27/2023

Answering Unanswered Questions through Semantic Reformulations in Spoken QA

Spoken Question Answering (QA) is a key feature of voice assistants, usu...
research
06/30/2023

Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education

Artificial intelligence is gaining traction in more ways than ever befor...
research
08/21/2019

How Good is Artificial Intelligence at Automatically Answering Consumer Questions Related to Alzheimer's Disease?

Alzheimer's Disease (AD) is the most common type of dementia, comprising...
