Putting ChatGPT's Medical Advice to the (Turing) Test

01/24/2023
by Oded Nov, et al.

Objective: Assess the feasibility of using ChatGPT or a similar AI-based chatbot for patient-provider communication.

Participants: A US representative sample of 430 study participants aged 18 and above. 53.2% of those analyzed were women; their average age was 47.1.

Exposure: Ten representative, non-administrative patient-provider interactions were extracted from the EHR. Patients' questions were entered into ChatGPT with a request that the chatbot respond using approximately the same word count as the human provider's response. In the survey, each patient question was followed by either a provider- or a ChatGPT-generated response. Participants were informed that five responses were provider-generated and five were chatbot-generated. Participants were asked, and financially incentivized, to correctly identify the source of each response. Participants were also asked about their trust in chatbots' functions in patient-provider communication, using a Likert scale of 1-5.

Results: The correct classification of responses ranged between 49.0% and 85.7% across questions. On average, chatbot responses were correctly identified 65.5% of the time, and provider responses were correctly identified 65.1% of the time. On average, patients' trust in chatbots' functions was weakly positive (mean Likert score: 3.4), with lower trust as the health-related complexity of the task in the question increased.

Conclusions: ChatGPT responses to patient questions were only weakly distinguishable from provider responses. Laypeople appear to trust the use of chatbots to answer lower-risk health questions.
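The paper does not publish code, but as a rough illustration of the two quantitative steps described above (prompting the chatbot to answer in roughly the provider's word count, and tabulating correct-classification rates and Likert trust ratings), here is a minimal Python sketch. The prompt wording, the `build_prompt` helper, and the sample `judgments` records are hypothetical stand-ins, not material from the study.

```python
# Illustrative sketch only: prompt wording, helper names, and data are assumptions.
from statistics import mean

def build_prompt(patient_question: str, provider_answer: str) -> str:
    """Ask the chatbot to answer in roughly the provider's word count,
    mirroring the study's length-matching constraint."""
    target_words = len(provider_answer.split())
    return (
        f"A patient asks: {patient_question}\n"
        f"Respond as their healthcare provider in approximately "
        f"{target_words} words."
    )

# Hypothetical survey records: one row per (participant, question) judgment.
# 'source' is the true author of the shown response, 'guess' is the
# participant's classification, 'trust' is a 1-5 Likert rating.
judgments = [
    {"question": 1, "source": "chatbot",  "guess": "chatbot",  "trust": 4},
    {"question": 1, "source": "provider", "guess": "chatbot",  "trust": 3},
    {"question": 2, "source": "chatbot",  "guess": "provider", "trust": 3},
    {"question": 2, "source": "provider", "guess": "provider", "trust": 4},
]

def correct_rate(rows):
    """Share of judgments where the participant identified the true source."""
    return mean(r["guess"] == r["source"] for r in rows)

overall = correct_rate(judgments)
chatbot_only = correct_rate([r for r in judgments if r["source"] == "chatbot"])
provider_only = correct_rate([r for r in judgments if r["source"] == "provider"])
mean_trust = mean(r["trust"] for r in judgments)

print(f"overall correct classification: {overall:.1%}")
print(f"chatbot responses identified:   {chatbot_only:.1%}")
print(f"provider responses identified:  {provider_only:.1%}")
print(f"mean Likert trust (1-5):        {mean_trust:.1f}")
```

With the real survey data substituted for the sample records, the same tabulation would produce the per-source identification rates and mean trust score reported in the Results.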



