Comparative Analysis of GPT-4 and Human Graders in Evaluating Praise Given to Students in Synthetic Dialogues

07/05/2023
by Dollaya Hirunyasiri, et al.

Research suggests that providing specific and timely feedback to human tutors enhances their performance. However, assessing tutor performance is time-consuming for human evaluators, which makes such feedback difficult to deliver at scale. Large language models, such as the AI chatbot ChatGPT, hold potential for offering constructive feedback to tutors in practical settings. Nevertheless, the accuracy of AI-generated feedback remains uncertain, and little research has investigated the ability of models like ChatGPT to deliver effective feedback. In this work-in-progress, we evaluate 30 dialogues generated by GPT-4 in a tutor-student setting. We use two prompting approaches, zero-shot chain of thought and few-shot chain of thought, to identify specific components of effective praise based on five criteria. These approaches are then compared against human graders for accuracy. Our goal is to assess the extent to which GPT-4 can accurately identify each praise criterion. We found that the zero-shot and few-shot chain-of-thought approaches yield comparable results. GPT-4 performs moderately well in identifying instances where the tutor offers specific and immediate praise. However, GPT-4 underperforms in identifying the tutor's ability to deliver sincere praise, particularly in the zero-shot setting, where examples of sincere tutor praise statements were not provided. Future work will focus on enhancing prompt engineering, developing a more general tutoring rubric, and evaluating our method on real-life tutoring dialogues.
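
The abstract contrasts zero-shot and few-shot chain-of-thought prompting for grading praise in tutoring dialogues. The sketch below illustrates how such a grading call might look with the OpenAI chat API; the criterion wording, prompt templates, example dialogue, and helper names are illustrative assumptions, not the authors' actual rubric or code.

```python
# Minimal sketch of the two prompting set-ups described in the abstract:
# zero-shot chain of thought vs. few-shot chain of thought, applied to a
# tutor-student dialogue to decide whether the tutor's praise meets one
# criterion (here, "sincere"). Prompt text and names are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CRITERION = "The tutor's praise is sincere (genuine, not exaggerated or generic)."

ZERO_SHOT_TEMPLATE = (
    "You are grading a tutoring dialogue.\n"
    "Criterion: {criterion}\n\n"
    "Dialogue:\n{dialogue}\n\n"
    "Let's think step by step, then answer 'Yes' or 'No' on the last line."
)

# The few-shot variant prepends a worked example (a labeled dialogue with
# reasoning) before the dialogue to be graded.
FEW_SHOT_EXAMPLE = (
    "Dialogue:\nTutor: Great job walking through each step of the equation!\n"
    "Reasoning: The praise names the student's concrete effort, so it reads as genuine.\n"
    "Answer: Yes\n\n"
)


def grade_dialogue(dialogue: str, few_shot: bool = False) -> str:
    """Return the model's step-by-step grading of one dialogue."""
    prompt = ZERO_SHOT_TEMPLATE.format(criterion=CRITERION, dialogue=dialogue)
    if few_shot:
        prompt = FEW_SHOT_EXAMPLE + prompt
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output simplifies comparison with human graders
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    sample = "Tutor: Good work! Student: Thanks."
    print(grade_dialogue(sample, few_shot=True))
```

In this sketch, the only difference between the two conditions is whether labeled examples are prepended to the prompt, which mirrors the abstract's observation that sincerity judgments suffer most in the zero-shot condition, where no example sincere praise statements are supplied.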
