Assessing the efficacy of large language models in generating accurate teacher responses

07/09/2023
by Yann Hicke et al.

Tack et al. (2023) organized a shared task, hosted at the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), on generating teacher language in educational dialogues. Following the structure of the shared task, this study assesses the ability of large language models to provide informative and helpful responses to students, thereby simulating the role of a knowledgeable teacher. To this end, we present an extensive evaluation of several benchmark generative models, including GPT-4 (few-shot, in-context learning), fine-tuned GPT-2, and fine-tuned DialoGPT. Additionally, to optimize for pedagogical quality, we fine-tune the Flan-T5 model using reinforcement learning. Our experimental findings on a subset of the Teacher-Student Chatroom Corpus indicate that GPT-4 outperforms the fine-tuned models, as measured by BERTScore and DialogRPT. We hypothesize that several dataset characteristics, including sampling, representativeness, and dialog completeness, pose significant challenges to fine-tuning and contribute to the poor generalizability of the fine-tuned models. Finally, we note the need to evaluate these generative models with a metric that relies not only on dialog coherence and a matched language-modeling distribution but also on the model's ability to demonstrate pedagogical skill.
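As a hypothetical illustration of the few-shot, in-context setup mentioned for GPT-4, a prompt can be assembled from worked example dialogues followed by the dialogue whose next teacher turn is to be generated. The instruction text, helper name, and example turns below are illustrative assumptions, not taken from the paper or the corpus:

```python
# Illustrative sketch of few-shot prompt construction for a chat-style LLM.
# All strings here are made-up examples, not material from the shared task.

def build_fewshot_prompt(examples, dialogue):
    """examples: list of (history, teacher_reply) pairs;
    dialogue: list of (speaker, utterance) turns ending with a student turn."""
    parts = [
        "You are an experienced language teacher. Given the dialogue, "
        "write the next teacher response."
    ]
    for history, reply in examples:
        for speaker, text in history:
            parts.append(f"{speaker}: {text}")
        parts.append(f"teacher: {reply}")
        parts.append("---")
    for speaker, text in dialogue:
        parts.append(f"{speaker}: {text}")
    parts.append("teacher:")  # cue the model to continue as the teacher
    return "\n".join(parts)

demo_examples = [
    ([("student", "I goed to the park yesterday.")],
     "Nice! Small correction: we say 'I went to the park yesterday.'"),
]
prompt = build_fewshot_prompt(
    demo_examples,
    [("student", "She don't like coffee.")],
)
print(prompt.splitlines()[-1])  # -> teacher:
```

The returned string would then be sent to the model as-is; the trailing `teacher:` cue constrains the completion to the teacher role.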


