Exploring Self-Reinforcement for Improving Learnersourced Multiple-Choice Question Explanations with Large Language Models

09/19/2023
by Qiming Bao, et al.

Learnersourcing involves students generating and sharing learning resources with their peers. When learnersourcing multiple-choice questions, creating explanations for the generated questions is a crucial step, as it facilitates a deeper understanding of the related concepts. However, students often struggle to craft effective explanations due to limited subject understanding and a tendency to merely restate the question stem, distractors, and correct answer. To help scaffold this task, we propose a self-reinforcement large-language-model framework that generates and evaluates explanations automatically. The framework comprises three modules: it generates student-aligned explanations, evaluates these explanations to ensure their quality, and iteratively enhances them. If an explanation's evaluation score falls below a defined threshold, the framework refines and reassesses the explanation until the threshold is met. Importantly, our framework emulates the manner in which students compose explanations at the relevant grade level. For evaluation, we had a human subject-matter expert compare explanations generated by students against those created by the open-source large language model Vicuna-13B, by a version of Vicuna-13B fine-tuned using our method, and by GPT-4. We observed that, compared to the other large language models, GPT-4 exhibited a higher level of creativity in generating explanations. We also found that the human expert ranked GPT-4's explanations higher than both those created by the other models and the original student-created explanations. Our findings represent a significant advancement in enriching the learnersourcing experience for students and enhancing the capabilities of large language models in educational applications.
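The generate-evaluate-refine cycle described above can be sketched as a simple control loop. This is a minimal illustration only: the callable names (`generate`, `evaluate`, `refine`), the 0-to-1 scoring scale, the threshold value, and the round budget are all assumptions for the sketch, not details taken from the paper.

```python
def self_reinforce(question, generate, evaluate, refine,
                   threshold=0.8, max_rounds=3):
    """Generate an explanation for a question, then iteratively refine it
    until its evaluation score meets the threshold or the round budget
    runs out. The three callables stand in for the framework's three
    LLM-backed modules (generation, evaluation, refinement)."""
    explanation = generate(question)
    score = evaluate(question, explanation)
    for _ in range(max_rounds):
        if score >= threshold:
            break  # quality threshold met; stop refining
        # Feed the current draft and its score back for another pass
        explanation = refine(question, explanation, score)
        score = evaluate(question, explanation)
    return explanation, score
```

In practice each callable would wrap a prompt to a model such as Vicuna-13B or GPT-4; the loop structure mirrors the abstract's description of reassessing any explanation that scores below the threshold.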

