Exploring Automated Distractor and Feedback Generation for Math Multiple-choice Questions via In-context Learning

08/07/2023
by   Hunter McNichols, et al.

Multiple-choice questions (MCQs) are ubiquitous at almost all levels of education since they are easy to administer and grade, and they are a reliable format for both assessment and practice. An important aspect of MCQs is the distractors, i.e., incorrect options designed to target specific misconceptions or insufficient knowledge among students. To date, crafting high-quality distractors has largely remained a labor-intensive process for teachers and learning content designers, which limits scalability. In this work, we explore the tasks of automated distractor and corresponding feedback message generation for math MCQs using large language models. We establish a formulation of these two tasks and propose a simple, in-context learning-based solution. Moreover, we explore two non-standard metrics for evaluating the quality of the generated distractors and feedback messages. We conduct extensive experiments on these tasks using a real-world MCQ dataset that contains student response information. Our findings suggest that there is significant room for improvement in automated distractor and feedback generation. We also outline several directions for future work.
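
The abstract does not spell out the in-context learning setup, so the sketch below is only a rough illustration of how a few-shot prompt for joint distractor and feedback generation might be assembled. The instruction text, the worked example, and the fraction-addition misconception are assumptions made for illustration; they are not taken from the paper.

```python
# A minimal, hypothetical sketch of in-context (few-shot) prompting for
# distractor and feedback generation in math MCQs. The example items, the
# prompt wording, and the overall structure are illustrative assumptions,
# not the authors' actual prompt or pipeline.

FEW_SHOT_EXAMPLES = [
    {
        "question": "What is 3/4 + 1/8?",
        "answer": "7/8",
        "distractor": "4/12",
        "feedback": "It looks like you added the numerators and denominators "
                    "separately. Rewrite the fractions with a common "
                    "denominator before adding.",
    },
    # ... additional human-authored (question, answer, distractor, feedback) examples ...
]

INSTRUCTION = (
    "For each math multiple-choice question, generate one plausible incorrect "
    "option (distractor) that reflects a common student misconception, and a "
    "feedback message for a student who selects it."
)


def build_prompt(question: str, answer: str) -> str:
    """Assemble an in-context learning prompt: instruction, worked examples,
    then the target question left for the model to complete."""
    blocks = [INSTRUCTION]
    for ex in FEW_SHOT_EXAMPLES:
        blocks.append(
            f"Question: {ex['question']}\n"
            f"Correct answer: {ex['answer']}\n"
            f"Distractor: {ex['distractor']}\n"
            f"Feedback: {ex['feedback']}"
        )
    blocks.append(
        f"Question: {question}\n"
        f"Correct answer: {answer}\n"
        f"Distractor:"
    )
    return "\n\n".join(blocks)


if __name__ == "__main__":
    # The assembled prompt would be sent to an LLM completion API of choice;
    # here we only print it.
    print(build_prompt("What is 2/3 - 1/6?", "1/2"))
```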

Related research

10/19/2020  Better Distractions: Transformer-based Distractor Generation and Multiple Choice Question Filtering
For the field of education, being able to generate semantically correct ...

04/25/2021  Math Operation Embeddings for Open-ended Solution Analysis and Feedback
Feedback on student answers and even during intermediate steps in their ...

04/13/2023  How Useful are Educational Questions Generated by Large Language Models?
Controllable text generation (CTG) by large language models has a huge p...

07/25/2023  A large language model-assisted education tool to provide feedback on open-ended responses
Open-ended questions are a favored tool among instructors for assessing ...

01/24/2023  Generating High-Precision Feedback for Programming Syntax Errors using Large Language Models
Large language models (LLMs), such as Codex, hold great promise in enhan...

06/22/2023  CamChoice: A Corpus of Multiple Choice Questions and Candidate Response Distributions
Multiple Choice examinations are a ubiquitous form of assessment that is...

07/10/2023  PapagAI: Automated Feedback for Reflective Essays
Written reflective practice is a regular exercise pre-service teachers p...