Learning to Reuse Distractors to Support Multiple Choice Question Generation in Education

10/25/2022
by Semere Kiros Bitew, et al.

Multiple-choice questions (MCQs) are widely used in digital learning systems because they allow the assessment process to be automated. However, with the increased digital literacy of students and the advent of social media platforms, MCQ tests are widely shared online, and teachers are continuously challenged to create new questions, an expensive and time-consuming task. A particularly sensitive aspect of MCQ creation is devising relevant distractors, i.e., wrong answers that are not easily identifiable as wrong. This paper studies how a large existing set of manually created answers and distractors for questions across a variety of domains, subjects, and languages can be leveraged to help teachers create new MCQs through the smart reuse of existing distractors. We built several data-driven models based on context-aware question and distractor representations, and compared them with static feature-based models. The proposed models are evaluated with automated metrics and in a realistic user test with teachers. Both automatic and human evaluations indicate that context-aware models consistently outperform the static feature-based approach. For our best-performing context-aware model, on average 3 of the 10 distractors shown to teachers were rated as high-quality distractors. We create a performance benchmark, and make it public, to enable comparison between different approaches and to introduce a more standardized evaluation of the task. The benchmark contains a test set of 298 educational questions covering multiple subjects and languages, and a 77k multilingual pool of distractor vocabulary for future research.
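The paper's actual models are not reproduced here, but the distractor-reuse idea can be illustrated with a minimal sketch: given a new question and its correct answer, rank distractors from an existing pool by how similar the new question is to the question each distractor was originally written for. The sketch below uses a toy bag-of-words cosine similarity as a stand-in for the paper's learned context-aware representations; the function names and the example pool are invented for illustration.

```python
import math
import re
from collections import Counter


def embed(text):
    # Toy bag-of-words vector; the paper uses learned context-aware
    # representations, which this stand-in does not attempt to replicate.
    return Counter(re.findall(r"\w+", text.lower()))


def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def rank_reused_distractors(question, answer, pool, k=10):
    """Rank distractors for reuse: each pool entry is a
    (source_question, distractor) pair, scored by the similarity
    between the new question's context and the question the
    distractor was originally authored for."""
    ctx = embed(question + " " + answer)
    scored = [
        (cosine(ctx, embed(src_q)), d)
        for src_q, d in pool
        if d.lower() != answer.lower()  # never propose the correct answer
    ]
    scored.sort(key=lambda x: x[0], reverse=True)
    return [d for _, d in scored[:k]]


# Invented toy pool of previously authored (question, distractor) pairs.
pool = [
    ("What is the capital of Spain?", "Barcelona"),
    ("What is the capital of Germany?", "Munich"),
    ("Which planet is closest to the Sun?", "Venus"),
]
print(rank_reused_distractors("What is the capital of France?", "Paris", pool, k=2))
# → ['Barcelona', 'Munich']
```

Distractors from lexically similar source questions rank highest, while the off-topic astronomy distractor is pushed to the bottom; in the paper, learned representations play this similarity role across domains and languages.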


