Investigating Crowdsourcing to Generate Distractors for Multiple-Choice Assessments

09/10/2019
by Travis Scheponik, et al.

We present and analyze results from a pilot study that explores how crowdsourcing can be used to generate distractors (incorrect answer choices) for multiple-choice concept inventories (conceptual tests of understanding). To our knowledge, we are the first to propose and study this approach. Using Amazon Mechanical Turk, we collected approximately 180 open-ended responses to several question stems from the Cybersecurity Concept Inventory of the Cybersecurity Assessment Tools (CATS) Project and from the Digital Logic Concept Inventory. We generated preliminary distractors by filtering the responses, grouping similar responses, selecting the four most frequent groups, and refining a representative distractor for each group. We analyzed our data in two ways. First, we compared the responses and resulting distractors with those from the aforementioned inventories. Second, we returned to Amazon Mechanical Turk to obtain feedback on the resulting draft test items (including the new distractors) from additional subjects. Challenges in using crowdsourcing include controlling the selection of subjects and filtering out responses that do not reflect genuine effort. Despite these challenges, our results suggest that crowdsourcing can be a very useful tool for generating effective distractors (those attractive to subjects who do not understand the targeted concept). Our results also suggest that this method is faster, easier, and cheaper than the traditional approach, in which one or more experts draft distractors, often building on talk-aloud interviews with subjects to uncover their misconceptions. These results are significant because generating effective distractors is one of the most difficult steps in creating multiple-choice assessments.
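To make the filter/group/select pipeline above concrete, here is a minimal Python sketch using only the standard library. The abstract does not say how responses were filtered or grouped, so the effort filter, the string-similarity measure, the 0.6 threshold, and all function names below are illustrative assumptions, not the study's actual procedure.

```python
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.6  # hypothetical cutoff; the paper does not specify one

def is_genuine(response: str) -> bool:
    """Crude effort filter: drop empty or very short free-text answers."""
    return len(response.split()) >= 3

def group_similar(responses):
    """Greedily group responses whose normalized text is similar."""
    groups = []  # each group is a list of raw responses
    for r in filter(is_genuine, responses):
        for group in groups:
            ratio = SequenceMatcher(None, r.lower(), group[0].lower()).ratio()
            if ratio >= SIMILARITY_THRESHOLD:
                group.append(r)
                break
        else:
            groups.append([r])  # no close match found: start a new group
    return groups

def draft_distractors(responses, k=4):
    """Return one representative answer from each of the k largest groups."""
    groups = sorted(group_similar(responses), key=len, reverse=True)
    return [g[0] for g in groups[:k]]

if __name__ == "__main__":
    answers = [
        "the key is too short",
        "The key is too short.",
        "asdf",                    # low-effort response, filtered out
        "it reuses the same key",
    ]
    print(draft_distractors(answers))
```

Note that this sketch only surfaces candidate groups; the "refining a representative distractor" step described in the abstract remains a matter of human judgment.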

Related Research

04/10/2020
Experiences and Lessons Learned Creating and Validating Concept Inventories for Cybersecurity
We reflect on our ongoing journey in the educational Cybersecurity Asses...

01/26/2019
The CATS Hackathon: Creating and Refining Test Items for Cybersecurity Concept Inventories
For two days in February 2018, 17 cybersecurity educators and profession...

12/20/2020
Exploring Effectiveness of Inter-Microtask Qualification Tests in Crowdsourcing
Qualification tests in crowdsourcing are often used to pre-filter worker...

11/12/2014
Collecting Image Description Datasets using Crowdsourcing
We describe our two new datasets with images described by humans. Both t...

10/16/2012
Crowdsourcing Control: Moving Beyond Multiple Choice
To ensure quality results from crowdsourced tasks, requesters often aggr...

11/05/2020
Challenges and strategies for running controlled crowdsourcing experiments
This paper reports on the challenges and lessons we learned while runnin...

07/22/2020
Body sway responses to pseudorandom support surface translations of vestibular loss subjects resemble those of vestibular able subjects
Body sway responses evoked by a horizontal acceleration of a level and f...