Leveraging Human Feedback to Scale Educational Datasets: Combining Crowdworkers and Comparative Judgement

05/22/2023
by Owen Henkel, Libby Hills, et al.

Machine learning models have many potentially beneficial applications in education settings, but a key barrier to their development is securing enough data to train them. Labelling educational data has traditionally relied on highly skilled raters using complex, multi-class rubrics, making the process expensive and difficult to scale. An alternative, more scalable approach could be to use non-expert crowdworkers to evaluate student work; however, maintaining sufficiently high levels of accuracy and inter-rater reliability with non-expert workers is challenging. This paper reports on two experiments investigating the use of non-expert crowdworkers and comparative judgement to evaluate complex student data. Crowdworkers were hired to evaluate student responses to open-ended reading comprehension questions and were randomly assigned to one of two conditions: the control, in which they were asked to decide whether answers were correct or incorrect (i.e., a categorical judgement), or the treatment, in which they were shown the same question and answers but were instead asked to decide which of two candidate answers was more correct (i.e., a comparative/preference-based judgement). We found that using comparative judgement substantially improved inter-rater reliability on both tasks. These results are in line with well-established literature on the benefits of comparative judgement in educational assessment, as well as with recent trends in artificial intelligence research, where comparative judgement is becoming the preferred method for collecting human feedback on model outputs from non-expert crowdworkers. To our knowledge, however, these results are novel and important in demonstrating the benefits of combining comparative judgement and crowdworkers to evaluate educational data.
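
To make the two labelling conditions concrete, the sketch below contrasts them in Python. It is an illustration only: the rater names, labels, candidate answers, and the Elo-style aggregation of pairwise votes are assumptions for demonstration, not the paper's actual pipeline or data.

```python
# Illustrative sketch only: rater names, labels, and the Elo-style aggregation
# are assumptions for demonstration, not the paper's actual method or data.

def cohen_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two raters on categorical labels."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    categories = set(labels_a) | set(labels_b)
    expected = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)


def elo_scores(pairwise_wins, k=32.0, initial=1000.0):
    """Aggregate 'which answer is better' judgements into per-answer scores."""
    ratings = {}
    for winner, loser in pairwise_wins:
        ra = ratings.setdefault(winner, initial)
        rb = ratings.setdefault(loser, initial)
        expected_win = 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))
        ratings[winner] = ra + k * (1.0 - expected_win)         # winner scored 1
        ratings[loser] = rb + k * (0.0 - (1.0 - expected_win))  # loser scored 0
    return ratings


# Control condition: two crowdworkers label the same answers correct/incorrect.
rater_1 = ["correct", "incorrect", "correct", "correct", "incorrect"]
rater_2 = ["correct", "incorrect", "incorrect", "correct", "incorrect"]
print("Categorical inter-rater agreement (kappa):",
      round(cohen_kappa(rater_1, rater_2), 2))

# Treatment condition: crowdworkers pick the better of two candidate answers;
# the pairwise votes are then aggregated into per-answer scores.
votes = [("answer_B", "answer_A"), ("answer_B", "answer_C"),
         ("answer_A", "answer_C"), ("answer_B", "answer_A")]
print("Scores from comparative judgements:", elo_scores(votes))
```

In practice, multi-rater statistics such as Krippendorff's alpha and pairwise-comparison models such as Bradley-Terry are common alternatives to the simple two-rater kappa and Elo updates shown here.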

