Truthful Peer Grading with Limited Effort from Teaching Staff

07/31/2018
by Jatin Jindal, et al.

Massive open online courses pose a significant challenge for grading answerscripts at high accuracy. Peer grading is often viewed as a scalable solution to this challenge, but it depends largely on the altruism of the peer graders. Some approaches in the literature treat peer grading as a 'best-effort service' of the graders and statistically correct their inaccuracies before awarding the final scores, but they ignore the graders' strategic behavior. A few other approaches incentivize non-manipulative behavior of the peer graders but do not make use of certain additional information that is potentially available in a peer grading setting, e.g., that the true grade can eventually be observed at an additional cost. This cost can be thought of as the additional effort of the teaching staff if they had to take a final look at the corrected papers after peer grading. In this paper, we use such additional information and introduce a mechanism, TRUPEQA, that (a) uses a constant number of instructor-graded answerscripts to quantitatively measure the accuracies of the peer graders and corrects the scores accordingly, (b) ensures truthful revelation of the graders' observed grades, (c) penalizes manipulation, but not inaccuracy, and (d) reduces the total cost of arriving at the true grades, i.e., the additional person-hours of the teaching staff. We show that this mechanism outperforms several standard peer grading techniques used in practice, even when the graders are non-manipulative.
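
To make item (a) above concrete, here is a minimal sketch of how a small, fixed number of instructor-graded "probe" answerscripts could be used to estimate a grader's systematic bias and correct that grader's remaining scores. The additive bias model, the data, and all function names below are assumptions made purely for illustration; this is not the TRUPEQA mechanism described in the paper.

```python
# Illustrative sketch only: estimate a peer grader's bias from a handful of
# instructor-graded "probe" answerscripts, then de-bias that grader's other
# scores. The linear (additive) bias model is an assumption for illustration.

from statistics import mean

def estimate_bias(peer_scores, instructor_scores):
    """Average signed error of a grader on probe scripts graded by both."""
    errors = [p - t for p, t in zip(peer_scores, instructor_scores)]
    return mean(errors)

def corrected_score(raw_peer_score, bias, lo=0, hi=100):
    """Subtract the estimated bias and clamp to the valid score range."""
    return min(hi, max(lo, raw_peer_score - bias))

if __name__ == "__main__":
    # Hypothetical data: one grader's scores on 3 probe scripts vs. the
    # instructor's scores on the same scripts.
    probe_peer = [78, 65, 90]
    probe_instructor = [72, 60, 85]

    bias = estimate_bias(probe_peer, probe_instructor)  # ~ +5.33: grader is lenient
    print(corrected_score(88, bias))                    # de-bias a non-probe script
```

In this toy version, the probe set plays the role of the instructor-graded answerscripts: its size stays constant regardless of class size, while every other script is corrected using the bias estimated from it.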

Related research

06/02/2015
Peer Grading in a Course on Algorithms and Data Structures: Machine Learning Algorithms do not Improve over Simple Baselines
Peer grading is the process of students reviewing each other's work, suc...

10/05/2022
Manipulation and Peer Mechanisms: A Survey
In peer mechanisms, the competitors for a prize also determine who wins....

08/12/2021
Measurement Integrity in Peer Prediction: A Peer Assessment Case Study
We propose measurement integrity, a property related to ex post reward f...

04/15/2021
Linking open-source code commits and MOOC grades to evaluate massive online open peer review
Massive Open Online Courses (MOOCs) have been used by students as a low-...

10/27/2021
You Are the Best Reviewer of Your Own Papers: An Owner-Assisted Scoring Mechanism
I consider the setting where reviewers offer very noisy scores for a num...

12/30/2018
The Device War - The War Between IOT Brands In A Household
Users buy compatible IOT devices from different brands with an expectati...

08/05/2019
Discovery of Bias and Strategic Behavior in Crowdsourced Performance Assessment
With the industry trend of shifting from a traditional hierarchical appr...
