Investigating the Essential of Meaningful Automated Formative Feedback for Programming Assignments

06/21/2019
by   Qiang Hao, et al.

This study investigated what constitutes meaningful automated feedback for programming assignments. Three types of feedback were tested: (a) What's wrong - what the test cases checked and which ones failed, (b) Gap - comparisons between expected and actual outputs, and (c) Hint - suggestions on how to fix the problem when a test case failed. Forty-six students taking a CS2 course participated in the study. They were divided into three groups, each receiving a different feedback configuration: (1) Group One - What's wrong, (2) Group Two - What's wrong + Gap, (3) Group Three - What's wrong + Gap + Hint. The study found that simply knowing which test cases failed did not help students sufficiently and might encourage system-gaming behavior. Hints were not found to affect student performance or students' usage of the automated feedback. Based on these findings, the study provides practical guidance on the design of automated feedback.
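The three feedback types map naturally onto a test-case-driven autograder. The sketch below is a minimal, hypothetical illustration of how such layered feedback could be assembled; it is not the authors' actual system, and the names (TestCase, build_feedback, the per-test hint field, and the level parameter) are assumptions made for the example.

```python
# Hypothetical sketch: layering "What's wrong", "Gap", and "Hint" feedback
# on top of a single failing test case. Not the system used in the study.
from dataclasses import dataclass


@dataclass
class TestCase:
    description: str      # what the test case is checking ("What's wrong" text)
    input_data: str
    expected_output: str
    hint: str             # instructor-authored hint, shown only at the "hint" level


def build_feedback(test: TestCase, submission, level: str) -> str:
    """Run the student's submission on one test and format feedback.

    level is one of "whats_wrong", "gap", or "hint", mirroring the three
    group configurations described above.
    """
    actual = submission(test.input_data)
    if actual == test.expected_output:
        return f"PASS: {test.description}"

    # Level 1 ("What's wrong"): what the test checks and that it failed.
    lines = [f"FAIL: {test.description}"]

    # Level 2 (+ "Gap"): expected vs. actual output.
    if level in ("gap", "hint"):
        lines.append(f"  expected: {test.expected_output!r}")
        lines.append(f"  actual:   {actual!r}")

    # Level 3 (+ "Hint"): a suggestion on how to fix the problem.
    if level == "hint":
        lines.append(f"  hint: {test.hint}")

    return "\n".join(lines)


if __name__ == "__main__":
    # A toy "student submission" with a bug: adds 2 instead of doubling.
    buggy = lambda s: str(int(s) + 2)
    t = TestCase("doubles its input", "3", "6", "Check your arithmetic operator.")
    print(build_feedback(t, buggy, level="hint"))
```

In this framing, the study's three group configurations differ only in how much of the constructed message each group sees, which is what makes the comparison between them clean.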

