Automatic Assessment of the Design Quality of Python Programs with Personalized Feedback

06/02/2021
by J. Walker Orr, et al.

The assessment of program functionality can generally be accomplished with straightforward unit tests. However, assessing the design quality of a program is a much more difficult and nuanced problem. Design quality is an important consideration because it affects the readability and maintainability of programs. Assessing design quality and giving personalized feedback is a very time-consuming task for instructors and teaching assistants, which limits personalized feedback to small class settings. Further, design quality is nuanced and difficult to express concisely as a set of rules. For these reasons, we propose a neural network model that both automatically assesses the design of a program and provides personalized feedback to guide students on how to make corrections. The model's effectiveness is evaluated on a corpus of student programs written in Python. The model agrees with historical instructor assessments at an accuracy of 83.67% to 94.27%. Finally, we present a study in which students tried to improve the design of their programs based on the personalized feedback produced by the model. Students who participated in the study improved their program design scores by 19.58%.
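To illustrate the distinction the abstract draws, a functional check of a small student exercise might look like the minimal sketch below; the function `mean` and its test cases are hypothetical and are not taken from the paper's corpus. Tests like these verify behavior only, and say nothing about naming, decomposition, or readability, which is the gap the proposed model addresses.

```python
import unittest

# Hypothetical student submission: functionally correct, so it passes the
# tests below regardless of how well or poorly it is designed.
def mean(numbers):
    return sum(numbers) / len(numbers)

class TestMean(unittest.TestCase):
    # Straightforward unit tests assess functionality, not design quality.
    def test_typical_values(self):
        self.assertAlmostEqual(mean([1, 2, 3, 4]), 2.5)

    def test_single_value(self):
        self.assertAlmostEqual(mean([7]), 7.0)

if __name__ == "__main__":
    unittest.main()
```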

