Evaluating Two Approaches to Assessing Student Progress in Cybersecurity Exercises
Cybersecurity students need to develop practical skills such as using command-line tools. Hands-on exercises are the most direct way to practice these skills, but assessing students' mastery of them is challenging for instructors. We aim to alleviate this burden by automatically modeling and visualizing student progress throughout an exercise. Progress is summarized by graph models built from the shell commands students typed to complete discrete tasks within the exercise. We implemented two types of models and compared them using data from 46 students at two universities. To evaluate the models, we surveyed 22 experienced computing instructors and qualitatively analyzed their responses. The majority of instructors interpreted the graph models effectively and identified strengths, weaknesses, and assessment use cases for each model. Based on this evaluation, we provide recommendations for instructors and explain how our graph models enable innovative teaching and further research. The impact of this paper is threefold. First, it demonstrates how multiple institutions can collaborate to share approaches to modeling student progress in hands-on exercises. Second, our modeling techniques generalize to data from different environments and support student assessment even outside the cybersecurity domain. Third, we share the acquired data and open-source software so that others can use the models in their classes or research.
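To make the idea of a graph model over shell commands concrete, here is a minimal illustrative sketch, not the paper's actual implementation: it builds a directed transition graph in which nodes are base commands and edge weights count how often one command immediately followed another in a student's log. The function name, log format, and example commands are all assumptions for illustration.

```python
from collections import defaultdict


def build_command_graph(command_log):
    """Build a directed transition graph from an ordered shell-command log.

    Nodes are base commands (the first token of each line); edge weights
    count how often one command immediately followed another. This is an
    illustrative approximation, not the model from the paper.
    """
    edges = defaultdict(int)
    base = [cmd.split()[0] for cmd in command_log if cmd.strip()]
    for prev, nxt in zip(base, base[1:]):
        edges[(prev, nxt)] += 1
    return dict(edges)


# Hypothetical log of one student's commands during an exercise
log = ["nmap -sV 10.0.0.5", "ssh user@10.0.0.5", "ls -la", "cat flag.txt"]
print(build_command_graph(log))
# → {('nmap', 'ssh'): 1, ('ssh', 'ls'): 1, ('ls', 'cat'): 1}
```

Such an edge-weighted graph could then be rendered per student or aggregated across a class to visualize common solution paths and detours.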