Evaluating the Explainers: Black-Box Explainable Machine Learning for Student Success Prediction in MOOCs

07/01/2022
by Vinitra Swamy, et al.

Neural networks are ubiquitous in applied machine learning for education. Their pervasive success in predictive performance comes alongside a severe weakness: the lack of explainability of their decisions, which is especially relevant in human-centric fields. We implement five state-of-the-art methodologies for explaining black-box machine learning models (LIME, PermutationSHAP, KernelSHAP, DiCE, CEM) and examine the strengths of each approach on the downstream task of student performance prediction for five massive open online courses. Our experiments demonstrate that the families of explainers do not agree with each other on feature importance for the same Bidirectional LSTM models with the same representative set of students. We use Principal Component Analysis, Jensen-Shannon distance, and Spearman's rank-order correlation to quantitatively cross-examine explanations across methods and courses. Furthermore, we validate explainer performance across curriculum-based prerequisite relationships. Our results reach the concerning conclusion that the choice of explainer is an important decision and is, in fact, paramount to the interpretation of the predictive results, even more so than the course the model is trained on. Source code and models are released at http://github.com/epfl-ml4ed/evaluating-explainers.
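The sketch below illustrates, at a minimal level, how two explainers' feature-importance scores for the same model and student could be cross-examined with Jensen-Shannon distance and Spearman's rank-order correlation, as described in the abstract. It is not the authors' released pipeline (see the linked repository for that); the importance vectors, explainer names, and the normalization step are hypothetical placeholders.

```python
# Hedged sketch: comparing two explainers' per-feature importance scores
# for the same BiLSTM model and the same student.
# The scores below are made-up placeholders, not real experimental output.

import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import spearmanr


def to_distribution(importances):
    """Turn signed importance scores into a probability distribution over
    features by taking absolute values and normalizing them to sum to 1."""
    abs_scores = np.abs(np.asarray(importances, dtype=float))
    total = abs_scores.sum()
    if total == 0:
        return np.full_like(abs_scores, 1.0 / len(abs_scores))
    return abs_scores / total


# Hypothetical per-feature importances from two explainers (e.g., LIME and
# KernelSHAP) for one student; in practice these come from the explainer runs.
lime_importances = [0.42, -0.10, 0.05, 0.30, -0.02]
kernelshap_importances = [0.35, -0.20, 0.10, 0.25, 0.01]

p = to_distribution(lime_importances)
q = to_distribution(kernelshap_importances)

# Jensen-Shannon distance: 0 means identical importance distributions,
# values near 1 mean strong disagreement between the explainers.
js_dist = jensenshannon(p, q, base=2)

# Spearman's rank-order correlation: do the explainers rank the features
# in a similar order of importance?
rho, p_value = spearmanr(np.abs(lime_importances), np.abs(kernelshap_importances))

print(f"Jensen-Shannon distance: {js_dist:.3f}")
print(f"Spearman rank correlation: {rho:.3f} (p={p_value:.3f})")
```

Averaging such pairwise distances and correlations over a representative set of students gives one simple way to quantify how much two explainer families diverge on the same course and model.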

