Is Your Model "MADD"? A Novel Metric to Evaluate Algorithmic Fairness for Predictive Student Models

05/24/2023
by   Mélina Verger, et al.

Predictive student models are increasingly used in learning environments due to their ability to enhance educational outcomes and support stakeholders in making informed decisions. However, predictive models can be biased and produce unfair outcomes, leading to potential discrimination against some students and possibly harmful long-term implications. This has prompted research on fairness metrics meant to capture and quantify such biases. Nonetheless, existing fairness metrics used in education are predictive performance-oriented: they focus on assessing biased outcomes across groups of students, without considering the behaviors of the models or the severity of the biases in the outcomes. We therefore propose a novel metric, the Model Absolute Density Distance (MADD), to analyze models' discriminatory behaviors independently of their predictive performance. We also provide a complementary visualization-based analysis to enable fine-grained human assessment of how the models discriminate between groups of students. We evaluate our approach on the common task of predicting student success in online courses, using several common predictive classification models on an open educational dataset, and we compare our metric to ABROCA, the only predictive performance-oriented fairness metric developed in education. Results on this dataset show that: (1) fair predictive performance does not guarantee fair model behaviors and thus fair outcomes; (2) there is no direct relationship between data bias and either predictive performance bias or discriminatory behavior bias; and (3) models trained on the same data exhibit different discriminatory behaviors, which also vary with the sensitive feature considered. We thus recommend applying the MADD to models that already show satisfactory predictive performance, to gain a finer-grained understanding of how they behave and to refine model selection and usage.
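To give a concrete sense of what a density-distance fairness metric looks like, here is a minimal sketch in Python. It assumes (as the name "Model Absolute Density Distance" suggests) that the metric compares the distributions of a model's predicted probabilities for two groups of students by summing the absolute differences between their normalized histograms; the exact binning and normalization used in the paper may differ, so treat this as an illustration rather than the authors' reference implementation.

```python
import numpy as np

def madd_sketch(proba_group_0, proba_group_1, n_bins=100):
    """Illustrative density-distance score between two groups.

    Bins each group's predicted probabilities over [0, 1], normalizes
    the histograms into density vectors, and sums the absolute
    per-bin differences. 0.0 means identical distributions (no
    discriminatory behavior as measured here); 2.0 is the maximum,
    reached when the two distributions are fully disjoint.
    """
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    d0, _ = np.histogram(proba_group_0, bins=bins)
    d1, _ = np.histogram(proba_group_1, bins=bins)
    d0 = d0 / d0.sum()  # normalize counts to densities
    d1 = d1 / d1.sum()
    return float(np.abs(d0 - d1).sum())

rng = np.random.default_rng(0)

# Identical prediction distributions for both groups -> score 0.0
same = rng.uniform(0.0, 1.0, 1000)
print(madd_sketch(same, same))        # 0.0

# Fully disjoint distributions -> maximum score 2.0
low = rng.uniform(0.0, 0.4, 1000)     # e.g. one group predicted to fail
high = rng.uniform(0.6, 1.0, 1000)    # the other predicted to succeed
print(madd_sketch(low, high))         # 2.0
```

Because this score is computed from the model's output distributions alone, it is independent of ground-truth labels, which is what allows it to capture discriminatory behavior separately from predictive performance.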

research
01/10/2023

Inside the Black Box: Detecting and Mitigating Algorithmic Bias across Racialized Groups in College Student-Success Prediction

Colleges and universities are increasingly turning to algorithms that pr...
research
07/10/2020

Algorithmic Fairness in Education

Data-driven predictive models are increasingly used in education to supp...
research
05/14/2021

Towards Equity and Algorithmic Fairness in Student Grade Prediction

Equity of educational outcome and fairness of AI with respect to race ha...
research
07/10/2018

A Cautionary Tail: A Framework and Case Study for Testing Predictive Model Validity

Data scientists frequently train predictive models on administrative dat...
research
02/26/2023

Performance is not enough: a story of the Rashomon's quartet

Predictive modelling is often reduced to finding the best model that opt...
research
01/29/2021

Relaxed Clustered Hawkes Process for Procrastination Modeling in MOOCs

Hawkes processes have been shown to be efficient in modeling bursty sequ...
