A Framework for Auditing Multilevel Models using Explainability Methods

07/04/2022
by   Debarati Bhaumik, et al.

Applications of multilevel models (MLMs) usually result in binary classification within groups or hierarchies based on a set of input features. For transparent and ethical applications of such models, sound audit frameworks need to be developed. In this paper, an audit framework for the technical assessment of regression MLMs is proposed. The focus is on three aspects: model, discrimination, and transparency and explainability. These aspects are subsequently divided into sub-aspects, and contributors, such as inter-MLM-group fairness, feature contribution order, and aggregated feature contribution, are identified for each sub-aspect. To measure the performance of the contributors, the framework proposes a shortlist of key performance indicators (KPIs), to which a traffic-light risk assessment method is coupled. For assessing transparency and explainability, different explainability methods (SHAP and LIME) are used and compared with a model-intrinsic method using quantitative techniques and machine learning modelling. A model is trained and tested on an open-source dataset, and the KPIs are computed. It is demonstrated that popular explainability methods, such as SHAP and LIME, underperform in accuracy when interpreting these models: they fail to predict the order of feature importance, the magnitudes, and occasionally even the nature of the feature contributions. For other contributors, such as group fairness and their associated KPIs, similar analyses and calculations have been performed with the aim of adding depth to the proposed audit framework. The framework is expected to assist regulatory bodies in performing conformity assessments of AI systems that use multilevel binomial classification models at businesses. It will also benefit businesses deploying MLMs in remaining future-proof and aligned with the European Commission's proposed Regulation on Artificial Intelligence.
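The abstract describes coupling a traffic-light risk assessment to fairness KPIs such as inter-group fairness. The sketch below illustrates the general idea with a statistical-parity gap between groups mapped to a green/amber/red rating. The group names, the choice of statistical parity as the KPI, and the thresholds are illustrative assumptions, not the paper's exact definitions.

```python
# Illustrative sketch: an inter-group fairness KPI with a traffic-light rating.
# The KPI (statistical-parity gap) and the thresholds are assumed for the
# example; the paper's own KPI definitions may differ.

def positive_rate(predictions):
    """Fraction of positive (1) predictions within one group."""
    return sum(predictions) / len(predictions)

def statistical_parity_gap(groups):
    """Largest difference in positive-prediction rates across groups."""
    rates = [positive_rate(p) for p in groups.values()]
    return max(rates) - min(rates)

def traffic_light(kpi, green=0.05, amber=0.10):
    """Map a KPI value to a risk rating (thresholds are illustrative)."""
    if kpi <= green:
        return "green"
    if kpi <= amber:
        return "amber"
    return "red"

# Hypothetical per-group binary predictions from a multilevel classifier.
groups = {
    "group_a": [1, 0, 1, 1, 0, 1, 0, 1],  # positive rate 0.625
    "group_b": [1, 0, 0, 1, 0, 0, 0, 1],  # positive rate 0.375
}
gap = statistical_parity_gap(groups)
print(gap)                 # 0.25
print(traffic_light(gap))  # red
```

The same pattern extends to the framework's other KPIs: compute a contributor's metric per group or per model, then discretise it into a traffic-light band for the audit report.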


