ComplAI: Theory of A Unified Framework for Multi-factor Assessment of Black-Box Supervised Machine Learning Models

12/30/2022
by Arkadipta De, et al.

Advances in Artificial Intelligence are creating new opportunities to improve the lives of people around the world, from business to healthcare, from lifestyle to education. For example, some systems profile users based on their demographic and behavioral characteristics to make domain-specific predictions. Often, such predictions impact the life of the user directly or indirectly (e.g., loan disbursement, determining insurance coverage, shortlisting applications, etc.). As a result, concerns over such AI-enabled systems are also increasing. To address these concerns, such systems are mandated to be responsible, i.e., transparent, fair, and explainable to developers and end-users. In this paper, we present ComplAI, a unique framework to enable, observe, analyze, and quantify explainability, robustness, performance, fairness, and model behavior under drift, and to provide a single Trust Factor that evaluates different supervised Machine Learning models not just on their ability to make correct predictions but from an overall responsibility perspective. The framework helps users to (a) connect their models and enable explanations, (b) assess and visualize different aspects of a model, such as robustness, drift susceptibility, and fairness, and (c) compare different models (from different model families or obtained through different hyperparameter settings) from an overall perspective, thereby facilitating actionable recourse for model improvement. It is model-agnostic, works with different supervised machine learning scenarios (binary classification, multi-class classification, and regression) and frameworks, and can be seamlessly integrated with any ML life-cycle framework. Thus, this already-deployed framework aims to unify critical aspects of Responsible AI systems and thereby regulate the development of such real-world systems.
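The abstract describes collapsing several assessment dimensions (performance, robustness, explainability, fairness, drift behavior) into a single Trust Factor. ComplAI's actual aggregation scheme and API are not given here; the sketch below is a minimal illustrative assumption, using a weighted average of per-dimension scores in [0, 1], with all names and weights hypothetical:

```python
# Hypothetical sketch: aggregate per-dimension assessment scores into one
# "Trust Factor". The dimension names, scores, and equal-weight default are
# illustrative assumptions, not ComplAI's actual method.

def trust_factor(scores, weights=None):
    """Weighted average of dimension scores (each in [0, 1]) -> value in [0, 1]."""
    if weights is None:
        weights = {k: 1.0 for k in scores}  # equal weighting by default
    total = sum(weights[k] for k in scores)
    return sum(scores[k] * weights[k] for k in scores) / total

# Two candidate models assessed on the same dimensions.
model_a = {"performance": 0.92, "robustness": 0.80,
           "explainability": 0.75, "fairness": 0.88, "drift": 0.70}
model_b = {"performance": 0.95, "robustness": 0.60,
           "explainability": 0.55, "fairness": 0.65, "drift": 0.50}

print(trust_factor(model_a))  # 0.81
print(trust_factor(model_b))  # 0.65
```

This illustrates the abstract's point about comparison: model_b has the higher raw predictive performance (0.95 vs. 0.92), yet model_a scores better overall once robustness, fairness, explainability, and drift susceptibility are folded into a single figure.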


