Interpretable Differencing of Machine Learning Models

06/10/2023
by Swagatam Haldar, et al.

Understanding the differences between machine learning (ML) models is of interest in scenarios ranging from choosing among a set of competing models to updating a deployed model with new training data. In these cases, we wish to go beyond differences in overall metrics such as accuracy and identify where in the feature space the differences occur. We formalize this problem of model differencing as one of predicting a dissimilarity function of two ML models' outputs, subject to the representation of the differences being human-interpretable. Our solution is to learn a Joint Surrogate Tree (JST), which is composed of two conjoined decision tree surrogates for the two models. A JST provides an intuitive representation of differences and places the changes in the context of the models' decision logic. Context is important because it helps users map differences to an underlying mental model of an AI system. We also propose a refinement procedure to increase the precision of a JST. We demonstrate, through an empirical evaluation, that such contextual differencing is concise and can be achieved with no loss in fidelity over naive approaches.
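To make the problem setup concrete, below is a minimal sketch of the naive per-point baseline the abstract alludes to: a single decision-tree surrogate trained to predict the dissimilarity function (here, simple prediction disagreement) of two models, so that each leaf describes a human-readable region of the feature space where the models differ. This is not the Joint Surrogate Tree method itself; the dataset, models, and parameters are illustrative assumptions.

```python
# Sketch of model differencing via a surrogate tree over a dissimilarity label.
# NOT the paper's JST; a naive baseline for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data standing in for the models' shared input distribution.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
X_train, X_probe, y_train, _ = train_test_split(X, y, test_size=0.5, random_state=0)

# Two competing models whose behavioral differences we want to localize.
model_a = LogisticRegression(max_iter=1000).fit(X_train, y_train)
model_b = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Dissimilarity function of the two models' outputs: 1 where predictions differ.
diff_labels = (model_a.predict(X_probe) != model_b.predict(X_probe)).astype(int)

# Shallow surrogate tree: each leaf is an interpretable region of (dis)agreement,
# but without the context of either model's own decision logic that a JST provides.
diff_tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_probe, diff_labels)
print(export_text(diff_tree, feature_names=[f"x{i}" for i in range(X.shape[1])]))
```

The printed rules give regions where the two models disagree, which illustrates the gap a JST addresses: such a standalone difference tree reports where changes occur but not why, since it is detached from the decision logic of the models being compared.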
