SUBPLEX: Towards a Better Understanding of Black Box Model Explanations at the Subpopulation Level

07/21/2020
by   Gromit Yeuk-Yin Chan, et al.

Interpreting machine learning (ML) models is of paramount importance when their decisions have societal impact, as in transport control, financial activities, and medical diagnosis. Current model-interpretation methodologies focus on approximating a model with locally linear functions or on building self-explanatory models that provide an explanation for each input instance; they do not address interpretation at the subpopulation level, i.e., understanding how model explanations vary across different subset aggregations of a dataset. To address the challenge of explaining an ML model across a whole dataset, we propose SUBPLEX, a visual analytics system that helps users understand black-box model explanations through subpopulation analysis. SUBPLEX was designed through an iterative process with machine learning researchers to address three real-life ML usage scenarios: model debugging, feature selection, and bias detection. The system combines novel subpopulation analysis of ML model explanations with interactive visualization to explore explanations at different levels of granularity. Based on the system, we conduct a user evaluation to assess how subpopulation-level interpretation influences the sense-making process of interpreting ML models from a user's perspective. Our results suggest that by providing model explanations for different groups of data, SUBPLEX encourages users to generate richer ideas that enrich the interpretations, and helps them tightly integrate the programming workflow with the visual analytics workflow. Finally, we summarize the considerations we observed in applying visualization to machine learning interpretation.
