
Interpretability, Then What? Editing Machine Learning Models to Reflect Human Knowledge and Values

by Zijie J. Wang, et al.

Machine learning (ML) interpretability techniques can reveal undesirable patterns in the data that models exploit to make predictions, potentially causing harm once deployed. However, how to act on these patterns is not always clear. In a collaboration between ML and human-computer interaction researchers, physicians, and data scientists, we develop GAM Changer, the first interactive system that helps domain experts and data scientists easily and responsibly edit Generalized Additive Models (GAMs) and fix problematic patterns. With novel interaction techniques, our tool puts interpretability into action, empowering users to analyze, validate, and align model behaviors with their knowledge and values. Physicians have started to use our tool to investigate and fix pneumonia and sepsis risk prediction models, and an evaluation with 7 data scientists working in diverse domains highlights that our tool is easy to use, meets their model editing needs, and fits into their current workflows. Built with modern web technologies, our tool runs locally in users' web browsers or computational notebooks, lowering the barrier to use. GAM Changer is available at the following public demo link:
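To make the editing idea concrete: a GAM predicts a score as an intercept plus a sum of per-feature shape functions, so "editing the model" amounts to overwriting the learned values of one shape function. The sketch below is a minimal, hypothetical illustration of that mechanism (GAM Changer itself edits Explainable Boosting Machines from the InterpretML library; the `TinyGAM` class, its piecewise-constant bins, and the example numbers here are all our own assumptions, not the paper's implementation).

```python
import numpy as np

class TinyGAM:
    """A toy GAM: prediction = intercept + sum of per-feature bin scores.

    bin_edges[j]  - sorted cut points for feature j (k edges -> k+1 bins)
    bin_scores[j] - one learned score per bin of feature j
    """

    def __init__(self, bin_edges, bin_scores, intercept=0.0):
        self.bin_edges = bin_edges
        self.bin_scores = bin_scores
        self.intercept = intercept

    def predict(self, X):
        out = np.full(len(X), self.intercept, dtype=float)
        for j, (edges, scores) in enumerate(zip(self.bin_edges, self.bin_scores)):
            # searchsorted maps each value to its bin index (0 .. len(edges))
            idx = np.searchsorted(edges, X[:, j])
            out += scores[idx]
        return out

    def set_bin_score(self, feature, bin_index, new_score):
        # The core "model edit": overwrite one bin's learned contribution so
        # the shape function matches domain knowledge, e.g. a physician
        # raising a risk score the model learned to (spuriously) lower.
        self.bin_scores[feature][bin_index] = new_score


# Hypothetical pneumonia-style example: risk over a single "age" feature
# with bins (-inf, 50), [50, 80), [80, inf). The learned score dips for the
# oldest bin, a pattern a domain expert might judge to be a data artifact.
edges = [np.array([50.0, 80.0])]
scores = [np.array([0.1, 0.3, 0.2])]
gam = TinyGAM(edges, scores, intercept=1.0)

X = np.array([[40.0], [60.0], [90.0]])
before = gam.predict(X)          # oldest patient scores 1.0 + 0.2 = 1.2

gam.set_bin_score(feature=0, bin_index=2, new_score=0.5)
after = gam.predict(X)           # edit raises it to 1.0 + 0.5 = 1.5
```

Because each feature's contribution is additive and inspectable, the edit is local: only predictions that fall in the modified bin change, which is what makes this kind of direct, human-in-the-loop correction tractable for GAMs but not for opaque model classes.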
