Hybrid Decision Making: When Interpretable Models Collaborate With Black-Box Models

02/12/2018
by   Tong Wang, et al.

Interpretable machine learning models have received increasing interest in recent years, especially in domains where humans are involved in the decision-making process. However, gaining interpretability often comes at a cost in task performance. This trade-off leaves practitioners choosing between a top-performing black-box model with no explanations and an interpretable model with unsatisfactory task performance. In this work, we propose a novel framework for building a Hybrid Decision Model that integrates an interpretable model with any black-box model, introducing explanations into the decision-making process while preserving or possibly improving predictive accuracy. We propose a novel metric, explainability, that measures the percentage of data sent to the interpretable model for decisions. We also design a principled objective function that considers predictive accuracy, model interpretability, and data explainability. Under this framework, we develop the Collaborative Black-box and RUle Set Hybrid (CoBRUSH) model, which combines logic rules and any black-box model into a joint decision model. An input instance is first sent to the rules for a decision. If a rule is satisfied, a decision is generated directly. Otherwise, the black-box model is activated to decide on the instance. To train a hybrid model, we design an efficient search algorithm that exploits theoretically grounded strategies to reduce computation. Experiments show that CoBRUSH models achieve the same or better accuracy than their black-box collaborator working alone while gaining explainability. They also have lower model complexity than interpretable baselines.
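The two-stage decision flow described above (rule set first, black-box fallback) and the explainability metric can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the `HybridModel` class, the representation of rules as (predicate, label) pairs, and the `ConstantModel` stand-in black box are all assumptions made for the example.

```python
class HybridModel:
    """Sketch of a two-stage hybrid decision model: rules decide covered
    instances; an arbitrary black-box model decides the rest."""

    def __init__(self, rules, black_box):
        self.rules = rules          # list of (predicate, label) pairs
        self.black_box = black_box  # any object exposing .predict(x)

    def predict(self, x):
        # An instance covered by a rule gets an interpretable decision.
        for predicate, label in self.rules:
            if predicate(x):
                return label, "rule"
        # Otherwise the black-box collaborator is activated.
        return self.black_box.predict(x), "black-box"

    def explainability(self, X):
        # Explainability metric: fraction of instances decided by the rule set.
        covered = sum(any(p(x) for p, _ in self.rules) for x in X)
        return covered / len(X)


# Tiny illustration: one rule plus a trivial stand-in for the black box.
rules = [(lambda x: x["age"] < 25, 0)]

class ConstantModel:
    def predict(self, x):
        return 1

hybrid = HybridModel(rules, ConstantModel())
```

With this sketch, `hybrid.predict({"age": 20})` is decided by the rule, `hybrid.predict({"age": 40})` falls through to the black box, and `explainability` over both instances is 0.5 since one of the two is rule-covered.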
