EBLIME: Enhanced Bayesian Local Interpretable Model-agnostic Explanations

04/29/2023
by Yuhao Zhong, et al.

We propose EBLIME to explain black-box machine learning models and obtain the distribution of feature importance using Bayesian ridge regression models. We provide mathematical expressions of the Bayesian framework and theoretical results, including the significance of the ridge parameter. Case studies were conducted on benchmark datasets and on a real-world industrial application: locating internal defects in manufactured products. Compared to state-of-the-art methods, EBLIME yields more intuitive and accurate results, with better uncertainty quantification in terms of deriving the posterior distribution, credible intervals, and rankings of feature importance.
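
As a rough illustration of the approach the abstract describes (not the authors' implementation), the sketch below fits a Bayesian ridge surrogate, here scikit-learn's BayesianRidge as a stand-in, to black-box predictions on perturbations around a single instance; the surrogate's posterior over its coefficients then provides a distribution over local feature importances. The function name, perturbation scheme, and parameters are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the authors' code): explain a black-box model
# locally with a Bayesian ridge surrogate and return a posterior over importances.
import numpy as np
from sklearn.linear_model import BayesianRidge

def bayesian_local_explanation(black_box_predict, x, n_samples=500, scale=0.1, seed=None):
    """Perturb around instance x, fit a Bayesian ridge surrogate to the
    black-box outputs, and return posterior means and standard deviations
    of the local feature-importance coefficients."""
    rng = np.random.default_rng(seed)
    # Gaussian perturbations around the instance to be explained
    Z = x + scale * rng.standard_normal((n_samples, x.shape[0]))
    y = black_box_predict(Z)                        # black-box predictions on perturbed points
    surrogate = BayesianRidge()                     # ridge (precision) parameters learned from data
    surrogate.fit(Z - x, y)                         # local linear surrogate centered at x
    post_mean = surrogate.coef_                     # posterior mean of feature importances
    post_std = np.sqrt(np.diag(surrogate.sigma_))   # posterior std from the coefficient covariance
    return post_mean, post_std

# Example with a toy black box (a known linear function):
# f = lambda X: X @ np.array([2.0, -1.0, 0.0]) + 0.5
# mean, std = bayesian_local_explanation(f, np.array([1.0, 2.0, 3.0]), seed=0)
```

The posterior standard deviations can be used to form credible intervals and to rank features with their uncertainty, which is the kind of output the abstract attributes to EBLIME; the paper's exact perturbation and weighting scheme may differ from this simplified sketch.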

Related research:

- 08/11/2020, How Much Should I Trust You? Modeling Uncertainty of Black Box Explanations: As local explanations of black box models are increasingly being employe...
- 12/16/2022, Shapley variable importance cloud for machine learning models: Current practice in interpretable machine learning often focuses on expl...
- 10/02/2019, Robust Bayesian Regression with Synthetic Posterior: Regression models are fundamental tools in statistics, but they typicall...
- 10/17/2022, Bayesian Projection Pursuit Regression: In projection pursuit regression (PPR), an unknown response function is ...
- 01/13/2023, Local Model Explanations and Uncertainty Without Model Access: We present a model-agnostic algorithm for generating post-hoc explanatio...
- 01/31/2019, Bayesian active learning for optimization and uncertainty quantification in protein docking: Motivation: Ab initio protein docking represents a major challenge for o...
- 10/21/2020, Incorporating Interpretable Output Constraints in Bayesian Neural Networks: Domains where supervised models are deployed often come with task-specif...
