High Dimensional Model Representation as a Glass Box in Supervised Machine Learning

07/26/2018
by   Caleb Deen Bastian, et al.

Prediction and explanation are key objectives in supervised machine learning, where predictive models are known as black boxes and explanatory models as glass boxes. Explanation provides the information necessary and sufficient to interpret the model output in terms of the model input, including assessments of how the output depends on important input variables and measures of input-variable importance. High dimensional model representation (HDMR), also known as the generalized functional ANOVA expansion, provides useful insight into the input-output behavior of supervised machine learning models. This article presents applications of HDMR in supervised machine learning: characterizing information leakage in "big-data" settings, constructing reduced-order representations of elementary symmetric polynomials, performing analysis of variance with correlated variables, and estimating HDMR from kernel-machine and decision-tree black-box representations. These results suggest that HDMR has broad utility within machine learning as a glass-box representation.
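As a concrete illustration of the expansion the abstract refers to (a sketch, not the paper's code), the first-order HDMR / functional ANOVA components of a toy model can be estimated by Monte Carlo. The Python sketch below uses the elementary symmetric polynomial e_2(x1, x2, x3) = x1*x2 + x1*x3 + x2*x3 with independent uniform inputs on [0, 1]; under independence, f_0 = E[f(X)] and f_i(x_i) = E[f(X) | X_i = x_i] - f_0. The helper name first_order_component and the sample sizes are illustrative choices.

import numpy as np

rng = np.random.default_rng(0)

def e2(x):
    # Elementary symmetric polynomial e_2(x1, x2, x3) = x1*x2 + x1*x3 + x2*x3
    return x[:, 0] * x[:, 1] + x[:, 0] * x[:, 2] + x[:, 1] * x[:, 2]

# Zeroth-order HDMR term: f_0 = E[f(X)] under independent U(0,1) inputs
n = 200_000
f0 = e2(rng.uniform(size=(n, 3))).mean()  # exact value is 3/4

def first_order_component(i, xi, n_inner=50_000):
    # First-order term: f_i(x_i) = E[f(X) | X_i = x_i] - f_0,
    # estimated by freezing coordinate i and averaging over the others.
    Z = rng.uniform(size=(n_inner, 3))
    Z[:, i] = xi
    return e2(Z).mean() - f0

grid = np.linspace(0.0, 1.0, 5)
f1 = [first_order_component(0, xi) for xi in grid]
# For this model, f_1(x_1) = x_1 - 1/2 analytically.
print(f"f0 ~= {f0:.3f} (exact 0.75)")
print("f_1 on grid:", np.round(f1, 3), "exact:", np.round(grid - 0.5, 3))

For correlated inputs or black-box models such as kernel machines and decision trees, the conditional expectations above are no longer simple averages over independent coordinates, which is the setting the paper's later applications address.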
