An interpretable machine learning framework for modelling human decision behavior

06/04/2019
by Mengzhuo Guo, et al.

Machine learning has recently been widely adopted to address managerial decision-making problems, but there is a trade-off between performance and interpretability. Full-complexity models (such as neural network-based models) are untraceable black boxes, whereas classic interpretable models (such as logistic regression) are usually simplified and less accurate. This trade-off limits the application of state-of-the-art machine learning models to management problems, which require both high prediction performance and an understanding of each attribute's contribution to the model outcome. Multiple criteria decision aiding (MCDA) is a family of interpretable approaches for depicting the rationale of human decision behavior, but it is limited by strong assumptions (e.g., preference independence). In this paper, we propose an interpretable machine learning approach, Neural Network-based Multiple Criteria Decision Aiding (NN-MCDA), which combines an additive MCDA model with a fully connected multilayer perceptron (MLP) to achieve good performance while preserving a certain degree of interpretability. NN-MCDA has a linear component (an additive combination of polynomial functions) that captures the detailed relationship between individual attributes and the prediction, and a nonlinear component (a standard MLP) that captures high-order interactions between attributes and their complex nonlinear transformations. We demonstrate the effectiveness of NN-MCDA with extensive simulation studies and two real-world datasets. To the best of our knowledge, this research is the first to enhance the interpretability of machine learning models with MCDA techniques. The proposed framework also sheds light on how machine learning techniques can free MCDA from strong assumptions.
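The abstract describes the two-component architecture only at a high level, so the following is a minimal PyTorch sketch of how such a hybrid could be wired up. All names here (`NNMCDA`, `poly_coef`, `alpha`, `degree`, `hidden`) are illustrative assumptions rather than the paper's actual implementation, and details such as monotonicity constraints on the marginal value functions or how the two components are blended may differ from the authors' formulation.

```python
import torch
import torch.nn as nn

class NNMCDA(nn.Module):
    """Sketch of an NN-MCDA-style hybrid: an additive polynomial
    (interpretable) component plus an MLP (nonlinear) component."""

    def __init__(self, n_attrs, degree=3, hidden=32):
        super().__init__()
        self.degree = degree
        # One coefficient per attribute per polynomial power. The additive
        # component decomposes across attributes, so each attribute's
        # marginal contribution can be read off directly.
        self.poly_coef = nn.Parameter(torch.zeros(n_attrs, degree))
        # Fully connected MLP capturing high-order attribute interactions.
        self.mlp = nn.Sequential(
            nn.Linear(n_attrs, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )
        # Trade-off weight between the two components (hypothetical;
        # the paper may fix this or learn it differently).
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, x):  # x: (batch, n_attrs)
        # Additive component: sum_j sum_k c_{jk} * x_j^(k+1)
        powers = torch.stack(
            [x ** (k + 1) for k in range(self.degree)], dim=-1
        )  # (batch, n_attrs, degree)
        additive = (powers * self.poly_coef).sum(dim=(1, 2))
        # Nonlinear component: unconstrained MLP over all attributes.
        nonlinear = self.mlp(x).squeeze(-1)
        return torch.sigmoid(self.alpha * additive + (1 - self.alpha) * nonlinear)
```

Because the additive component decomposes across attributes, the learned marginal value function of attribute j can be inspected directly, e.g. by plotting sum_k `poly_coef[j, k]` * x^(k+1) over the attribute's range; that per-attribute readout is the part of the model that stays human-interpretable, while the MLP absorbs the interactions an additive MCDA model cannot express.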

