Feature construction using explanations of individual predictions

01/23/2023
by Boštjan Vouk, et al.

Feature construction can contribute to comprehensibility and performance of machine learning models. Unfortunately, it usually requires exhaustive search in the attribute space or time-consuming human involvement to generate meaningful features. We propose a novel heuristic approach for reducing the search space based on aggregation of instance-based explanations of predictive models. The proposed Explainable Feature Construction (EFC) methodology identifies groups of co-occurring attributes exposed by popular explanation methods, such as IME and SHAP. We empirically show that reducing the search to these groups significantly reduces the time of feature construction using logical, relational, Cartesian, numerical, and threshold num-of-N and X-of-N constructive operators. An analysis on 10 transparent synthetic datasets shows that EFC effectively identifies informative groups of attributes and constructs relevant features. Using 30 real-world classification datasets, we show significant improvements in classification accuracy for several classifiers and demonstrate the feasibility of the proposed feature construction even for large datasets. Finally, EFC generated interpretable features on a real-world problem from the financial industry, which were confirmed by a domain expert.
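The core idea — aggregating per-instance explanations to find groups of co-occurring important attributes, then restricting constructive operators to those groups — can be sketched in a few lines. The sketch below is an illustrative assumption, not the paper's implementation: the function names, the top-k co-occurrence criterion, and the support threshold are all hypothetical, and the per-instance importance dicts stand in for the output of an explainer such as SHAP or IME.

```python
# Hypothetical sketch of the EFC idea: aggregate per-instance attribute
# importances (as an explainer like SHAP or IME would produce) to find
# groups of co-occurring important attributes, then construct features
# only within those groups. Names and thresholds are illustrative.
from collections import Counter
from itertools import combinations

def cooccurring_groups(importances, top_k=2, min_support=0.5):
    """importances: list of dicts {attribute: importance}, one per instance.
    Returns attribute pairs that appear together among the top_k most
    important attributes in at least min_support of the instances."""
    pair_counts = Counter()
    for inst in importances:
        top = sorted(inst, key=inst.get, reverse=True)[:top_k]
        for pair in combinations(sorted(top), 2):
            pair_counts[pair] += 1
    n = len(importances)
    return [p for p, c in pair_counts.items() if c / n >= min_support]

def cartesian_feature(rows, a, b):
    """Cartesian constructive operator: concatenate two attribute values."""
    return [f"{row[a]}&{row[b]}" for row in rows]

# Toy example: attributes a1 and a2 are jointly important in most instances.
expl = [{"a1": 0.9, "a2": 0.8, "a3": 0.1},
        {"a1": 0.7, "a2": 0.9, "a3": 0.2},
        {"a1": 0.6, "a2": 0.5, "a3": 0.9}]
groups = cooccurring_groups(expl)           # [("a1", "a2")]
rows = [{"a1": "y", "a2": "n", "a3": "y"}]
new_feature = cartesian_feature(rows, *groups[0])  # ["y&n"]
```

Restricting the search to such groups is what avoids the exhaustive pairing of all attributes: only pairs that explanations repeatedly flag together are fed to the constructive operators.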


Related research

07/10/2023 · Counterfactual Explanation for Fairness in Recommendation
04/09/2021 · Individual Explanations in Machine Learning Models: A Case Study on Poverty Estimation
06/15/2022 · ELUDE: Generating interpretable explanations via a decomposition into labelled and unlabelled features
09/14/2023 · SMARTFEAT: Efficient Feature Construction through Feature-Level Foundation Model Interactions
05/20/2022 · Constructive Interpretability with CoLabel: Corroborative Integration, Complementary Features, and Collaborative Learning
12/17/2019 · Embedded Constrained Feature Construction for High-Energy Physics Data Classification
06/23/2021 · groupShapley: Efficient prediction explanation with Shapley values for feature groups
