Learning Active Subspaces and Discovering Important Features with Gaussian Radial Basis Functions Neural Networks

07/11/2023
by Danny D'Agostino, et al.

Providing a model that achieves strong predictive performance while remaining interpretable by humans is one of the most difficult challenges in machine learning research, due to the conflicting nature of these two objectives. To address this challenge, we propose a modification of the Radial Basis Function Neural Network model that equips its Gaussian kernel with a learnable precision matrix. We show that valuable information is contained in the spectrum of the precision matrix and can be extracted once the training of the model is completed. In particular, the eigenvectors explain the directions of maximum sensitivity of the model, revealing the active subspace and suggesting potential applications for supervised dimensionality reduction. At the same time, the eigenvectors relate the absolute variation of the input variables to that of the latent variables, allowing us to rank the input variables by their importance to the prediction task and thereby enhancing the model's interpretability. We conducted numerical experiments for regression, classification, and feature selection tasks, comparing our model against popular machine learning models and state-of-the-art deep-learning-based embedded feature selection techniques. Our results demonstrate that the proposed model not only yields attractive predictive performance with respect to the competitors but also provides meaningful and interpretable results that could assist decision-making in real-world applications. A PyTorch implementation of the model is available on GitHub at https://github.com/dannyzx/GRBF-NNs.
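To make the idea concrete, below is a minimal, hypothetical PyTorch sketch, not the authors' implementation (see the GitHub link above). It assumes a single precision matrix shared across all Gaussian kernels, parameterized through a factor L so that the matrix stays symmetric positive semi-definite, and it uses an eigenvalue-weighted loading score as one plausible feature-importance heuristic; the class and variable names are illustrative only.

```python
import torch
import torch.nn as nn

class GaussianRBFNet(nn.Module):
    """Illustrative Gaussian RBF network with a learnable precision matrix
    shared across kernels (assumed sketch, not the paper's code)."""

    def __init__(self, in_dim, n_centers, out_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_centers, in_dim))
        # Parameterize the precision matrix as A = L @ L.T so it stays
        # symmetric positive semi-definite during training.
        self.L = nn.Parameter(torch.eye(in_dim))
        self.readout = nn.Linear(n_centers, out_dim)

    def precision(self):
        return self.L @ self.L.T

    def forward(self, x):
        A = self.precision()
        diff = x.unsqueeze(1) - self.centers.unsqueeze(0)  # (batch, centers, in_dim)
        # Squared Mahalanobis distance under the learned precision matrix.
        d2 = torch.einsum('bkd,de,bke->bk', diff, A, diff)
        phi = torch.exp(-0.5 * d2)                         # Gaussian activations
        return self.readout(phi)

model = GaussianRBFNet(in_dim=10, n_centers=32, out_dim=1)
# ... fit `model` with a standard PyTorch training loop ...

# After training, inspect the spectrum of the learned precision matrix.
with torch.no_grad():
    eigvals, eigvecs = torch.linalg.eigh(model.precision())
    # Eigenvectors with the largest eigenvalues span the active subspace.
    # One plausible (assumed) importance score: eigenvalue-weighted absolute
    # loadings of each input variable across the eigenvectors.
    importance = (eigvecs.abs() * eigvals).sum(dim=1)
    ranking = torch.argsort(importance, descending=True)
    print("feature ranking (most to least important):", ranking.tolist())
```

The eigenvectors associated with the largest eigenvalues indicate the directions along which the learned Gaussian kernel, and hence the model, is most sensitive, which is what the abstract refers to as the active subspace and uses to rank input features.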


