Variance Tolerance Factors For Interpreting Neural Networks

09/28/2022
by Sichao Li, et al.

Black-box models provide only the results of deep learning tasks and lack informative details about how those results were obtained. In this paper, we propose a general theory that defines a variance tolerance factor (VTF) to interpret neural networks by ranking the importance of features, and we construct a novel architecture, consisting of a base model and a feature model, to demonstrate its utility. Two feature importance ranking methods and a feature selection method based on the VTF are created. A thorough evaluation on synthetic, benchmark, and real-world datasets is provided.
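The abstract does not spell out how the VTF scores features, so as a rough illustration of the general idea of ranking features by how much a model's output tolerates perturbing them, here is a minimal permutation-importance sketch. This is a generic proxy for feature importance ranking, not the paper's VTF; the function names and the tolerance-as-error-increase reading are assumptions for illustration only.

```python
import numpy as np

def perturbation_importance(model, X, y, n_repeats=5, rng=None):
    """Score each feature by the mean increase in squared error when
    that feature's column is shuffled (a generic importance proxy;
    NOT the paper's variance tolerance factor)."""
    rng = np.random.default_rng(rng)
    base_err = np.mean((model(X) - y) ** 2)
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's link to y
            scores[j] += np.mean((model(Xp) - y) ** 2) - base_err
    return scores / n_repeats

# Toy check: the target depends on feature 0 only, so shuffling it
# should hurt the model far more than shuffling the unused features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0]
scores = perturbation_importance(lambda A: 3.0 * A[:, 0], X, y, rng=1)
ranking = np.argsort(scores)[::-1]  # most important feature first
```

A feature the model barely tolerates perturbing (large error increase) ranks high; features the model is indifferent to score near zero, which is the intuition behind using a tolerance-style quantity for both ranking and selection.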


Related research

10/12/2020
Embedded methods for feature selection in neural networks
The representational capacity of modern neural network architectures has...

08/01/2023
Copula for Instance-wise Feature Selection and Ranking
Instance-wise feature selection and ranking methods can achieve a good s...

10/18/2020
Feature Importance Ranking for Deep Learning
Feature importance ranking has become a powerful tool for explainable AI...

09/04/2018
DeepPINK: reproducible feature selection in deep neural networks
Deep learning has become increasingly popular in both supervised and uns...

10/13/2020
Neural Gaussian Mirror for Controlled Feature Selection in Neural Networks
Deep neural networks (DNNs) have become increasingly popular and achieve...

10/26/2020
Q-FIT: The Quantifiable Feature Importance Technique for Explainable Machine Learning
We introduce a novel framework to quantify the importance of each input ...

12/22/2017
Dropout Feature Ranking for Deep Learning Models
Deep neural networks are a promising technology achieving state-of-the-a...
