Exclusion and Inclusion – A model agnostic approach to feature importance in DNNs

07/13/2020
by Subhadip Maji, et al.

Deep Neural Networks in NLP have enabled systems to learn complex non-linear relationships. One of the major bottlenecks to using DNNs in real-world applications is their characterization as black boxes. To address this problem, we introduce a model-agnostic algorithm that calculates the phrase-wise importance of input features. We contend that our method generalizes to a diverse set of tasks, and we demonstrate this by carrying out experiments on both regression and classification. We also observe that our approach is robust to outliers, implying that it captures only the essential aspects of the input.
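The paper's exact algorithm is not reproduced in this abstract, but the exclusion idea it names can be illustrated with a generic occlusion-style sketch: score each contiguous phrase by how much the model's output drops when that phrase is excluded from the input. The `toy_model` below is a hypothetical stand-in scoring function, not the authors' model.

```python
def phrase_importance(model, tokens, phrase_len=2):
    """Score each contiguous phrase by the drop in model output
    when that phrase is removed (excluded) from the input."""
    base = model(tokens)
    scores = {}
    for i in range(len(tokens) - phrase_len + 1):
        phrase = tuple(tokens[i:i + phrase_len])
        reduced = tokens[:i] + tokens[i + phrase_len:]
        scores[phrase] = base - model(reduced)
    return scores

# Toy stand-in "model": fraction of sentiment-bearing words (illustrative only).
POSITIVE = {"great", "excellent"}
def toy_model(toks):
    return sum(t in POSITIVE for t in toks) / max(len(toks), 1)

tokens = "the movie was great".split()
scores = phrase_importance(toy_model, tokens)
# The phrase containing "great" receives the highest importance score.
```

Because the scorer is model-agnostic (it only queries the model's output), the same loop applies unchanged to a regression head or a classification probability.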
