
Permutation-based Hypothesis Testing for Neural Networks

by Francesca Mandel et al.

Neural networks are powerful predictive models, but they provide little insight into the nature of relationships between predictors and outcomes. Although numerous methods have been proposed to quantify the relative contributions of input features, statistical inference and hypothesis testing of feature associations remain largely unexplored. We propose a permutation-based approach to testing that uses the partial derivatives of the network output with respect to specific inputs to assess both the significance of input features and whether significant features are linearly associated with the network output. These tests, which can be applied flexibly to a variety of network architectures, enhance the explanatory power of neural networks and, combined with their powerful predictive capability, extend the applicability of these models.
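The core idea, testing a feature by comparing a derivative-based statistic against its distribution under permutation, can be sketched in a few lines. The following is an illustrative toy example, not the authors' implementation: the one-hidden-layer network, its (untrained) weights, and the choice of mean squared partial derivative as the test statistic are all assumptions made for demonstration, and a faithful version of the method may differ (e.g., by refitting the network under each permutation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n samples, p features.
n, p = 200, 3
X = rng.normal(size=(n, p))

# Stand-in for a trained one-hidden-layer network
# f(x) = w2 . tanh(W1 x + b1) + b2; weights here are random placeholders.
h = 8
W1 = rng.normal(scale=0.5, size=(h, p))
b1 = rng.normal(scale=0.1, size=h)
w2 = rng.normal(scale=0.5, size=h)

def grad_wrt_input(X, j):
    """Per-sample partial derivative of the network output w.r.t. feature j."""
    A = np.tanh(X @ W1.T + b1)             # hidden activations, shape (n, h)
    # Chain rule: sum_k w2_k * (1 - tanh^2(z_k)) * W1[k, j]
    return (1.0 - A**2) @ (w2 * W1[:, j])  # shape (n,)

def permutation_pvalue(X, j, n_perm=500):
    """Permutation test on the mean squared partial derivative for feature j."""
    t_obs = np.mean(grad_wrt_input(X, j) ** 2)
    t_null = np.empty(n_perm)
    for b in range(n_perm):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's association
        t_null[b] = np.mean(grad_wrt_input(Xp, j) ** 2)
    # Add-one correction keeps the p-value strictly positive.
    return (1 + np.sum(t_null >= t_obs)) / (1 + n_perm)
```

Permuting a single input column preserves its marginal distribution while destroying its relationship to the other features and the output, so the permuted statistics approximate the null distribution against which the observed statistic is compared.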
