Detecting Statistical Interactions from Neural Network Weights

05/14/2017
by Michael Tsang, et al.

Interpreting deep neural networks can enable new applications for predictive modeling where both accuracy and interpretability are required. In this paper, we examine the weights of a deep neural network to interpret the statistical interactions it captures. Our key observation is that any input features that interact with each other must follow strongly weighted paths to a common hidden unit before the final output. We propose a novel framework, which we call Neural Interaction Detector (NID), that identifies meaningful interactions of arbitrary order without an exhaustive search over the exponential space of interaction candidates. Empirical evaluation on both synthetic and real-world data shows the effectiveness of NID, which detects interactions more accurately and efficiently than the state of the art.
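To make the weight-based intuition concrete, below is a minimal sketch of how pairwise interaction candidates might be scored from a network's first-layer weights: two features are credited at a hidden unit only when both reach it with strong weights, and that credit is weighted by the unit's influence on the output. The function name, the min aggregation over the pair's absolute weights, and the `agg_out` influence vector are illustrative assumptions, not the paper's exact NID algorithm, which also handles higher-order interactions.

```python
import numpy as np

def pairwise_interaction_strengths(W1, agg_out):
    """Score candidate pairwise interactions from first-layer weights.

    W1      : (num_hidden, num_features) first-layer weight matrix.
    agg_out : (num_hidden,) nonnegative influence of each hidden unit on
              the output (e.g. aggregated absolute weights of later layers).

    Returns a dict mapping feature-index pairs (i, j) to a strength score.
    """
    num_hidden, num_features = W1.shape
    absW = np.abs(W1)
    scores = {}
    for i in range(num_features):
        for j in range(i + 1, num_features):
            # A pair is credited at a hidden unit only if both features
            # reach it strongly, so take the minimum absolute weight.
            joint = np.minimum(absW[:, i], absW[:, j])
            # Weight each unit's credit by its influence on the output.
            scores[(i, j)] = float(np.sum(agg_out * joint))
    return scores


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(16, 5))          # toy first-layer weights
    agg_out = np.abs(rng.normal(size=16))  # toy hidden-unit influences
    ranked = sorted(pairwise_interaction_strengths(W1, agg_out).items(),
                    key=lambda kv: -kv[1])
    print(ranked[:3])  # top-ranked candidate pairs
```

In this toy ranking, strongly weighted feature pairs that converge on influential hidden units float to the top, which mirrors the observation in the abstract that interacting features must share strongly weighted paths to a common hidden unit.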
