Shapley Interpretation and Activation in Neural Networks

09/13/2019
by Yadong Li, et al.

We propose a novel Shapley value approach to help address neural networks' interpretability and "vanishing gradient" problems. Our method is based on an accurate analytical approximation to the Shapley value of a neuron with ReLU activation. This analytical approximation admits a linear propagation of relevance across neural network layers, yielding a simple, fast and sensible interpretation of a neural network's decision-making process. We then derive a globally continuous and non-vanishing Shapley gradient, which can replace the conventional gradient when training neural network layers with ReLU activation and leads to better training performance. We further derive a Shapley Activation (SA) function, a close approximation to ReLU that features the Shapley gradient. SA is easy to implement in existing machine learning frameworks. Numerical tests show that SA consistently outperforms ReLU in training convergence, accuracy and stability.
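The abstract does not give the closed form of the Shapley gradient or of SA, but the core idea, replacing ReLU's hard step gradient (exactly zero for negative inputs) with a globally continuous, strictly positive surrogate, can be sketched. In the minimal NumPy sketch below, the sigmoid-shaped `smooth_grad` is a hypothetical stand-in for the paper's analytical Shapley gradient, not the authors' actual formula:

```python
import numpy as np

def relu(x):
    """Standard ReLU forward pass."""
    return np.maximum(x, 0.0)

def relu_grad(x):
    """Conventional ReLU gradient: exactly zero for x <= 0,
    the region where gradients 'vanish' and neurons can die."""
    return (x > 0).astype(float)

def smooth_grad(x, beta=5.0):
    """Hypothetical globally continuous, non-vanishing surrogate gradient
    (a sigmoid here; a stand-in, NOT the paper's Shapley gradient)."""
    return 1.0 / (1.0 + np.exp(-beta * x))

x = np.array([-2.0, -0.1, 0.0, 0.1, 2.0])
print(relu(x))         # forward pass unchanged
print(relu_grad(x))    # zero on the entire negative half-line
print(smooth_grad(x))  # strictly positive everywhere
```

In a framework such as TensorFlow or PyTorch, this pattern is typically realized with a custom-gradient operator that keeps the ReLU-like forward pass but substitutes the smooth backward pass.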

