Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation

02/01/2019
by Sahil Singla, et al.

Current methods that interpret deep learning models by generating saliency maps generally rely on two key assumptions. First, they use first-order approximations of the loss function, neglecting higher-order terms such as the loss curvature. Second, they evaluate each feature's importance in isolation, ignoring inter-dependencies among features. In this work, we study the effect of relaxing these two assumptions. First, by characterizing a closed-form formula for the Hessian matrix of a deep ReLU network, we prove that, for a classification problem with a large number of classes, if an input has a high-confidence classification score, then including the Hessian term has only a small impact on the final solution. We prove this result by showing that, in this case, the Hessian matrix is approximately of rank one and its leading eigenvector is almost parallel to the gradient of the loss function. Our empirical experiments on ImageNet samples are consistent with this theory. This result can also have implications for related problems such as adversarial examples. Second, we compute the importance of group features in deep learning interpretation by introducing a sparsity regularization term. We use an L_0-L_1 relaxation technique along with proximal gradient descent to compute group-feature importance scores efficiently. Our empirical results indicate that considering group features can improve deep learning interpretation significantly.
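The rank-one claim in the first part can be sanity-checked numerically. Below is a minimal sketch, assuming PyTorch; the tiny ReLU classifier, the logit scaling used to mimic a high-confidence prediction, and all dimensions are illustrative stand-ins rather than the paper's setup. It compares the top eigenvector of the input Hessian of the cross-entropy loss with the loss gradient.

```python
import torch

torch.manual_seed(0)
d, n_classes = 20, 100                       # small input dimension, many classes (illustrative)
net = torch.nn.Sequential(                   # stand-in for a deep ReLU network
    torch.nn.Linear(d, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, n_classes),
)

x = torch.randn(d)
label = net(x).argmax().unsqueeze(0)         # use the predicted class as the label

def loss_fn(inp):
    logits = 10.0 * net(inp)                 # scale logits to mimic a high-confidence prediction
    return torch.nn.functional.cross_entropy(logits.unsqueeze(0), label)

grad = torch.autograd.functional.jacobian(loss_fn, x)   # loss gradient w.r.t. the input
hess = torch.autograd.functional.hessian(loss_fn, x)    # full d x d input Hessian

eigvals, eigvecs = torch.linalg.eigh(hess)
top_share = eigvals[-1] / eigvals.abs().sum()            # near 1 if approximately rank one
cosine = torch.nn.functional.cosine_similarity(
    grad, eigvecs[:, -1], dim=0).abs()                   # near 1 if nearly parallel to the gradient

print(f"top eigenvalue share: {top_share.item():.3f}, |cos(grad, v1)|: {cosine.item():.3f}")
```

On a confidently classified input, both printed quantities should be close to 1, matching the statement that the Hessian is approximately rank one with its leading eigenvector nearly parallel to the gradient.

For the second part, the sketch below illustrates group-sparse importance scoring with proximal gradient descent, using a group-L1 penalty as the L_0-to-L_1 relaxation. The additive-perturbation objective, the group layout, and all hyper-parameters are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def group_prox(m, lam):
    # Prox of the group-L1 penalty: shrink each group's norm by lam (group soft-thresholding).
    norms = m.norm(dim=1, keepdim=True).clamp_min(1e-12)
    return m * (norms - lam).clamp_min(0.0) / norms

def group_saliency(loss_of_perturbed, num_groups, group_size, lam=0.05, lr=0.1, steps=200):
    # Proximal gradient descent on a per-group perturbation m (one row per feature group):
    # a gradient step on the negated loss change, then group soft-thresholding, so only a
    # few groups stay active. Returns one importance score per group.
    m = torch.zeros(num_groups, group_size, requires_grad=True)
    for _ in range(steps):
        obj = -loss_of_perturbed(m)                  # maximise the loss change
        grad, = torch.autograd.grad(obj, m)
        with torch.no_grad():
            m = group_prox(m - lr * grad, lr * lam)  # gradient step + proximal step
        m.requires_grad_(True)
    return m.detach().norm(dim=1)

# Hypothetical usage: 24 input features split into 6 contiguous groups of 4.
torch.manual_seed(0)
d, g = 24, 4
net = torch.nn.Sequential(torch.nn.Linear(d, 32), torch.nn.ReLU(), torch.nn.Linear(32, 10))
x = torch.randn(d)
label = net(x).argmax().unsqueeze(0)

def loss_of_perturbed(m):
    # Loss after adding the group-structured perturbation to the input.
    return torch.nn.functional.cross_entropy(net(x + m.reshape(-1)).unsqueeze(0), label)

print(group_saliency(loss_of_perturbed, d // g, g))      # per-group importance scores
```

The prox step is what enforces group sparsity: after each gradient step, any group whose norm falls below the threshold lr*lam is zeroed out, so the groups that survive act as the selected group features.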


