
Learning how to explain neural networks: PatternNet and PatternAttribution

05/16/2017
by Pieter-Jan Kindermans, et al.
Google
Berlin Institute of Technology (Technische Universität Berlin)

DeConvNet, Guided BackProp, and LRP were invented to better understand deep neural networks. We show that these methods do not produce the theoretically correct explanation even for a linear model, yet they are routinely applied to multi-layer networks with millions of parameters. This is cause for concern, since linear models are the simplest neural networks. We argue that explanation methods for neural nets should work reliably in the limit of simplicity: the linear model. Based on our analysis of linear models, we propose a generalization that yields two explanation techniques, PatternNet and PatternAttribution, that are theoretically sound for linear models and produce improved explanations for deep networks.
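To make the linear-model argument concrete, here is a minimal NumPy sketch (the data setup and all variable names are illustrative, not taken from the paper's code): each input is a signal component plus a distractor, the model's weight vector w must cancel the distractor and therefore does not point in the signal direction, while a pattern of the form a = cov(x, y) / var(y), the kind of signal estimator the paper proposes for linear neurons, recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy data: each input x is a signal y * a_s plus a
# distractor d that carries no information about y.
n = 10_000
y = rng.normal(size=n)                       # signal strength
a_s = np.array([1.0, 0.0])                   # true signal direction
d = rng.normal(size=n)[:, None] * np.array([1.0, 1.0])  # distractor along (1, 1)
X = y[:, None] * a_s + d                     # observed inputs, shape (n, 2)

# The weight vector w = (1, -1) recovers y exactly: w . x = (y + d) - d = y.
# To do so it must cancel the distractor, so w is a filter, not the signal.
w = np.array([1.0, -1.0])
y_hat = X @ w                                # equals y

# Gradient-style explanations of a linear model return w itself,
# which points away from the true signal direction (1, 0).
print("filter w:", w)

# The pattern a = cov(x, y) / var(y) instead recovers the signal direction.
yc = y_hat - y_hat.mean()
a = (X - X.mean(axis=0)).T @ yc / (yc @ yc)
print("pattern a:", a.round(3))              # ~ (1, 0) = a_s
```

This is the sense in which gradient, DeConvNet, and Guided BackProp explanations of a linear model all reduce to functions of the filter w: they explain how the model cancels the distractor, not where the signal lies, whereas the pattern-based view separates the two.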
