Explaining a prediction in some nonlinear models

04/21/2019
by Cosimo Izzo, et al.

In this article we analyse how to compute the contribution of each input value to the aggregate output of some nonlinear models. Regression and classification applications are presented, together with related algorithms for deep neural networks. The proposed approach merges two methods from the literature: integrated gradients and deep Taylor decomposition.

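As background for the abstract above, the sketch below shows the standard integrated-gradients computation in NumPy. The toy model f, its analytic gradient, and the all-zero baseline are illustrative assumptions; this is not the article's merged integrated-gradient / deep Taylor method, only the integrated-gradients ingredient it builds on.

```python
# Minimal sketch of integrated gradients (Riemann-sum approximation).
# The model f and its gradient are toy stand-ins, not the article's method.
import numpy as np

def f(x):
    # Toy nonlinear "model": a smooth scalar function of a 3-dimensional input.
    return np.tanh(x[0]) + x[1] ** 2 * np.tanh(x[2])

def grad_f(x):
    # Analytic gradient of the toy model above.
    return np.array([
        1.0 - np.tanh(x[0]) ** 2,
        2.0 * x[1] * np.tanh(x[2]),
        x[1] ** 2 * (1.0 - np.tanh(x[2]) ** 2),
    ])

def integrated_gradients(x, baseline, steps=50):
    # Approximates (x_i - x'_i) * integral_0^1 dF/dx_i(x' + a (x - x')) da
    # with a midpoint Riemann sum over the straight path from baseline to x.
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.array([grad_f(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([1.0, 0.5, -2.0])
baseline = np.zeros_like(x)            # a common (assumed) baseline choice
attr = integrated_gradients(x, baseline)
print("attributions:", attr)
print("sum of attributions:", attr.sum())        # approx. f(x) - f(baseline)
print("f(x) - f(baseline):", f(x) - f(baseline))
```

By the completeness property of integrated gradients, the attributions sum (up to discretisation error) to the difference between the model output at the input and at the baseline, which the last two print statements illustrate.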

Related research

12/08/2015 - Explaining NonLinear Classification Decisions with Deep Taylor Decomposition
Nonlinear methods such as Deep Neural Networks (DNNs) are the gold stand...

05/05/2019 - Nonlinear Approximation and (Deep) ReLU Networks
This article is concerned with the approximation and expressive powers o...

07/23/2021 - Robust Explainability: A Tutorial on Gradient-Based Attribution Methods for Deep Neural Networks
With the rise of deep neural networks, the challenge of explaining the p...

03/15/2019 - Algorithms for Verifying Deep Neural Networks
Deep neural networks are widely used for nonlinear function approximatio...

10/30/2018 - Nonlinear Prediction of Multidimensional Signals via Deep Regression with Applications to Image Coding
Deep convolutional neural networks (DCNN) have enjoyed great successes i...

09/27/2017 - Case Study: Explaining Diabetic Retinopathy Detection Deep CNNs via Integrated Gradients
In this report, we applied integrated gradients to explaining a neural n...

04/14/2020 - Learning from Aggregate Observations
We study the problem of learning from aggregate observations where super...
