Investigating the Compositional Structure Of Deep Neural Networks

02/17/2020
by Francesco Craighero, et al.

The current understanding of deep neural networks can only partially explain how input structure, network parameters, and optimization algorithms jointly contribute to the strong generalization typically observed in many real-world applications. To improve the comprehension and interpretability of deep neural networks, we here introduce a novel theoretical framework based on the compositional structure of piecewise linear activation functions. By defining a directed acyclic graph that represents the composition of activation patterns through the network layers, it is possible to characterize each input instance with respect to both its predicted label and the specific (linear) transformation used to perform the prediction. Preliminary tests on the MNIST dataset show that our method can group input instances according to their similarity in the internal representation of the neural network, providing an intuitive measure of input complexity.
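To make the core idea concrete, the sketch below computes the per-layer binary activation pattern of a ReLU network for each input and groups inputs that share the same pattern, i.e. inputs processed by the same composition of linear transformations. This is a minimal illustration under stated assumptions, not the paper's actual procedure: the layer sizes are arbitrary and the weights are random here, whereas in practice they would come from a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-hidden-layer ReLU network (sizes and random weights are
# assumptions for demonstration; a real experiment would use a trained model).
sizes = [784, 32, 32, 10]
weights = [rng.normal(0, 0.1, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

def activation_pattern(x):
    """Return the per-layer binary ReLU activation pattern for input x.

    Inputs sharing the same pattern lie in the same linear region, i.e.
    they are mapped by the same composition of linear transformations.
    """
    pattern = []
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        pre = W @ h + b
        pattern.append(tuple(pre > 0))  # which units fire at this layer
        h = np.maximum(pre, 0)
    return tuple(pattern)

# Group a batch of inputs by activation pattern: each group corresponds
# to one path through the composition graph over the layers.
X = rng.normal(size=(100, 784))
groups = {}
for i, x in enumerate(X):
    groups.setdefault(activation_pattern(x), []).append(i)

print(f"{len(groups)} distinct activation patterns among {len(X)} inputs")
```

With trained weights and MNIST images in place of the random data, the resulting grouping would indicate which inputs the network treats as similar at the level of its internal linear regions.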


Related research

02/08/2014 · On the Number of Linear Regions of Deep Neural Networks
We study the complexity of functions computable by deep feedforward neur...

09/24/2020 · Theoretical Analysis of the Advantage of Deepening Neural Networks
We propose two new criteria to understand the advantage of deepening neu...

04/09/2020 · Mehler's Formula, Branching Process, and Compositional Kernels of Deep Neural Networks
In this paper, we utilize a connection between compositional kernels and...

06/02/2019 · NeuralDivergence: Exploring and Understanding Neural Networks by Comparing Activation Distributions
As deep neural networks are increasingly used in solving high-stake prob...

03/22/2023 · Fixed points of arbitrarily deep 1-dimensional neural networks
In this paper, we introduce a new class of functions on ℝ that is closed...

09/25/2019 · Switched linear projections and inactive state sensitivity for deep neural network interpretability
We introduce switched linear projections for expressing the activity of ...

06/01/2022 · Composition of Relational Features with an Application to Explaining Black-Box Predictors
Relational machine learning programs like those developed in Inductive L...
