Neural Network Layer Algebra: A Framework to Measure Capacity and Compression in Deep Learning

07/02/2021
by Alberto Badias, et al.

We present a new framework to measure the intrinsic properties of (deep) neural networks. While we focus on convolutional networks, our framework can be extrapolated to any network architecture. In particular, we evaluate two network properties, namely, capacity (related to expressivity) and compression, both of which depend only on the network structure and are independent of the training and test data. To this end, we propose two metrics: the first one, called layer complexity, captures the architectural complexity of any network layer; the second one, called layer intrinsic power, encodes how data is compressed along the network. The metrics are based on the concept of layer algebra, which is also introduced in this paper. This concept builds on the idea that global properties depend on the network topology, and that the leaf nodes of any neural network can be approximated using local transfer functions, thereby allowing a simple computation of the global metrics. We also compare the properties of state-of-the-art architectures using our metrics and use these properties to analyze classification accuracy on benchmark datasets.
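The abstract's core idea, that structure-only metrics can be computed by aggregating local per-layer quantities over the network topology, can be illustrated with a minimal sketch. Note that the paper's actual definitions of layer complexity and layer intrinsic power are not reproduced here; the local metrics below (parameter count per layer, output-to-input channel ratio) are placeholder assumptions chosen only to show the aggregation pattern.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    """A simplified network layer described only by its structure."""
    name: str
    in_channels: int
    out_channels: int
    kernel_size: int  # use 1 for dense/pointwise layers

def layer_complexity(layer: Layer) -> int:
    # Placeholder local metric: number of weights in the layer's
    # transfer function (NOT the paper's actual definition).
    return layer.in_channels * layer.out_channels * layer.kernel_size ** 2

def layer_intrinsic_power(layer: Layer) -> float:
    # Placeholder compression metric: channel expansion/contraction
    # ratio (again, an illustrative stand-in).
    return layer.out_channels / layer.in_channels

def network_metrics(layers: list[Layer]) -> tuple[int, float]:
    # Global metrics arise by aggregating the local ones along the
    # topology; for a simple chain this is a sum and a product.
    total_complexity = sum(layer_complexity(l) for l in layers)
    net_compression = 1.0
    for l in layers:
        net_compression *= layer_intrinsic_power(l)
    return total_complexity, net_compression

# A toy three-layer CNN: both metrics depend only on the architecture,
# never on training or test data.
cnn = [
    Layer("conv1", in_channels=3, out_channels=16, kernel_size=3),
    Layer("conv2", in_channels=16, out_channels=32, kernel_size=3),
    Layer("fc", in_channels=32, out_channels=10, kernel_size=1),
]

complexity, compression = network_metrics(cnn)
```

For a chain topology the aggregation is trivial; for a general directed acyclic graph, the per-layer quantities would instead be combined along the graph structure, which is what the layer-algebra formalism is introduced to handle.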

