Computational Complexity of Detecting Proximity to Losslessly Compressible Neural Network Parameters

06/05/2023
by Matthew Farrugia-Roberts, et al.

To better understand complexity in neural networks, we theoretically investigate the idealised phenomenon of lossless network compressibility, whereby an identical function can be implemented with a smaller network. We give an efficient formal algorithm for optimal lossless compression in the setting of single-hidden-layer hyperbolic tangent networks. To measure lossless compressibility, we define the rank of a parameter as the minimum number of hidden units required to implement the same function. Losslessly compressible parameters are atypical, but their existence has implications for nearby parameters. We define the proximate rank of a parameter as the rank of the most compressible parameter within a small L^∞ neighbourhood. Unfortunately, detecting nearby losslessly compressible parameters is not so easy: we show that bounding the proximate rank is an NP-complete problem, using a reduction from Boolean satisfiability via a geometric problem involving covering points in the plane with small squares. These results underscore the computational complexity of measuring neural network complexity, laying a foundation for future theoretical and empirical work in this direction.
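The setting of the first result can be made concrete. A single-hidden-layer hyperbolic tangent network with h hidden units computes f(x) = Σ_i a_i tanh(w_i · x + b_i) + c, and the classical reducibility conditions for such networks say a hidden unit is removable exactly when its outgoing weight a_i is zero, its incoming weight vector w_i is zero (the unit then computes the constant tanh(b_i), which folds into the output bias), or its incoming weights and bias match another unit's up to a global sign (the two units merge because tanh is odd). The Python sketch below applies these merge-and-prune rules; it is an illustration under an exact-arithmetic assumption (units are merged via rounded keys), not the paper's algorithm verbatim, and the function name is our own.

```python
import numpy as np

def compress_tanh_network(W, b, a, c):
    """Losslessly shrink f(x) = sum_i a[i]*tanh(W[i] @ x + b[i]) + c.

    W: (h, d) incoming weights; b: (h,) biases;
    a: (h,) outgoing weights; c: scalar output bias.
    Applies the classical tanh reducibility rules:
      (1) drop units with zero outgoing weight,
      (2) fold constant units (w_i = 0) into the output bias,
      (3) merge units whose (w_i, b_i) agree up to sign, using
          the oddness of tanh: tanh(-z) = -tanh(z).
    """
    merged = {}          # sign-normalised (w_i, b_i) -> accumulated a_i
    c_out = float(c)
    for w_i, b_i, a_i in zip(W, b, a):
        if not np.any(w_i):                    # rule (2): constant unit
            c_out += a_i * np.tanh(b_i)
            continue
        v = np.append(w_i, b_i)
        sign = 1.0 if v[np.flatnonzero(v)[0]] > 0 else -1.0
        key = tuple(np.round(sign * v, 12))    # exact-merge assumption
        merged[key] = merged.get(key, 0.0) + sign * a_i
    units = [(np.array(k), a_k) for k, a_k in merged.items() if a_k != 0.0]
    if not units:                              # rule (1) removed every unit
        return np.zeros((0, W.shape[1])), np.zeros(0), np.zeros(0), c_out
    V = np.stack([u for u, _ in units])
    return V[:, :-1], V[:, -1], np.array([a_k for _, a_k in units]), c_out
```

The returned network implements the same function with at most as many hidden units; per the abstract, the paper's algorithm achieves the optimum, so that the number of units in an optimally compressed network is exactly the parameter's rank.

The hardness result, by contrast, concerns the neighbourhood version of the question. Verifying a proposed solution to the intermediate geometric problem is straightforward, which is what places it in NP; a hedged checker is sketched below (the function name and centre-list interface are our own, and the exact covering problem used in the reduction may differ in details such as open versus closed squares).

```python
def covers(points, centers, side):
    """Do axis-aligned squares of the given side length, centred at
    `centers`, cover every point? points, centers: lists of (x, y)."""
    half = side / 2.0
    return all(
        any(abs(px - cx) <= half and abs(py - cy) <= half
            for (cx, cy) in centers)
        for (px, py) in points
    )
```

The difficulty established in the paper lies in the search direction: deciding whether some placement of small squares covers all the points, and hence whether a nearby parameter of low rank exists, is NP-complete.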


Related research

11/30/2018 · A Framework for Fast and Efficient Neural Network Compression
Network compression reduces the computational complexity and memory cons...

06/13/2012 · The Computational Complexity of Sensitivity Analysis and Parameter Tuning
While known algorithms for sensitivity analysis and parameter tuning in ...

02/01/2021 · Sampling and Complexity of Partition Function
The number partition problem is a well-known problem, which is one of 21...

03/22/2023 · Low Rank Optimization for Efficient Deep Learning: Making A Balance between Compact Architecture and Fast Training
Deep neural networks have achieved great success in many data processing...

08/01/1998 · The Computational Complexity of Probabilistic Planning
We examine the computational complexity of testing and finding small pla...

06/28/2018 · Automatic Rank Selection for High-Speed Convolutional Neural Network
Low-rank decomposition plays a central role in accelerating convolutiona...

05/08/2023 · Functional Equivalence and Path Connectivity of Reducible Hyperbolic Tangent Networks
Understanding the learning process of artificial neural networks require...
