Experiments on Properties of Hidden Structures of Sparse Neural Networks

07/27/2021
by Julian Stier, et al.

Sparsity in the structure of Neural Networks can lead to less energy consumption, less memory usage, faster computation times on suitable hardware, and automated machine learning. If sparsity gives rise to certain kinds of structure, it can explain automatically obtained features during learning. We provide insights into experiments in which we show how sparsity can be achieved through prior initialization, pruning, and during learning, and we answer questions on the relationship between the structure of Neural Networks and their performance. This includes the first work inducing priors from network theory into Recurrent Neural Networks and an architectural performance prediction during a Neural Architecture Search. Within our experiments, we show how magnitude class-blinded pruning achieves 97.5% accuracy with compression and re-training, which is 0.5 points more than without compression; that magnitude class-uniform pruning is significantly inferior to it; and how a genetic search enhanced with performance prediction achieves 82.4% accuracy. Further, performance prediction for Recurrent Networks learning the Reber grammar shows an R^2 of up to 0.81 given only structural information.
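To make the pruning comparison concrete, the following minimal PyTorch sketch contrasts the two magnitude criteria named above: class-blinded pruning ranks all weights globally by magnitude and removes the smallest ones wherever they occur, while class-uniform pruning removes the same fraction from every layer separately. The helper name, the pruning fraction, and the toy model are illustrative assumptions, not the paper's code.

import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, fraction: float, blinded: bool = True) -> None:
    # Zero out the smallest-magnitude weights; biases are left untouched.
    # blinded=True  -> class-blinded: one global threshold over all layers.
    # blinded=False -> class-uniform: a per-layer threshold, so every layer
    #                  loses the same fraction of its weights.
    weights = [m.weight for m in model.modules() if isinstance(m, nn.Linear)]
    with torch.no_grad():
        if blinded:
            magnitudes = torch.cat([w.abs().flatten() for w in weights])
            threshold = torch.quantile(magnitudes, fraction)
            for w in weights:
                w.mul_((w.abs() > threshold).float())
        else:
            for w in weights:
                threshold = torch.quantile(w.abs().flatten(), fraction)
                w.mul_((w.abs() > threshold).float())

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
magnitude_prune(model, fraction=0.9, blinded=True)  # drop the 90 percent smallest weights
# Re-training would follow, with the zero mask re-applied after each
# optimizer step so that pruned connections stay removed (omitted here).

The intuition behind the reported gap is that a global threshold lets layers with many redundant near-zero weights absorb most of the pruning budget, whereas the per-layer quota of class-uniform pruning forces sensitive layers to give up comparatively important weights.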


