Synthesis and Pruning as a Dynamic Compression Strategy for Efficient Deep Neural Networks

11/23/2020
by Alastair Finlinson, et al.

The brain is a highly reconfigurable machine capable of task-specific adaptations: it continually rewires itself into a more optimal configuration as it solves problems. We propose a novel strategic synthesis algorithm for feedforward networks that draws directly on the brain's behaviour when learning. The proposed approach analyses the network and ranks weights by their magnitude. Unlike existing approaches that advocate random selection, we select highly performing nodes as starting points for new edges and exploit the Gaussian distribution over the weights to select the corresponding endpoints. The strategy aims to produce only useful connections, resulting in a smaller residual network structure. The approach is complemented with pruning to further the compression. We apply the techniques to deep feedforward networks. The residual sub-networks formed by the synthesis approaches in this work share common sub-networks with similarities of up to 90%. With the best-performing synthesis approach, we observe improvements in compression.
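The abstract describes two complementary operations on a network's connectivity: magnitude-based pruning of weak edges and synthesis of new edges starting from highly performing nodes, with endpoints chosen using the Gaussian distribution over the weights. The sketch below is one plausible reading of that loop in NumPy, representing a layer as a dense weight matrix with a binary connectivity mask. The node-ranking rule (total outgoing weight magnitude), the use of the fitted Gaussian to initialise new edges, and all function names are illustrative assumptions, not the paper's exact algorithm.

    import numpy as np

    def magnitude_prune(W, mask, prune_frac=0.1):
        """Disable the weakest active connections (magnitude pruning)."""
        active = mask.astype(bool)
        if not active.any():
            return mask
        # Weights whose magnitude falls in the bottom prune_frac are cut.
        threshold = np.quantile(np.abs(W[active]), prune_frac)
        new_mask = mask.copy()
        new_mask[active & (np.abs(W) <= threshold)] = 0
        return new_mask

    def synthesize_edges(W, mask, n_new=10, rng=None):
        """Grow new edges from high-magnitude ("highly performing") source
        nodes. New weights are drawn from a Gaussian fitted to the active
        weights; this endpoint/initialisation rule is an assumption."""
        rng = np.random.default_rng() if rng is None else rng
        active = mask.astype(bool)
        # Rank source nodes by the total magnitude of their outgoing edges.
        out_strength = np.abs(W * mask).sum(axis=1)
        sources = np.argsort(out_strength)[::-1]
        if active.any():
            mu, sigma = W[active].mean(), W[active].std() + 1e-8
        else:
            mu, sigma = 0.0, 0.1  # fallback for an empty mask
        new_W, new_mask = W.copy(), mask.copy()
        added = 0
        for i in sources:
            if added >= n_new:
                break
            # Candidate endpoints are nodes not yet connected to source i.
            candidates = np.where(new_mask[i] == 0)[0]
            if candidates.size == 0:
                continue
            j = rng.choice(candidates)
            new_mask[i, j] = 1
            new_W[i, j] = rng.normal(mu, sigma)  # Gaussian-initialised edge
            added += 1
        return new_W, new_mask

    # Example: alternate pruning and synthesis on one 8x8 layer.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(8, 8))
    mask = (rng.random((8, 8)) < 0.5).astype(int)
    mask = magnitude_prune(W, mask, prune_frac=0.2)
    W, mask = synthesize_edges(W, mask, n_new=4, rng=rng)

During training one would interleave such synthesis and pruning steps between epochs, so that the surviving mask converges toward the compact residual sub-network the abstract refers to.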


