A Framework for Neural Network Pruning Using Gibbs Distributions

06/08/2020
by Alex Labach, et al.

Neural network pruning is an important technique for creating efficient machine learning models that can run on edge devices. We propose a new, highly flexible approach to neural network pruning based on Gibbs distributions. We apply it with Hamiltonians that are based on weight magnitude, using the annealing capabilities of Gibbs distributions to smoothly move from regularization to adaptive pruning during an ordinary neural network training schedule. This method can be used for either unstructured or structured pruning, and we provide explicit formulations for both. We compare our proposed method to several established pruning methods on ResNet variants and find that it outperforms them for unstructured, kernel-wise, and filter-wise pruning.
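The abstract describes sampling pruning decisions from a Gibbs distribution whose Hamiltonian is based on weight magnitude, with annealing carrying the model from soft regularization to hard pruning over a training run. As a rough illustration of that idea for the unstructured case, here is a minimal NumPy sketch assuming an independent per-weight Gibbs factor: each weight is kept with probability given by a logistic function of its magnitude relative to a threshold, scaled by a temperature. The function name gibbs_prune_mask, the threshold parameter, and the logistic form are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def gibbs_prune_mask(weights, threshold, temperature, rng=None):
        # Hypothetical per-weight Hamiltonian: low energy (likely kept) for
        # large magnitudes, high energy (likely pruned) below the threshold.
        rng = np.random.default_rng() if rng is None else rng
        energy = threshold - np.abs(weights)
        # Gibbs/Boltzmann keep probability: a logistic factor in -energy / T.
        keep_prob = 1.0 / (1.0 + np.exp(energy / temperature))
        return (rng.random(weights.shape) < keep_prob).astype(weights.dtype)

    # Annealing the temperature hardens the mask: at high temperature,
    # weights near the threshold are dropped stochastically (a
    # regularization-like effect), while as the temperature approaches zero
    # the samples converge to deterministic magnitude pruning.
    w = np.random.default_rng(0).standard_normal(8)
    for t in (1.0, 0.3, 0.05):
        print(t, gibbs_prune_mask(w, threshold=0.5, temperature=t))

For the structured (kernel-wise or filter-wise) variants mentioned in the abstract, the same sketch would apply the energy to a per-group magnitude statistic, such as a filter norm, rather than to individual weights.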


Related research

01/14/2020 · On Iterative Neural Network Pruning, Reinitialization, and the Similarity of Masks
We examine how recently documented, fundamental phenomena in deep learni...

03/26/2023 · Exploring the Performance of Pruning Methods in Neural Networks: An Empirical Study of the Lottery Ticket Hypothesis
In this paper, we explore the performance of different pruning methods i...

01/22/2022 · Iterative Activation-based Structured Pruning
Deploying complex deep learning models on edge devices is challenging be...

12/11/2018 · A Main/Subsidiary Network Framework for Simplifying Binary Neural Network
To reduce memory footprint and run-time latency, techniques such as neur...

12/10/2022 · Weakest link pruning of a dendrogram
Hierarchical clustering is a popular method for identifying distinct gro...

03/30/2020 · How Not to Give a FLOP: Combining Regularization and Pruning for Efficient Inference
The challenge of speeding up deep learning models during the deployment ...

01/09/2020 · Campfire: Compressable, Regularization-Free, Structured Sparse Training for Hardware Accelerators
This paper studies structured sparse training of CNNs with a gradual pru...
