COPS: Controlled Pruning Before Training Starts

07/27/2021
by Paul Wimmer, et al.

State-of-the-art deep neural network (DNN) pruning techniques, applied one-shot before training starts, evaluate sparse architectures with the help of a single criterion, called a pruning score. Pruning weights based on a solitary score works well for some architectures and pruning rates but may fail for others. As a common baseline for pruning scores, we introduce the notion of a generalized synaptic score (GSS). In this work we do not concentrate on a single pruning criterion, but provide a framework for combining arbitrary GSSs to create more powerful pruning strategies. These COmbined Pruning Scores (COPS) are obtained by solving a constrained optimization problem. Optimizing for more than one score prevents the sparse network from overly specializing on an individual task, and thus COntrols Pruning before training Starts. The combinatorial optimization problem given by COPS is relaxed to a linear program (LP). This LP is solved analytically and yields a solution for COPS. Furthermore, we propose and evaluate an algorithm that computes this solution numerically for two scores. Solving COPS in this way has lower complexity than the best general LP solvers. In our experiments we compared pruning with COPS against state-of-the-art methods for different network architectures and image classification tasks and obtained improved results.
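To make the combined-score idea concrete, here is a minimal Python sketch of a two-score LP relaxation of this kind. It is not the authors' implementation: the function name cops_mask, the nonnegativity of the scores, the constraint that the second score keeps at least a min_b_fraction of its best achievable mass, the top-k rounding of the relaxed solution, and the use of a general LP solver are all assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def cops_mask(score_a, score_b, keep_ratio, min_b_fraction=0.9):
    """Combine two (nonnegative) pruning scores via an LP relaxation.

    Relax the binary keep/prune decision m in {0,1}^n to m in [0,1]^n
    and solve:
        maximize    score_a . m
        subject to  score_b . m >= min_b_fraction * best_b
                    sum(m) = k,   0 <= m <= 1
    where best_b is the largest score_b mass achievable with k kept
    weights, then round by keeping the k largest entries of m.
    """
    n = len(score_a)
    k = int(round(keep_ratio * n))

    # Best achievable score_b mass when exactly k weights are kept.
    best_b = np.sort(score_b)[::-1][:k].sum()

    # linprog minimizes, so negate the objective.
    c = -np.asarray(score_a, dtype=float)
    # -score_b . m <= -min_b_fraction * best_b  <=>  score_b . m >= bound
    A_ub = -np.asarray(score_b, dtype=float)[None, :]
    b_ub = np.array([-min_b_fraction * best_b])
    # Exactly k weights are kept (in the relaxed sense).
    A_eq = np.ones((1, n))
    b_eq = np.array([float(k)])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0.0, 1.0), method="highs")

    # Round the relaxed solution: keep the k largest entries of m.
    mask = np.zeros(n, dtype=bool)
    mask[np.argsort(res.x)[::-1][:k]] = True
    return mask

# Example: SNIP-style saliency |w * g| as score_a, magnitude |w| as score_b.
rng = np.random.default_rng(0)
w, g = rng.normal(size=1000), rng.normal(size=1000)
mask = cops_mask(np.abs(w * g), np.abs(w), keep_ratio=0.1)
print(mask.sum())  # 100 weights kept
```

With min_b_fraction = 0 the constraint is vacuous for nonnegative scores and the LP degenerates to ordinary top-k pruning under score_a alone; larger values force the kept weights to also score well under score_b. The paper solves this LP analytically with lower complexity than a general solver; the sketch above uses SciPy's generic solver purely for illustration.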


