Fast Convex Pruning of Deep Neural Networks

06/17/2018
by Alireza Aghasi, et al.

We develop a fast, tractable technique called Net-Trim for simplifying a trained neural network. The method is a convex post-processing module that prunes (sparsifies) a trained network layer by layer while preserving the internal responses. We present a comprehensive analysis of Net-Trim from both the algorithmic and sample complexity standpoints, centered on a fast, scalable convex optimization program. Our analysis includes consistency results between the initial and retrained models, as well as guarantees on the number of training samples needed to discover a network that can be expressed using a certain number of nonzero terms. Specifically, if there is a set of weights that uses at most s terms and can re-create the layer outputs from the layer inputs, we can find these weights from O(s log(N/s)) samples, where N is the input size. These theoretical results are similar to those for sparse regression using the Lasso, and our analysis uses some of the same recently developed tools (namely results on the concentration of measure and convex analysis). Finally, we propose an algorithmic framework based on the alternating direction method of multipliers (ADMM), which allows a fast and simple implementation of Net-Trim for network pruning and compression.
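The core of Net-Trim, as the abstract describes it, is a per-layer convex program: given a layer's inputs X and its recorded responses Y, find the sparsest weights that approximately reproduce Y, solved with ADMM. The Python sketch below illustrates that idea with a simplified Lasso-style surrogate, minimizing 0.5*||X W - Y||_F^2 + lam*||W||_1 by a standard ADMM loop. It is an assumption-laden illustration, not the paper's exact formulation: the function name prune_layer_admm and the parameters lam and rho are invented for this sketch, and Net-Trim's actual program handles the ReLU nonlinearity through a convex constraint on the responses, which is omitted here.

```python
import numpy as np

def soft_threshold(V, t):
    """Elementwise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

def prune_layer_admm(X, Y, lam=0.1, rho=1.0, n_iter=200):
    """Sparsify one layer's weights with ADMM (illustrative surrogate).

    Solves  minimize_W  0.5 * ||X @ W - Y||_F^2 + lam * ||W||_1,
    where X (samples x N) holds the layer inputs and Y the layer
    responses to preserve. This is a Lasso-style stand-in for
    Net-Trim's constrained convex program, not the paper's exact
    formulation.
    """
    n_features = X.shape[1]
    # Factor the matrix used by every W-update once, up front.
    L = np.linalg.cholesky(X.T @ X + rho * np.eye(n_features))
    XtY = X.T @ Y

    W = np.zeros((n_features, Y.shape[1]))
    Z = np.zeros_like(W)   # sparse copy of W
    U = np.zeros_like(W)   # scaled dual variable
    for _ in range(n_iter):
        # W-update: ridge-regularized least squares via the Cholesky factor.
        rhs = XtY + rho * (Z - U)
        W = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # Z-update: soft-thresholding drives small entries to exactly zero.
        Z = soft_threshold(W + U, lam / rho)
        # Dual ascent on the consensus constraint W = Z.
        U += W - Z
    return Z  # the sparse weight estimate

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 64))      # layer inputs, N = 64
    mask = rng.random((64, 32)) < 0.1       # sparse ground-truth support
    W0 = np.where(mask, rng.standard_normal((64, 32)), 0.0)
    Y = X @ W0                              # layer responses to reproduce
    W_sparse = prune_layer_admm(X, Y, lam=5.0)
    print("nonzeros:", np.count_nonzero(W_sparse), "of", W_sparse.size)
```

The demo at the bottom loosely mirrors the sample-complexity claim: when the ground-truth layer uses few nonzero weights, the l1 penalty recovers a sparse weight matrix from a modest number of samples, just as in sparse regression with the Lasso.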


Related research

11/16/2016 · Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee
We introduce and analyze a new technique for model reduction for deep ne...

02/15/2018 · Systematic Weight Pruning of DNNs using Alternating Direction Method of Multipliers
We present a systematic weight pruning framework of deep neural networks...

10/12/2021 · Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks
The lottery ticket hypothesis (LTH) states that learning on a properly p...

11/01/2019 · Deep Learning for space-variant deconvolution in galaxy surveys
Deconvolution of large survey images with millions of galaxies requires ...

08/19/2020 · On the Approximation Lower Bound for Neural Nets with Random Weights
A random net is a shallow neural network where the hidden layer is froze...

07/26/2019 · Momentum-Net: Fast and convergent iterative neural network for inverse problems
Iterative neural networks (INN) are rapidly gaining attention for solvin...

09/17/2018 · Self Configuration in Machine Learning
In this paper we first present a class of algorithms for training multi-...
