ESPN: Extremely Sparse Pruned Networks

06/28/2020
by Minsu Cho, et al.

Deep neural networks are often highly overparameterized, prohibiting their use in compute-limited systems. However, a line of recent works has shown that the size of deep networks can be considerably reduced by identifying a subset of neuron indicators (or a mask) corresponding to significant weights prior to training. We demonstrate that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks. Our algorithm represents a hybrid approach between single-shot network pruning methods (such as SNIP) and Lottery Ticket-type approaches. We validate our approach on several datasets and outperform several existing pruning methods in both test accuracy and compression ratio.
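The abstract does not spell out the pruning procedure, so the following is only a minimal PyTorch sketch of what a hybrid of SNIP-style saliency scoring and iterative, lottery-ticket-like mask discovery could look like. The function names (snip_scores, iterative_mask), the |grad * weight| saliency, and the linear sparsity schedule are illustrative assumptions, not the authors' exact ESPN algorithm.

```python
import torch
import torch.nn.functional as F

def snip_scores(model, x, y):
    """SNIP-style connection sensitivity |dL/dw * w| computed from one mini-batch."""
    model.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return {n: (p.grad * p).abs() for n, p in model.named_parameters()
            if p.grad is not None}

def iterative_mask(model, data_iter, target_sparsity=0.99, rounds=5):
    """Discover a binary mask over several scoring/pruning rounds, prior to training."""
    params = dict(model.named_parameters())
    masks = {n: torch.ones_like(p) for n, p in params.items()}
    for r in range(1, rounds + 1):
        x, y = next(data_iter)
        scores = snip_scores(model, x, y)
        flat = torch.cat([(scores[n] * masks[n]).flatten() for n in scores])
        # Linear schedule (an assumption): prune a growing fraction each round.
        sparsity = target_sparsity * r / rounds
        keep = max(1, int((1.0 - sparsity) * flat.numel()))
        threshold = torch.topk(flat, keep, largest=True).values.min()
        for n in scores:
            masks[n] = ((scores[n] * masks[n]) >= threshold).float()
            with torch.no_grad():
                params[n].mul_(masks[n])  # zero pruned weights so later rounds ignore them
    return masks
```

In a complete pipeline, one would then train only the surviving weights, for example by re-applying the masks to the parameters (or their gradients) after each optimizer step.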

07/21/2017 · Neuron Pruning for Compressing Deep Networks using Maxout Architectures
This paper presents an efficient and robust approach for reducing the si...

11/23/2020 · Synthesis and Pruning as a Dynamic Compression Strategy for Efficient Deep Neural Networks
The brain is a highly reconfigurable machine capable of task-specific ad...

06/22/2020 · Rapid Structural Pruning of Neural Networks with Set-based Task-Adaptive Meta-Pruning
As deep neural networks are growing in size and being increasingly deplo...

04/30/2020 · Pruning artificial neural networks: a way to find well-generalizing, high-entropy sharp minima
Recently, a race towards the simplification of deep networks has begun, ...

08/24/2020 · Hierarchical Adaptive Lasso: Learning Sparse Neural Networks with Shrinkage via Single Stage Training
Deep neural networks achieve state-of-the-art performance in a variety o...

09/26/2019 · Drawing early-bird tickets: Towards more efficient training of deep networks
(Frankle & Carbin, 2019) shows that there exist winning tickets (small b...

07/30/2020 · Growing Efficient Deep Networks by Structured Continuous Sparsification
We develop an approach to training deep networks while dynamically adjus...