Distilled Pruning: Using Synthetic Data to Win the Lottery

07/07/2023
by Luke McDermott, et al.

This work introduces a novel approach to pruning deep learning models by using distilled data. Unlike conventional strategies, which primarily focus on architectural or algorithmic optimization, our method reconsiders the role of data in pruning. Distilled datasets capture the essential patterns of larger datasets, and we demonstrate how to leverage this capability for a computationally efficient pruning process. Our approach can find sparse, trainable subnetworks (a.k.a. Lottery Tickets) up to 5x faster than Iterative Magnitude Pruning at comparable sparsity on CIFAR-10. The experimental results highlight the potential of distilled data for resource-efficient neural network pruning, model compression, and neural architecture search.
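For illustration only, below is a minimal sketch of how a distilled dataset could plug into an iterative magnitude pruning loop with lottery-ticket-style weight rewinding; it is not the authors' implementation. The tiny MLP, the random tensors standing in for a learned distilled dataset (`distilled_x`, `distilled_y`), and the round/step counts are hypothetical placeholders.

```python
# Sketch: iterative magnitude pruning where each retraining round uses a small
# distilled dataset instead of the full training set. All names and sizes here
# are assumptions for illustration, not the paper's actual setup.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32 * 3, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# Hypothetical distilled dataset: a handful of synthetic images per class.
distilled_x = torch.randn(100, 3, 32, 32)   # stand-in for learned synthetic images
distilled_y = torch.arange(10).repeat(10)   # balanced class labels

# Save the initialization so surviving weights can be rewound (lottery tickets).
init_state = {k: v.clone() for k, v in model.state_dict().items()}
loss_fn = nn.CrossEntropyLoss()

for round_idx in range(5):                  # prune-retrain rounds
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(200):                    # short retraining on distilled data only
        opt.zero_grad()
        loss = loss_fn(model(distilled_x), distilled_y)
        loss.backward()
        opt.step()

    # Remove 20% of the smallest-magnitude weights in each linear layer.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.2)

    # Rewind the surviving weights to their original initialization; the pruning
    # masks (weight_mask buffers) are kept, so sparsity accumulates across rounds.
    with torch.no_grad():
        for name, module in model.named_modules():
            if isinstance(module, nn.Linear):
                module.weight_orig.copy_(init_state[f"{name}.weight"])
```

The key difference from standard Iterative Magnitude Pruning in this sketch is that the inner retraining loop sees only the small distilled set, so each prune-retrain round is much cheaper than a pass over the full CIFAR-10 training data.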
