Theoretical Characterization of How Neural Network Pruning Affects its Generalization

01/01/2023
by   Hongru Yang, et al.

It has been observed in practice that applying pruning-at-initialization methods to neural networks and training the sparsified networks can not only retain the test performance of the original dense models, but sometimes even slightly boost generalization. A theoretical understanding of these empirical observations is yet to be developed. This work makes a first attempt to study how different pruning fractions affect the model's gradient descent dynamics and generalization. Specifically, it considers a classification task for overparameterized two-layer neural networks, where the network is randomly pruned at a given rate at initialization. It is shown that as long as the pruning fraction is below a certain threshold, gradient descent can drive the training loss toward zero and the network exhibits good generalization performance. More surprisingly, the generalization bound improves as the pruning fraction grows. To complement this positive result, the work also presents a negative result: there exists a large pruning fraction for which gradient descent can still drive the training loss toward zero (by memorizing noise), yet the generalization performance is no better than random guessing. This further suggests that pruning can alter the feature learning process, which leads to the performance drop of the pruned neural network.
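As a concrete illustration of the setting described above, the sketch below randomly prunes the first-layer weights of an overparameterized two-layer ReLU network at initialization and then trains only the surviving weights with gradient descent on a toy binary classification task. The pruning fraction `alpha`, the synthetic data generator, the squared loss, and all hyperparameters are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch (assumed setup, not the paper's exact one): prune a random
# fraction of first-layer weights at initialization, then train with plain
# gradient descent while keeping the pruned entries fixed at zero.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n points in d dimensions, labels +/-1 from a random linear teacher.
n, d, m = 200, 20, 1024                 # samples, input dim, hidden width (overparameterized)
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y = np.sign(X @ w_star)

alpha = 0.5                              # pruning fraction: share of first-layer weights removed
W = rng.standard_normal((m, d)) / np.sqrt(d)       # first-layer weights
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)   # fixed second layer (common in two-layer analyses)
mask = (rng.random((m, d)) > alpha).astype(float)  # random pruning mask drawn once at initialization
W *= mask                                # prune before any training

lr, steps = 0.5, 500
for t in range(steps):
    H = np.maximum(X @ W.T, 0.0)         # ReLU hidden activations, shape (n, m)
    out = H @ a                          # network outputs, shape (n,)
    resid = out - y
    loss = 0.5 * np.mean(resid ** 2)
    # Gradient of the squared loss w.r.t. W; only surviving weights are updated.
    grad_W = ((resid[:, None] * (H > 0) * a[None, :]).T @ X) / n
    W -= lr * grad_W * mask              # masked update keeps pruned entries at zero
    if t % 100 == 0:
        acc = np.mean(np.sign(out) == y)
        print(f"step {t:4d}  loss {loss:.4f}  train acc {acc:.3f}")
```

Sweeping `alpha` in such a toy setup is one way to probe the qualitative behavior the abstract describes: moderate pruning fractions still allow the training loss to be driven toward zero, while very aggressive pruning can leave a network that fits the training data without generalizing.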


