Pruning Convolutional Filters using Batch Bridgeout

09/23/2020
by Najeeb Khan, et al.

State-of-the-art computer vision models are rapidly increasing in capacity, with parameter counts that far exceed the number required to fit the training set. This overparameterization improves both optimization and generalization performance. However, the sheer size of contemporary models incurs large inference costs and limits their use on resource-constrained devices. To reduce these costs, convolutional filters in a trained network can be pruned, lowering its run-time memory and computational requirements. Severe post-training pruning, however, degrades performance when the training algorithm has produced dense weight vectors. We propose the use of Batch Bridgeout, a sparsity-inducing stochastic regularization scheme, to train neural networks so that they can be pruned efficiently with minimal degradation in performance. We evaluate the proposed method on the common computer vision models VGGNet, ResNet, and Wide-ResNet on the CIFAR image classification task. For all of the networks, experimental results show that networks trained with Batch Bridgeout achieve higher accuracy across a wide range of pruning intensities than those trained with Dropout or weight decay regularization.
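To make the mechanism concrete, the sketch below illustrates the generic Bridgeout rule on which the paper builds: during training each weight w is perturbed as w̃ = w + sign(w)·|w|^(q/2)·(m/p − 1) with m ~ Bernoulli(p), which in expectation acts like an L_q ("bridge") penalty, so q < 2 pushes weights toward sparsity; after training, filters with small magnitude can then be removed. This is a minimal PyTorch sketch under those assumptions: the names BridgeoutConv2d and filters_to_keep and the hyperparameters p and q are illustrative, not the paper's reference code, and the Batch Bridgeout variant the paper actually proposes organizes this noise at the mini-batch level in a way not reproduced here.

```python
# A minimal sketch, assuming the generic Bridgeout perturbation
# w_tilde = w + sign(w) * |w|^(q/2) * (m/p - 1), m ~ Bernoulli(p).
# BridgeoutConv2d and filters_to_keep are hypothetical names, not the
# paper's reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BridgeoutConv2d(nn.Conv2d):
    """Conv2d whose weights are stochastically perturbed during training.

    In expectation the perturbation behaves like an L_q ("bridge")
    penalty of strength (1 - p) / p, so q < 2 encourages sparse weights.
    Setting q = 2 recovers DropConnect (w_tilde = w * m / p).
    """

    def __init__(self, *args, p=0.5, q=1.0, **kwargs):
        super().__init__(*args, **kwargs)
        self.p = p  # Bernoulli keep-probability of the mask
        self.q = q  # norm of the implicit bridge penalty

    def forward(self, x):
        weight = self.weight
        if self.training:
            mask = torch.bernoulli(torch.full_like(weight, self.p))
            noise = mask / self.p - 1.0  # zero mean, variance (1 - p) / p
            weight = weight + weight.sign() * weight.abs().pow(self.q / 2) * noise
        return F.conv2d(x, weight, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)


def filters_to_keep(conv: nn.Conv2d, keep_ratio: float) -> torch.Tensor:
    """Rank output filters by L1 norm and return the indices to keep."""
    norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # one norm per filter
    n_keep = max(1, int(keep_ratio * norms.numel()))
    return norms.topk(n_keep).indices
```

The L1-magnitude ranking in filters_to_keep is a common pruning criterion rather than anything specific to the paper; the paper's claim is that Bridgeout-style training concentrates magnitude in fewer filters, so accuracy degrades more gracefully as the keep ratio shrinks than it does under Dropout or weight decay.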


Related research

11/29/2020 · Layer Pruning via Fusible Residual Convolutional Block for Deep Neural Networks
In order to deploy deep convolutional neural networks (CNNs) on resource...

08/13/2023 · Neural Networks at a Fraction with Pruned Quaternions
Contemporary state-of-the-art neural networks have increasingly large nu...

05/04/2018 · Enhancing the Regularization Effect of Weight Pruning in Artificial Neural Networks
Artificial neural networks (ANNs) may not be worth their computational/m...

09/10/2021 · On the Compression of Neural Networks Using ℓ_0-Norm Regularization and Weight Pruning
Despite the growing availability of high-capacity computational platform...

01/29/2018 · Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks
It is desirable to train convolutional networks (CNNs) to run more effic...

06/11/2019 · Simultaneously Learning Architectures and Features of Deep Neural Networks
This paper presents a novel method which simultaneously learns the numbe...

12/01/2022 · ResNet Structure Simplification with the Convolutional Kernel Redundancy Measure
Deep learning, especially convolutional neural networks, has triggered a...
