Smaller Models, Better Generalization

by Mayank Sharma et al.
Indian Institute of Technology Delhi

Reducing network complexity has become a major research focus in recent years with the advent of mobile technology. Convolutional neural networks that perform vision tasks without excessive memory overhead are the need of the hour. This paper presents a qualitative and quantitative analysis of reducing network complexity using an upper bound on the Vapnik-Chervonenkis dimension, pruning, and quantization. We observe a general trend of improved accuracy as models are quantized. We propose a novel loss function that achieves considerable sparsity at accuracies comparable to those of dense models. We compare various regularizations prevalent in the literature and show that our method yields sparser models that generalize well.
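To make the ingredients of the abstract concrete, here is a minimal NumPy sketch of the three generic operations it mentions: a sparsity-promoting objective (illustrated with a plain L1 weight penalty, which is a stand-in and not the paper's proposed loss), magnitude-based pruning, and uniform weight quantization. All function names and thresholds below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def l1_sparsity_loss(data_loss, weights, lam=1e-3):
    # Generic sparsity-promoting objective: task loss plus an L1 penalty
    # on all weight tensors. (Stand-in illustration; the paper's actual
    # loss function is not reproduced here.)
    return data_loss + lam * sum(np.abs(w).sum() for w in weights)

def magnitude_prune(weights, threshold=0.05):
    # Zero out every weight whose magnitude falls below `threshold`.
    return [np.where(np.abs(w) < threshold, 0.0, w) for w in weights]

def uniform_quantize(w, bits=8):
    # Symmetric uniform quantization: map weights onto a signed integer
    # grid and back, so the result stays in floating point but takes at
    # most 2**bits distinct values.
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def sparsity(weights):
    # Fraction of exactly-zero parameters across all tensors.
    total = sum(w.size for w in weights)
    zeros = sum(int((w == 0).sum()) for w in weights)
    return zeros / total

# Toy two-layer weight set, standing in for a trained network.
rng = np.random.default_rng(0)
w = [rng.normal(0, 0.1, size=(64, 32)), rng.normal(0, 0.1, size=(32, 10))]
pruned = magnitude_prune(w, threshold=0.05)
print(f"sparsity after pruning: {sparsity(pruned):.2f}")
```

In practice the penalty term is added to the training loss so that small weights are driven toward zero during optimization, after which magnitude pruning removes them and quantization compresses the survivors.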


