Approximating Continuous Convolutions for Deep Network Compression

10/17/2022
by   Theo W. Costain, et al.

We present ApproxConv, a novel method for compressing the layers of a convolutional neural network. Reframing conventional discrete convolution as continuous convolution of parametrised functions over space, we use functional approximations to capture the essential structures of CNN filters with fewer parameters than conventional operations. Our method is able to reduce the size of trained CNN layers, requiring only a small amount of fine-tuning. We show that our method is able to compress existing deep network models by half whilst losing only 1.86% accuracy, and that it is compatible with other compression methods such as quantisation, allowing for further reductions in model size.
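To make the core idea concrete, below is a minimal PyTorch sketch of one way a functional filter parametrisation could look. This is not the authors' ApproxConv parametrisation: the Gaussian-mixture basis, the class name GaussianMixtureConv2d, and all hyperparameters are illustrative assumptions. Each filter is stored as a small continuous function over spatial coordinates and sampled on the k x k grid to recover a discrete kernel; in practice the compact parameters would be fitted to the trained kernels and then fine-tuned, as the abstract describes.

```python
# A minimal sketch of the idea, NOT the paper's exact method: each filter is a
# continuous function over spatial coordinates (here, a small mixture of
# isotropic 2D Gaussians), and the discrete kernel is recovered by sampling
# that function on the k x k grid. All names and choices are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GaussianMixtureConv2d(nn.Module):
    """Conv layer whose k x k kernels are sampled from continuous functions.

    Each (out_channel, in_channel) filter is a weighted sum of `n_components`
    isotropic 2D Gaussians, so it stores 4 * n_components parameters
    (weight, mean_x, mean_y, log_sigma) instead of k * k.
    """

    def __init__(self, in_ch, out_ch, kernel_size=5, n_components=3):
        super().__init__()
        self.k = kernel_size
        shape = (out_ch, in_ch, n_components)
        self.weights = nn.Parameter(torch.randn(shape) * 0.1)
        self.means = nn.Parameter(torch.rand(*shape, 2) * 2 - 1)  # in [-1, 1]
        self.log_sigmas = nn.Parameter(torch.zeros(shape))
        # Fixed grid of kernel sample positions in [-1, 1] x [-1, 1].
        coords = torch.linspace(-1.0, 1.0, kernel_size)
        yy, xx = torch.meshgrid(coords, coords, indexing="ij")
        self.register_buffer("grid", torch.stack([xx, yy], dim=-1))  # (k, k, 2)

    def materialise_kernel(self):
        # Squared distance from every grid point to every Gaussian mean:
        # grid -> (1, 1, 1, k, k, 2), means -> (O, I, C, 1, 1, 2).
        diff = self.grid[None, None, None] - self.means[..., None, None, :]
        d2 = (diff ** 2).sum(-1)                       # (O, I, C, k, k)
        sigma2 = torch.exp(self.log_sigmas) ** 2       # (O, I, C)
        basis = torch.exp(-d2 / (2 * sigma2[..., None, None]))
        # Weighted sum over the C Gaussian components -> (O, I, k, k).
        return (self.weights[..., None, None] * basis).sum(dim=2)

    def forward(self, x):
        return F.conv2d(x, self.materialise_kernel(), padding=self.k // 2)


# Usage: a 5x5 layer stored with 3 Gaussians per filter uses 4 * 3 = 12
# parameters per filter instead of 25, roughly halving the layer's size.
layer = GaussianMixtureConv2d(in_ch=16, out_ch=32, kernel_size=5)
out = layer(torch.randn(1, 16, 28, 28))
print(out.shape)  # torch.Size([1, 32, 28, 28])
```

Because the parametrisation is differentiable, the compact parameters can be fitted to a trained layer's kernels (e.g. by regression) and then fine-tuned end to end, which matches the workflow the abstract outlines.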



Related research

05/21/2021 · Compressing Deep CNNs using Basis Representation and Spectral Fine-tuning
We propose an efficient and straightforward method for compressing deep ...

01/15/2018 · Deep Net Triage: Assessing the Criticality of Network Layers by Structural Compression
Deep network compression seeks to reduce the number of parameters in the...

07/15/2021 · Recurrent Parameter Generators
We present a generic method for recurrently using the same parameters fo...

09/29/2020 · Self-grouping Convolutional Neural Networks
Although group convolution operators are increasingly used in deep convo...

07/25/2018 · Coreset-Based Neural Network Compression
We propose a novel Convolutional Neural Network (CNN) compression algori...

09/22/2019 · Blocking and sparsity for optimization of convolution calculation algorithm on GPUs
Convolution neural network (CNN) plays a paramount role in machine learn...

11/28/2018 · Partial Convolution based Padding
In this paper, we present a simple yet effective padding scheme that can...
