Gabor filter incorporated CNN for compression

10/29/2021
by Akihiro Imamura, et al.

Convolutional neural networks (CNNs) are remarkably successful in many computer vision tasks. However, their high inference cost is problematic for embedded and real-time systems, so there are many studies on compressing the networks. On the other hand, recent advances in self-attention models have shown that convolution filters are preferable to self-attention in the earlier layers, which indicates that stronger inductive biases are better there. As convolutional filters demonstrate, strong biases steer training toward specific filters and drive unnecessary filters toward zero. This is analogous to classical image processing, where choosing suitable filters yields a compact dictionary for representing features. We follow this idea and incorporate Gabor filters into the earlier layers of CNNs for compression. The parameters of the Gabor filters are learned through backpropagation, so the learned features are restricted to Gabor filters. We show that while the first layer of VGG-16 for CIFAR-10 has 192 kernels/features, learning Gabor filters requires an average of only 29.4 kernels. Also, using Gabor filters, an average of 83% of the kernels in the first and second layers can be removed on an altered ResNet-20, where the first five layers are replaced with two layers of larger kernels, for CIFAR-10.
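To make the idea concrete, the sketch below shows one way a convolution layer can be parameterized by Gabor filters whose orientation, wavelength, phase, envelope width, and aspect ratio are learned by backpropagation, instead of learning every kernel weight freely. This is a minimal sketch assuming PyTorch; the class and parameter names (GaborConv2d, theta, lambda_, psi, sigma, gamma) are illustrative and not taken from the authors' implementation.

# Minimal sketch of a Gabor-parameterized convolution (assumes PyTorch >= 1.10).
# Names and initialization choices are illustrative, not the paper's exact code.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class GaborConv2d(nn.Module):
    """Convolution whose kernels are generated from learnable Gabor parameters.

    Instead of kernel_size * kernel_size free weights per kernel, each kernel is
    described by orientation (theta), wavelength (lambda_), phase (psi), envelope
    width (sigma), and aspect ratio (gamma), all trained by backpropagation.
    """

    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super().__init__()
        self.stride, self.padding = stride, padding
        shape = (out_channels, in_channels)
        # One set of Gabor parameters per (out, in) kernel, randomly initialized.
        self.theta = nn.Parameter(torch.rand(shape) * math.pi)              # orientation
        self.lambda_ = nn.Parameter(torch.rand(shape) * kernel_size + 2.0)  # wavelength
        self.psi = nn.Parameter(torch.rand(shape) * 2 * math.pi)            # phase offset
        self.sigma = nn.Parameter(torch.rand(shape) * kernel_size / 2 + 1)  # envelope width
        self.gamma = nn.Parameter(torch.ones(shape))                        # aspect ratio
        # Fixed sampling grid centred on the kernel.
        half = (kernel_size - 1) / 2
        ys, xs = torch.meshgrid(
            torch.linspace(-half, half, kernel_size),
            torch.linspace(-half, half, kernel_size),
            indexing="ij",
        )
        self.register_buffer("xs", xs)
        self.register_buffer("ys", ys)

    def _make_kernels(self):
        # Broadcast the grid against per-kernel parameters: result is (out, in, k, k).
        x, y = self.xs[None, None], self.ys[None, None]
        theta = self.theta[..., None, None]
        lambda_ = self.lambda_[..., None, None]
        psi = self.psi[..., None, None]
        sigma = self.sigma[..., None, None]
        gamma = self.gamma[..., None, None]
        x_rot = x * torch.cos(theta) + y * torch.sin(theta)
        y_rot = -x * torch.sin(theta) + y * torch.cos(theta)
        envelope = torch.exp(-(x_rot ** 2 + (gamma * y_rot) ** 2) / (2 * sigma ** 2))
        carrier = torch.cos(2 * math.pi * x_rot / lambda_ + psi)
        return envelope * carrier

    def forward(self, x):
        weight = self._make_kernels()
        return F.conv2d(x, weight, stride=self.stride, padding=self.padding)


# Usage: swap the first convolution of a CNN for the Gabor-parameterized layer.
if __name__ == "__main__":
    layer = GaborConv2d(3, 64, kernel_size=7, padding=3)
    out = layer(torch.randn(1, 3, 32, 32))
    print(out.shape)  # torch.Size([1, 64, 32, 32])

Because each kernel is determined by only a few parameters, kernels whose learned envelope collapses toward zero can be identified and removed after training, which is roughly the kind of kernel reduction the abstract reports.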

Related research

Studying inductive biases in image classification task (10/31/2022)
Recently, self-attention (SA) structures became popular in computer visi...

LinearConv: Regenerating Redundancy in Convolution Filters as Linear Combinations for Parameter Reduction (07/26/2019)
Convolutional Neural Networks (CNNs) show state-of-the-art performance i...

Towards Flexible Inductive Bias via Progressive Reparameterization Scheduling (10/04/2022)
There are two de facto standard architectures in recent computer vision:...

On the Relationship between Self-Attention and Convolutional Layers (11/08/2019)
Recent trends of incorporating attention mechanisms in vision have led r...

PSConv: Squeezing Feature Pyramid into One Compact Poly-Scale Convolutional Layer (07/13/2020)
Despite their strong modeling capacities, Convolutional Neural Networks ...

Why do CNNs Learn Consistent Representations in their First Layer Independent of Labels and Architecture? (06/06/2022)
It has previously been observed that the filters learned in the first la...

Learning on the Edge: Explicit Boundary Handling in CNNs (05/08/2018)
Convolutional neural networks (CNNs) handle the case where filters exten...
