Separable Convolutional Eigen-Filters (SCEF): Building Efficient CNNs Using Redundancy Analysis

10/21/2019
by Samuel Scheidegger, et al.

The high model complexity of deep learning algorithms enables remarkable learning capacity in many application domains. However, a large number of trainable parameters comes at a high cost. During both training and inference, these parameters consume substantial resources such as CPU/GPU cores, memory, and electric power. In addition, from a statistical learning perspective, the high complexity of a network can result in high variance in its generalization performance. One way to reduce the complexity of a network without sacrificing accuracy is to identify redundancies and remove them. In this work, we propose a method to observe and analyze redundancies in the weights of a 2D convolutional (Conv2D) network. Based on this analysis, we construct a new layer called Separable Convolutional Eigen-Filters (SCEF) as an alternative parameterization of Conv2D layers. An SCEF layer can be easily implemented using depthwise separable convolutions, which are known to be computationally efficient. To verify our hypothesis, experiments are conducted on the CIFAR-10 and ImageNet datasets by replacing Conv2D layers with SCEF layers; the results show increased accuracy while using about 2/3 of the original parameters and reducing the number of FLOPs to 2/3 of the original network. Implementation-wise, our method is highly modular, easy to use, fast to process, and requires no additional dependencies.
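For concreteness, the sketch below illustrates the depthwise separable convolution that the abstract names as the building block for implementing an SCEF layer. It is a minimal PyTorch illustration under stated assumptions, not the authors' exact SCEF parameterization (which additionally derives eigen-filters from the redundancy analysis); the class name DepthwiseSeparableConv2d and the layer sizes are chosen here for illustration only.

# Minimal sketch, assuming PyTorch: a depthwise separable replacement for a
# standard Conv2D layer. Not the authors' SCEF parameterization itself, only
# the separable-convolution building block the abstract refers to.
import torch
import torch.nn as nn

class DepthwiseSeparableConv2d(nn.Module):
    """Depthwise convolution (one spatial filter per input channel)
    followed by a 1x1 pointwise convolution that mixes channels."""
    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        # groups=in_channels makes each filter act on a single input channel
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size,
                                   padding=padding, groups=in_channels, bias=False)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Usage: a drop-in replacement for nn.Conv2d(64, 128, 3, padding=1).
# Parameter count drops from 64*128*9 = 73,728 to 64*9 + 64*128 = 8,768 weights,
# which is the kind of saving that motivates separable parameterizations.
layer = DepthwiseSeparableConv2d(64, 128)
x = torch.randn(1, 64, 32, 32)
print(layer(x).shape)  # torch.Size([1, 128, 32, 32])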


