Separable Convolutional Eigen-Filters (SCEF): Building Efficient CNNs Using Redundancy Analysis

10/21/2019
by   Samuel Scheidegger, et al.

The high model complexity of deep learning algorithms enables remarkable learning capacity in many application domains. However, a large number of trainable parameters comes with a high cost. During both the training and inference phases, the numerous trainable parameters consume a large amount of resources, such as CPU/GPU cores, memory and electric power. In addition, from a theoretical statistical learning perspective, the high complexity of the network can result in a high variance in its generalization performance. One way to reduce the complexity of a network without sacrificing its accuracy is to define and identify redundancies in order to remove them. In this work, we propose a method to observe and analyze redundancies in the weights of a 2D convolutional (Conv2D) network. Based on the proposed analysis, we construct a new layer called Separable Convolutional Eigen-Filters (SCEF) as an alternative parameterization to Conv2D layers. An SCEF layer can be easily implemented using depthwise separable convolutions, which are known to be computationally efficient. To verify our hypothesis, experiments are conducted on the CIFAR-10 and ImageNet datasets by replacing the Conv2D layers with SCEF. The results show increased accuracy while using about 2/3 of the original parameters and reducing the number of FLOPs to 2/3 of the original network's. Implementation-wise, our method is highly modular, easy to use, fast to process and does not require any additional dependencies.
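The abstract notes that an SCEF layer can be implemented with depthwise separable convolutions. As a minimal sketch of why that factorization saves parameters (this is the standard counting argument for depthwise separable convolutions, not the authors' specific SCEF construction; the channel and kernel sizes below are illustrative assumptions):

```python
def conv2d_params(c_in, c_out, k):
    # Standard Conv2D: one k x k filter per (input channel, output channel) pair.
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    # Depthwise step: one k x k filter per input channel.
    # Pointwise step: a 1 x 1 convolution that mixes channels.
    return c_in * k * k + c_in * c_out

# Example: a 3x3 layer with 128 input and 128 output channels (biases omitted).
std = conv2d_params(128, 128, 3)     # -> 147456
sep = separable_params(128, 128, 3)  # -> 17536
print(std, sep, round(sep / std, 3))
```

The same ratio applies to the multiply-accumulate count per spatial position, which is why the factorized form also reduces FLOPs, not just parameter storage.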
