Compressing CNN Kernels for Videos Using Tucker Decompositions: Towards Lightweight CNN Applications

Convolutional Neural Networks (CNNs) are the state of the art in visual computing. A major drawback of CNNs, however, is the large number of floating point operations (FLOPs) required to perform convolutions on large inputs. For video data, convolutional filters become even more expensive due to the additional temporal dimension. This poses a problem when such applications are to be deployed on mobile devices, such as smartphones, tablets, or micro-controllers, which offer only limited computational power. Kim et al. (2016) proposed compressing the convolutional kernels of a pre-trained image network with a Tucker decomposition in order to reduce the complexity of the network, i.e., its number of FLOPs. In this paper, we generalize this method to videos (and other 3D signals) and evaluate it on a modified version of the THETIS data set, which contains videos of individuals performing tennis shots. We show that the compressed network reaches comparable accuracy while achieving a memory compression factor of 51. However, the measured computational speed-up (a factor of 1.4) falls short of our theoretically derived expectation (a factor of 6).
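To illustrate the underlying idea, below is a minimal NumPy sketch of a Tucker-2 (channel-mode) compression of a 3D-convolution kernel, in the spirit of Kim et al. (2016). The function names, the HOSVD-style rank truncation, and the kernel layout (C_out, C_in, kT, kH, kW) are illustrative assumptions on our part, not the paper's actual implementation.

```python
import numpy as np

def mode_unfold(tensor, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def truncated_factor(tensor, mode, rank):
    """Leading `rank` left singular vectors of the mode-n unfolding (HOSVD-style)."""
    u, _, _ = np.linalg.svd(mode_unfold(tensor, mode), full_matrices=False)
    return u[:, :rank]

def tucker2_compress_3d_kernel(kernel, rank_out, rank_in):
    """
    Tucker-2 compression of a 3D-convolution kernel of shape
    (C_out, C_in, kT, kH, kW): only the two channel modes are factorised,
    the spatio-temporal modes are kept intact.
    Returns (core, U_out, U_in) with kernel ~= core x_0 U_out x_1 U_in.
    """
    U_out = truncated_factor(kernel, 0, rank_out)   # (C_out, R_out)
    U_in = truncated_factor(kernel, 1, rank_in)     # (C_in,  R_in)
    # Project the kernel onto the factor subspaces to obtain the core tensor.
    core = np.tensordot(U_out.T, kernel, axes=(1, 0))                   # (R_out, C_in, kT, kH, kW)
    core = np.moveaxis(np.tensordot(U_in.T, core, axes=(1, 1)), 0, 1)   # (R_out, R_in, kT, kH, kW)
    return core, U_out, U_in

# The compressed layer can then be executed as three cheaper convolutions:
#   1x1x1 convolution with U_in.T   (C_in  -> R_in)
#   kT x kH x kW convolution with core (R_in -> R_out)
#   1x1x1 convolution with U_out    (R_out -> C_out)
```

With ranks well below the channel counts, the three-stage layer needs far fewer parameters and FLOPs than the original convolution, which is the source of the compression discussed above.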
