New pointwise convolution in Deep Neural Networks through Extremely Fast and Non Parametric Transforms

06/25/2019
by Joonhyun Jeong, et al.

Some conventional transforms such as the Discrete Walsh-Hadamard Transform (DWHT) and the Discrete Cosine Transform (DCT) have been widely used as feature extractors in image processing but are rarely applied in neural networks. However, we find that these conventional transforms can capture cross-channel correlations without any learnable parameters in DNNs. This paper is the first to propose applying conventional transforms to pointwise convolution, showing that such transforms significantly reduce the computational complexity of neural networks without degrading accuracy. DWHT in particular requires no floating-point multiplications, only additions and subtractions, which considerably reduces computational overhead. In addition, its fast algorithm further reduces the complexity of the floating-point additions from O(n^2) to O(n log n). These properties yield networks that are extremely efficient in the number of parameters and operations while still enjoying an accuracy gain. Our proposed DWHT-based model achieves a 1.49% accuracy increase with 79.1% fewer parameters and 48.4% fewer FLOPs compared with its baseline model (MobileNet-V1) on the CIFAR-100 dataset.
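To illustrate the idea, below is a minimal sketch (not the authors' implementation) of how a fast Walsh-Hadamard transform can act as a parameter-free pointwise (cross-channel) mixing step. The function name, tensor shapes, and usage are illustrative assumptions; the point is that the butterfly structure uses only additions and subtractions and costs O(C log C) per spatial position instead of the O(C^2) of a dense learned 1x1 convolution.

```python
import numpy as np

def fwht_channels(x):
    """Fast Walsh-Hadamard transform along the channel axis.

    x: feature map of shape (C, H, W) with C a power of two.
    Uses only additions and subtractions; O(C log C) per spatial
    position, versus O(C^2) for a dense learned pointwise convolution.
    """
    c = x.shape[0]
    assert c & (c - 1) == 0, "channel count must be a power of two"
    out = x.copy()
    h = 1
    while h < c:
        # Butterfly stage: pairwise sums and differences of channel blocks.
        for i in range(0, c, 2 * h):
            a = out[i:i + h].copy()
            b = out[i + h:i + 2 * h]
            out[i:i + h] = a + b
            out[i + h:i + 2 * h] = a - b
        h *= 2
    return out

# Hypothetical usage: replace a learnable 1x1 convolution with the
# parameter-free transform, mixing information across channels.
feat = np.random.randn(64, 8, 8).astype(np.float32)  # (C, H, W)
mixed = fwht_channels(feat)                           # no weights involved
```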
