Fast Walsh-Hadamard Transform and Smooth-Thresholding Based Binary Layers in Deep Neural Networks

04/14/2021
by   Hongyi Pan, et al.

In this paper, we propose a novel layer based on the fast Walsh-Hadamard transform (WHT) and smooth-thresholding to replace 1×1 convolution layers in deep neural networks. In the WHT domain, we denoise the transform-domain coefficients using a new smooth-thresholding non-linearity, a smoothed version of the well-known soft-thresholding operator. We also introduce a family of multiplication-free operators built from the basic 2×2 Hadamard transform to implement 3×3 depthwise separable convolution layers. Using these two types of layers, we replace the bottleneck layers in MobileNet-V2 to reduce the network's number of parameters with a slight loss in accuracy. For example, by replacing the final third of the bottleneck layers, we reduce the number of parameters from 2.270M to 947K, while the accuracy on the CIFAR-10 dataset drops from 95.21% to 92.88%. Our approach also significantly improves the speed of data processing: the fast Walsh-Hadamard transform has a computational complexity of O(m log_2 m), making it computationally more efficient than the 1×1 convolution layer. The fast Walsh-Hadamard layer processes a tensor in ℝ^(10×32×32×1024) about 2 times faster than a 1×1 convolution layer on an NVIDIA Jetson Nano board.
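To make the two ingredients of the abstract concrete, here is a minimal NumPy sketch of the O(m log_2 m) fast Walsh-Hadamard butterfly together with soft-thresholding and a smooth-thresholding variant. The exact functional form of the paper's smooth-thresholding is an assumption here (sign(x) in the soft-thresholding operator replaced by tanh(x), as suggested by "a smoothed version of the well-known soft-thresholding operator"); the threshold name `T` is illustrative.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform via butterflies, O(m log2 m).

    Input length m must be a power of two. Uses only additions and
    subtractions (multiplication-free), which is the efficiency argument
    made in the abstract.
    """
    x = np.asarray(x, dtype=float).copy()
    m = x.shape[0]
    h = 1
    while h < m:
        for i in range(0, m, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b  # 2x2 Hadamard butterfly
        h *= 2
    return x

def soft_threshold(x, T):
    """Classic soft-thresholding: sign(x) * max(|x| - T, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - T, 0.0)

def smooth_threshold(x, T):
    """Assumed smooth variant: sign(x) replaced by tanh(x),
    giving an operator that is differentiable everywhere."""
    return np.tanh(x) * np.maximum(np.abs(x) - T, 0.0)
```

Since the Hadamard matrix satisfies H_m H_m = m I, applying `fwht` twice returns the input scaled by m, which is a convenient sanity check; in a trainable layer the denoising step would be applied between a forward and an inverse transform.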

Related research
01/07/2022

Block Walsh-Hadamard Transform Based Binary Layers in Deep Neural Networks

Convolution has been the core operation of modern deep neural networks. ...
03/06/2016

Fast calculation of correlations in recognition systems

Computationally efficient classification system architecture is proposed...
06/25/2019

New pointwise convolution in Deep Neural Networks through Extremely Fast and Non Parametric Transforms

Some conventional transforms such as Discrete Walsh-Hadamard Transform (...
09/11/2018

Parallel Separable 3D Convolution for Video and Volumetric Data Understanding

For video and volumetric data understanding, 3D convolution layers are w...
10/29/2021

FC2T2: The Fast Continuous Convolutional Taylor Transform with Applications in Vision and Graphics

Series expansions have been a cornerstone of applied mathematics and eng...
09/03/2019

PSDNet and DPDNet: Efficient channel expansion, Depthwise-Pointwise-Depthwise Inverted Bottleneck Block

In many real-time applications, the deployment of deep neural networks i...
04/14/2023

Convex Dual Theory Analysis of Two-Layer Convolutional Neural Networks with Soft-Thresholding

Soft-thresholding has been widely used in neural networks. Its basic net...
