Frequency Regularization: Restricting Information Redundancy of Convolutional Neural Networks

04/17/2023
by Chenqiu Zhao, et al.

Convolutional neural networks have demonstrated impressive results in many computer vision tasks. However, the increasing size of these networks raises concerns about the information redundancy carried by their large number of parameters. In this paper, we propose Frequency Regularization, which restricts the non-zero elements of the network parameters in the frequency domain. The proposed approach operates at the tensor level and can be applied to almost all network architectures. Specifically, the tensors of parameters are maintained in the frequency domain, where high-frequency components can be eliminated by setting tensor elements to zero in zigzag order. The inverse discrete cosine transform (IDCT) is then used to reconstruct the spatial tensors for the matrix operations performed during network training. Since the high-frequency components of images are known to be less critical, a large proportion of these parameters can be set to zero when networks are trained with the proposed frequency regularization. Comprehensive evaluations on various state-of-the-art network architectures, including LeNet, AlexNet, VGG, ResNet, ViT, UNet, GAN, and VAE, demonstrate the effectiveness of the proposed frequency regularization. With an accuracy decrease of less than 2%, a LeNet5 with 0.4M parameters can be represented by only 776 float16 numbers (a reduction of over 1100×), and a UNet with 34M parameters can be represented by only 759 float16 numbers (over 80000×).
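The mechanism the abstract describes lends itself to a short sketch. Below is a minimal, hypothetical PyTorch illustration, not the paper's actual implementation: a linear layer stores its weight as trainable DCT coefficients, zeroes everything past the first few anti-diagonals of a zigzag scan, and reconstructs the spatial weight with a 2-D inverse DCT before the usual matrix multiply. The names FreqRegLinear, zigzag_mask, dct_matrix, and the keep parameter are illustrative assumptions, not from the paper.

```python
import math
import torch
import torch.nn as nn

def dct_matrix(n: int) -> torch.Tensor:
    # Orthonormal DCT-II basis (row k, column i). Because the matrix is
    # orthonormal, its transpose is the inverse transform (the IDCT).
    k = torch.arange(n, dtype=torch.float32).unsqueeze(1)
    i = torch.arange(n, dtype=torch.float32).unsqueeze(0)
    basis = math.sqrt(2.0 / n) * torch.cos(math.pi * (2 * i + 1) * k / (2 * n))
    basis[0, :] = 1.0 / math.sqrt(n)
    return basis

def zigzag_mask(rows: int, cols: int, keep: int) -> torch.Tensor:
    # Keep only the first `keep` anti-diagonals, i.e. the low-frequency
    # corner that a zigzag scan visits first; the rest is forced to zero.
    r = torch.arange(rows).unsqueeze(1)
    c = torch.arange(cols).unsqueeze(0)
    return (r + c < keep).float()

class FreqRegLinear(nn.Module):
    """Hypothetical linear layer whose weight is parameterized in the DCT domain."""

    def __init__(self, in_features: int, out_features: int, keep: int):
        super().__init__()
        # Trainable frequency-domain coefficients of the weight matrix.
        self.freq_weight = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.register_buffer("mask", zigzag_mask(out_features, in_features, keep))
        self.register_buffer("dct_rows", dct_matrix(out_features))
        self.register_buffer("dct_cols", dct_matrix(in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Zero the high-frequency coefficients, then apply a 2-D IDCT
        # (X = C_r^T F C_c) to recover the spatial weight for the matmul.
        masked = self.freq_weight * self.mask
        weight = self.dct_rows.t() @ masked @ self.dct_cols
        return x @ weight.t() + self.bias

layer = FreqRegLinear(in_features=64, out_features=32, keep=8)
out = layer(torch.randn(4, 64))
# Only 36 of the 2048 weight coefficients can be non-zero here.
print(out.shape, int(layer.mask.sum()))
```

After training, only the masked-in coefficients need to be stored (36 values instead of 2048 in this toy configuration), which is where compression ratios of the kind reported above would come from.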

Related research

Improving Vision Transformers by Revisiting High-frequency Components (04/03/2022)
The transformer models have shown promising effectiveness in dealing wit...

Compressing Convolutional Neural Networks (06/14/2015)
Convolutional neural networks (CNN) are increasingly used in many areas ...

Rethinking FUN: Frequency-Domain Utilization Networks (12/06/2020)
The search for efficient neural network architectures has gained much fo...

Group-Equivariant Neural Networks with Fusion Diagrams (11/14/2022)
Many learning tasks in physics and chemistry involve global spatial symm...

Learning in the Frequency Domain (02/27/2020)
Deep neural networks have achieved remarkable success in computer vision...

DCT-Conv: Coding filters in convolutional networks with Discrete Cosine Transform (01/23/2020)
Convolutional neural networks are based on a huge number of trained weig...

Enabling Lightweight Fine-tuning for Pre-trained Language Model Compression based on Matrix Product Operators (06/04/2021)
This paper presents a novel pre-trained language models (PLM) compressio...
