IGCV2: Interleaved Structured Sparse Convolutional Neural Networks

04/17/2018
by   Guotian Xie, et al.

In this paper, we study the problem of designing efficient convolutional neural network architectures with the goal of eliminating redundancy in convolution kernels. In addition to structured sparse kernels, low-rank kernels, and the product of low-rank kernels, the product of structured sparse kernels, a framework for interpreting the recently developed interleaved group convolutions (IGC) and their variants (e.g., Xception), has been attracting increasing interest. Motivated by the observation that the convolutions contained in a group convolution in IGC can be further decomposed in the same manner, we present a modularized building block, IGCV2: interleaved structured sparse convolutions. It generalizes interleaved group convolutions, which are composed of two structured sparse kernels, to the product of more structured sparse kernels, further eliminating redundancy. We present the complementary condition and the balance condition to guide the design of structured sparse kernels, striking a balance among three aspects: model size, computational complexity, and classification accuracy. Experimental results demonstrate a better balance among these three aspects than interleaved group convolutions and Xception, and competitive performance compared with other state-of-the-art architecture design methods.

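The abstract describes the IGCV2 block only at a high level. The PyTorch sketch below shows one plausible way such a block could be assembled, assuming (these details are not stated in the abstract) that each structured sparse kernel is realized as a grouped convolution and that channel permutations between the grouped convolutions provide the interleaving. The class and function names, group counts, and number of pointwise layers are illustrative, not taken from the paper; in the paper itself, the complementary and balance conditions constrain how the group partitions of successive kernels are chosen.

```python
# Minimal, illustrative sketch of an IGCV2-style block (not the authors' code).
# Assumptions: structured sparse kernels = grouped convolutions; interleaving =
# a fixed block-transpose channel permutation between grouped convolutions.
import torch
import torch.nn as nn


def channel_permutation(channels, groups):
    """Indices that interleave channels across groups (block transpose)."""
    return torch.arange(channels).reshape(groups, channels // groups).t().reshape(-1)


class IGCV2Block(nn.Module):
    def __init__(self, channels, spatial_groups=4, point_groups=4, num_point_layers=2):
        super().__init__()
        # Spatial structured sparse kernel: grouped 3x3 convolution.
        self.spatial = nn.Conv2d(channels, channels, kernel_size=3, padding=1,
                                 groups=spatial_groups, bias=False)
        # Product of pointwise structured sparse kernels: grouped 1x1 convolutions,
        # each preceded by a permutation so that channels mix across groups
        # (in the spirit of the paper's complementary condition).
        self.point_layers = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=1,
                      groups=point_groups, bias=False)
            for _ in range(num_point_layers)
        )
        self.register_buffer("perm", channel_permutation(channels, point_groups))
        self.bn = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.spatial(x)
        for conv in self.point_layers:
            out = out[:, self.perm]   # interleave channels across groups
            out = conv(out)
        return self.relu(self.bn(out))


if __name__ == "__main__":
    block = IGCV2Block(channels=64)
    y = block(torch.randn(1, 64, 32, 32))
    print(y.shape)  # torch.Size([1, 64, 32, 32])
```

Because every convolution is grouped, the block's parameter count and FLOPs are roughly a factor of the group size smaller than those of a dense block with the same channel width, which is the trade-off among model size, computational complexity, and accuracy that the abstract refers to.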
