IGCV3: Interleaved Low-Rank Group Convolutions for Efficient Deep Neural Networks

06/01/2018
by Ke Sun, et al.

In this paper, we are interested in building lightweight and efficient convolutional neural networks. Inspired by the success of two design patterns, composition of structured-sparse kernels (e.g., interleaved group convolutions, IGC) and composition of low-rank kernels (e.g., bottleneck modules), we study their combination, using a composition of structured-sparse low-rank kernels to form a convolutional kernel. Rather than imposing a complementary condition directly over channels, we introduce a loose complementary condition, formulated by imposing the complementary condition over super-channels, to guide the design toward generating a dense convolutional kernel. The resulting network is called IGCV3. We empirically demonstrate that combining low-rank and sparse kernels boosts performance, and that our approach outperforms the state-of-the-art IGCV2 and MobileNetV2 on image classification (CIFAR and ImageNet) and object detection (COCO).
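The core interleaving idea can be sketched numerically: two block-diagonal matrices (each acting like a structured-sparse 1x1 group convolution), composed through a channel permutation that satisfies the complementary condition, multiply out to a fully dense kernel. A minimal NumPy sketch, with function names that are illustrative rather than taken from the paper's code:

```python
import numpy as np

def group_conv_kernel(channels, groups, rng):
    """Random block-diagonal matrix: a structured-sparse 1x1 group convolution."""
    size = channels // groups
    W = np.zeros((channels, channels))
    for g in range(groups):
        s = g * size
        W[s:s + size, s:s + size] = rng.standard_normal((size, size))
    return W

def interleave(channels, groups):
    """Permutation matrix that interleaves channels across groups (channel shuffle)."""
    idx = np.arange(channels).reshape(groups, -1).T.reshape(-1)
    return np.eye(channels)[idx]

rng = np.random.default_rng(0)
C, G1, G2 = 8, 2, 4           # G1 * G2 == C, so the complementary condition holds
W1 = group_conv_kernel(C, G1, rng)
W2 = group_conv_kernel(C, G2, rng)
P = interleave(C, G1)         # shuffle channels after the first group convolution
composed = W2 @ P @ W1        # composition of two sparse kernels
print(np.count_nonzero(W1), np.count_nonzero(composed))  # 32 64: sparse factors, dense product
```

Each factor stores only C*C/G nonzero weights, yet their composition touches every input-output channel pair, which is exactly why IGC-style blocks can match dense convolutions at a fraction of the parameter cost. IGCV3 extends this by making the group convolutions low-rank (expanding and projecting the channel dimension, as in bottleneck modules), which this square-matrix sketch omits for simplicity.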

Related research:

- IGCV2: Interleaved Structured Sparse Convolutional Neural Networks (04/17/2018): In this paper, we study the problem of designing efficient convolutional...
- Efficient Fusion of Sparse and Complementary Convolutions for Object Recognition and Detection (08/07/2018): We propose a new method for exploiting sparsity in convolutional kernels...
- Group Scissor: Scaling Neuromorphic Computing Design to Large Neural Networks (02/11/2017): Synapse crossbar is an elementary structure in Neuromorphic Computing Sy...
- Kernel optimization for Low-Rank Multi-Fidelity Algorithms (01/05/2021): One of the major challenges for low-rank multi-fidelity (MF) approaches ...
- Sparse Least Squares Low Rank Kernel Machines (01/29/2019): A general framework of least squares support vector machine with low ran...
- Low-Rank Winograd Transformation for 3D Convolutional Neural Networks (01/26/2023): This paper focuses on Winograd transformation in 3D convolutional neural...
- SKI to go Faster: Accelerating Toeplitz Neural Networks via Asymmetric Kernels (05/15/2023): Toeplitz Neural Networks (TNNs) (Qin et. al. 2023) are a recent sequence...
