Pre-defined Sparsity for Low-Complexity Convolutional Neural Networks

01/29/2020
by Souvik Kundu, et al.

The high energy cost of processing deep convolutional neural networks impedes their ubiquitous deployment in energy-constrained platforms such as embedded systems and IoT devices. This work introduces convolutional layers with pre-defined sparse 2D kernels whose support sets repeat periodically within and across filters. Because these periodic sparse kernels can be stored efficiently, the parameter savings translate into considerable improvements in energy efficiency through reduced DRAM accesses, promising significant improvements in the trade-off between energy consumption and accuracy for both training and inference. To evaluate this approach, we performed experiments with two widely used datasets, CIFAR-10 and Tiny ImageNet, on sparse variants of the ResNet18 and VGG16 architectures. Compared to baseline models, our proposed sparse variants require up to 82% fewer model parameters and 5.6x fewer FLOPs with negligible loss in accuracy for ResNet18 on CIFAR-10. For VGG16 trained on Tiny ImageNet, our approach requires 5.8x fewer FLOPs and up to 83.3% fewer model parameters with an accuracy drop of only 1.2%. We also compare the performance of our proposed architectures with that of ShuffleNet and MobileNetV2. Using similar hyperparameters and FLOPs, our ResNet18 variants yield an average accuracy improvement of 2.8%.
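The abstract does not spell out how the periodic sparse kernels are built, so the following PyTorch sketch is only one plausible reading: a fixed binary support mask is chosen per 2D kernel before training, only `period` distinct masks exist, and they tile periodically across input channels and filters. The class name `PeriodicSparseConv2d` and the `period`/`density` parameters are illustrative assumptions, not the paper's exact scheme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PeriodicSparseConv2d(nn.Module):
    """Conv2d with a pre-defined sparse support set per 2D kernel.

    Hypothetical sketch: the random choice of support positions and the
    (f + c) % period tiling rule are assumptions for illustration.
    """

    def __init__(self, in_ch, out_ch, kernel_size=3, period=4,
                 density=0.5, stride=1, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              stride=stride, padding=padding)
        # Only `period` distinct kernel masks exist; the pattern repeats
        # periodically over channels and filters, so the sparsity layout
        # costs O(period * k^2) to store, not O(out_ch * in_ch * k^2).
        k2 = kernel_size ** 2
        n_active = max(1, int(density * k2))
        base = torch.zeros(period, k2)
        for p in range(period):
            # Support set is fixed once, before training (pre-defined sparsity).
            base[p, torch.randperm(k2)[:n_active]] = 1.0
        base = base.view(period, kernel_size, kernel_size)
        # Tile the base masks so the support repeats within and across filters.
        mask = torch.stack([
            torch.stack([base[(f + c) % period] for c in range(in_ch)])
            for f in range(out_ch)
        ])  # shape: (out_ch, in_ch, k, k)
        self.register_buffer("mask", mask)

    def forward(self, x):
        # Mask the weights on every call so off-support positions stay zero
        # during both training and inference.
        return F.conv2d(x, self.conv.weight * self.mask, self.conv.bias,
                        stride=self.conv.stride, padding=self.conv.padding)


# Usage: drop-in replacement for a dense 3x3 convolution.
layer = PeriodicSparseConv2d(64, 64, period=4, density=0.5)
y = layer(torch.randn(1, 64, 32, 32))
```

Masking at every forward pass (rather than pruning after training) matches the paper's claim that the savings apply to training as well as inference; a dedicated sparse kernel implementation would realize the FLOP savings directly.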


research
10/02/2019

A Pre-defined Sparse Kernel Based Convolution for Deep CNNs

The high demand for computational and storage resources severely impedes ...
research
07/20/2020

Learning Sparse Filters in Deep Convolutional Neural Networks with a l1/l2 Pseudo-Norm

While deep neural networks (DNNs) have proven to be efficient for numero...
research
06/29/2020

EmotionNet Nano: An Efficient Deep Convolutional Neural Network Design for Real-time Facial Expression Recognition

While recent advances in deep learning have led to significant improveme...
research
04/16/2019

Processing-In-Memory Acceleration of Convolutional Neural Networks for Energy-Efficiency, and Power-Intermittency Resilience

Herein, a bit-wise Convolutional Neural Network (CNN) in-memory accelera...
research
03/23/2018

SqueezeNext: Hardware-Aware Neural Network Design

One of the main barriers for deploying neural networks on embedded syste...
research
09/11/2020

SoFAr: Shortcut-based Fractal Architectures for Binary Convolutional Neural Networks

Binary Convolutional Neural Networks (BCNNs) can significantly improve t...
