Crossbar-aware neural network pruning

07/25/2018
by Ling Liang, et al.

Crossbar-architecture-based devices have been widely adopted in neural network accelerators owing to their high efficiency on vector-matrix multiplication (VMM) operations. However, in the case of convolutional neural networks (CNNs), this efficiency is compromised dramatically by the large amount of data reuse. Although some mapping methods have been designed to balance execution throughput against resource overhead, the resource cost of maintaining throughput remains huge. Network pruning is a promising and widely studied technique for shrinking model size. However, previous work did not consider the crossbar architecture and the corresponding mapping method, so the resulting sparse models cannot be directly exploited by crossbar-based neural network accelerators. Tightly combining the crossbar structure and its mapping, this paper proposes a crossbar-aware pruning framework based on a formulated L0-norm constrained optimization problem. Specifically, we design an L0-norm constrained gradient descent (LGD) with relaxant probabilistic projection (RPP) to solve this problem. Two grains of sparsity are achieved: i) intuitive crossbar-grain sparsity and ii) column-grain sparsity with output recombination, based on which we further propose an input feature map (FM) reorder method to improve model accuracy. We evaluate our crossbar-aware pruning framework on the medium-scale CIFAR10 dataset and the large-scale ImageNet dataset with VGG and ResNet models. Our method is able to reduce the crossbar overhead by 44%-72% with little accuracy degradation. This work greatly reduces resource and energy costs, providing a new co-design solution for mapping CNNs onto various crossbar devices with significantly higher efficiency.
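To make the crossbar-grain sparsity idea concrete, below is a minimal NumPy sketch of the hard L0 projection step over crossbar-sized weight blocks. This is an illustrative assumption, not the authors' LGD/RPP implementation: the function name, the crossbar dimensions, and the L2-norm block-importance measure are hypothetical choices for demonstration.

```python
import numpy as np

def crossbar_grain_prune(weight, xbar_rows=128, xbar_cols=128, keep_ratio=0.5):
    """Sketch of crossbar-grain structured pruning (hypothetical helper).

    `weight` is a 2D matrix (out_features x in_features) as it would be
    tiled onto crossbars of size xbar_rows x xbar_cols. We keep the k
    most important crossbar blocks (a hard L0 projection) and zero the
    rest, so entire crossbars can be removed from the mapping.
    """
    out_dim, in_dim = weight.shape
    # Pad so the matrix tiles exactly into crossbar-sized blocks.
    pad_c = (-out_dim) % xbar_cols
    pad_r = (-in_dim) % xbar_rows
    w = np.pad(weight, ((0, pad_c), (0, pad_r)))
    n_blk_c = w.shape[0] // xbar_cols  # output-side block count
    n_blk_r = w.shape[1] // xbar_rows  # input-side block count
    # Block importance = L2 norm of each crossbar-sized tile.
    blocks = w.reshape(n_blk_c, xbar_cols, n_blk_r, xbar_rows)
    importance = np.sqrt((blocks ** 2).sum(axis=(1, 3)))  # (n_blk_c, n_blk_r)
    # L0 projection: retain only the k most important blocks.
    k = max(1, int(keep_ratio * importance.size))
    thresh = np.sort(importance, axis=None)[-k]
    mask = (importance >= thresh).astype(w.dtype)
    # Broadcast the block mask back to full weight resolution.
    full_mask = np.repeat(np.repeat(mask, xbar_cols, axis=0),
                          xbar_rows, axis=1)
    return (w * full_mask)[:out_dim, :in_dim]
```

In the paper's framework, a projection of this kind would be interleaved with gradient descent updates (and relaxed probabilistically in RPP rather than applied as a hard top-k cut); the sketch only shows why pruning at crossbar granularity lets whole crossbars, rather than scattered weights, be eliminated.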


