Sense: Model Hardware Co-design for Accelerating Sparse Neural Networks

02/01/2022
by Wenhao Sun, et al.

Sparsity is an intrinsic property of neural networks (NNs). Software researchers have exploited sparsity through pruning to reduce weight storage and computation workload, while hardware architects work on skipping redundant computations for higher energy efficiency; however, the overhead of handling irregular sparsity leaves many architectures with only minor gains. The systolic array is a promising candidate thanks to its low fanout and high throughput, but sparsity is irregular and hard to fit into the rigid systolic tempo. This paper therefore proposes a systolic-array-based architecture, called Sense, that processes both sparse input feature maps (IFMs) and sparse weights, achieving large performance improvements with relatively small resource and power consumption. We apply channel rearrangement to gather IFMs with similar sparsity and co-design an adaptive weight training method that keeps the sparsity ratio (percentage of zero elements) of each kernel at 1/2, with little accuracy loss. This treatment effectively reduces the irregularity of sparsity and fits better with the systolic dataflow. Additionally, a dataflow called Partition Reuse is mapped onto the architecture, enhancing data reuse and reducing DRAM accesses by 1.9x-2.6x compared with Eyeriss, further lowering system energy consumption. The whole design is implemented on a Zynq ZCU102 and reaches a peak throughput of 409.6 GOP/s with a power consumption of 11.2 W; compared with previous FPGA-based sparse NN accelerators, Sense uses 1/5 fewer LUTs and 3/4 fewer BRAMs, achieves 2.1x higher peak energy efficiency, and delivers a 1.15x-1.49x speedup.
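
For intuition, the two sparsity-regularization ideas named in the abstract (channel rearrangement and the 1/2-per-kernel sparsity target) can be illustrated with a minimal NumPy sketch. The function names, the magnitude-based pruning criterion, and the sorting heuristic below are assumptions chosen for illustration, not the paper's actual adaptive training method or hardware mapping.

import numpy as np

def rearrange_channels_by_sparsity(ifm):
    # ifm: (C, H, W) activation tensor. Sort channels so that channels with
    # similar sparsity (fraction of zeros) become adjacent, which is the kind
    # of regularity a systolic schedule can exploit.
    sparsity = np.array([(c == 0).mean() for c in ifm])
    order = np.argsort(sparsity)
    return ifm[order], order

def prune_kernels_to_half_sparsity(weights):
    # weights: (K, C, R, S). Zero the smallest-magnitude half of each kernel
    # so that every kernel ends up with a sparsity ratio of exactly 1/2
    # (magnitude criterion is an assumption; the paper trains this adaptively).
    pruned = weights.copy()
    for k in range(pruned.shape[0]):
        flat = pruned[k].ravel()
        keep = np.argsort(np.abs(flat))[flat.size // 2:]  # largest half
        mask = np.zeros(flat.size, dtype=bool)
        mask[keep] = True
        pruned[k] = np.where(mask, flat, 0.0).reshape(pruned[k].shape)
    return pruned

# Toy usage
ifm = np.random.rand(64, 28, 28) * (np.random.rand(64, 28, 28) > 0.6)
w = np.random.randn(128, 64, 3, 3)
ifm_sorted, channel_order = rearrange_channels_by_sparsity(ifm)
w_half_sparse = prune_kernels_to_half_sparsity(w)

Grouping channels with similar sparsity and fixing every kernel at 1/2 zeros give each systolic tile a roughly uniform nonzero workload, which is the regularity the abstract argues is needed to fit sparse data into the rigid systolic tempo.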

