Memory-Efficient CNN Accelerator Based on Interlayer Feature Map Compression

10/12/2021
by Zhuang Shao, et al.

Existing deep convolutional neural networks (CNNs) generate massive interlayer feature data during network inference. To maintain real-time processing in embedded systems, large on-chip memory is required to buffer the interlayer feature maps. In this paper, we propose an efficient hardware accelerator with an interlayer feature compression technique to significantly reduce the required on-chip memory size and off-chip memory access bandwidth. The accelerator compresses interlayer feature maps by transforming the stored data into the frequency domain using a hardware-implemented 8x8 discrete cosine transform (DCT). The high-frequency components are removed after the DCT through quantization, and sparse matrix compression further compresses the interlayer feature maps. The on-chip memory allocation scheme supports dynamic configuration of the feature map buffer size and scratch pad size according to the requirements of each network layer. The hardware accelerator combines compression, decompression, and CNN acceleration into one computing stream, achieving minimal compression and processing delay. A prototype accelerator is implemented on an FPGA platform and also synthesized in TSMC 28-nm CMOS technology. It achieves 403 GOPS peak throughput and 1.4x-3.3x interlayer feature map reduction with only a small hardware area overhead, making it a promising hardware accelerator for intelligent IoT devices.
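The compression pipeline described above (8x8 DCT, discarding high-frequency coefficients, then exploiting the resulting sparsity) can be sketched in software. The sketch below is a minimal NumPy illustration of the general DCT-plus-truncation idea, not the paper's hardware design; the `keep` parameter and the coefficient-ordering heuristic (sorting by frequency-index sum, a simple stand-in for zig-zag order) are assumptions for illustration.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix (n x n): rows are cosine basis
    # vectors, so C @ C.T == I and the inverse transform is C.T.
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] *= 1 / np.sqrt(2)
    return c * np.sqrt(2 / n)

def compress_tile(tile, keep=16):
    # Forward 2D DCT of one 8x8 feature-map tile, then zero all but the
    # `keep` lowest-frequency coefficients (ordered by u+v index sum, a
    # rough zig-zag substitute). The zeroed result is sparse and suits
    # a sparse-matrix encoding for storage.
    C = dct_matrix(tile.shape[0])
    coeffs = C @ tile @ C.T
    u, v = np.indices(coeffs.shape)
    order = np.argsort((u + v).ravel(), kind="stable")
    mask = np.zeros(coeffs.size, dtype=bool)
    mask[order[:keep]] = True
    return coeffs * mask.reshape(coeffs.shape)

def decompress_tile(coeffs):
    # Inverse 2D DCT: C is orthonormal, so its transpose inverts it.
    C = dct_matrix(coeffs.shape[0])
    return C.T @ coeffs @ C

tile = np.random.default_rng(0).random((8, 8))
sparse = compress_tile(tile, keep=16)           # at most 16 nonzeros
approx = decompress_tile(sparse)                # lossy reconstruction
print(np.count_nonzero(sparse), np.max(np.abs(tile - approx)))
```

With `keep=64` (no truncation) the round trip is exact up to floating-point error, which is a quick sanity check that the transform pair is correct; smaller `keep` values trade reconstruction error for a sparser, more compressible tile.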


research
01/18/2018

On-Chip CNN Accelerator for Image Super-Resolution

To implement convolutional neural networks (CNN) in hardware, the state-...
research
06/15/2021

ShortcutFusion: From Tensorflow to FPGA-based accelerator with reuse-aware memory allocation for shortcut data

Residual block is a very common component in recent state-of-the art CNN...
research
10/27/2022

Improved Projection Learning for Lower Dimensional Feature Maps

The requirement to repeatedly move large feature maps off- and on-chip d...
research
09/25/2019

CAT: Compression-Aware Training for bandwidth reduction

Convolutional neural networks (CNNs) have become the dominant neural net...
research
12/16/2019

A flexible FPGA accelerator for convolutional neural networks

Though CNNs are highly parallel workloads, in the absence of efficient o...
research
07/08/2017

A Reconfigurable Streaming Deep Convolutional Neural Network Accelerator for Internet of Things

Convolutional neural network (CNN) offers significant accuracy in image ...
research
07/21/2022

Hardware-Efficient Template-Based Deep CNNs Accelerator Design

Acceleration of Convolutional Neural Network (CNN) on edge devices has r...
