OCCAM: Optimal Data Reuse for Convolutional Neural Networks

06/27/2021
by Ashish Gondimalla, et al.

Convolutional neural networks (CNNs) are emerging as powerful tools for image processing in important commercial applications. We focus on reducing the latency of image recognition. The large volumes of data at each CNN layer's input, filters, and output pose a memory bandwidth problem. While previous work captures only some of the enormous data reuse, full reuse implies that the initial input image and filters are read once from off-chip memory and the final output is written once off-chip, without spilling the intermediate layers' data off-chip. We propose Occam to capture full reuse via four contributions. (1) We identify the necessary condition for full reuse. (2) We identify the dependence closure as the sufficient condition to capture full reuse using the least on-chip memory. (3) Because the dependence closure is often too large to fit in on-chip memory, we propose a dynamic programming algorithm that optimally partitions a given CNN to guarantee the least off-chip traffic at the partition boundaries for a given on-chip capacity. Occam's partitions reside on different chips forming a pipeline, so that a partition's filters and dependence closure remain on-chip as different images pass through (i.e., each partition incurs off-chip traffic only for its inputs and outputs). (4) Because the optimal partitions may result in an unbalanced pipeline, we propose staggered asynchronous pipelines (STAP), which replicate the bottleneck stages to improve throughput by staggering the mini-batches across the replicas. Importantly, STAP achieves balanced pipelines without changing Occam's optimal partitioning. Our simulations show that Occam cuts off-chip transfers by 21x, and achieves 2.06x and 1.36x better performance and 33% and 24% lower energy than the base case and Layer Fusion, respectively. On an FPGA implementation, Occam performs 5.1x better than the base case.
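To make the partitioning idea concrete, below is a minimal sketch of how such a dynamic-programming partitioner could look. It is not Occam's actual algorithm: the layer sizes, the simplified on-chip footprint model (all of a partition's filters plus its largest intermediate activation), and the function names are illustrative assumptions; the paper's dependence-closure footprint is more involved.

```python
# Sketch only: choose cut points between consecutive CNN layers so that every
# resulting partition fits the on-chip capacity and the total activation
# traffic crossing the cuts is minimized. Footprint model is an assumption.

def optimal_partition(act_bytes, weight_bytes, capacity):
    """act_bytes[i]   : bytes of activations produced by layer i (0-based)
       weight_bytes[i]: bytes of layer i's filters
       capacity       : on-chip memory budget per partition, in bytes
       Returns (min_boundary_traffic, sorted list of partition start indices)."""
    n = len(act_bytes)
    INF = float("inf")

    def fits(lo, hi):
        # Simplified footprint: all filters of layers lo..hi plus the largest
        # intermediate activation must stay on-chip (the real dependence-closure
        # footprint is more involved).
        return sum(weight_bytes[lo:hi + 1]) + max(act_bytes[lo:hi + 1]) <= capacity

    # best[j] = least boundary traffic to cover layers 0..j-1;
    # last[j] = start index of the final partition, for backtracking.
    best = [INF] * (n + 1)
    last = [None] * (n + 1)
    best[0] = 0
    for j in range(1, n + 1):
        for i in range(j):                      # candidate partition i..j-1
            if best[i] == INF or not fits(i, j - 1):
                continue
            # A cut just before layer i costs the activations of layer i-1,
            # which must be written off-chip and read back by the next chip.
            cost = best[i] + (act_bytes[i - 1] if i > 0 else 0)
            if cost < best[j]:
                best[j], last[j] = cost, i
    if best[n] == INF:
        raise ValueError("no partitioning fits the given capacity")

    cuts, j = [], n
    while j > 0:
        cuts.append(last[j])
        j = last[j]
    return best[n], sorted(cuts)


if __name__ == "__main__":
    # Hypothetical 6-layer network (sizes in MB) and a 12 MB on-chip budget.
    acts = [8, 6, 4, 4, 2, 1]
    weights = [1, 2, 4, 4, 6, 2]
    traffic, starts = optimal_partition(acts, weights, 12)
    print("boundary traffic (MB):", traffic, "partition starts:", starts)
```

Because each partition maps to its own chip in the pipeline, only the activations crossing the chosen cuts ever leave a chip, which is the quantity the objective above minimizes; everything inside a partition stays on-chip across images.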


