Partitioning Compute Units in CNN Acceleration for Statistical Memory Traffic Shaping

06/18/2018
by Daejin Jung, et al.

The design complexity of CNNs has been steadily increasing to improve accuracy. To cope with the massive amount of computation that such complex CNNs require, the latest solutions employ blocking of an image over the available dimensions and batching of multiple input images to improve data reuse in the memory hierarchy. While there have been numerous works on maximizing data reuse, only a few studies have focused on the memory bottleneck caused by limited bandwidth. A bandwidth bottleneck can easily occur in CNN acceleration because CNN layers have different sizes with varying computation needs, and because batching is typically performed over each CNN layer for ideal data reuse. In this case, the data transfer demand for a layer can be relatively low or high compared to the layer's computation requirement, inducing temporal fluctuations in memory access that eventually cause bandwidth problems. In this paper, we first show that the ratio of memory access to computation fluctuates widely depending on the CNN layer and the function within the layer being processed by the compute units (cores), when the units are tightly synchronized to maximize data reuse. We then propose a strategy of partitioning the compute units such that the cores within each partition process a batch of input data synchronously to maximize data reuse, while different partitions run asynchronously. Because the partitions stay asynchronous and typically process different CNN layers at any given moment, their memory access traffic becomes statistically shuffled, and the total memory access traffic is smoothed over time. We call this smoothing statistical memory traffic shaping, and we show that it yields a performance gain of 8.0 percent on a commercial 64-core processor running ResNet-50.
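To make the shaping effect concrete, below is a minimal, illustrative sketch in Python. It is not code from the paper: the per-layer traffic values, the partition count, and the phase offsets are all hypothetical. It contrasts fully synchronized cores, whose per-layer traffic peaks coincide, with staggered partitions, whose aggregate demand is flatter.

# Toy simulation of statistical memory traffic shaping (illustrative only).
# The layer traffic profile, partition count, and offsets are assumptions,
# not values measured in the paper.

# Hypothetical per-layer memory traffic demand (arbitrary units per step),
# mimicking the layer-to-layer fluctuation described in the abstract.
layer_traffic = [8, 2, 9, 1, 7, 3, 10, 2]

def aggregate_traffic(num_partitions, offsets):
    """Sum the traffic of all partitions at each time step.

    Each partition walks the same layer sequence, but starts at its own
    phase offset, so asynchronous partitions rarely peak together.
    """
    steps = len(layer_traffic)
    totals = []
    for t in range(steps):
        total = 0
        for p in range(num_partitions):
            layer = (t + offsets[p]) % steps
            total += layer_traffic[layer]
        totals.append(total)
    return totals

def peak_to_average(traffic):
    return max(traffic) / (sum(traffic) / len(traffic))

# Fully synchronized cores: every partition is on the same layer at once,
# so their traffic peaks coincide.
sync = aggregate_traffic(4, offsets=[0, 0, 0, 0])

# Asynchronous partitions: staggered offsets statistically shuffle which
# layer each partition is processing, smoothing the aggregate demand.
shaped = aggregate_traffic(4, offsets=[0, 1, 2, 3])

print("synchronized  peak/avg:", round(peak_to_average(sync), 2))
print("asynchronous  peak/avg:", round(peak_to_average(shaped), 2))

Running this toy yields a noticeably lower peak-to-average ratio for the staggered case, which is the qualitative effect the proposed partitioning strategy exploits to relieve the bandwidth bottleneck.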
