On the Impact of Partial Sums on Interconnect Bandwidth and Memory Accesses in a DNN Accelerator

11/02/2020
by   Mahesh Chandra, et al.

Dedicated accelerators are being designed to address the enormous resource requirements of deep neural network (DNN) applications. Power, performance and area (PPA) constraints limit the number of MACs available in these accelerators, so convolution layers, which require a huge number of MAC operations, are often partitioned into multiple iterative sub-tasks. This puts heavy pressure on available system resources such as interconnect and memory bandwidth. Optimal partitioning of the feature maps for these sub-tasks can reduce the bandwidth requirement substantially. Some accelerators avoid off-chip or interconnect transfers by implementing local memories; however, the memory accesses are still performed, and a reduced bandwidth can help save power in such architectures. In this paper, we propose a first-order analytical method to partition the feature maps for optimal bandwidth and evaluate the impact of such partitioning on the bandwidth. This bandwidth can be saved by designing an active memory controller which can perform basic arithmetic operations. It is shown that the optimal partitioning and active memory controller can achieve up to a 40% reduction in bandwidth.
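The partial-sum trade-off the abstract describes can be illustrated with a minimal first-order traffic model (a sketch with hypothetical parameter names, not the paper's exact formulation): splitting the input channels of a convolution into sub-task groups produces partial output sums that must be written out and, unless the memory controller can accumulate in place, read back once per accumulation step.

```python
# First-order model of data moved (in elements) for a convolution whose
# input channels are partitioned into `groups` iterative sub-tasks.
# Parameter names (H, W, C, K, R, S) are illustrative assumptions.

def conv_bandwidth(H, W, C, K, R, S, groups, active_mem=False):
    """Estimate total traffic for an H x W x C input, K output channels,
    and R x S kernels, with the C input channels split across `groups`."""
    ifmap = H * W * C              # every input element read once
    weights = K * C * R * S        # every weight read once
    ofmap_size = H * W * K         # size of one full output (or psum) map
    # Each group writes a full partial-sum map; with a passive memory
    # controller, all but the first write require reading the previous
    # partial sums back for accumulation in the accelerator.
    psum_writes = groups * ofmap_size
    psum_reads = 0 if active_mem else (groups - 1) * ofmap_size
    return ifmap + weights + psum_writes + psum_reads

# Example: a 56x56x64 layer with 64 outputs and 3x3 kernels.
passive = conv_bandwidth(56, 56, 64, 64, 3, 3, groups=4)
active = conv_bandwidth(56, 56, 64, 64, 3, 3, groups=4, active_mem=True)
```

Under this toy model, an active memory controller that performs the accumulation itself removes the `psum_reads` term entirely, which is the source of the savings the paper quantifies.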


