CHARM: Composing Heterogeneous Accelerators for Matrix Multiply on Versal ACAP Architecture

01/06/2023
by Jinming Zhuang et al.

Dense matrix multiply (MM) serves as one of the most heavily used kernels in deep learning applications. To cope with the high computation demands of these applications, heterogeneous architectures featuring both FPGA and dedicated ASIC accelerators have emerged as promising platforms. For example, the AMD/Xilinx Versal ACAP architecture combines general-purpose CPU cores and programmable logic with AI Engine (AIE) processors optimized for AI/ML. With 400 AIEs, it provides up to 6.4 TFLOPs of performance for 32-bit floating-point data. However, machine learning models often contain both large and small MM operations. While large MM operations can be parallelized efficiently across many cores, small MM operations typically cannot. We observe that executing some small MM layers from the BERT natural language processing model on a large, monolithic MM accelerator in Versal ACAP achieves less than 5% of the theoretical peak performance. Therefore, one key question arises: How can we design accelerators to fully use the abundant computation resources under limited communication bandwidth for applications with multiple MM layers of diverse sizes? We identify the biggest system throughput bottleneck as the mismatch between the massive computation resources of one monolithic accelerator and the many small MM layers in the application. To resolve this problem, we propose the CHARM framework, which composes multiple diverse MM accelerator architectures working concurrently on different layers within one application. We deploy the CHARM framework for four different applications, BERT, ViT, NCF, and MLP, on the AMD Versal ACAP VCK190 evaluation board. Our experiments show that we achieve 1.46 TFLOPs, 1.61 TFLOPs, 1.74 TFLOPs, and 2.94 TFLOPs inference throughput for BERT, ViT, NCF, and MLP, respectively, corresponding to 5.40x, 32.51x, 1.00x, and 1.00x throughput gains over one monolithic accelerator.
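
The mismatch described above can be made concrete with a toy utilization model. The sketch below is illustrative only, not the CHARM design-space exploration from the paper: it assumes a simple model in which an MM layer is padded up to multiples of an accelerator's tile shape, and all layer and tile shapes are hypothetical examples.

```python
import math

# Illustrative sketch only (NOT the CHARM framework itself): a toy padding
# model of the mismatch between one monolithic MM accelerator and small MM
# layers. All layer and tile shapes below are hypothetical examples.

def padded_utilization(M, K, N, tm, tk, tn):
    """Fraction of useful work when an (M, K, N) matrix multiply is padded
    up to multiples of the accelerator tile shape (tm, tk, tn)."""
    pad = lambda dim, tile: math.ceil(dim / tile) * tile
    return (M * K * N) / (pad(M, tm) * pad(K, tk) * pad(N, tn))

large_layer = (3072, 1024, 4096)     # hypothetical large MM layer
small_layer = (128, 768, 64)         # hypothetical small MM layer

monolithic_tile = (1536, 256, 1024)  # one big, monolithic accelerator tile
matched_tile    = (128, 256, 64)     # smaller accelerator matched to small MMs

print(f"large layer on monolithic:  {padded_utilization(*large_layer, *monolithic_tile):.1%}")
print(f"small layer on monolithic:  {padded_utilization(*small_layer, *monolithic_tile):.1%}")
print(f"small layer on matched acc: {padded_utilization(*small_layer, *matched_tile):.1%}")
```

Under this toy model the small layer occupies well under 5% of the monolithic accelerator's tile, in line with the under-utilization reported in the abstract, while a smaller, better-matched accelerator keeps utilization high; CHARM's approach is to run such layers concurrently on accelerators of different sizes rather than serially on one large design.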

