Co-Exploration of Neural Architectures and Heterogeneous ASIC Accelerator Designs Targeting Multiple Tasks

02/10/2020
by Lei Yang, et al.

Neural Architecture Search (NAS) has demonstrated its power on various AI accelerating platforms such as Field-Programmable Gate Arrays (FPGAs) and Graphics Processing Units (GPUs). However, how to integrate NAS with Application-Specific Integrated Circuits (ASICs) remains an open problem, despite ASICs being the most powerful AI accelerating platforms. The major bottleneck is the large design freedom associated with ASIC designs. Moreover, given that multiple DNNs will run in parallel to serve different workloads with diverse layer operations and sizes, integrating heterogeneous ASIC sub-accelerators for distinct DNNs in one design can significantly boost performance, but it also further complicates the design space. To address these challenges, in this paper we build an ASIC template set based on existing successful designs, each described by its unique dataflow, so that the design space is significantly reduced. Based on these templates, we further propose a framework, namely NASAIC, which can simultaneously identify multiple DNN architectures and the associated heterogeneous ASIC accelerator design, such that the design specifications (specs) are satisfied while the accuracy is maximized. Experimental results show that, compared with successive NAS and ASIC design optimizations, which lead to design spec violations, NASAIC can guarantee the results to meet the design specs, with 17.77%, 2.49x, and 0.76% reductions in latency, energy, and accuracy, respectively. To the best of the authors' knowledge, this is the first work on neural architecture and ASIC accelerator design co-exploration.
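To make the co-exploration loop concrete, the sketch below shows one way such a template-based joint search could be organized: a small set of dataflow templates defines the hardware side of the design space, each candidate pairs one sampled DNN per task with one sub-accelerator per DNN, and the reward rejects any candidate that violates the design specs while otherwise favoring higher (proxy) accuracy. This is a minimal illustration under invented assumptions: the template names, cost formulas, spec values, and the random-search loop are all placeholders, not NASAIC's actual controller or cost models.

```python
import random

# --- Hypothetical dataflow template set ------------------------------------
# Each template abstracts one proven accelerator dataflow. The names and the
# per-PE area / utilization numbers are invented placeholders, not values
# from the paper.
TEMPLATES = {
    "dla_like": {"area_per_pe": 1.0, "util": 0.9},  # e.g., a DLA-style dataflow
    "shi_like": {"area_per_pe": 1.2, "util": 0.7},  # e.g., a systolic-style dataflow
}

# Illustrative design specs the whole accelerator must satisfy (toy units).
SPECS = {"max_area": 1500.0, "max_latency": 200.0}


def sample_candidate(num_tasks=2):
    """Jointly sample one DNN per task and one sub-accelerator per DNN."""
    networks = [{"depth": random.choice([8, 12, 16]),
                 "width": random.choice([32, 48, 64])}
                for _ in range(num_tasks)]
    accelerators = [{"template": random.choice(sorted(TEMPLATES)),
                     "pes": random.choice([256, 512, 1024])}
                    for _ in range(num_tasks)]
    return networks, accelerators


def estimate_hardware(networks, accelerators):
    """Toy analytic cost model: area sums over sub-accelerators; latency is
    set by the slowest DNN/sub-accelerator pair, since the DNNs run in
    parallel on their own sub-accelerators."""
    area = sum(TEMPLATES[a["template"]]["area_per_pe"] * a["pes"]
               for a in accelerators)
    latency = max((n["depth"] * n["width"] ** 2)
                  / (a["pes"] * TEMPLATES[a["template"]]["util"])
                  for n, a in zip(networks, accelerators))
    return area, latency


def proxy_accuracy(networks):
    """Stand-in for trained accuracy: larger networks score higher, with
    diminishing returns. A real flow would train or use a predictor."""
    return sum(1.0 - 64.0 / (n["depth"] * n["width"])
               for n in networks) / len(networks)


def reward(networks, accelerators):
    """Strongly penalize spec violations; otherwise reward accuracy, so the
    search is steered toward candidates that meet the design specs."""
    area, latency = estimate_hardware(networks, accelerators)
    if area > SPECS["max_area"] or latency > SPECS["max_latency"]:
        return -1.0
    return proxy_accuracy(networks)


def search(iterations=1000, seed=0):
    """Random search as a simple stand-in for a learned NAS controller."""
    random.seed(seed)
    best, best_reward = None, float("-inf")
    for _ in range(iterations):
        candidate = sample_candidate()
        r = reward(*candidate)
        if r > best_reward:
            best, best_reward = candidate, r
    return best, best_reward


if __name__ == "__main__":
    (nets, accs), r = search()
    print(f"best reward (proxy accuracy): {r:.3f}")
    for i, (n, a) in enumerate(zip(nets, accs)):
        print(f"task {i}: DNN {n} -> {a['pes']}-PE '{a['template']}' sub-accelerator")
```

Swapping the random sampler for a learned controller and the toy formulas for real latency, energy, and area models would recover the general shape of a co-exploration framework such as NASAIC, while keeping the key idea visible: templates shrink the hardware space, and one reward couples the network and accelerator choices.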


