A Semi-Decoupled Approach to Fast and Optimal Hardware-Software Co-Design of Neural Accelerators

03/25/2022
by   Bingqian Lu, et al.

In view of the performance limitations of fully-decoupled designs for neural architectures and accelerators, hardware-software co-design has been emerging to fully reap the benefits of flexible design spaces and optimize neural network performance. Nonetheless, such co-design also enlarges the total search space to practically infinity and presents substantial challenges. While prior studies have focused on improving search efficiency (e.g., via reinforcement learning), they commonly rely on co-searches over the entire architecture-accelerator design space. In this paper, we propose a semi-decoupled approach that reduces the size of the total design space by orders of magnitude without losing optimality. We first perform neural architecture search to obtain a small set of optimal architectures for one accelerator candidate. Importantly, this is also the set of (close-to-)optimal architectures for other accelerator designs, based on the property that neural architectures' ranking orders in terms of inference latency and energy consumption are highly similar across different accelerator designs. Then, instead of considering all possible architectures, we optimize the accelerator design only in combination with this small set of architectures, thus significantly reducing the total search cost. We validate our approach by conducting experiments on various architecture spaces for accelerator designs with different dataflows. Our results highlight that we can obtain the optimal design by navigating only the reduced search space. The source code of this work is at <https://github.com/Ren-Research/CoDesign>.
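The two-step procedure in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not the paper's released code: the design spaces, accelerator names, and the `latency` cost model below are all hypothetical stand-ins (in practice the cost would come from a simulator or profiler), chosen so that architecture rankings are preserved across accelerators, which is the property the approach relies on.

```python
import itertools

# Hypothetical toy design spaces (illustrative names only).
ARCHITECTURES = [f"arch_{i}" for i in range(100)]
ACCELERATORS = ["acc_A", "acc_B", "acc_C"]  # e.g., different dataflows


def latency(arch, acc):
    """Stand-in cost model; a real co-design flow would query a
    hardware simulator or measured profile here. The per-accelerator
    scale factor preserves the ranking of architectures across
    accelerators, mirroring the paper's key empirical property."""
    i = int(arch.split("_")[1])
    scale = {"acc_A": 1.0, "acc_B": 1.3, "acc_C": 0.8}[acc]
    return scale * (i % 17 + 0.01 * i)


def semi_decoupled_search(k=5):
    # Step 1: rank all architectures on a single proxy accelerator
    # and keep only the top-k candidates.
    proxy = ACCELERATORS[0]
    top_k = sorted(ARCHITECTURES, key=lambda a: latency(a, proxy))[:k]
    # Step 2: co-search only over top-k architectures x all accelerators,
    # i.e., k * |ACCELERATORS| evaluations instead of
    # |ARCHITECTURES| * |ACCELERATORS|.
    return min(itertools.product(top_k, ACCELERATORS),
               key=lambda pair: latency(*pair))


def exhaustive_search():
    # Baseline: full co-search over the entire joint space.
    return min(itertools.product(ARCHITECTURES, ACCELERATORS),
               key=lambda pair: latency(*pair))
```

Under the ranking-similarity assumption baked into this toy cost model, `semi_decoupled_search()` returns the same architecture-accelerator pair as `exhaustive_search()` while evaluating only a small fraction of the joint space.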

