RHNAS: Realizable Hardware and Neural Architecture Search

06/17/2021
by Yash Akhauri, et al.

The rapidly evolving field of Artificial Intelligence necessitates automated approaches to co-designing neural network architectures and neural accelerators to maximize system efficiency and address productivity challenges. To enable joint optimization over this vast design space, there has been growing interest in differentiable NN-HW co-design. Fully differentiable co-design has reduced the resource requirements for discovering optimized NN-HW configurations, but it fails to adapt to general hardware accelerator search spaces because many such spaces contain non-synthesizable (invalid) designs. To enable efficient and realizable co-design of configurable hardware accelerators with arbitrary neural network search spaces, we introduce RHNAS, a method that combines reinforcement learning for hardware optimization with differentiable neural architecture search. RHNAS discovers realizable NN-HW designs with 1.84x lower latency and 1.86x lower energy-delay product (EDP) on ImageNet, and 2.81x lower latency and 3.30x lower EDP on CIFAR-10, over the default hardware accelerator design.
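The combination the abstract describes can be sketched as an alternating loop: a reinforcement-learning policy samples hardware configurations (absorbing non-synthesizable designs via a penalty reward rather than a gradient), while the architecture parameters are updated by ordinary gradient descent. The sketch below is a minimal toy illustration of that pattern, not the paper's implementation; the hardware choices, validity rule, and latency model are all invented assumptions.

```python
import math
import random

# Toy, illustrative sketch of RHNAS-style co-design (NOT the paper's code):
# a REINFORCE hardware sampler, which tolerates non-synthesizable designs by
# assigning them a large penalty reward, alternates with a differentiable
# update of softmax-relaxed architecture parameters.

random.seed(0)

HW_CHOICES = [8, 16, 32, 64]   # hypothetical candidate accelerator widths
OP_COSTS = [1.0, 2.0, 4.0]     # hypothetical per-op latency contributions

hw_logits = [0.0] * len(HW_CHOICES)   # RL policy over hardware configs
arch_alpha = [0.0] * len(OP_COSTS)    # differentiable architecture params
LR = 0.5

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def synthesizable(width):
    # Invented validity rule: pretend widths above 32 fail synthesis.
    return width <= 32

def latency(width, op_mix):
    # Invented model: expected op cost divided by hardware parallelism.
    return sum(p * c for p, c in zip(op_mix, OP_COSTS)) / width

for step in range(200):
    # RL step: sample a hardware config; invalid designs get a penalty
    # reward instead of breaking the search.
    probs = softmax(hw_logits)
    idx = random.choices(range(len(HW_CHOICES)), weights=probs)[0]
    width = HW_CHOICES[idx]
    op_mix = softmax(arch_alpha)
    reward = -latency(width, op_mix) if synthesizable(width) else -10.0
    # REINFORCE: d log pi / d logit_k = 1[k == idx] - probs[k]
    for k in range(len(hw_logits)):
        hw_logits[k] += LR * reward * ((1.0 if k == idx else 0.0) - probs[k])

    # Differentiable NAS step: analytic gradient of the toy latency with
    # respect to the softmax-relaxed architecture parameters.
    if synthesizable(width):
        mean_cost = sum(p * c for p, c in zip(op_mix, OP_COSTS))
        for k in range(len(arch_alpha)):
            arch_alpha[k] -= LR * op_mix[k] * (OP_COSTS[k] - mean_cost) / width

best_hw = HW_CHOICES[max(range(len(HW_CHOICES)), key=lambda i: hw_logits[i])]
print("chosen hardware width:", best_hw)
print("architecture weights:", [round(p, 3) for p in softmax(arch_alpha)])
```

The penalty reward is what keeps the search "realizable": invalid hardware samples still produce a learning signal for the policy even though no gradient exists through a failed synthesis, which is the property that a purely differentiable co-design loop lacks.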


Related research

03/25/2022 · A Semi-Decoupled Approach to Fast and Optimal Hardware-Software Co-Design of Neural Accelerators
In view of the performance limitations of fully-decoupled designs for ne...

01/17/2020 · Latency-Aware Differentiable Neural Architecture Search
Differentiable neural architecture search methods became popular in auto...

05/27/2021 · NAAS: Neural Accelerator Architecture Search
Data-driven, automatic design space exploration of neural accelerator ar...

08/22/2023 · DeepBurning-MixQ: An Open Source Mixed-Precision Neural Network Accelerator Design Framework for FPGAs
Mixed-precision neural networks (MPNNs) that enable the use of just enou...

09/14/2020 · DANCE: Differentiable Accelerator/Network Co-Exploration
To cope with the ever-increasing computational demand of the DNN executi...

12/07/2022 · CODEBench: A Neural Architecture and Hardware Accelerator Co-Design Framework
Recently, automated co-design of machine learning (ML) models and accele...

01/23/2023 · Enabling Hard Constraints in Differentiable Neural Network and Accelerator Co-Exploration
Co-exploration of an optimal neural architecture and its hardware accele...
