RHNAS: Realizable Hardware and Neural Architecture Search

06/17/2021
by Yash Akhauri, et al.

The rapidly evolving field of Artificial Intelligence necessitates automated approaches to co-design neural network architectures and neural accelerators in order to maximize system efficiency and address productivity challenges. To enable joint optimization over this vast design space, there has been growing interest in differentiable neural network-hardware (NN-HW) co-design. Fully differentiable co-design reduces the resource requirements for discovering optimized NN-HW configurations, but it fails to generalize to arbitrary hardware accelerator search spaces, because the search spaces of many hardware accelerators contain non-synthesizable (invalid) designs. To enable efficient and realizable co-design of configurable hardware accelerators with arbitrary neural network search spaces, we introduce RHNAS, a method that combines reinforcement learning for hardware optimization with differentiable neural architecture search. RHNAS discovers realizable NN-HW designs with 1.84x lower latency and 1.86x lower energy-delay product (EDP) on ImageNet, and 2.81x lower latency and 3.30x lower EDP on CIFAR-10, relative to the default hardware accelerator design.
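To make the combination concrete, the sketch below pairs a softmax-relaxed supernet (the differentiable-NAS half) with a REINFORCE policy over discrete hardware configurations (the RL half), penalizing sampled designs that fail a synthesizability check. This is a minimal sketch under invented assumptions: the MixedOp supernet, the hw_choices list, the cost_model function, and the -100 penalty are illustrative stand-ins, not RHNAS's actual search spaces, cost model, or reward.

```python
# Minimal sketch of an RL + differentiable-NAS co-design loop in PyTorch.
# All hardware-side details here are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixedOp(nn.Module):
    """Toy differentiable-NAS cell: a softmax-weighted mix of two ops."""

    def __init__(self, ch):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.Conv2d(ch, ch, 5, padding=2),
        ])
        # Architecture parameters (alpha), relaxed via softmax.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))


def cost_model(pe, op_idx):
    """Hypothetical analytic latency model; returns None for a
    non-synthesizable (invalid) corner of the hardware space."""
    if pe == 64 and op_idx == 1:
        return None
    return 100.0 / pe + (5.0 if op_idx == 1 else 3.0)


hw_choices = [8, 16, 32, 64]  # e.g. candidate PE-array sizes (illustrative)
hw_logits = torch.zeros(len(hw_choices), requires_grad=True)

net = MixedOp(ch=4)
w_params = [p for n, p in net.named_parameters() if n != "alpha"]
w_opt = torch.optim.Adam(w_params, lr=1e-3)
a_opt = torch.optim.Adam([net.alpha], lr=3e-2)
hw_opt = torch.optim.Adam([hw_logits], lr=3e-2)

x, target = torch.randn(8, 4, 16, 16), torch.randn(8, 4, 16, 16)

for step in range(200):
    # 1) Differentiable NAS step: train weights and alphas by gradient
    #    descent on the task loss (a stand-in regression loss here).
    w_opt.zero_grad()
    a_opt.zero_grad()
    F.mse_loss(net(x), target).backward()
    w_opt.step()
    a_opt.step()

    # 2) RL step (REINFORCE, no baseline): sample a hardware config,
    #    reward = -latency, with a large penalty for invalid designs.
    dist = torch.distributions.Categorical(logits=hw_logits)
    idx = dist.sample()
    lat = cost_model(hw_choices[int(idx)], int(net.alpha.argmax()))
    reward = -lat if lat is not None else -100.0
    hw_opt.zero_grad()
    (-dist.log_prob(idx) * reward).backward()
    hw_opt.step()

print("op:", int(net.alpha.argmax()),
      "PE size:", hw_choices[int(hw_logits.argmax())])
```

In the full method, the reward would come from an accelerator cost model or synthesis feedback on real configurations rather than a toy function; the sketch only shows how the gradient-based and RL optimizers interleave in one loop.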

Related Research

02/17/2021 - Rethinking Co-design of Neural Architectures and Hardware Accelerators
Neural architectures and hardware accelerators have been two driving for...

09/14/2020 - DANCE: Differentiable Accelerator/Network Co-Exploration
To cope with the ever-increasing computational demand of the DNN executi...

01/17/2020 - Latency-Aware Differentiable Neural Architecture Search
Differentiable neural architecture search methods became popular in auto...

05/27/2021 - NAAS: Neural Accelerator Architecture Search
Data-driven, automatic design space exploration of neural accelerator ar...

03/02/2021 - Mind Mappings: Enabling Efficient Algorithm-Accelerator Mapping Space Search
Modern day computing increasingly relies on specialization to satiate gr...

10/23/2019 - Sidebar: Scratchpad Based Communication Between CPUs and Accelerators
Hardware accelerators for neural networks have shown great promise for b...

02/10/2021 - Searching for Fast Model Families on Datacenter Accelerators
Neural Architecture Search (NAS), together with model scaling, has shown...