CODEBench: A Neural Architecture and Hardware Accelerator Co-Design Framework

12/07/2022, by Shikhar Tuli, et al.

Recently, automated co-design of machine learning (ML) models and accelerator architectures has attracted significant attention from both industry and academia. However, most co-design frameworks either explore a limited search space or employ suboptimal exploration techniques for simultaneous design decisions on the ML model and the accelerator. Furthermore, training the ML model and simulating the accelerator performance is computationally expensive. To address these limitations, this work proposes a novel neural architecture and hardware accelerator co-design framework, called CODEBench. It is composed of two new benchmarking sub-frameworks, CNNBench and AccelBench, which explore expanded design spaces of convolutional neural networks (CNNs) and CNN accelerators. CNNBench leverages an advanced search technique, BOSHNAS, which efficiently trains a neural heteroscedastic surrogate model and employs second-order gradients to converge to an optimal CNN architecture. AccelBench performs cycle-accurate simulations for a diverse set of accelerator architectures in a vast design space. With the proposed co-design method, called BOSHCODE, our best CNN-accelerator pair achieves 1.4% higher accuracy on the CIFAR-10 dataset compared to the state-of-the-art pair, while enabling 59.1% lower energy-delay product (EDP). On the ImageNet dataset, it achieves 3.7% higher Top-1 accuracy and 11.2% lower EDP. CODEBench outperforms the state-of-the-art co-design framework, i.e., Auto-NBA, by achieving 1.5% higher CNN accuracy and 34.7x higher throughput, while enabling 11.0x lower EDP and 4.0x lower chip area on CIFAR-10.
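To make the co-design idea concrete, here is a toy, hypothetical sketch of a surrogate-guided joint search over CNN and accelerator configurations, in the spirit of BOSHCODE: evaluate a few (CNN, accelerator) pairs with an expensive evaluator, use a cheap surrogate with an uncertainty term to score unexplored pairs, and pick the next pair by an upper-confidence-bound rule. All names, the two design knobs, and the stand-in evaluator below are illustrative assumptions, not the paper's actual search spaces, surrogate model, or simulator.

```python
import random

# Illustrative design knobs (assumptions, not the paper's search spaces).
CNN_CHOICES = [8, 16, 32, 64]        # e.g., base channel width of the CNN
ACCEL_CHOICES = [64, 128, 256, 512]  # e.g., number of MAC units

def evaluate(cnn, accel):
    """Stand-in for CNN training plus cycle-accurate simulation.
    Returns a scalar reward trading off accuracy against EDP (toy model)."""
    acc = 1.0 - 1.0 / cnn            # wider nets -> higher "accuracy"
    edp = cnn / accel                # more MACs -> lower "EDP"
    return acc - 0.1 * edp

def search(steps=10, seed=0):
    rng = random.Random(seed)
    observed = {}                    # (cnn, accel) -> measured reward
    for _ in range(steps):
        best_pair, best_ucb = None, float("-inf")
        for cnn in CNN_CHOICES:
            for accel in ACCEL_CHOICES:
                if (cnn, accel) in observed:
                    mean, unc = observed[(cnn, accel)], 0.0
                else:
                    # Crude "surrogate": average reward of nearby observed
                    # pairs; uncertainty is high when nothing nearby exists.
                    near = [r for (c, a), r in observed.items()
                            if abs(c - cnn) <= 16 and abs(a - accel) <= 128]
                    mean = sum(near) / len(near) if near else 0.0
                    unc = 0.5 if near else 1.0
                ucb = mean + unc + 1e-6 * rng.random()  # UCB + tie-break
                if ucb > best_ucb:
                    best_pair, best_ucb = (cnn, accel), ucb
        observed[best_pair] = evaluate(*best_pair)  # expensive step
    return max(observed, key=observed.get)

print(search())
```

The real framework replaces the neighbor-averaging surrogate with a learned neural heteroscedastic model and the toy evaluator with actual CNN training and cycle-accurate accelerator simulation; the loop structure, however, is the same: the surrogate keeps the number of expensive evaluations small.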



Related research

02/11/2020: Best of Both Worlds: AutoML Codesign of a CNN and its Hardware Accelerator
Neural architecture search (NAS) has been very successful at outperformi...

06/17/2021: RHNAS: Realizable Hardware and Neural Architecture Search
The rapidly evolving field of Artificial Intelligence necessitates autom...

10/16/2018: Morph: Flexible Acceleration for 3D CNN-based Video Understanding
The past several years have seen both an explosion in the use of Convolu...

08/10/2023: Machine Learning aided Computer Architecture Design for CNN Inferencing Systems
Efficient and timely calculations of Machine Learning (ML) algorithms ar...

03/27/2023: TransCODE: Co-design of Transformers and Accelerators for Efficient Training and Inference
Automated co-design of machine learning models and evaluation hardware i...

10/30/2018: A mixed signal architecture for convolutional neural networks
Deep neural network (DNN) accelerators with improved energy and delay ar...

02/13/2023: The Framework Tax: Disparities Between Inference Efficiency in Research and Deployment
Increased focus on the deployment of machine learning systems has led to...
