Standing on the Shoulders of Giants: Hardware and Neural Architecture Co-Search with Hot Start

07/17/2020
by   Weiwen Jiang, et al.

Hardware and neural architecture co-search, which automatically generates Artificial Intelligence (AI) solutions from a given dataset, promises to promote AI democratization; however, the time required by current co-search frameworks is on the order of hundreds of GPU hours for a single target hardware. This inhibits the use of such frameworks on commodity hardware. The root cause of the low efficiency in existing co-search frameworks is that they start from a "cold" state (i.e., they search from scratch). In this paper, we propose a novel framework, namely HotNAS, that starts from a "hot" state based on a set of existing pre-trained models (a.k.a. a model zoo) to avoid lengthy training time. As such, the search time can be reduced from 200 GPU hours to less than 3 GPU hours. In HotNAS, in addition to the hardware design space and the neural architecture search space, we further integrate a compression space to compress models during the co-search, which creates new opportunities to reduce latency but also brings challenges. One key challenge is that all of the above search spaces are coupled with each other; e.g., compression may not work without hardware design support. To tackle this issue, HotNAS builds a chain of tools to design hardware that supports compression, on top of which a global optimizer is developed to automatically co-search all the involved search spaces. Experiments on the ImageNet dataset and a Xilinx FPGA show that, under a timing constraint of 5ms, neural architectures generated by HotNAS can achieve up to a 5.79% accuracy gain compared with existing ones.
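To make the coupled search spaces concrete, the following is a minimal, hypothetical sketch of a hot-start co-search loop: candidates from a pre-trained model zoo are crossed with compression options and hardware configurations, and the highest-accuracy combination meeting the latency budget is selected. All model names, latency/accuracy numbers, and the cost model below are invented for illustration; HotNAS itself uses a learned global optimizer rather than exhaustive enumeration.

```python
import itertools

# Hypothetical model zoo of pre-trained backbones:
# (name, baseline top-1 accuracy %, baseline latency in ms). Numbers are made up.
MODEL_ZOO = [
    ("resnet18", 69.8, 7.2),
    ("mobilenet_v2", 71.9, 4.1),
    ("resnet50", 76.1, 12.4),
]

# Compression space: pruning ratios, each trading accuracy for latency.
# (pruning ratio, latency multiplier, accuracy drop) -- stand-ins for a real cost model.
COMPRESSION = [
    (0.00, 1.00, 0.0),
    (0.25, 0.80, 0.4),
    (0.50, 0.60, 1.2),
]

# Hardware design space: candidate FPGA configurations with an assumed speedup factor.
HARDWARE = [
    ("small_pe_array", 1.0),
    ("large_pe_array", 1.5),
]

def co_search(latency_budget_ms):
    """Exhaustively co-search model x compression x hardware under a latency budget,
    returning ((model, pruning_ratio, hw_config), accuracy, latency) or None."""
    best = None
    for (model, acc, lat), (ratio, lat_mul, acc_drop), (hw, speedup) in itertools.product(
        MODEL_ZOO, COMPRESSION, HARDWARE
    ):
        latency = lat * lat_mul / speedup
        accuracy = acc - acc_drop
        if latency <= latency_budget_ms and (best is None or accuracy > best[1]):
            best = ((model, ratio, hw), accuracy, latency)
    return best

if __name__ == "__main__":
    config, accuracy, latency = co_search(latency_budget_ms=5.0)
    print(config, accuracy, round(latency, 2))
```

The coupling the abstract describes shows up even in this toy version: the 50%-pruned large model only becomes feasible when paired with the hardware configuration that supports it, so neither space can be searched in isolation.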


