FLASH: Fast Neural Architecture Search with Hardware Optimization

08/01/2021
by Guihong Li, et al.

Neural architecture search (NAS) is a promising technique for designing efficient and high-performance deep neural networks (DNNs). As the performance requirements of ML applications continue to grow, hardware accelerators are playing an increasingly central role in DNN design. This trend makes NAS even more complicated and time-consuming for most real applications. This paper proposes FLASH, a very fast NAS methodology that co-optimizes DNN accuracy and performance on a real hardware platform. As the main theoretical contribution, we first propose the NN-Degree, an analytical metric that quantifies the topological characteristics of DNNs with skip connections (e.g., DenseNets, ResNets, Wide-ResNets, and MobileNets). The newly proposed NN-Degree allows us to perform training-free NAS within one second and to build an accuracy predictor by training on as few as 25 samples out of a vast search space of more than 63 billion configurations. Second, by performing inference on the target hardware, we fine-tune and validate our analytical models, which estimate the latency, area, and energy consumption of various DNN architectures while executing standard ML datasets. Third, we construct a hierarchical algorithm based on simplicial homology global optimization (SHGO) to optimize the model-architecture co-design process while considering the area, latency, and energy consumption of the target hardware. We demonstrate that, compared to state-of-the-art NAS approaches, our hierarchical SHGO-based algorithm achieves a speedup of more than four orders of magnitude (specifically, its execution time is about 0.1 seconds). Finally, our experimental evaluations show that FLASH transfers easily to different hardware architectures, enabling NAS on a Raspberry Pi-3B processor in less than 3 seconds.
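To make the co-design step concrete, the sketch below uses SciPy's shgo optimizer to scalarize predicted accuracy against hardware cost over a toy search space. This is a minimal sketch only: the accuracy predictor, latency/energy/area models, penalty weights, latency budget, and search-space parameterization are illustrative stand-ins, not the paper's fitted NN-Degree predictor or its validated analytical hardware models.

```python
# A minimal sketch of SHGO-driven accuracy/hardware co-optimization in the
# spirit of FLASH. All models and coefficients below are illustrative
# placeholders, NOT the paper's fitted analytical models.
import numpy as np
from scipy.optimize import shgo

def predicted_accuracy(x):
    """Hypothetical accuracy predictor driven by a topology score
    (a stand-in for the paper's NN-Degree-based predictor)."""
    width, depth, skip_density = x
    topology_score = width * skip_density + depth   # illustrative proxy
    return 1.0 - np.exp(-topology_score / 50.0)     # saturating accuracy

def hardware_cost(x):
    """Hypothetical latency/energy/area models for a target platform."""
    width, depth, skip_density = x
    latency = 0.002 * width * depth                 # ms, illustrative
    energy = 0.001 * width * depth ** 1.2           # mJ, illustrative
    area = 0.05 * width                             # mm^2, illustrative
    return latency, energy, area

def objective(x):
    """Scalarized co-design objective: maximize predicted accuracy while
    penalizing latency, energy, and area (weights are illustrative)."""
    latency, energy, area = hardware_cost(x)
    return -(predicted_accuracy(x)
             - 0.01 * latency - 0.01 * energy - 0.001 * area)

# Toy search space: (channel width, network depth, skip-connection density).
bounds = [(16, 256), (10, 100), (0.0, 1.0)]

# Hard latency budget (illustrative value); SHGO's simplicial sampling
# supports inequality constraints of the form fun(x) >= 0.
constraints = [{'type': 'ineq', 'fun': lambda x: 5.0 - hardware_cost(x)[0]}]

result = shgo(objective, bounds, constraints=constraints,
              sampling_method='simplicial')
print('best (width, depth, skip density):', result.x)
print('objective value:', result.fun)
```

One reason SHGO fits this kind of co-design search is visible above: hard hardware budgets (here, a latency cap) can be expressed directly as inequality constraints under the simplicial sampling method, rather than folded into the objective as soft penalties.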


