MAPLE-X: Latency Prediction with Explicit Microprocessor Prior Knowledge

05/25/2022
by Saad Abbasi, et al.

Deep neural network (DNN) latency characterization is a time-consuming process and adds significant cost to Neural Architecture Search (NAS) when searching for efficient convolutional neural networks for embedded vision applications. DNN latency is a hardware-dependent metric and requires direct measurement or inference on the target hardware. A recently introduced latency estimation technique known as MAPLE predicts DNN execution time on previously unseen hardware devices by using hardware performance counters. Leveraging these counters in the form of an implicit prior, MAPLE achieves state-of-the-art performance in latency prediction. Here, we propose MAPLE-X, which extends MAPLE by incorporating explicit prior knowledge of hardware devices and DNN architecture latency to better account for model stability and robustness. First, by identifying DNN architectures that exhibit similar latency to one another, we generate multiple virtual examples that significantly improve accuracy over MAPLE. Second, hardware specifications are used to determine the similarity between training and test hardware, emphasizing training samples captured from comparable devices (domains) and encouraging improved domain alignment. Experimental results on a convolutional neural network NAS benchmark across different types of devices, including an Intel processor now used for embedded vision applications, demonstrate a 5% improvement over MAPLE. We also include ablation studies to independently assess the benefits of virtual examples and hardware-based sample importance.
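The two explicit priors described above can be sketched roughly as follows. This is a minimal illustration only, not the authors' exact procedure: the interpolation scheme for virtual examples, the cosine-similarity measure over device spec vectors, and all function names are assumptions made for the sketch.

```python
import numpy as np

def virtual_examples(features, latencies, tol=0.05):
    """Illustrative virtual-example generation: for each pair of
    architectures whose measured latencies are within a relative
    tolerance `tol`, interpolate their feature vectors and latencies
    to create an additional (virtual) training sample."""
    vx, vy = [], []
    n = len(latencies)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(latencies[i] - latencies[j]) <= tol * max(latencies[i], latencies[j]):
                alpha = 0.5  # midpoint interpolation (an assumption)
                vx.append(alpha * features[i] + (1 - alpha) * features[j])
                vy.append(alpha * latencies[i] + (1 - alpha) * latencies[j])
    return np.array(vx), np.array(vy)

def hardware_sample_weights(train_specs, test_spec):
    """Illustrative hardware-based sample importance: weight each
    training sample by the cosine similarity between its source
    device's specification vector and the test device's vector,
    so samples from comparable devices are emphasized."""
    sims = np.array([
        np.dot(s, test_spec) / (np.linalg.norm(s) * np.linalg.norm(test_spec))
        for s in train_specs
    ])
    w = np.maximum(sims, 0.0)  # clip negative similarities
    return w / w.sum()         # normalize to a weighting over samples
```

In this sketch, the virtual examples augment the regression training set, while the returned weights would be passed as per-sample importances when fitting the latency predictor.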


Related research:

- FLASH: Fast Neural Architecture Search with Hardware Optimization (08/01/2021)
- MAPLE: Microprocessor A Priori for Latency Estimation (11/30/2021)
- One Proxy Device Is Enough for Hardware-Aware Neural Architecture Search (11/01/2021)
- TEA-DNN: the Quest for Time-Energy-Accuracy Co-optimized Deep Neural Networks (11/29/2018)
- HELP: Hardware-Adaptive Efficient Latency Predictor for NAS via Meta-Learning (06/16/2021)
- MAPLE-Edge: A Runtime Latency Predictor for Edge Devices (04/27/2022)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models (05/07/2021)
