MAPLE: Microprocessor A Priori for Latency Estimation

11/30/2021
by Saad Abbasi, et al.

Modern deep neural networks must achieve state-of-the-art accuracy while exhibiting low latency and energy consumption. Neural architecture search (NAS) algorithms therefore take these two constraints into account when generating a new architecture. However, efficiency metrics such as latency are typically hardware-dependent, requiring the NAS algorithm to either measure or predict each architecture's latency. Measuring the latency of every evaluated architecture adds a significant amount of time to the NAS process. Here we propose Microprocessor A Priori for Latency Estimation (MAPLE), which does not rely on transfer learning or domain adaptation but instead generalizes to new hardware by incorporating prior hardware characteristics during training. MAPLE takes advantage of a novel quantitative strategy to characterize the underlying microprocessor by measuring relevant hardware performance metrics, yielding a fine-grained and expressive hardware descriptor. Moreover, MAPLE benefits from the tightly coupled I/O between the CPU and GPU and their interdependence: it predicts DNN latency on GPUs while measuring microprocessor performance hardware counters from the CPU feeding the GPU. Using this quantitative strategy as the hardware descriptor, MAPLE can generalize to new hardware via a few-shot adaptation strategy, where with as few as 3 samples it outperforms state-of-the-art methods that require as many as 10 samples. Experimental results showed that increasing the few-shot adaptation samples to 10 improves the accuracy significantly over the state-of-the-art methods by 12%, with MAPLE exhibiting 8-10% better accuracy, on average, for any number of adaptation samples.
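The core idea — training a latency regressor on architecture features concatenated with a measured hardware descriptor, then adapting to an unseen device from only a few samples — can be illustrated with a minimal, self-contained sketch. Everything below is an illustrative assumption, not MAPLE's actual implementation: the data is synthetic, the hardware descriptor is a random stand-in for real CPU performance counters, and the model is a simple ridge regressor with a scalar few-shot bias correction.

```python
import numpy as np

rng = np.random.default_rng(0)
N_DEVICES, N_ARCH, D_ARCH, D_HW = 5, 200, 6, 4

# Hidden "true" mapping used only to synthesize toy data (an assumption for the demo).
w_true = rng.normal(size=D_ARCH + D_HW)

def sample_device():
    """Stand-in for measuring CPU performance counters (cache misses, IPC, ...)."""
    return rng.normal(size=D_HW)

def device_dataset(hw, n):
    arch = rng.uniform(0, 1, size=(n, D_ARCH))     # toy architecture features
    X = np.hstack([arch, np.tile(hw, (n, 1))])     # arch features + hardware descriptor
    y = X @ w_true + 0.05 * rng.normal(size=n)     # synthetic "latency" with noise
    return X, y

# Train a ridge regressor on a pool of "seen" devices.
Xs, ys = zip(*(device_dataset(sample_device(), N_ARCH) for _ in range(N_DEVICES)))
X_tr, y_tr = np.vstack(Xs), np.concatenate(ys)
lam = 1e-3
w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(X_tr.shape[1]), X_tr.T @ y_tr)

# Few-shot adaptation on an unseen device: fit only a scalar bias from k=3 samples.
hw_new = sample_device()
X_new, y_new = device_dataset(hw_new, 50)
k = 3
bias = np.mean(y_new[:k] - X_new[:k] @ w)
pred = X_new[k:] @ w + bias
err = np.mean(np.abs(pred - y_new[k:]))
print(f"mean abs latency error on unseen device: {err:.3f}")
```

Because the hardware descriptor is part of the input features, the regressor can transfer to a device it never saw during training; the few measured samples are only needed for a cheap per-device correction, mirroring the few-shot adaptation strategy described in the abstract.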


Related research

01/04/2021 · Generalized Latency Performance Estimation for Once-For-All Neural Architecture Search
Neural Architecture Search (NAS) has enabled the possibility of automate...

03/11/2021 · HSCoNAS: Hardware-Software Co-Design of Efficient DNNs via Neural Architecture Search
In this paper, we present a novel multi-objective hardware-aware neural ...

05/25/2022 · MAPLE-X: Latency Prediction with Explicit Microprocessor Prior Knowledge
Deep neural network (DNN) latency characterization is a time-consuming p...

08/01/2021 · FLASH: Fast Neural Architecture Search with Hardware Optimization
Neural architecture search (NAS) is a promising technique to design effi...

06/16/2021 · HELP: Hardware-Adaptive Efficient Latency Predictor for NAS via Meta-Learning
For deployment, neural architecture search should be hardware-aware, in ...

07/16/2020 · BRP-NAS: Prediction-based NAS using GCNs
Neural architecture search (NAS) enables researchers to automatically ex...

04/27/2022 · MAPLE-Edge: A Runtime Latency Predictor for Edge Devices
Neural Architecture Search (NAS) has enabled automatic discovery of more...
