HELP: Hardware-Adaptive Efficient Latency Predictor for NAS via Meta-Learning

06/16/2021
by Hayeon Lee, et al.

For deployment, neural architecture search (NAS) should be hardware-aware, in order to satisfy device-specific constraints (e.g., memory usage, latency, and energy consumption) and enhance model efficiency. Existing hardware-aware NAS methods collect a large number of samples (e.g., accuracy and latency) from a target device and either build a lookup table or train a latency estimator. However, such an approach is impractical in real-world scenarios, as there exist numerous devices with different hardware specifications, and collecting samples from such a large number of devices would incur prohibitive computational and monetary costs. To overcome these limitations, we propose the Hardware-adaptive Efficient Latency Predictor (HELP), which formulates device-specific latency estimation as a meta-learning problem, so that we can estimate a model's latency for a given task on an unseen device with only a few samples. To this end, we introduce novel hardware embeddings that embed any device by treating it as a black-box function that outputs latencies, and we meta-learn the hardware-adaptive latency predictor in a device-dependent manner using these hardware embeddings. We validate HELP on latency estimation for unseen platforms, where it achieves high estimation performance with as few as 10 measurement samples, outperforming all relevant baselines. We also validate end-to-end NAS frameworks using HELP against ones without it, and show that it largely reduces the total time cost of the base NAS method in latency-constrained settings.
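The hardware-embedding idea can be illustrated with a short sketch: the unseen device is queried as a black box on a small, fixed set of reference architectures, the resulting latency vector serves as the device embedding, and a predictor conditioned on that embedding is adapted from roughly ten measured samples. The following PyTorch sketch is a simplified illustration under stated assumptions; the names (LatencyPredictor, hardware_embedding, adapt_few_shot) and the plain fine-tuning adaptation loop are my own, not the paper's exact meta-learning procedure.

```python
# Minimal sketch of a hardware-conditioned latency predictor (illustrative only;
# HELP itself meta-learns the predictor across many source devices).
import torch
import torch.nn as nn


class LatencyPredictor(nn.Module):
    """Predicts latency from an architecture encoding plus a hardware embedding."""

    def __init__(self, arch_dim: int, hw_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(arch_dim + hw_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, arch_enc: torch.Tensor, hw_emb: torch.Tensor) -> torch.Tensor:
        # Broadcast the single device embedding across the batch of architectures.
        hw = hw_emb.expand(arch_enc.size(0), -1)
        return self.net(torch.cat([arch_enc, hw], dim=-1)).squeeze(-1)


def hardware_embedding(measure_latency, reference_archs):
    """Treat the device as a black-box function: its embedding is the vector of
    latencies it returns on a small, fixed set of reference architectures."""
    lats = [measure_latency(a) for a in reference_archs]
    return torch.tensor(lats, dtype=torch.float32).unsqueeze(0)


def adapt_few_shot(predictor, hw_emb, archs, latencies, steps=50, lr=1e-3):
    """Adapt a (meta-)trained predictor to an unseen device from ~10 samples."""
    opt = torch.optim.Adam(predictor.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(predictor(archs, hw_emb), latencies)
        loss.backward()
        opt.step()
    return predictor


# Example usage with random stand-ins (real architecture encodings and a real
# latency-measuring routine would replace these placeholders):
# refs = [torch.randn(8) for _ in range(10)]
# hw = hardware_embedding(lambda a: float(a.abs().sum()), refs)
# model = LatencyPredictor(arch_dim=8, hw_dim=10)
# archs, lats = torch.randn(10, 8), torch.rand(10)
# adapt_few_shot(model, hw, archs, lats)
```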

Related research

- One Proxy Device Is Enough for Hardware-Aware Neural Architecture Search (11/01/2021)
  Convolutional neural networks (CNNs) are used in numerous real-world app...
- MAPLE-Edge: A Runtime Latency Predictor for Edge Devices (04/27/2022)
  Neural Architecture Search (NAS) has enabled automatic discovery of more...
- BRP-NAS: Prediction-based NAS using GCNs (07/16/2020)
  Neural architecture search (NAS) enables researchers to automatically ex...
- Multi-Predict: Few Shot Predictors For Efficient Neural Architecture Search (06/04/2023)
  Many hardware-aware neural architecture search (NAS) methods have been d...
- AOWS: Adaptive and optimal network width search with latency constraints (05/21/2020)
  Neural architecture search (NAS) approaches aim at automatically finding...
- MAPLE: Microprocessor A Priori for Latency Estimation (11/30/2021)
  Modern deep neural networks must demonstrate state-of-the-art accuracy w...
- MAPLE-X: Latency Prediction with Explicit Microprocessor Prior Knowledge (05/25/2022)
  Deep neural network (DNN) latency characterization is a time-consuming p...
