
Searching Toward Pareto-Optimal Device-Aware Neural Architectures

by   An-Chieh Cheng, et al.

Recent breakthroughs in Neural Architecture Search (NAS) have achieved state-of-the-art performance on many tasks such as image classification and language understanding. However, most existing works optimize only for model accuracy and largely ignore other important factors imposed by the underlying hardware and devices, such as latency and energy consumption during inference. In this paper, we first introduce the NAS problem and survey recent work. We then examine in depth two recent extensions of NAS to multi-objective frameworks: MONAS and DPP-Net. Both MONAS and DPP-Net can optimize accuracy together with device-imposed objectives, searching for neural architectures that can best be deployed on a wide spectrum of devices, from embedded systems and mobile devices to workstations. Experimental results show that architectures found by MONAS and DPP-Net achieve Pareto optimality w.r.t. the given objectives for various devices.
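The notion of Pareto optimality used above can be made concrete with a small sketch: given candidate architectures scored on accuracy (higher is better) and latency (lower is better), an architecture is Pareto-optimal if no other candidate is at least as good on both objectives and strictly better on one. The candidate names and scores below are hypothetical, not drawn from the paper's experiments.

```python
def pareto_front(candidates):
    """Return the candidates not dominated by any other.

    Candidate a dominates candidate b if a is at least as accurate and
    at least as fast as b, and strictly better on at least one objective.
    Each candidate is a (name, accuracy, latency_ms) tuple.
    """
    front = []
    for name, acc, lat in candidates:
        dominated = any(
            (acc2 >= acc and lat2 <= lat) and (acc2 > acc or lat2 < lat)
            for _, acc2, lat2 in candidates
        )
        if not dominated:
            front.append((name, acc, lat))
    return front


# Hypothetical candidate architectures: (name, top-1 accuracy, latency in ms).
archs = [
    ("A", 0.92, 40.0),  # most accurate, but slow
    ("B", 0.90, 25.0),  # balanced
    ("C", 0.88, 30.0),  # dominated by B (less accurate and slower)
    ("D", 0.85, 12.0),  # fastest, least accurate
]
print(pareto_front(archs))  # A, B, and D survive; C is dominated by B
```

A multi-objective search such as MONAS or DPP-Net aims to return such a front rather than a single architecture, letting the deployment target (embedded system, phone, or workstation) pick the trade-off point it can afford.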
