A Hardware-Aware System for Accelerating Deep Neural Network Optimization

02/25/2022
by Anthony Sarah, et al.

Recent advances in Neural Architecture Search (NAS) that extract specialized hardware-aware configurations (a.k.a. "sub-networks") from a hardware-agnostic "super-network" have become increasingly popular. While considerable effort has been devoted to improving the first stage, namely the training of the super-network, the search for high-performing derivative sub-networks remains largely under-explored. For example, some recent network morphism techniques allow a super-network to be trained once and hardware-specific networks to be extracted from it as needed. These methods decouple super-network training from sub-network search and thus reduce the computational burden of specializing to different hardware platforms. We propose a comprehensive system that automatically and efficiently finds sub-networks from a pre-trained super-network that are optimized for different performance metrics and hardware configurations. By combining novel search tactics and algorithms with the intelligent use of predictors, we significantly decrease the time needed to find optimal sub-networks from a given super-network. Further, our approach does not require the super-network to be refined for the target task a priori, allowing it to interface with any super-network. We demonstrate through extensive experiments that our system works seamlessly with existing state-of-the-art super-network training methods in multiple domains. Moreover, we show how novel search tactics paired with evolutionary algorithms can accelerate the search process for ResNet50, MobileNetV3 and Transformer while maintaining diversity in the objective-space Pareto front, and demonstrate a search that is 8x faster than the state-of-the-art Bayesian optimization approach WeakNAS.
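To make the search stage concrete, below is a minimal, self-contained sketch of the kind of predictor-guided multi-objective evolutionary search the abstract describes: sub-network configurations sampled from a super-network's choice space are mutated and selected against a Pareto front over predicted accuracy and latency. Everything here (the search space, the toy predictors, and all function names) is an illustrative assumption, not the paper's actual implementation; in the real system the predictors would be learned from measurements on the target hardware.

```python
import random

# Hypothetical MobileNetV3-style search space: each sub-network is a tuple of
# per-block (kernel size, expansion ratio, depth) choices. All names and the
# toy objectives below are illustrative assumptions, not the paper's code.
KERNELS, EXPANSIONS, DEPTHS = [3, 5, 7], [3, 4, 6], [2, 3, 4]
NUM_BLOCKS = 5

def random_subnet():
    """Sample one sub-network configuration from the super-network's choices."""
    return tuple(
        (random.choice(KERNELS), random.choice(EXPANSIONS), random.choice(DEPTHS))
        for _ in range(NUM_BLOCKS)
    )

def mutate(subnet, p=0.3):
    """Re-sample each block's choices independently with probability p."""
    return tuple(
        (random.choice(KERNELS), random.choice(EXPANSIONS), random.choice(DEPTHS))
        if random.random() < p else block
        for block in subnet
    )

# Stand-in predictors: the real system would use a learned accuracy predictor
# and a hardware latency estimator so that no sub-network is trained or
# benchmarked inside the search loop.
def predicted_accuracy(subnet):
    return sum(e + d for _, e, d in subnet)      # larger blocks -> more accurate

def predicted_latency(subnet):
    return sum(k * e * d for k, e, d in subnet)  # larger blocks -> slower

def dominates(a, b):
    """Pareto dominance on (accuracy, latency): maximize acc, minimize latency."""
    return a[0] >= b[0] and a[1] <= b[1] and a != b

def pareto_front(population):
    """Return the non-dominated sub-networks of the population."""
    objs = {s: (predicted_accuracy(s), predicted_latency(s)) for s in set(population)}
    return [s for s, o in objs.items()
            if not any(dominates(o2, o) for o2 in objs.values())]

def evolutionary_search(generations=30, pop_size=50):
    population = [random_subnet() for _ in range(pop_size)]
    for _ in range(generations):
        front = pareto_front(population)
        # Elitism: keep the front, refill the population with mutated offspring.
        population = front + [mutate(random.choice(front))
                              for _ in range(pop_size - len(front))]
    return pareto_front(population)

if __name__ == "__main__":
    for s in sorted(evolutionary_search(), key=predicted_latency):
        print(f"latency={predicted_latency(s):4d}  accuracy={predicted_accuracy(s):3d}")
```

The design point this sketch mirrors is that no sub-network is ever trained or benchmarked inside the loop: cheap predictors stand in for expensive evaluations, which is what makes decoupled sub-network search fast enough to repeat per hardware target.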

