NAR-Former: Neural Architecture Representation Learning towards Holistic Attributes Prediction

11/15/2022
by Yun Yi, et al.

With the wide and deep adoption of deep learning models in real applications, there is an increasing need to model and learn representations of the neural networks themselves. Such models can be used to estimate attributes of different neural network architectures, such as accuracy and latency, without running the actual training or inference tasks. In this paper, we propose a neural architecture representation model that can be used to estimate these attributes holistically. Specifically, we first propose a simple and effective tokenizer that encodes both the operation and topology information of a neural network into a single sequence. We then design a multi-stage fusion transformer to build a compact vector representation from the converted sequence. For efficient model training, we further propose an information flow consistency augmentation and a corresponding architecture consistency loss, which bring more benefit with fewer augmentation samples than previous random augmentation strategies. Experimental results on NAS-Bench-101, NAS-Bench-201, the DARTS search space, and NNLQP show that our proposed framework can predict the latency and accuracy attributes of both cell architectures and whole deep neural networks, and achieves promising performance.
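To make the tokenization idea concrete, the sketch below flattens a cell's node operations and its upper-triangular adjacency matrix into one integer token sequence, roughly in the spirit of encoding operation and topology information into a single sequence. This is only an illustrative assumption, not the paper's actual encoding: the vocabulary, token offsets, example cell, and the function name tokenize_architecture are all hypothetical.

```python
# Hypothetical sketch: encode a cell (node operations + topology) as one token sequence.
# The vocabulary and edge-token offset below are illustrative, not the paper's scheme.

OP_VOCAB = {"input": 0, "conv3x3": 1, "conv1x1": 2, "maxpool3x3": 3, "output": 4}


def tokenize_architecture(ops, adjacency):
    """Flatten node operations and the upper-triangular adjacency into one
    integer sequence: [op_0, ..., op_{n-1}, edge(0,1), edge(0,2), ...]."""
    tokens = [OP_VOCAB[op] for op in ops]
    n = len(ops)
    for i in range(n):
        for j in range(i + 1, n):
            # Offset edge tokens past the operation ids so the two token
            # types never collide in the shared vocabulary.
            tokens.append(len(OP_VOCAB) + adjacency[i][j])
    return tokens


# Example: a tiny 4-node cell (NAS-Bench-101 style, hypothetical)
ops = ["input", "conv3x3", "conv1x1", "output"]
adj = [[0, 1, 1, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
print(tokenize_architecture(ops, adj))  # -> [0, 1, 2, 4, 6, 6, 5, 5, 6, 6]
```

In a predictor of this kind, such a sequence would then be embedded and fed to a transformer encoder that produces a compact vector representation for downstream accuracy or latency regression.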


Related research

06/19/2023 | NAR-Former V2: Rethinking Transformer for Universal Neural Network Representation Learning
As more deep learning models are being applied in real-world application...

01/04/2021 | Generalized Latency Performance Estimation for Once-For-All Neural Architecture Search
Neural Architecture Search (NAS) has enabled the possibility of automate...

07/28/2021 | Homogeneous Architecture Augmentation for Neural Predictor
Neural Architecture Search (NAS) can automatically design well-performed...

04/26/2022 | GPUNet: Searching the Deployable Convolution Neural Networks for GPUs
Customizing Convolution Neural Networks (CNN) for production use has bee...

10/21/2020 | MicroNets: Neural Network Architectures for Deploying TinyML Applications on Commodity Microcontrollers
Executing machine learning workloads locally on resource constrained mic...

11/05/2018 | An Efficient Network for Predicting Time-Varying Distributions
While deep neural networks have achieved groundbreaking prediction resul...

09/02/2020 | Adversarially Robust Neural Architectures
Deep Neural Networks (DNNs) are vulnerable to adversarial attacks. Existing...
