Neural Architecture Optimization with Graph VAE

06/18/2020
by Jian Li, et al.

Due to their high computational efficiency in a continuous space, gradient-based optimization methods have shown great potential in the neural architecture search (NAS) domain. Mapping network representations from the discrete space to a latent space is key to discovering novel architectures; however, existing gradient-based methods fail to fully characterize the networks. In this paper, we propose an efficient NAS approach that optimizes network architectures in a continuous space, where the latent space is built upon a variational autoencoder (VAE) and graph neural networks (GNN). The framework jointly learns four components in an end-to-end manner: the encoder, the performance predictor, the complexity predictor, and the decoder. The encoder and the decoder form a graph VAE, mapping between discrete network architectures and continuous representations. The predictors are two regression models that fit performance and computational complexity, respectively; they ensure the discovered architectures achieve both excellent performance and high computational efficiency. Extensive experiments demonstrate that our framework not only generates appropriate continuous representations but also discovers powerful neural architectures.
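To make the four-component setup concrete, here is a minimal PyTorch sketch of the joint objective: a GNN encoder produces a latent code for an architecture graph, a decoder reconstructs node operations and edges, and two regression heads predict performance and complexity, all trained with one combined loss. Every module name, size constant (NUM_OPS, NUM_NODES, HID, LAT), and loss weighting below is an illustrative assumption, not the authors' implementation.

```python
# Hedged sketch of the jointly trained graph VAE + predictors (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_OPS, NUM_NODES, HID, LAT = 8, 7, 64, 16  # illustrative sizes

class GraphEncoder(nn.Module):
    """Toy GNN: one neighbor-aggregation round, mean readout, then (mu, logvar)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_OPS, HID)
        self.msg = nn.Linear(HID, HID)
        self.mu = nn.Linear(HID, LAT)
        self.logvar = nn.Linear(HID, LAT)

    def forward(self, ops, adj):              # ops: (B, N) int, adj: (B, N, N) float
        h = self.embed(ops)                   # node features (B, N, HID)
        h = F.relu(h + adj @ self.msg(h))     # aggregate messages from neighbors
        g = h.mean(dim=1)                     # graph-level readout
        return self.mu(g), self.logvar(g)

class GraphDecoder(nn.Module):
    """Decodes z back to per-node operation logits and edge logits."""
    def __init__(self):
        super().__init__()
        self.op_head = nn.Linear(LAT, NUM_NODES * NUM_OPS)
        self.edge_head = nn.Linear(LAT, NUM_NODES * NUM_NODES)

    def forward(self, z):
        op_logits = self.op_head(z).view(-1, NUM_NODES, NUM_OPS)
        edge_logits = self.edge_head(z).view(-1, NUM_NODES, NUM_NODES)
        return op_logits, edge_logits

def regressor():  # shared shape for the performance and complexity heads
    return nn.Sequential(nn.Linear(LAT, HID), nn.ReLU(), nn.Linear(HID, 1))

encoder, decoder = GraphEncoder(), GraphDecoder()
perf_pred, cost_pred = regressor(), regressor()

def joint_loss(ops, adj, perf, cost):
    mu, logvar = encoder(ops, adj)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
    op_logits, edge_logits = decoder(z)
    recon = (F.cross_entropy(op_logits.flatten(0, 1), ops.flatten())
             + F.binary_cross_entropy_with_logits(edge_logits, adj))
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    pred = (F.mse_loss(perf_pred(z).squeeze(-1), perf)
            + F.mse_loss(cost_pred(z).squeeze(-1), cost))
    return recon + kl + pred   # unweighted sum; real weights are a design choice

# One training step on a random toy batch of 4 architectures:
ops = torch.randint(0, NUM_OPS, (4, NUM_NODES))
adj = torch.bernoulli(torch.full((4, NUM_NODES, NUM_NODES), 0.3))
perf, cost = torch.rand(4), torch.rand(4)
params = [p for m in (encoder, decoder, perf_pred, cost_pred) for p in m.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)
opt.zero_grad(); joint_loss(ops, adj, perf, cost).backward(); opt.step()
```

Once such a model is trained, latent-space search is where the gradient-based NAS step would happen: start from the encoding of a known architecture, take gradient steps on z that raise predicted performance and lower predicted complexity, and decode the optimized z back into a network.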


Related research

08/22/2018 · Neural Architecture Optimization
Automatic neural architecture design has shown its potential in discover...

06/12/2020 · Does Unsupervised Architecture Representation Learning Help Neural Architecture Search?
Existing Neural Architecture Search (NAS) methods either encode neural a...

05/14/2020 · A Semi-Supervised Assessor of Neural Architectures
Neural architecture search (NAS) aims to automatically design deep neura...

12/11/2019 · A Variational-Sequential Graph Autoencoder for Neural Architecture Performance Prediction
In computer vision research, the process of automating architecture engi...

10/17/2020 · DIFER: Differentiable Automated Feature Engineering
Feature engineering, a crucial step of machine learning, aims to extract...

05/31/2021 · Variational Autoencoders: A Harmonic Perspective
In this work we study Variational Autoencoders (VAEs) from the perspecti...

03/16/2022 · Learning Where To Look – Generative NAS is Surprisingly Efficient
The efficient, automated search for well-performing neural architectures...
