
Contrastive Embeddings for Neural Architectures

by Daniel Hesslow et al.

The performance of algorithms for neural architecture search strongly depends on the parametrization of the search space. We use contrastive learning to identify networks across different initializations based on their data Jacobians, and automatically produce the first architecture embeddings that are independent of the parametrization of the search space. Using our contrastive embeddings, we show that traditional black-box optimization algorithms, without modification, can reach state-of-the-art performance in neural architecture search. As our method provides a unified embedding space, we perform transfer learning between search spaces for the first time. Finally, we show how the embeddings evolve during training, motivating future studies of embeddings at different training stages as a way to gain a deeper understanding of the networks in a search space.
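The contrastive objective alluded to above can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's implementation: it uses the standard NT-Xent loss, where the two "views" of an architecture are embeddings computed from two different random initializations (e.g., derived from flattened or projected data Jacobians). The function name and the choice of NT-Xent are illustrative.

```python
import numpy as np

def nt_xent_loss(za, zb, temperature=0.5):
    """NT-Xent contrastive loss (illustrative sketch, not the paper's code).

    za[i] and zb[i] are embedding vectors of the SAME architecture under two
    different random initializations (the positive pair); all other rows act
    as negatives.
    """
    z = np.concatenate([za, zb], axis=0)               # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize rows
    sim = z @ z.T / temperature                        # scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                     # mask self-similarity
    n = za.shape[0]
    # the positive partner of index i is i+n (and vice versa)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)   # cross-entropy per row
    return loss.mean()
```

Under this loss, embeddings of the same architecture from different initializations are pulled together while embeddings of different architectures are pushed apart, which is what makes the resulting space independent of any particular search-space parametrization.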

