Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data

12/17/2019 · by Felipe Petroski Such, et al.

This paper investigates the intriguing question of whether we can create learning algorithms that automatically generate training data, learning environments, and curricula to help AI agents learn rapidly. We show that such algorithms are possible via Generative Teaching Networks (GTNs), a general approach that is, in theory, applicable to supervised, unsupervised, and reinforcement learning, although our experiments focus only on the supervised case. GTNs are deep neural networks that generate data and/or training environments on which a learner (e.g., a freshly initialized neural network) trains for a few SGD steps before being tested on a target task. We then differentiate through the entire learning process via meta-gradients to update the GTN parameters so as to improve performance on the target task. GTNs have the beneficial property that they can theoretically generate any type of data or training environment, making their potential impact large. This paper introduces GTNs, discusses their potential, and shows that they can substantially accelerate learning. We also demonstrate a practical and exciting application of GTNs: accelerating the evaluation of candidate architectures for neural architecture search (NAS), a process rate-limited by such evaluations, thereby enabling massive speed-ups in NAS. GTN-NAS improves the NAS state of the art, finding higher-performing architectures when controlling for the search proposal mechanism. GTN-NAS is also competitive with the overall state-of-the-art approaches, achieving top performance while using orders of magnitude less computation than typical NAS methods. Speculating forward, GTNs may represent a first step toward the ambitious goal of algorithms that generate their own training data and, in doing so, open a variety of interesting new research questions and directions.
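To make the training mechanism concrete, below is a minimal sketch of the GTN inner/outer loop. It is written in JAX for convenience (the paper's own implementation is not reproduced here), and every architecture size, hyperparameter, and helper name (init_mlp, generator, inner_loop, meta_loss) is an illustrative assumption rather than the authors' exact setup.

```python
# A minimal sketch of the GTN inner/outer loop, under assumed shapes and
# hyperparameters. Not the paper's exact architecture or implementation.
import jax
import jax.numpy as jnp

INNER_STEPS = 5          # the learner trains for only a few SGD steps
INNER_LR = 0.02          # assumed inner-loop learning rate
Z_DIM, X_DIM, N_CLASSES = 64, 784, 10   # assumed MNIST-like shapes

def init_mlp(key, sizes):
    """Initialize a simple MLP as a list of (W, b) pairs."""
    params = []
    for d_in, d_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((0.1 * jax.random.normal(sub, (d_in, d_out)),
                       jnp.zeros(d_out)))
    return params

def mlp(params, x):
    for W, b in params[:-1]:
        x = jax.nn.relu(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def generator(gen_params, z):
    """Map noise to a synthetic batch of inputs plus soft labels
    (emitting learned labels is a simplification of one GTN variant)."""
    out = mlp(gen_params, z)
    return jax.nn.tanh(out[:, :X_DIM]), jax.nn.softmax(out[:, X_DIM:])

def xent(logits, targets):
    return -jnp.mean(jnp.sum(targets * jax.nn.log_softmax(logits), axis=-1))

def inner_loop(gen_params, learner_params, zs):
    """Train a fresh learner on generated batches for a few SGD steps."""
    def step(params, z):
        x, y = generator(gen_params, z)
        grads = jax.grad(lambda p: xent(mlp(p, x), y))(params)
        new_params = jax.tree_util.tree_map(lambda p, g: p - INNER_LR * g,
                                            params, grads)
        return new_params, None
    learner_params, _ = jax.lax.scan(step, learner_params, zs)
    return learner_params

def meta_loss(gen_params, learner_params, zs, x_real, y_real):
    """Loss of the freshly trained learner on real target-task data."""
    trained = inner_loop(gen_params, learner_params, zs)
    return xent(mlp(trained, x_real), jax.nn.one_hot(y_real, N_CLASSES))

# Meta-gradient: differentiate through the entire inner training process
# with respect to the generator's parameters (argument 0).
meta_grad = jax.jit(jax.grad(meta_loss))

# Example wiring for one outer step (all shapes are assumptions):
gen_params = init_mlp(jax.random.PRNGKey(0), [Z_DIM, 256, X_DIM + N_CLASSES])
learner_params = init_mlp(jax.random.PRNGKey(1), [X_DIM, 128, N_CLASSES])
zs = jax.random.normal(jax.random.PRNGKey(2), (INNER_STEPS, 32, Z_DIM))
# grads = meta_grad(gen_params, learner_params, zs, x_real, y_real)
```

In full training, each outer iteration would sample a fresh learner initialization before calling meta_grad and feed the resulting gradient to an optimizer such as Adam, so the generator learns to produce data that teaches any newly initialized learner quickly. That same property is what accelerates NAS: a candidate architecture can be trained for just a few steps on the generated data and ranked by its resulting target-task performance, making each evaluation far cheaper than a full training run.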
