Meta-Learning to Improve Pre-Training

11/02/2021
by Aniruddh Raghu, et al.

Pre-training (PT) followed by fine-tuning (FT) is an effective method for training neural networks, and has led to significant performance improvements in many domains. PT can incorporate various design choices such as task and data reweighting strategies, augmentation policies, and noise models, all of which can significantly impact the quality of representations learned, so the hyperparameters introduced by these strategies must be tuned appropriately. However, setting the values of these hyperparameters is challenging: most existing methods either struggle to scale to high dimensions, are too slow and memory-intensive, or cannot be directly applied to the two-stage PT and FT learning process. In this work, we propose an efficient, gradient-based algorithm to meta-learn PT hyperparameters. We formalize the PT hyperparameter optimization problem and propose a novel method to obtain PT hyperparameter gradients by combining implicit differentiation and backpropagation through unrolled optimization. We demonstrate that our method improves predictive performance on two real-world domains. First, we optimize high-dimensional task weighting hyperparameters for multitask pre-training on protein-protein interaction graphs and improve AUROC by up to 3.9%. Second, we optimize a data augmentation neural network for self-supervised PT with SimCLR on electrocardiography data and improve AUROC by up to 1.9%.
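
The abstract only sketches the hypergradient construction, so below is a minimal, hedged JAX illustration of the core idea: backpropagate through a truncated unroll of the PT stage, and differentiate through the FT stage (here, a linear head trained to convergence on frozen PT features) with the implicit function theorem. This is not the authors' code; the toy losses, shapes, and all names (pt_loss, ft_loss, pretrain, ft_solve, hypergrad, step counts, learning rates) are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def pt_loss(w, phi, x):
    # Toy multitask PT objective: softmax(phi) re-weights per-task losses
    # (phi plays the role of the high-dimensional PT hyperparameters).
    task_losses = jnp.mean((x @ w) ** 2, axis=0)   # one loss per task
    return jnp.sum(jax.nn.softmax(phi) * task_losses)

def ft_loss(v, w, x, y):
    # FT trains only a linear head v on PT features x @ w; the small ridge
    # term keeps the FT Hessian invertible for the implicit step below.
    return jnp.mean((x @ w @ v - y) ** 2) + 1e-3 * jnp.sum(v ** 2)

def val_loss(v, w, x, y):
    return jnp.mean((x @ w @ v - y) ** 2)

def pretrain(phi, w0, x, K=20, lr=0.1):
    # Unrolled PT: taking a VJP through this scan backprops through all
    # K SGD steps, giving exact gradients for the truncated horizon.
    def step(w, _):
        return w - lr * jax.grad(pt_loss)(w, phi, x), None
    wK, _ = jax.lax.scan(step, w0, None, length=K)
    return wK

def ft_solve(w, x, y):
    # FT is run to near-convergence and treated as a fixed point:
    # gradients are NOT taken through this loop; the IFT supplies them.
    def step(v, _):
        return v - 0.1 * jax.grad(ft_loss)(v, w, x, y), None
    v, _ = jax.lax.scan(step, jnp.zeros(w.shape[1]), None, length=500)
    return jax.lax.stop_gradient(v)

def hypergrad(phi, w0, x_pt, x_ft, y_ft, x_val, y_val):
    wK = pretrain(phi, w0, x_pt)
    v = ft_solve(wK, x_ft, y_ft)

    # Implicit differentiation through FT:
    # dv*/dw = -[d^2 L_FT / dv^2]^{-1} d^2 L_FT / (dw dv).
    gv = jax.grad(val_loss, argnums=0)(v, wK, x_val, y_val)
    H = jax.hessian(ft_loss, argnums=0)(v, wK, x_ft, y_ft)
    u = jnp.linalg.solve(H, gv)                    # H^{-1} gv
    mixed_vjp = jax.grad(
        lambda w: -jnp.vdot(jax.grad(ft_loss, argnums=0)(v, w, x_ft, y_ft), u)
    )
    dval_dw = mixed_vjp(wK) + jax.grad(val_loss, argnums=1)(v, wK, x_val, y_val)

    # Backprop the resulting cotangent through the unrolled PT stage.
    _, pt_vjp = jax.vjp(lambda p: pretrain(p, w0, x_pt), phi)
    return pt_vjp(dval_dw)[0]
```

Meta-optimization then proceeds by ordinary gradient descent on the hyperparameters, e.g. phi = phi - meta_lr * hypergrad(...), alternating with fresh PT/FT runs. In a realistic setting the exact Hessian solve would be replaced by a matrix-free approximation (e.g. conjugate gradient or a Neumann series), since the FT Hessian is too large to materialize.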

