Meta-Learning and Universality: Deep Representations and Gradient Descent can Approximate any Learning Algorithm

10/31/2017
by Chelsea Finn, et al.

Learning to learn is a powerful paradigm for enabling models to learn from data more effectively and efficiently. A popular approach to meta-learning is to train a recurrent model to read in a training dataset as input and output the parameters of a learned model, or output predictions for new test inputs. Alternatively, a more recent approach to meta-learning aims to acquire deep representations that can be effectively fine-tuned, via standard gradient descent, to new tasks. In this paper, we consider the meta-learning problem from the perspective of universality, formalizing the notion of learning algorithm approximation and comparing the expressive power of the aforementioned recurrent models to the more recent approaches that embed gradient descent into the meta-learner. In particular, we seek to answer the following question: does deep representation combined with standard gradient descent have sufficient capacity to approximate any learning algorithm? We find that this is indeed true, and further find, in our experiments, that gradient-based meta-learning consistently leads to learning strategies that generalize more widely compared to those represented by recurrent models.
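
To make the second family of methods the abstract describes concrete, below is a minimal sketch of gradient-based meta-learning in the spirit of MAML: adapt a shared initialization to a task with one step of standard gradient descent, then differentiate the post-adaptation loss with respect to the initialization. The toy 1-D linear model, the support/query split, and all function names here are hypothetical choices for illustration, not code from the paper.

```python
import jax
import jax.numpy as jnp

def predict(params, x):
    # Hypothetical toy model: 1-D linear regression.
    w, b = params
    return w * x + b

def loss(params, x, y):
    return jnp.mean((predict(params, x) - y) ** 2)

def inner_update(params, x_support, y_support, inner_lr=0.01):
    # One step of standard gradient descent on the task's support set.
    grads = jax.grad(loss)(params, x_support, y_support)
    return jax.tree_util.tree_map(lambda p, g: p - inner_lr * g, params, grads)

def meta_loss(params, task):
    # Adapt on the support set, then score the adapted model on the query set.
    x_s, y_s, x_q, y_q = task
    adapted = inner_update(params, x_s, y_s)
    return loss(adapted, x_q, y_q)

# The meta-gradient differentiates *through* the inner gradient step,
# so the initialization itself is trained to be easy to fine-tune.
meta_grad_fn = jax.grad(meta_loss)

# Usage sketch on a synthetic sine-regression task (assumed data, for illustration):
key = jax.random.PRNGKey(0)
x = jax.random.uniform(key, (10,), minval=-5.0, maxval=5.0)
task = (x[:5], jnp.sin(x[:5]), x[5:], jnp.sin(x[5:]))
params = (jnp.array(0.0), jnp.array(0.0))
grads = meta_grad_fn(params, task)
params = jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)
```

In this view, the "learned" component is just the initialization; a recurrent meta-learner would instead consume the support set as a sequence and emit predictions or parameters directly, which is the contrast the paper's universality analysis examines.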

Related research

01/17/2018 · Meta-Learning with Adaptive Layerwise Metric and Subspace
Recent advances in meta-learning demonstrate that deep representations c...

02/20/2022 · A History of Meta-gradient: Gradient Methods for Meta-learning
The history of meta-learning methods based on gradient descent is review...

03/05/2022 · Meta Mirror Descent: Optimiser Learning for Fast Convergence
Optimisers are an essential component for training machine learning mode...

06/25/2020 · Global Convergence and Induced Kernels of Gradient-Based Meta-Learning with Neural Nets
Gradient-based meta-learning (GBML) with deep neural nets (DNNs) has bec...

12/09/2016 · Learning Representations by Stochastic Meta-Gradient Descent in Neural Networks
Representations are fundamental to artificial intelligence. The performa...

12/29/2020 · Meta Learning Backpropagation And Improving It
Many concepts have been proposed for meta learning with neural networks ...

11/06/2019 · Fair Meta-Learning: Learning How to Learn Fairly
Data sets for fairness relevant tasks can lack examples or be biased acc...
