Meta-Learning by the Baldwin Effect

The scope of the Baldwin effect was recently called into question by two papers that closely examined the seminal work of Hinton and Nowlan. To date there has been no demonstration of its necessity in empirically challenging tasks. Here we show that the Baldwin effect is capable of evolving few-shot supervised and reinforcement learning mechanisms by shaping the hyperparameters and the initial parameters of deep learning algorithms. Furthermore, it can genetically accommodate strong learning biases on the same set of problems as a recent machine learning algorithm, MAML ("Model-Agnostic Meta-Learning"), which uses second-order gradients instead of evolution to learn a set of reference parameters (initial weights) that allow rapid adaptation to tasks sampled from a distribution. Whilst in simple cases MAML is more data-efficient than the Baldwin effect, the Baldwin effect is more general in that it does not require gradients to be backpropagated to the reference parameters or hyperparameters, and it permits effectively any number of gradient updates in the inner loop. The Baldwin effect learns strong learning-dependent biases, rather than purely genetically accommodating fixed behaviours in a learning-independent manner.
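The contrast between the two approaches can be made concrete. Below is a minimal sketch, not the authors' code, of Baldwinian meta-learning on few-shot sine-wave regression, a toy task family commonly used to benchmark MAML. A genome encodes a small network's initial weights plus an inner-loop learning rate, and fitness is measured only after a few gradient steps, so evolution selects for learnability rather than for fixed behaviour. The network size, mutation scale, and the (1+1) evolution strategy outer loop are illustrative assumptions, not details taken from the paper.

```python
# Baldwinian meta-learning sketch: evolve initial weights + a learning rate,
# score each genome by its loss AFTER inner-loop gradient descent.
import numpy as np

rng = np.random.default_rng(0)

def init_genome(hidden=40):
    # Genome = initial parameters of a one-hidden-layer MLP plus a
    # log inner-loop learning rate (a hyperparameter under evolution).
    return {
        "W1": rng.normal(0.0, 0.5, (hidden, 1)), "b1": np.zeros(hidden),
        "W2": rng.normal(0.0, 0.5, (1, hidden)), "b2": np.zeros(1),
        "log_lr": np.log(0.01),
    }

def loss_and_grads(p, x, y):
    # Forward pass and manual backprop for mean-squared-error regression.
    n = len(x)
    h = np.tanh(p["W1"] @ x.T + p["b1"][:, None])        # (hidden, n)
    pred = (p["W2"] @ h + p["b2"][:, None]).T            # (n, 1)
    err = pred - y
    d_pred = 2.0 * err / n
    dh = (p["W2"].T @ d_pred.T) * (1.0 - h ** 2)         # tanh derivative
    grads = {"W1": dh @ x, "b1": dh.sum(1),
             "W2": d_pred.T @ h.T, "b2": d_pred.sum(0)}
    return float(np.mean(err ** 2)), grads

def sample_task():
    # A task = a sine wave with random amplitude and phase.
    amp, phase = rng.uniform(0.1, 5.0), rng.uniform(0.0, np.pi)
    return lambda x: amp * np.sin(x + phase)

def fitness(genome, n_tasks=10, k_steps=5, n_shot=10):
    # Baldwinian fitness: average held-out loss AFTER k gradient steps,
    # so selection rewards learnability, not innate performance.
    lr = float(np.exp(genome["log_lr"]))
    total = 0.0
    for _ in range(n_tasks):
        f = sample_task()
        x = rng.uniform(-5.0, 5.0, (n_shot, 1))
        p = {k: np.array(v, copy=True) for k, v in genome.items()
             if k != "log_lr"}
        for _ in range(k_steps):                         # inner loop: plain SGD
            _, g = loss_and_grads(p, x, f(x))
            p = {k: p[k] - lr * g[k] for k in p}
        xq = rng.uniform(-5.0, 5.0, (n_shot, 1))         # held-out query set
        total += loss_and_grads(p, xq, f(xq))[0]
    return -total / n_tasks                              # higher is fitter

def mutate(genome, sigma=0.02):
    return {k: v + sigma * rng.normal(size=np.shape(v))
            for k, v in genome.items()}

# Outer loop: a noisy (1+1) evolution strategy over the initial weights and
# the learning rate. No gradient ever flows back to the genome.
genome = init_genome()
for gen in range(200):
    child = mutate(genome)
    if fitness(child) > fitness(genome):
        genome = child
print("post-adaptation fitness:", fitness(genome))
```

Replacing the evolutionary outer loop with backpropagation of the query loss through the k inner gradient steps would recover a MAML-style update. The Baldwinian version never differentiates through the inner loop, which is why it tolerates effectively any number of inner updates.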

Related Research

Does MAML really want feature reuse only? (08/20/2020)
Meta-learning, the effort to solve new tasks with only a few samples, ha...

Recommending Learning Algorithms and Their Associated Hyperparameters (07/07/2014)
The success of machine learning on a given task depends on, among other t...

MetaDiff: Meta-Learning with Conditional Diffusion for Few-Shot Learning (07/31/2023)
Equipping a deep model with the ability of few-shot learning, i.e., learning...

Gradient Agreement as an Optimization Objective for Meta-Learning (10/18/2018)
This paper presents a novel optimization method for maximizing generaliz...

Model agnostic methods meta-learn despite misspecifications (03/02/2023)
Due to its empirical success on few shot classification and reinforcemen...

Multi-Domain Learning by Meta-Learning: Taking Optimal Steps in Multi-Domain Loss Landscapes by Inner-Loop Learning (02/25/2021)
We consider a model-agnostic solution to the problem of Multi-Domain Lea...

How to Shift Bias: Lessons from the Baldwin Effect (12/10/2002)
An inductive learning algorithm takes a set of data as input and generat...
