
Testing the Genomic Bottleneck Hypothesis in Hebbian Meta-Learning

11/13/2020
by Rasmus Berg Palm, et al.

Recent work has shown promising results using Hebbian meta-learning to solve hard reinforcement learning problems and to adapt, to a limited degree, to changes in the environment. In previous work, each synapse has its own learning rule. This allows each synapse to learn a very specific learning rule, which we hypothesize limits the ability to discover generally useful Hebbian learning rules. We further hypothesize that limiting the number of Hebbian learning rules through a "genomic bottleneck" can act as a regularizer, leading to better generalization across changes to the environment. We test this hypothesis by decoupling the number of Hebbian learning rules from the number of synapses and systematically varying the number of learning rules. We thoroughly explore how well these Hebbian meta-learning networks adapt to changes in their environment.
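To make the decoupling concrete, below is a minimal NumPy sketch of one way to share a small pool of K Hebbian rules across many synapses, assuming the common "ABCD" parameterization of Hebbian plasticity used in recent Hebbian meta-learning work. The function name, array shapes, and the random synapse-to-rule assignment are illustrative assumptions, not the authors' implementation; K is the width of the genomic bottleneck.

```python
import numpy as np

def hebbian_update(W, pre, post, rules, rule_idx, eta=0.01):
    """Apply one step of a shared-rule Hebbian update.

    W        : (n_post, n_pre) synaptic weight matrix
    pre      : (n_pre,)        presynaptic activations
    post     : (n_post,)       postsynaptic activations
    rules    : (K, 4)          meta-learned coefficients [A, B, C, D] per rule
    rule_idx : (n_post, n_pre) integer map assigning each synapse a rule in [0, K)
    """
    # Gather per-synapse coefficients from the shared rule table.
    A, B, C, D = (rules[rule_idx, k] for k in range(4))
    # Generalized "ABCD" Hebbian rule:
    # dW_ij = eta * (A_ij * post_i * pre_j + B_ij * pre_j + C_ij * post_i + D_ij)
    dW = eta * (A * np.outer(post, pre) + B * pre[None, :] + C * post[:, None] + D)
    return W + dW

# K = 1 gives a single shared rule (the tightest bottleneck);
# K = n_pre * n_post recovers one rule per synapse, as in prior work.
rng = np.random.default_rng(0)
n_pre, n_post, K = 8, 4, 3
W = rng.normal(size=(n_post, n_pre))
rules = rng.normal(size=(K, 4))                      # evolved in the outer loop
rule_idx = rng.integers(0, K, size=(n_post, n_pre))  # fixed synapse-to-rule map
W = hebbian_update(W, rng.normal(size=n_pre), rng.normal(size=n_post), rules, rule_idx)
```

In this sketch the outer meta-learning loop would optimize only the (K, 4) rule table rather than one rule per synapse, so shrinking K directly limits how environment-specific the learned plasticity can become.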


Related research:

- Learning where to learn: Gradient sparsity in meta and continual learning (10/27/2021). Finding neural network weights that generalize well from small datasets ...
- Meta-Learning with Temporal Convolutions (07/11/2017). Deep neural networks excel in regimes with large amounts of data, but te...
- Neuromorphic Architecture Optimization for Task-Specific Dynamic Learning (06/04/2019). The ability to learn and adapt in real time is a central feature of biol...
- Population-Based Evolution Optimizes a Meta-Learning Objective (03/11/2021). Meta-learning models, or models that learn to learn, have been a long-de...
- Collision Avoidance Robotics Via Meta-Learning (CARML) (07/16/2020). This paper presents an approach to exploring a multi-objective reinforce...
- Query Twice: Dual Mixture Attention Meta Learning for Video Summarization (08/19/2020). Video summarization aims to select representative frames to retain high-...