
Testing the Genomic Bottleneck Hypothesis in Hebbian Meta-Learning

by Rasmus Berg Palm et al.

Recent work has shown promising results using Hebbian meta-learning to solve hard reinforcement learning problems and to adapt, to a limited degree, to changes in the environment. In previous work, each synapse has its own learning rule, which allows synapses to learn highly specific rules; we hypothesize that this limits the ability to discover generally useful Hebbian learning rules. We further hypothesize that limiting the number of Hebbian learning rules through a "genomic bottleneck" can act as a regularizer, leading to better generalization across changes to the environment. We test this hypothesis by decoupling the number of Hebbian learning rules from the number of synapses and systematically varying the number of rules, and we thoroughly explore how well the resulting Hebbian meta-learning networks adapt to changes in their environment.
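A minimal sketch of the bottleneck idea, assuming ABCD-style Hebbian rules (Δw = η(A·pre·post + B·pre + C·post + D)). All names here (`rules`, `assignment`, `hebbian_update`) are hypothetical illustrations, not the authors' code: the key point is that a small shared table of rules is indexed per synapse, instead of every synapse carrying its own rule parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 4, 3
n_rules = 2  # genomic bottleneck: far fewer rules than the 12 synapses

# Each rule is an ABCD Hebbian rule with a shared learning rate eta.
rules = rng.normal(size=(n_rules, 4))
eta = 0.01

# Which rule each synapse uses (in the paper's setting this mapping would
# be meta-learned/evolved jointly with the rule parameters).
assignment = rng.integers(0, n_rules, size=(n_in, n_out))

w = rng.normal(size=(n_in, n_out))

def hebbian_update(w, pre, post):
    """One Hebbian step: every synapse applies its assigned shared rule."""
    A, B, C, D = (rules[assignment, k] for k in range(4))  # each (n_in, n_out)
    dw = eta * (A * np.outer(pre, post)
                + B * pre[:, None]
                + C * post[None, :]
                + D)
    return w + dw

pre = rng.normal(size=n_in)
post = np.tanh(pre @ w)          # forward pass through the layer
w_new = hebbian_update(w, pre, post)
print(w_new.shape)               # same shape as w
```

With `n_rules` equal to the number of synapses this reduces to the per-synapse setting of prior work; shrinking `n_rules` tightens the bottleneck the abstract varies.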



