Rapid Adaptation with Conditionally Shifted Neurons

12/28/2017
by Tsendsuren Munkhdalai et al.

We describe a mechanism by which artificial neural networks can learn rapid adaptation - the ability to adapt on the fly, with little data, to new tasks - that we call conditionally shifted neurons. We apply this mechanism in the framework of metalearning, where the aim is to replicate some of the flexibility of human learning in machines. Conditionally shifted neurons modify their activation values with task-specific shifts retrieved from a memory module, which is populated rapidly based on limited task experience. On metalearning benchmarks from the vision and language domains, models augmented with conditionally shifted neurons achieve state-of-the-art results.
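The mechanism described above can be sketched in a few lines: a neuron's pre-activation is shifted by a value retrieved from a small memory via soft attention over stored keys. This is a minimal illustrative sketch, not the paper's implementation; the dimensions, the ReLU nonlinearity, the dot-product attention, and the randomly filled memory are all assumptions chosen for brevity.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a vector.
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d_in, d_hid, n_mem = 4, 8, 5  # hypothetical toy sizes

# Base layer parameters (would normally be learned).
W = rng.normal(size=(d_hid, d_in))
b = np.zeros(d_hid)

# Memory populated rapidly from limited task experience: each entry
# pairs a key (an embedding of a support example) with a shift vector.
# Here both are random placeholders standing in for computed values.
mem_keys = rng.normal(size=(n_mem, d_hid))
mem_shifts = rng.normal(size=(n_mem, d_hid))

def conditionally_shifted_layer(x):
    """ReLU layer whose pre-activation is modified by a task-specific
    shift retrieved from memory via soft attention (a sketch)."""
    pre = W @ x + b
    attn = softmax(mem_keys @ pre)       # similarity to stored keys
    shift = attn @ mem_shifts            # convex combination of shifts
    return np.maximum(0.0, pre + shift)  # conditionally shifted activation

h = conditionally_shifted_layer(rng.normal(size=d_in))
print(h.shape)  # (8,)
```

At test time only the memory contents change between tasks, so the base weights stay fixed while the retrieved shifts adapt the activations on the fly.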
