Adaptive Posterior Learning: few-shot learning with a surprise-based memory module

02/07/2019
by Tiago Ramalho, et al.

The ability to generalize quickly from few observations is crucial for intelligent systems. In this paper we introduce APL, an algorithm that approximates probability distributions by remembering the most surprising observations it has encountered. These past observations are recalled from an external memory module and processed by a decoder network that can combine information from different memory slots to generalize beyond direct recall. We show that this algorithm performs as well as state-of-the-art baselines on few-shot classification benchmarks while using a smaller memory footprint. In addition, its memory compression allows it to scale to thousands of unknown labels. Finally, we introduce a meta-learning reasoning task that is more challenging than direct classification. In this setting, APL is able to generalize with fewer than one example per class via deductive reasoning.
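To illustrate the surprise-based writing rule the abstract describes, the sketch below stores an (embedding, label) pair in an external memory only when the model's current prediction for that observation is sufficiently surprising (high negative log-likelihood), and retrieves nearest neighbours for a downstream decoder to combine. This is a minimal, hypothetical sketch under assumed names (SurpriseMemory, surprise_threshold, cosine-similarity recall), not the authors' implementation.

```python
import numpy as np

class SurpriseMemory:
    """Illustrative external memory that keeps only 'surprising' observations.

    Hypothetical sketch of the idea in the abstract, not the paper's code:
    an observation is written to memory only if the model's predictive
    loss on it exceeds a threshold.
    """

    def __init__(self, surprise_threshold=1.0):
        self.surprise_threshold = surprise_threshold  # assumed hyperparameter
        self.keys = []    # stored embeddings
        self.values = []  # stored labels

    def maybe_write(self, embedding, label, predicted_probs):
        # Surprise measured as negative log-likelihood of the true label.
        surprise = -np.log(predicted_probs[label] + 1e-12)
        if surprise > self.surprise_threshold:
            self.keys.append(embedding)
            self.values.append(label)
        return surprise

    def recall(self, query_embedding, k=5):
        # Return the k most similar stored entries (cosine similarity) so a
        # downstream decoder can combine information from several slots.
        if not self.keys:
            return []
        keys = np.stack(self.keys)
        sims = keys @ query_embedding / (
            np.linalg.norm(keys, axis=1) * np.linalg.norm(query_embedding) + 1e-12
        )
        top = np.argsort(-sims)[:k]
        return [(self.keys[i], self.values[i]) for i in top]
```

Because only high-surprise observations are written, the memory stays small even as the number of classes grows, which is the compression property the abstract attributes to APL.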

Related research

06/08/2023
EMO: Episodic Memory Optimization for Few-Shot Meta-Learning
Few-shot meta-learning presents a challenge for gradient descent optimiz...

05/20/2022
BayesPCN: A Continually Learnable Predictive Coding Associative Memory
Associative memory plays an important role in human intelligence and its...

08/13/2019
Meta Reasoning over Knowledge Graphs
The ability to reason over learned knowledge is an innate ability for hu...

10/20/2020
Learning to Learn Variational Semantic Memory
In this paper, we introduce variational semantic memory into meta-learni...

06/02/2021
Few-Shot Partial-Label Learning
Partial-label learning (PLL) generally focuses on inducing a noise-toler...

05/12/2020
Dynamic Memory Induction Networks for Few-Shot Text Classification
This paper proposes Dynamic Memory Induction Networks (DMIN) for few-sho...
