Meta Cyclical Annealing Schedule: A Simple Approach to Avoiding Meta-Amortization Error

03/04/2020
by   Yusuke Hayashi, et al.

The ability to learn new concepts from small amounts of data is a crucial aspect of intelligence that has proven challenging for deep learning methods. Meta-learning for few-shot learning offers a potential solution: by learning to learn across data from many previous tasks, few-shot learning algorithms can discover the structure shared among tasks and thereby learn new tasks quickly. However, a critical challenge in few-shot learning is task ambiguity: even when a powerful prior can be meta-learned from a large number of previous tasks, a small dataset for a new task may be too ambiguous to determine a single model for that task. Bayesian meta-learning models can naturally resolve this problem by placing a sophisticated prior distribution over models and regularizing the posterior through Bayesian decision theory. However, currently known Bayesian meta-learning procedures such as VERSA suffer from the so-called information preference problem: the posterior distribution collapses to a single point and is far from the exact posterior. To address this challenge, we design a novel meta-regularization objective that combines a cyclical annealing schedule with a maximum mean discrepancy (MMD) criterion. The cyclical annealing schedule is quite effective at avoiding such degenerate solutions. The procedure would require a difficult KL-divergence estimation, but we sidestep this issue by employing MMD in place of the KL-divergence. Experimental results show that our approach substantially outperforms standard meta-learning algorithms.
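The two ingredients named in the abstract can be sketched concretely. Below is a minimal, hypothetical illustration (not the authors' code): a cyclical annealing schedule that repeatedly ramps the regularization weight β from 0 to 1, which prevents the posterior from collapsing early in training, and a biased RBF-kernel estimate of squared MMD between two sample sets, which serves as a sample-based stand-in for the hard-to-estimate KL term. The function names, cycle count, and kernel bandwidth are illustrative assumptions.

```python
import numpy as np


def cyclical_beta(step, total_steps, n_cycles=4, ratio=0.5):
    """Cyclical annealing schedule: within each cycle, beta ramps
    linearly from 0 to 1 over the first `ratio` fraction of the
    cycle, then stays at 1 until the cycle restarts.

    Hypothetical sketch; cycle count and ramp ratio are assumptions.
    """
    period = total_steps / n_cycles
    t = (step % period) / period  # position within the current cycle, in [0, 1)
    return min(t / ratio, 1.0)


def mmd2_rbf(x, y, sigma=1.0):
    """Biased estimate of squared MMD between sample sets x and y
    (arrays of shape [n, d]) using an RBF kernel; a differentiable,
    sample-based alternative to estimating a KL-divergence."""
    def k(a, b):
        # Pairwise squared Euclidean distances, then Gaussian kernel.
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()
```

During meta-training, the per-step regularizer would then be `cyclical_beta(step, total_steps) * mmd2_rbf(posterior_samples, prior_samples)`: the weight resets to zero at each cycle boundary, giving the posterior repeated chances to move away from a degenerate point before the full penalty is applied.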


