Uncertainty in Model-Agnostic Meta-Learning using Variational Inference

07/27/2019
by Cuong Nguyen, et al.

We introduce a new, rigorously formulated Bayesian meta-learning algorithm that learns a prior distribution over model parameters for few-shot learning. The proposed algorithm employs gradient-based variational inference to infer the posterior over the model parameters of a new task. Our algorithm is model-agnostic: it can be applied to any model architecture and to various machine learning problems, including regression and classification. We show that models trained with the proposed meta-learning algorithm are well calibrated and accurate, achieving state-of-the-art calibration and classification results on two few-shot classification benchmarks (Omniglot and Mini-ImageNet) and competitive results on a multi-modal task-distribution regression problem.
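To make the adaptation step concrete, below is a minimal PyTorch sketch of gradient-based variational inference for a single new task, assuming a diagonal Gaussian prior and posterior with the reparameterization trick. This is an illustration under those assumptions, not the paper's exact algorithm; `task_loss` is a hypothetical placeholder for the negative log-likelihood on the task's support set, and names like `mu_prior` and `log_sig_prior` are likewise placeholders.

```python
# Sketch: adapt a Gaussian variational posterior q(w) to one task by
# maximising the ELBO, starting from a meta-learned Gaussian prior.
# Hypothetical names throughout; not the authors' reference code.
import torch


def kl_diag_gauss(mu_q, log_sig_q, mu_p, log_sig_p):
    """KL( N(mu_q, sig_q^2) || N(mu_p, sig_p^2) ) for diagonal Gaussians."""
    var_q, var_p = (2 * log_sig_q).exp(), (2 * log_sig_p).exp()
    return 0.5 * ((var_q + (mu_q - mu_p) ** 2) / var_p
                  - 1 + 2 * (log_sig_p - log_sig_q)).sum()


def adapt_to_task(mu_prior, log_sig_prior, task_loss,
                  steps=5, lr=0.01, n_samples=8):
    """Infer q(w) for a new task via a few gradient steps on the ELBO."""
    # Initialise the variational posterior at the meta-learned prior.
    mu_q = mu_prior.detach().clone().requires_grad_(True)
    log_sig_q = log_sig_prior.detach().clone().requires_grad_(True)
    opt = torch.optim.SGD([mu_q, log_sig_q], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        # Monte Carlo estimate of the expected task loss, using the
        # reparameterization trick: w = mu + sigma * eps, eps ~ N(0, I).
        nll = 0.0
        for _ in range(n_samples):
            w = mu_q + log_sig_q.exp() * torch.randn_like(mu_q)
            nll = nll + task_loss(w)
        # Negative ELBO = expected NLL + KL(posterior || prior).
        loss = nll / n_samples + kl_diag_gauss(mu_q, log_sig_q,
                                               mu_prior.detach(),
                                               log_sig_prior.detach())
        loss.backward()
        opt.step()

    return mu_q.detach(), log_sig_q.detach()
```

In a full meta-learning loop, the prior (`mu_prior`, `log_sig_prior`) would itself be updated in an outer loop by backpropagating each task's query-set loss through the adapted posterior, which is what makes the learned prior transferable across tasks.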

Related research

03/05/2020 · PAC-Bayesian Meta-learning with Implicit Prior
We introduce a new and rigorously-formulated PAC-Bayes few-shot meta-lea...

07/06/2020 · Covariate Distribution Aware Meta-learning
Meta-learning has proven to be successful at few-shot learning across th...

08/02/2021 · Learning to Learn to Demodulate with Uncertainty Quantification via Bayesian Meta-Learning
Meta-learning, or learning to learn, offers a principled framework for f...

04/27/2020 · Empirical Bayes Transductive Meta-Learning with Synthetic Gradients
We propose a meta-learning approach that learns from multiple tasks in a...

07/06/2020 · Meta-Learning for Variational Inference
Variational inference (VI) plays an essential role in approximate Bayesi...

08/27/2020 · Meta-Learning with Shared Amortized Variational Inference
We propose a novel amortized variational inference scheme for an empiric...

10/10/2022 · Multi-Modal Fusion by Meta-Initialization
When experience is scarce, models may have insufficient information to a...
