
Decoder Choice Network for Meta-Learning

09/25/2019
by Jialin Liu, et al.

Meta-learning has been widely used for few-shot learning and fast model adaptation. One family of meta-learning methods attempts to learn how to control the gradient descent process so that gradient-based learning is both fast and generalizes well. This work proposes a method that controls the gradient descent on a neural network's model parameters by constraining those parameters to a low-dimensional latent space. The main challenge of this idea is that the decoder mapping the latent space back to the parameters itself requires too many parameters. This work designs a decoder with a typical structure and shares part of the weights within the decoder to reduce the number of parameters required. In addition, this work introduces ensemble learning on top of the proposed approach to further improve performance. The results show that the proposed approach achieves superior performance on the Omniglot and miniImageNet classification tasks.
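To make the idea concrete, below is a minimal PyTorch sketch (not the authors' implementation) of optimizing in a latent space: a small decoder maps a latent code z to the weights of a linear classifier, and the inner-loop gradient descent updates z rather than the weights themselves. The names LatentDecoder, inner_adapt, latent_dim, and feat_dim, as well as the particular weight-sharing layout (one trunk and head reused across all class slots), are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentDecoder(nn.Module):
    """Decode a latent code z into the weight matrix of an n-way linear classifier."""
    def __init__(self, latent_dim, feat_dim):
        super().__init__()
        # Weight sharing (assumed layout): one shared trunk and one head are
        # reused for every class slot, keeping the decoder's parameter count small.
        self.shared = nn.Linear(latent_dim, 128)
        self.head = nn.Linear(128, feat_dim)

    def forward(self, z):
        # z: (n_way, latent_dim) -> classifier weights: (n_way, feat_dim)
        return self.head(torch.relu(self.shared(z)))

def inner_adapt(decoder, z, features, labels, steps=5, lr=0.1):
    """Run gradient descent on the latent code z only; the decoder stays fixed."""
    for _ in range(steps):
        w = decoder(z)                                    # decode latent -> weights
        loss = F.cross_entropy(features @ w.t(), labels)  # few-shot support loss
        (grad,) = torch.autograd.grad(loss, z, create_graph=True)
        z = z - lr * grad                                 # update happens in latent space
    return z

# Toy usage: a 5-way task with 64-d features and 5 support shots per class.
decoder = LatentDecoder(latent_dim=16, feat_dim=64)
z0 = torch.zeros(5, 16, requires_grad=True)
feats = torch.randn(25, 64)
labels = torch.arange(5).repeat_interleave(5)
z = inner_adapt(decoder, z0, feats, labels)
logits = feats @ decoder(z).t()   # query-time predictions would use the adapted z

The paper also pairs the approach with ensemble learning; one natural realization (an assumption, not the paper's stated design) would be to train several such decoders and average the logits they produce.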


Related research:

07/16/2018 · Meta-Learning with Latent Embedding Optimization
Gradient-based meta-learning techniques are both widely applicable and p...

08/30/2019 · Meta-Learning with Warped Gradient Descent
A versatile and effective approach to meta-learning is to infer a gradie...

07/08/2022 · On the Subspace Structure of Gradient-Based Meta-Learning
In this work we provide an analysis of the distribution of the post-adap...

12/05/2019 · MetaFun: Meta-Learning with Iterative Functional Updates
Few-shot supervised learning leverages experience from previous learning...

06/17/2020 · MetaSDF: Meta-learning Signed Distance Functions
Neural implicit shape representations are an emerging paradigm that offe...

07/15/2023 · Generative Meta-Learning Robust Quality-Diversity Portfolio
This paper proposes a novel meta-learning approach to optimize a robust ...