A Framework of Meta Functional Learning for Regularising Knowledge Transfer

03/28/2022
by   Pan Li, et al.

The capability of machine learning classifiers depends heavily on the scale of available training data and is limited by model overfitting in data-scarce learning tasks. To address this problem, this work proposes a novel framework of Meta Functional Learning (MFL) that meta-learns a generalisable functional model from data-rich tasks whilst simultaneously regularising knowledge transfer to data-scarce tasks. MFL computes meta-knowledge on functional regularisation that generalises across different learning tasks, so that functional training on limited labelled data promotes the learning of more discriminative functions. Based on this framework, we formulate three variants of MFL: MFL with Prototypes (MFL-P), which learns a functional with auxiliary prototypes; Composite MFL (ComMFL), which transfers knowledge from both the functional space and the representational space; and MFL with Iterative Updates (MFL-IU), which improves knowledge transfer regularisation by progressively learning the functional regularisation during transfer. Moreover, we generalise these variants for knowledge transfer regularisation from binary classifiers to multi-class classifiers. Extensive experiments on two few-shot learning scenarios, Few-Shot Learning (FSL) and Cross-Domain Few-Shot Learning (CD-FSL), show that meta functional learning for knowledge transfer regularisation can improve FSL classifiers.
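To make the functional-regularisation idea concrete, the sketch below shows one plausible instantiation: a small meta-learned module scores a task classifier's weights (the "functional") and adds that score as a penalty during few-shot fitting, while the outer loop updates the module so regularised classifiers generalise to query sets of data-rich tasks. This is only a minimal illustration assuming a linear classifier over pre-extracted features; names such as FunctionalRegulariser, inner_fit and meta_train_step are illustrative and are not taken from the paper.

```python
# Hedged sketch of meta-learned functional regularisation for few-shot
# classification. All names and the exact formulation are assumptions for
# illustration, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FunctionalRegulariser(nn.Module):
    """Meta-learned module that maps a classifier's weight matrix (the
    'functional') to a scalar regularisation penalty."""

    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, classifier_weights):           # (n_classes, feat_dim)
        return self.net(classifier_weights).pow(2).mean()


def inner_fit(support_x, support_y, regulariser, feat_dim, n_classes,
              steps=20, lr=0.1):
    """Fit a linear classifier on a small support set, adding the
    meta-learned functional regularisation to the loss."""
    w = torch.zeros(n_classes, feat_dim, requires_grad=True)
    for _ in range(steps):
        logits = support_x @ w.t()
        loss = F.cross_entropy(logits, support_y) + regulariser(w)
        grad, = torch.autograd.grad(loss, w, create_graph=True)
        w = w - lr * grad                             # differentiable update
    return w


def meta_train_step(tasks, regulariser, meta_opt, feat_dim, n_classes):
    """Outer loop: update the regulariser so inner-fitted classifiers
    generalise to each episode's query set (data-rich tasks only)."""
    meta_opt.zero_grad()
    meta_loss = 0.0
    for support_x, support_y, query_x, query_y in tasks:
        w = inner_fit(support_x, support_y, regulariser, feat_dim, n_classes)
        meta_loss = meta_loss + F.cross_entropy(query_x @ w.t(), query_y)
    (meta_loss / len(tasks)).backward()
    meta_opt.step()


if __name__ == "__main__":
    feat_dim, n_classes = 32, 5                       # e.g. 5-way episodes
    reg = FunctionalRegulariser(feat_dim)
    opt = torch.optim.Adam(reg.parameters(), lr=1e-3)
    # Synthetic episodes standing in for pre-extracted features.
    tasks = [(torch.randn(25, feat_dim), torch.randint(0, n_classes, (25,)),
              torch.randn(50, feat_dim), torch.randint(0, n_classes, (50,)))
             for _ in range(4)]
    meta_train_step(tasks, reg, opt, feat_dim, n_classes)
```

Once meta-trained on data-rich tasks, the same regulariser would be reused unchanged when fitting classifiers on data-scarce tasks, which is the knowledge-transfer step the abstract describes.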

Related research

12/06/2018 - Meta-Transfer Learning for Few-Shot Learning
Meta-learning has been proposed as a framework to address the challengin...

12/05/2019 - MetaFun: Meta-Learning with Iterative Functional Updates
Few-shot supervised learning leverages experience from previous learning...

09/13/2021 - Meta Navigator: Search for a Good Adaptation Policy for Few-shot Learning
Few-shot learning aims to adapt knowledge learned from previous tasks to...

03/22/2023 - Meta-augmented Prompt Tuning for Better Few-shot Learning
Prompt tuning is a parameter-efficient method, which freezes all PLM par...

08/28/2023 - Fair Few-shot Learning with Auxiliary Sets
Recently, there has been a growing interest in developing machine learni...

06/09/2021 - Attentional meta-learners are polythetic classifiers
Polythetic classifications, based on shared patterns of features that ne...

08/27/2021 - Binocular Mutual Learning for Improving Few-shot Classification
Most of the few-shot learning methods learn to transfer knowledge from d...
